Daisy Stanton

Very Attentive Tacotron: Robust and Unbounded Length Generalization in Autoregressive Transformer-Based Text-to-Speech

Oct 29, 2024

Learning the joint distribution of two sequences using little or no paired data

Dec 06, 2022

Speaker Generation

Nov 07, 2021

Non-saturating GAN training as divergence minimization

Oct 15, 2020

Location-Relative Attention Mechanisms For Robust Long-Form Speech Synthesis

Oct 23, 2019

Semi-Supervised Generative Modeling for Controllable Speech Synthesis

Oct 03, 2019

Effective Use of Variational Embedding Capacity in Expressive End-to-End Speech Synthesis

Jul 09, 2019

Predicting Expressive Speaking Style From Text In End-To-End Speech Synthesis

Aug 04, 2018

Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with Tacotron

Mar 24, 2018

Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis

Mar 23, 2018