
Matt Shannon

Very Attentive Tacotron: Robust and Unbounded Length Generalization in Autoregressive Transformer-Based Text-to-Speech

Oct 29, 2024

Learning the joint distribution of two sequences using little or no paired data

Dec 06, 2022

Global Normalization for Streaming Speech Recognition in a Modular Framework

May 26, 2022

Speaker Generation

Nov 07, 2021

Non-saturating GAN training as divergence minimization

Oct 15, 2020

Properties of f-divergences and f-GAN training

Sep 02, 2020

Location-Relative Attention Mechanisms For Robust Long-Form Speech Synthesis

Oct 23, 2019

Semi-Supervised Generative Modeling for Controllable Speech Synthesis

Oct 03, 2019

Effective Use of Variational Embedding Capacity in Expressive End-to-End Speech Synthesis

Jul 09, 2019

Optimizing expected word error rate via sampling for speech recognition

Jun 08, 2017