
Wubo Li

Speech SIMCLR: Combining Contrastive and Reconstruction Objective for Self-supervised Speech Representation Learning

Oct 27, 2020

TMT: A Transformer-based Modal Translator for Improving Multimodal Sequence Representations in Audio Visual Scene-aware Dialog

Oct 21, 2020

A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition

Jun 23, 2020

Improving Transformer-based Speech Recognition Using Unsupervised Pre-training

Oct 31, 2019

TCT: A Cross-supervised Learning Method for Multimodal Sequence Representation

Oct 23, 2019

A Multi-Modal Chinese Poetry Generation Model

Jun 26, 2018