Minh-Thang Luong

MTet: Multi-domain Translation for English and Vietnamese

Oct 19, 2022

Combined Scaling for Zero-shot Transfer Learning

Nov 19, 2021

Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference

Sep 24, 2021

STraTA: Self-Training with Task Augmentation for Better Few-shot Learning

Sep 13, 2021

Pre-Training Transformers as Energy-Based Cloze Models

Dec 15, 2020

Towards Domain-Agnostic Contrastive Learning

Nov 09, 2020

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

Mar 23, 2020

Towards a Human-like Open-Domain Chatbot

Feb 27, 2020

A Hybrid Morpheme-Word Representation for Machine Translation of Morphologically Rich Languages

Nov 19, 2019

Self-training with Noisy Student improves ImageNet classification

Nov 11, 2019