Shuohui Chen

OPT: Open Pre-trained Transformer Language Models

May 05, 2022

Efficient Large Scale Language Modeling with Mixtures of Experts

Dec 20, 2021

Few-shot Learning with Multilingual Language Models

Dec 20, 2021

MTOP: A Comprehensive Multilingual Task-Oriented Semantic Parsing Benchmark

Aug 21, 2020