
Dinghan Shen

HiddenCut: Simple Data Augmentation for Natural Language Understanding with Better Generalization

May 31, 2021

What Makes Good In-Context Examples for GPT-3?

Jan 17, 2021

MixKD: Towards Efficient Distillation of Large-scale Language Models

Nov 01, 2020

A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation

Oct 23, 2020

CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding

Oct 16, 2020

Improving Self-supervised Pre-training via a Fully-Explored Masked Language Model

Oct 14, 2020

Improving Text Generation with Student-Forcing Optimal Transport

Oct 12, 2020

Generative Semantic Hashing Enhanced via Boltzmann Machines

Jun 16, 2020

Improving Disentangled Text Representation Learning with Information-Theoretic Guidance

Jun 06, 2020

Improving Adversarial Text Generation by Modeling the Distant Future

May 04, 2020