Yiren Chen

Create and Find Flatness: Building Flat Training Spaces in Advance for Continual Learning
Sep 20, 2023

TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities
Dec 13, 2022

A Simple and Effective Method to Improve Zero-Shot Cross-Lingual Transfer Learning
Oct 18, 2022

Bridging the Gap Between Clean Data Training and Real-World Inference for Spoken Language Understanding
Apr 13, 2021

Syntax-BERT: Improving Pre-trained Transformers with Syntax Trees
Mar 07, 2021

AutoADR: Automatic Model Design for Ad Relevance
Oct 14, 2020

Improving BERT with Self-Supervised Attention
Apr 29, 2020

TextNAS: A Neural Architecture Search Space tailored for Text Representation
Dec 23, 2019