Xiangyang Zhou

Distilling Knowledge from Pre-trained Language Models via Text Smoothing

May 08, 2020

How to Evaluate the Next System: Automatic Dialogue Evaluation from the Perspective of Continual Learning

Dec 10, 2019

Proactive Human-Machine Conversation with Explicit Conversation Goals

Jun 13, 2019

Power-Law Graph Cuts

Nov 25, 2014