
Bartuer Zhou

DialogVED: A Pre-trained Latent Variable Encoder-Decoder Model for Dialog Response Generation

Apr 27, 2022

Contextual Fine-to-Coarse Distillation for Coarse-grained Response Selection in Open-Domain Conversations

Sep 24, 2021

ProphetNet-X: Large-Scale Pre-training Models for English, Chinese, Multi-lingual, Dialog, and Code Generation

Apr 16, 2021