
Pengcheng Yang

CAPT: Contrastive Pre-Training for Learning Denoised Sequence Representations

Oct 30, 2020

Inductively Representing Out-of-Knowledge-Graph Entities by Optimal Estimation Under Translational Assumptions

Sep 27, 2020

Visual Agreement Regularized Training for Multi-Modal Machine Translation

Dec 27, 2019

Pun-GAN: Generative Adversarial Network for Pun Generation

Oct 24, 2019

Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation

Aug 08, 2019

Automatic Generation of Personalized Comment Based on User Profile

Jul 24, 2019

Memorized Sparse Backpropagation

Jun 01, 2019

A Dual Reinforcement Learning Framework for Unsupervised Text Style Transfer

May 24, 2019

Learning Unsupervised Word Mapping by Maximizing Mean Discrepancy

Nov 01, 2018

A Deep Reinforced Sequence-to-Set Model for Multi-Label Text Classification

Sep 10, 2018