Changtong Zan

Building Accurate Translation-Tailored LLMs with Language Aware Instruction Tuning

Mar 21, 2024

Unlikelihood Tuning on Negative Samples Amazingly Improves Zero-Shot Translation

Sep 28, 2023

Unsupervised Dense Retrieval with Relevance-Aware Contrastive Pre-Training

Jun 05, 2023

Prompt-Learning for Cross-Lingual Relation Extraction

Apr 20, 2023

Vega-MT: The JD Explore Academy Translation System for WMT22

Sep 21, 2022

On the Complementarity between Pre-Training and Random-Initialization for Resource-Rich Machine Translation

Sep 15, 2022

Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation

Apr 16, 2022