
Zhanglin Wu

Context-aware and Style-related Incremental Decoding framework for Discourse-Level Literary Translation

Sep 25, 2024

Exploring the traditional NMT model and Large Language Model for chat translation

Sep 24, 2024

Multilingual Transfer and Domain Adaptation for Low-Resource Languages of Spain

Sep 24, 2024

Machine Translation Advancements of Low-Resource Indian Languages by Transfer Learning

Sep 24, 2024

Speaker-Smoothed kNN Speaker Adaptation for End-to-End ASR

Jun 07, 2024

R-BI: Regularized Batched Inputs enhance Incremental Decoding Framework for Low-Latency Simultaneous Speech Translation

Jan 11, 2024

Text Style Transfer Back-Translation

Jun 02, 2023

KG-BERTScore: Incorporating Knowledge Graph into BERTScore for Reference-Free Machine Translation Evaluation

Jan 30, 2023

Joint-training on Symbiosis Networks for Deep Neural Machine Translation models

Dec 22, 2021

Self-Distillation Mixup Training for Non-autoregressive Neural Machine Translation

Dec 22, 2021