
Sangha Kim

Label-Free Multi-Domain Machine Translation with Stage-wise Training

May 06, 2023

Monotonic Simultaneous Translation with Chunk-wise Reordering and Refinement

Oct 18, 2021

Decision Attentive Regularization to Improve Simultaneous Speech Translation Systems

Oct 13, 2021

Infusing Future Information into Monotonic Attention Through Language Models

Sep 07, 2021

Faster Re-translation Using Non-Autoregressive Model For Simultaneous Neural Machine Translation

Dec 29, 2020

Extremely Low Bit Transformer Quantization for On-Device Neural Machine Translation

Oct 13, 2020

Data Efficient Direct Speech-to-Text Translation with Modality Agnostic Meta-Learning

Nov 11, 2019