
Amr Hendy

Microsoft ATL Cairo

How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation

Feb 18, 2023

Domain Specific Sub-network for Multi-Domain Neural Machine Translation

Oct 18, 2022

Language Tokens: A Frustratingly Simple Approach Improves Zero-Shot Performance of Multilingual Translation

Aug 11, 2022

Ensembling of Distilled Models from Multi-task Teachers for Constrained Resource Language Pairs

Nov 26, 2021

Scalable and Efficient MoE Training for Multitask Multilingual Models

Sep 22, 2021

Score Combination for Improved Parallel Corpus Filtering for Low Resource Conditions

Nov 16, 2020