Alexandra Chronopoulou

What Matters for Model Merging at Scale?
Oct 04, 2024

Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization
Nov 15, 2023

On the Copying Problem of Unsupervised NMT: A Training Schedule with a Language Discriminator Loss
Jun 04, 2023

Improving Isochronous Machine Translation with Target Factors and Auxiliary Counters
May 22, 2023

Mitigating Data Imbalance and Representation Degeneration in Multilingual Machine Translation
May 22, 2023

Jointly Optimizing Translations and Speech Timing to Improve Isochrony in Automatic Dubbing
Feb 25, 2023

AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models
Feb 14, 2023

Language-Family Adapters for Multilingual Neural Machine Translation
Sep 30, 2022

Efficient Hierarchical Domain Adaptation for Pretrained Language Models
Dec 16, 2021

Improving the Lexical Ability of Pretrained Language Models for Unsupervised Neural Machine Translation
Apr 14, 2021