Manuel Mager

DiverseAgentEntropy: Quantifying Black-Box LLM Uncertainty through Diverse Perspectives and Multi-Agent Interaction

Dec 12, 2024

Inference time LLM alignment in single and multidomain preference spectrum

Oct 24, 2024

Neural Machine Translation for the Indigenous Languages of the Americas: An Introduction

Jun 11, 2023

Ethical Considerations for Machine Translation of Indigenous Languages: Giving a Voice to the Speakers

May 31, 2023

Exploring Segmentation Approaches for Neural Machine Translation of Code-Switched Egyptian Arabic-English Text

Oct 11, 2022

BPE vs. Morphological Segmentation: A Case Study on Machine Translation of Four Polysynthetic Languages

Mar 16, 2022

IMS' Systems for the IWSLT 2021 Low-Resource Speech Translation Task

Jun 30, 2021

AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages

Apr 18, 2021

Tackling the Low-resource Challenge for Canonical Segmentation

Oct 06, 2020

GPT-too: A language-model-first approach for AMR-to-text generation

May 27, 2020