Alessandro Raganato

DIETA: A Decoder-only transformer-based model for Italian-English machine TrAnslation

Jan 25, 2026

Investigating Task Arithmetic for Zero-Shot Information Retrieval

May 01, 2025

Reasoning Capabilities and Invariability of Large Language Models

May 01, 2025

SemEval-2025 Task 3: Mu-SHROOM, the Multilingual Shared Task on Hallucinations and Related Observable Overgeneration Mistakes

Apr 16, 2025

How to Blend Concepts in Diffusion Models

Jul 19, 2024

SemEval-2024 Shared Task 6: SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes

Mar 20, 2024

MAMMOTH: Massively Multilingual Modular Open Translation @ Helsinki

Mar 12, 2024

Democratizing Machine Translation with OPUS-MT

Dec 04, 2022

XL-WiC: A Multilingual Benchmark for Evaluating Semantic Contextualization

Oct 13, 2020

Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation

Feb 24, 2020