Marcos Treviso

How Effective are State Space Models for Machine Translation?

Jul 07, 2024

xTower: A Multilingual LLM for Explaining and Correcting Translation Errors

Jun 27, 2024

Scaling up COMETKIWI: Unbabel-IST 2023 Submission for the Quality Estimation Shared Task

Sep 21, 2023

CREST: A Joint Framework for Rationalization and Counterfactual Text Generation

May 26, 2023

The Inside Story: Towards Better Understanding of Machine Translation Neural Evaluation Metrics

May 19, 2023

CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task

Sep 13, 2022

Learning to Scaffold: Optimizing Model Explanations for Teaching

Apr 22, 2022

Predicting Attention Sparsity in Transformers

Sep 24, 2021

Sparse Continuous Distributions and Fenchel-Young Losses

Aug 04, 2021

Sparse and Continuous Attention Mechanisms

Jun 12, 2020