Ankur P. Parikh

SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization Evaluation

May 22, 2023

Extrapolative Controlled Sequence Generation via Iterative Refinement

Mar 08, 2023

Reward Gaming in Conditional Text Generation

Nov 16, 2022

SQuId: Measuring Speech Naturalness in Many Languages

Oct 12, 2022

Simple Recurrence Improves Masked Language Models

May 23, 2022

Learning Compact Metrics for MT

Oct 12, 2021

Shatter: An Efficient Transformer Encoder with Single-Headed Self-Attention and Relative Sequence Partitioning

Aug 30, 2021

Towards Continual Learning for Multilingual Machine Translation via Vocabulary Substitution

Mar 11, 2021

Learning to Evaluate Translation Beyond English: BLEURT Submissions to the WMT Metrics 2020 Shared Task

Oct 19, 2020

Harnessing Multilinguality in Unsupervised Machine Translation for Rare Languages

Sep 23, 2020