Letitia Parcalabescu

Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations?

Apr 29, 2024

On Measuring Faithfulness of Natural Language Explanations

Nov 13, 2023

ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models

Nov 13, 2023

MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks

Dec 15, 2022

VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena

Dec 14, 2021

MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning

Dec 09, 2021

What is Multimodality?

Mar 10, 2021

Seeing past words: Testing the cross-modal capabilities of pretrained V&L models

Dec 22, 2020

AMR Similarity Metrics from Principles

Jan 29, 2020