
Dani Yogatama

The Semantic Hub Hypothesis: Language Models Share Semantic Representations Across Languages and Modalities

Nov 07, 2024

Causal Interventions on Causal Paths: Mapping GPT-2's Reasoning From Syntax to Semantics

Oct 28, 2024

Vibe-Eval: A hard evaluation suite for measuring progress of multimodal language models

May 03, 2024

Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models

Apr 18, 2024

IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations

Apr 02, 2024

Understanding In-Context Learning with a Pelican Soup Framework

Feb 16, 2024

DeLLMa: A Framework for Decision Making Under Uncertainty with Large Language Models

Feb 04, 2024

On Retrieval Augmentation and the Limitations of Language Model Training

Nov 16, 2023

The Distributional Hypothesis Does Not Fully Explain the Benefits of Masked Language Model Pretraining

Oct 25, 2023

Interpretable Diffusion via Information Decomposition

Oct 12, 2023