
Giovanni Monea

Separating Tongue from Thought: Activation Patching Reveals Language-Agnostic Concept Representations in Transformers

Nov 13, 2024

Controllable Context Sensitivity and the Knob Behind It

Nov 11, 2024

LLMs Are In-Context Reinforcement Learners

Oct 07, 2024

Do Llamas Work in English? On the Latent Language of Multilingual Transformers

Feb 24, 2024

A Glitch in the Matrix? Locating and Detecting Language Model Grounding with Fakepedia

Dec 04, 2023

PaSS: Parallel Speculative Sampling

Nov 22, 2023