Michael Hahn

Saarland University

The Bayesian Origin of the Probability Weighting Function in Human Representation of Probabilities

Oct 06, 2025

One Size Fits None: Rethinking Fairness in Medical AI

Jun 17, 2025

Position: Pause Recycling LoRAs and Prioritize Mechanisms to Uncover Limits and Effectiveness

Jun 16, 2025

Born a Transformer -- Always a Transformer?

May 27, 2025

Language models can learn implicit multi-hop reasoning, but only if they have lots of training data

May 23, 2025

Contextualize-then-Aggregate: Circuits for In-Context Learning in Gemma-2 2B

Mar 31, 2025

Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers

Feb 04, 2025

Emergent Stack Representations in Modeling Counter Languages Using Transformers

Feb 03, 2025

A Formal Framework for Understanding Length Generalization in Transformers

Oct 03, 2024

Separations in the Representational Capabilities of Transformers and Recurrent Architectures

Jun 13, 2024