
Michael Hahn

Saarland University

Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers

Feb 04, 2025

Emergent Stack Representations in Modeling Counter Languages Using Transformers

Feb 03, 2025

A Formal Framework for Understanding Length Generalization in Transformers

Oct 03, 2024

Separations in the Representational Capabilities of Transformers and Recurrent Architectures

Jun 13, 2024

The Expressive Capacity of State Space Models: A Formal Language Perspective

May 27, 2024

InversionView: A General-Purpose Method for Reading Information from Neural Activations

May 27, 2024

Linguistic Structure from a Bottleneck on Sequential Information Processing

May 20, 2024

Why are Sensitive Functions Hard for Transformers?

Feb 25, 2024

A Cross-Linguistic Pressure for Uniform Information Density in Word Order

Jun 06, 2023

A Theory of Emergent In-Context Learning as Implicit Structure Induction

Mar 14, 2023