Issei Sato

The University of Tokyo

Theoretical Analysis of Hierarchical Language Recognition and Generation by Transformers without Positional Encoding

Oct 16, 2024

On Expressive Power of Looped Transformers: Theoretical Analysis and Enhancement via Timestep Encoding

Oct 02, 2024

Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware Decision Boundary Adjustment

Sep 26, 2024

Benign or Not-Benign Overfitting in Token Selection of Attention Mechanism

Sep 26, 2024

Optimal Memorization Capacity of Transformers

Sep 26, 2024

Top-Down Bayesian Posterior Sampling for Sum-Product Networks

Jun 18, 2024

Understanding Linear Probing then Fine-tuning Language Models from NTK Perspective

May 27, 2024

End-to-End Training Induces Information Bottleneck through Layer-Role Differentiation: A Comparative Analysis with Layer-wise Training

Feb 14, 2024

Understanding Parameter Saliency via Extreme Value Theory

Oct 27, 2023

Initialization Bias of Fourier Neural Operator: Revisiting the Edge of Chaos

Oct 10, 2023