Gabriel Peyré

CNRS and ENS-PSL

Towards Understanding the Universality of Transformers for Next-Token Prediction

Oct 03, 2024

Transformers are Universal In-context Learners

Aug 02, 2024

Keep the Momentum: Conservation Laws beyond Euclidean Gradient Flows

May 21, 2024

Understanding the training of infinitely deep and wide ResNets with Conditional Optimal Transport

Mar 19, 2024

Enhancing Hypergradients Estimation: A Study of Preconditioning and Reparameterization

Feb 26, 2024

How do Transformers perform In-Context Autoregressive Learning?

Feb 08, 2024

Understanding the Regularity of Self-Attention with Optimal Transport

Dec 22, 2023

Structured Transforms Across Spaces with Cost-Regularized Optimal Transport

Nov 23, 2023

Abide by the Law and Follow the Flow: Conservation Laws for Gradient Flows

Jun 30, 2023

Test like you Train in Implicit Deep Learning

May 24, 2023