
Yatin Dandi

A Random Matrix Theory Perspective on the Spectrum of Learned Features and Asymptotic Generalization Capabilities

Oct 24, 2024

Online Learning and Information Exponents: On The Importance of Batch size, and Time/Complexity Tradeoffs

Jun 04, 2024

Fundamental limits of weak learnability in high-dimensional multi-index models

May 24, 2024

Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions

May 24, 2024

Asymptotics of feature learning in two-layer networks after one gradient-step

Feb 07, 2024

The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents

Feb 05, 2024

A Gentle Introduction to Gradient-Based Optimization and Variational Inequalities for Machine Learning

Sep 09, 2023

Sampling with flows, diffusion and autoregressive neural networks: A spin-glass perspective

Aug 27, 2023

Learning Two-Layer Neural Networks, One Step at a Time

May 29, 2023

Universality laws for Gaussian mixtures in generalized linear models

Feb 17, 2023