Luca Pesce

A Random Matrix Theory Perspective on the Spectrum of Learned Features and Asymptotic Generalization Capabilities

Oct 24, 2024

Online Learning and Information Exponents: On The Importance of Batch size, and Time/Complexity Tradeoffs

Jun 04, 2024

Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions

May 24, 2024

Asymptotics of feature learning in two-layer networks after one gradient-step

Feb 07, 2024

The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents

Feb 05, 2024

Learning Two-Layer Neural Networks, One Step at a Time

May 29, 2023

Are Gaussian data all you need? Extents and limits of universality in high-dimensional generalized linear estimation

Feb 17, 2023

Subspace clustering in high-dimensions: Phase transitions & Statistical-to-Computational gap

May 26, 2022