Clayton Sanford

One-layer transformers fail to solve the induction heads task

Aug 26, 2024

Understanding Transformer Reasoning Capabilities via Graph Algorithms

May 28, 2024

Transformers, parallel computation, and logarithmic depth

Feb 14, 2024

Representational Strengths and Limitations of Transformers

Jun 05, 2023

Learning Single-Index Models with Shallow Neural Networks

Oct 27, 2022

On Scrambling Phenomena for Randomly Initialized Recurrent Networks

Oct 11, 2022

Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias

Jun 10, 2022

Near-Optimal Statistical Query Lower Bounds for Agnostically Learning Intersections of Halfspaces with Gaussian Marginals

Feb 10, 2022

Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem

Oct 19, 2021

Support vector machines and linear regression coincide with very high-dimensional features

May 28, 2021