Brett W. Larsen

Tensor Decomposition Meets RKHS: Efficient Algorithms for Smooth and Misaligned Data

Aug 11, 2024

Does your data spark joy? Performance gains from domain upsampling at the end of training

Jun 05, 2024

Duality of Bures and Shape Distances with Implications for Comparing Neural Representations

Nov 19, 2023

Estimating Shape Distances on Neural Representations with Limited Samples

Oct 09, 2023

Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?

Oct 06, 2022

Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks

Jun 02, 2022

How many degrees of freedom do we need to train deep networks: a loss landscape perspective

Jul 13, 2021