Blake Woodworth

SIERRA

Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy

Feb 07, 2023

Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays

Jun 15, 2022

Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares

Apr 11, 2022

A Stochastic Newton Algorithm for Distributed Convex Optimization

Oct 07, 2021

The Minimax Complexity of Distributed Optimization

Sep 01, 2021

A Field Guide to Federated Optimization

Jul 14, 2021

An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning

Jun 04, 2021

On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent

Feb 19, 2021

The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication

Feb 02, 2021

Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy

Jul 13, 2020