Blake Woodworth

SIERRA (Inria)

Local Steps Speed Up Local GD for Heterogeneous Distributed Logistic Regression

Jan 23, 2025

Gradient Descent Converges Linearly to Flatter Minima than Gradient Flow in Shallow Linear Networks

Jan 15, 2025

Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy

Feb 07, 2023

Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays

Jun 15, 2022

Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares

Apr 11, 2022

A Stochastic Newton Algorithm for Distributed Convex Optimization

Oct 07, 2021

The Minimax Complexity of Distributed Optimization

Sep 01, 2021

A Field Guide to Federated Optimization

Jul 14, 2021

An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning

Jun 04, 2021

On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent

Feb 19, 2021