Eduard Gorbunov

Byzantine-Robust and Differentially Private Federated Optimization under Weaker Assumptions
Mar 24, 2026

On the Role of Batch Size in Stochastic Conditional Gradient Methods
Mar 22, 2026

Byzantine-Robust Optimization under $(L_0, L_1)$-Smoothness
Mar 12, 2026

Convergence of Clipped-SGD for Convex $(L_0,L_1)$-Smooth Optimization with Heavy-Tailed Noise
May 27, 2025

Double Momentum and Error Feedback for Clipping with Fast Rates and Differential Privacy
Feb 17, 2025

Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization
Dec 03, 2024

Initialization using Update Approximation is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning
Nov 29, 2024

Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum
Oct 22, 2024

Low-Resource Machine Translation through the Lens of Personalized Federated Learning
Jun 18, 2024

Gradient Clipping Improves AdaGrad when the Noise Is Heavy-Tailed
Jun 06, 2024
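Gradient clipping is the recurring technique across several of the titles above (Clipped-SGD, clipped AdaGrad, clipping with error feedback). As a general illustration only, not drawn from any of these papers, the standard L2 clipping operation can be sketched as follows; the helper name `clip_gradient` and the threshold parameter `lam` are assumptions made for this sketch:

```python
import numpy as np

def clip_gradient(g: np.ndarray, lam: float) -> np.ndarray:
    """Standard L2 gradient clipping: rescale g so its Euclidean
    norm is at most lam, leaving it unchanged otherwise.
    (Illustrative sketch; lam denotes the clipping threshold.)"""
    norm = np.linalg.norm(g)
    if norm > lam:
        return g * (lam / norm)
    return g
```

A clipped stochastic gradient step then simply applies this operator to the sampled gradient before the update, which bounds the influence of any single (possibly heavy-tailed) gradient sample.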