Eduard Gorbunov

Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum

Oct 22, 2024

Low-Resource Machine Translation through the Lens of Personalized Federated Learning

Jun 18, 2024

Gradient Clipping Improves AdaGrad when the Noise Is Heavy-Tailed

Jun 06, 2024

Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad

Mar 05, 2024

Federated Learning Can Find Friends That Are Beneficial

Feb 14, 2024

Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences

Nov 23, 2023

Byzantine-Tolerant Methods for Distributed Variational Inequalities

Nov 08, 2023

Breaking the Heavy-Tailed Noise Barrier in Stochastic Optimization Problems

Nov 07, 2023

Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates

Oct 15, 2023

High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise

Oct 03, 2023