Zhize Li

Escaping Saddle Points in Heterogeneous Federated Learning via Distributed SGD with Communication Compression (Oct 29, 2023)

Coresets for Vertical Federated Learning: Regularized Linear Regression and $K$-Means Clustering (Oct 26, 2022)

Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization (Aug 22, 2022)

SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression (Jun 20, 2022)

3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation (Feb 02, 2022)

BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression (Jan 31, 2022)

Faster Rates for Compressed Federated Learning with Client-Variance Reduction (Dec 24, 2021)

EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback (Oct 07, 2021)

DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization (Oct 04, 2021)

FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning (Aug 10, 2021)