
Zhaoxian Wu

Pipeline Gradient-based Model Training on Analog In-memory Accelerators

Oct 19, 2024

Single-Timescale Multi-Sequence Stochastic Approximation Without Fixed Point Smoothness: Theories and Applications

Oct 17, 2024

On the Trade-off between Flatness and Optimization in Distributed Learning

Jun 28, 2024

Towards Exact Gradient-based Training on Analog In-memory Computing

Jun 18, 2024

Byzantine-Robust Distributed Online Learning: Taming Adversarial Participants in An Adversarial Environment

Jul 16, 2023

Byzantine-Robust Variance-Reduced Federated Learning over Distributed Non-i.i.d. Data

Sep 17, 2020

Federated Variance-Reduced Stochastic Gradient Descent with Robustness to Byzantine Attacks

Dec 29, 2019