Zhuqing Liu

Do We Really Need to Design New Byzantine-robust Aggregation Rules?

Jan 29, 2025
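
The paper's own answer is not reproduced here; as a generic illustration of what a Byzantine-robust aggregation rule is (not this paper's proposal), the sketch below implements two classic rules, coordinate-wise median and trimmed mean, which bound the influence of arbitrarily corrupted client updates. All names and the toy data are illustrative.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client updates by the per-coordinate median.

    A classic Byzantine-robust rule: a minority of arbitrarily
    corrupted updates cannot move the median far, unlike the mean.
    """
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, trim_ratio=0.2):
    """Drop the largest/smallest values per coordinate, then average."""
    stacked = np.sort(np.stack(updates), axis=0)
    k = int(len(updates) * trim_ratio)
    return stacked[k:len(updates) - k].mean(axis=0)

# Example: 8 honest updates near 1.0, 2 Byzantine updates at 100.
rng = np.random.default_rng(0)
honest = [np.ones(4) + 0.01 * rng.normal(size=4) for _ in range(8)]
byzantine = [100.0 * np.ones(4) for _ in range(2)]
updates = honest + byzantine
print("mean:   ", np.mean(np.stack(updates), axis=0))  # pulled toward 100
print("median: ", coordinate_wise_median(updates))     # stays near 1.0
print("trimmed:", trimmed_mean(updates))               # stays near 1.0
```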

Poisoning Attacks and Defenses to Federated Unlearning

Jan 29, 2025
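
As context for federated unlearning, and assuming nothing about this paper's specific attacks or defenses: the exact-unlearning reference point that unlearning methods are usually measured against is retraining without the target client. The sketch below shows that baseline with a minimal FedAvg loop; the function names and toy data are illustrative.

```python
import numpy as np

def fedavg(client_data, rounds=50, lr=0.1):
    """Minimal FedAvg on a shared linear model w (least squares)."""
    w = np.zeros(client_data[0][0].shape[1])
    for _ in range(rounds):
        local = []
        for X, y in client_data:
            g = X.T @ (X @ w - y) / len(y)  # local gradient step
            local.append(w - lr * g)
        w = np.mean(local, axis=0)          # server averages models
    return w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
w_full = fedavg(clients)
# Exact-unlearning baseline: retrain from scratch without client 2.
w_unlearned = fedavg(clients[:2] + clients[3:])
```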

Byzantine-Robust Federated Learning over Ring-All-Reduce Distributed Computing

Jan 29, 2025
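
Ring all-reduce itself (independent of this paper's robustness additions) is a classic algorithm: each of N nodes exchanges one chunk per step with its ring neighbor, so per-link traffic does not grow with N. A minimal simulated sketch, with illustrative names:

```python
import numpy as np

def ring_all_reduce(node_chunks):
    """Simulated ring all-reduce summing N chunks across N nodes.

    node_chunks[i][c] is node i's local copy of chunk c.  Each of the
    2*(N-1) steps moves one chunk per node to its ring neighbor.
    """
    n = len(node_chunks)
    # Phase 1, reduce-scatter: after n-1 steps node i holds the fully
    # summed chunk (i + 1) % n.
    for s in range(n - 1):
        sends = [node_chunks[i][(i - s) % n].copy() for i in range(n)]
        for i in range(n):
            node_chunks[(i + 1) % n][(i - s) % n] += sends[i]
    # Phase 2, all-gather: circulate each finished chunk around the ring.
    for s in range(n - 1):
        sends = [node_chunks[i][(i + 1 - s) % n].copy() for i in range(n)]
        for i in range(n):
            node_chunks[(i + 1) % n][(i + 1 - s) % n] = sends[i]
    return node_chunks

n = 4
rng = np.random.default_rng(1)
data = [[rng.normal(size=3) for _ in range(n)] for _ in range(n)]
expected = [sum(data[i][c] for i in range(n)) for c in range(n)]
result = ring_all_reduce([[c.copy() for c in node] for node in data])
assert all(np.allclose(result[i][c], expected[c])
           for i in range(n) for c in range(n))
```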

Adversarial Attacks to Multi-Modal Models

Sep 10, 2024
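
For background only (this is the textbook single-modality attack, not the paper's multi-modal one): the fast gradient sign method perturbs an input by a small max-norm step in the direction that increases the loss. A minimal sketch on a toy logistic model, with illustrative names:

```python
import numpy as np

def fgsm(x, grad_wrt_x, eps=0.1):
    """Fast Gradient Sign Method: one max-norm-bounded perturbation
    in the direction that increases the loss."""
    return x + eps * np.sign(grad_wrt_x)

# Toy logistic model: loss = log(1 + exp(-y * w.x)); its gradient
# with respect to the input x is -y * w / (1 + exp(y * w.x)).
w = np.array([2.0, -1.0, 0.5])
x, y = np.array([0.3, 0.8, -0.2]), 1.0
grad_x = -y * w / (1.0 + np.exp(y * w @ x))
x_adv = fgsm(x, grad_x, eps=0.2)
print("clean margin:", y * w @ x, " adversarial margin:", y * w @ x_adv)
```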

Federated Multi-Objective Learning

Oct 15, 2023
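
As a standard building block of multi-objective learning (not this paper's federated algorithm), the multiple-gradient descent algorithm (MGDA) picks the min-norm convex combination of task gradients, which for two tasks has a closed form. A minimal sketch, with illustrative names:

```python
import numpy as np

def mgda_two_task_direction(g1, g2):
    """Min-norm point in the convex hull of two task gradients.

    Minimizes ||a*g1 + (1-a)*g2||^2 over a in [0, 1]; the result is a
    common descent direction for both objectives whenever one exists.
    """
    diff = g1 - g2
    denom = diff @ diff
    alpha = 0.5 if denom == 0 else np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return alpha * g1 + (1 - alpha) * g2

g1, g2 = np.array([1.0, 0.0]), np.array([0.5, 2.0])
d = mgda_two_task_direction(g1, g2)
print(d, d @ g1, d @ g2)  # non-negative inner products with both gradients
```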

PRECISION: Decentralized Constrained Min-Max Learning with Low Communication and Sample Complexities

Mar 05, 2023
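
For context on constrained min-max learning (a generic baseline, not PRECISION itself): the simplest template is projected gradient descent-ascent, descending in x and ascending in y with a projection enforcing the constraints. A minimal sketch on a toy saddle problem, with illustrative names:

```python
import numpy as np

def project_box(v, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^d."""
    return np.clip(v, lo, hi)

def projected_gda_step(x, y, grad_x, grad_y, lr=0.1):
    """One projected gradient descent-ascent step for
    min_x max_y f(x, y) under box constraints."""
    return (project_box(x - lr * grad_x(x, y)),  # descend in x
            project_box(y + lr * grad_y(x, y)))  # ascend in y

# Toy strongly-convex-strongly-concave problem
# f(x, y) = 0.5 x^2 - 0.5 y^2 + x*y, saddle point at the origin.
grad_x = lambda x, y: x + y
grad_y = lambda x, y: x - y
x, y = np.array([0.9]), np.array([-0.7])
for _ in range(200):
    x, y = projected_gda_step(x, y, grad_x, grad_y)
print(x, y)  # both approach 0
```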

DIAMOND: Taming Sample and Communication Complexities in Decentralized Bilevel Optimization

Dec 10, 2022
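
For background on bilevel optimization (the generic double-loop template, not DIAMOND's decentralized method): an inner loop tracks the lower-level solution y*(x), and the outer loop descends on an approximate hypergradient. The sketch below uses a quadratic lower level so the hypergradient is known in closed form; all names are illustrative.

```python
import numpy as np

def bilevel_gd(A, b, outer_steps=200, inner_steps=10, lr_x=0.1, lr_y=0.5):
    """Double-loop sketch of bilevel optimization:
        min_x f(x, y*(x))  s.t.  y*(x) = argmin_y g(x, y),
    with f(x, y) = 0.5||y - b||^2 and g(x, y) = 0.5||y - A x||^2,
    so y*(x) = A x and the hypergradient is A.T (A x - b).
    """
    x, y = np.zeros(A.shape[1]), np.zeros(A.shape[0])
    for _ in range(outer_steps):
        for _ in range(inner_steps):   # inner loop: track y*(x)
            y -= lr_y * (y - A @ x)    # gradient of g in y
        hypergrad = A.T @ (y - b)      # (dy*/dx)^T grad_y f, dy*/dx = A
        x -= lr_x * hypergrad          # outer step on x
    return x, y

A = np.array([[1.0, 0.5], [0.0, 1.0]])
b = np.array([1.0, 2.0])
x, y = bilevel_gd(A, b)
print(x, "vs exact", np.linalg.solve(A, b))  # x -> A^{-1} b, so y* -> b
```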

SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning

Add code
Oct 02, 2022
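
As background on federated min-max learning (the basic communication-saving pattern such methods build on, not SAGDA itself): clients run several local gradient descent-ascent steps between server averaging rounds. A minimal sketch, with illustrative names and toy objectives:

```python
import numpy as np

def federated_gda(grads, rounds=100, local_steps=5, lr=0.05):
    """Local gradient descent-ascent with periodic averaging.

    grads[i](x, y) returns (grad_x, grad_y) for client i's objective.
    Each client runs `local_steps` GDA steps from the server model,
    then the server averages: one communication per round.
    """
    x, y = np.zeros(1), np.zeros(1)
    for _ in range(rounds):
        xs, ys = [], []
        for g in grads:
            xi, yi = x.copy(), y.copy()
            for _ in range(local_steps):
                gx, gy = g(xi, yi)
                xi, yi = xi - lr * gx, yi + lr * gy
            xs.append(xi); ys.append(yi)
        x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)
    return x, y

# Two clients with shifted objectives f_i = 0.5(x - a_i)^2 - 0.5 y^2 + x*y.
def make_client(a):
    return lambda x, y: (x - a + y, x - y)

x, y = federated_gda([make_client(-1.0), make_client(3.0)])
print(x, y)  # approaches the saddle of the average objective, (0.5, 0.5)
```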

SYNTHESIS: A Semi-Asynchronous Path-Integrated Stochastic Gradient Method for Distributed Learning in Computing Clusters

Aug 27, 2022
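
"Path-integrated" refers to the SPIDER/STORM family of recursive variance-reduced gradient estimators; as background (not SYNTHESIS's semi-asynchronous scheme), the sketch below runs the STORM-style estimator on a toy stochastic quadratic. All names are illustrative.

```python
import numpy as np

def storm_sgd(grad_sample, x0, steps=2000, lr=0.02, a=0.1):
    """SGD with a path-integrated (recursive variance-reduced)
    gradient estimator in the STORM style:
        d_t = g(x_t; xi_t) + (1 - a) * (d_{t-1} - g(x_{t-1}; xi_t)),
    reusing the same sample xi_t at x_t and x_{t-1} so the correction
    has low variance when consecutive iterates are close.
    """
    rng = np.random.default_rng(0)
    x = x0.copy()
    d = grad_sample(x, rng)               # plain stochastic gradient at start
    for _ in range(steps):
        x_prev, x = x, x - lr * d
        xi = rng.normal(size=x.shape)     # one shared sample for both points
        d = grad_sample(x, rng, xi) + (1 - a) * (d - grad_sample(x_prev, rng, xi))
    return x

# Toy objective E[0.5||x - (1 + xi)||^2] with minimizer 1 per coordinate.
def grad_sample(x, rng, xi=None):
    if xi is None:
        xi = rng.normal(size=x.shape)
    return x - (1.0 + xi)

print(storm_sgd(grad_sample, np.zeros(3)))  # close to [1, 1, 1]
```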

NET-FLEET: Achieving Linear Convergence Speedup for Fully Decentralized Federated Learning with Heterogeneous Data

Aug 17, 2022
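
As background on fully decentralized learning with heterogeneous data (the standard gradient-tracking template, not NET-FLEET's algorithm): nodes mix with neighbors through a doubly stochastic matrix and maintain a tracker of the network-average gradient, which corrects the drift caused by heterogeneous local objectives without any server. A minimal sketch, with illustrative names:

```python
import numpy as np

def gradient_tracking(grads, W, d, steps=300, lr=0.1):
    """Decentralized optimization with gradient tracking:
        X <- W X - lr * Y
        Y <- W Y + grad(X_new) - grad(X_old)
    where W is doubly stochastic and Y_i tracks the average gradient.
    """
    n = len(grads)
    X = np.zeros((n, d))
    G = np.array([g(X[i]) for i, g in enumerate(grads)])
    Y = G.copy()                    # trackers start at local gradients
    for _ in range(steps):
        X = W @ X - lr * Y          # consensus step + descent
        G_new = np.array([g(X[i]) for i, g in enumerate(grads)])
        Y = W @ Y + G_new - G       # track the average gradient
        G = G_new
    return X

# Three nodes, heterogeneous quadratics f_i = 0.5||x - a_i||^2.
targets = [np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([0.0, 3.0])]
grads = [lambda x, a=a: x - a for a in targets]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
X = gradient_tracking(grads, W, d=2)
print(X)  # every row approaches the global minimizer, mean(targets) = [1, 1]
```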