Minghong Fang

LoBAM: LoRA-Based Backdoor Attack on Model Merging

Nov 23, 2024

Adversarial Attacks to Multi-Modal Models

Sep 10, 2024

Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning

Jul 09, 2024

Byzantine-Robust Decentralized Federated Learning

Jun 18, 2024

Understanding Server-Assisted Federated Learning in the Presence of Incomplete Client Participation

May 04, 2024

Poisoning Attacks on Federated Learning-based Wireless Traffic Prediction

Apr 22, 2024

Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks

Mar 05, 2024

GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis

Feb 21, 2024

Poisoning Federated Recommender Systems with Fake Users

Feb 18, 2024

Competitive Advantage Attacks to Decentralized Federated Learning

Oct 20, 2023