
Fangzhao Wu

Measuring Human Contribution in AI-Assisted Content Generation

Aug 27, 2024

Uncovering Safety Risks in Open-source LLMs through Concept Activation Vector

Apr 18, 2024

Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models

Dec 21, 2023

Towards Attack-tolerant Federated Learning via Critical Parameter Analysis

Aug 18, 2023

FedDefender: Client-Side Attack-Tolerant Federated Learning

Jul 18, 2023

FedSampling: A Better Sampling Strategy for Federated Learning

Jun 25, 2023

Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark

May 17, 2023

Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher

Apr 25, 2023

DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision

Mar 15, 2023

Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias

Mar 01, 2023