
Xiaoyi Chen

The Janus Interface: How Fine-Tuning in Large Language Models Amplifies the Privacy Risks

Oct 24, 2023

NCL: Textual Backdoor Defense Using Noise-augmented Contrastive Learning

Mar 03, 2023

Kallima: A Clean-label Framework for Textual Backdoor Attacks

Jun 03, 2022

MIDAS: Multi-agent Interaction-aware Decision-making with Adaptive Strategies for Urban Autonomous Navigation

Aug 17, 2020

BadNL: Backdoor Attacks Against NLP Models

Jun 01, 2020