Zhaohan Xi

Buckle Up: Robustifying LLMs at Every Customization Stage via Data Curation

Oct 03, 2024

Zodiac: A Cardiologist-Level LLM Framework for Multi-Agent Diagnostics

Oct 02, 2024

PromptFix: Few-shot Backdoor Removal via Adversarial Prompt Tuning

Jun 06, 2024

Robustifying Safety-Aligned Large Language Models through Clean Data Curation

May 31, 2024

On the Difficulty of Defending Contrastive Learning against Backdoor Attacks

Dec 14, 2023

Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks

Sep 23, 2023

On the Security Risks of Knowledge Graph Reasoning

May 03, 2023

Demystifying Self-supervised Trojan Attacks

Oct 13, 2022

Reasoning over Multi-view Knowledge Graphs

Sep 27, 2022

Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era

Feb 22, 2022