Haibo Hu

A Sample-Level Evaluation and Generative Framework for Model Inversion Attacks

Feb 26, 2025

FUNU: Boosting Machine Unlearning Efficiency by Filtering Unnecessary Unlearning

Jan 28, 2025

RALAD: Bridging the Real-to-Sim Domain Gap in Autonomous Driving with Retrieval-Augmented Learning

Jan 21, 2025

Fine-tuning is Not Fine: Mitigating Backdoor Attacks in GNNs with Limited Clean Data

Jan 10, 2025

Structure-Preference Enabled Graph Embedding Generation under Differential Privacy

Jan 07, 2025

Don't Lose Yourself: Boosting Multimodal Recommendation via Reducing Node-neighbor Discrepancy in Graph Convolutional Network

Dec 25, 2024

ModeSeq: Taming Sparse Multimodal Motion Prediction with Sequential Mode Modeling

Nov 17, 2024

New Paradigm of Adversarial Training: Breaking Inherent Trade-Off between Accuracy and Robustness via Dummy Classes

Oct 16, 2024

Alignment-Aware Model Extraction Attacks on Large Language Models

Sep 04, 2024

Why Are My Prompts Leaked? Unraveling Prompt Extraction Threats in Customized Large Language Models

Aug 05, 2024