Shiqing Ma

Bias Similarity Across Large Language Models

Oct 15, 2024

Speculative Coreset Selection for Task-Specific Fine-tuning

Oct 02, 2024

Data-centric NLP Backdoor Defense from the Lens of Memorization

Sep 21, 2024

Unlocking Adversarial Suffix Optimization Without Affirmative Phrases: Efficient Black-box Jailbreaking via LLM as Optimizer

Aug 21, 2024

UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening

Jul 16, 2024

Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models

Jul 15, 2024

Efficient DNN-Powered Software with Fair Sparse Models

Jul 03, 2024

MeanSparse: Post-Training Robustness Enhancement Through Mean-Centered Feature Sparsification

Jun 09, 2024

Towards General Robustness Verification of MaxPool-based Convolutional Neural Networks via Tightening Linear Approximation

Jun 02, 2024

Towards Imperceptible Backdoor Attack in Self-supervised Learning

May 23, 2024