Aishan Liu

Red Pill and Blue Pill: Controllable Website Fingerprinting Defense via Dynamic Backdoor Learning

Dec 16, 2024

PTSBench: A Comprehensive Post-Training Sparsity Benchmark Towards Algorithms and Models

Dec 10, 2024

CopyrightShield: Spatial Similarity Guided Backdoor Defense against Copyright Infringement in Diffusion Models

Dec 02, 2024

Visual Adversarial Attack on Vision-Language Models for Autonomous Driving

Nov 27, 2024

TrojanRobot: Backdoor Attacks Against Robotic Manipulation in the Physical World

Nov 18, 2024

LLMCBench: Benchmarking Large Language Model Compression for Efficient Deployment

Oct 28, 2024

Module-wise Adaptive Adversarial Training for End-to-end Autonomous Driving

Sep 11, 2024

GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models

Aug 22, 2024

Compromising Embodied Agents with Contextual Backdoor Attacks

Aug 06, 2024

GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing

Jun 30, 2024