Aishan Liu

LLMCBench: Benchmarking Large Language Model Compression for Efficient Deployment

Oct 28, 2024

Module-wise Adaptive Adversarial Training for End-to-end Autonomous Driving

Sep 11, 2024

GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models

Aug 22, 2024

Compromising Embodied Agents with Contextual Backdoor Attacks

Aug 06, 2024

GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing

Jun 30, 2024

Revisiting Backdoor Attacks against Large Vision-Language Models

Jun 27, 2024

Unveiling the Safety of GPT-4o: An Empirical Study using Jailbreak Attacks

Jun 10, 2024

Jailbreak Vision Language Models via Bi-Modal Adversarial Prompt

Jun 06, 2024

LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions

Jun 04, 2024

Towards Transferable Attacks Against Vision-LLMs in Autonomous Driving with Typography

May 23, 2024