
Wei-Bin Lee

Beyond Natural Language Perplexity: Detecting Dead Code Poisoning in Code Generation Datasets

Feb 28, 2025

Layer-Aware Task Arithmetic: Disentangling Task-Specific and Instruction-Following Knowledge

Feb 27, 2025

A Survey on Backdoor Threats in Large Language Models (LLMs): Attacks, Defenses, and Evaluations

Feb 06, 2025

A First Physical-World Trajectory Prediction Attack via LiDAR-induced Deceptions in Autonomous Driving

Jun 17, 2024