
Lanqing Hong

Corrupted but Not Broken: Rethinking the Impact of Corrupted Data in Visual Instruction Tuning

Feb 18, 2025

Certifying Language Model Robustness with Fuzzed Randomized Smoothing: An Efficient Defense Against Backdoor Attacks

Feb 09, 2025

Effective Black-Box Multi-Faceted Attacks Breach Vision Large Language Model Guardrails

Feb 09, 2025

Dual Risk Minimization: Towards Next-Level Robustness in Fine-tuning Zero-Shot Models

Nov 29, 2024

MagicDriveDiT: High-Resolution Long Video Generation for Autonomous Driving with Adaptive Control

Nov 21, 2024

AtomThink: A Slow Thinking Framework for Multimodal Mathematical Reasoning

Nov 18, 2024

LLMs Can Evolve Continually on Modality for X-Modal Reasoning

Oct 26, 2024

EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions

Sep 26, 2024

CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration

Sep 17, 2024

CoSafe: Evaluating Large Language Model Safety in Multi-Turn Dialogue Coreference

Jun 25, 2024