Chenhang Cui

Transport and Merge: Cross-Architecture Merging for Large Language Models

Feb 05, 2026

Risky-Bench: Probing Agentic Safety Risks under Real-World Deployment

Feb 03, 2026

Self-Guard: Defending Large Reasoning Models via enhanced self-reflection

Jan 31, 2026

Lingua-SafetyBench: A Benchmark for Safety Evaluation of Multilingual Vision-Language Models

Jan 30, 2026

Improving Alignment in LVLMs with Debiased Self-Judgment

Aug 28, 2025

VFlowOpt: A Token Pruning Framework for LMMs with Visual Information Flow-Guided Optimization

Aug 07, 2025

DTPA: Dynamic Token-level Prefix Augmentation for Controllable Text Generation

Aug 06, 2025

RSafe: Incentivizing proactive reasoning to build robust and adaptive LLM safeguards

Jun 09, 2025

Safe + Safe = Unsafe? Exploring How Safe Images Can Be Exploited to Jailbreak Large Vision-Language Models

Nov 19, 2024

Dual-Optimized Adaptive Graph Reconstruction for Multi-View Graph Clustering

Oct 30, 2024