Kuofeng Gao

Towards Distillation-Resistant Large Language Models: An Information-Theoretic Perspective

Feb 03, 2026

Seeing Through the Chain: Mitigate Hallucination in Multimodal Reasoning Models via CoT Compression and Contrastive Preference Optimization

Feb 03, 2026

Imperceptible Jailbreaking against Large Language Models

Oct 06, 2025

Grounding Language with Vision: A Conditional Mutual Information Calibrated Decoding Strategy for Reducing Hallucinations in LVLMs

May 26, 2025

Wolf Hidden in Sheep's Conversations: Toward Harmless Data-Based Backdoor Attacks for Jailbreaking Large Language Models

May 23, 2025

Your Language Model Can Secretly Write Like Humans: Contrastive Paraphrase Attacks on LLM-Generated Text Detectors

May 21, 2025

Towards Dataset Copyright Evasion Attack against Personalized Text-to-Image Diffusion Models

May 05, 2025

Benchmarking Open-ended Audio Dialogue Understanding for Large Audio-Language Models

Dec 06, 2024

Denial-of-Service Poisoning Attacks against Large Language Models

Oct 14, 2024

Embedding Self-Correction as an Inherent Ability in Large Language Models for Enhanced Mathematical Reasoning

Oct 14, 2024