Gongshen Liu

Consensus Entropy: Harnessing Multi-VLM Agreement for Self-Verifying and Self-Improving OCR

Apr 16, 2025

Probing then Editing Response Personality of Large Language Models

Apr 14, 2025

Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models

Mar 03, 2025

Investigating the Adaptive Robustness with Knowledge Conflicts in LLM-based Multi-Agent Systems

Feb 21, 2025

InsightVision: A Comprehensive, Multi-Level Chinese-based Benchmark for Evaluating Implicit Visual Semantics in Large Vision Language Models

Feb 19, 2025

U-GIFT: Uncertainty-Guided Firewall for Toxic Speech in Few-Shot Scenario

Jan 01, 2025

Gracefully Filtering Backdoor Samples for Generative Large Language Models without Retraining

Dec 03, 2024

NSmark: Null Space Based Black-box Watermarking Defense Framework for Pre-trained Language Models

Oct 16, 2024

Flooding Spread of Manipulated Knowledge in LLM-Based Multi-Agent Communities

Jul 10, 2024

TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models

May 22, 2024