
Zhuosheng Zhang

Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University

ChemAgent: Self-updating Library in Large Language Models Improves Chemical Reasoning

Jan 11, 2025

Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs

Dec 30, 2024

Look Before You Leap: Enhancing Attention and Vigilance Regarding Harmful Content with GuidelineLLM

Dec 10, 2024

Gracefully Filtering Backdoor Samples for Generative Large Language Models without Retraining

Dec 03, 2024

NSmark: Null Space Based Black-box Watermarking Defense Framework for Pre-trained Language Models

Oct 16, 2024

Dynamic Planning for LLM-based Graphical User Interface Automation

Oct 01, 2024

MEGen: Generative Backdoor in Large Language Models via Model Editing

Aug 20, 2024

Caution for the Environment: Multimodal Agents are Susceptible to Environmental Distractions

Aug 05, 2024

DOCBENCH: A Benchmark for Evaluating LLM-based Document Reading Systems

Jul 15, 2024

Flooding Spread of Manipulated Knowledge in LLM-Based Multi-Agent Communities

Jul 10, 2024