Yuji Zhang

Agentic Reasoning for Large Language Models

Jan 18, 2026

Current Agents Fail to Leverage World Model as Tool for Foresight

Jan 08, 2026

Atomic Reasoning for Scientific Table Claim Verification

Jun 08, 2025

ModelingAgent: Bridging LLMs and Mathematical Modeling for Real-World Challenges

May 21, 2025

TAMA: A Human-AI Collaborative Thematic Analysis Framework Using Multi-Agent LLMs for Clinical Interviews

Mar 26, 2025

The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination

Feb 22, 2025

Internal Activation as the Polar Star for Steering Unsafe LLM Behavior

Feb 04, 2025

EscapeBench: Pushing Language Models to Think Outside the Box

Dec 18, 2024

Integrative Decoding: Improve Factuality via Implicit Self-consistency

Oct 02, 2024

A Survey on the Honesty of Large Language Models

Sep 27, 2024