Xiaosong Yuan

Context Tokens are Anchors: Understanding the Repetition Curse in dMLLMs from an Information Flow Perspective

Jan 28, 2026

Hallucination Begins Where Saliency Drops

Jan 28, 2026

Efficient Reasoning Through Suppression of Self-Affirmation Reflections in Large Reasoning Models

Jun 14, 2025

Improving Complex Reasoning with Dynamic Prompt Corruption: A Soft Prompt Optimization Approach

Mar 17, 2025

Seeing Clearly by Layer Two: Enhancing Attention Heads to Alleviate Hallucination in LVLMs

Nov 15, 2024
Figure 1 for Seeing Clearly by Layer Two: Enhancing Attention Heads to Alleviate Hallucination in LVLMs
Figure 2 for Seeing Clearly by Layer Two: Enhancing Attention Heads to Alleviate Hallucination in LVLMs
Figure 3 for Seeing Clearly by Layer Two: Enhancing Attention Heads to Alleviate Hallucination in LVLMs
Figure 4 for Seeing Clearly by Layer Two: Enhancing Attention Heads to Alleviate Hallucination in LVLMs
Viaarxiv icon

Instance-adaptive Zero-shot Chain-of-Thought Prompting

Sep 30, 2024

From Redundancy to Relevance: Enhancing Explainability in Multimodal Large Language Models

Jun 04, 2024

TC-GAT: Graph Attention Network for Temporal Causality Discovery

Apr 21, 2023