Siyang Gao

Riemannian Flow Matching for Disentangled Graph Domain Adaptation

Jan 31, 2026

Best Arm Identification with LLM Judges and Limited Human

Jan 29, 2026

Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging

May 08, 2025

Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas

Mar 04, 2025

Stochastically Constrained Best Arm Identification with Thompson Sampling

Jan 07, 2025

Image-of-Thought Prompting for Visual Reasoning Refinement in Multimodal Large Language Models

May 22, 2024

In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation

Mar 12, 2024

FELM: Benchmarking Factuality Evaluation of Large Language Models

Oct 01, 2023

Evaluating Factual Consistency of Summaries with Large Language Models

May 23, 2023

Convergence Rate Analysis for Optimal Computing Budget Allocation Algorithms

Nov 29, 2022