
Guangzhi Xiong

Retrieving Counterfactuals Improves Visual In-Context Learning

Mar 17, 2026

Med-V1: Small Language Models for Zero-shot and Scalable Biomedical Evidence Attribution

Mar 05, 2026

Neural Additive Experts: Context-Gated Experts for Controllable Model Additivity

Feb 11, 2026

CASL: Concept-Aligned Sparse Latents for Interpreting Diffusion Models

Jan 21, 2026

Reasoning Beyond Chain-of-Thought: A Latent Computational Mode in Large Language Models

Jan 12, 2026

Toward Faithful Retrieval-Augmented Generation with Sparse Autoencoders

Dec 09, 2025

Concept-RuleNet: Grounded Multi-Agent Neurosymbolic Reasoning in Vision Language Models

Nov 13, 2025

GCAV: A Global Concept Activation Vector Framework for Cross-Layer Consistency in Interpretability

Aug 28, 2025

MedCite: Can Language Models Generate Verifiable Text for Medicine?

Jun 07, 2025

Toward Reliable Biomedical Hypothesis Generation: Evaluating Truthfulness and Hallucination in Large Language Models

May 20, 2025