Chenghua Lin

for the Alzheimer's Disease Neuroimaging Initiative

RIGOURATE: Quantifying Scientific Exaggeration with Evidence-Aligned Claim Evaluation

Jan 07, 2026

Aligning Findings with Diagnosis: A Self-Consistent Reinforcement Learning Framework for Trustworthy Radiology Reporting

Jan 06, 2026

When Agents See Humans as the Outgroup: Belief-Dependent Bias in LLM-Powered Agents

Jan 06, 2026

Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space

Dec 31, 2025

Encyclo-K: Evaluating LLMs with Dynamically Composed Knowledge Statements

Dec 31, 2025

Ara-HOPE: Human-Centric Post-Editing Evaluation for Dialectal Arabic to Modern Standard Arabic Translation

Dec 25, 2025

R-GenIMA: Integrating Neuroimaging and Genetics with Interpretable Multimodal AI for Alzheimer's Disease Progression

Dec 22, 2025

Seeing isn't Hearing: Benchmarking Vision Language Models at Interpreting Spectrograms

Nov 17, 2025

Scaling Latent Reasoning via Looped Language Models

Oct 29, 2025

Drivel-ology: Challenging LLMs with Interpreting Nonsense with Depth

Sep 04, 2025