
Ying Ding

PathMoE: Interpretable Multimodal Interaction Experts for Pediatric Brain Tumor Classification

Mar 02, 2026

Choosing How to Remember: Adaptive Memory Structures for LLM Agents

Feb 15, 2026

ICODEN: Ordinary Differential Equation Neural Networks for Interval-Censored Data

Feb 10, 2026

Rethinking the Value of Multi-Agent Workflow: A Strong Single Agent Baseline

Jan 18, 2026

Position: Thematic Analysis of Unstructured Clinical Transcripts with Large Language Models

Sep 18, 2025

A Multi-Stage Large Language Model Framework for Extracting Suicide-Related Social Determinants of Health

Aug 07, 2025

Teaching with Lies: Curriculum DPO on Synthetic Negatives for Hallucination Detection

May 23, 2025

Beyond Feature Importance: Feature Interactions in Predicting Post-Stroke Rigidity with Graph Explainable AI

Apr 10, 2025

TAMA: A Human-AI Collaborative Thematic Analysis Framework Using Multi-Agent LLMs for Clinical Interviews

Mar 26, 2025

MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models

Feb 20, 2025