Jia Zhu

LoopGuard: Breaking Self-Reinforcing Attention Loops via Dynamic KV Cache Intervention

Apr 11, 2026

Head-wise Modality Specialization within MLLMs for Robust Fake News Detection under Missing Modality

Apr 08, 2026

PeroMAS: A Multi-agent System of Perovskite Material Discovery

Feb 10, 2026

DynaGen: Unifying Temporal Knowledge Graph Reasoning with Dynamic Subgraphs and Generative Regularization

Dec 14, 2025

Automatic Failure Attribution and Critical Step Prediction Method for Multi-Agent Systems Based on Causal Inference

Sep 10, 2025

E3-Rewrite: Learning to Rewrite SQL for Executability, Equivalence, and Efficiency

Aug 12, 2025

LegalReasoner: Step-wised Verification-Correction for Legal Judgment Reasoning

Jun 09, 2025

Benchmarking Multi-National Value Alignment for Large Language Models

Apr 19, 2025

Adaptation Method for Misinformation Identification

Apr 19, 2025

DIDS: Domain Impact-aware Data Sampling for Large Language Model Training

Apr 17, 2025