
Jiaxiang Liu

ERNIE 5.0 Technical Report

Feb 04, 2026

BMAM: Brain-inspired Multi-Agent Memory Framework

Jan 28, 2026

Self-Calibrated Consistency can Fight Back for Adversarial Robustness in Vision-Language Models

Oct 26, 2025

Modest-Align: Data-Efficient Alignment for Vision-Language Models

Oct 24, 2025

An Approach for Systematic Decomposition of Complex LLM Tasks

Oct 09, 2025

3D-RAD: A Comprehensive 3D Radiology Med-VQA Dataset with Multi-Temporal Analysis and Diverse Diagnostic Tasks

Jun 11, 2025

Know-MRI: A Knowledge Mechanisms Revealer&Interpreter for Large Language Models

Jun 10, 2025

Leveraging Pretrained Diffusion Models for Zero-Shot Part Assembly

May 01, 2025

Capability Localization: Capabilities Can be Localized rather than Individual Knowledge

Feb 28, 2025

Fair-MoE: Fairness-Oriented Mixture of Experts in Vision-Language Models

Feb 10, 2025