Jiaxiang Liu

Self-Calibrated Consistency can Fight Back for Adversarial Robustness in Vision-Language Models

Oct 26, 2025

Modest-Align: Data-Efficient Alignment for Vision-Language Models

Oct 24, 2025

An Approach for Systematic Decomposition of Complex LLM Tasks

Oct 09, 2025

3D-RAD: A Comprehensive 3D Radiology Med-VQA Dataset with Multi-Temporal Analysis and Diverse Diagnostic Tasks

Jun 11, 2025

Know-MRI: A Knowledge Mechanisms Revealer&Interpreter for Large Language Models

Jun 10, 2025

Leveraging Pretrained Diffusion Models for Zero-Shot Part Assembly

May 01, 2025

Capability Localization: Capabilities Can be Localized rather than Individual Knowledge

Feb 28, 2025

Fair-MoE: Fairness-Oriented Mixture of Experts in Vision-Language Models

Feb 10, 2025

KPL: Training-Free Medical Knowledge Mining of Vision-Language Models

Jan 20, 2025

MedCoT: Medical Chain of Thought via Hierarchical Expert

Dec 18, 2024