Xiaofeng Liu

Domain Adaptive Diabetic Retinopathy Grading with Model Absence and Flowing Data

Dec 02, 2024

Inspiring the Next Generation of Segment Anything Models: Comprehensively Evaluate SAM and SAM 2 with Diverse Prompts Towards Context-Dependent Concepts under Different Scenes

Dec 02, 2024

Progressive Compositionality In Text-to-Image Generative Models

Oct 22, 2024

Point-supervised Brain Tumor Segmentation with Box-prompted MedSAM

Aug 01, 2024

Label-Efficient 3D Brain Segmentation via Complementary 2D Diffusion Models with Orthogonal Views

Jul 17, 2024

Light-weight Fine-tuning Method for Defending Adversarial Noise in Pre-trained Medical Vision-Language Models

Jul 02, 2024

Fair Text to Medical Image Diffusion Model with Subgroup Distribution Aligned Tuning

Jun 21, 2024

IRSRMamba: Infrared Image Super-Resolution via Mamba-based Wavelet Transform Feature Modulation Model

May 16, 2024

Why does Knowledge Distillation Work? Rethink its Attention and Fidelity Mechanism

Apr 30, 2024

Loop Improvement: An Efficient Approach for Extracting Shared Features from Heterogeneous Data without Central Server

Mar 21, 2024