
Xianhang Li

MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular Annotations for Medicine

Aug 06, 2024

What If We Recaption Billions of Web Images with LLaMA-3?

Jun 12, 2024

Autoregressive Pretraining with Mamba in Vision

Jun 11, 2024

Medical Vision Generalist: Unifying Medical Imaging Tasks in Context

Jun 08, 2024

Scaling White-Box Transformers for Vision

Jun 03, 2024

3D-TransUNet for Brain Metastases Segmentation in the BraTS2023 Challenge

Mar 23, 2024

Revisiting Adversarial Training at Scale

Jan 09, 2024

3D TransUNet: Advancing Medical Image Segmentation through Vision Transformers

Oct 11, 2023

Consistency-guided Meta-Learning for Bootstrapping Semi-Supervised Medical Image Segmentation

Jul 21, 2023

CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget; An Extra $4,000 Unlocks 81.8% Accuracy

Jun 27, 2023