
Ying Hu

An Interpretable and Stable Framework for Sparse Principal Component Analysis

Mar 14, 2026

GLEAM: A Multimodal Imaging Dataset and HAMM for Glaucoma Classification

Mar 13, 2026

Training Together, Diagnosing Better: Federated Learning for Collagen VI-Related Dystrophies

Dec 18, 2025

Ultrasound Report Generation with Multimodal Large Language Models for Standardized Texts

May 13, 2025

BitNet b1.58 2B4T Technical Report

Apr 16, 2025

GDiffRetro: Retrosynthesis Prediction with Dual Graph Enhanced Molecular Representation and Diffusion Generation

Jan 14, 2025

Med-2E3: A 2D-Enhanced 3D Medical Multimodal Large Language Model

Nov 19, 2024

DiffuseST: Unleashing the Capability of the Diffusion Model for Style Transfer

Oct 19, 2024

Magnet: We Never Know How Text-to-Image Diffusion Models Work, Until We Learn How Vision-Language Models Function

Sep 30, 2024

Uni-Med: A Unified Medical Generalist Foundation Model For Multi-Task Learning Via Connector-MoE

Sep 26, 2024