
Dong Hye Ye

Self-learned representation-guided latent diffusion model for breast cancer classification in deep ultraviolet whole surface images

Jan 16, 2026

KOCOBrain: Kuramoto-Guided Graph Network for Uncovering Structure-Function Coupling in Adolescent Prenatal Drug Exposure

Jan 16, 2026

DiA-gnostic VLVAE: Disentangled Alignment-Constrained Vision Language Variational AutoEncoder for Robust Radiology Reporting with Missing Modalities

Nov 08, 2025

Physics-Guided Multi-View Graph Neural Network for Schizophrenia Classification via Structural-Functional Coupling

May 21, 2025

Unified Cross-Modal Attention-Mixer Based Structural-Functional Connectomics Fusion for Neuropsychiatric Disorder Diagnosis

May 21, 2025

Breast Cancer Classification in Deep Ultraviolet Fluorescence Images Using a Patch-Level Vision Transformer Framework

May 12, 2025

Dynamic Contextual Attention Network: Transforming Spatial Representations into Adaptive Insights for Endoscopic Polyp Diagnosis

Apr 28, 2025

GCS-M3VLT: Guided Context Self-Attention based Multi-modal Medical Vision Language Transformer for Retinal Image Captioning

Dec 23, 2024

Multi-modal Imaging Genomics Transformer: Attentive Integration of Imaging with Genomic Biomarkers for Schizophrenia Classification

Jul 28, 2024

Deep learning for automated detection of breast cancer in deep ultraviolet fluorescence images with diffusion probabilistic model

Jul 01, 2024