Sangjoon Park

RT-Surv: Improving Mortality Prediction After Radiotherapy with Large Language Model Structuring of Large-Scale Unstructured Electronic Health Records

Aug 09, 2024

Enhancing Demand Prediction in Open Systems by Cartogram-aided Deep Learning

Mar 24, 2024

Objective and Interpretable Breast Cosmesis Evaluation with Attention Guided Denoising Diffusion Anomaly Detection Model

Feb 28, 2024

RO-LLaMA: Generalist LLM for Radiation Oncology via Noise Augmentation and Consistency Regularization

Nov 27, 2023

LLM-driven Multimodal Target Volume Contouring in Radiation Oncology

Nov 03, 2023

Improving Medical Speech-to-Text Accuracy with Vision-Language Pre-training Model

Feb 27, 2023

MS-DINO: Efficient Distributed Training of Vision Transformer Foundation Model in Medical Domain through Masked Sampling

Jan 05, 2023

Alternating Cross-attention Vision-Language Model for Efficient Learning with Medical Image and Report without Curation

Aug 10, 2022

Multi-Task Distributed Learning using Vision Transformer with Random Patch Permutation

Apr 07, 2022

AI can evolve without labels: self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation

Feb 13, 2022