Sungnyun Kim

DocKD: Knowledge Distillation from LLMs for Open-World Document Understanding Models

Oct 04, 2024

Diffusion-based Episodes Augmentation for Offline Multi-Agent Reinforcement Learning

Aug 23, 2024

Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition

Jul 04, 2024

FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning

Jun 04, 2024

DistiLLM: Towards Streamlined Distillation for Large Language Models

Feb 06, 2024

STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models

Dec 14, 2023

DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models

May 24, 2023

Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification

May 23, 2023

Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation

May 19, 2023

Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning

Mar 24, 2023