Seokil Ham

Parameter Efficient Mamba Tuning via Projector-targeted Diagonal-centric Linear Transformation
Nov 21, 2024

Diffusion Model Patching via Mixture-of-Prompts
May 30, 2024

Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts
Mar 14, 2024

NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks
Nov 01, 2023