
Eunhyeok Park

HOT: Hadamard-based Optimized Training

Mar 27, 2025

PCM: Picard Consistency Model for Fast Parallel Sampling of Diffusion Models

Mar 25, 2025

SEAL: Scaling to Emphasize Attention for Long-Context Retrieval

Jan 25, 2025

PTQ4VM: Post-Training Quantization for Visual Mamba

Dec 29, 2024

QEFT: Quantization for Efficient Fine-Tuning of LLMs

Oct 11, 2024

HLQ: Fast and Efficient Backpropagation via Hadamard Low-rank Quantization

Jun 21, 2024

Task-Oriented Diffusion Model Compression

Jan 31, 2024

FRDiff: Feature Reuse for Exquisite Zero-shot Acceleration of Diffusion Models

Dec 06, 2023

OWQ: Lessons learned from activation outliers for weight quantization in large language models

Jun 13, 2023

Temporal Dynamic Quantization for Diffusion Models

Jun 04, 2023