Changhun Lee

QEFT: Quantization for Efficient Fine-Tuning of LLMs

Oct 11, 2024

Repurformer: Transformers for Repurposing-Aware Molecule Generation

Jul 16, 2024

A Bi-objective Perspective on Controllable Language Models: Reward Dropout Improves Off-policy Control Performance

Oct 06, 2023

OWQ: Lessons learned from activation outliers for weight quantization in large language models

Jun 13, 2023

INSTA-BNN: Binary Neural Network with INSTAnce-aware Threshold

Apr 18, 2022

Improving Accuracy of Binary Neural Networks using Unbalanced Activation Distribution

Dec 02, 2020