Jung Hyun Lee

LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices

Jul 16, 2024

Token-Supervised Value Models for Enhancing Mathematical Reasoning Capabilities of Large Language Models

Jul 12, 2024

HyperCLOVA X Technical Report

Apr 13, 2024

Label-Noise Robust Diffusion Models

Feb 27, 2024

FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization

Jun 01, 2023

Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization

May 23, 2023

Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss

Sep 05, 2021

Compressed Sensing via Measurement-Conditional Generative Models

Jul 02, 2020

Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bit-wise Regularization

Nov 29, 2019