Dongsoo Lee

LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices
Jul 16, 2024

To FP8 and Back Again: Quantifying the Effects of Reducing Precision on LLM Training Stability
May 29, 2024

HyperCLOVA X Technical Report
Apr 13, 2024

No Token Left Behind: Reliable KV Cache Compression via Importance-Aware Mixed Precision Quantization
Feb 28, 2024

DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation
Feb 27, 2024

Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models
Sep 27, 2023

FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization
Jun 01, 2023

Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization
May 23, 2023

AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models
Oct 08, 2022

DFX: A Low-latency Multi-FPGA Appliance for Accelerating Transformer-based Text Generation
Sep 22, 2022