Jeonghoon Kim

Laboratory for Natural and Artificial Kinästhese, Convergence Research Center for Artificial Intelligence, Department of Artificial Intelligence, Dongguk University, Seoul, South Korea

LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices

Jul 16, 2024

Improving Multi-hop Logical Reasoning in Knowledge Graphs with Context-Aware Query Representation Learning

Jun 11, 2024

HyperCLOVA X Technical Report

Apr 13, 2024

Domain Generalization in LiDAR Semantic Segmentation Leveraged by Density Discriminative Feature Embedding

Dec 19, 2023

Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models

Sep 27, 2023

FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization

Jun 01, 2023

Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization

May 23, 2023

AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models

Oct 08, 2022

CogME: A Novel Evaluation Metric for Video Understanding Intelligence

Jul 21, 2021

Towards a Federated Learning Framework for Heterogeneous Devices of Internet of Things

May 31, 2021