
Taeho Kim

LLMem: Estimating GPU Memory Usage for Fine-Tuning Pre-Trained LLMs

Apr 16, 2024

HyperCLOVA X Technical Report

Apr 13, 2024

Self-Supervised Learning from Non-Object Centric Images with a Geometric Transformation Sensitive Architecture

Apr 27, 2023

Tensor Slicing and Optimization for Multicore NPUs

Apr 06, 2023

Selection of the Most Probable Best

Jul 15, 2022

CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution

Jul 04, 2022

Quantune: Post-training Quantization of Convolutional Neural Networks using Extreme Gradient Boosting for Fast Deployment

Feb 21, 2022