
Yulhwa Kim

Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models (Jun 18, 2024)

L4Q: Parameter Efficient Quantization-Aware Training on Large Language Models via LoRA-wise LSQ (Feb 15, 2024)

SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks (Feb 14, 2024)

Squeezing Large-Scale Diffusion Models for Mobile (Jul 03, 2023)

BitSplit-Net: Multi-bit Deep Neural Network with Bitwise Activation Function (Mar 23, 2019)

Neural Network-Hardware Co-design for Scalable RRAM-based BNN Accelerators (Nov 06, 2018)