Kang Eun Jeon

Efficient Multi-bit Quantization Network Training via Weight Bias Correction and Bit-wise Coreset Sampling

Oct 23, 2025

MSQ: Memory-Efficient Bit Sparsification Quantization

Jul 30, 2025

TruncQuant: Truncation-Ready Quantization for DNNs with Flexible Weight Bit Precision

Jun 13, 2025

MEMHD: Memory-Efficient Multi-Centroid Hyperdimensional Computing for Fully-Utilized In-Memory Computing Architectures

Feb 11, 2025

Column-wise Quantization of Weights and Partial Sums for Accurate and Efficient Compute-In-Memory Accelerators

Feb 11, 2025

Low-Rank Compression for IMC Arrays

Feb 10, 2025