Jaeha Kung

OPAL: Outlier-Preserved Microscaling Quantization Accelerator for Generative Large Language Models

Sep 06, 2024

LightNorm: Area and Energy-Efficient Batch Normalization Hardware for On-Device DNN Training

Nov 04, 2022

FlexBlock: A Flexible DNN Training Accelerator with Multi-Mode Block Floating Point Support

Mar 13, 2022

ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack

Nov 18, 2021