
Jinhwan Park

On the compression of shallow non-causal ASR models using knowledge distillation and tied-and-reduced decoder for low-latency on-device speech recognition

Dec 15, 2023

Macro-block dropout for improved regularization in training end-to-end speech recognition models

Dec 29, 2022

S-SGD: Symmetrical Stochastic Gradient Descent with Weight Noise Injection for Reaching Flat Minima

Sep 05, 2020

Single Stream Parallelization of Recurrent Neural Networks for Low Power and Fast Inference

Mar 30, 2018

FPGA-Based Low-Power Speech Recognition with Recurrent Neural Networks

Sep 30, 2016

FPGA Based Implementation of Deep Neural Networks Using On-chip Memory Only

Aug 29, 2016