Andrea Fasoli

Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators

Feb 16, 2023

Accelerating Inference and Language Model Fusion of Recurrent Neural Network Transducers via End-to-End 4-bit Quantization

Jun 16, 2022

4-bit Quantization of LSTM-based Speech Recognition Models

Aug 27, 2021