Sourav Bhattacharya

MobileQuant: Mobile-friendly Quantization for On-device Language Models

Aug 25, 2024

Linear-Complexity Self-Supervised Learning for Speech Processing

Jul 18, 2024

Fast Inference Through The Reuse Of Attention Maps In Diffusion Models

Dec 13, 2023

Sumformer: A Linear-Complexity Alternative to Self-Attention for Speech Recognition

Jul 12, 2023

Cross-Attention is all you need: Real-Time Streaming Transformers for Personalised Speech Enhancement

Nov 08, 2022

Defensive Tensorization

Oct 26, 2021

Bunched LPCNet: Vocoder for Low-cost Neural Text-To-Speech Systems

Aug 11, 2020

Iterative Compression of End-to-End ASR Model using AutoML

Aug 06, 2020

MobiSR: Efficient On-Device Super-Resolution through Heterogeneous Mobile Processors

Aug 21, 2019

Cross-modal Recurrent Models for Weight Objective Prediction from Multimodal Time-series Data

Nov 29, 2017