
Dibakar Gope

Jumping through Local Minima: Quantization in the Loss Landscape of Vision Transformers

Aug 21, 2023

PerfSAGE: Generalized Inference Performance Predictor for Arbitrary Deep Learning Models on Edge Devices

Jan 26, 2023

CPT-V: A Contrastive Approach to Post-Training Quantization of Vision Transformers

Nov 17, 2022

Restructurable Activation Networks

Aug 17, 2022

Super-Efficient Super Resolution for Fast Adversarial Defense at the Edge

Dec 29, 2021

Collapsible Linear Blocks for Super-Efficient Super Resolution

Mar 17, 2021

MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers

Oct 25, 2020

Rank and run-time aware compression of NLP Applications

Oct 06, 2020

High Throughput Matrix-Matrix Multiplication between Asymmetric Bit-Width Operands

Aug 03, 2020

Ternary MobileNets via Per-Layer Hybrid Filter Banks

Nov 04, 2019