Abhisek Kundu

AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks

Apr 14, 2023

Tensor Processing Primitives: A Programming Abstraction for Efficiency and Portability in Deep Learning Workloads

Apr 14, 2021

K-TanH: Hardware Efficient Activations For Deep Learning

Oct 21, 2019

A Study of BFLOAT16 for Deep Learning Training

Jun 13, 2019
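The BFLOAT16 format keeps float32's 8-bit exponent but truncates the mantissa from 23 to 7 bits, trading precision for range-preserving storage at half the width. A minimal illustrative sketch of the conversion via bit manipulation, using round-to-nearest-even (this is a generic description of the format, not the paper's training recipe):

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Round a float32 value to bfloat16 (round-to-nearest-even) and
    return the 16-bit pattern. Illustrative: NaN/Inf handling omitted."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Round-to-nearest-even: add 0x7FFF plus the LSB of the kept half,
    # then drop the low 16 mantissa bits.
    lsb = (bits >> 16) & 1
    bits += 0x7FFF + lsb
    return (bits >> 16) & 0xFFFF

def bfloat16_bits_to_float(b: int) -> float:
    """Expand a bfloat16 bit pattern back to float32 by zero-padding
    the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]
```

For example, 1.0 (float32 pattern 0x3F800000) survives exactly as 0x3F80, while values with more than 7 mantissa bits are rounded to the nearest representable neighbor.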

Ternary Residual Networks

Oct 31, 2017

Ternary Neural Networks with Fine-Grained Quantization

May 30, 2017
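Ternary quantization constrains each weight to {-α, 0, +α}, so a tensor stores 2-bit codes plus one scale. A common threshold-based scheme is sketched below for illustration; the 0.7·mean|w| threshold is a heuristic from the Ternary Weight Networks line of work and is an assumption here, not necessarily the scheme used in these papers:

```python
def ternarize(weights, delta_scale=0.7):
    """Map each weight to {-alpha, 0, +alpha} by thresholding.
    delta_scale is an illustrative hyperparameter: weights with
    |w| <= delta are pruned to zero, the rest share one scale alpha
    equal to the mean magnitude of the surviving weights."""
    n = len(weights)
    mean_abs = sum(abs(w) for w in weights) / n
    delta = delta_scale * mean_abs            # pruning threshold
    kept = [w for w in weights if abs(w) > delta]
    alpha = sum(abs(w) for w in kept) / len(kept) if kept else 0.0
    return [alpha if w > delta else (-alpha if w < -delta else 0.0)
            for w in weights]
```

For instance, `ternarize([0.9, -0.8, 0.05, -0.1])` zeroes the two small weights and maps the large ones to ±0.85, the mean magnitude of the survivors.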

Mixed Low-precision Deep Learning Inference using Dynamic Fixed Point

Feb 01, 2017
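In dynamic fixed point, a tensor is stored as low-bit integers that share a single power-of-two scale, with the fractional length chosen per tensor (or per layer) from the observed dynamic range. A minimal sketch under that generic definition (the bit width and range rule here are illustrative assumptions, not the paper's exact scheme):

```python
import math

def quantize_dfp(values, bits=8):
    """Dynamic fixed point: signed integers sharing one scale 2**(-fl).
    The fractional length fl is picked so the largest magnitude fits
    in the integer range. Returns (integer codes, scale)."""
    qmax = 2 ** (bits - 1) - 1
    max_abs = max(abs(v) for v in values)
    # Bits needed left of the binary point determine fl.
    fl = bits - 1 - math.ceil(math.log2(max_abs)) if max_abs > 0 else 0
    scale = 2.0 ** (-fl)
    ints = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return ints, scale
```

Dequantization is just `code * scale`; values that are exact multiples of the scale, such as 1.5 with scale 2**-5, round-trip without error.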

A Randomized Rounding Algorithm for Sparse PCA

Nov 22, 2016

Relaxed Leverage Sampling for Low-rank Matrix Completion

Apr 07, 2016

Approximating Sparse PCA from Incomplete Data

Mar 12, 2015