Sudhakar Sah

Token Pruning using a Lightweight Background Aware Vision Transformer

Oct 12, 2024

ActNAS : Generating Efficient YOLO Models using Activation NAS

Oct 11, 2024

MCUBench: A Benchmark of Tiny Object Detectors on MCUs

Sep 27, 2024

QGen: On the Ability to Generalize in Quantization Aware Training

Apr 19, 2024

Accelerating Deep Neural Networks via Semi-Structured Activation Sparsity

Sep 27, 2023

DeepliteRT: Computer Vision at the Edge

Sep 19, 2023

YOLOBench: Benchmarking Efficient Object Detectors on Embedded Systems

Jul 26, 2023

DeepGEMM: Accelerated Ultra Low-Precision Inference on CPU Architectures using Lookup Tables

Apr 18, 2023

Accelerating Deep Learning Model Inference on Arm CPUs with Ultra-Low Bit Quantization and Runtime

Jul 18, 2022