Arash Fayyazi

CHOSEN: Compilation to Hardware Optimization Stack for Efficient Vision Transformer Inference

Jul 17, 2024

PEANO-ViT: Power-Efficient Approximations of Non-Linearities in Vision Transformers

Jun 21, 2024

Scalable Superconductor Neuron with Ternary Synaptic Connections for Ultra-Fast SNN Hardware

Feb 27, 2024

Sensitivity-Aware Mixed-Precision Quantization and Width Optimization of Deep Neural Networks Through Cluster-Based Tree-Structured Parzen Estimation

Aug 16, 2023

SNT: Sharpness-Minimizing Network Transformation for Fast Compression-friendly Pretraining

May 08, 2023

A Fast Training-Free Compression Framework for Vision Transformers

Mar 04, 2023

Efficient Compilation and Mapping of Fixed Function Combinational Logic onto Digital Signal Processors Targeting Neural Network Inference and Utilizing High-level Synthesis

Jul 30, 2022

Sparse Periodic Systolic Dataflow for Lowering Latency and Power Dissipation of Convolutional Neural Network Accelerators

Jun 30, 2022

NullaNet Tiny: Ultra-low-latency DNN Inference Through Fixed-function Combinational Logic

Apr 07, 2021

SynergicLearning: Neural Network-Based Feature Extraction for Highly-Accurate Hyperdimensional Learning

Aug 04, 2020