S. R. Nandakumar

Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators

Feb 16, 2023

AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator

Nov 10, 2021

Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses

May 28, 2019