Irem Boybat

A Precision-Optimized Fixed-Point Near-Memory Digital Processing Unit for Analog In-Memory Computing
Feb 12, 2024

AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing
May 17, 2023

Benchmarking energy consumption and latency for neuromorphic computing in condensed matter and particle physics
Sep 21, 2022

A Heterogeneous In-Memory Computing Cluster For Flexible End-to-End Inference of Real-World Deep Neural Networks
Jan 04, 2022

AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator
Nov 10, 2021

ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning
Mar 25, 2020

Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses
May 28, 2019

Fatiguing STDP: Learning from Spike-Timing Codes in the Presence of Rate Codes
Jun 17, 2017