Manuel Le Gallo

Kernel Approximation using Analog In-Memory Computing

Nov 05, 2024

Roadmap to Neuromorphic Computing with Emerging Technologies

Jul 02, 2024

Training of Physical Neural Networks

Jun 05, 2024

A Precision-Optimized Fixed-Point Near-Memory Digital Processing Unit for Analog In-Memory Computing

Feb 12, 2024

Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference

Jul 18, 2023
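
The kit in this paper is the open-source aihwkit. As a minimal sketch of what training with it looks like, assuming the API documented at https://github.com/IBM/aihwkit (AnalogLinear, AnalogSGD, SingleRPUConfig, ConstantStepDevice), not an excerpt from the paper itself:

```python
# Minimal aihwkit training sketch (assumes the publicly documented API;
# see https://github.com/IBM/aihwkit). Trains one analog linear layer
# whose weights live on a simulated resistive crossbar tile.
from torch import Tensor
from torch.nn.functional import mse_loss

from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD
from aihwkit.simulator.configs import SingleRPUConfig, ConstantStepDevice

x = Tensor([[0.1, 0.2, 0.4, 0.3], [0.2, 0.1, 0.1, 0.3]])
y = Tensor([[1.0, 0.5], [0.7, 0.3]])

# The RPU config selects the simulated analog device model for the tile.
model = AnalogLinear(4, 2, rpu_config=SingleRPUConfig(device=ConstantStepDevice()))

opt = AnalogSGD(model.parameters(), lr=0.1)
opt.regroup_param_groups(model)  # lets the optimizer treat analog tiles specially

for _ in range(100):
    opt.zero_grad()
    loss = mse_loss(model(x), y)
    loss.backward()
    opt.step()  # weight update is applied as in-memory pulses on the tile
```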

AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing

May 17, 2023

Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators

Feb 16, 2023

AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator

Nov 10, 2021

A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays

Apr 05, 2021
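
This toolkit is the same aihwkit shown above; it also covers analog inference. A hedged sketch of that path, assuming the documented convert_to_analog, InferenceRPUConfig, and PCMLikeNoiseModel interfaces (parameter values here are illustrative):

```python
# Sketch of noisy inference with aihwkit (assumes the documented API).
# Converts a digital torch model to analog tiles, then evaluates it under
# a PCM-like programming-noise and conductance-drift model.
import torch
from torch import nn

from aihwkit.nn.conversion import convert_to_analog
from aihwkit.simulator.configs import InferenceRPUConfig
from aihwkit.inference import PCMLikeNoiseModel

digital = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

rpu_config = InferenceRPUConfig()
rpu_config.noise_model = PCMLikeNoiseModel(g_max=25.0)  # illustrative value

analog = convert_to_analog(digital, rpu_config)

analog.eval()
analog.program_analog_weights()                   # apply programming noise once
analog.drift_analog_weights(t_inference=3600.0)   # drift state to t = 1 hour

with torch.no_grad():
    out = analog(torch.randn(1, 8))
```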

Robust High-dimensional Memory-augmented Neural Networks

Oct 05, 2020