Martino Dazzi

EfQAT: An Efficient Framework for Quantization-Aware Training

Nov 17, 2024

A Fully-Integrated 5mW, 0.8Gbps Energy-Efficient Chip-to-Chip Data Link for Ultra-Low-Power IoT End-Nodes in 65-nm CMOS

Sep 05, 2021

Compiling Neural Networks for a Computational Memory Accelerator

Mar 05, 2020

5 Parallel Prism: A topology for pipelined implementations of convolutional neural networks using computational memory

Jun 08, 2019