
Michaela Blott

ACCL+: an FPGA-Based Collective Engine for Distributed Applications

Dec 18, 2023

Post-Training Quantization with Low-precision Minifloats and Integers on FPGAs

Nov 21, 2023

Implementing Neural Network-Based Equalizers in a Coherent Optical Transmission System Using Field-Programmable Gate Arrays

Dec 09, 2022

LL-GNN: Low Latency Graph Neural Networks on FPGAs for Particle Detectors

Oct 11, 2022

Towards FPGA Implementation of Neural Network-Based Nonlinearity Mitigation Equalizers in Coherent Optical Transmission Systems

Jun 24, 2022

Open-source FPGA-ML codesign for the MLPerf Tiny Benchmark

Jun 23, 2022

QONNX: Representing Arbitrary-Precision Quantized Neural Networks

Jun 17, 2022

EcoFlow: Efficient Convolutional Dataflows for Low-Power Neural Network Accelerators

Feb 04, 2022

Applications and Techniques for Fast Machine Learning in Science

Oct 25, 2021

FAT: Training Neural Networks for Reliable Inference Under Hardware Faults

Nov 11, 2020