
Holger Fröning

Less Memory Means smaller GPUs: Backpropagation with Compressed Activations

Sep 18, 2024

DeepHYDRA: Resource-Efficient Time-Series Anomaly Detection in Dynamically-Configured Systems

May 13, 2024

Implications of Noise in Resistive Memory on Deep Neural Networks for Image Classification

Jan 11, 2024

Compressing the Backward Pass of Large-Scale Neural Architectures by Structured Activation Pruning

Nov 29, 2023

On the Non-Associativity of Analog Computations

Sep 25, 2023

Reducing Memory Requirements for the IPU using Butterfly Factorizations

Sep 16, 2023

Walking Noise: Understanding Implications of Noisy Computations on Classification Tasks

Dec 20, 2022

Towards Hardware-Specific Automatic Compression of Neural Networks

Dec 15, 2022

HW-Aware Initialization of DNN Auto-Tuning to Improve Exploration Time and Robustness

May 31, 2022

The Programming of Deep Learning Accelerators as a Constraint Satisfaction Problem

Apr 13, 2021