
Holger Fröning

Variance-Aware Noisy Training: Hardening DNNs against Unstable Analog Computations

Mar 20, 2025

On Hardening DNNs against Noisy Computations

Jan 24, 2025

Function Space Diversity for Uncertainty Prediction via Repulsive Last-Layer Ensembles

Dec 20, 2024

Less Memory Means smaller GPUs: Backpropagation with Compressed Activations

Sep 18, 2024

DeepHYDRA: Resource-Efficient Time-Series Anomaly Detection in Dynamically-Configured Systems

May 13, 2024

Implications of Noise in Resistive Memory on Deep Neural Networks for Image Classification

Jan 11, 2024

Compressing the Backward Pass of Large-Scale Neural Architectures by Structured Activation Pruning

Nov 29, 2023

On the Non-Associativity of Analog Computations

Sep 25, 2023

Reducing Memory Requirements for the IPU using Butterfly Factorizations

Sep 16, 2023

Walking Noise: Understanding Implications of Noisy Computations on Classification Tasks

Dec 20, 2022