Peter Hinz

The layer-wise L1 Loss Landscape of Neural Nets is more complex around local minima

May 06, 2021

Using activation histograms to bound the number of affine regions in ReLU feed-forward neural networks

Apr 08, 2021

The Oracle of DLphi

Jan 27, 2019

A Framework for the construction of upper bounds on the number of affine linear regions of ReLU feed-forward neural networks

Aug 03, 2018