
Neha S. Wadia

A Gentle Introduction to Gradient-Based Optimization and Variational Inequalities for Machine Learning

Sep 09, 2023

Whitening and second order optimization both destroy information about the dataset, and can make generalization impossible

Aug 25, 2020

Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses

Mar 23, 2020

Numerically Recovering the Critical Points of a Deep Linear Autoencoder

Jan 29, 2019