
Niladri S. Chatterji

Deep Linear Networks can Benignly Overfit when Shallow Ones Do

Sep 19, 2022

Undersampling is a Minimax Optimal Robustness Intervention in Nonparametric Classification

May 26, 2022

Random Feature Amplification: Feature Learning and Generalization in Neural Networks

Feb 15, 2022

Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data

Feb 11, 2022

Is Importance Weighting Incompatible with Interpolating Classifiers?

Dec 24, 2021

Foolish Crowds Support Benign Overfitting

Oct 08, 2021

The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks

Aug 25, 2021

On the Theory of Reinforcement Learning with Once-per-Episode Feedback

Jun 07, 2021

When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?

Feb 09, 2021

When does gradient descent with logistic loss find interpolating two-layer networks?

Dec 04, 2020