Amanda Olmin

Towards understanding epoch-wise double descent in two-layer linear neural networks

Jul 13, 2024

On the connection between Noise-Contrastive Estimation and Contrastive Divergence

Feb 26, 2024

Active Learning with Weak Labels for Gaussian Processes

Apr 18, 2022

Robustness and reliability when training with noisy labels

Oct 07, 2021

A general framework for ensemble distribution distillation

Feb 26, 2020