Yann N. Dauphin

Neglected Hessian component explains mysteries in Sharpness regularization

Jan 24, 2024

Has the Machine Learning Review Process Become More Arbitrary as the Field Has Grown? The NeurIPS 2021 Consistency Experiment

Jun 05, 2023

SAM operates far from home: eigenvalue regularization as a dynamical phenomenon

Feb 17, 2023

How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions?

Nov 22, 2022

Simple and Effective Noisy Channel Modeling for Neural Machine Translation

Aug 15, 2019

Pay Less Attention with Lightweight and Dynamic Convolutions

Jan 29, 2019

Fixup Initialization: Residual Learning Without Normalization

Jan 27, 2019

mixup: Beyond Empirical Risk Minimization

Apr 27, 2018

Language Modeling with Gated Convolutional Networks

Sep 08, 2017

Convolutional Sequence to Sequence Learning

Jul 25, 2017