
Alice Martin

TSP, CMAP, IP Paris

The Monte Carlo Transformer: a stochastic self-attention model for sequence prediction

Jul 15, 2020

On Last-Layer Algorithms for Classification: Decoupling Representation from Uncertainty Estimation

Jan 22, 2020