Ivan Montero

University of Washington

How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers

Nov 07, 2022

Sentence Bottleneck Autoencoders from Transformer Language Models

Aug 31, 2021

Pivot Through English: Reliably Answering Multilingual Questions without Document Retrieval

Dec 28, 2020

Plug and Play Autoencoders for Conditional Text Generation

Oct 12, 2020