
Michiel de Jong

MEMORY-VQ: Compression for Tractable Internet-Scale Memory

Aug 28, 2023

GLIMMER: generalized late-interaction memory reranker

Jun 17, 2023

GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints

May 22, 2023

CoLT5: Faster Long-Range Transformers with Conditional Computation

Mar 17, 2023

Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute

Jan 25, 2023

FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference

Dec 15, 2022

Generate-and-Retrieve: use your predictions to improve retrieval for semantic parsing

Sep 29, 2022

Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering

Apr 10, 2022

Mention Memory: incorporating textual knowledge into Transformers through entity mention attention

Oct 12, 2021

Grounding Complex Navigational Instructions Using Scene Graphs

Jun 03, 2021