Yury Zemlyanskiy

MEMORY-VQ: Compression for Tractable Internet-Scale Memory

Aug 28, 2023

GLIMMER: generalized late-interaction memory reranker

Jun 17, 2023

GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints

May 22, 2023

CoLT5: Faster Long-Range Transformers with Conditional Computation

Mar 17, 2023

Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute

Jan 25, 2023

FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference

Dec 15, 2022

Generate-and-Retrieve: use your predictions to improve retrieval for semantic parsing

Sep 29, 2022

Mention Memory: incorporating textual knowledge into Transformers through entity mention attention

Oct 12, 2021

ReadTwice: Reading Very Large Documents with Memories

May 11, 2021

DOCENT: Learning Self-Supervised Entity Representations from Large Document Collections

Feb 26, 2021