
Dmytro Okhonko

LegoNN: Building Modular Encoder-Decoder Models
Jun 07, 2022

CM3: A Causal Masked Multimodal Model of the Internet
Jan 19, 2022

The Web Is Your Oyster -- Knowledge-Intensive NLP against a Very Large Web Corpus
Dec 18, 2021

CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training
Oct 14, 2021

VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding
Oct 01, 2021

HTLM: Hyper-Text Pre-Training and Prompting of Language Models
Jul 14, 2021

NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned
Jan 01, 2021

Unified Open-Domain Question Answering with Structured and Unstructured Knowledge
Dec 29, 2020

fairseq S2T: Fast Speech-to-Text Modeling with fairseq
Oct 11, 2020

Training ASR models by Generation of Contextual Information
Oct 27, 2019