Junji Tomita

Abstractive Summarization with Combination of Pre-trained Sequence-to-Sequence and Saliency Models
Mar 29, 2020

Length-controllable Abstractive Summarization by Guiding with Summary Prototype
Jan 21, 2020

Unsupervised Domain Adaptation of Language Models for Reading Comprehension
Nov 25, 2019

A Simple but Effective Method to Incorporate Multi-turn Context with BERT for Conversational Machine Comprehension
May 30, 2019

Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction
May 29, 2019

Multi-style Generative Reading Comprehension
Jan 08, 2019

Retrieve-and-Read: Multi-task Learning of Information Retrieval and Reading Comprehension
Aug 31, 2018