Nghia The Pham

DynE: Dynamic Ensemble Decoding for Multi-Document Summarization

Jun 15, 2020

A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal

May 20, 2020

Living a discrete life in a continuous world: Reference with distributed representations

Sep 04, 2017

Towards Multi-Agent Communication-Based Language Learning

May 23, 2016

The red one!: On learning to refer to things based on their discriminative properties

May 23, 2016

Combining Language and Vision with a Multimodal Skip-gram Model

Mar 12, 2015