
Ionut-Teodor Sorodoc

Class-Agnostic Continual Learning of Alternating Languages and Domains

Apr 07, 2020

Recurrent Instance Segmentation using Sequences of Referring Expressions

Nov 05, 2019

What do Entity-Centric Models Learn? Insights from Entity Linking in Multi-Party Dialogue

May 16, 2019

AMORE-UPF at SemEval-2018 Task 4: BiLSTM with Entity Library

May 14, 2018

Comparatives, Quantifiers, Proportions: A Multi-Task Model for the Learning of Quantities from Vision

Apr 13, 2018