Abstract: Entity Linking (EL) seeks to align entity mentions in text to entries in a knowledge base and usually comprises two phases: candidate generation and candidate ranking. While most methods focus on the latter, it is the candidate generation phase that sets an upper bound on both the time and accuracy performance of the overall EL system. This work contributes a significant improvement in candidate generation, raising the performance ceiling for EL by generating candidate sets that include the gold entity among the fewest candidates (top-K). We propose a simple approach that efficiently embeds mention-entity pairs in a dense space through a BERT-based bi-encoder. Specifically, we extend the model of Wu et al. (2020) by introducing a new pooling function and incorporating entity-type side-information. We achieve a new state-of-the-art 84.28% accuracy on top-50 candidates on the Zeshel dataset, compared to the previous 82.06% on top-64 candidates of Wu et al. (2020). We report results from extensive experimentation with our proposed model on datasets of both seen and unseen entities. Our results suggest that our method could be a useful complement to existing EL approaches.
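
To make the candidate-generation pipeline concrete, the following is a minimal sketch of a BERT-based bi-encoder: mention context and entity descriptions are embedded separately, scored by dot product, and the top-K entities are kept as candidates. The mean pooling, the [M]/[TYPE=...] marker conventions, and all names here are illustrative assumptions, not the paper's exact design.

```python
# Minimal bi-encoder sketch for EL candidate generation (assumptions noted inline).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Encode texts and mean-pool token embeddings (assumed pooling function)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)      # (B, T, 1)
    return (out * mask).sum(1) / mask.sum(1)          # average over real tokens only

# Mention side: context with the mention span marked; entity side: description
# prefixed with its type as side-information (a hypothetical input convention).
mention = "The [M] Jaguar [/M] accelerated down the track."
entities = [
    "[TYPE=animal] Jaguar: a large cat native to the Americas.",
    "[TYPE=company] Jaguar Cars: a British automobile manufacturer.",
]

scores = embed([mention]) @ embed(entities).T         # dot-product similarity, (1, 2)
top_k = scores.topk(k=2).indices                      # indices of top-K candidates
print(top_k)
```

In a trained system the two sides would typically share or fine-tune separate BERT encoders, and entity embeddings would be precomputed so that top-K retrieval reduces to a fast nearest-neighbor search.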
Abstract: Distantly-supervised relation extraction (RE) is an effective method for scaling RE to large corpora but suffers from noisy labels. Existing approaches try to alleviate the noise through multi-instance learning and by providing additional information, but they mainly recognize the most frequent relations, neglecting those in the long tail. We propose REDSandT (Relation Extraction with Distant Supervision and Transformers), a novel distantly-supervised transformer-based RE method that captures a wider set of relations through highly informative instance and label embeddings, exploiting BERT's pre-trained model and the relationship between labels and entities, respectively. We guide REDSandT to focus solely on relational tokens by fine-tuning BERT on a structured input that includes the sub-tree connecting an entity pair and the entities' types. From the extracted informative vectors we shape label embeddings, which we also use as an attention mechanism over instances to further reduce noise. Finally, we represent sentences by concatenating relation and instance embeddings. Experiments on the NYT-10 dataset show that REDSandT captures a broader set of relations with higher confidence, achieving a state-of-the-art AUC of 0.424.
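
To illustrate the attention step, below is a minimal sketch of label-embedding attention over a bag of instances: a learned relation embedding scores each instance embedding, and the softmax-weighted bag vector is concatenated with the relation embedding for classification. The dimensions, the dot-product scoring form, and all variable names are illustrative assumptions; REDSandT's exact architecture may differ.

```python
# Sketch of label-embedding attention over a multi-instance bag (assumptions inline).
import torch
import torch.nn.functional as F

hidden, n_relations, bag_size = 768, 53, 4   # NYT-10 defines 53 relation labels

# Instance embeddings for one entity-pair bag (e.g. produced by fine-tuned BERT)
# and learned label (relation) embeddings; random here for illustration.
instances = torch.randn(bag_size, hidden)
label_emb = torch.nn.Embedding(n_relations, hidden)

rel = label_emb(torch.tensor(0))                  # candidate relation vector, (H,)
attn = F.softmax(instances @ rel, dim=0)          # attention weights over instances
bag_vec = attn @ instances                        # noise-reduced bag embedding, (H,)

# Final representation: concatenation of relation and instance (bag) embeddings,
# followed by a classification layer over the relation set.
rep = torch.cat([rel, bag_vec])                   # (2 * H,)
logits = torch.nn.Linear(2 * hidden, n_relations)(rep)
print(F.softmax(logits, dim=-1).shape)            # distribution over 53 relations
```

The design intuition is that instances whose embeddings align with the candidate relation's embedding receive higher attention weight, so noisy sentences that mention the entity pair without expressing the relation contribute less to the bag representation.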