Abstract: The goal of dialogue relation extraction (DRE) is to identify the relation between two entities in a given dialogue. During conversations, speakers may expose their relations to certain entities through clues; such evidence is called a "trigger". However, no existing work on DRE has tried to detect triggers and leverage this information to enhance performance. This paper proposes TREND, a multi-task BERT-based model that learns to identify triggers for improving relation extraction. The experimental results show that the proposed method achieves state-of-the-art performance on the benchmark datasets.
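As a rough illustration of the multi-task setup described above, the following sketch pairs a shared BERT encoder with a token-level trigger-identification head and a dialogue-level relation-classification head trained jointly. This is a minimal sketch under assumed design choices: the class and head names (`TrendSketch`, `trigger_head`, `relation_head`), the pooled-output relation classifier, and the loss weighting are illustrative assumptions, not the paper's exact architecture.

```python
import torch.nn as nn
from transformers import BertModel

class TrendSketch(nn.Module):
    """Illustrative multi-task model: a shared BERT encoder feeding
    (1) a per-token trigger-identification head and
    (2) a per-dialogue relation-classification head.
    Names and loss weighting are assumptions, not the authors' implementation."""

    def __init__(self, num_relations: int, trigger_loss_weight: float = 0.5):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.encoder.config.hidden_size
        self.trigger_head = nn.Linear(hidden, 2)                # token: trigger / not trigger
        self.relation_head = nn.Linear(hidden, num_relations)   # dialogue-level relation label
        self.trigger_loss_weight = trigger_loss_weight

    def forward(self, input_ids, attention_mask,
                trigger_labels=None, relation_labels=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        trigger_logits = self.trigger_head(out.last_hidden_state)   # (batch, seq_len, 2)
        relation_logits = self.relation_head(out.pooler_output)     # (batch, num_relations)

        loss = None
        if trigger_labels is not None and relation_labels is not None:
            ce = nn.CrossEntropyLoss(ignore_index=-100)
            trigger_loss = ce(trigger_logits.view(-1, 2), trigger_labels.view(-1))
            relation_loss = ce(relation_logits, relation_labels)
            # Joint objective: relation extraction plus weighted trigger identification.
            loss = relation_loss + self.trigger_loss_weight * trigger_loss
        return loss, trigger_logits, relation_logits
```

In this kind of setup, the auxiliary trigger-tagging objective encourages the shared encoder to attend to the textual clues that signal a relation, which is the intuition behind using triggers to improve relation extraction.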