Abstract: Document Understanding is an evolving field in Natural Language Processing (NLP). In particular, visual and spatial features are essential in addition to the raw text itself, and several multimodal models have therefore been developed in the field of Visual Document Understanding (VDU). However, while research has mainly focused on Key Information Extraction (KIE), Relation Extraction (RE) between identified entities is still under-studied. For instance, RE is crucial for regrouping entities or obtaining a comprehensive hierarchy of the data in a document. In this paper, we present a model that, initialized from LayoutLMv3, can match or outperform the current state-of-the-art results in RE applied to Visually-Rich Documents (VRD) on the FUNSD and CORD datasets, without any specific pre-training and with fewer parameters. We also report an extensive ablation study performed on FUNSD, highlighting the large impact of certain features and modeling choices on performance.
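The abstract does not detail the relation-extraction head itself; a minimal sketch is given below, assuming a common design in which entity representations pooled from a LayoutLMv3-style encoder are scored pairwise with a bilinear classifier. The class name, projection size, and two-way (link / no-link) output are illustrative assumptions, not the paper's described architecture.

```python
# Hypothetical sketch of a relation-extraction head over entity embeddings.
# The abstract does not specify the architecture; a bilinear scorer over
# head/tail entity representations is assumed here for illustration only.
import torch
import torch.nn as nn


class RelationHead(nn.Module):
    """Scores every ordered (head, tail) entity pair for link / no-link."""

    def __init__(self, hidden_size: int = 768, proj_size: int = 256):
        super().__init__()
        self.head_proj = nn.Linear(hidden_size, proj_size)
        self.tail_proj = nn.Linear(hidden_size, proj_size)
        # Bilinear scorer producing 2 logits per pair: {no relation, relation}.
        self.bilinear = nn.Bilinear(proj_size, proj_size, 2)

    def forward(self, entity_embeddings: torch.Tensor) -> torch.Tensor:
        # entity_embeddings: (num_entities, hidden_size), e.g. pooled
        # LayoutLMv3 token embeddings for each detected entity.
        n = entity_embeddings.size(0)
        heads = torch.relu(self.head_proj(entity_embeddings))
        tails = torch.relu(self.tail_proj(entity_embeddings))
        # Expand to all ordered pairs (i, j) and score them jointly.
        h = heads.unsqueeze(1).expand(n, n, -1).reshape(n * n, -1)
        t = tails.unsqueeze(0).expand(n, n, -1).reshape(n * n, -1)
        return self.bilinear(h, t).view(n, n, 2)  # pairwise relation logits


if __name__ == "__main__":
    head = RelationHead()
    fake_entities = torch.randn(5, 768)  # 5 entities from one document
    print(head(fake_entities).shape)     # torch.Size([5, 5, 2])
```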
Abstract: Advances in the Visually-rich Document Understanding (VrDU) field, and particularly in the Key Information Extraction (KIE) task, are marked by the emergence of efficient Transformer-based approaches such as the LayoutLM models. Despite the good performance of KIE models when fine-tuned on public benchmarks, they still struggle to generalize on complex real-life use-cases that lack sufficient document annotations. Our research highlights that standard KIE benchmarks such as SROIE and FUNSD contain significant similarity between training and testing documents and can be adjusted to better evaluate the generalization of models. In this work, we designed experiments to quantify the information redundancy in public benchmarks, revealing 75% template replication in the SROIE official test set and 16% in FUNSD. We also proposed resampling strategies to provide benchmarks that are more representative of the generalization ability of models. We showed that models not suited for document analysis struggle on the adjusted splits, dropping on average 10.5% F1 score on SROIE and 3.5% on FUNSD, whereas multi-modal models drop only 7.5% F1 on SROIE and 0.5% F1 on FUNSD.
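The abstract does not state how template replication is measured; the sketch below illustrates one plausible approach, flagging a test document as a replicated template when its token-shingle Jaccard similarity with any training document exceeds a threshold. The function names, shingle size, and threshold are assumptions for demonstration, not the paper's actual procedure.

```python
# Illustrative (not the paper's exact method) estimate of train/test template
# replication via token-shingle Jaccard similarity. All thresholds and names
# here are assumptions for demonstration purposes.
from typing import Iterable, List, Set, Tuple


def shingles(tokens: Iterable[str], k: int = 3) -> Set[Tuple[str, ...]]:
    """Build the set of k-token shingles for one document's text."""
    toks = [t.lower() for t in tokens]
    return {tuple(toks[i:i + k]) for i in range(max(len(toks) - k + 1, 1))}


def jaccard(a: Set[Tuple[str, ...]], b: Set[Tuple[str, ...]]) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0


def template_replication_rate(
    train_docs: List[List[str]],
    test_docs: List[List[str]],
    threshold: float = 0.8,  # assumed similarity cut-off
) -> float:
    """Fraction of test documents closely matching at least one training doc."""
    train_shingles = [shingles(doc) for doc in train_docs]
    duplicated = sum(
        1
        for doc in test_docs
        if any(jaccard(shingles(doc), ts) >= threshold for ts in train_shingles)
    )
    return duplicated / len(test_docs) if test_docs else 0.0


if __name__ == "__main__":
    train = [["invoice", "total", "amount", "due", "tax"]]
    test = [["invoice", "total", "amount", "due", "vat"],
            ["menu", "burger", "fries", "soda", "total"]]
    print(template_replication_rate(train, test, threshold=0.3))  # 0.5
```

A resampling strategy in this spirit would then move or remove flagged near-duplicates so that the test split no longer shares templates with training documents.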