Abstract: Recent advances in training large language models (LLMs) on massive amounts of purely textual data have led to strong generalization across many domains and tasks, including document-specific tasks. In contrast, there is a trend towards multi-modal transformer architectures tailored for document understanding, designed specifically to fuse textual inputs with the corresponding document layout. This involves a separate fine-tuning step for which additional training data is required. At present, no document transformers with generalization comparable to LLMs are available. That raises the question of which type of model is preferable for document understanding tasks. In this paper we investigate the possibility of using purely text-based LLMs for document-specific tasks through layout enrichment. We explore drop-in modifications and rule-based methods to enrich purely textual LLM prompts with layout information. In our experiments we investigate the effects on the commercial ChatGPT model and the open-source LLM Solar. We demonstrate that with our approach both LLMs show improved performance on various standard document benchmarks. In addition, we study the impact of noisy OCR and layout errors, as well as the limitations of LLMs when it comes to utilizing document layout. Our results indicate that layout enrichment can improve the performance of purely text-based LLMs for document understanding by up to 15% compared to using plain document text alone. In conclusion, this approach should be taken into account when choosing between a text-based LLM and a multi-modal document transformer.
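To make the idea of rule-based layout enrichment concrete, the following Python sketch renders OCR tokens with bounding boxes onto a character grid, so that a purely textual prompt approximates the document's spatial structure. The token format, grid resolution, and function name are illustrative assumptions, not the abstract's exact method.

# A minimal sketch, assuming OCR output as (text, x0, y0, x1, y1) tuples
# in page coordinates. Horizontal and vertical character positions are
# derived from the bounding boxes so that gaps in the plain text mimic
# the original layout.

def render_layout(tokens, page_width, page_height, cols=100, rows=50):
    """Place each OCR token on a character grid according to its bounding box."""
    grid = [[" "] * cols for _ in range(rows)]
    for text, x0, y0, x1, y1 in tokens:
        row = min(rows - 1, int(y0 / page_height * rows))
        col = min(cols - 1, int(x0 / page_width * cols))
        for i, ch in enumerate(text):
            if col + i < cols:
                grid[row][col + i] = ch
    # Drop empty rows and trailing whitespace so the prompt stays compact.
    return "\n".join("".join(r).rstrip() for r in grid if "".join(r).strip())

# Example: a header row with two cells, followed by a body line.
tokens = [
    ("Invoice", 50, 40, 140, 60),
    ("2024-05-01", 600, 40, 720, 60),
    ("Total: $42.00", 50, 120, 210, 140),
]
print(render_layout(tokens, page_width=800, page_height=1000))

The rendered string can then be placed directly into an LLM prompt in place of the flat OCR text, which is the drop-in character of the approach.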
Abstract: Recent approaches to passage retrieval have successfully employed representations from pretrained language models (LMs) with large effectiveness gains. However, due to their high computational cost, those approaches are usually limited to re-ranking scenarios. The candidates in such a scenario are typically retrieved by scalable bag-of-words retrieval models such as BM25. Although BM25 has proven to be a decent first-stage ranker, it tends to miss relevant passages. In this context we propose CoRT, a framework and neural first-stage ranking model that leverages contextual representations from transformer-based language models to complement candidates from term-based ranking functions while causing no significant delay. Using the MS MARCO dataset, we show that CoRT significantly increases first-stage ranking quality and recall by complementing BM25 with missing candidates. Consequently, we found that subsequent re-rankers achieve superior results while requiring fewer candidates to saturate ranking quality. Finally, we demonstrate that with CoRT a representation-focused retrieval at web scale can be realized with latencies as low as those of BM25.
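The core idea of complementing a term-based candidate list with neural candidates can be illustrated with a short Python sketch. The merging strategy shown here (alternating interleave with de-duplication) is an assumption for illustration; the abstract does not specify CoRT's exact merging scheme.

# A minimal sketch, assuming bm25_ids and dense_ids are ranked lists of
# passage ids from BM25 and from a nearest-neighbour search over contextual
# embeddings, respectively. Passages BM25 misses still reach the re-ranker.

from itertools import zip_longest

def merge_candidates(bm25_ids, dense_ids, k=1000):
    """Interleave two ranked candidate lists, dropping duplicates, up to k ids."""
    seen, merged = set(), []
    for a, b in zip_longest(bm25_ids, dense_ids):
        for pid in (a, b):
            if pid is not None and pid not in seen:
                seen.add(pid)
                merged.append(pid)
            if len(merged) == k:
                return merged
    return merged

# Example: the dense retriever contributes passage 42, which BM25 missed.
print(merge_candidates([7, 3, 9], [7, 42, 3], k=5))  # -> [7, 3, 42, 9]

Because both lists can be produced in parallel and the merge is linear in the candidate count, such a scheme adds no significant delay over BM25 alone, which matches the latency claim in the abstract.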