Abstract: Search engines have traditionally served as primary tools for information seeking. However, Large Language Models (LLMs) have recently demonstrated remarkable capabilities across multiple tasks and, in particular, their adoption as question answering systems is becoming increasingly prevalent. LLM-based conversational systems and traditional web search engines are expected to coexist in the future, supporting end users in various ways, but more scientific research is needed on the effectiveness of both types of systems in facilitating accurate information seeking. In this study, we focus on their merits in answering health questions. We conducted an extensive study comparing different web search engines, LLMs and retrieval-augmented generation (RAG) approaches. Our research reveals intriguing conclusions. For example, we observed that the quality of webpages potentially responding to a health question does not decline as we navigate further down the ranked lists. However, according to our evaluation, web engines are less accurate than LLMs at finding correct answers to health questions. On the other hand, LLMs are quite sensitive to the input prompts, and we also found that RAG leads to highly effective information seeking methods.
Abstract: Computational methods for depression detection aim to mine traces of depression from online publications posted by Internet users. However, solutions trained on existing collections exhibit limited generalisation and interpretability. To tackle these issues, recent studies have shown that identifying depressive symptoms can lead to more robust models. The eRisk initiative fosters research in this area and has recently proposed a new ranking task focused on developing search methods to find sentences related to depressive symptoms. This search challenge relies on the symptoms specified by the Beck Depression Inventory-II (BDI-II), a questionnaire widely used in clinical practice. Based on the participant systems' results, we present the DepreSym dataset, consisting of 21,580 sentences annotated according to their relevance to the 21 BDI-II symptoms. The labelled sentences come from a pool of diverse ranking methods, and the final dataset serves as a valuable resource for advancing the development of models that incorporate depressive markers such as clinical symptoms. Given the complex nature of this relevance annotation, we designed a robust assessment methodology carried out by three expert assessors (including an expert psychologist). Additionally, we explore the feasibility of employing recent Large Language Models (ChatGPT and GPT-4) as potential assessors for this complex task. We undertake a comprehensive examination of their performance, determine their main limitations and analyse their role as a complement to or replacement for human annotators.
Abstract: The main goal of this work is to analyse the behaviour of the FA quantifier fuzzification mechanism. As we prove in the paper, this model exhibits very solid theoretical behaviour, superior to that of most models defined in the literature. Moreover, we show that the underlying probabilistic interpretation has very interesting consequences.