Abstract: The conversational search task aims to enable a user to resolve information needs via natural language dialogue with an agent. In this paper, we develop a conceptual framework of the actions and intents of users and agents, explaining how these actions enable the user to explore the search space and resolve their information need. We outline the different actions and intents before discussing key decision points in the conversation where the agent must decide how to steer the conversational search process to a successful and/or satisfactory conclusion. Essentially, this paper provides a conceptualization of the conversational search process between an agent and a user, offering a framework and a starting point for the research, development and evaluation of conversational search agents.
Abstract: The effectiveness of an IR system is gauged not only by its ability to retrieve relevant results but also by how it presents those results to users; an engaging presentation often correlates with increased user satisfaction. While existing research has examined the links between user satisfaction, IR performance metrics, and presentation, these aspects have typically been investigated in isolation. Our research aims to bridge this gap by examining the relationship between query performance, presentation, and user satisfaction. For our analysis, we conducted a between-subjects experiment comparing the effectiveness of various result card layouts for an ad-hoc news search interface. Drawing data from the TREC WaPo 2018 collection, we centered our study on four topics. Within each topic, we assessed six distinct queries with varying nDCG values. Our study involved 164 participants, each exposed to one of five distinct layouts of result cards, such as ``title'', ``title+image'', or ``title+image+summary''. Our findings indicate that, while nDCG is a strong predictor of user satisfaction at the query level, there is no linear relationship between query performance, result presentation, and user satisfaction. However, when considering the total gain on the initial result page, we observed that presentation does play a significant role in user satisfaction (at the query level) for certain result card layouts, such as title+image or title+image+summary. Our results also suggest that layout differences have complex and multifaceted impacts on satisfaction. We demonstrate the capacity to equalize user satisfaction levels between queries of varying performance by changing how results are presented. This emphasizes the need to harmonize both performance and presentation in IR systems, considering users' diverse preferences.
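For context, nDCG at a rank cutoff $k$ normalises the discounted cumulative gain of the observed ranking by that of the ideal ranking; the log-based discount below is the standard formulation, while the cutoff $k=10$ (one result page) is an assumption for illustration rather than a detail stated in the abstract:
\[
\mathrm{DCG@}k = \sum_{i=1}^{k} \frac{g_i}{\log_2(i+1)}, \qquad
\mathrm{nDCG@}k = \frac{\mathrm{DCG@}k}{\mathrm{IDCG@}k},
\]
where $g_i$ is the graded relevance of the result at rank $i$ and $\mathrm{IDCG@}k$ is the DCG of the ideal (relevance-sorted) ranking. The ``total gain on the initial result page'' referred to above is then the undiscounted sum $\sum_{i=1}^{k} g_i$ over the results shown on that page.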
Abstract: The Probability Ranking Principle (PRP) ranks search results by their expected utility derived solely from document contents, overlooking the nuances of presentation and user interaction. However, with the evolution of Search Engine Result Pages (SERPs), which now comprise a variety of result cards, the manner in which results are presented is pivotal in influencing user engagement and satisfaction. This shift prompts the question: how do the PRP and its user-centric counterpart, the Interactive Probability Ranking Principle (iPRP), compare in the context of these heterogeneous SERPs? Our study draws a comparison between the PRP and the iPRP, revealing significant differences in their output. The iPRP, which accounts for item-specific costs and interaction probabilities to determine the ``Expected Perceived Utility'' (EPU), yields different result orderings than the PRP. We evaluate the effect of the EPU on the ordering of results by observing changes in the ranking within a heterogeneous SERP compared to the traditional ``ten blue links''. We find that changing the presentation affects the ranking of items according to the iPRP by up to 48\% (with respect to DCG, TBG and RBO) in ad-hoc search tasks on the TREC WaPo Collection. This work suggests that the iPRP should be employed when ranking heterogeneous SERPs, providing a user-centric ranking that adapts the ordering based on presentation and user engagement.
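As a rough sketch of how such a presentation-aware score departs from the PRP's relevance-only ordering, the EPU of a result $d$ shown as card type $c$ can be thought of as weighing the benefit of an interaction by its probability and subtracting the cost the card imposes on the user; the exact iPRP formulation is not reproduced in the abstract, so the form below is an illustrative simplification rather than the formula used in the paper:
\[
\mathrm{EPU}(d, c) \approx P(\mathrm{interact} \mid d, c)\cdot b(d) - \mathrm{cost}(c),
\]
where $b(d)$ is the benefit of interacting with result $d$ and $\mathrm{cost}(c)$ is the effort of examining card type $c$. Because both $P(\mathrm{interact} \mid d, c)$ and $\mathrm{cost}(c)$ vary with the card layout, two results of equal estimated relevance can swap positions once presentation is taken into account, which is the divergence from the PRP that the rank-change measurements above quantify.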