Georgia Institute of Technology
Abstract: Many hyper-personalized AI systems profile people's characteristics (e.g., personality traits) to provide personalized recommendations. These systems are increasingly used to facilitate interactions among people, such as providing teammate recommendations. Despite improved accuracy, such systems are not immune to errors when making inferences about people's most personal traits. These errors manifest as AI misrepresentations. However, the repercussions of such AI misrepresentations remain unclear, especially for people's reactions to and perceptions of the AI. We present two studies examining how people react to and perceive the AI after encountering personality misrepresentations in AI-facilitated team matching in a higher education context. Through semi-structured interviews (n=20) and a survey experiment (n=198), we pinpoint how people's existing and newly acquired AI knowledge could shape their perceptions of and reactions to the AI after encountering AI misrepresentations. Specifically, we identified three rationales that people adopted through knowledge acquired from AI (mis)representations: AI works like a machine, like a human, and/or like magic. These rationales are closely connected to people's reactions of over-trusting, rationalizing, and forgiving AI misrepresentations. Finally, we found that people's existing AI knowledge, i.e., AI literacy, could moderate changes in their trust in AI after encountering AI misrepresentations, but not changes in their social perceptions of AI. We discuss the role of people's AI knowledge when facing AI fallibility and the implications for designing responsible mitigation and repair strategies.
Abstract: From navigation systems to smart assistants, we communicate with various AI systems on a daily basis. At the core of such human-AI communication, we convey our understanding of the AI's capability to the AI through utterances of varying complexity, and the AI conveys its understanding of our needs and goals to us through system outputs. However, this communication process is prone to failures for two reasons: the AI might have the wrong understanding of the user, and the user might have the wrong understanding of the AI. To enhance mutual understanding in human-AI communication, we posit the Mutual Theory of Mind (MToM) framework, inspired by the basic human capability of "Theory of Mind." In this paper, we discuss the motivation for the MToM framework and its three key components that continuously shape mutual understanding during the three stages of human-AI communication. We then describe a case study inspired by the MToM framework to demonstrate its power to guide the design and understanding of human-AI communication.