Abstract:Rising numbers of animal abuse cases are being reported around the world. While chatbots have proven effective in influencing their users' perceptions and behaviors, little if any research has hitherto explored the design of chatbots that embody animal identities for the purpose of eliciting empathy toward animals. We therefore conducted a mixed-methods experiment to investigate how specific design cues in such chatbots shape users' perceptions of both the chatbots' identities and the animals they represent. Our findings indicate that such chatbots can significantly increase empathy, improve attitudes, and promote prosocial behavioral intentions toward animals, particularly when they incorporate emotional verbal expressions and authentic details of the represented animals' lives. These results expand our understanding of chatbots with non-human identities and highlight their potential for use in conservation initiatives, suggesting a promising avenue whereby technology could foster a more informed and empathetic society.
Abstract:As artificial intelligence (AI) advances, human-AI collaboration has become increasingly prevalent across both professional and everyday settings. In such collaboration, AI can express its confidence level in its own performance, serving as a crucial indicator for humans evaluating its suggestions. However, AI may exhibit overconfidence or underconfidence, expressing confidence higher or lower than its actual performance warrants, which may lead humans to misjudge AI advice. Our study investigates the influence of AI overconfidence and underconfidence on human trust, the acceptance of AI suggestions, and collaboration outcomes. Our study reveals that disclosing AI confidence levels and performance feedback facilitates better recognition of AI confidence misalignments. However, participants tended to withdraw their trust upon perceiving such misalignments, leading them to reject AI suggestions and subsequently perform worse in collaborative tasks. Conversely, without such information, participants struggled to identify misalignments, resulting in either the neglect of correct AI advice or the adoption of incorrect AI suggestions, adversely affecting collaboration. This study offers valuable insights for enhancing human-AI collaboration by underscoring the importance of aligning AI's expressed confidence with its actual performance and the necessity of calibrating human trust in AI confidence.
Abstract:In this paper, optimization-based alignment (OBA) methods are investigated, with a main focus on the vector-observation construction procedures for the strapdown inertial navigation system (SINS). The contributions of this study are twofold. First, the OBA method is extended to estimate gyroscope biases coupled with the attitude, building on the construction process of the existing OBA methods. This extension transforms the initial alignment into an attitude estimation problem that can be solved using nonlinear filtering algorithms. The second contribution is a comprehensive evaluation of the OBA methods and their extensions with different vector-observation construction procedures, in terms of convergence speed and steady-state estimation accuracy, using field-test data collected from different grades of SINS. This study is expected to facilitate the selection of appropriate OBA methods for SINS of different grades.