Abstract: This study explores whether labeling AI as "trustworthy" or "reliable" influences user perceptions and acceptance of automotive AI technologies. Using a one-way between-subjects design, the research involved 478 online participants who were presented with guidelines for either trustworthy or reliable AI. Participants then evaluated three vignette scenarios and completed a modified version of the Technology Acceptance Model, which included variables such as perceived ease of use, human-like trust, and overall attitude. Although labeling AI as "trustworthy" did not significantly influence judgments on specific scenarios, it increased perceived ease of use and human-like trust, particularly benevolence. This suggests a positive impact on usability and an anthropomorphic effect on user perceptions. The study provides insights into how specific labels can influence attitudes toward AI technology.
Abstract: We present Reasons For and Against Vaccination (RFAV), a dataset for predicting reasons for and against vaccination, and the scientific authorities used to justify them, annotated through nichesourcing and augmented using GPT-4 and GPT-3.5-Turbo. We show that these reasons can be mined from unstructured text under different task definitions, despite the high level of subjectivity involved, and we explore the impact of artificially augmented data generated via in-context learning with GPT-4 and GPT-3.5-Turbo. We publish the dataset and the trained models, along with the annotation manual used to train annotators and define the task.
Abstract: Consciousness has historically been a heavily debated topic in engineering, science, and philosophy. By contrast, awareness has attracted far less scholarly interest in the past. However, this is changing as more and more researchers seek to answer questions about what awareness is and how it can be artificially generated. The landscape is rapidly evolving, with multiple voices and interpretations of the concept being conceived and techniques being developed. The goal of this paper is to summarize and discuss those voices connected with projects funded by the EIC Pathfinder Challenge ``Awareness Inside'', a nonrecurring call for proposals within Horizon Europe designed specifically to foster research on natural and synthetic awareness. In this perspective, we dedicate special attention to the challenges and promises of applying synthetic awareness in robotics, as the development of mature techniques in this new field is expected to have a particular impact on generating more capable and trustworthy embodied systems.