Abstract: Inner speech is an essential but elusive human psychological process that refers to the everyday covert conversation people hold with themselves. We argue that equipping a robot with an overt self-talk system, which simulates human inner speech, may enhance human trust by improving the robot's transparency and anthropomorphism. For these reasons, this work investigates whether a robot's inner speech, here rendered as overt self-talk, affects human trust and perceived anthropomorphism when human and robot cooperate. A group of participants collaborated with the robot, and during the cooperation the robot talked to itself. To evaluate whether the robot's inner speech influences human trust, two questionnaires were administered to each participant before (pre-test) and after (post-test) the cooperative session with the robot. Preliminary results showed differences between participants' answers in the pre-test and post-test assessments, suggesting that the robot's inner speech influences human trust: participants' levels of trust and their perception of the robot's anthropomorphic features increased after the experimental interaction with the robot.
Abstract: In this paper we discuss some of the issues concerning the Memory and Content aspects of the recent debate on the identification of a Standard Model of the Mind (Laird, Lebiere, and Rosenbloom, in press). In particular, we focus on the representational models of the Declarative Memories of current Cognitive Architectures (CAs). In doing so, we outline some of the main problems affecting current CAs and suggest that Conceptual Spaces, a representational framework developed by Gärdenfors, is worth considering as a way to address these problems. Finally, we briefly analyze the alternative representational assumptions employed in the three CAs constituting the current baseline for the Standard Model (i.e., SOAR, ACT-R, and Sigma). In doing so, we point out their respective differences and discuss the implications in light of the problems analyzed.
Abstract: During the last decades, many cognitive architectures (CAs) have been realized adopting different assumptions about the organization and representation of their knowledge level. Some of them (e.g. SOAR [Laird (2012)]) adopt a classical symbolic approach, some (e.g. LEABRA [O'Reilly and Munakata (2000)]) are based on a purely connectionist model, while others (e.g. CLARION [Sun (2006)]) adopt a hybrid approach combining connectionist and symbolic representational levels. Additionally, there have been attempts (e.g. biSOAR) to extend the representational capacities of CAs by integrating diagrammatic representations and reasoning [Kurup and Chandrasekaran (2007)]. In this paper we reflect on the role that Conceptual Spaces, a framework developed by Peter Gärdenfors [Gärdenfors (2000)] more than fifteen years ago, can play in the current development of the Knowledge Level in cognitive systems and architectures. In particular, we claim that Conceptual Spaces offer a lingua franca that allows many aspects of the symbolic, sub-symbolic, and diagrammatic approaches to be unified and generalized (overcoming some of their typical problems) and integrated on common ground. In doing so, we extend and detail some of the arguments explored by Gärdenfors [Gärdenfors (1997)] in defense of the need for a conceptual, intermediate representational level between the symbolic and the sub-symbolic ones.
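The unifying claim above can be made concrete with a minimal sketch, not drawn from the cited papers: in a conceptual space, concepts are convex regions around prototype points in a space of quality dimensions, so a sub-symbolic input (a feature vector) acquires a symbolic label by geometric proximity. The quality dimensions, prototype values, and concept names below are invented purely for illustration.

    import numpy as np

    # Hypothetical conceptual space with two quality dimensions,
    # (hue, size), both normalized to [0, 1]. Each concept is
    # identified with a prototype point in this space.
    PROTOTYPES = {
        "cherry": np.array([0.9, 0.1]),   # reddish hue, small size
        "banana": np.array([0.2, 0.4]),   # yellowish hue, medium size
        "melon":  np.array([0.3, 0.9]),   # greenish hue, large size
    }

    def categorize(point):
        # Nearest-prototype classification: this induces a Voronoi
        # tessellation of the space, so each concept corresponds to a
        # convex region, as Gärdenfors' framework requires.
        return min(PROTOTYPES, key=lambda name: np.linalg.norm(point - PROTOTYPES[name]))

    print(categorize(np.array([0.85, 0.2])))  # -> cherry

The sub-symbolic side appears as continuous coordinates, the symbolic side as the discrete labels the geometry induces, which is one way to read the "intermediate representational level" argued for above.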