Abstract: The visible orientation of human eyes creates some transparency about people's spatial attention and other mental states. This leads to a dual role for the eyes as a means of sensing and communication. Accordingly, artificial eye models are being explored as communication media in human-machine interaction scenarios. One challenge in using eye models for communication is resolving spatial reference ambiguities, especially for screen-based models. Here, we introduce an approach that overcomes this challenge by adding reflection-like features that are contingent on artificial eye movements. We conducted a user study with 30 participants, who had to use spatial references provided by dynamic eye models to advance in a fast-paced group interaction task. Compared to a non-reflective eye model and a pure reflection mode, their combination in the new approach resulted in higher identification accuracy and a better user experience, suggesting a synergistic benefit.
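To make the idea of movement-contingent, reflection-like features concrete, the following is a minimal sketch, not the authors' implementation: it assumes a screen-based eye rendered from gaze angles (yaw, pitch) and places a specular-style highlight whose position within the iris shifts as the eye rotates toward a target. All names, the gain parameter, and the sign conventions are illustrative assumptions.

```python
# Hypothetical sketch: a reflection-like highlight that moves with artificial
# eye rotation, so observers get an extra cue for resolving spatial references.

import math
from dataclasses import dataclass

@dataclass
class EyeState:
    yaw: float    # horizontal gaze angle in radians (positive = looking right)
    pitch: float  # vertical gaze angle in radians (positive = looking up)

def highlight_offset(eye: EyeState, iris_radius: float, gain: float = 0.6) -> tuple[float, float]:
    """Return the (dx, dy) offset of the highlight inside the iris.

    The highlight drifts opposite to the gaze direction, mimicking how a fixed
    light source would reflect on a rotating eyeball. The gain value is an
    assumption and would need tuning against a real rendering setup.
    """
    dx = -gain * iris_radius * math.sin(eye.yaw)
    dy = gain * iris_radius * math.sin(eye.pitch)
    return dx, dy

# Example: eye looking 20 degrees right and 10 degrees up on a 30 px iris
print(highlight_offset(EyeState(math.radians(20), math.radians(10)), iris_radius=30.0))
```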
Abstract: This paper addresses the problem of human-based driver support. Nowadays, driver support systems help users operate safely in many driving situations. Nevertheless, these systems do not fully use the rich information that is available from sensing the human driver. In this paper, we therefore present a human-based risk model that uses driver information for improved driver support. In contrast to the state of the art, our proposed risk model combines a) the current driver perception based on driver errors, such as the driver overlooking another vehicle (i.e., a notice error), and b) driver personalization, such as the driver being defensive or confident. In extensive simulations of multiple interactive driving scenarios, we show that our novel human-based risk model achieves earlier warning times and reduced warning errors compared to a baseline risk model that does not use human driver information.
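As an illustration of how a baseline risk estimate could be combined with driver-perception and driver-personalization terms, here is a minimal sketch under stated assumptions; it is not the paper's model. The scaling factors, the warning threshold, and the "defensive"/"confident" labels are hypothetical placeholders.

```python
# Hypothetical sketch: modulating a baseline collision risk with driver
# information (notice errors and driving style) to trigger earlier warnings.

from dataclasses import dataclass

@dataclass
class DriverState:
    noticed_other_vehicle: bool  # False models a "notice error" (overlooked vehicle)
    style: str                   # "defensive" or "confident" (illustrative labels)

def human_based_risk(baseline_risk: float, driver: DriverState) -> float:
    """Scale the baseline risk by illustrative factors.

    Risk grows if the driver has overlooked the other vehicle, since they are
    unlikely to react on their own; confident drivers are assumed to keep
    smaller margins, defensive drivers larger ones. Factor values are
    assumptions for illustration only.
    """
    risk = baseline_risk
    if not driver.noticed_other_vehicle:
        risk *= 1.5
    if driver.style == "confident":
        risk *= 1.2
    elif driver.style == "defensive":
        risk *= 0.9
    return min(risk, 1.0)

WARN_THRESHOLD = 0.5  # hypothetical warning threshold
driver = DriverState(noticed_other_vehicle=False, style="confident")
print(human_based_risk(0.4, driver) > WARN_THRESHOLD)  # warns earlier than baseline
```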