Abstract: Affective states play a critical role in driving performance and safety. They can degrade driver situation awareness and negatively impact cognitive processes, severely diminishing road safety. Therefore, detecting and assessing drivers' affective states is crucial to improving the driving experience and increasing safety, comfort, and well-being. Recent advances in affective computing have enabled the detection of such states. This may lead to empathic automotive user interfaces that account for the driver's emotional state and influence the driver in order to improve safety. In this work, we propose a multi-view multi-task machine learning method for detecting drivers' affective states from physiological signals. The proposed approach accounts for inter-drive variability in physiological responses while enabling interpretability of the learned models, a factor that is especially important for systems deployed in the real world. We evaluate the models on three datasets containing real-world driving experiences. Our results indicate that accounting for drive-specific differences significantly improves model performance.