Neuro-Information Technology Group, Otto-von-Guericke University Magdeburg
Abstract: Cooperative autonomous driving plays a pivotal role in improving road capacity and safety within intelligent transportation systems, particularly through the deployment of autonomous vehicles on urban streets. By enabling vehicle-to-vehicle communication, these systems expand the vehicles' environmental awareness, allowing them to detect hidden obstacles and thereby enhancing safety and reducing crash rates compared to human drivers, who rely solely on visual perception. A key application of this technology is vehicle platooning, in which connected vehicles drive in a coordinated formation. This paper introduces a vehicle platooning approach designed to enhance traffic flow and safety. Developed using deep reinforcement learning in the Unity 3D game engine, known for its advanced physics, the approach aims for a high-fidelity physical simulation that closely mirrors real-world conditions. The proposed platooning model focuses on scalability, decentralization, and fostering positive cooperation through the introduced predecessor-follower "sharing and caring" communication framework. The study demonstrates how these elements collectively enhance autonomous driving performance and robustness, both for individual vehicles and for the platoon as a whole, in an urban setting, resulting in improved road safety and reduced traffic congestion.
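To make the predecessor-follower "sharing and caring" idea more concrete, the sketch below shows one way a follower could fuse a predecessor's broadcast state into its own reinforcement-learning observation. All names and fields (V2VMessage, build_observation, the chosen state variables) are illustrative assumptions for this sketch, not the message format or observation layout used in the paper.

```python
# Minimal sketch of a predecessor-follower "sharing and caring" exchange.
# The message fields and observation layout are assumptions for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class V2VMessage:
    """State a predecessor shares with its immediate follower."""
    speed: float          # m/s
    acceleration: float   # m/s^2
    steering: float       # normalized [-1, 1]
    gap_ahead: float      # distance to its own predecessor (m)

def build_observation(own_sensors: List[float], msg: V2VMessage) -> List[float]:
    """Append the predecessor's shared state to the follower's local sensor
    readings so a decentralized policy can anticipate braking or turning."""
    return own_sensors + [msg.speed, msg.acceleration, msg.steering, msg.gap_ahead]

# Example: a follower fuses its own perception with the shared message
# before querying its policy for the next action.
obs = build_observation(
    own_sensors=[12.4, 0.8, 0.02],
    msg=V2VMessage(speed=11.9, acceleration=-0.5, steering=0.0, gap_ahead=8.2),
)
print(obs)
```

Because each follower only consumes the state of its immediate predecessor, the scheme stays decentralized and scales with platoon length.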
Abstract: Autonomous Vehicle (AV) technology is advancing rapidly, promising a significant shift in road transportation safety and potentially resolving various complex transportation issues. With the increasing deployment of AVs by various companies, questions emerge about how AVs interact with each other and with human drivers, especially as AVs become prevalent on the roads. Ensuring cooperative interaction among AVs and between AVs and human drivers is critical, though there are concerns about possible negative competitive behaviors. This paper presents a multi-stage approach, starting with the development of a single AV and progressing to connected AVs, incorporating a "sharing and caring" V2V communication strategy to enhance mutual coordination. A survey is conducted to validate the AV's driving performance and to support a mixed-traffic case study examining how human drivers react to an AV driving alongside them on the same road. Results show that, using deep reinforcement learning, the AV acquired driving behavior that reached human driving performance. The adoption of "sharing and caring" based V2V communication within AV networks enhances driving behavior, supports more effective action planning, and promotes collaborative behavior among the AVs. The survey shows that safety in mixed traffic cannot be guaranteed, as we cannot control ego-driven human actions if drivers decide to compete with the AV. Consequently, this paper advocates for enhanced research into the safe incorporation of AVs on public roads.
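As a hedged illustration of how "caring" behavior can be encouraged during training, the following sketch combines an egoistic term (own progress and collision penalty) with a cooperative term that rewards leaving a safe gap for a neighboring vehicle. The terms, thresholds, and weights are assumptions made for exposition only and do not reproduce the paper's actual reward function.

```python
# Illustrative reward shaping for a cooperative ("caring") RL agent.
# All weights and terms are assumptions for this sketch.
def cooperative_reward(own_progress: float,
                       own_collision: bool,
                       neighbor_headway: float,
                       min_safe_headway: float = 5.0,
                       care_weight: float = 0.3) -> float:
    """Combine an egoistic term (progress, collision penalty) with a
    cooperative term that rewards keeping a safe gap for a neighbor."""
    ego_term = own_progress - (10.0 if own_collision else 0.0)
    care_term = min(neighbor_headway / min_safe_headway, 1.0)  # saturates at 1
    return ego_term + care_weight * care_term

# A maneuver that shrinks the neighbor's headway lowers the reward,
# nudging the learned policy toward collaborative rather than competitive driving.
print(cooperative_reward(own_progress=1.0, own_collision=False, neighbor_headway=6.0))
```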
Abstract: Deception detection is an interdisciplinary field attracting researchers from psychology, criminology, computer science, and economics. We propose a multimodal approach combining deep learning and discriminative models for automated deception detection. Using video modalities, we employ convolutional end-to-end learning to analyze gaze, head pose, and facial expressions, achieving promising results compared to state-of-the-art methods. Because training data are limited, we also employ discriminative models: although sequence-to-class approaches are explored, the discriminative models outperform them under this data scarcity. Our approach is evaluated on five datasets, including a new Rolling-Dice Experiment motivated by economic factors. Results indicate that facial expressions outperform gaze and head pose, and that combining modalities with feature selection enhances detection performance. Differences in expressed features across datasets emphasize the importance of scenario-specific training data and the influence of context on deceptive behavior. Cross-dataset experiments reinforce these findings. Despite the challenges posed by low-stake datasets, including the Rolling-Dice Experiment, deception detection performance exceeds chance levels. Our proposed multimodal approach and comprehensive evaluation shed light on the potential of automating deception detection from video modalities, opening avenues for future research.
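The following minimal sketch illustrates the kind of discriminative, feature-selection-based fusion described above, using scikit-learn as an assumed toolkit on synthetic per-modality features. The feature dimensions, selector, and classifier settings are illustrative placeholders, not the configuration used in the paper.

```python
# Minimal sketch of a discriminative, multimodal deception classifier with
# feature selection (all dimensions and hyperparameters are illustrative).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 120
# Per-modality descriptors pooled over each video clip (synthetic stand-ins).
gaze = rng.normal(size=(n_samples, 16))
head_pose = rng.normal(size=(n_samples, 12))
face_expr = rng.normal(size=(n_samples, 32))
labels = rng.integers(0, 2, size=n_samples)   # 0 = truthful, 1 = deceptive

# Feature-level fusion: concatenate modalities, then keep the most
# discriminative dimensions before the SVM.
fused = np.hstack([gaze, head_pose, face_expr])
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=20),
                    SVC(kernel="rbf", C=1.0))
clf.fit(fused[:100], labels[:100])
print("held-out accuracy:", clf.score(fused[100:], labels[100:]))
```

With real descriptors, the same pipeline would let the selector reveal which modality contributes the most discriminative features per dataset, mirroring the per-scenario differences reported above.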