Abstract: Road markings are critical road safety features, needed equally by human drivers and by the machine vision technologies used in advanced driver assistance systems (ADAS) and driving automation. Road markings are visible because their colour contrasts with the roadway surface. During recent testing of an open-source camera-based ADAS under several visibility conditions (day, night, rain, glare), significant failures in trajectory planning were recorded and quantified. Under poor visibility conditions, ADAS reliability was consistently better with Type II road markings (i.e. structured markings that facilitate moisture drainage) than with Type I road markings (i.e. flat lines). To further understand these failures, the contrast ratio of the road markings, which the tested ADAS relied on for traffic lane recognition, was analysed. The highest contrast ratio (greater than 0.5, calculated with the Michelson equation) was measured at night in the absence of confounding factors, with a statistically significant difference of 0.1 in favour of Type II road markings over Type I. Under daylight conditions, the contrast ratio was reduced, with slightly higher values measured for Type I. Rain and wet roads degraded the contrast ratio further: although the values were low (less than 0.1), Type II road markings exhibited a significantly higher contrast ratio than Type I. These findings matched the ADAS output for traffic lane detection and underline the importance of road marking visibility: inadequate lane recognition by the ADAS was indeed associated with very low contrast ratios. Importantly, no specific minimum contrast ratio value could be identified, owing to the complexity of ADAS algorithms...
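The Michelson equation named above is not reproduced in the abstract; assuming the standard definition over the two luminances involved (marking and pavement), it reads:

```latex
% Michelson contrast, assuming the standard definition, where
% L_max and L_min are the higher and lower of the two luminances:
C_{M} = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}}, \qquad C_{M} \in [0, 1]
```

Under this definition, the reported night-time value of 0.5 implies that the brighter of the two surfaces is three times as luminant as the darker one.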
Abstract: Previous research has focused on either attitudes toward autonomous driving or behaviors associated with it. In this paper, we bridge these two dimensions by exploring how attitudes toward autonomous driving influence behavior in an autonomous car. We conducted a field experiment with twelve participants engaged in non-driving-related tasks. Our findings indicate that attitudes toward autonomous driving affect neither participants' interventions in vehicle control nor their eye-glance behavior. Studies of autonomous driving technology that lack field tests may therefore be unreliable for assessing the potential behaviors, attitudes, and acceptance of autonomous vehicles.
Abstract: As autonomous vehicle technology advances, precise assessment of safety in complex traffic scenarios becomes crucial, especially in mixed-vehicle environments where human perception of safety must be taken into account. This paper presents a framework for assessing traffic safety in multi-vehicle situations that supports the simultaneous use of diverse objective safety metrics and integrates subjective perception of safety through adjustable model parameters. The framework was applied to evaluate various model configurations in car-following scenarios on a highway, using naturalistic driving datasets. The model performed strongly, particularly when multiple objective safety measures were integrated, and performance improved significantly when all surrounding vehicles were considered.
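The abstract does not enumerate the objective safety metrics used. As one illustration of the kind of metric such a framework can ingest for car-following scenarios, a minimal sketch of time-to-collision (TTC) is given below; the function and argument names are hypothetical, not taken from the paper:

```python
import numpy as np

def time_to_collision(gap_m, v_follower_ms, v_leader_ms):
    """Classic car-following TTC: bumper-to-bumper gap divided by the
    closing speed; infinite when the gap is not closing."""
    closing = v_follower_ms - v_leader_ms
    # Guard the division so non-closing cases never divide by zero.
    return np.where(closing > 0, gap_m / np.maximum(closing, 1e-9), np.inf)

# Example: 20 m gap, follower at 25 m/s, leader at 20 m/s -> TTC = 4 s
print(time_to_collision(np.array([20.0]), np.array([25.0]), np.array([20.0])))
```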
Abstract: From SAE Level 3 of automation onwards, drivers are allowed to engage in activities that are not directly related to driving. At Level 3, however, a misunderstanding of the system's capabilities might lead drivers to engage in secondary tasks that impair their ability to react to challenging traffic situations. Anticipating driver activity allows risky behaviors to be detected early, helping to prevent accidents. Predicting driver activity requires training a deep learning network on a suitable dataset, yet training on simulation-based datasets and then predicting on real-world data has proven suboptimal. Hence, this paper presents a real-world driver activity dataset, openly accessible on IEEE Dataport, which encompasses various activities that occur in autonomous driving scenarios under varied illumination and weather conditions. Training results show that the dataset provides an excellent benchmark for implementing driver activity recognition models.
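The abstract does not name a specific architecture. As a minimal sketch of how such a dataset might be consumed, the following trains one step of a toy 3D-CNN activity classifier on video clips; the network, clip shape, and label count are all hypothetical placeholders:

```python
import torch
import torch.nn as nn

class ActivityNet(nn.Module):
    """Toy 3D-CNN over clips shaped (batch, 3, T, H, W)."""
    def __init__(self, num_activities: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, num_activities)

    def forward(self, clips):
        return self.head(self.features(clips).flatten(1))

model = ActivityNet(num_activities=10)       # label count is hypothetical
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

clips = torch.randn(4, 3, 16, 112, 112)      # dummy batch of 16-frame clips
labels = torch.randint(0, 10, (4,))
loss = criterion(model(clips), labels)
loss.backward()
optimizer.step()
```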
Abstract: Delivery services have undergone technological advancements, with robots now delivering packages directly to recipients. While these robots are designed for efficient functionality, they have not been specifically designed for interaction with humans. Building on the premise that incorporating human-like characteristics into a robot can positively impact technology acceptance, this study explores human reactions to a robot equipped with facial expressions. The findings indicate a correlation between anthropomorphic features and the observed responses.
Abstract: The acquisition and analysis of high-quality sensor data constitute an essential requirement in shaping the development of fully autonomous driving systems. This process is indispensable for enhancing road safety and ensuring the effectiveness of technological advancements in the automotive industry. This study introduces the Interaction of Autonomous and Manually-Controlled Vehicles (IAMCV) dataset, a novel and extensive dataset focused on inter-vehicle interactions. The dataset, recorded with a rich sensor suite including Light Detection and Ranging (LIDAR), cameras, an Inertial Measurement Unit/Global Positioning System, and vehicle bus data acquisition, provides a comprehensive representation of real-world driving scenarios (roundabouts, intersections, country roads, and highways) across diverse locations in Germany. Furthermore, the study shows the versatility of the IAMCV dataset through several proof-of-concept use cases. Firstly, an unsupervised trajectory clustering algorithm illustrates the dataset's capability to categorize vehicle movements without the need for labeled training data. Secondly, we compare an online camera calibration method with the Robot Operating System-based standard, using images captured in the dataset. Finally, a preliminary test employing the YOLOv8 object-detection model is conducted, augmented by reflections on the transferability of object detection across various LIDAR resolutions. These use cases underscore the practical utility of the collected dataset, emphasizing its potential to advance research and innovation in the area of intelligent vehicles.
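As an illustration of the third use case, a minimal YOLOv8 inference sketch with the ultralytics package is shown below; the weights file is the standard pretrained nano variant and the image path is a placeholder, not the IAMCV file layout:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 detector and run it on one camera frame.
model = YOLO("yolov8n.pt")
results = model("camera_frame.jpg")   # placeholder image path

for result in results:
    for box in result.boxes:
        cls_name = model.names[int(box.cls)]   # class label
        conf = float(box.conf)                 # detection confidence
        print(f"{cls_name}: {conf:.2f}, xyxy={box.xyxy.tolist()}")
```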
Abstract: For driver observation frameworks, clean datasets collected in controlled simulated environments often serve as the initial training ground. Yet, when deployed under real driving conditions, such simulator-trained models quickly face distributional shifts brought about by changing illumination, car model, variations in subject appearance, sensor discrepancies, and other environmental alterations. This paper investigates the viability of transferring video-based driver observation models from simulation to real-world scenarios in autonomous vehicles, given that simulation data is frequently used in this domain for safety reasons. To achieve this, we record a dataset featuring actual autonomous driving conditions and involving seven participants engaged in highly distracting secondary activities. To enable direct SIM-to-REAL transfer, our dataset was designed in accordance with an existing large-scale simulator dataset used as the training source. We utilize the Inflated 3D ConvNet (I3D) model, a popular choice for driver observation, with Gradient-weighted Class Activation Mapping (Grad-CAM) for detailed analysis of model decision-making. Though the simulator-based model clearly surpasses the random baseline, its recognition quality diminishes, with average accuracy dropping from 85.7% to 46.6%, and we observe strong variations across behavior classes. This underscores the challenges of model transferability and motivates our research into more robust driver observation systems capable of dealing with real driving conditions.
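Grad-CAM itself is model-agnostic. A minimal sketch of how saliency volumes can be extracted from a 3D convolutional backbone such as I3D via forward/backward hooks follows; the choice of target layer is architecture-specific and left as a parameter:

```python
import torch

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

def grad_cam(model, target_layer, clip, class_idx):
    """clip: (1, 3, T, H, W) video tensor; returns a (T', H', W')
    saliency volume over the target layer's feature map."""
    h1 = target_layer.register_forward_hook(save_activation)
    h2 = target_layer.register_full_backward_hook(save_gradient)
    logits = model(clip)
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()
    acts, grads = activations["value"], gradients["value"]
    weights = grads.mean(dim=(2, 3, 4), keepdim=True)  # GAP over T', H', W'
    cam = torch.relu((weights * acts).sum(dim=1))[0]   # weighted sum over C
    return cam / (cam.max() + 1e-8)                    # normalize to [0, 1]
```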
Abstract: In this work, we applied the methodology outlined in IEEE Standard 2846-2022, "Assumptions in Safety-Related Models for Automated Driving Systems", to extract information on the behavior of other road users in driving scenarios. The method includes defining high-level scenarios, determining kinematic characteristics, evaluating safety relevance, and making assumptions about reasonably predictable behaviors. The assumptions were expressed as kinematic bounds, whose numerical values were extracted with Python scripts processing realistic data from the UniD dataset. The resulting information enables designers of Automated Driving Systems to specify the parameters and limits of a road user's state in a specific scenario, and can be used to establish starting conditions for testing a vehicle equipped with an Automated Driving System in simulation or on actual roads.
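The extraction scripts are not reproduced in the abstract. A minimal sketch of how percentile-based kinematic bounds could be pulled from tabular trajectory data is shown below; the file name and column name are hypothetical, not the actual UniD schema:

```python
import numpy as np
import pandas as pd

# Hypothetical schema: one row per tracked road user per frame, with
# longitudinal acceleration in m/s^2; real UniD column names may differ.
tracks = pd.read_csv("unid_tracks.csv")
accel = tracks["lon_acceleration"].dropna().to_numpy()

# Express bounds as robust percentiles rather than raw extremes, in the
# spirit of reasonably predictable behavior per IEEE Std 2846-2022.
bounds = {
    "max_acceleration_ms2": float(np.percentile(accel, 99.5)),
    "max_deceleration_ms2": float(np.percentile(accel, 0.5)),
}
print(bounds)
```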
Abstract: This paper presents the development of the JKU-ITS Last Mile Delivery Robot. The proposed approach combines a 3D LIDAR, an RGB-D camera, an IMU, and a GPS sensor mounted on a mobile slope-mower robot platform. An embedded computer running ROS1 processes the sensor data streams to enable 2D and 3D Simultaneous Localization and Mapping, 2D localization, and object detection using a convolutional neural network.
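As a sketch of how such sensor streams are consumed on a ROS1 embedded computer, a minimal rospy node subscribing to a LIDAR point cloud is shown below; the topic name is a placeholder, not necessarily the robot's actual configuration:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import PointCloud2

def cloud_callback(msg):
    # Each PointCloud2 message carries width*height points per sweep.
    rospy.loginfo("received cloud with %d points", msg.width * msg.height)

if __name__ == "__main__":
    rospy.init_node("lidar_listener")
    # Placeholder topic; remap to the robot's actual LIDAR topic.
    rospy.Subscriber("/points_raw", PointCloud2, cloud_callback)
    rospy.spin()
```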
Abstract: The transportation sector accounts for about 25% of global greenhouse gas emissions, so improving energy efficiency in this sector is crucial to reducing the carbon footprint. Efficiency is typically measured as energy use per traveled distance, e.g. liters of fuel per kilometer. Leading factors that impact energy efficiency include vehicle type, environment, driver behavior, and weather conditions, and their variation introduces uncertainty into estimates of a vehicle's energy efficiency. In this paper, we propose an ensemble learning approach based on deep neural networks (ENN) that is designed to reduce predictive uncertainty and to output measures of that uncertainty. We evaluated it on the publicly available Vehicle Energy Dataset (VED) and compared it with several baselines per vehicle and energy type. The results showed high predictive performance while also providing a measure of predictive uncertainty.
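The abstract does not detail how the ensemble output is aggregated. A common deep-ensemble formulation takes the mean over member predictions as the estimate and the disagreement across members as the uncertainty, sketched below; the member models and inputs are toy placeholders for trained networks:

```python
import numpy as np

def ensemble_predict(members, x):
    """Deep-ensemble regression: mean prediction plus an uncertainty
    estimate from the standard deviation across ensemble members."""
    preds = np.stack([m(x) for m in members])  # (n_members, n_samples)
    return preds.mean(axis=0), preds.std(axis=0)

# Toy stand-ins for trained networks (placeholders, not the paper's ENN)
members = [lambda x: 0.10 * x, lambda x: 0.11 * x, lambda x: 0.09 * x]
x = np.array([50.0, 100.0])                    # e.g. distance traveled in km
mean, sigma = ensemble_predict(members, x)
print(mean, sigma)                             # energy estimate and uncertainty
```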