Abstract:Objective: A new method for generating realistic electrogastrogram (EGG) time series is presented and evaluated. Methods: We used EGG data from an existing open database to set model parameters, and Monte Carlo simulation to evaluate the new model, based on the hypothesis that the EGG dominant frequency should differ in a statistically significant way between fasting and postprandial states. Additionally, we illustrated how the method can be customized to generate artificial EGG alterations caused by simulator sickness. Results: The user can specify the following input parameters of the developed data-driven model: (1) duration of the generated sequence, (2) sampling frequency, (3) recording state (postprandial or fasting), (4) breathing artifact contamination, (5) a flag indicating whether the output should include plots, (6) a seed for reproducibility, (7) pauses in the gastric rhythm (arrhythmia occurrence), and (8) overall noise contamination to produce proper variability in EGG signals. The simulated EGG yielded the expected Monte Carlo simulation results, while features obtained from the synthetic EGG signal resembling simulator sickness occurrence displayed the expected trends. Conclusion: The code for generating synthetic EGG time series is freely available and can be further customized to assess the robustness of signal processing algorithms to noise, and especially to movement artifacts, as well as to simulate alterations of gastric electrical activity. Significance: The proposed approach is customized for EGG data synthesis, but it can also be applied to other biosignals of a similar nature, such as the electroencephalogram.
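The published generator is not reproduced here; the following is a minimal sketch of what a parameterized synthetic-EGG interface with the listed inputs (duration, sampling frequency, recording state, breathing artifact, seed, noise level) might look like. The function name and the gastric/respiratory frequency values are illustrative assumptions, not the paper's fitted parameters; a normogastric rhythm of about 3 cycles per minute is the only physiologically grounded choice.

```python
import numpy as np

def generate_egg(duration_s=600, fs=2.0, postprandial=True,
                 breathing_artifact=True, noise_std=0.1, seed=None):
    """Sketch of a parameterized synthetic EGG generator.

    The dominant gastric frequency is ~3 cycles/min; the fasting state
    is modeled here with a slightly lower frequency and amplitude
    (illustrative values only, not the paper's fitted parameters).
    Returns the time axis and the simulated signal.
    """
    rng = np.random.default_rng(seed)          # seeded for reproducibility
    t = np.arange(0, duration_s, 1.0 / fs)
    # Dominant slow-wave component (Hz)
    f_gastric = 3.0 / 60.0 if postprandial else 2.7 / 60.0
    amp = 1.0 if postprandial else 0.7
    egg = amp * np.sin(2 * np.pi * f_gastric * t)
    if breathing_artifact:
        # Respiration artifact at ~15 breaths/min (0.25 Hz)
        egg += 0.3 * np.sin(2 * np.pi * 0.25 * t)
    # Overall noise contamination for realistic variability
    egg += rng.normal(0.0, noise_std, t.size)
    return t, egg

t, x = generate_egg(duration_s=600, fs=2.0, seed=42)
```

With `fs=2.0` Hz and a 600 s duration, the call returns 1200 samples, and reusing the same seed reproduces the signal exactly.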
Abstract:We present a method to automatically calculate sensing time (ST) from eye tracker data in subjects with neurological impairment using a driving simulator. ST represents the time interval needed for a person to notice a stimulus from its first occurrence. Specifically, we measured the time from the moment the children started to cross the street until the drivers directed their gaze to the children. In contrast to the commonly used reaction time, ST does not require additional neuro-muscular responses such as braking and thus provides unique information on sensory function. Of the 108 neurological patients recruited for the study, ST analysis was performed on 56 patients to assess fit-, unfit-, and conditionally-fit-to-drive patients. The results showed that the proposed method, based on the YOLO (You Only Look Once) object detector, is efficient for computing STs from eye tracker data in neurological patients. We obtained discriminative results for fit-to-drive patients by applying Tukey's Honest Significant Difference post hoc test (p < 0.01), while no difference was observed between the conditionally-fit and unfit-to-drive groups (p = 0.542). Moreover, we show that time-to-collision (TTC), initial gaze distance (IGD) from pedestrians, and speed at hazard onset did not influence the result, while the only significant interaction on ST was among fitness, IGD, and TTC. Although the proposed method can be applied to assess fitness to drive, we provide directions for future driving simulation-based evaluation and propose a processing workflow to ensure reliable ST calculation in other domains such as psychology, neuroscience, and marketing.
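The core ST computation the abstract describes (time from hazard onset to the first gaze sample landing on the detected pedestrian) can be sketched as below. The function name, the `(t, x, y)` gaze-sample layout, and the per-frame bounding-box dictionary are hypothetical; in the actual workflow the boxes would come from a YOLO detector run on the simulator video.

```python
def sensing_time(gaze_samples, hazard_onset, boxes):
    """Return ST in seconds, or None if the hazard was never fixated.

    gaze_samples: iterable of (t, x, y) eye-tracker samples in screen
                  coordinates, sorted by time.
    hazard_onset: time (s) at which the pedestrian starts crossing.
    boxes:        mapping t -> (x1, y1, x2, y2), the pedestrian bounding
                  box per frame (e.g., produced by a YOLO detector).
    """
    for t, gx, gy in gaze_samples:
        if t < hazard_onset or t not in boxes:
            continue  # hazard not yet present, or no detection this frame
        x1, y1, x2, y2 = boxes[t]
        if x1 <= gx <= x2 and y1 <= gy <= y2:
            # First gaze sample inside the pedestrian box after onset
            return t - hazard_onset
    return None
```

Unlike reaction time, this value is derived purely from gaze data, with no motor response (e.g., braking) required, which is what makes ST a sensory-only measure.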
Abstract:With an increasing focus on privacy protection, alternative methods to identify a vehicle operator without the use of biometric identifiers have gained traction in automotive data analysis. The wide variety of sensors installed on modern vehicles enables autonomous driving, reduces accidents, and improves vehicle handling. On the other hand, the data these sensors collect reflect drivers' habits. Drivers' use of turn indicators, following distance, rate of acceleration, etc. can be transformed into an embedding that is representative of their behavior and identity. In this paper, we develop a deep learning architecture (Driver2vec) that maps a short interval of driving data into an embedding space representing the driver's behavior, to assist in driver identification. We develop a custom model that leverages the performance gains of temporal convolutional networks, the embedding separation power of triplet loss, and the classification accuracy of gradient boosting decision trees. Trained on a dataset of 51 drivers provided by Nervtech, Driver2vec is able to accurately identify a driver from a short 10-second interval of sensor data, achieving an average pairwise driver identification accuracy of 83.1%, which is substantially higher than the performance obtained in previous studies. We then analyzed the performance of Driver2vec to show that it is consistent across scenarios and that our modeling choices are sound.
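The "embedding separation power of triplet loss" mentioned above refers to the standard triplet objective: pull an anchor embedding toward a positive example (same driver) and push it away from a negative (different driver) by at least a margin. A minimal NumPy sketch of that loss, using squared Euclidean distance (the specific distance and margin used by Driver2vec are not stated in the abstract, so these are assumptions):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on embedding vectors.

    anchor, positive: embeddings of intervals from the same driver.
    negative:         embedding of an interval from a different driver.
    The loss is zero once the negative is at least `margin` farther
    (in squared distance) from the anchor than the positive is.
    """
    d_pos = np.sum((anchor - positive) ** 2)  # same-driver distance
    d_neg = np.sum((anchor - negative) ** 2)  # cross-driver distance
    return float(max(d_pos - d_neg + margin, 0.0))
```

In a pipeline like the one described, embeddings trained with this objective would then be fed to a gradient boosting classifier for the final driver-identification decision.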