Abstract: The performance of current collision avoidance systems in Autonomous Vehicles (AV) and Advanced Driver Assistance Systems (ADAS) can be drastically affected by low light and adverse weather conditions. Collisions with large animals such as deer in low light cause significant cost and damage every year. In this paper, we propose the first AI-based method for predicting the future trajectory of large animals and mitigating the risk of collision with them in low light. To minimize false collision warnings, our multi-step framework first detects the large animal, predicts a preliminary risk level for it, and discards low-risk animals. In the next stage, a multi-stream ConvLSTM-based encoder-decoder framework predicts the future trajectory of the potentially high-risk animals. The proposed model uses camera motion prediction as well as the local and global context of the scene to generate accurate predictions. Furthermore, this paper introduces a new dataset of FIR videos for large animal detection and risk estimation in real nighttime driving scenarios. Our experiments show promising results of the proposed framework in adverse conditions. Our code is available online.
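To make the multi-stream ConvLSTM encoder-decoder idea concrete, the following is a minimal, illustrative sketch in PyTorch, not the paper's implementation. The number of streams, channel counts, hidden size, prediction horizon, and the (dx, dy, dw, dh) output parameterization are all assumptions for demonstration only.

```python
# Minimal sketch of a multi-stream ConvLSTM encoder-decoder for trajectory
# prediction. All shapes and stream roles (e.g. animal crop, global context,
# ego-motion features) are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell: all four gates computed with one 2D convolution."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g
        h = o * c.tanh()
        return h, c


class MultiStreamTrajectoryPredictor(nn.Module):
    """Encodes each input stream with its own ConvLSTM, fuses the hidden
    states, and decodes a sequence of future bounding-box offsets."""

    def __init__(self, stream_channels=(3, 3, 2), hid_ch=32, horizon=10):
        super().__init__()
        self.hid_ch = hid_ch
        self.horizon = horizon
        self.encoders = nn.ModuleList(
            [ConvLSTMCell(c, hid_ch) for c in stream_channels]
        )
        self.decoder = ConvLSTMCell(hid_ch * len(stream_channels), hid_ch)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(hid_ch, 4)
        )  # 4 = (dx, dy, dw, dh) per future step

    def forward(self, streams):
        # streams: list of tensors, each of shape (batch, time, channels, H, W)
        B, T, _, H, W = streams[0].shape
        states = [
            (s.new_zeros(B, self.hid_ch, H, W), s.new_zeros(B, self.hid_ch, H, W))
            for s in streams
        ]
        for t in range(T):  # encode the observed history
            for i, enc in enumerate(self.encoders):
                states[i] = enc(streams[i][:, t], states[i])
        fused = torch.cat([h for h, _ in states], dim=1)
        dec_state = (
            fused.new_zeros(B, self.hid_ch, H, W),
            fused.new_zeros(B, self.hid_ch, H, W),
        )
        outputs = []
        for _ in range(self.horizon):  # roll out the future steps
            dec_state = self.decoder(fused, dec_state)
            outputs.append(self.head(dec_state[0]))
        return torch.stack(outputs, dim=1)  # (batch, horizon, 4)
```

For example, three streams of shape (B, T, C, 64, 64) with C matching `stream_channels` produce a (B, 10, 4) tensor of predicted box offsets.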
Abstract: This paper introduces a novel approach for real-time estimation of the 3D tactile forces exerted by human fingertips using vision alone. The approach is entirely monocular and requires no physical force sensor; it is therefore scalable, non-intrusive, and easily fused with other perception systems such as body pose estimation, making it well suited to HRI applications where force sensing is necessary. The approach consists of three main modules: finger tracking for detecting and tracking each individual finger, image alignment for preserving the spatial information in the images, and a force model for estimating the 3D forces from coloration patterns in the images. The model has been implemented experimentally, and the results show a maximum RMS error of 8.4% (over the entire range of force levels) along all three directions. This estimation accuracy is comparable to offline models in the literature, such as EigenNail, while this model performs force estimation at 30 frames per second.
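The following is a hedged sketch of how such a three-module pipeline could be wired together, not the authors' implementation. The template-matching tracker, the resize-based "alignment" stand-in, the small CNN regressor, and all sizes are assumptions for illustration; in particular, the real system registers nail images rather than simply resizing them.

```python
# Illustrative per-frame pipeline: finger tracking -> image alignment ->
# force regression. All components here are simplified placeholders.
import cv2
import torch
import torch.nn as nn


class ForceRegressor(nn.Module):
    """Small CNN mapping an aligned fingernail image to a 3D force vector."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 3),  # (fx, fy, fz)
        )

    def forward(self, x):
        return self.net(x)


def track_finger(frame, template):
    """Locate the fingernail region by normalized cross-correlation."""
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(result)
    h, w = template.shape[:2]
    return frame[y:y + h, x:x + w]


def align(crop, size=(64, 64)):
    """Stand-in for the alignment module: resize the crop to a canonical
    resolution. A real system would register the nail image so that pixel
    locations correspond across frames."""
    return cv2.resize(crop, size)


def estimate_force(frame, template, model):
    """Run the full per-frame pipeline and return a 3-vector of forces."""
    crop = track_finger(frame, template)
    aligned = align(crop)
    x = torch.from_numpy(aligned).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    return model(x).squeeze(0)
```

Keeping each stage a small, independent function is what makes a 30 fps budget plausible: tracking and alignment are cheap image operations, and only the regressor requires a forward pass.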
Abstract: Fingernail imaging has been shown in prior works [1], [2] to be effective for estimating 3D fingertip forces, with a maximum RMS estimation error of 7%. In the current research, fingernail imaging is used to perform unconstrained grasp-force measurement on multiple fingers in order to study human grasping. Moreover, two robotic arms with mounted cameras and a visual tracking system have been devised to keep the human fingers in the camera frame during the experiments. Experimental tests have been conducted with six human subjects under both constrained and unconstrained grasping conditions, and the results indicate a significant difference in force collaboration among the fingers between the two grasping conditions. The experiments also show that, compared to constrained grasping, unconstrained grasp forces are more evenly distributed over the fingers and each finger's force varies less (is steadier). These results validate the importance of measuring grasp forces in an unconstrained manner in order to study how humans naturally grasp objects.
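The abstract does not specify how "evenly distributed" and "steadiness" are quantified; the short sketch below shows one plausible pair of metrics (per-finger force share and coefficient of variation over time) on synthetic data, purely as an illustrative assumption.

```python
# Illustrative metrics for the two observations above: how evenly grasp force
# is shared across fingers, and how steady each finger's force is over time.
# These are assumed metrics, not necessarily those used in the paper.
import numpy as np


def force_share(forces):
    """forces: (T, n_fingers) array of force magnitudes over time.
    Returns each finger's average share of the total grasp force."""
    mean_per_finger = forces.mean(axis=0)
    return mean_per_finger / mean_per_finger.sum()


def steadiness(forces):
    """Coefficient of variation per finger: lower means a steadier force."""
    return forces.std(axis=0) / forces.mean(axis=0)


# Example: a 3-finger grasp sampled over 100 time steps (synthetic numbers).
rng = np.random.default_rng(0)
grasp = np.abs(rng.normal(loc=[5.0, 3.0, 2.0], scale=[0.5, 0.4, 0.3], size=(100, 3)))
print("force share:", force_share(grasp))
print("coefficient of variation:", steadiness(grasp))
```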