Abstract: The majority of fatalities and traumatic injuries in heavy industries involve mobile plant and vehicles, often resulting from a lapse of attention or communication. Existing approaches to hazard identification include the use of human spotters, passive reversing cameras, non-differentiating proximity sensors and tag-based systems. These approaches either suffer from problems of worker attention or require the use of additional devices on all workers and obstacles. Whilst computer vision detection systems have previously been deployed in structured applications such as manufacturing and on-road vehicles, there does not yet exist a robust and portable solution for use in unstructured environments like construction that effectively communicates risks to relevant workers. To address these limitations, our solution, the Toolbox Spotter (TBS), improves worker safety and reduces preventable incidents by employing embedded robotic perception and a distributed HMI alert system to augment both the detection and communication of hazards in safety-critical environments. In this paper we outline the TBS safety system and evaluate its performance on data from real-world implementations, demonstrating the suitability of the Toolbox Spotter for applications in heavy industries.
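The abstract above describes the system only at an architectural level; the short Python sketch below illustrates the general detect-then-alert pattern it implies, with an embedded perception stage feeding alerts to distributed HMI endpoints. This is not the TBS implementation: every class, function, threshold and endpoint name here is a hypothetical placeholder used purely for illustration.

```python
# Illustrative sketch only (not the TBS implementation): an embedded
# perception node detects people near a mobile plant item and pushes
# alerts to distributed HMI devices. All names below are hypothetical.
import time
from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # e.g. "person"
    distance_m: float   # estimated range from the vehicle


def detect_people(frame):
    """Placeholder for an embedded person detector (e.g. an on-board CNN)."""
    return [Detection("person", 4.2)]   # dummy output for illustration


def broadcast_alert(detection, hmi_endpoints):
    """Placeholder for the distributed HMI alert channel (radio, Wi-Fi, etc.)."""
    for endpoint in hmi_endpoints:
        print(f"ALERT -> {endpoint}: {detection.label} at {detection.distance_m:.1f} m")


DANGER_RADIUS_M = 5.0                                  # assumed alerting radius
hmi_endpoints = ["operator-cab-display", "worker-wearable-01"]

while True:
    frame = None                                       # stand-in for a camera frame
    for det in detect_people(frame):
        if det.label == "person" and det.distance_m < DANGER_RADIUS_M:
            broadcast_alert(det, hmi_endpoints)
    time.sleep(0.1)
    break   # single iteration for this sketch
```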
Abstract: Line scanning cameras, which capture only a single line of pixels, have been increasingly used on ground-based mobile and robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera's 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera's pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data, and reprojecting them to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method estimated the pose to within 0.06 m / 1.05$^{\circ}$ and 0.18 m / 2.39$^{\circ}$. We also propose several approaches to displaying and interpreting the 6D results in a human-readable way.
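As a rough illustration of the calibration idea summarised above, the Python sketch below minimises a Gaussian reprojection error (equivalently, maximises the corresponding likelihood) over a 6D camera-to-body offset, using a simplified 1D pinhole projection as a stand-in for the line-scan geometry. It is not the authors' implementation: the projection model, focal length, noise level and point set are assumptions made purely for this example, and the posterior uncertainty of the offset could analogously be sampled with an MCMC package rather than point-optimised.

```python
# Minimal sketch (not the paper's implementation): estimate a rigid
# camera-to-body offset by minimising the reprojection error of labelled
# calibration points, i.e. maximising a Gaussian likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation as R


def project(points_body, offset, f=1000.0):
    """Project 3D points (vehicle body frame) through a simplified 1D pinhole.

    offset = [tx, ty, tz, roll, pitch, yaw] is the hypothesised camera pose
    relative to the navigation system / body frame. The true line-scan
    geometry is simplified to a single pixel coordinate for illustration.
    """
    t, rpy = offset[:3], offset[3:]
    R_cb = R.from_euler("xyz", rpy).as_matrix()     # body -> camera rotation
    p_cam = (points_body - t) @ R_cb.T              # points in the camera frame
    return f * p_cam[:, 0] / p_cam[:, 2]            # 1D pixel coordinate


def neg_log_likelihood(offset, points_body, observed_px, sigma_px=1.0):
    """Negative log of a Gaussian reprojection likelihood."""
    residual = project(points_body, offset) - observed_px
    return 0.5 * np.sum((residual / sigma_px) ** 2)


# Synthetic stand-ins for triangulated calibration points and their labels.
rng = np.random.default_rng(0)
true_offset = np.array([0.5, 0.1, -0.2, 0.02, -0.01, 0.03])
points_body = rng.uniform([2.0, -1.0, 3.0], [4.0, 1.0, 6.0], size=(30, 3))
observed_px = project(points_body, true_offset) + rng.normal(0.0, 1.0, 30)

# In practice a gradient-based or global optimiser (and MCMC for uncertainty)
# may be preferable; Nelder-Mead keeps the sketch dependency-free.
result = minimize(neg_log_likelihood, x0=np.zeros(6),
                  args=(points_body, observed_px), method="Nelder-Mead")
print("Estimated camera offset [tx, ty, tz, roll, pitch, yaw]:", result.x)
```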