Abstract: The remote mapping of minerals and discrimination of ore and waste on surfaces are important tasks for geological applications such as those in mining. Such tasks have become possible using ground-based, close-range hyperspectral sensors which can remotely measure the reflectance properties of the environment with high spatial and spectral resolution. However, autonomous mapping of mineral spectra measured on an open-cut mine face remains a challenging problem due to the subtlety of differences in spectral absorption features between mineral and rock classes, as well as variability in the illumination of the scene. An additional layer of difficulty arises when no annotated data are available to train a supervised learning algorithm. A pipeline for unsupervised mapping of spectra on a mine face is proposed which draws from several recent advances in the hyperspectral machine learning literature. The proposed pipeline brings together unsupervised and self-supervised algorithms in a unified system to map minerals on a mine face without the need for human-annotated training data. The pipeline is evaluated with a hyperspectral image dataset of an open-cut mine face comprising the mineral ore martite and non-mineralised shale. The combined system is shown to produce a map superior to those of its constituent algorithms, and the consistency of its mapping capability is demonstrated using data acquired at two different times of day.
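The abstract does not reproduce the pipeline itself; as a minimal sketch of one unsupervised stage, the example below clusters brightness-normalised pixel spectra with k-means into two classes (standing in for ore vs. non-mineralised rock). The array shapes, the two-class setting and the use of scikit-learn are assumptions for illustration only, not the paper's actual algorithms.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_mine_face(cube, n_classes=2, seed=0):
    """Cluster per-pixel spectra of a hyperspectral cube (H x W x B).

    A simple unsupervised stand-in for one stage of a mapping pipeline:
    spectra are L2-normalised (to reduce overall-brightness effects from
    illumination) and grouped with k-means.  Returns an H x W label map.
    """
    h, w, b = cube.shape
    spectra = cube.reshape(-1, b).astype(np.float64)
    # Normalise each spectrum so clustering responds to spectral shape
    # (absorption features) rather than overall brightness.
    norms = np.linalg.norm(spectra, axis=1, keepdims=True)
    spectra = spectra / np.clip(norms, 1e-8, None)
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=seed).fit_predict(spectra)
    return labels.reshape(h, w)

if __name__ == "__main__":
    # Synthetic data standing in for a mine-face scan (hypothetical 200-band cube).
    rng = np.random.default_rng(0)
    cube = rng.random((64, 64, 200))
    label_map = cluster_mine_face(cube, n_classes=2)
    print(label_map.shape, np.unique(label_map))
```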
Abstract: This paper presents the initial stages in the development of a deep learning classifier for generalised Resident Space Object (RSO) characterisation that combines high-fidelity simulated light curves with transfer learning to improve the performance of object characterisation models trained on real data. The classification and characterisation of RSOs is a significant goal in Space Situational Awareness (SSA), as it improves the accuracy of orbital predictions. The specific focus of this paper is the development of a high-fidelity simulation environment for generating realistic light curves. The simulator takes in a textured geometric model of an RSO as well as the object's ephemeris, and uses Blender to generate photo-realistic images of the RSO that are then processed to extract the light curve. Simulated light curves have been compared with real light curves extracted from telescope imagery to validate the simulation environment. Future work will involve further validation and the use of the simulator to generate a dataset of realistic light curves for training neural networks.
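As a hedged illustration of the light-curve extraction step described above, the sketch below sums background-subtracted pixel values over a sequence of rendered frames to give one brightness sample per frame. The grayscale frame format, the percentile background estimate and the synthetic data are assumptions for the example; the paper's actual photometric processing, frame timing and calibration are not reproduced here.

```python
import numpy as np

def extract_light_curve(frames, background_percentile=50.0):
    """Extract a simple light curve from a sequence of rendered frames.

    Each frame (2-D grayscale array) is background-subtracted using a
    global percentile estimate, and the remaining flux is summed to give
    one brightness sample per frame.
    """
    curve = []
    for frame in frames:
        frame = np.asarray(frame, dtype=np.float64)
        background = np.percentile(frame, background_percentile)
        flux = np.clip(frame - background, 0.0, None).sum()
        curve.append(flux)
    return np.array(curve)

if __name__ == "__main__":
    # Synthetic example: a small bright patch whose brightness oscillates,
    # standing in for rendered frames of a tumbling object.
    frames = []
    for t in range(100):
        frame = np.full((32, 32), 0.05)          # uniform background level
        frame[14:18, 14:18] += 0.5 * (1.0 + np.sin(0.2 * t))
        frames.append(frame)
    print(extract_light_curve(frames)[:5])
```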
Abstract: This paper presents an autonomous approach to tree detection and segmentation in high-resolution airborne LiDAR that utilises state-of-the-art region-based CNN and 3D-CNN deep learning algorithms. When the number of training examples for a site is low, it is shown to be beneficial to transfer a segmentation network learnt from a different site with more training data and fine-tune it. The algorithm was validated using airborne laser scanning over two different commercial pine plantations. The results show that the proposed approach performs favourably in comparison to other methods for tree detection and segmentation.
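As a hedged illustration of the transfer-and-fine-tune step, the PyTorch sketch below loads weights learnt at a data-rich site, freezes an assumed parameter group, and builds an optimiser over the remaining layers. The model structure, checkpoint file name and frozen-prefix convention are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

def fine_tune_from_other_site(model: nn.Module, pretrained_weights: str,
                              freeze_prefix: str = "backbone", lr: float = 1e-4):
    """Prepare a segmentation network trained on a data-rich site for
    fine-tuning on a site with few training examples.

    Loads the source-site weights, freezes parameters whose names start
    with `freeze_prefix`, and returns the model plus an optimiser over
    the remaining (trainable) parameters.
    """
    state = torch.load(pretrained_weights, map_location="cpu")
    model.load_state_dict(state)
    # Freeze the early feature-extraction layers; only the remaining
    # layers are updated with the small amount of target-site data.
    for name, param in model.named_parameters():
        if name.startswith(freeze_prefix):
            param.requires_grad = False
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimiser = torch.optim.Adam(trainable, lr=lr)
    return model, optimiser

if __name__ == "__main__":
    import os, tempfile
    # Toy stand-in network; a real segmentation model would replace this.
    net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.Conv2d(8, 2, 1))
    path = os.path.join(tempfile.mkdtemp(), "source_site.pt")
    torch.save(net.state_dict(), path)
    net, opt = fine_tune_from_other_site(net, path, freeze_prefix="0")
    print(sum(p.requires_grad for p in net.parameters()), "trainable tensors")
```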
Abstract: Hyperspectral imaging sensors are becoming increasingly popular in robotics applications such as agriculture and mining, and allow per-pixel thematic classification of materials in a scene based on their unique spectral signatures. Recently, convolutional neural networks have shown remarkable performance for classification tasks, but require substantial amounts of labelled training data. This data must sufficiently cover the variability expected to be encountered in the environment. For hyperspectral data, one of the main variations encountered outdoors is due to incident illumination, which can change in spectral shape and intensity depending on the scene geometry. For example, regions occluded from the sun have a lower intensity and their incident irradiance is skewed towards shorter wavelengths. In this work, a data augmentation strategy based on relighting is used during training of a hyperspectral convolutional neural network. It allows training to occur in the outdoor environment given only a small labelled region, which does not need to fully represent the geometric variability of the entire scene. This is important for applications where obtaining large amounts of training data is laborious, hazardous or difficult, such as labelling pixels within shadows. Radiometric normalisation approaches for pre-processing the hyperspectral data are analysed, and it is shown that methods based on the raw pixel data are sufficient as input for the classifier. This removes the need for external hardware such as calibration boards, which can restrict the use of hyperspectral sensors in robotics applications. Experiments to evaluate the classification system are carried out on two datasets captured from a field-based platform.
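As a hedged illustration of relighting-based augmentation, the sketch below re-renders labelled training spectra under randomly chosen alternative illuminant spectra (e.g. a shadow-like illuminant weighted towards shorter wavelengths). The illuminant ratios, band range and array shapes are assumptions for the example; the paper's measured irradiance models and network training are not reproduced.

```python
import numpy as np

def relight_augment(spectra, illuminants, rng=None):
    """Relighting-style augmentation for hyperspectral training pixels.

    Each training spectrum is multiplied by a randomly chosen illuminant
    spectrum expressed relative to the reference illuminant, approximating
    geometry-dependent changes in incident irradiance.

    spectra     : (N, B) array of per-pixel spectra
    illuminants : (K, B) array of candidate relative-irradiance spectra
    """
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.integers(0, len(illuminants), size=len(spectra))
    return spectra * illuminants[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pixels = rng.random((5, 100))                    # 5 labelled pixels, 100 bands
    bands = np.linspace(400, 1000, 100)              # hypothetical wavelengths (nm)
    # Two illustrative illuminants: unity (sunlit) and one skewed towards
    # shorter wavelengths (shadowed region lit mainly by sky light).
    shadow = 1.2 - 0.8 * (bands - 400.0) / 600.0
    augmented = relight_augment(pixels, np.stack([np.ones(100), shadow]), rng)
    print(augmented.shape)
```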