In recent years, machine learning has advanced by leaps and bounds, enabling applications with high recognition accuracy for speech and images. However, other types of data to which these models can be applied have not yet been explored as thoroughly. In particular, accurately classifying single- or multi-modal, real-time sensor data remains relatively challenging. Labelling is an indispensable stage of data pre-processing, and it becomes even more challenging when sensor data are collected in real time. Currently, real-time sensor data labelling is an unwieldy process with limited tools available, and its poor performance characteristics can compromise the performance of the resulting machine learning models. In this paper, we introduce new techniques for labelling at the point of collection, coupled with a systematic performance comparison of two popular types of Deep Neural Networks running on five custom-built edge devices. These state-of-the-art edge devices are designed to enable real-time labelling with buttons, a slide potentiometer and force sensors. This research provides results and insights that can help researchers utilising edge devices for real-time data collection to select appropriate labelling techniques. We also identify common bottlenecks in each architecture and provide field-tested guidelines to assist developers in building adaptive, high-performance edge solutions.