Abstract: Since Convolutional Neural Networks (ConvNets) are able to simultaneously learn features and classifiers to discriminate different categories of activities, recent works have employed ConvNet-based approaches to perform human activity recognition (HAR) from wearable sensors, removing the need for expensive manual effort and expert knowledge. However, the discriminative power of these approaches is limited mainly by the large number of parameters that compose the network and the small number of samples available for training. Motivated by this, we propose an accurate and robust approach, referred to as Latent HyperNet (LHN). The LHN takes feature maps from early layers (hyper) and projects each of them onto a low-dimensional space (latent). These latent features are then concatenated and fed to a classifier. To demonstrate the robustness and accuracy of the LHN, we evaluate it using four different network architectures on five publicly available wearable-sensor HAR datasets, which vary in sampling rate and number of activities. Our experiments demonstrate that the proposed LHN produces rich information, improving on the results of the original ConvNets. Furthermore, the method outperforms existing state-of-the-art methods.
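The sketch below is only a rough illustration of the pipeline the abstract describes: capture feature maps from early layers, project each map individually onto a low-dimensional latent space, concatenate the latent features, and feed them to a classifier. The backbone (SmallConvNet), the projection (PCA), the classifier (LinearSVC), and the helper latent_hyper_features are all placeholders chosen for illustration and are not the components actually used by the LHN.

```python
# Minimal sketch of the pipeline outlined in the abstract (assumptions only):
# (1) capture early-layer feature maps, (2) project each map individually to a
# low-dimensional latent space, (3) concatenate the latents, (4) classify.
# PCA, LinearSVC, and the toy backbone stand in for the paper's actual choices.

import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC


class SmallConvNet(nn.Module):
    """Toy 1D ConvNet standing in for a HAR backbone (illustrative, not the paper's)."""

    def __init__(self, in_channels=3, n_classes=6):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv1d(in_channels, 16, 5, padding=2),
                                    nn.ReLU(), nn.MaxPool1d(2))
        self.block2 = nn.Sequential(nn.Conv1d(16, 32, 5, padding=2),
                                    nn.ReLU(), nn.MaxPool1d(2))
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        f1 = self.block1(x)          # early feature map ("hyper" features)
        f2 = self.block2(f1)         # deeper feature map
        logits = self.head(f2.mean(dim=2))
        return logits, [f1, f2]      # expose intermediate maps


def latent_hyper_features(feature_maps, projections=None, n_components=8, fit=False):
    """Project each feature map individually to a low-dimensional latent space
    and concatenate the results (PCA is used here as a stand-in projection)."""
    flat = [f.flatten(start_dim=1).detach().numpy() for f in feature_maps]
    if fit:
        projections = [PCA(n_components=n_components).fit(x) for x in flat]
    latents = [p.transform(x) for p, x in zip(projections, flat)]
    return np.concatenate(latents, axis=1), projections


if __name__ == "__main__":
    # Synthetic wearable-sensor windows: (samples, channels, time steps).
    X_train = torch.randn(64, 3, 128)
    X_test = torch.randn(16, 3, 128)
    y_train = np.random.randint(0, 6, size=64)

    net = SmallConvNet()
    with torch.no_grad():
        _, maps_train = net(X_train)
        _, maps_test = net(X_test)

    z_train, projs = latent_hyper_features(maps_train, fit=True)
    z_test, _ = latent_hyper_features(maps_test, projections=projs)

    clf = LinearSVC().fit(z_train, y_train)   # final classifier on latent features
    print("predicted activities:", clf.predict(z_test)[:5])
```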