Abstract: Spiking Neural Networks (SNNs) offer promising energy efficiency advantages, particularly when processing sparse spike trains. However, their incompatibility with traditional datasets, which consist of batches of input vectors rather than spike trains, necessitates the development of efficient encoding methods. This paper introduces a novel, open-source, PyTorch-compatible Python framework for spike encoding, designed for neuromorphic applications in machine learning and reinforcement learning. The framework supports a range of encoding algorithms, including Leaky Integrate-and-Fire (LIF), Step Forward (SF), Pulse Width Modulation (PWM), and Ben's Spiker Algorithm (BSA), as well as specialized encoding strategies covering population coding and reinforcement learning scenarios. Furthermore, we investigate the performance trade-offs of each method on embedded hardware using C/C++ implementations, considering energy consumption, computation time, spike sparsity, and reconstruction accuracy. Our findings indicate that SF typically achieves the lowest reconstruction error, offers the highest energy efficiency and the fastest encoding speed, and attains the second-best spike sparsity, while other methods demonstrate particular strengths depending on the signal characteristics. This framework and the accompanying empirical analysis provide valuable resources for selecting optimal encoding strategies for energy-efficient SNN applications.
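For intuition, here is a minimal sketch of the Step Forward (SF) encoder referenced above, assuming a fixed per-signal threshold; the function names and signatures are illustrative stand-ins, not the framework's actual API:

```python
import torch

def step_forward_encode(signal: torch.Tensor, threshold: float) -> torch.Tensor:
    # Hypothetical sketch of Step Forward (SF) encoding: emit a +1/-1 spike
    # whenever the signal moves more than `threshold` away from a running
    # baseline, then shift the baseline by one threshold step.
    spikes = torch.zeros_like(signal)
    baseline = signal[0].item()
    for t in range(1, signal.numel()):
        if signal[t] > baseline + threshold:
            spikes[t] = 1.0
            baseline += threshold
        elif signal[t] < baseline - threshold:
            spikes[t] = -1.0
            baseline -= threshold
    return spikes

def step_forward_decode(spikes: torch.Tensor, threshold: float, init: float) -> torch.Tensor:
    # Reconstruction accumulates the threshold steps from the initial value,
    # which is why the threshold choice drives the reconstruction error.
    return init + threshold * torch.cumsum(spikes, dim=0)
```

A coarser threshold yields sparser spike trains at the cost of reconstruction accuracy, which is the trade-off the embedded-hardware benchmarks above quantify.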
Abstract: The advancements in smart sensors for Industry 4.0 offer ample opportunities for low-power predictive maintenance and condition monitoring. However, traditional approaches in this field rely on processing in the cloud, which incurs high costs in energy and storage. This paper investigates the potential of neural networks for low-power on-device computation of vibration sensor data for predictive maintenance. We review the literature on Spiking Neural Networks (SNNs) and Artificial Neural Networks (ANNs) for vibration-based predictive maintenance by analyzing datasets, data preprocessing, network architectures, and hardware implementations. Our findings suggest that no satisfactory standard benchmark dataset exists for evaluating neural networks in predictive maintenance tasks. Furthermore, frequency-domain transformations are commonly employed for preprocessing. SNNs mainly use shallow feed-forward architectures, whereas ANNs explore a wider range of models and deeper networks. Finally, we highlight the need for future research on hardware implementations of neural networks for low-power predictive maintenance applications and the development of a standardized benchmark dataset.
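To make the preprocessing finding concrete, here is a minimal sketch of the kind of frequency-domain transform commonly applied to a vibration window before it is fed to a network; the helper name is hypothetical and stands in for the various FFT- and spectrogram-based pipelines in the surveyed literature:

```python
import torch

def vibration_window_spectrum(window: torch.Tensor) -> torch.Tensor:
    # Hypothetical preprocessing helper: apply a Hann window to suppress
    # spectral leakage, then take the magnitude of the one-sided FFT.
    tapered = window * torch.hann_window(window.numel())
    return torch.fft.rfft(tapered).abs()
```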