Abstract: Secure navigation is pivotal for several applications, including autonomous vehicles, robotics, and aviation. The inertial navigation system (INS) estimates position, velocity, and attitude through dead reckoning, especially when external references such as GPS are unavailable. However, the three accelerometers and three gyroscopes that compose the system are exposed to various error sources, including bias errors, scale factor errors, and noise, which can significantly degrade navigation accuracy and constitute a key vulnerability of the system. This work adopts a supervised convolutional neural network (ConvNet) to address this vulnerability inherent in inertial navigation systems. In addition, this paper evaluates the impact of the ConvNet's depth on the accuracy of these corrections, with the aim of determining the layer configuration that maximizes the effectiveness of INS error correction and leads to precise navigation solutions.
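A minimal sketch of how such a supervised ConvNet corrector could be set up in PyTorch. The details here are illustrative assumptions, not taken from the abstract: 6-channel IMU windows (3 accelerometer + 3 gyroscope axes), a per-sample sensor-error regression target, and a configurable number of convolutional layers standing in for the "depth" under study.

```python
# Sketch only: window length, channel widths, and the error-regression
# target are assumptions for illustration, not the paper's configuration.
import torch
import torch.nn as nn

class IMUErrorConvNet(nn.Module):
    def __init__(self, in_channels: int = 6, depth: int = 3, width: int = 64):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(depth):  # 'depth' is the hyperparameter the paper studies
            layers += [nn.Conv1d(c, width, kernel_size=5, padding=2),
                       nn.BatchNorm1d(width),
                       nn.ReLU()]
            c = width
        self.features = nn.Sequential(*layers)
        # 1x1 convolution maps features to a per-sample 6-channel error estimate
        self.head = nn.Conv1d(width, in_channels, kernel_size=1)

    def forward(self, x):  # x: (batch, 6, window)
        return self.head(self.features(x))

# One training step: regress predicted sensor errors against reference errors
model = IMUErrorConvNet(depth=3)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
imu = torch.randn(32, 6, 100)   # raw accelerometer/gyro windows (synthetic)
err = torch.randn(32, 6, 100)   # ground-truth sensor errors (synthetic)
optim.zero_grad()
loss = loss_fn(model(imu), err)
loss.backward()
optim.step()
```

Varying the `depth` argument and comparing validation error is one straightforward way to run the layer-depth study the abstract describes.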
Abstract: This paper introduces an innovative intrusion detection system that harnesses Generative Adversarial Networks (GANs), Multi-Scale Convolutional Neural Networks (MSCNNs), and Bidirectional Long Short-Term Memory (BiLSTM) networks, supplemented by Local Interpretable Model-Agnostic Explanations (LIME) for interpretability. A GAN generates realistic network traffic data encompassing both normal and attack patterns. This synthesized data is then fed into an MSCNN-BiLSTM architecture for intrusion detection. The MSCNN layer extracts features from the network traffic data at different scales, while the BiLSTM layer captures temporal dependencies within the traffic sequences. Integrating LIME allows the model's decisions to be explained. Evaluation on the Hogzilla dataset, a standard benchmark, yields an accuracy of 99.16\% for multi-class classification and 99.10\% for binary classification while preserving interpretability through LIME. This fusion of deep learning and interpretability presents a promising avenue for enhancing intrusion detection systems by improving transparency and decision support in network security.
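As an illustration of the detection backbone only, here is a minimal PyTorch sketch of an MSCNN-BiLSTM classifier; the GAN data-generation and LIME explanation stages are omitted, and the kernel sizes, channel counts, sequence length, and class count are assumptions for the sketch rather than values from the paper.

```python
# Sketch only: shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class MSCNNBiLSTM(nn.Module):
    def __init__(self, n_classes: int = 2, channels: int = 32):
        super().__init__()
        # Multi-scale CNN: parallel branches with different kernel sizes
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(1, channels, k, padding=k // 2), nn.ReLU())
            for k in (3, 5, 7)
        ])
        # BiLSTM over the concatenated multi-scale feature maps
        self.bilstm = nn.LSTM(input_size=3 * channels, hidden_size=64,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                  # x: (batch, 1, seq_len)
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # (batch, 3C, seq)
        seq = feats.transpose(1, 2)        # (batch, seq, 3C) for the LSTM
        out, _ = self.bilstm(seq)
        return self.classifier(out[:, -1]) # last time step -> class logits

logits = MSCNNBiLSTM(n_classes=2)(torch.randn(8, 1, 64))
```

For the interpretability stage, the trained model's prediction function could then be handed to a LIME explainer, which perturbs inputs locally to attribute each decision to individual traffic features.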
Abstract: Deep learning is now employed extensively across a range of research domains, and continuing advances in deep learning techniques help solve increasingly intricate challenges. Activation functions (AFs) are fundamental components of neural networks, enabling them to capture complex patterns and relationships in data. By introducing non-linearities, AFs empower neural networks to model the diverse and nuanced nature of real-world data, enhancing their ability to make accurate predictions across various tasks. In the context of intrusion detection, Mish, a recent AF, was implemented in a CNN-BiGRU model using three datasets: ASNM-TUN, ASNM-CDX, and HOGZILLA. Comparison with the Rectified Linear Unit (ReLU), a widely used AF, revealed that Mish outperforms ReLU across the evaluated datasets. This study highlights the effectiveness of the choice of AF in elevating the performance of intrusion detection systems.
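Mish is defined as Mish(x) = x · tanh(softplus(x)), a smooth, non-monotonic alternative to ReLU. A minimal PyTorch sketch of swapping it for ReLU in a small CNN-BiGRU block follows; the layer sizes and input shapes are illustrative assumptions, not the paper's configuration.

```python
# Sketch only: layer widths and sequence shape are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish(x) = x * tanh(softplus(x))"""
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))

class CNNBiGRU(nn.Module):
    def __init__(self, act: nn.Module, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 32, 3, padding=1), act)
        self.bigru = nn.GRU(32, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                  # x: (batch, 1, seq_len)
        h = self.conv(x).transpose(1, 2)   # (batch, seq_len, 32) for the GRU
        out, _ = self.bigru(h)
        return self.fc(out[:, -1])         # last time step -> class logits

relu_model = CNNBiGRU(nn.ReLU())  # baseline activation
mish_model = CNNBiGRU(Mish())     # the activation under comparison
```

Because the activation is passed in as a module, the two variants share an identical architecture, isolating the AF as the only experimental variable.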