Abstract: The rapid evolution of mobile networks from 5G to 6G has necessitated the development of autonomous network management systems, such as Zero-Touch Networks (ZTNs). However, the increased complexity and automation of these networks have also escalated cybersecurity risks. Existing Intrusion Detection Systems (IDSs) leveraging traditional Machine Learning (ML) techniques have shown effectiveness in mitigating these risks, but they often require extensive manual effort and expert knowledge. To address these challenges, this paper proposes an Automated Machine Learning (AutoML)-based autonomous IDS framework towards achieving autonomous cybersecurity for next-generation networks. To achieve autonomous intrusion detection, the proposed AutoML framework automates all critical procedures of the data analytics pipeline, including data pre-processing, feature engineering, model selection, hyperparameter tuning, and model ensemble. Specifically, it utilizes a Tabular Variational Auto-Encoder (TVAE) method for automated data balancing, tree-based ML models for automated feature selection and base model learning, Bayesian Optimization (BO) for hyperparameter optimization, and a novel Optimized Confidence-based Stacking Ensemble (OCSE) method for automated model ensemble. The proposed AutoML-based IDS was evaluated on two public benchmark network security datasets, CICIDS2017 and 5G-NIDD, and demonstrated improved performance compared to state-of-the-art cybersecurity methods. This research marks a significant step towards fully autonomous cybersecurity in next-generation networks, potentially revolutionizing network security applications.
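To illustrate the general idea of a confidence-based stacking stage, the following minimal Python sketch feeds the class probabilities (confidences) of tree-based base models into a simple meta-learner. It is only an illustration under assumed data and models, not the authors' OCSE method; in practice out-of-fold predictions would be used for the meta-learner.

```python
# Minimal sketch of confidence-based stacking (illustrative, not the paper's OCSE).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder imbalanced dataset standing in for a network-intrusion dataset.
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Tree-based base models (assumed choices).
base_models = [RandomForestClassifier(n_estimators=100, random_state=0),
               ExtraTreesClassifier(n_estimators=100, random_state=0)]
for m in base_models:
    m.fit(X_tr, y_tr)

# Stack the predicted class probabilities of the base models as meta-features.
meta_train = np.hstack([m.predict_proba(X_tr) for m in base_models])
meta_test = np.hstack([m.predict_proba(X_te) for m in base_models])

# Simple meta-learner on top of the stacked confidences.
meta_learner = LogisticRegression(max_iter=1000).fit(meta_train, y_tr)
print("Stacked accuracy:", meta_learner.score(meta_test, y_te))
```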
Abstract: Autonomous Vehicles (AVs), furnished with sensors capable of capturing essential vehicle dynamics such as speed, acceleration, and precise location, possess the capacity to execute intelligent maneuvers, including lane changes, in anticipation of approaching roadblocks. Nevertheless, the sheer volume of sensory data and the processing necessary to derive informed decisions can often overwhelm the vehicles, rendering them unable to handle the task independently. Consequently, a common approach in traffic scenarios involves transmitting the data to servers for processing, a practice that introduces challenges, particularly in situations demanding real-time processing. In response to this challenge, we present a novel Deep Learning (DL)-based semantic traffic control system that entrusts semantic encoding responsibilities to the vehicles themselves. This system processes driving decisions obtained from a Reinforcement Learning (RL) agent, streamlining the decision-making process. Specifically, our framework envisions scenarios where abrupt roadblocks materialize due to factors such as road maintenance, accidents, or vehicle repairs, necessitating vehicles to make determinations concerning lane-keeping or lane-changing actions to navigate past these obstacles. To formulate this scenario mathematically, we employ a Markov Decision Process (MDP) and harness the Deep Q-Network (DQN) algorithm to find viable solutions.
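A minimal sketch of the DQN idea for a binary lane decision (keep lane vs. change lane) is given below. The state vector, network size, and reward handling are illustrative assumptions, not the paper's traffic-control system.

```python
# Minimal DQN sketch for a lane-keeping vs. lane-changing decision (illustrative only).
import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 2  # e.g., [speed, acceleration, lane, distance_to_roadblock]

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state, epsilon=0.1):
    """Epsilon-greedy action selection over the Q-values."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def td_update(state, action, reward, next_state, done, gamma=0.99):
    """One temporal-difference update toward the Bellman target."""
    q = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max() * (1.0 - float(done))
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

state = torch.rand(STATE_DIM)            # placeholder observation
action = select_action(state)            # 0 = keep lane, 1 = change lane
```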
Abstract: Vibrations of damaged bearings are manifested as modulations in the amplitude of the generated vibration signal, making envelope analysis an effective approach for discriminating between healthy and abnormal vibration patterns. Motivated by this, we introduce a low-complexity method for vibration-based condition monitoring (VBCM) of rolling bearings based on envelope analysis. In the proposed method, the instantaneous amplitude (envelope) and instantaneous frequency of the vibration signal are jointly utilized to facilitate three novel envelope representations: instantaneous amplitude-frequency mapping (IAFM), instantaneous amplitude-frequency correlation (IAFC), and instantaneous energy-frequency distribution (IEFD). Maintaining temporal information, these representations effectively capture energy-frequency variations that are unique to the condition of the bearing, thereby enabling the extraction of discriminative features with high sensitivity to variations in operational conditions. Accordingly, six new highly discriminative features are engineered from these representations, capturing and characterizing their shapes. The experimental results show outstanding performance in detecting and diagnosing various fault types, demonstrating the effectiveness of the proposed method in capturing unique variations in energy and frequency between healthy and faulty bearings. Moreover, the proposed method has moderate computational complexity, meeting the requirements of real-time applications. Further, the Python code of the proposed method is made public to support collaborative research efforts and ensure the reproducibility of the presented work.
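To make the envelope-analysis idea concrete, the short Python sketch below extracts the instantaneous amplitude and instantaneous frequency via the Hilbert transform. The sampling rate and toy signal are assumptions, and the paper's IAFM/IAFC/IEFD representations are not reproduced here.

```python
# Minimal envelope-analysis sketch based on the analytic signal (illustrative only).
import numpy as np
from scipy.signal import hilbert

fs = 12_000                                  # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Toy signal: a 3 kHz carrier amplitude-modulated at 100 Hz, mimicking fault impacts.
x = (1 + 0.5 * np.sin(2 * np.pi * 100 * t)) * np.sin(2 * np.pi * 3000 * t)

analytic = hilbert(x)
envelope = np.abs(analytic)                       # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)     # instantaneous frequency in Hz

# The envelope spectrum typically reveals fault-characteristic modulation frequencies.
env_spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, d=1 / fs)
print("Dominant modulation frequency (Hz):", freqs[env_spectrum.argmax()])
```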
Abstract: As technology advances, the use of Machine Learning (ML) in cybersecurity is becoming increasingly crucial to tackle the growing complexity of cyber threats. While traditional ML models can enhance cybersecurity, their high energy and resource demands limit their applications, leading to the emergence of Tiny Machine Learning (TinyML) as a more suitable solution for resource-constrained environments. TinyML focuses on optimizing ML algorithms for small, low-power devices, enabling intelligent data processing directly on edge devices, and is widely applied in areas such as smart homes, healthcare, and industrial automation. This paper provides a comprehensive review of common challenges of TinyML techniques, such as power consumption, limited memory, and computational constraints; it also explores potential solutions to these challenges, such as energy harvesting, computational optimization techniques, and transfer learning for privacy preservation. Furthermore, this paper discusses TinyML's applications in advancing cybersecurity for Electric Vehicle Charging Infrastructures (EVCIs) as a representative use case. It presents an experimental case study that enhances cybersecurity in EVCI using TinyML, evaluated against traditional ML in terms of reduced delay and memory usage, with a slight trade-off in accuracy. Additionally, the study includes a practical setup using the ESP32 microcontroller in the PlatformIO environment, which provides a hands-on assessment of TinyML's application in cybersecurity for EVCI.
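A common TinyML workflow is to train a small model and shrink it with post-training quantization before flashing it to a microcontroller. The sketch below shows this step with TensorFlow Lite; the model architecture, feature count, and file name are assumptions and do not reproduce the paper's EVCI model.

```python
# Minimal sketch: shrink a small Keras model for a microcontroller target (illustrative).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),              # e.g., 20 charging/traffic features
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # benign vs. attack
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... train the model on the intrusion-detection dataset here ...

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantization
tflite_model = converter.convert()

with open("evci_ids.tflite", "wb") as f:               # flash-ready artifact for the ESP32
    f.write(tflite_model)
print("Model size (bytes):", len(tflite_model))
```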
Abstract: Unmanned Aerial Vehicles (UAVs) will be critical infrastructural components of future smart cities. In order to operate efficiently, UAV reliability must be ensured by constant monitoring for faults and failures. To this end, the work presented in this paper leverages signal processing and Machine Learning (ML) methods to analyze the data of a comprehensive vibrational analysis to determine the presence of rotor blade defects during pre- and post-flight operation. With the help of dimensionality reduction techniques, the Random Forest algorithm exhibited the best performance and detected defective rotor blades perfectly. Additionally, a comprehensive analysis of the impact of various feature subsets is presented to gain insight into the factors affecting the model's classification decision process.
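The classification stage described above can be sketched as dimensionality reduction followed by a Random Forest. The snippet below uses PCA as one common reduction choice on placeholder data; the feature set and dimensions are assumptions, not the paper's dataset.

```python
# Minimal sketch: dimensionality reduction + Random Forest on vibration features (illustrative).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X = np.random.rand(200, 40)          # 200 recordings x 40 vibration features (placeholder)
y = np.random.randint(0, 2, 200)     # 0 = healthy rotor blade, 1 = defective (placeholder)

clf = make_pipeline(PCA(n_components=10),
                    RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, X, y, cv=5)
print("Cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```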
Abstract: Large language models (LLMs) are rapidly emerging in Artificial Intelligence (AI) applications, especially in the fields of natural language processing and generative AI. Beyond text generation applications, these models can inherently leverage prompt engineering, where their inputs are appropriately structured to articulate a model's purpose explicitly. A prominent example of this is intent-based networking, an emerging approach for automating and maintaining network operations and management. This paper presents semantic routing to achieve enhanced performance in LLM-assisted intent-based management and orchestration of 5G core networks. This work establishes an end-to-end intent extraction framework and presents a diverse dataset of sample user intents accompanied by a thorough analysis of the effects of encoders and quantization on overall system performance. The results show that using a semantic router improves the accuracy and efficiency of the LLM deployment compared to stand-alone LLMs with prompting architectures.
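The core of semantic routing is matching an incoming intent to route prototypes in embedding space before (or instead of) invoking the full LLM. The sketch below illustrates this with a sentence encoder and cosine similarity; the encoder name, routes, and threshold are assumptions, not the paper's configuration.

```python
# Minimal semantic-routing sketch: embed intents and match them to route prototypes (illustrative).
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed encoder

routes = {                                          # assumed route prototypes
    "create_slice": "set up a new network slice for video streaming",
    "scale_upf": "increase the capacity of the user plane function",
}
route_names = list(routes)
route_vecs = encoder.encode(list(routes.values()))

def route_intent(utterance, threshold=0.5):
    """Return the best-matching route, or None to fall back to the full LLM."""
    v = encoder.encode([utterance])[0]
    sims = route_vecs @ v / (np.linalg.norm(route_vecs, axis=1) * np.linalg.norm(v))
    best = int(np.argmax(sims))
    return route_names[best] if sims[best] >= threshold else None

print(route_intent("please deploy a slice dedicated to streaming video"))
```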
Abstract: Recent advancements in sensing, measurement, and computing technologies have significantly expanded the potential for signal-based applications, leveraging the synergy between signal processing and Machine Learning (ML) to improve both performance and reliability. This fusion represents a critical point in the evolution of signal-based systems, highlighting the need to bridge the existing knowledge gap between these two interdisciplinary fields. Despite many attempts in the existing literature to bridge this gap, most are limited to specific applications and focus mainly on feature extraction, often assuming extensive prior knowledge in signal processing. This assumption creates a significant obstacle for a wide range of readers. To address these challenges, this paper takes an integrated article approach. It begins with a detailed tutorial on the fundamentals of signal processing, providing the reader with the necessary background knowledge. Following this, it explores the key stages of a standard signal processing-based ML pipeline, offering an in-depth review of feature extraction techniques, their inherent challenges, and solutions. Differing from existing literature, this work offers an application-independent review and introduces a novel classification taxonomy for feature extraction techniques. Furthermore, it links theoretical concepts with practical applications, demonstrating this through two specific use cases: a spectral-based method for condition monitoring of rolling bearings and a wavelet energy analysis for epilepsy detection using EEG signals. In addition to theoretical contributions, this work promotes a collaborative research culture by providing a public repository of relevant Python and MATLAB signal processing codes. This repository is intended to support collaborative research and ensure the reproducibility of the results presented.
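As a flavor of the wavelet energy use case, the sketch below decomposes a signal with a discrete wavelet transform and uses the relative energy of each sub-band as features. The wavelet family, decomposition level, and synthetic signal are illustrative assumptions, not the paper's EEG setup.

```python
# Minimal wavelet-energy feature sketch (illustrative only).
import numpy as np
import pywt

fs = 256                                     # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # toy EEG-like signal

coeffs = pywt.wavedec(x, wavelet="db4", level=5)    # [cA5, cD5, cD4, cD3, cD2, cD1]
energies = np.array([np.sum(c ** 2) for c in coeffs])
relative_energy = energies / energies.sum()         # one feature per sub-band

for name, e in zip(["A5", "D5", "D4", "D3", "D2", "D1"], relative_energy):
    print(f"{name}: {e:.3f}")
```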
Abstract: Cyber-security attacks pose a significant threat to the operation of autonomous systems. Particularly impacted are the Heating, Ventilation, and Air Conditioning (HVAC) systems in smart buildings, which depend on data gathered by sensors and on Machine Learning (ML) models that use the captured data. As such, attacks that alter the readings of these sensors can severely affect the HVAC system operations, impacting residents' comfort and energy reduction goals. Such attacks may induce changes in the online data distribution being fed to the ML models, violating the fundamental assumption of similarity in training and testing data distribution. This leads to a degradation in model prediction accuracy due to a phenomenon known as Concept Drift (CD) - the alteration in the relationship between input features and the target variable. Addressing CD requires identifying the source of drift to apply targeted mitigation strategies, a process termed drift explanation. This paper proposes a Feature Drift Explanation (FDE) module to identify the drifting features. FDE utilizes an Auto-encoder (AE) that reconstructs the activations of the first layer of the regression Deep Learning (DL) model and finds their latent representations. When a drift is detected, each feature of the drifting data is replaced by its representative counterpart from the training data. The Minkowski distance is then used to measure the divergence between the altered drifting data and the original training data. The results show that FDE successfully identifies 85.77% of drifting features and showcases its utility for DL adaptation under the CD phenomenon. As a result, the FDE method is an effective strategy for identifying drifting features towards thwarting cyber-security attacks.
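The final divergence step can be sketched as a per-feature Minkowski distance between training data and drifting data, with the most divergent feature flagged as the drift source. The data, feature count, and order p below are illustrative assumptions, and the AE reconstruction stage is omitted.

```python
# Minimal sketch of the per-feature Minkowski divergence step (illustrative only).
import numpy as np
from scipy.spatial.distance import minkowski

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 8))      # training feature matrix (placeholder)
drift = rng.normal(0.0, 1.0, size=(500, 8))
drift[:, 2] += 3.0                               # inject drift into feature index 2

# Compare per-feature profiles (sorted values used as a simple 1-D summary).
divergence = np.array([minkowski(np.sort(train[:, j]), np.sort(drift[:, j]), p=2)
                       for j in range(train.shape[1])])
print("Most drifted feature index:", int(divergence.argmax()))
```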
Abstract: The hyper-parameter optimization (HPO) process is imperative for finding the best-performing Convolutional Neural Networks (CNNs). The automation of HPO is characterized by its sizable computational footprint and its lack of transparency; both are important factors in a resource-constrained Internet of Things (IoT) environment. In this paper, we address these problems by proposing a novel approach, TRL-HPO, that combines a transformer architecture and an actor-critic Reinforcement Learning (RL) model, equipped with multi-headed attention that enables parallelization and progressive generation of layers. These design choices are validated empirically by evaluating TRL-HPO on the MNIST dataset and comparing it with state-of-the-art approaches that build CNN models from scratch. The results show that TRL-HPO outperforms the classification results of these approaches by 6.8% within the same time frame, demonstrating the efficiency of TRL-HPO for the HPO process. The analysis of the results identifies the stacking of fully connected layers as the main culprit behind performance degradation. This paper identifies new avenues for improving RL-based HPO processes in resource-constrained environments.
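One way to picture progressive layer generation with multi-headed attention is a small transformer policy that reads the layers chosen so far and outputs a distribution over the next candidate layer, alongside a critic value. The sketch below is an illustrative policy/critic pair under an assumed layer vocabulary, not the TRL-HPO implementation.

```python
# Minimal sketch of attention-based progressive layer generation (illustrative, not TRL-HPO).
import torch
import torch.nn as nn

LAYER_VOCAB = ["conv3x3", "conv5x5", "maxpool", "fc", "stop"]   # assumed action space
EMB = 32

embed = nn.Embedding(len(LAYER_VOCAB), EMB)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=EMB, nhead=4, batch_first=True), num_layers=2)
policy_head = nn.Linear(EMB, len(LAYER_VOCAB))   # actor: logits over the next layer
value_head = nn.Linear(EMB, 1)                   # critic: state-value estimate

def next_layer_distribution(chosen_ids):
    """chosen_ids: indices of the layers generated so far."""
    tokens = torch.tensor([chosen_ids])              # (1, seq_len)
    h = encoder(embed(tokens))                       # (1, seq_len, EMB)
    last = h[:, -1]                                  # summary of the current design
    return torch.softmax(policy_head(last), dim=-1), value_head(last)

probs, value = next_layer_distribution([0, 2])       # conv3x3 -> maxpool -> ?
print(probs)
```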
Abstract: The integration of Machine Learning and Artificial Intelligence (ML/AI) into fifth-generation (5G) networks has made evident the limitations of network intelligence in the face of the ever-increasing, strenuous requirements of current and next-generation devices. This transition to ubiquitous intelligence demands high connectivity, synchronicity, and end-to-end communication between users and network operators, and will pave the way towards full network automation without human intervention. Intent-based networking is a key factor in reducing human actions, roles, and responsibilities while shifting towards novel methods of extracting and interpreting intents for automated network management. This paper presents the development of a custom Large Language Model (LLM) for 5G and next-generation intent-based networking and provides insights into future LLM developments and integrations to realize end-to-end intent-based networking for fully automated network intelligence.