Abstract:Generative Artificial Intelligence (GenAI) models such as LLMs, GPTs, and Diffusion Models have recently gained widespread attention from both the research and industrial communities. This survey explores their application in network monitoring and management, focusing on prominent use cases, as well as challenges and opportunities. We discuss how network traffic generation and classification, network intrusion detection, networked system log analysis, and network digital assistance can benefit from the use of GenAI models. Additionally, we provide an overview of the available GenAI models, datasets for large-scale training, and platforms for the development of such models. Finally, we discuss research directions that could mitigate the roadblocks to the adoption of GenAI for network monitoring and management. Our investigation aims to map the current landscape and pave the way for future research in leveraging GenAI for network monitoring and management.
Abstract:In this paper, we tackle decision fusion for distributed detection in randomly deployed clustered wireless sensor networks (WSNs) operating over non-ideal multiple access channels (MACs), i.e., considering Rayleigh fading, path loss, and additive noise. To mitigate fading, we propose distributed equal gain transmit combining (dEGTC) and distributed maximum ratio transmit combining (dMRTC). The first- and second-order statistics of the received signals are computed analytically via stochastic geometry tools. The distribution of the received signal over the MAC is then approximated by Gaussian and log-normal distributions via moment matching, which enables the derivation of moment-matching optimal fusion rules (MOR) for both distributions. Moreover, simpler suboptimal fusion rules are proposed, in which the data of all cluster heads (CHs) are equally weighted, termed the moment-matching equal gain fusion rule (MER). Simulations show that increasing the number of clusters improves the performance. Moreover, MOR-Gaussian-based algorithms perform better under free-space propagation, whereas their log-normal counterparts are more suited to the ground-reflection case; the latter also yield better results at low SNR and with few SNs. We prove that the received power at the CH over the MAC is proportional to O(λ²R²) in the free-space propagation case and to O(λ²ln²R) in the ground-reflection case, where λ is the SN deployment intensity and R is the cluster radius. This implies that having more clusters decreases the required transmission power for a given SNR at the receiver.
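To make the moment-matching step concrete, the Python sketch below (not the paper's derivation: the per-CH moments, the identical-cluster assumption, and the decision threshold are illustrative placeholders) matches Gaussian and log-normal models to first- and second-order moments and forms an LLR-style MOR statistic; the MER variant would instead weight all CH contributions equally without per-CH likelihood modeling.

    # Minimal sketch of moment-matching fusion (assumptions noted above).
    import numpy as np
    from scipy import stats

    def gaussian_from_moments(m1, m2):
        """Match a Gaussian to the first two raw moments m1, m2."""
        var = m2 - m1**2
        return stats.norm(loc=m1, scale=np.sqrt(var))

    def lognormal_from_moments(m1, m2):
        """Match a log-normal to the first two raw moments m1, m2."""
        sigma2 = np.log(m2 / m1**2)
        mu = np.log(m1) - sigma2 / 2
        return stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

    def mor_llr(y, dists_h1, dists_h0):
        """MOR-style rule: sum per-CH log-likelihood ratios, compare to a threshold."""
        return sum(d1.logpdf(yk) - d0.logpdf(yk)
                   for yk, d1, d0 in zip(y, dists_h1, dists_h0))

    # Toy usage with hypothetical per-CH moments under H1 and H0
    # (all clusters assumed statistically identical for simplicity).
    m1_h1, m2_h1 = 2.0, 5.5
    m1_h0, m2_h0 = 1.0, 1.8
    n_clusters = 4
    d1 = [gaussian_from_moments(m1_h1, m2_h1)] * n_clusters
    d0 = [gaussian_from_moments(m1_h0, m2_h0)] * n_clusters
    y = np.array([2.1, 1.7, 2.4, 0.9])    # signals received from the CHs
    decide_h1 = mor_llr(y, d1, d0) > 0.0   # threshold 0, i.e., equal priors
    print(decide_h1)

Swapping gaussian_from_moments for lognormal_from_moments yields the log-normal variant of the same rule; which of the two fits better depends on the propagation regime, as discussed in the abstract above.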
Abstract:The recent popularity growth of Deep Learning (DL) has re-ignited interest in traffic classification, with several studies demonstrating the accuracy of DL-based classifiers in identifying Internet applications' traffic. Even with the aid of hardware accelerators (GPUs, TPUs), DL model training remains expensive and limits the ability to perform the frequent model updates needed to keep up with the ever-evolving nature of Internet traffic, and of mobile traffic in particular. To address this pain point, in this work we explore Incremental Learning (IL) techniques to add new classes to models without a full retraining, hence speeding up the model update cycle. We consider iCarl, a state-of-the-art IL method, and MIRAGE-2019, a public dataset with traffic from 40 Android apps, aiming to understand "if there is a case for incremental learning in traffic classification". By dissecting iCarl's internals, we discuss ways to improve its design, contributing a revised version, namely iCarl+. Although our analysis reveals that IL techniques are still in their infancy, they are a promising research area on the roadmap towards automated DL-based traffic analysis systems.
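As a rough illustration of the class-incremental idea (a minimal sketch, not the iCarl/iCarl+ implementation evaluated in the paper; the model, data loader, class counts, and hyper-parameters are placeholders), the following Python/PyTorch snippet combines exemplar rehearsal with knowledge distillation on the old model's logits, the two ingredients at the core of iCaRL-style methods.

    # Minimal class-incremental update sketch (assumptions noted above).
    import copy
    import torch
    import torch.nn.functional as F

    def incremental_step(model, old_model, loader, n_old_classes, epochs=5, lr=1e-3):
        """One incremental step: `loader` mixes new-class samples with stored exemplars.
        `model` is assumed to already expose output units for the new classes."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        old_model.eval()
        for _ in range(epochs):
            for x, y in loader:
                logits = model(x)
                loss = F.cross_entropy(logits, y)          # learn new + rehearsed classes
                if n_old_classes > 0:
                    with torch.no_grad():
                        old_logits = old_model(x)[:, :n_old_classes]
                    # distill the old model's behaviour to limit catastrophic forgetting
                    loss = loss + F.kl_div(
                        F.log_softmax(logits[:, :n_old_classes], dim=1),
                        F.softmax(old_logits, dim=1),
                        reduction="batchmean",
                    )
                opt.zero_grad()
                loss.backward()
                opt.step()
        return model

    # Usage sketch: before each step, snapshot the current model as the "old" teacher.
    # old_model = copy.deepcopy(model)
    # model = incremental_step(model, old_model, mixed_loader, n_old_classes=30)

The appeal for traffic classification is that each new batch of apps only requires this lightweight step on new samples plus a small exemplar memory, rather than a full retraining over all previously seen classes.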