Abstract:To meet the needs of beyond-5G communications, many data-driven, artificial-intelligence-based fiber models have been put forward that exploit the regression ability of neural networks to predict pulse evolution in fiber transmission far faster than the traditional split-step Fourier method. To improve physical interpretability, principle-driven fiber models have been proposed that embed the Nonlinear Schrodinger Equation (NLSE) into their loss functions. However, both principle-driven and data-driven models must be retrained from scratch whenever the transmission conditions change, a situation that is unavoidable in fiber communication optimization work. When many transmission conditions must be evaluated, the whole model, with its relatively large number of parameters, has to be retrained many times, which incurs high time costs and drags down computing efficiency. To address this problem, we propose a principle-driven parameterized fiber model. Via the reduced basis method, this model decomposes the predicted NLSE solution under a given transmission condition into a linear combination of eigen solutions, each output by a pre-trained principle-driven fiber model. The model thus greatly alleviates the heavy burden of retraining, since only the linear-combination coefficients need to be found when the transmission condition changes. It therefore possesses strong physical interpretability while also achieving high computing efficiency. In our demonstration, the model's computational complexity is 0.0113% of that of the split-step Fourier method and 1% of that of the previously proposed principle-driven fiber model.
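A minimal sketch of the reduced-basis step described above, assuming hypothetical stand-in basis functions in place of the pre-trained principle-driven networks: the linear-combination coefficients are fitted by least squares against a known launch pulse, and the fitted combination is then evaluated at a new position along the fiber.

```python
# Minimal reduced-basis sketch; the basis functions are hypothetical
# placeholders for pre-trained principle-driven fiber models.
import numpy as np

def fit_rb_coefficients(basis, z, t, target_field):
    """Fit coefficients c so that sum_k c[k] * basis[k](z, t)
    matches target_field in the least-squares sense."""
    A = np.stack([u(z, t) for u in basis], axis=1)   # (num_points, K)
    c, *_ = np.linalg.lstsq(A, target_field, rcond=None)
    return c

def rb_predict(basis, c, z, t):
    """Evaluate the reduced-basis solution at new (z, t) points."""
    return np.stack([u(z, t) for u in basis], axis=1) @ c

# Toy basis standing in for the pre-trained eigen solutions.
basis = [lambda z, t, k=k: np.exp(-(t - 0.2 * k) ** 2) * np.cos(k * z)
         for k in range(1, 5)]
t0 = np.linspace(-3.0, 3.0, 64)
launch_pulse = np.exp(-t0 ** 2 / 0.5)               # known input pulse at z = 0
c = fit_rb_coefficients(basis, np.zeros(64), t0, launch_pulse)
field_at_z1 = rb_predict(basis, c, np.full(64, 1.0), t0)
```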
Abstract:In this manuscript, a novel principle-driven fiber transmission model for short-distance transmission with parameterized inputs is put forward. By combining the previously proposed principle-driven fiber model with the reduced basis expansion method and transforming the parameterized inputs into parameterized coefficients of the Nonlinear Schrodinger Equation, solutions for inputs corresponding to different bit rates can all be obtained without retraining the whole model. Once adopted, this model offers prominent advantages in both computational efficiency and physical interpretability. Moreover, it can be trained effectively without transmitted signals collected in advance. Tasks on on-off keying signals with bit rates ranging from 2 Gbps to 50 Gbps demonstrate the fidelity of the model.
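For reference, the equation whose coefficients are parameterized is, in its standard scalar form from nonlinear fiber optics (the manuscript's exact parameterization is not given in the abstract):

```latex
% Standard scalar NLSE for the slowly varying envelope A(z, T):
% \alpha - attenuation, \beta_2 - group-velocity dispersion,
% \gamma - Kerr nonlinearity coefficient.
\frac{\partial A}{\partial z}
  = -\frac{\alpha}{2} A
    - \frac{i \beta_2}{2} \frac{\partial^2 A}{\partial T^2}
    + i \gamma \lvert A \rvert^2 A
```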
Abstract:Dual-function radar-communication (DFRC) is a promising candidate technology for next-generation networks. By integrating hybrid analog-digital (HAD) beamforming into a multi-user millimeter-wave (mmWave) DFRC system, we design a new reconfigurable subarray (RS) architecture and jointly optimize the HAD beamforming to maximize the communication sum-rate while ensuring a prescribed signal-to-clutter-plus-noise ratio for radar sensing. Considering the non-convexity of this problem, which arises from the multiplicative coupling of the analog and digital beamforming, we convert the sum-rate maximization into an equivalent weighted mean-square error minimization and apply penalty dual decomposition to decouple the analog and digital beamforming. Specifically, a second-order cone program is first constructed to optimize the fully digital counterpart of the HAD beamforming. Then, the sparsity of the RS architecture is exploited to obtain a low-complexity solution for the HAD beamforming. The convergence and complexity analyses of our algorithm are carried out under the RS architecture. Simulations corroborate that, with the RS architecture, DFRC offers effective communication and sensing and improves energy efficiency by 83.4% and 114.2% with a moderate number of radio frequency chains and phase shifters, compared to the persistently-connected and fully-connected architectures, respectively.
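The reformulation rests on the classical rate-WMMSE equivalence: with the MMSE receiver, the negative log of the mean-square error equals the achievable rate. The snippet below verifies this numerically for a scalar toy channel; it illustrates the identity only, not the paper's multi-user HAD algorithm.

```python
# Numerical check of the rate-WMMSE equivalence on a scalar channel:
# with the MMSE receiver u*, -log2(MSE) = log2(1 + SINR).
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal() + 1j * rng.normal()       # channel coefficient
p, sigma2 = 2.0, 1.0                       # transmit power, noise power

sinr = abs(h) ** 2 * p / sigma2
u = np.conj(h) * np.sqrt(p) / (abs(h) ** 2 * p + sigma2)   # MMSE receiver
mse = abs(1 - u * h * np.sqrt(p)) ** 2 + abs(u) ** 2 * sigma2

assert np.isclose(np.log2(1 + sinr), -np.log2(mse))
```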
Abstract:The spectrum environment map (SEM), which can visualize the information of the invisible electromagnetic spectrum, is vital for the monitoring, management, and security of spectrum resources in cognitive radio (CR) networks. In view of the limited number of spectrum sensors and constrained sampling time, this paper presents a new three-dimensional (3D) SEM construction scheme based on sparse Bayesian learning (SBL). Firstly, we construct a scenario-dependent channel dictionary matrix that accounts for the propagation characteristics of the scenario of interest. To improve sampling efficiency, a maximum mutual information (MMI)-based optimization algorithm is developed for the layout of the sampling sensors. Then, a maximum-minimum distance (MMD) clustering-based SBL algorithm is proposed to recover the spectrum data at the unsampled positions and construct the whole 3D SEM. We finally use simulation data of a campus scenario to construct the 3D SEMs and compare the proposed method with the state-of-the-art. The recovery performance and the impact of different sparsity levels on the constructed SEMs are also analyzed. Numerical results show that the proposed scheme reduces the required number of spectrum sensors and achieves higher accuracy at low sampling rates.
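As a rough illustration of the SBL recovery step, the sketch below uses scikit-learn's ARDRegression (a standard sparse Bayesian learning estimator) with a random placeholder dictionary; the paper's scenario-dependent dictionary, MMI sensor layout, and MMD-clustering SBL are not reproduced.

```python
# SBL-style recovery of a spectrum map from few sensor samples;
# the dictionary and data are illustrative placeholders.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(1)
n_grid, n_atoms, n_sensors = 500, 80, 60    # grid points, atoms, sensors

D = rng.normal(size=(n_grid, n_atoms))      # placeholder channel dictionary
w_true = np.zeros(n_atoms)
w_true[rng.choice(n_atoms, 5, replace=False)] = rng.normal(size=5)
sem_full = D @ w_true                       # ground-truth map over the grid

sampled = rng.choice(n_grid, n_sensors, replace=False)   # sensor positions
y = sem_full[sampled] + 0.01 * rng.normal(size=n_sensors)

sbl = ARDRegression(fit_intercept=False).fit(D[sampled], y)
sem_rec = D @ sbl.coef_                     # map recovered everywhere
print("NMSE:", np.sum((sem_rec - sem_full) ** 2) / np.sum(sem_full ** 2))
```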
Abstract:Path probability prediction is essential to describe the dynamic birth and death of propagation paths and to build accurate channel models for air-to-ground (A2G) communications. The occurrence probability of each path is complex and time-variant due to the rapidly changing altitudes of unmanned aerial vehicles (UAVs) and the scattering environments. Considering A2G channels under urban scenarios, this paper presents three novel stochastic probability models for the line-of-sight (LoS) path, the ground specular (GS) path, and the building-scattering (BS) path, respectively. By analyzing the geometric stochastic information of three-dimensional (3D) scattering environments, the proposed models are derived with respect to the width, height, and distribution of buildings. The effects of the Fresnel zone and the transceiver altitudes are also taken into account. Simulation results show that the proposed LoS path probability model performs well at different frequencies and altitudes, and is consistent with existing models at both low and high altitudes. Moreover, the proposed LoS and non-line-of-sight (NLoS) path probability models show good agreement with the ray-tracing (RT) simulation method.
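For context, a widely cited example of the existing LoS-probability models referred to above is the elevation-angle sigmoid of Al-Hourani et al.; the sketch below evaluates it with commonly quoted urban parameters (the paper's geometry-based models are not reproduced here).

```python
# Sigmoid LoS-probability model (Al-Hourani et al.); a and b are
# environment parameters, with commonly cited urban values shown.
import numpy as np

def p_los(elevation_deg, a=9.61, b=0.16):
    return 1.0 / (1.0 + a * np.exp(-b * (elevation_deg - a)))

for theta in (10, 30, 60, 90):
    print(f"elevation {theta} deg -> P_LoS = {p_los(theta):.3f}")
```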
Abstract:Roadside units (RSUs), which have strong computing capability and are close to vehicle nodes, have been widely used to process the delay- and computation-intensive tasks of vehicle nodes. However, due to their high mobility, vehicles may drive out of the coverage of an RSU before receiving the task-processing results. In this paper, we propose a mobile edge computing-assisted vehicular network, where vehicles can offload their tasks to a nearby vehicle via a vehicle-to-vehicle (V2V) link or to a nearby RSU via a vehicle-to-infrastructure link. Tasks can also be migrated over a V2V or infrastructure-to-infrastructure (I2I) link to avoid the scenario where a vehicle cannot receive the processed task from the RSU. Considering the mutual interference between offloaded and migrated tasks that share the same link, we construct a vehicle offloading-decision game to minimize the computation overhead. We prove that the game always converges to a Nash equilibrium by exploiting the finite improvement property. We then propose a task migration (TM) algorithm that includes three task-processing methods and two task-migration methods. Based on the TM algorithm, a computation overhead minimization offloading (COMO) algorithm is presented. Extensive simulation results show that the proposed TM and COMO algorithms reduce the computation overhead and increase the success rate of task processing.
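The finite improvement property guarantees that asynchronous best responses terminate at a Nash equilibrium. A toy best-response loop in that spirit is sketched below; the overhead model (base costs plus a congestion penalty on shared links) is an illustrative placeholder, not the paper's overhead function.

```python
# Best-response dynamics for a toy offloading game; terminates by the
# finite improvement property (the congestion game has an exact potential).
import numpy as np

rng = np.random.default_rng(2)
n, n_choices = 6, 3                       # vehicles; 0=local, 1=V2V, 2=RSU
base = rng.uniform(1.0, 3.0, size=(n, n_choices))   # placeholder overheads

def cost(i, a):
    # Shared-link interference: only V2V/RSU choices congest.
    congestion = (np.sum(a == a[i]) - 1) if a[i] > 0 else 0
    return base[i, a[i]] + 0.5 * congestion

a = np.zeros(n, dtype=int)                # everyone starts with local processing
improved = True
while improved:
    improved = False
    for i in range(n):
        costs = []
        for choice in range(n_choices):
            trial = a.copy(); trial[i] = choice
            costs.append(cost(i, trial))
        best = int(np.argmin(costs))
        if costs[best] < cost(i, a) - 1e-12:
            a[i] = best
            improved = True
print("Nash equilibrium choices:", a)
```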
Abstract:This paper proposes to deploy multiple reconfigurable intelligent surfaces (RISs) in device-to-device (D2D)-underlaid cellular systems. The uplink sum-rate of the system is maximized by jointly optimizing the transmit powers of the users, the pairing of the cellular users (CUs) and D2D links, the receive beamforming of the base station (BS), and the configuration of the RISs, subject to the power limits and quality-of-service (QoS) requirements of the users. To address the non-convexity of this problem, we develop a new block coordinate descent (BCD) framework which decouples the D2D-CU pairing, power allocation, and receive beamforming from the configuration of the RISs. Specifically, we derive closed-form expressions for the power allocation and receive beamforming under any D2D-CU pairing, which allows the D2D-CU pairing to be interpreted as a bipartite graph matching problem solved using the Hungarian algorithm. We transform the configuration of the RISs into a quadratically constrained quadratic program (QCQP) with multiple quadratic constraints. A low-complexity algorithm, named Riemannian manifold-based alternating direction method of multipliers (RM-ADMM), is developed to decompose the QCQP into simpler QCQPs with a single constraint each and solve them efficiently in a decentralized manner. Simulations show that the proposed algorithm can significantly improve the sum-rate of the D2D-underlaid system at a reduced complexity, as compared to its alternative based on semidefinite relaxation (SDR).
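The pairing step maps directly onto an assignment problem once the per-pair rates are in closed form. A minimal sketch with SciPy's Hungarian-algorithm implementation follows; the rate matrix is a random placeholder for the closed-form rates derived in the paper.

```python
# D2D-CU pairing as bipartite matching via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
n_cu = n_d2d = 5
rate = rng.uniform(1.0, 10.0, size=(n_cu, n_d2d))  # placeholder pair rates

rows, cols = linear_sum_assignment(rate, maximize=True)
print("pairing:", list(zip(rows.tolist(), cols.tolist())))
print("matched sum-rate:", rate[rows, cols].sum())
```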
Abstract:Public security vulnerability reports (e.g., CVE reports) play an important role in the maintenance of computer and network systems. Security companies and administrators rely on information from these reports to prioritize tasks such as developing and deploying patches to their customers. Since these reports are unstructured texts, automatic information extraction (IE) can help scale up the processing by converting the unstructured reports to structured forms, e.g., software names, versions, and vulnerability types. Existing works on automated IE for security vulnerability reports often rely on a large number of labeled training samples. However, creating a massive labeled training set is both expensive and time consuming. In this work, for the first time, we investigate this problem in the setting where only a small number of labeled training samples are available. In particular, we study the performance of fine-tuning several state-of-the-art pre-trained language models on our small training dataset. The results show that, with pre-trained language models and carefully tuned hyperparameters, we match or slightly outperform the state-of-the-art system on this task. Following the two-step process of [7], which first fine-tunes on the main category and then transfers to the other categories, our approach substantially decreases the number of required labeled samples in both stages: a 90% reduction in fine-tuning (from 5758 to 576 samples) and an 88.8% reduction in transfer learning (64 labeled samples per category). Our experiments thus demonstrate the effectiveness of few-sample learning on NER for security vulnerability reports. This result opens up multiple research opportunities for few-sample learning on security vulnerability reports, which are discussed in the paper. Code: https://github.com/guanqun-yang/FewVulnerability.
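A minimal fine-tuning sketch in the spirit of the paper, using HuggingFace Transformers for token classification; the label set, model checkpoint, and single training sentence are hypothetical stand-ins for the paper's corpus and configuration.

```python
# One fine-tuning step of a pre-trained LM for NER on a vulnerability report.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-SOFTWARE", "I-SOFTWARE", "B-VERSION", "I-VERSION"]  # assumed tags
tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels))
optim = torch.optim.AdamW(model.parameters(), lr=5e-5)

enc = tok("OpenSSL 1.0.1 allows remote attackers to cause a denial of service",
          return_tensors="pt")
tags = torch.zeros_like(enc["input_ids"])        # dummy all-"O" labels

out = model(**enc, labels=tags)                  # forward pass with loss
out.loss.backward()                              # one gradient step
optim.step()
print("loss:", float(out.loss))
```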
Abstract:This paper proposes two algorithms that maximize the minimum array power gain in a wide-beam mainlobe by solving the power gain pattern synthesis (PGPS) problem with and without sidelobe constraints. Firstly, the nonconvex PGPS problem is transformed into a nonconvex linear inequality optimization problem and then converted into an augmented Lagrangian problem by introducing auxiliary variables within the Alternating Direction Method of Multipliers (ADMM) framework. Next, the original intractable problem is converted into a series of nonconvex and convex subproblems. The nonconvex subproblems are solved by dividing their solution space into a finite set of smaller ones, in each of which the solution can be obtained pseudo-analytically. In this way, the proposed algorithms are superior to existing PGPS algorithms, as their convergence is theoretically guaranteed at a lower computational burden. Numerical examples with both isotropic element pattern (IEP) and active element pattern (AEP) arrays are simulated to show the effectiveness and superiority of the proposed algorithms in comparison with related existing algorithms.
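For orientation, the quantity being shaped is the array power gain over angle. The sketch below computes it for a uniform linear array of isotropic elements with uniform excitation; the paper's ADMM algorithms would instead optimize the excitation for a flat wide mainlobe, which is not reproduced here.

```python
# Power-gain pattern of an IEP uniform linear array (uniform weights).
import numpy as np

n, d = 16, 0.5                            # elements, spacing in wavelengths
w = np.ones(n) / np.sqrt(n)               # placeholder excitation vector
theta = np.deg2rad(np.linspace(-90, 90, 721))

# Steering matrix: A[m, k] = exp(j * 2*pi * d * k * sin(theta_m))
A = np.exp(1j * 2 * np.pi * d * np.outer(np.sin(theta), np.arange(n)))
gain_db = 20 * np.log10(np.abs(A @ w) + 1e-12)
print("peak gain (dB):", gain_db.max())
```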
Abstract:In multicell massive multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) networks, base stations (BSs) with multiple antennas deliver their radio frequency energy in the downlink, and Internet-of-Things (IoT) devices use their harvested energy to support uplink data transmission. This paper investigates the energy efficiency (EE) problem for multicell massive MIMO NOMA networks with wireless power transfer (WPT). To maximize the EE of the network, we propose a novel joint power, time, antenna selection, and subcarrier resource allocation scheme, which properly allocates the time between energy harvesting and data transmission. Both perfect and imperfect channel state information (CSI) are considered, and their corresponding EE performance is analyzed. Under quality-of-service (QoS) requirements, an EE maximization problem is formulated, which is non-trivial due to its non-convexity. We first adopt nonlinear fractional programming methods to convert the problem into a convex one, and then develop a distributed alternating direction method of multipliers (ADMM)-based approach to solve it. Simulation results demonstrate that, compared to alternative methods, the proposed algorithm converges within fewer iterations and achieves better EE performance.
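The fractional-programming step typically follows a Dinkelbach-type iteration: alternately maximize rate minus lambda times power, then update lambda to the achieved EE. A scalar sketch is below; the rate and power models are illustrative placeholders, not the paper's multicell formulation.

```python
# Dinkelbach-style iteration for EE = rate / power on a scalar toy model.
import numpy as np

def rate(p):
    return np.log2(1.0 + 4.0 * p)        # placeholder achievable rate

def power(p):
    return p + 0.5                        # transmit + circuit power (placeholder)

p_max, lam = 2.0, 0.0
for _ in range(20):
    grid = np.linspace(1e-6, p_max, 10000)
    p = grid[np.argmax(rate(grid) - lam * power(grid))]   # inner maximization
    lam = rate(p) / power(p)                              # EE update
print(f"optimal power ~ {p:.3f}, EE ~ {lam:.3f} bits/Joule")
```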