Abstract: A one-shot algorithm called iterationless DANSE (iDANSE) is introduced to perform distributed adaptive node-specific signal estimation (DANSE) in a fully connected wireless acoustic sensor network (WASN) deployed in an environment with non-overlapping latent signal subspaces. The iDANSE algorithm matches the performance of a centralized algorithm in a single processing cycle while devices exchange fused versions of their multichannel local microphone signals. Key advantages of iDANSE over currently available solutions are its iterationless nature, which favors deployment in real-time applications, and the fact that devices can exchange fewer fused signals than the number of latent sources in the environment. The proposed method is validated in numerical simulations including a speech enhancement scenario.
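As a rough illustration of the one-shot fuse-and-estimate cycle described above, the following numpy sketch lets each node broadcast a few fused channels of its local microphone signals once, after which a node estimates its desired signal from its own microphones plus the received fused signals. The mixing model, the random fusion matrices, and the LMMSE estimator are illustrative assumptions only (and the simulation is centralized), not the published iDANSE update rules.

    import numpy as np

    rng = np.random.default_rng(0)
    K, M, P, T = 3, 4, 2, 5000   # nodes, mics per node, fused channels per node, frames

    # Latent source plus noise observed at each node (illustrative mixing model)
    s = rng.standard_normal(T)
    Y = [np.outer(rng.standard_normal(M), s) + 0.3 * rng.standard_normal((M, T))
         for _ in range(K)]

    # One-shot cycle: every node broadcasts P fused channels once (no iterations)
    F = [rng.standard_normal((M, P)) for _ in range(K)]   # hypothetical fusion matrices
    Z = [F[k].T @ Y[k] for k in range(K)]

    # Node 0 stacks its raw microphones with the fused signals of the other nodes
    X = np.vstack([Y[0]] + [Z[k] for k in range(1, K)])

    # LMMSE (multichannel Wiener) estimate of the node-specific desired signal
    Rxx = X @ X.T / T
    rxs = X @ s / T
    w = np.linalg.solve(Rxx, rxs)
    s_hat = w @ X
    print("correlation with desired signal:", np.corrcoef(s_hat, s)[0, 1])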
Abstract: Cell-free massive multiple-input multiple-output (CFmMIMO) is a paradigm that can improve users' spectral efficiency (SE) far beyond traditional cellular networks. Increased spatial diversity in CFmMIMO is achieved by distributing the antennas across many small access points (APs), which cooperate to serve the users. Sequential fronthaul topologies in CFmMIMO, such as the daisy chain and multi-branch tree topology, have gained considerable attention recently. In such a processing architecture, each AP must store its received signal vector in memory until it receives the relevant information from the previous AP in the sequence to refine the estimate of the users' signal vector in the uplink. In this paper, we adopt vector-wise and element-wise compression on the raw or pre-processed received signal vectors to store them in memory. We investigate the impact of the limited memory capacity at the APs on the optimal number of APs. We show that with no memory constraint, having single-antenna APs is optimal, especially as the number of users grows. However, limited memory at the APs restricts the depth of the sequential processing pipeline. Furthermore, we investigate the relation between the memory capacity at the APs and the rate of the fronthaul link.
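The following toy sketch contrasts the two storage strategies mentioned above on a Gaussian signal: element-wise uniform scalar quantization versus the vector-wise Gaussian rate-distortion bound at the same bit budget per dimension. The dimensions, bit budget, and quantizer range are assumptions chosen for illustration, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    N, T = 8, 100000            # antennas per AP, buffered samples (assumed)
    bits_per_dim = 3

    x = rng.standard_normal((N, T))

    # Element-wise: uniform scalar quantizer over an assumed [-4, 4] range
    levels = 2 ** bits_per_dim
    lo, hi = -4.0, 4.0
    step = (hi - lo) / levels
    xq = np.clip(np.floor((x - lo) / step), 0, levels - 1) * step + lo + step / 2
    d_elem = np.mean((x - xq) ** 2)

    # Vector-wise: Gaussian rate-distortion bound D = sigma^2 * 2^(-2R)
    d_vec = 1.0 * 2.0 ** (-2 * bits_per_dim)

    print(f"element-wise distortion: {d_elem:.4f}")
    print(f"vector-wise RD bound:    {d_vec:.4f}")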
Abstract: A low-rank approximation-based version of the topology-independent distributed adaptive node-specific signal estimation (TI-DANSE) algorithm is introduced, using a generalized eigenvalue decomposition (GEVD) for application in ad-hoc wireless acoustic sensor networks. This TI-GEVD-DANSE algorithm, like the original TI-DANSE algorithm, exhibits non-strict convergence, which can lead to numerical instability over time, particularly in scenarios where the estimation of accurate spatial covariance matrices is challenging. An adaptive filter coefficient normalization strategy is proposed to mitigate this issue and enable stable performance of TI-(GEVD-)DANSE. The method is validated in numerical simulations including dynamic acoustic scenarios, demonstrating the importance of the additional normalization.
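A compact sketch of the GEVD ingredient in isolation, using scipy: a generalized eigenvalue decomposition of the speech-plus-noise and noise-only covariance matrices yields a rank-R multichannel Wiener filter, followed by a simple unit-norm rescaling as a crude stand-in for the adaptive filter coefficient normalization proposed in the paper. The acoustic model and dimensions are made up, and the sketch is centralized rather than the distributed TI-DANSE protocol itself.

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(2)
    M, T, R = 4, 20000, 1                       # mics, frames, assumed rank

    a = rng.standard_normal(M)                  # illustrative steering vector
    s = rng.standard_normal(T)
    n = 0.5 * rng.standard_normal((M, T))
    y = np.outer(a, s) + n

    Ryy = y @ y.T / T
    Rnn = n @ n.T / T                           # noise-only estimate (e.g., via a VAD)

    # GEVD: X^T Ryy X = diag(lam), X^T Rnn X = I
    lam, X = eigh(Ryy, Rnn)
    order = np.argsort(lam)[::-1][:R]           # keep the R dominant eigenpairs
    Xr, lr = X[:, order], lam[order]

    # Rank-R GEVD-based multichannel Wiener filter: W = Xr diag(1 - 1/lam) Xr^T Rnn
    W = Xr @ np.diag(1.0 - 1.0 / lr) @ Xr.T @ Rnn
    w = W[:, 0]                                 # filter for reference mic 0

    # Stand-in for the adaptive normalization: rescale to bound coefficient growth
    w = w / max(np.linalg.norm(w), 1e-12)
    print("normalized filter:", np.round(w, 3))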
Abstract: In many speech recording applications, the recorded desired speech is corrupted by both noise and acoustic echo, such that combined noise reduction (NR) and acoustic echo cancellation (AEC) is called for. A common cascaded design places NR filters before AEC filters. These NR filters aim at reducing the near-end room noise (and possibly part of the echo) and operate on the microphones only, consequently requiring the AEC filters to model both the echo paths and the NR filters. In this paper, we instead propose a design with extended NR (NRext) filters preceding the AEC filters, under the assumption that the echo paths are additive maps, i.e., that they preserve the addition operation. Here, the NRext filters aim at reducing both the near-end room noise and the far-end room noise component in the echo, and operate on both the microphones and the loudspeakers. We show that, remarkably, the succeeding AEC filters become independent of the NRext filters, such that the AEC filters are only required to model the echo paths, improving the AEC performance. Further, the degrees of freedom in the NRext filters scale with the number of loudspeakers, which is not the case for the NR filters, resulting in an improved NR performance.
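A deliberately minimal toy of this cascade's signal flow, with one microphone, one loudspeaker, and a single-tap echo path: the extended NR filter operates on the stacked microphone and loudspeaker signals, and the succeeding AEC removes the remaining loudspeaker contribution by least squares. All signals and the echo path are synthetic assumptions; the toy illustrates the architecture only and does not reproduce the paper's independence result.

    import numpy as np

    rng = np.random.default_rng(3)
    T = 50000
    x = rng.standard_normal(T)            # far-end loudspeaker signal
    s = rng.standard_normal(T)            # near-end desired speech
    v = 0.3 * rng.standard_normal(T)      # near-end room noise
    h = 0.8                               # toy single-tap echo path (an additive map)
    m = s + v + h * x                     # microphone signal

    # NRext operates on the stacked [microphone, loudspeaker] observation
    z = np.vstack([m, x])
    Rzz = z @ z.T / T
    rzs = z @ s / T
    w = np.linalg.solve(Rzz, rzs)         # extended NR filter (LMMSE)
    nr_out = w @ z

    # Succeeding AEC: least-squares estimate of the residual loudspeaker
    # contribution at the NRext output, then subtraction
    g = (nr_out @ x) / (x @ x)
    e = nr_out - g * x
    print(f"corr(e, speech) = {np.corrcoef(e, s)[0, 1]:.3f}, "
          f"corr(e, echo ref) = {np.corrcoef(e, x)[0, 1]:.3f}")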
Abstract: Fronthaul quantization causes significant distortion in cell-free massive MIMO networks. Due to the limited capacity of fronthaul links, information exchanged among access points (APs) must be heavily quantized. Furthermore, the complexity of the multiplication operation in the base-band processing unit increases with the number of bits of the operands. Thus, quantizing the APs' signal vectors reduces the complexity of signal estimation in the base-band processing unit. Most recent works consider direct quantization of the received signal vectors at each AP without any pre-processing. However, the signal vectors received at different APs are mutually correlated (inter-AP correlation) and also have correlated dimensions (intra-AP correlation). Hence, cooperative quantization of the APs' fronthaul signals can help to use the quantization bits at each AP efficiently and further reduce the distortion imposed on the quantized vectors. This paper considers a daisy chain fronthaul and three different processing sequences at each AP. We show that 1) de-correlating the received signal vector at each AP from the corresponding vectors of the previous APs (inter-AP de-correlation) and 2) de-correlating the dimensions of the received signal vector at each AP (intra-AP de-correlation) before quantization use the quantization bits at each AP more efficiently than direct quantization without pre-processing, and consequently improve the bit error rate (BER) and normalized mean square error (NMSE) of the users' signal estimation.
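The numpy sketch below conveys the inter-AP de-correlation idea on synthetic data: a second AP quantizes only the innovation of its received vector with respect to the previous AP's vector, which shrinks the residual variance and hence the quantization error at the same bit budget. The two-AP correlation model and the per-row uniform quantizer are assumptions, and the intra-AP step (a KLT plus per-component bit allocation) is omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(4)
    N, T, bits = 4, 100000, 2

    # two APs with mutually correlated received vectors (shared user signals)
    shared = rng.standard_normal((N, T))
    y1 = shared + 0.2 * rng.standard_normal((N, T))
    y2 = 0.9 * shared + 0.2 * rng.standard_normal((N, T))

    def quantize(x, bits):
        # uniform scalar quantizer matched to each row's range
        levels = 2 ** bits
        lo = x.min(axis=1, keepdims=True)
        step = (x.max(axis=1, keepdims=True) - lo) / levels
        return np.clip(np.floor((x - lo) / step), 0, levels - 1) * step + lo + step / 2

    # (a) direct quantization of y2, no pre-processing
    d_direct = np.mean((y2 - quantize(y2, bits)) ** 2)

    # (b) inter-AP de-correlation: quantize only the innovation w.r.t. y1
    A = (y2 @ y1.T) @ np.linalg.inv(y1 @ y1.T)   # LMMSE predictor of y2 from y1
    r = y2 - A @ y1
    d_inter = np.mean((y2 - (A @ y1 + quantize(r, bits))) ** 2)

    print(f"direct: {d_direct:.4f} | inter-AP de-correlated: {d_inter:.4f}")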
Abstract: Cell-free massive multiple-input multiple-output (MIMO) is an emerging technology that will reshape the architecture of next-generation networks. This paper considers a sequential fronthaul, whereby the access points (APs) are connected in a daisy chain topology with multiple sequential processing stages. With this sequential processing in the uplink, each AP refines the users' signal estimates received from the previous AP based on its own local received signal vector. While this processing architecture has been shown to achieve the same performance as centralized processing, the impact of the limited memory capacity at the APs on this store-and-forward processing architecture is yet to be analyzed. Thus, we model the received signal vector compression using rate-distortion theory to demonstrate the effect of limited memory capacity on the optimal number of APs in the daisy chain fronthaul. Without a memory constraint, more geographically distributed antennas alleviate the adverse effect of large-scale fading on the signal-to-interference-plus-noise ratio (SINR). However, we show that in the case of limited memory capacity at each AP, the memory needed to store the received signal vectors at the final AP of the fronthaul becomes a limiting factor. In other words, we show that when deciding on the number of APs to distribute the antennas over, there is an inherent trade-off between macro-diversity and the compression noise power on the stored signal vectors at the APs. Hence, the available memory capacity at the APs significantly influences the optimal number of APs in the fronthaul.
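A numerical toy for the memory side of this trade-off, under stated assumptions: a fixed total of M antennas is split over L APs, the final AP in the chain buffers its received vectors for L-1 stage delays of tau symbols each, and the per-dimension bit rate then follows from a fixed per-AP memory budget via the Gaussian rate-distortion bound. All constants are invented, and the macro-diversity gain that pushes toward larger L is deliberately not modeled.

    import numpy as np

    M, B_mem, tau = 64, 2048.0, 4    # total antennas, per-AP memory (bits), symbols per stage (assumed)

    for L in (2, 4, 8, 16, 32, 64):
        N = M // L                              # antennas per AP
        buffered = N * tau * (L - 1)            # dimensions the final AP must store
        rate = B_mem / buffered                 # bits per stored dimension
        D = 2.0 ** (-2.0 * rate)                # Gaussian RD bound, unit variance
        print(f"L={L:2d}: {rate:6.2f} bits/dim -> compression noise {D:.2e}")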
Abstract: Deep learning (DL) based resource allocation (RA) has recently gained considerable attention due to its performance and efficiency. However, most related studies assume an ideal case where the number of users and their utility demands, e.g., data rate constraints, are fixed, and the designed DL based RA scheme exploits a policy trained only for these fixed parameters. Computationally complex policy retraining is required whenever these parameters change. Therefore, in this paper, a DL based resource allocator (ALCOR) is introduced, which allows users to freely adjust their utility demands based on, e.g., their application layer. ALCOR employs deep neural networks (DNNs) as the policy within an iterative optimization algorithm. The optimization algorithm aims to optimize the on-off status of users in a time-sharing problem to satisfy their utility demands in expectation. The policy performs unconstrained RA (URA) -- RA that does not take user utility demands into account -- among active users to maximize the sum utility (SU) at each time instant. Based on the chosen URA scheme, ALCOR can perform RA in a model-based or model-free manner and in a centralized or distributed scenario. The derived convergence analyses provide guarantees for the convergence of ALCOR, and numerical experiments corroborate its effectiveness.
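The sketch below is a toy of the iterative time-sharing idea only, not the actual ALCOR algorithm: on-probabilities derived from per-user priorities decide which users are active, a stand-in URA splits a fixed sum-utility budget among the active users in place of a trained DNN policy, and priorities rise or fall with the gap between demanded and received utility. The utility model, budget, and update rule are simplifying assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    K, T, step = 4, 4000, 0.05
    demand = np.array([0.5, 1.0, 1.0, 1.5])   # per-user average-utility targets (assumed)
    lam = np.zeros(K)                         # per-user priorities (dual-like variables)
    avg = np.zeros(K)

    def ura(active):
        # stand-in unconstrained RA: split a sum-utility budget of 4 over active users
        r = np.zeros(K)
        if active.any():
            r[active] = 4.0 / active.sum()
        return r

    for t in range(1, T + 1):
        p = 1.0 / (1.0 + np.exp(-lam))        # on-probabilities from priorities
        active = rng.random(K) < p
        r = ura(active)
        avg += (r - avg) / t                  # running average utility per user
        lam += step * (demand - r)            # raise priority if under-served

    print("targets :", demand)
    print("achieved:", np.round(avg, 2))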
Abstract: Distributed optimization has experienced a significant surge in interest due to its wide-ranging applications in distributed learning and adaptation. While various scenarios, such as shared-memory, local-memory, and consensus-based approaches, have been extensively studied in isolation, their interconnections remain to be explored further. This paper concentrates on a scenario where agents collaborate toward a unified mission while potentially having distinct tasks. Each agent's actions can impact the other agents through their interactions. Within this context, the objective of the agents is to optimize their local parameters based on the aggregate of local reward functions, where only local zeroth-order oracles are available. Notably, the learning process is asynchronous: agents update their parameters and query their zeroth-order oracles at their own pace, while communication between agents is subject to bounded but possibly random delays. This paper presents theoretical convergence analyses and establishes a convergence rate for the proposed approach. Furthermore, it addresses the relevant issue of deep learning-based resource allocation in communication networks and conducts numerical experiments in which agents, acting as transmitters, collaboratively train their individual (possibly unique) policies to maximize a common performance metric.
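To make the setting concrete, here is a hedged toy with two agents that optimize coupled local rewards using only two-point zeroth-order gradient estimates, updating in turn and exchanging parameters infrequently so that each agent works with a stale view of its neighbor. The quadratic rewards, the coupling strength, and the fixed communication period are illustrative assumptions, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(6)
    dim, mu, step, T = 3, 1e-2, 0.02, 4000

    theta = [np.zeros(dim), np.zeros(dim)]            # two agents' local parameters
    targets = [np.ones(dim), -np.ones(dim)]

    def reward(i, x_i, x_j):
        # local zeroth-order oracle: own task plus a coupling to the other agent
        return -np.sum((x_i - targets[i]) ** 2) - 0.1 * np.sum((x_i - x_j) ** 2)

    stale = [theta[1].copy(), theta[0].copy()]        # delayed view of the neighbor

    for t in range(T):
        i = t % 2                                     # agents update asynchronously, in turn
        u = rng.standard_normal(dim)
        # two-point zeroth-order gradient estimate along a random direction
        g = (reward(i, theta[i] + mu * u, stale[i])
             - reward(i, theta[i] - mu * u, stale[i])) / (2 * mu) * u
        theta[i] += step * g                          # ascent on the local reward
        if t % 7 == 0:                                # infrequent, delayed communication
            stale[1 - i] = theta[i].copy()

    print("agent 0:", np.round(theta[0], 2), "| agent 1:", np.round(theta[1], 2))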
Abstract: Distributed signal-processing algorithms in (wireless) sensor networks often aim to decentralize processing tasks to reduce communication cost and computational complexity, or to avoid reliance on a single device (i.e., a fusion center) for processing. In this contribution, we extend a distributed adaptive algorithm for blind system identification that relies on the estimation of a stacked network-wide consensus vector at each node, the computation of which requires either broadcasting or relaying of node-specific values (i.e., local vector norms) to all other nodes. The extended algorithm employs a distributed-averaging-based scheme to estimate the network-wide consensus norm value using only the local vector norms provided by neighboring sensor nodes. To retain adaptivity in time-varying systems, we introduce an adaptive mixing factor between instantaneous and recursive estimates of these norms. Simulation results show that the extension provides estimation results close to the optimal fully-connected-network or broadcasting case while significantly reducing inter-node transmissions.
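The flavor of such a distributed-averaging scheme can be sketched on a static ring network: each node repeatedly averages its running estimate with those of its two neighbors and mixes in its own instantaneous local norm, so every node tracks the network-wide average without any broadcasting. A fixed mixing factor stands in for the adaptive one proposed in the paper; the topology, local norm values, and noise level are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(7)
    K, T, alpha = 5, 300, 0.05                  # nodes, steps, mixing factor
    # ring topology: each node averages only with its two neighbors
    W = np.zeros((K, K))
    for k in range(K):
        W[k, [k, (k - 1) % K, (k + 1) % K]] = 1 / 3

    base = 1.0 + np.arange(K)                   # each node's (slowly varying) local norm
    est = base.copy()                           # initialize with the own local norm
    for t in range(T):
        local = base + 0.05 * rng.standard_normal(K)   # noisy instantaneous local norms
        # mix the neighbor-averaged recursive estimate with the instantaneous value
        est = (1 - alpha) * (W @ est) + alpha * local

    print("true network-wide average:", base.mean())
    print("per-node estimates       :", np.round(est, 2))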
Abstract: To keep supporting next-generation requirements, the radio access infrastructure will continue to densify. Cell-free (CF) network architectures are emerging, combining dense deployments with extreme flexibility in allocating resources to users. In parallel, the Open Radio Access Network (O-RAN) paradigm is transforming the RAN towards an open, intelligent, virtualized, and fully interoperable architecture. This paradigm brings the flexibility and intelligent control opportunities needed for CF networking. In this paper, we document the current O-RAN terminology and contrast it with some common CF processing approaches. We then discuss the main O-RAN innovations and the research challenges that remain to be solved.