Abstract: The concept of Wireless Network-on-Chip (WNoC) has emerged as a potential solution to address the escalating communication demands of modern computing systems, thanks to its low latency, versatility, and reconfigurability. However, for WNoC to fulfill its potential, it is essential to establish multiple high-speed wireless links across chips. Unfortunately, the compact and enclosed nature of computing packages introduces significant challenges in the form of Co-Channel Interference (CCI) and Inter-Symbol Interference (ISI), which not only hinder the deployment of multiple spatial channels but also severely restrict the symbol rate of each individual channel. In this paper, we posit that Time Reversal (TR) could be effective in addressing both impairments in this static scenario thanks to its spatiotemporal focusing capabilities, even in the near field. Through comprehensive full-wave simulations and bit error rate analysis in multiple scenarios and at multiple frequency bands, we provide evidence that TR can increase the symbol rate by an order of magnitude, enabling the deployment of multiple concurrent links and achieving aggregate speeds exceeding 100 Gb/s. Finally, we evaluate the impact of reducing the sampling rate of the TR filter on the achievable speeds, paving the way to practical TR-based wireless communications at the chip scale.
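As an illustration of the focusing mechanism underlying this claim, the sketch below applies a TR prefilter to a hypothetical, randomly generated channel impulse response (not the full-wave chip channel of the paper): the effective channel becomes the autocorrelation of the impulse response, concentrating the energy in a single dominant tap.

```python
# Minimal sketch of time-reversal (TR) prefiltering, assuming a known
# (hypothetical) channel impulse response h for the chip package.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multipath channel impulse response (complex taps)
# with an exponentially decaying power-delay profile.
h = rng.standard_normal(64) + 1j * rng.standard_normal(64)
h *= np.exp(-np.arange(64) / 10.0)

# TR prefilter: time-reversed, conjugated CIR (energy-normalized).
g = np.conj(h[::-1])
g /= np.linalg.norm(g)

# The effective channel seen by the receiver is g * h (convolution),
# i.e., the autocorrelation of h: energy focuses in a single dominant tap.
h_eff = np.convolve(g, h)

peak = np.max(np.abs(h_eff))
sidelobe = np.sort(np.abs(h_eff))[-2]       # strongest interfering tap
print(f"peak-to-sidelobe ratio: {20 * np.log10(peak / sidelobe):.1f} dB")
```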
Abstract: Quantum computing holds immense potential for solving classically intractable problems by leveraging the unique properties of quantum mechanics. However, the scalability of quantum architectures remains a significant challenge. Multi-core quantum architectures have been proposed to solve this scalability problem, giving rise to a new set of challenges in hardware, communications, and compilation, among others. One of these challenges is to adapt a quantum algorithm so that it fits within the different cores of the quantum computer. This paper presents a novel approach to circuit partitioning using Deep Reinforcement Learning, contributing to the advancement of both quantum computing and graph partitioning. This work is a first step towards integrating Deep Reinforcement Learning techniques into Quantum Circuit Mapping, opening the door to a new paradigm of solutions to such problems.
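A toy sketch of the underlying decision problem is given below: the state is the current qubit-to-core assignment, an action moves one qubit to another core, and the reward is the reduction in inter-core two-qubit gates. The epsilon-greedy local search stands in for the paper's Deep Reinforcement Learning agent, and the interaction graph, core count, and capacities are illustrative assumptions.

```python
# Toy sketch: quantum-circuit partitioning posed as a sequential decision
# problem (state, action, reward). The epsilon-greedy loop is a placeholder
# agent, not the paper's DRL method.
import numpy as np

rng = np.random.default_rng(1)
n_qubits, n_cores, core_capacity = 8, 2, 5
# Hypothetical interaction graph: w[i, j] = number of two-qubit gates between qubits i and j.
w = np.triu(rng.integers(0, 3, (n_qubits, n_qubits)), k=1)

def cut_cost(assign):
    """Number of two-qubit gates whose qubits sit in different cores."""
    i, j = np.nonzero(w)
    return int(np.sum(w[i, j] * (assign[i] != assign[j])))

assign = np.array([q % n_cores for q in range(n_qubits)])        # initial state
for step in range(500):
    q = rng.integers(n_qubits)                                   # action: pick a qubit...
    old_core, new_core = assign[q], rng.integers(n_cores)        # ...and a destination core
    if new_core != old_core and np.sum(assign == new_core) >= core_capacity:
        continue                                                 # respect core capacity
    before = cut_cost(assign)
    assign[q] = new_core
    reward = before - cut_cost(assign)                           # fewer cut gates = positive reward
    if reward < 0 and rng.random() > 0.1:                        # epsilon-greedy: usually undo bad moves
        assign[q] = old_core

print("final assignment:", assign, "| inter-core gates:", cut_cost(assign))
```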
Abstract: Recently emerged Topological Deep Learning (TDL) methods aim to extend current Graph Neural Networks (GNN) by naturally processing higher-order interactions, going beyond the pairwise relations and local neighborhoods defined by graph representations. In this paper we propose a novel TDL-based method for compressing signals over graphs, consisting of two main steps: first, disjoint sets of higher-order structures are inferred based on the original signal, by clustering $N$ datapoints into $K\ll N$ collections; then, a topology-inspired message passing obtains a compressed representation of the signal within those multi-element sets. Our results show that our framework improves both standard GNN and feed-forward architectures in compressing temporal link-based signals from two real-world Internet Service Provider Networks' datasets, with reconstruction errors from $30\%$ up to $90\%$ better across all evaluation scenarios, suggesting that it better captures and exploits spatial and temporal correlations over the whole graph-based network structure.
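A minimal sketch of the two-step pipeline follows, with k-means standing in for the inference of higher-order sets and a plain within-set mean standing in for the learned topological message passing; all sizes and signals below are illustrative.

```python
# Hedged sketch of the compression idea: (1) cluster N nodes into K << N sets,
# (2) aggregate the signal within each set to obtain a compressed representation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
N, K, T = 300, 20, 48                        # nodes, clusters, time steps (assumed)
x = rng.standard_normal((N, T))              # hypothetical link-load time series

labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(x)

# Compressed signal: one aggregated time series per higher-order set.
z = np.stack([x[labels == k].mean(axis=0) for k in range(K)])    # shape (K, T)

# Naive reconstruction: broadcast each set's signal back to its members.
x_hat = z[labels]
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"compression ratio: {N / K:.0f}x, relative reconstruction error: {err:.3f}")
```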
Abstract: Wireless Network-on-Chip (WNoC) is a promising paradigm to overcome the versatility and scalability issues of conventional on-chip networks for current processor chips. However, the chip environment suffers from delay spread, which leads to intense Inter-Symbol Interference (ISI). This degrades the transmitted signal and makes it difficult to achieve the desired Bit Error Rate (BER) in this constraint-driven scenario. Time Reversal (TR) is a technique that exploits the multipath richness of the channel to overcome the undesired effects of the delay spread. Since the flip-chip channel is static and can be characterized beforehand, in this paper we propose to apply TR to the wireless in-package channel. We evaluate the effects of this technique in time and space from an electromagnetic point of view. Furthermore, we study the effectiveness of TR in modulated data communications in terms of BER as a function of transmission rate and power. Our results show not only the spatiotemporal focusing effect of TR in a chip, which could enable multiple spatial channels, but also that TR-based transmissions outperform non-TR transmissions by an order of magnitude in terms of BER.
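The sketch below illustrates the kind of BER comparison described above on a synthetic multipath channel (an exponential power-delay profile rather than the paper's flip-chip channel): BPSK symbols are transmitted with and without a TR prefilter and detected by slicing at the strongest tap.

```python
# Hedged sketch: BPSK over a synthetic multipath channel, with and without a
# time-reversal prefilter, comparing hard-decision BER. Channel, SNR, and
# symbol count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_bits, snr_db = 20000, 10

# Hypothetical channel with an exponential power-delay profile (unit energy).
h = (rng.standard_normal(24) + 1j * rng.standard_normal(24)) * np.exp(-np.arange(24) / 6.0)
h /= np.linalg.norm(h)
bits = rng.integers(0, 2, n_bits)
sym = 2.0 * bits - 1.0                                    # BPSK symbols

def ber(chan):
    rx = np.convolve(sym, chan)
    noise = rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size)
    rx = rx + noise * np.sqrt(0.5 / 10 ** (snr_db / 10))
    d = np.argmax(np.abs(chan))                           # sample at the strongest tap
    y = rx[d:d + n_bits] * np.conj(chan[d]) / np.abs(chan[d])   # coherent detection
    return np.mean((np.real(y) > 0).astype(int) != bits)

g = np.conj(h[::-1])                                      # TR prefilter (unit energy, since h is)
print("BER without TR:", ber(h))
print("BER with    TR:", ber(np.convolve(g, h)))
```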
Abstract: In this work, we examine the potential of autonomous operation of a reconfigurable intelligent surface (RIS) using wireless energy harvesting from information signals. To this end, we first identify the main RIS power-consuming components and introduce a suitable power-consumption model. Subsequently, we introduce a novel RIS power-splitting architecture that enables simultaneous energy harvesting and beamsteering. Specifically, a subset of the RIS unit cells (UCs) is used for beamsteering while the remaining ones absorb energy. For the subset allocation, we propose policies obtained as solutions to two optimization problems. The first problem aims at maximizing the signal-to-noise ratio (SNR) at the receiver without violating the RIS's energy harvesting demands. The objective of the second problem is, in turn, to maximize the power harvested by the RIS, while ensuring an acceptable SNR at the receiver. We prove that, under particular propagation conditions, some of the proposed policies deliver the optimal solution to the two problems. Furthermore, we report numerical results that reveal the efficiency of the policies with respect to the optimal, yet very high-complexity, brute-force design approach. Finally, through a case study of user tracking, we showcase that the RIS power-consumption demands can be secured by harvesting energy from information signals.
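To make the power-splitting idea concrete, the sketch below allocates unit cells between harvesting and beamsteering with a simple greedy rule (sacrificing the weakest cascaded channels to harvesting first) and then computes the resulting SNR; the rule, channel model, and power figures are illustrative assumptions, not the paper's optimal policies.

```python
# Illustrative sketch of RIS power splitting: each unit cell (UC) either
# reflects with an optimal phase (beamsteering) or absorbs the impinging
# signal (harvesting). Greedy rule and all numbers are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_uc, p_tx, noise_pow = 64, 1.0, 1e-9
harvest_demand = 2e-6                                     # Watts needed to power the RIS (hypothetical)
eta = 0.5                                                 # harvesting efficiency (hypothetical)

h = (rng.standard_normal(n_uc) + 1j * rng.standard_normal(n_uc)) * 1e-3   # Tx -> UC channels
g = (rng.standard_normal(n_uc) + 1j * rng.standard_normal(n_uc)) * 1e-3   # UC -> Rx channels

gain = np.abs(h * g)                                      # per-UC cascaded channel strength
order = np.argsort(gain)                                  # weakest UCs first

harvested, harvest_set = 0.0, []
for i in order:                                           # assign weakest UCs to harvesting
    if harvested >= harvest_demand:
        break
    harvested += eta * p_tx * np.abs(h[i]) ** 2
    harvest_set.append(i)

steer_set = [i for i in range(n_uc) if i not in harvest_set]
# Beamsteering UCs co-phase their reflections, so their amplitudes add coherently.
snr = p_tx * np.sum(gain[steer_set]) ** 2 / noise_pow
print(f"UCs harvesting: {len(harvest_set)}, harvested power: {harvested:.2e} W, "
      f"receiver SNR: {10 * np.log10(snr):.1f} dB")
```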
Abstract: The Reconfigurable Intelligent Surface (RIS), composed of programmable actuators, is a promising technology thanks to its capability to manipulate Electromagnetic (EM) wavefronts. In particular, RISs have the potential to provide significant performance improvements for wireless networks. However, to do so, a proper configuration of the reflection coefficients of the unit cells in the RIS is required. RISs are sophisticated platforms, so their design and fabrication complexity might be uneconomical for single-user scenarios, whereas a RIS that can serve multiple users justifies the costs. For the first time, we propose an efficient reconfiguration technique that provides a multi-beam radiation pattern. Thanks to its analytical model, the reconfiguration profile is obtained directly, in contrast to time-consuming optimization techniques. The outcome can pave the way for the commercial use of multi-user communication in beyond-5G networks. We analyze the performance of our proposed RIS technology in indoor and outdoor scenarios, given the broadcast mode of operation. These scenarios encompass some of the most challenging conditions that wireless networks encounter. We show that our proposed technique provides significant gains in the observed channel capacity when the users are close to the RIS in the indoor office environment scenario. Further, we report more than one order of magnitude increase in system throughput in the outdoor environment. The results prove that a RIS able to communicate with multiple users can empower wireless networks with great capacity.
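A hedged sketch of one analytical route to a multi-beam configuration is shown below: the reflection phase of each unit cell is taken as the argument of the superposition of the single-beam phase profiles, so no iterative optimization is needed. The geometry, frequency, target angles, and 1-bit quantization are assumptions for illustration, not necessarily the paper's exact model.

```python
# Hedged sketch of an analytical multi-beam RIS configuration: superpose the
# single-beam phase profiles and keep only the phase, so one surface serves
# several users at once. All parameters below are illustrative assumptions.
import numpy as np

n_x, spacing, freq = 32, 0.5, 26e9                        # cells per row, spacing in wavelengths, Hz
wavelength = 3e8 / freq
k0 = 2 * np.pi / wavelength
x = np.arange(n_x) * spacing * wavelength                 # unit-cell positions along one axis

target_angles = np.deg2rad([-30.0, 20.0])                 # desired beam directions (two users)

# Single-beam profile for angle theta: exp(-j * k0 * x * sin(theta)).
field = sum(np.exp(-1j * k0 * x * np.sin(t)) for t in target_angles)
phase = np.angle(field)                                   # continuous multi-beam phase profile

# 1-bit quantization, as in many practical RIS prototypes (assumed here).
phase_1bit = np.where(np.cos(phase) >= 0, 0.0, np.pi)
print(np.round(np.rad2deg(phase_1bit[:16])))
```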
Abstract: Graph Neural Networks (GNNs) have exploded onto the machine learning scene in recent years owing to their capability to model and learn from graph-structured data. Such an ability has strong implications in a wide variety of fields whose data is inherently relational, for which conventional neural networks do not perform well. Indeed, as recent reviews can attest, research in the area of GNNs has grown rapidly and has led to the development of a variety of GNN algorithm variants, as well as to the exploration of groundbreaking applications in chemistry, neurology, electronics, or communication networks, among others. At the current stage of research, however, the efficient processing of GNNs is still an open challenge for several reasons. Besides their novelty, GNNs are hard to compute due to their dependence on the input graph, their combination of dense and very sparse operations, and the need to scale to huge graphs in some applications. In this context, this paper aims to make two main contributions. On the one hand, a review of the field of GNNs is presented from the perspective of computing. This includes a brief tutorial on the GNN fundamentals, an overview of the evolution of the field in the last decade, and a summary of the operations carried out in the multiple phases of different GNN algorithm variants. On the other hand, an in-depth analysis of current software and hardware acceleration schemes is provided, from which a hardware-software, graph-aware, and communication-centric vision for GNN accelerators is distilled.
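For readers unfamiliar with the computation involved, the snippet below shows a single message-passing layer (neighbor aggregation followed by a learned update) in dense form; real GNN workloads replace the dense adjacency with large, sparse, and irregular structures, which is precisely what makes acceleration challenging.

```python
# Minimal sketch of one GNN message-passing layer (aggregate + update),
# written with dense numpy for clarity. Sizes and the random graph are assumed.
import numpy as np

rng = np.random.default_rng(5)
n_nodes, f_in, f_out = 6, 8, 4
adj = (rng.random((n_nodes, n_nodes)) < 0.3).astype(float)   # hypothetical adjacency matrix
np.fill_diagonal(adj, 1.0)                                   # add self-loops
x = rng.standard_normal((n_nodes, f_in))                     # node features
w = rng.standard_normal((f_in, f_out)) * 0.1                 # layer weights

# Aggregation: mean of neighbor features. Update: linear transform + ReLU.
deg = adj.sum(axis=1, keepdims=True)
h = np.maximum((adj @ x / deg) @ w, 0.0)
print(h.shape)                                               # (n_nodes, f_out): updated embeddings
```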
Abstract: As the current standardization for 5G networks nears completion, work towards understanding the potential technologies for 6G wireless networks is already underway. One of these potential technologies for 6G networks is the Reconfigurable Intelligent Surface (RIS). RISs offer unprecedented degrees of freedom towards engineering the wireless channel, i.e., the ability to modify the characteristics of the channel whenever and however required. Nevertheless, such properties demand that the response of the associated metasurface (MSF) is well understood under all possible operational conditions. While an understanding of the radiation pattern characteristics can be obtained through either analytical models or full-wave simulations, the former suffer from inaccuracy under certain conditions and the latter from extremely high computational complexity. Hence, in this paper we propose a novel neural network-based approach that enables a fast and accurate characterization of the MSF response. We analyze multiple scenarios and demonstrate the capabilities and utility of the proposed methodology. Concretely, we show that this method is able to learn and predict the parameters governing the reflected wave radiation pattern with the accuracy of a full-wave simulation (98.8%-99.8%) and the time and computational complexity of an analytical model. This result and methodology will be of particular importance for the design, fault tolerance, and maintenance of the thousands of RISs that will be deployed in the 6G network environment.
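The sketch below conveys the flavor of such a surrogate model: a small neural network is trained to predict the reflected-beam angle from the incidence angle and the metasurface phase gradient. Here the generalized Snell's law is used only as a cheap stand-in for the full-wave solver that would label the real training set, and the frequency, cell size, and network size are assumptions.

```python
# Hedged sketch: neural-network surrogate for the reflected-wave parameters of
# a metasurface. Labels come from an analytical stand-in (generalized Snell's
# law), not from the full-wave simulations used in practice.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_samples, wavelength, cell = 5000, 0.0107, 0.0107 / 2        # ~28 GHz, lambda/2 cells (assumed)
k0 = 2 * np.pi / wavelength

theta_i = rng.uniform(-np.pi / 3, np.pi / 3, n_samples)       # incidence angle
dphi = rng.uniform(-np.pi / 2, np.pi / 2, n_samples)          # phase step between adjacent cells
s = np.sin(theta_i) + dphi / (k0 * cell)
keep = np.abs(s) < 1                                          # discard evanescent cases
X = np.column_stack([theta_i[keep], dphi[keep]])
y = np.arcsin(s[keep])                                        # reflected angle (generalized Snell)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out configurations:", round(model.score(X_te, y_te), 4))
```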
Abstract: Recent trends in networking are proposing the use of Machine Learning (ML) techniques for the control and operation of the network. In this context, ML can be used as a computer network modeling technique to build models that estimate the network performance. Indeed, network modeling is a central technique in many networking functions, for instance in the field of optimization, in which the model is used to search for a configuration that satisfies the target policy. In this paper, we aim to provide an answer to the following question: can neural networks accurately model the delay of a computer network as a function of the input traffic? For this, we treat the network as a black box that takes a traffic matrix as input and produces delays as output. We then train different neural network models and evaluate their accuracy under different fundamental network characteristics: topology, size, traffic intensity, and routing. With this, we aim to gain a better understanding of computer network modeling with neural nets and, ultimately, to provide practical guidelines on how such models need to be trained.
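A minimal sketch of this black-box setup is given below: a neural network maps traffic (flow rates) to per-flow delays. The labels come from a toy M/M/1 delay formula over a fixed, randomly drawn routing, used only as a stand-in for the simulator or testbed measurements behind the real training set; all sizes and capacities are assumptions.

```python
# Hedged sketch of the black-box delay model: input = traffic matrix (flow
# rates), output = per-flow delays. Toy M/M/1 labels replace real measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_flows, n_links, capacity, n_samples = 12, 6, 10.0, 4000
route = (rng.random((n_flows, n_links)) < 0.35).astype(float)    # fixed routing: links used by each flow
route[np.arange(n_flows), rng.integers(n_links, size=n_flows)] = 1.0   # every flow uses >= 1 link

X = rng.uniform(0.1, 1.0, (n_samples, n_flows))                  # traffic matrices (flow rates)
load = X @ route                                                 # per-link aggregate load
delay_link = 1.0 / (capacity - np.clip(load, None, 0.95 * capacity))   # M/M/1 per-link delay
y = delay_link @ route.T                                         # per-flow end-to-end delay

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
rel_err = np.mean(np.abs(model.predict(X_te) - y_te) / y_te)
print(f"mean relative delay error on held-out traffic: {rel_err:.3%}")
```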