Abstract: Wireless Network-on-Chip (WNoC) systems, which interconnect chips using wireless links, face significant challenges in area and power consumption. To tackle these constraints, behavioral models (BMs) are crucial for assessing system performance under various conditions and optimizing parameters like data throughput and power consumption. Building transceivers (TRXs) physically is costly and time-consuming, making modeling a more practical approach. This paper develops a power consumption model for the sub-blocks of a WNoC transmitter (TX) at the chip level. By integrating these BMs with MATLAB, we aim to create a power model for TXs in WNoC architectures, optimized for CMOS technology operating at millimeter-wave and terahertz frequencies.
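For illustration, a block-level behavioral power model of this kind can be as simple as summing the DC consumption of the TX sub-blocks. The sketch below is a minimal Python analogue of such a BM; the sub-block names (PA, LO, mixer, modulator) and all efficiency and power figures are assumptions for illustration, not values from the paper.

```python
# Minimal behavioral power model for a mmWave/THz WNoC transmitter (illustrative only).
# Sub-block names and parameter values are assumptions, not figures from the paper.

def pa_power(p_out_mw, drain_efficiency=0.15):
    """DC power drawn by the power amplifier for a given RF output power (mW)."""
    return p_out_mw / drain_efficiency

def tx_power_model(p_out_mw, p_lo_mw=4.0, p_mixer_mw=2.5, p_modulator_mw=1.0):
    """Per-block DC power of the TX as a dict (PA, LO, mixer, modulator)."""
    return {
        "pa": pa_power(p_out_mw),
        "lo": p_lo_mw,
        "mixer": p_mixer_mw,
        "modulator": p_modulator_mw,
    }

if __name__ == "__main__":
    for p_out in (0.5, 1.0, 2.0):  # radiated power in mW
        blocks = tx_power_model(p_out)
        print(f"P_out={p_out} mW -> total {sum(blocks.values()):.2f} mW, breakdown {blocks}")
```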
Abstract: Advancements in Natural Language Processing (NLP) have led to the emergence of Large Language Models (LLMs) such as GPT, Llama, Claude, and Gemini, which excel across a range of tasks but require extensive fine-tuning to align their outputs with human expectations. A widely used method for achieving this alignment is Reinforcement Learning from Human Feedback (RLHF), which, despite its success, faces challenges in accurately modelling human preferences. In this paper, we introduce GazeReward, a novel framework that integrates implicit feedback, specifically eye-tracking (ET) data, into the Reward Model (RM). In addition, we explore how ET-based features can provide insights into user preferences. Through ablation studies, we test our framework with different integration methods, LLMs, and ET generator models, demonstrating that our approach significantly improves the accuracy of the RM on established human preference datasets. This work advances the ongoing discussion on optimizing AI alignment with human values, exploring the potential of cognitive data for shaping future NLP research.
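As a rough illustration of how implicit feedback could be fused into a reward model, the sketch below combines a pooled LLM representation with per-token eye-tracking features in a toy PyTorch reward head. The module structure, feature dimensions, and concatenation-based fusion are assumptions, not the GazeReward implementation.

```python
import torch
import torch.nn as nn

class GazeAwareRewardHead(nn.Module):
    """Toy reward head that fuses a pooled LLM representation with per-token
    eye-tracking (ET) features. Dimensions and fusion-by-concatenation are
    illustrative assumptions, not the GazeReward implementation."""

    def __init__(self, hidden_dim=768, et_dim=4):
        super().__init__()
        self.et_proj = nn.Linear(et_dim, hidden_dim)
        self.scorer = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, token_hidden, et_features):
        # token_hidden: (batch, seq, hidden_dim) from a frozen LLM
        # et_features:  (batch, seq, et_dim), e.g., fixation duration, regressions
        text_repr = token_hidden.mean(dim=1)
        gaze_repr = self.et_proj(et_features).mean(dim=1)
        return self.scorer(torch.cat([text_repr, gaze_repr], dim=-1)).squeeze(-1)

head = GazeAwareRewardHead()
score = head(torch.randn(2, 16, 768), torch.rand(2, 16, 4))
print(score.shape)  # torch.Size([2]): one scalar reward per response
```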
Abstract: Flow-guided localization using in-body nanodevices in the bloodstream is expected to be beneficial for early disease detection, continuous monitoring of biological conditions, and targeted treatment. The nanodevices face size and power constraints that produce erroneous raw data for localization purposes. On-body anchors receive this data and use it to derive the locations of diagnostic events of interest. Different Machine Learning (ML) approaches have been recently proposed for this task, yet they are currently restricted to a reference bloodstream of a resting patient. As such, they are unable to deal with the physical diversity of patients' bloodstreams and cannot provide continuous monitoring as an individual patient's activity changes. Toward addressing these issues for the current State-of-the-Art (SotA) flow-guided localization approach based on Graph Neural Networks (GNNs), we propose a pipeline for GNN adaptation based on individual physiological indicators, including height, weight, and heart rate. Our results indicate that the proposed adaptations are beneficial in reconciling individual differences across bloodstreams and activities.
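A minimal sketch of the kind of adaptation described above, assuming the physiological indicators are appended as extra normalized features to every anchor node before the GNN is applied; the reference values and feature layout are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

# Reference values used only for normalization in this sketch; they are assumptions,
# not values from the paper's adaptation pipeline.
REFERENCE = {"height_cm": 170.0, "weight_kg": 70.0, "heart_rate_bpm": 60.0}

def adapt_node_features(node_features, patient):
    """Append normalized physiological indicators to every anchor-node feature vector,
    so a pretrained GNN can condition its predictions on the individual bloodstream."""
    ratios = np.array([
        patient["height_cm"] / REFERENCE["height_cm"],
        patient["weight_kg"] / REFERENCE["weight_kg"],
        patient["heart_rate_bpm"] / REFERENCE["heart_rate_bpm"],
    ])
    repeated = np.tile(ratios, (node_features.shape[0], 1))
    return np.hstack([node_features, repeated])

raw = np.random.rand(10, 8)  # 10 anchor nodes, 8 raw features each (illustrative)
patient = {"height_cm": 185, "weight_kg": 92, "heart_rate_bpm": 110}  # active patient
print(adapt_node_features(raw, patient).shape)  # (10, 11)
```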
Abstract: Application-specific quantum computers offer the most efficient means to tackle problems intractable by classical computers. Realizing these architectures necessitates a deep understanding of quantum circuit properties and their relationship to execution outcomes on quantum devices. Our study aims to perform, for the first time, a rigorous examination of quantum circuits by introducing graph theory-based metrics extracted from their qubit interaction graph and gate dependency graph, alongside conventional parameters describing the circuit itself. This methodology facilitates a comprehensive analysis and clustering of quantum circuits. Furthermore, it uncovers a connection between parameters rooted in both qubit interaction and gate dependency graphs and the performance metrics for quantum circuit mapping, across a range of established quantum device and mapping configurations. Among the various device configurations, we particularly emphasize modular (i.e., multi-core) quantum computing architectures due to their high potential as a viable solution for quantum device scalability. This thorough analysis will help us to: i) identify key attributes of quantum circuits that affect the quantum circuit mapping performance metrics; ii) predict the performance on a specific chip for similar circuit structures; iii) determine preferable combinations of mapping techniques and hardware setups for specific circuits; and iv) define representative benchmark sets by clustering similarly structured circuits.
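To illustrate the graph-theoretic view, the sketch below builds a qubit interaction graph from a circuit's two-qubit gate list and computes a few example metrics with networkx. The input format and the particular metric selection are assumptions, not the exact set used in the study.

```python
import networkx as nx

def qubit_interaction_graph(two_qubit_gates):
    """Build a weighted qubit interaction graph: nodes are qubits, edge weights count
    how many two-qubit gates act on each qubit pair."""
    g = nx.Graph()
    for q0, q1 in two_qubit_gates:
        if g.has_edge(q0, q1):
            g[q0][q1]["weight"] += 1
        else:
            g.add_edge(q0, q1, weight=1)
    return g

def interaction_metrics(g):
    """A few graph-theory metrics of the kind used to characterize circuits."""
    return {
        "num_qubits": g.number_of_nodes(),
        "density": nx.density(g),
        "avg_clustering": nx.average_clustering(g),
        "max_degree": max(dict(g.degree()).values()),
    }

# Two-qubit gate list of a toy circuit (assumed input format: pairs of qubit indices).
gates = [(0, 1), (1, 2), (0, 1), (2, 3), (0, 3)]
print(interaction_metrics(qubit_interaction_graph(gates)))
```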
Abstract: The concept of Wireless Network-on-Chip (WNoC) has emerged as a potential solution to address the escalating communication demands of modern computing systems, thanks to its low latency, versatility, and reconfigurability. However, for WNoC to fulfill its potential, it is essential to establish multiple high-speed wireless links across chips. Unfortunately, the compact and enclosed nature of computing packages introduces significant challenges in the form of Co-Channel Interference (CCI) and Inter-Symbol Interference (ISI), which not only hinder the deployment of multiple spatial channels but also severely restrict the symbol rate of each individual channel. In this paper, we posit that Time Reversal (TR) could be effective in addressing both impairments in this static scenario thanks to its spatiotemporal focusing capabilities even in the near field. Through comprehensive full-wave simulations and bit error rate analysis in multiple scenarios and at multiple frequency bands, we provide evidence that TR can increase the symbol rate by an order of magnitude, enabling the deployment of multiple concurrent links and achieving aggregate speeds exceeding 100 Gb/s. Finally, we evaluate the impact of reducing the sampling rate of the TR filter on the achievable speeds, paving the way to practical TR-based wireless communications at the chip scale.
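A minimal numpy sketch of the TR principle: prefiltering with the time-reversed, conjugated channel impulse response turns the effective channel into its autocorrelation, concentrating energy into a single tap. The toy exponential-decay channel below is an assumption for illustration, not a simulated in-package response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multipath channel impulse response (CIR) for one TX-RX pair inside the package.
# The tap count and exponential power profile are illustrative assumptions.
taps = 32
h = (rng.standard_normal(taps) + 1j * rng.standard_normal(taps)) * np.exp(-0.15 * np.arange(taps))

# Time-reversal prefilter: the conjugated, time-reversed CIR (normalized).
tr_filter = np.conj(h[::-1]) / np.linalg.norm(h)

# Effective channel seen by the receiver = prefilter convolved with the channel.
effective = np.convolve(tr_filter, h)

peak = np.max(np.abs(effective))
sidelobe = np.max(np.abs(np.delete(effective, np.argmax(np.abs(effective)))))
print(f"peak-to-sidelobe ratio with TR: {20 * np.log10(peak / sidelobe):.1f} dB")
```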
Abstract: Quantum computing holds immense potential for solving classically intractable problems by leveraging the unique properties of quantum mechanics. However, the scalability of quantum architectures remains a significant challenge. Multi-core quantum architectures have been proposed to solve the scalability problem, giving rise to a new set of challenges in hardware, communications, and compilation, among others. One of these challenges is to adapt a quantum algorithm to fit within the different cores of the quantum computer. This paper presents a novel approach for circuit partitioning using Deep Reinforcement Learning, contributing to the advancement of both quantum computing and graph partitioning. This work is the first step in integrating Deep Reinforcement Learning techniques into Quantum Circuit Mapping, opening the door to a new paradigm of solutions to such problems.
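For intuition, a DRL agent for this task could be rewarded for qubit-to-core assignments that minimize inter-core gates while respecting core capacities. The reward shape and penalty weights below are illustrative assumptions, not the formulation used in the paper.

```python
def partition_reward(two_qubit_gates, assignment, core_capacity,
                     cut_penalty=1.0, overflow_penalty=10.0):
    """Toy reward for a qubit-to-core assignment: fewer inter-core gates and no
    overfull cores yield a higher (less negative) reward. Penalty weights are
    illustrative assumptions."""
    cut_gates = sum(1 for q0, q1 in two_qubit_gates if assignment[q0] != assignment[q1])

    core_loads = {}
    for core in assignment.values():
        core_loads[core] = core_loads.get(core, 0) + 1
    overflow = sum(max(0, load - core_capacity) for load in core_loads.values())

    return -cut_penalty * cut_gates - overflow_penalty * overflow

gates = [(0, 1), (1, 2), (2, 3), (0, 3)]
assignment = {0: 0, 1: 0, 2: 1, 3: 1}  # qubits 0,1 on core 0; qubits 2,3 on core 1
print(partition_reward(gates, assignment, core_capacity=2))  # -2.0: two inter-core gates
```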
Abstract: Wireless Network-on-Chip (WNoC) is a promising paradigm to overcome the versatility and scalability issues of conventional on-chip networks for current processor chips. However, the chip environment suffers from delay spread, which leads to intense Inter-Symbol Interference (ISI). This degrades the transmitted signal and makes it difficult to achieve the desired Bit Error Rate (BER) in this constraint-driven scenario. Time reversal (TR) is a technique that exploits the multipath richness of the channel to overcome the undesired effects of the delay spread. As the flip-chip channel is static and can be characterized beforehand, in this paper we propose to apply TR to the wireless in-package channel. We evaluate the effects of this technique in time and space from an electromagnetic point of view. Furthermore, we study the effectiveness of TR in modulated data communications in terms of BER as a function of transmission rate and power. Our results show not only the spatiotemporal focusing effect of TR in a chip, which could enable multiple spatial channels, but also that TR-based transmissions outperform non-TR transmissions, BER-wise, by an order of magnitude.
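As a rough illustration of the BER comparison, the sketch below sends BPSK symbols through a static multipath (ISI) channel with and without a TR prefilter and counts bit errors at a single-tap receiver. The channel model, noise level, and receiver are simplifying assumptions, not the paper's full-wave setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def ber_over_channel(h, prefilter=None, n_bits=20000, noise_std=0.4):
    """BER of BPSK over a static ISI channel using a single-tap receiver at the
    strongest effective tap. With TR, the prefilter is the conjugated,
    time-reversed CIR."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1.0
    tx = symbols if prefilter is None else np.convolve(symbols, prefilter)
    rx = np.convolve(tx, h)
    rx = rx + noise_std * (rng.standard_normal(rx.shape) + 1j * rng.standard_normal(rx.shape))
    eff = h if prefilter is None else np.convolve(prefilter, h)
    delay = int(np.argmax(np.abs(eff)))
    decisions = (np.real(rx[delay:delay + n_bits] * np.conj(eff[delay])) > 0).astype(int)
    return np.mean(decisions != bits)

# Toy exponential-decay CIR (illustrative assumption).
taps = 16
h = (rng.standard_normal(taps) + 1j * rng.standard_normal(taps)) * np.exp(-0.2 * np.arange(taps))
tr = np.conj(h[::-1]) / np.linalg.norm(h)
print("BER without TR:", ber_over_channel(h))
print("BER with TR:   ", ber_over_channel(h, prefilter=tr))
```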
Abstract: Scientific advancements in nanotechnology and advanced materials are paving the way toward nanoscale devices for in-body precision medicine, comprising integrated sensing, computing, communication, and data and energy storage capabilities. In the human cardiovascular system, such devices are envisioned to passively flow and continuously sense for detecting events of diagnostic interest. The diagnostic value of detecting such events can be enhanced by assigning to them their physical locations (e.g., body region), which is the main proposition of flow-guided localization. Current flow-guided localization approaches suffer from low localization accuracy and are by design unable to localize events within the entire cardiovascular system. Toward addressing this issue, we propose the utilization of Graph Neural Networks (GNNs) for this purpose and demonstrate the localization accuracy and coverage enhancements of our proposal over the existing State of the Art (SotA) approaches. Based on our evaluation, we provide several design guidelines for GNN-enabled flow-guided localization.
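A minimal sketch of the GNN idea, assuming on-body anchors exchange their raw nanodevice reports with neighboring anchors and the aggregated representation is classified into a body region. The plain-PyTorch message passing, layer sizes, and number of regions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyAnchorGNN(nn.Module):
    """Minimal message-passing network: anchors exchange their raw nanodevice
    reports with neighboring anchors, then the event is classified into a body
    region. One aggregation round and the layer sizes are illustrative."""

    def __init__(self, in_dim=6, hidden_dim=32, num_regions=25):
        super().__init__()
        self.msg = nn.Linear(in_dim, hidden_dim)
        self.update = nn.Linear(in_dim + hidden_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, num_regions)

    def forward(self, x, adj):
        # x: (num_anchors, in_dim) raw anchor features; adj: (num_anchors, num_anchors) 0/1
        neighbor_sum = adj @ torch.relu(self.msg(x))
        h = torch.relu(self.update(torch.cat([x, neighbor_sum], dim=-1)))
        return self.readout(h.mean(dim=0))  # logits over candidate body regions

anchors, feats = 4, 6
model = TinyAnchorGNN(in_dim=feats)
logits = model(torch.rand(anchors, feats), torch.ones(anchors, anchors))
print(logits.shape)  # torch.Size([25])
```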
Abstract: Nanodevices with Terahertz (THz)-based wireless communication capabilities are enabling flow-guided localization within the human bloodstream. Such localization allows for associating sensed events with their physical locations, providing benefits in precision medicine along the lines of early and precise diagnostics, reduced costs, and reduced invasiveness. Flow-guided localization is still in a rudimentary phase, with only a handful of works targeting the problem. Nonetheless, the performance assessments of the proposed solutions are already carried out in a non-standardized way, usually along a single performance metric, and ignore various aspects that are relevant at such a scale (e.g., nanodevices' limited energy) and for such a challenging environment (e.g., extreme attenuation of in-body THz propagation). As such, these assessments feature low levels of realism and cannot be compared in an objective way. Toward addressing this issue, we account for the environmental and scale-related peculiarities of the scenario and assess the performance of two state-of-the-art flow-guided localization approaches along a set of heterogeneous performance metrics, such as the accuracy and reliability of localization.
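Two of the heterogeneous metrics mentioned above can be illustrated as follows, with accuracy computed over detected events and reliability as the fraction of events reported at all. The exact metric definitions here are assumptions, not those of the evaluation framework.

```python
import numpy as np

def localization_metrics(true_regions, predicted_regions, detected_mask):
    """Two complementary metrics for flow-guided localization:
    - accuracy: fraction of *detected* events assigned to the correct region,
    - reliability: fraction of events that were detected/reported at all.
    These definitions are illustrative, not the exact ones in the paper."""
    true_regions = np.asarray(true_regions)
    predicted_regions = np.asarray(predicted_regions)
    detected_mask = np.asarray(detected_mask, dtype=bool)

    reliability = detected_mask.mean()
    if detected_mask.any():
        accuracy = (true_regions[detected_mask] == predicted_regions[detected_mask]).mean()
    else:
        accuracy = 0.0
    return {"accuracy": float(accuracy), "reliability": float(reliability)}

print(localization_metrics(
    true_regions=[3, 7, 7, 12],
    predicted_regions=[3, 7, 5, 12],
    detected_mask=[True, True, True, False],
))
```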
Abstract: Graph Neural Networks (GNNs) show great promise in problems dealing with graph-structured data. One of the unique traits of GNNs is their flexibility to adapt to multiple problems, which not only leads to wide applicability but also poses important challenges when finding the best model or acceleration technique for a particular problem. An example of such challenges resides in the fact that the accuracy or effectiveness of a GNN model or acceleration technique generally depends on the structure of the underlying graph. In this paper, in an attempt to address the problem of graph-dependent acceleration, we propose ProGNNosis, a data-driven model that can predict the training time of a given GNN model running over a graph of arbitrary characteristics by inspecting the input graph metrics. The prediction is based on a regression model previously trained offline on a diverse synthetic graph dataset. In practice, our method allows making informed decisions on which design to use for a specific problem. In the paper, the methodology to build ProGNNosis is defined and applied to a specific use case, where it helps to decide which graph representation is better. Our results show that ProGNNosis helps achieve an average speedup of 1.22X over randomly selecting a graph representation in multiple widely used GNN models such as GCN, GIN, GAT, and GraphSAGE.
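A minimal sketch of the ProGNNosis idea, assuming a regressor is fit offline on pairs of graph metrics and measured training times and then queried to pick the faster graph representation. The synthetic data, feature choice, and random-forest regressor are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the offline training set: graph metrics -> measured training time.
# Feature choice and the regressor type are illustrative assumptions.
rng = np.random.default_rng(0)
n_graphs = 200
metrics = rng.uniform(size=(n_graphs, 3))  # e.g., [num_nodes, avg_degree, clustering], normalized
train_time = 2.0 * metrics[:, 0] + 5.0 * metrics[:, 1] + rng.normal(0, 0.1, n_graphs)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(metrics, train_time)

def pick_representation(graph_metrics_per_repr):
    """Predict training time for each candidate graph representation of the same
    problem and return the index of the fastest one."""
    preds = model.predict(np.asarray(graph_metrics_per_repr))
    return int(np.argmin(preds)), preds

best, preds = pick_representation([[0.4, 0.8, 0.3],   # representation A
                                   [0.4, 0.3, 0.3]])  # representation B
print(f"predicted times: {preds}, pick representation {best}")
```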