Abstract: We study a linear computation problem over a quantum multiple access channel (LC-QMAC), where $S$ servers share an entangled state and separately store classical data streams $W_1, \ldots, W_S$ over a finite field $\mathbb{F}_d$. A user aims to compute $K$ linear combinations of these data streams, represented as $Y = \mathbf{V}_1 W_1 + \mathbf{V}_2 W_2 + \cdots + \mathbf{V}_S W_S \in \mathbb{F}_d^{K \times 1}$. To this end, each server encodes its classical information into its local quantum subsystem and transmits it to the user, who retrieves the desired computations via quantum measurements. In this work, we propose an achievable scheme for the LC-QMAC based on the stabilizer formalism and ideas from entanglement-assisted quantum error-correcting codes (EAQECCs). Specifically, given any linear computation matrix, we construct a self-orthogonal matrix that can be implemented using the stabilizer formalism. We also apply precoding matrices to minimize the number of auxiliary qudits required. Our scheme achieves more computations per qudit, i.e., a higher computation rate, than the best-known methods in the literature, and attains the capacity in certain cases.
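As a concrete illustration of the computation target only (not of the quantum delivery scheme, which is the paper's contribution), a minimal Python sketch of $Y = \mathbf{V}_1 W_1 + \cdots + \mathbf{V}_S W_S$ over $\mathbb{F}_d$; all sizes and names here are illustrative assumptions.

```python
# Minimal sketch of the LC-QMAC computation target over F_d (classical
# arithmetic only; the quantum encoding/measurement is not modeled here).
import numpy as np

d = 5               # prime field size, so arithmetic mod d is F_d
S, K, L = 3, 2, 4   # illustrative: servers, outputs, symbols per stream

rng = np.random.default_rng(0)
W = [rng.integers(0, d, size=(L, 1)) for _ in range(S)]  # data streams W_s
V = [rng.integers(0, d, size=(K, L)) for _ in range(S)]  # coefficients V_s

# Desired computation Y = V_1 W_1 + ... + V_S W_S in F_d^{K x 1}
Y = sum(V[s] @ W[s] for s in range(S)) % d
print(Y)
```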
Abstract: Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating external knowledge into the generation context, improving accuracy and reducing hallucinations. However, multi-modal RAG systems face unique challenges: (i) the retrieval process may select entries that are irrelevant to the user query (e.g., images, documents), and (ii) vision-language models or multi-modal language models like GPT-4o may hallucinate when processing these entries to generate the RAG output. In this paper, we address the first challenge, i.e., improving the selection of relevant context from the knowledge base in the retrieval phase of multi-modal RAG. Specifically, we leverage the relevancy score (RS) measure, designed in our previous work for evaluating RAG performance, to select more relevant entries during retrieval. Retrieval based on embeddings (e.g., CLIP embeddings) and cosine similarity usually performs poorly, particularly for multi-modal data. We show that, by using a more advanced relevancy measure, one can enhance the retrieval process by selecting more relevant pieces from the knowledge base and eliminating irrelevant pieces from the context, adaptively selecting up to $k$ entries instead of a fixed number of entries. Our evaluation on the COCO dataset demonstrates a significant improvement in the selection of relevant context and in the accuracy of the generated response.
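A minimal sketch of the adaptive up-to-$k$ selection idea, assuming a callable `relevancy_score` standing in for the RS model; the function name and the threshold value are illustrative, not the paper's exact interface.

```python
# Hedged sketch: keep at most k retrieved entries, and only those whose
# relevancy score clears a threshold, instead of a fixed top-k cut.
def adaptive_retrieve(query, knowledge_base, relevancy_score, k=5, tau=0.5):
    """Return up to k entries with relevancy_score(query, entry) > tau."""
    scored = sorted(((relevancy_score(query, e), e) for e in knowledge_base),
                    key=lambda pair: pair[0], reverse=True)
    return [e for score, e in scored[:k] if score > tau]
```

The cutoff is score-driven rather than count-driven: a query with only two relevant entries in the knowledge base yields two context pieces, not a padded five.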
Abstract: Retrieval-augmented generation (RAG) improves large language models (LLMs) by using external knowledge to guide response generation, reducing hallucinations. However, RAG, particularly multi-modal RAG, can introduce new hallucination sources: (i) the retrieval process may select irrelevant pieces (e.g., documents, images) as raw context from the database, and (ii) retrieved images are processed into text-based context via vision-language models (VLMs) or used directly by multi-modal language models (MLLMs) like GPT-4o, which may hallucinate. To address this, we propose a novel framework to evaluate the reliability of multi-modal RAG using two performance measures: (i) the relevancy score (RS), assessing the relevance of retrieved entries to the query, and (ii) the correctness score (CS), evaluating the accuracy of the generated response. We train the RS and CS models using a ChatGPT-derived database and human-evaluator samples. Results show that both models achieve ~88% accuracy on test data. Additionally, we construct a 5000-sample human-annotated database evaluating the relevancy of retrieved pieces and the correctness of response statements. Our RS model aligns with human preferences 20% more often than CLIP in retrieval, and our CS model matches human preferences ~91% of the time. Finally, we assess the selection and generation performance of various RAG systems using RS and CS.
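For intuition, a hedged sketch of how the two measures could be applied to score a RAG pipeline end to end; `rs_model` and `cs_model` are assumed callable interfaces to the trained scorers, not the paper's actual API.

```python
# Illustrative use of the two measures: RS scores each retrieved entry
# against the query; CS scores each statement of the generated response.
def evaluate_rag(query, retrieved, response_statements, rs_model, cs_model):
    rs_scores = [rs_model(query, entry) for entry in retrieved]
    cs_scores = [cs_model(query, stmt) for stmt in response_statements]
    return {
        "mean_rs": sum(rs_scores) / len(rs_scores),  # retrieval quality
        "mean_cs": sum(cs_scores) / len(cs_scores),  # generation correctness
    }
```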
Abstract: Mixture-of-Agents (MoA) has recently been proposed as a method to enhance the performance of large language models (LLMs), enabling multiple individual LLMs to work together for collaborative inference. This collaborative approach results in improved responses to user prompts compared to relying on a single LLM. In this paper, we consider such an MoA architecture in a distributed setting, where the LLMs operate on individual edge devices, each uniquely associated with a user and equipped with its own distributed computing power. These devices exchange information using decentralized gossip algorithms, allowing different device nodes to communicate without the supervision of a centralized server. In the considered setup, each user has their own LLM to address user prompts. Additionally, the devices gossip either their own user-specific prompts or augmented prompts to generate more refined answers to certain queries. User prompts are temporarily stored in the device queues when their corresponding LLMs are busy. Given the memory limitations of edge devices, it is crucial to ensure that the average queue sizes in the system remain bounded. We address this by theoretically deriving the queuing stability conditions for the device queues under reasonable assumptions, which we also validate experimentally. Further, we demonstrate through experiments, using open-source LLMs to implement distributed MoA, that certain MoA configurations produce higher-quality responses than others, as evaluated on the AlpacaEval 2.0 benchmark. The implementation is available at: https://github.com/purbeshmitra/distributed_moa.
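A toy simulation of the stability intuition, assuming Bernoulli arrivals and services in discrete slots (a deliberate simplification of the paper's setting): a device's average queue stays bounded only when the prompt arrival rate is below its LLM's service rate.

```python
# Toy single-device queue: Bernoulli(lam) arrivals, Bernoulli(mu) services.
# The time-averaged queue length stays bounded roughly when lam < mu.
import random

def average_queue_length(lam, mu, steps=100_000, seed=0):
    random.seed(seed)
    q, total = 0, 0
    for _ in range(steps):
        if random.random() < lam:            # a prompt arrives
            q += 1
        if q > 0 and random.random() < mu:   # the LLM finishes a prompt
            q -= 1
        total += q
    return total / steps

print(average_queue_length(lam=0.4, mu=0.5))  # stable: small average
print(average_queue_length(lam=0.6, mu=0.5))  # unstable: grows with steps
```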
Abstract: We consider the quantum \emph{symmetric} private information retrieval (QSPIR) problem in a system with $N$ databases and $K$ messages, with $U$ unresponsive servers, $T$-colluding servers, and $X$-security, under several fundamental threat models. In the first model, there is a set $\mathcal{E}_1$ of eavesdropped links in the uplink direction (from the user to the $N$ servers) and a set $\mathcal{E}_2$ of eavesdropped links in the downlink direction (from the servers to the user), where $|\mathcal{E}_1|, |\mathcal{E}_2| \leq E$; we coin this eavesdropper setting \emph{dynamic} eavesdroppers. We show that a super-dense coding gain can be achieved in some regimes. In the second model, we consider Byzantine servers, i.e., servers that can coordinate to devise a plan to harm the privacy and security of the system, together with static eavesdroppers, who listen to the same links in both the uplink and downlink directions. It is important to note the considerable difference between the two threat models, since the eavesdroppers can take great advantage of the presence of the Byzantine servers. Unlike previous works on SPIR with Byzantine servers, which assume that the Byzantine servers can send only random symbols independent of the stored messages, we follow the definition of Byzantine servers in \cite{byzantine_tpir}, where the Byzantine servers can send symbols that are functions of the storage, the queries, and random symbols, in a way that inflicts greater harm on the system. In the third and most novel threat model, we consider the presence of Byzantine servers and dynamic eavesdroppers together. We show that having dynamic eavesdroppers along with Byzantine servers in the same system model creates more threats than having static eavesdroppers with Byzantine servers.
Abstract: Transformers, known for their attention mechanisms, have proven highly effective in focusing on critical elements within complex data. This feature can be used effectively to address time-varying channels in wireless communication systems. In this work, we introduce a channel-aware adaptive framework for semantic communication, where different regions of the image are encoded and compressed based on their semantic content. By employing vision transformers, we interpret the attention mask as a measure of the semantic content of the patches and dynamically categorize the patches to be compressed at various rates, as a function of the instantaneous channel bandwidth. Our method enhances communication efficiency by adapting the encoding resolution to the content's relevance, ensuring that critical information is preserved even in highly constrained environments. We evaluate the proposed adaptive transmission framework on the TinyImageNet dataset, measuring both reconstruction quality and accuracy. The results demonstrate that our approach maintains high semantic fidelity while optimizing bandwidth, providing an effective solution for transmitting multi-resolution data under limited bandwidth conditions.
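A hedged sketch of the patch-categorization step, assuming a precomputed per-patch attention score; the tier sizes and bit rates below are illustrative, not the paper's exact allocation rule.

```python
# Sketch: rank patches by attention score, then assign higher bit rates to
# the most semantically important patches, with tier sizes scaled by the
# instantaneous channel bandwidth. Rates and tier fractions are assumptions.
import numpy as np

def allocate_rates(attention_mask, bandwidth_fraction, rates=(8, 4, 1)):
    """attention_mask: (num_patches,) importance; returns bits per patch."""
    order = np.argsort(attention_mask)[::-1]      # most important first
    n = len(order)
    n_high = int(n * bandwidth_fraction * 0.5)    # top tier shrinks as the
    n_mid = int(n * bandwidth_fraction)           # available bandwidth drops
    bits = np.full(n, rates[2])                   # default: coarse rate
    bits[order[:n_mid]] = rates[1]
    bits[order[:n_high]] = rates[0]
    return bits

mask = np.random.default_rng(0).random(64)        # e.g., 64 patches
print(allocate_rates(mask, bandwidth_fraction=0.25))
```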
Abstract: In this paper, we address task-oriented (or goal-oriented) communications, where an encoder at the transmitter learns compressed latent representations of data, which are then transmitted over a wireless channel. At the receiver, a decoder performs a machine learning task, specifically classifying the received signals. The deep neural networks corresponding to the encoder-decoder pair are jointly trained, taking both channel and data characteristics into account. Our objective is to achieve high accuracy on the underlying task while minimizing the number of channel uses, determined by the encoder's output size. To this end, we propose a multi-round, multi-task learning (MRMTL) approach for the dynamic update of channel uses in multi-round transmissions. The transmitter incrementally sends an increasing number of encoded samples over the channel based on feedback from the receiver, and the receiver utilizes the signals from previous rounds to enhance task performance, rather than considering only the latest transmission. This approach employs multi-task learning to jointly optimize accuracy across varying numbers of channel uses, treating each configuration as a distinct task. By evaluating the receiver's confidence in its task decisions, MRMTL decides whether to allocate additional channel uses in multiple rounds. We characterize both the accuracy and the delay (total number of channel uses) of MRMTL, demonstrating that it achieves accuracy close to that of conventional methods requiring large numbers of channel uses, but with reduced delay, by incorporating signals from prior rounds. We consider the CIFAR-10 dataset, convolutional neural network architectures, and AWGN and Rayleigh channel models for performance evaluation. We show that MRMTL significantly improves the efficiency of task-oriented communications, balancing accuracy and latency effectively.
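A hedged sketch of the confidence-driven round logic; `encoder`, `decoder`, the round sizes, and the confidence threshold are assumed interfaces for illustration, not the trained models from the paper.

```python
# Sketch: send the latent in increments; stop once the receiver's task
# confidence clears a threshold, reusing all previously received rounds.
def mrmtl_infer(x, encoder, decoder, round_sizes=(16, 16, 32), tau=0.9):
    z = encoder(x)                      # full latent vector at transmitter
    received, used, probs = [], 0, None
    for size in round_sizes:
        received.append(z[used:used + size])  # one more round of symbols
        used += size
        probs = decoder(received)       # decoder sees all rounds so far
        if max(probs) >= tau:           # confident: no more channel uses
            break
    return probs, used                  # class scores, total channel uses
```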
Abstract: In a classification task, counterfactual explanations provide the minimum change needed for an input to be classified into a favorable class. We consider the problem of privately retrieving the exact closest counterfactual from a database of accepted samples while enforcing that certain features of the input sample cannot be changed, i.e., they are \emph{immutable}. An applicant (user) whose feature vector is rejected by a machine learning model wants to retrieve the sample closest to them in the database without altering a private subset of their features, which constitutes the immutable set. While doing so, the user should keep their feature vector, immutable set, and the resulting counterfactual index information-theoretically private from the institution. We refer to this as the immutable private counterfactual retrieval (I-PCR) problem, which generalizes PCR to a more practical setting. In this paper, we propose two I-PCR schemes by leveraging techniques from private information retrieval (PIR) and characterize their communication costs. Further, we quantify the information that the user learns about the database and compare it for the proposed schemes.
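For intuition, the non-private version of the retrieval target is a constrained nearest-neighbor search; the sketch below (with illustrative NumPy types) computes what the I-PCR schemes retrieve privately.

```python
# Non-private sketch of the I-PCR target: the accepted sample closest to x
# among those that agree with x on every immutable feature. The private
# schemes retrieve this index without revealing x or the immutable set.
import numpy as np

def closest_counterfactual(x, database, immutable):
    """x: (m,) rejected vector; database: (n, m) accepted samples;
    immutable: indices of features that must remain unchanged."""
    mask = np.all(database[:, immutable] == x[immutable], axis=1)
    candidates = np.where(mask)[0]
    if candidates.size == 0:
        return None                      # no feasible counterfactual
    dists = np.linalg.norm(database[candidates] - x, axis=1)
    return candidates[np.argmin(dists)]  # index of the closest match
```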
Abstract: Age of incorrect information (AoII) is a recently proposed freshness and mismatch metric that penalizes an incorrect estimate along with its duration. Therefore, keeping track of AoII requires knowledge of both the source and estimation processes. In this paper, we consider a time-slotted, pull-based remote estimation system under a sampling rate constraint, where the information source is a general discrete-time Markov chain (DTMC). Moreover, packet transmission times from the source to the monitor are non-zero, which prevents the monitor from having perfect information on the actual AoII process at any time. Hence, for this pull-based system, we propose that the monitor maintain a sufficient statistic called the {\em belief}, namely the joint distribution of the age and source processes given the history of all observations. Using the belief, we first propose a maximum a posteriori (MAP) estimator to be used at the monitor, as opposed to the existing martingale estimators in the literature. Second, we obtain the optimality equations from the belief-MDP (Markov decision process) formulation. Finally, we propose two belief-dependent policies: one based on deep reinforcement learning, and the other a threshold-based policy that uses the instantaneous expected AoII.
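A simplified sketch of the belief bookkeeping, reduced to the marginal over the source state for readability (the paper's belief is the joint distribution of age and state); `P` denotes the DTMC transition matrix.

```python
# Sketch: propagate the belief through the DTMC each slot without a sample,
# collapse it when a sample arrives, and read off the MAP estimate.
import numpy as np

def propagate(belief, P):
    """One slot with no new sample: belief_{t+1} = belief_t P."""
    return belief @ P

def update_with_sample(num_states, observed_state):
    """A delivered sample reveals the state at its sampling instant."""
    belief = np.zeros(num_states)
    belief[observed_state] = 1.0
    return belief

def map_estimate(belief):
    return int(np.argmax(belief))  # MAP state under the current belief
```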
Abstract: We consider a gossiping network, where a source node sends updates to a network of $n$ gossiping nodes. Meanwhile, the connectivity topology of the gossiping network changes over time among a finite number of connectivity ``states,'' such as the fully connected graph, the ring graph, the grid graph, etc. The transitions of the connectivity graph among the possible options are governed by a finite-state continuous-time Markov chain (CTMC); when the CTMC is in a particular state, the gossiping network has the graph topology indicated by that state. We evaluate the impact of time-varying graph topologies on the freshness of information for nodes in the network, using the version age of information metric to quantify freshness at the nodes. Using a method similar to first passage percolation, we show that, if one of the states of the CTMC is the fully connected graph and the transition rates of the CTMC are constant, then the version age of a typical node in the network scales logarithmically with the number of nodes, as it would if the network were always fully connected. That is, there is no loss in the age scaling in this setting, even though the network topology deviates from full connectivity. We perform numerical simulations and analyze more generally how different topologies and different CTMC rates (which might depend on the number of nodes) affect the average version age scaling of a node in the gossiping network.
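A small sketch of the topology process alone, assuming an illustrative three-state generator matrix; the version age would then be tracked over whichever graph is active at each instant.

```python
# Sketch: a CTMC over graph "states" (complete, ring, grid), simulated via
# exponential holding times and jump probabilities from the generator Q.
import numpy as np

def simulate_topology(Q, states, T, seed=0):
    """Q: generator matrix; states: labels; T: horizon. Returns (time, state) path."""
    rng = np.random.default_rng(seed)
    i, t, path = 0, 0.0, [(0.0, states[0])]
    while t < T:
        t += rng.exponential(1.0 / -Q[i, i])   # holding time in state i
        probs = np.clip(Q[i], 0, None)
        probs[i] = 0.0
        i = rng.choice(len(states), p=probs / probs.sum())  # jump to next state
        path.append((t, states[i]))
    return path

Q = np.array([[-1.0,  0.5,  0.5],
              [ 1.0, -2.0,  1.0],
              [ 0.5,  0.5, -1.0]])   # illustrative transition rates
print(simulate_topology(Q, ["complete", "ring", "grid"], T=10.0)[:5])
```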