Abstract:Large Language Models (LLMs) are critical for a wide range of applications, but serving them efficiently becomes increasingly challenging as inputs become more complex. Context caching improves serving performance by exploiting inter-request dependency and reusing the key-value (KV) cache across requests, thereby reducing time-to-first-token (TTFT). However, existing prefix-based context caching requires exact token-prefix matches, limiting cache reuse in few-shot learning, multi-document QA, and retrieval-augmented generation, where prefixes may vary. In this paper, we present EPIC, an LLM serving system that introduces position-independent context caching (PIC), enabling modular KV cache reuse regardless of a token chunk's position (or prefix). EPIC features two key designs: AttnLink, which leverages static attention sparsity to minimize the recomputation needed for accuracy recovery, and KVSplit, a customizable chunking method that preserves semantic coherence. Our experiments demonstrate that EPIC delivers up to 8x improvement in TTFT and 7x higher throughput over existing systems, with negligible or no accuracy loss. By addressing the limitations of traditional caching approaches, EPIC enables more scalable and efficient LLM inference.
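To make the contrast concrete, the sketch below (Python, illustrative only; not EPIC's actual implementation) contrasts prefix-based lookup, which reuses a KV cache only on an exact token-prefix match, with a hypothetical position-independent, chunk-keyed cache in the spirit of PIC. All class and function names are assumptions.

```python
# Minimal sketch (not EPIC's implementation) contrasting prefix-based lookup
# with a hypothetical position-independent, chunk-keyed KV cache.
import hashlib

def chunk_key(tokens):
    """Hash a token chunk by content only, ignoring its position in the prompt."""
    return hashlib.sha256(bytes(str(tokens), "utf-8")).hexdigest()

class PrefixCache:
    """Reuses a KV cache only when the new prompt starts with a cached prefix."""
    def __init__(self):
        self.entries = {}          # prefix tuple -> cached KV blob

    def lookup(self, tokens):
        for n in range(len(tokens), 0, -1):
            hit = self.entries.get(tuple(tokens[:n]))
            if hit is not None:
                return hit, n      # KV for the matched prefix, prefix length
        return None, 0

class ChunkCache:
    """Position-independent: any previously seen chunk is reusable wherever it appears."""
    def __init__(self, chunk_size=256):
        self.chunk_size = chunk_size
        self.entries = {}          # content hash -> cached KV blob

    def lookup(self, tokens):
        hits = []
        for i in range(0, len(tokens), self.chunk_size):
            chunk = tokens[i:i + self.chunk_size]
            kv = self.entries.get(chunk_key(chunk))
            # positions with kv=None still need prefill; cached chunks need only
            # light recomputation to restore cross-chunk attention accuracy
            hits.append((i, kv))
        return hits
```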
Abstract:Computer-aided diagnostics has benefited from the development of deep learning-based computer vision techniques in recent years. Traditional supervised deep learning methods assume that the test sample is drawn from the same distribution as the training data. However, out-of-distribution samples may be encountered in real-world clinical scenarios, causing silent failures in deep learning-based medical image analysis tasks. Recently, research has explored various out-of-distribution (OOD) detection settings and techniques to enable trustworthy medical AI systems. In this survey, we systematically review recent advances in OOD detection for medical image analysis. We first examine several factors that may cause distributional shift when a deep learning-based model is deployed in clinical scenarios, and define three types of distributional shift on top of these factors. We then propose a framework to categorize and characterize existing solutions, and review previous studies according to this methodological taxonomy. Our discussion also covers evaluation protocols and metrics, as well as remaining challenges and under-explored research directions.
Abstract:Channel state information (CSI) is essential to reap the full benefits of millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems. Traditional channel estimation methods using pilot frames (PF) incur excessive overhead. To reduce the demand for PF, data frames (DF) can be adopted for joint channel estimation and data recovery. However, the computational complexity of DF-based methods is prohibitively high. To reduce this complexity, we propose a joint channel estimation and data recovery (JCD) method assisted by a small number of PF for mmWave massive MIMO systems. The proposed method has two stages. In Stage 1, unlike traditional PF-based methods, the PF-assisted method captures the angles of arrival (AoA) of the principal components (PC) of the channels. In Stage 2, JCD is designed for parallel implementation based on a multi-user decoupling strategy. Theoretical analysis demonstrates that the PF-assisted JCD method achieves performance equivalent to the Bayesian-optimal DF-based method while greatly reducing the computational complexity. Simulation results are also presented to validate the analytical results.
Abstract:Deep neural networks have been widely used in communication signal recognition and have achieved remarkable performance. This superiority, however, typically depends on massive labeled examples for supervised learning, whereas training a deep neural network on small datasets with few labels generally leads to overfitting and degraded performance. To this end, we develop a semi-supervised learning (SSL) method that effectively utilizes a large collection of more readily available unlabeled signal data to improve generalization. The proposed method relies largely on a novel implementation of consistency-based regularization, termed Swapped Prediction, which leverages strong data augmentation to perturb an unlabeled sample and encourages the corresponding model prediction to stay close to the prediction on the original sample, optimized with a scaled cross-entropy loss with swapped symmetry. Extensive experiments indicate that the proposed method achieves promising results for deep SSL in communication signal recognition.
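As an illustration, the PyTorch-style sketch below shows one plausible form of a symmetric ("swapped") consistency loss between predictions on an unlabeled sample and its strongly augmented view; the paper's exact scaling and formulation may differ.

```python
# Minimal PyTorch sketch of a symmetric ("swapped") consistency loss on
# unlabeled data; the exact formulation and scaling in the paper may differ.
import torch
import torch.nn.functional as F

def swapped_prediction_loss(logits_orig, logits_aug, scale=1.0):
    """Cross-entropy applied in both directions between the prediction on the
    original sample and on its strongly augmented view, then averaged."""
    p_orig = F.softmax(logits_orig, dim=-1)
    p_aug = F.softmax(logits_aug, dim=-1)
    # CE(target=p_orig, prediction=logits_aug) + CE(target=p_aug, prediction=logits_orig)
    loss_a = -(p_orig.detach() * F.log_softmax(logits_aug, dim=-1)).sum(dim=-1)
    loss_b = -(p_aug.detach() * F.log_softmax(logits_orig, dim=-1)).sum(dim=-1)
    return scale * (loss_a + loss_b).mean() / 2
```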
Abstract:Deep learning has been widely used in radio frequency (RF) fingerprinting. Despite its excellent performance, most existing methods adopt a closed-set assumption and therefore cannot effectively handle signals emitted from unknown devices never seen during training. In this letter, we exploit prototype learning for open-set RF fingerprinting and propose two improvements, consistency-based regularization and online label smoothing, which aim to learn a more robust feature space. Experimental results on a real-world RF dataset demonstrate that the proposed measures significantly improve prototype learning and achieve promising open-set recognition performance for RF fingerprinting.
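For intuition, the snippet below sketches prototype-based open-set prediction: a test embedding is assigned to its nearest class prototype, or rejected as unknown when that distance exceeds a threshold. The Euclidean distance and fixed threshold are assumptions, not necessarily the letter's exact procedure.

```python
# Minimal sketch of prototype-based open-set recognition; thresholding scheme
# is an assumption, not the letter's exact procedure.
import numpy as np

def open_set_predict(embedding, prototypes, threshold):
    """prototypes: (num_classes, dim) array of learned class prototypes."""
    dists = np.linalg.norm(prototypes - embedding, axis=1)   # distance to each prototype
    nearest = int(np.argmin(dists))
    if dists[nearest] > threshold:
        return -1          # reject: emitter not seen during training
    return nearest         # known-device class index
```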
Abstract:As a revolutionary generative paradigm in deep learning, generative adversarial networks (GANs) have been widely applied in various fields to synthesize realistic data. However, it is challenging for conventional GANs to synthesize raw signal data, especially in complex cases. In this paper, we develop a novel GAN framework for radio generation called "Radio GAN". Compared to conventional methods, it benefits from three key improvements. The first is learning based on sampling points, which aims to model the underlying sampling distribution of radio signals. The second is an unrolled generator design, combined with an estimated pure signal distribution as a prior, which greatly reduces learning difficulty and effectively improves learning precision. Finally, we present an energy-constrained optimization algorithm that achieves better training stability and convergence. Experimental results with extensive simulations demonstrate that our proposed GAN framework can effectively learn transmitter characteristics and various channel effects, thereby accurately modeling the underlying sampling distribution to synthesize high-quality radio signals.
Abstract:As a promising non-password authentication technology, radio frequency (RF) fingerprinting can greatly improve wireless security. Recent work has shown that RF fingerprinting based on deep learning can significantly outperform conventional approaches. This superiority, however, is mainly attributed to supervised learning on a large amount of labeled data, and performance degrades significantly when only limited labeled data is available, limiting the practicality of many existing algorithms. Considering that enough unlabeled data can often be obtained in practice with minimal resources, we leverage deep semi-supervised learning for RF fingerprinting, which relies largely on a composite data augmentation scheme designed for radio signals, combined with two popular techniques: consistency-based regularization and pseudo-labeling. Experimental results on both simulated and real-world datasets demonstrate that the proposed semi-supervised RF fingerprinting method is far superior to competing approaches and achieves performance close to that of fully supervised learning with a very limited number of labeled examples.
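A minimal sketch of the pseudo-labeling component is shown below, in a FixMatch-style form; the composite augmentation scheme and the exact confidence threshold used in the paper are assumptions here.

```python
# Minimal PyTorch-style sketch combining pseudo-labeling with consistency
# regularization on unlabeled signals (FixMatch-style; threshold and
# augmentation choices are assumptions).
import torch
import torch.nn.functional as F

def unlabeled_loss(model, x_weak, x_strong, conf_threshold=0.95):
    with torch.no_grad():
        probs_weak = F.softmax(model(x_weak), dim=-1)    # prediction on weak view
        conf, pseudo = probs_weak.max(dim=-1)            # pseudo-labels + confidence
        mask = (conf >= conf_threshold).float()          # keep confident samples only
    logits_strong = model(x_strong)                      # prediction on strong view
    per_sample = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (mask * per_sample).mean()
```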
Abstract:Decentralized federated learning (DFL) is a variant of federated learning in which edge nodes communicate only with their one-hop neighbors to learn the optimal model. Because information exchange in DFL is restricted to one-hop neighborhoods, inefficient exchange requires more communication rounds to reach a target training loss, which greatly reduces communication efficiency. In this paper, we propose a new non-uniform quantization of model parameters to improve DFL convergence. Specifically, we first apply the Lloyd-Max algorithm to DFL (LM-DFL) to minimize quantization distortion by adaptively adjusting the quantization levels. A convergence guarantee for LM-DFL is established without a convex-loss assumption. Building on LM-DFL, we then propose a doubly-adaptive DFL scheme that jointly uses an ascending number of quantization levels to reduce the total amount of communicated information during training and adapts the quantization levels to non-uniform gradient distributions. Experimental results on the MNIST and CIFAR-10 datasets illustrate the superiority of LM-DFL in achieving optimal quantization distortion and show that doubly-adaptive DFL can greatly improve communication efficiency.
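For reference, the NumPy sketch below shows the classical Lloyd-Max iteration that places non-uniform quantization levels by alternating between midpoint decision boundaries and centroid levels; how LM-DFL integrates this into the decentralized update is not shown.

```python
# Minimal NumPy sketch of the Lloyd-Max iteration used to place non-uniform
# quantization levels for a batch of model-parameter values.
import numpy as np

def lloyd_max(values, num_levels, iters=50):
    values = np.sort(values.ravel())
    # initialize levels uniformly over the observed range
    levels = np.linspace(values[0], values[-1], num_levels)
    for _ in range(iters):
        # decision boundaries: midpoints between adjacent levels
        bounds = (levels[:-1] + levels[1:]) / 2
        # assign each value to its quantization cell
        idx = np.digitize(values, bounds)
        # levels: conditional mean (centroid) of each cell, minimizing distortion
        for k in range(num_levels):
            cell = values[idx == k]
            if cell.size > 0:
                levels[k] = cell.mean()
    return levels

def quantize(values, levels):
    """Map each value to its nearest Lloyd-Max level."""
    idx = np.abs(values[..., None] - levels).argmin(axis=-1)
    return levels[idx]
```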
Abstract:The computational prediction of wave propagation in dam-break floods is a long-standing problem in hydrodynamics and hydrology. To date, conventional numerical models based on the Saint-Venant equations have been the dominant approach. Here we show that a machine learning model, well trained on a minimal amount of data, can help predict the long-term dynamic behavior of a one-dimensional dam-break flood with satisfactory accuracy. For this purpose, we solve the Saint-Venant equations for a one-dimensional dam-break flood scenario using the Lax-Wendroff numerical scheme and train a reservoir computing echo state network (RC-ESN) on the simulated time series of flow depths. The RC-ESN model demonstrates good prediction ability, forecasting wave propagation behavior 286 time steps ahead with a root mean square error (RMSE) smaller than 0.01, whereas a conventional long short-term memory (LSTM) model reaches a comparable RMSE only 81 time steps ahead. To characterize the performance of the RC-ESN model, we also provide a sensitivity analysis of prediction accuracy with respect to key parameters, including training set size, reservoir size, and spectral radius. The results indicate that the RC-ESN is less dependent on the training set size, and a medium reservoir size of K = 1200-2600 is sufficient. We confirm that the spectral radius ρ has a complex influence on prediction accuracy and currently suggest a smaller spectral radius ρ. By changing the initial flow depth of the dam break, we also conclude that the prediction horizon of the RC-ESN is larger than that of the LSTM.
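To illustrate the reservoir-computing approach, the sketch below implements a bare-bones echo state network in NumPy: a fixed random reservoir driven by the input sequence, with only a linear readout trained by ridge regression. Hyperparameter values are illustrative, not those tuned in the paper.

```python
# Minimal NumPy sketch of a reservoir-computing echo state network (ESN).
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9, in_scale=0.1):
    W_in = in_scale * rng.uniform(-1, 1, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to target radius
    return W_in, W

def run_reservoir(W_in, W, inputs):
    """inputs: (T, n_in) sequence, e.g. flow depths along the channel."""
    states = np.zeros((inputs.shape[0], W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W @ x)    # reservoir update (leaky integration omitted)
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """Ridge regression readout: W_out = Y^T X (X^T X + lambda I)^-1."""
    reg = ridge * np.eye(states.shape[1])
    return np.linalg.solve(states.T @ states + reg, states.T @ targets).T
```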
Abstract:In time-division duplexing (TDD) millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems, the reciprocity mismatch severely degrades the performance of the hybrid beamforming (HBF). In this work, to mitigate the detrimental effect of the reciprocity mismatch, we investigate reciprocity calibration for the mmWave-HBF system with a fully-connected phase shifter network. To reduce the overhead and computational complexity of reciprocity calibration, we first decouple digital radio frequency (RF) chains and analog RF chains with beamforming design. Then, the entire calibration problem of the HBF system is equivalently decomposed into two subproblems corresponding to the digital-chain calibration and analog-chain calibration. To solve the calibration problems efficiently, a closed-form solution to the digital-chain calibration problem is derived, while an iterative-alternating optimization algorithm for the analog-chain calibration problem is proposed. To measure the performance of the proposed algorithm, we derive the Cramér-Rao lower bound on the errors in estimating mismatch coefficients. The results reveal that the estimation errors of mismatch coefficients of digital and analog chains are uncorrelated, and that the mismatch coefficients of receive digital chains can be estimated perfectly. Simulation results are presented to validate the analytical results and to show the performance of the proposed calibration approach.