Abstract: Semantic caching has emerged as a pivotal technique for scaling LLM applications and is widely adopted by major providers, including AWS and Microsoft. By using semantic embedding vectors as cache keys, this mechanism effectively reduces latency and redundant computation for semantically similar queries. In this work, we conceptualize semantic cache keys as a form of fuzzy hash. We demonstrate that the locality required to maximize cache hit rates fundamentally conflicts with the cryptographic avalanche effect necessary for collision resistance. Our conceptual analysis formalizes this inherent trade-off between performance (locality) and security (collision resistance), revealing that semantic caching is naturally vulnerable to key collision attacks. While prior research has focused on side-channel and privacy risks, we present the first systematic study of integrity risks arising from cache collisions. We introduce CacheAttack, an automated framework for launching black-box collision attacks, and evaluate it on security-critical tasks and agentic workflows. CacheAttack achieves a hit rate of 86% in LLM response hijacking and can induce malicious behaviors in LLM agents, while preserving strong transferability across different embedding models. A case study on a financial agent further illustrates the real-world impact of these vulnerabilities. Finally, we discuss mitigation strategies.
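The cache-key mechanism this abstract describes can be sketched as an embedding-keyed lookup with a similarity threshold. The toy embedding and the 0.8 threshold below are illustrative assumptions, not the schemes used by any actual provider:

```python
import numpy as np

def toy_embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy bag-of-words embedding (illustrative; real caches use learned embedding models)."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        # Each token maps to a fixed random direction, seeded by its hash.
        rng = np.random.default_rng(abs(hash(tok)) % (2**32))
        v += rng.standard_normal(dim)
    n = np.linalg.norm(v)
    return v / n if n else v

class SemanticCache:
    """Embedding-keyed cache: a query hits any stored entry within a cosine-similarity threshold."""
    def __init__(self, threshold: float = 0.8):
        # Lower threshold = more locality (more hits) but a wider collision surface.
        self.threshold = threshold
        self.entries = []  # (embedding, cached response)

    def put(self, query: str, response: str) -> None:
        self.entries.append((toy_embed(query), response))

    def get(self, query: str):
        q = toy_embed(query)
        for emb, resp in self.entries:
            if float(q @ emb) >= self.threshold:  # cosine similarity of unit vectors
                return resp
        return None
```

Because nearby queries deliberately share a key (the opposite of an avalanche effect), any crafted query whose embedding lands within the threshold of a victim entry is served that entry's cached response; that shared region is the collision surface an attack must search.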
Abstract: LLM-powered agents often use prompt compression to reduce inference costs, but this introduces a new security risk. Compression modules, which are optimized for efficiency rather than safety, can be manipulated by adversarial inputs, causing semantic drift and altering LLM behavior. This work identifies prompt compression as a novel attack surface and presents CompressionAttack, the first framework to exploit it. CompressionAttack includes two strategies: HardCom, which uses discrete adversarial edits for hard compression, and SoftCom, which performs latent-space perturbations for soft compression. Experiments on multiple LLMs show up to 80% attack success and 98% preference flips, while remaining highly stealthy and transferable. Case studies in VSCode Cline and Ollama confirm real-world impact, and current defenses prove ineffective, highlighting the need for stronger protections.
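As a toy illustration of the failure mode hard compression is prone to, consider a naive compressor that keeps the highest-scoring tokens under a budget. The length-based score below is a deliberately simplistic stand-in for the learned importance scores real compression modules use:

```python
def toy_compress(prompt: str, budget: int) -> str:
    """Keep the `budget` highest-scoring tokens, preserving order (toy score: token length)."""
    toks = prompt.split()
    keep = sorted(range(len(toks)), key=lambda i: -len(toks[i]))[:budget]
    return " ".join(toks[i] for i in sorted(keep))

# Short safety-critical tokens like "not" score poorly and get dropped,
# silently flipping the instruction's meaning:
print(toy_compress("please do not delete the backup files", 4))
# -> "please delete backup files"
```

An adversarial-edit attack in the HardCom style does not need the heuristic to misfire on its own: it perturbs or inserts tokens so the compressor's scores push safety-critical content out of the kept set.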

Abstract: Randomized algorithms can be used to speed up the analysis of large datasets. In this paper, we develop a unified methodology for statistical inference via randomized sketching or projections in two of the most fundamental problems in multivariate statistical analysis: least squares and PCA. The methodology applies to fixed datasets -- i.e., is data-conditional -- and the only randomness is due to the randomized algorithm. We propose statistical inference methods for a broad range of sketching distributions, such as the subsampled randomized Hadamard transform (SRHT), Sparse Sign Embeddings (SSE) and CountSketch, sketching matrices with i.i.d. entries, and uniform subsampling. To our knowledge, no comparable methods are available for SSE and for SRHT in PCA. Our novel theoretical approach rests on showing the asymptotic normality of certain quadratic forms. As a contribution of broader interest, we show central limit theorems for quadratic forms of the SRHT, relying on a novel proof via a dyadic expansion that leverages the recursive structure of the Hadamard transform. Numerical experiments using both synthetic and empirical datasets support the efficacy of our methods, and in particular suggest that sketching methods can have better computation-estimation tradeoffs than recently proposed optimal subsampling methods.
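To make the setting concrete, here is a minimal sketch-and-solve least-squares example with a CountSketch-style sparse sign embedding. It illustrates the class of estimator being analyzed, not the paper's inference procedure; the dimensions and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 20000, 5, 2000          # data size, features, sketch size
X = rng.standard_normal((n, p))
beta = np.arange(1.0, p + 1)
y = X @ beta + rng.standard_normal(n)

# CountSketch-style embedding: each of the n rows is hashed to one of m
# sketch rows and multiplied by a random sign, so forming S @ X costs O(n p).
rows = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)
SX = np.zeros((m, p)); Sy = np.zeros(m)
np.add.at(SX, rows, signs[:, None] * X)
np.add.at(Sy, rows, signs * y)

beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)      # full-data OLS
beta_sketch, *_ = np.linalg.lstsq(SX, Sy, rcond=None)  # sketched OLS
print(np.max(np.abs(beta_sketch - beta_full)))          # small: the sketch preserves LS geometry
```

The gap between `beta_sketch` and `beta_full` shrinks as the sketch size m grows; quantifying the distribution of that algorithmic randomness, conditional on the fixed data, is the kind of question the inference methodology addresses.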
Abstract: Dynamic functional connectivity networks (dFCNs) based on rs-fMRI have demonstrated tremendous potential for brain function analysis and brain disease classification. Recently, studies have applied deep learning techniques (e.g., convolutional neural networks, CNNs) to dFCN classification and achieved better performance than traditional machine learning methods. Nevertheless, previous deep learning methods usually perform successive convolutional operations on the input dFCNs to obtain high-order brain network aggregation features, extracting them from each sliding window via a series split, which may neglect non-linear correlations among different regions and the sequential order of the windows. Thus, important high-order sequence information in dFCNs, which could further improve classification performance, is ignored in these studies. Inspired by the great success of the Transformer in natural language processing and computer vision, recent work has also applied Transformers to brain disease diagnosis based on rs-fMRI data. Although the Transformer can capture non-linear correlations, its parallel computation limits its ability to capture local spatial feature patterns and to model the temporal dimension, even when equipped with positional encoding. To address these issues, we propose a self-attention (SA) based convolutional recurrent network (SA-CRN) learning framework for brain disease classification with rs-fMRI data. Experimental results on a public dataset (i.e., ADNI) demonstrate the effectiveness of the proposed SA-CRN method.
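The self-attention component such a framework builds on can be sketched in a few lines. This is generic scaled dot-product attention over a sequence of per-window feature vectors (with no learned projections), not the authors' SA-CRN architecture:

```python
import numpy as np

def self_attention(X: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention: each window's output is a softmax-weighted
    mix of all windows, capturing non-linear cross-window correlations."""
    d = X.shape[-1]
    scores = (X @ X.T) / np.sqrt(d)                       # pairwise window affinities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                    # rows are attention weights
    return w @ X

# e.g. 10 sliding windows, each summarized by a 16-dim feature vector
feats = np.random.default_rng(1).standard_normal((10, 16))
out = self_attention(feats)
```

In an SA-CRN-style design, this step would sit between convolutional feature extraction (local spatial patterns) and a recurrent layer (temporal order), the combination the abstract argues a plain Transformer lacks.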