Abstract: Selective State Space Models (SSMs) achieve linear-time inference, yet their gradient-based sensitivity analysis remains bottlenecked by O(L) memory scaling during backpropagation. This memory constraint precludes genomic-scale modeling (L > 10^5) on consumer-grade hardware. We introduce Phase Gradient Flow (PGF), a framework that computes exact analytical derivatives by operating directly on the state-space manifold, bypassing the need to materialize the intermediate computational graph. By reframing SSM dynamics as Tiled Operator-Space Evolution (TOSE), our method achieves O(1) memory complexity with respect to sequence length, yielding a 94% reduction in peak VRAM and a 23x throughput increase over standard autograd. Unlike parallel prefix scans, which exhibit numerical divergence in stiff ODE regimes, PGF remains stable through invariant error scaling, maintaining near-machine precision across extreme sequence lengths. We demonstrate the utility of PGF on an impulse-response benchmark with 128,000-step sequences, a scale at which conventional autograd incurs prohibitive memory overhead and often fails with out-of-memory (OOM) errors in multi-layer models. Our work enables chromosome-scale sensitivity analysis on a single GPU, bridging the gap between theoretically infinite-context models and practical hardware limitations.
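
For intuition, the sketch below shows one way a backward pass for a diagonal linear SSM can use O(1) activation memory in the sequence length: gradients follow an adjoint recurrence, and past states are reconstructed by inverting the forward recurrence instead of being stored. The model, variable names, and inversion step are illustrative assumptions, not the PGF/TOSE implementation described in the abstract.

```python
import numpy as np

def ssm_grads_constant_memory(a, b, c, x, targets):
    """Exact gradients of a diagonal linear SSM
        h_t = a * h_{t-1} + b * x_t,   y_t = <c, h_t>
    with loss L = 0.5 * sum_t (y_t - target_t)^2, using O(1) activation
    memory in the sequence length: the n-dimensional states are never
    stored, only reconstructed during the backward sweep by inverting
    the recurrence. Illustrative sketch; assumes a != 0 elementwise and
    a well-conditioned inverse, which is not guaranteed in general.
    """
    L, n = len(x), a.shape[0]

    # Forward sweep: keep only the running state plus one scalar error
    # per step (same footprint as the input sequence itself).
    h = np.zeros(n)
    errs = np.empty(L)                 # dL/dy_t
    for t in range(L):
        h = a * h + b * x[t]
        errs[t] = (c @ h) - targets[t]

    # Backward sweep: adjoint lam_t = c * dL/dy_t + a * lam_{t+1},
    # with h_{t-1} reconstructed as (h_t - b * x_t) / a.
    lam = np.zeros(n)
    ga, gb, gc = np.zeros(n), np.zeros(n), np.zeros(n)
    for t in reversed(range(L)):
        lam = c * errs[t] + a * lam
        gc += errs[t] * h              # dL/dc accumulates e_t * h_t
        h_prev = (h - b * x[t]) / a if t > 0 else np.zeros(n)
        ga += lam * h_prev             # dL/da accumulates lam_t * h_{t-1}
        gb += lam * x[t]               # dL/db accumulates lam_t * x_t
        h = h_prev
    return ga, gb, gc
```

Reconstructing states by inversion avoids the O(L) activation stack of standard reverse-mode autograd, at the cost of numerical sensitivity when the recurrence is stiff; hypothetical values of a close to zero would need special handling.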




Abstract: Radio-frequency dosimetry is essential for ensuring human safety and for demonstrating compliance of related products. Computational human models generated from medical images are increasingly used for such assessments, particularly to account for inter-subject variability. However, the standard procedure for developing personalized models is time consuming because it requires segmenting numerous components representing different biological tissues, which limits inter-subject variability assessment of radiation safety based on personalized dosimetry. Deep learning has proven to be a powerful approach for pattern recognition and signal analysis, and deep convolutional neural networks are robust feature extractors and image mappers in several biomedical applications. In this study, we develop a learning-based approach for fast and accurate estimation of tissue dielectric properties and density directly from magnetic resonance images in a single shot. The resulting smooth distribution of dielectric properties in the head models, obtained without tissue segmentation, yields a smoother specific absorption rate (SAR) distribution than the commonly used segmentation-based procedure. The estimated SAR distributions, as well as those averaged over 10 g of tissue in a cubic volume, are highly consistent with those computed using conventional segmentation-based methods.
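
To make the dosimetric quantities concrete, the sketch below shows how local SAR follows from estimated conductivity and density maps, with a crude cube-growing step toward a 10-g average. The function names and the simplified averaging loop are illustrative assumptions; they are not the standardized averaging algorithm (e.g., IEEE/IEC 62704-1) nor the pipeline used in the study.

```python
import numpy as np

def point_sar(sigma, rho, e_rms):
    """Local SAR [W/kg] from conductivity sigma [S/m], mass density
    rho [kg/m^3], and RMS electric-field magnitude e_rms [V/m]:
        SAR = sigma * |E_rms|^2 / rho.
    """
    return sigma * e_rms**2 / rho

def cube_averaged_sar(sar, rho, voxel_mm, target_mass_kg=0.01):
    """Rough 10-g cube-averaged SAR at the voxel of peak local SAR.
    The cube is grown symmetrically until it encloses ~10 g of tissue;
    a simplified stand-in for the standardized procedure, not a
    compliance-grade implementation."""
    voxel_vol = (voxel_mm * 1e-3) ** 3              # m^3 per voxel
    ci, cj, ck = np.unravel_index(np.argmax(sar), sar.shape)
    for r in range(1, min(sar.shape) // 2):
        sl = tuple(slice(max(0, c - r), c + r + 1) for c in (ci, cj, ck))
        mass = (rho[sl] * voxel_vol).sum()          # enclosed tissue mass [kg]
        if mass >= target_mass_kg:
            # mass-weighted mean SAR over the cube
            return (sar[sl] * rho[sl] * voxel_vol).sum() / mass
    return float(sar[ci, cj, ck])                   # fallback: cube never reached 10 g
```

Because the learning-based maps of sigma and rho vary smoothly across the volume, the point-SAR map computed this way inherits that smoothness, which is the effect the abstract contrasts with segmentation-based models whose properties change abruptly at tissue boundaries.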