Abstract:Radio propagation modeling is essential in telecommunication research, as radio channels result from complex interactions with environmental objects. Recently, Machine Learning has been attracting attention as a potential alternative to computationally demanding tools, like Ray Tracing, which can model these interactions in detail. However, existing Machine Learning approaches often attempt to directly learn specific channel characteristics, such as the coverage map, making them highly specific to the frequency and material properties and unable to fully capture the underlying propagation mechanisms. Hence, Ray Tracing, particularly the Point-to-Point variant, remains popular to accurately identify all possible paths between transmitter and receiver nodes. Still, path identification is computationally intensive because the number of paths to be tested grows exponentially, while only a small fraction is valid. In this paper, we propose a Machine Learning-aided Ray Tracing approach to efficiently sample potential ray paths, significantly reducing the computational load while maintaining high accuracy. Our model dynamically learns to prioritize potentially valid paths among all possible paths and scales linearly with scene complexity. Unlike recent alternatives, our approach is invariant to translation, scaling, and rotation of the geometry, and avoids dependency on specific environment characteristics.
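For intuition on the combinatorial growth mentioned above, the snippet below counts image-method reflection candidates and samples a few of them uniformly. It is only a hedged illustration of the search space: the counting rule assumes consecutive bounces hit different surfaces, and the uniform sampler is a placeholder, not the learned prioritization model proposed in the paper.

```python
# Minimal sketch: combinatorial growth of candidate reflection paths in
# image-method Ray Tracing. Counts assume consecutive bounces never occur on
# the same surface; the uniform sampler stands in for the learned prioritizer,
# which this sketch does NOT implement.
import random

def candidate_path_count(num_surfaces: int, order: int) -> int:
    """Number of ordered reflection sequences of a given order."""
    if order == 0:
        return 1
    return num_surfaces * (num_surfaces - 1) ** (order - 1)

def sample_candidates(num_surfaces: int, order: int, k: int, seed: int = 0):
    """Uniformly sample k candidate surface sequences (placeholder for a
    learned, prioritized sampler)."""
    rng = random.Random(seed)
    out = []
    while len(out) < k:
        seq = [rng.randrange(num_surfaces)]
        while len(seq) < order:
            nxt = rng.randrange(num_surfaces)
            if nxt != seq[-1]:
                seq.append(nxt)
        out.append(tuple(seq))
    return out

if __name__ == "__main__":
    for order in range(1, 5):
        print(order, candidate_path_count(num_surfaces=50, order=order))
    print(sample_candidates(num_surfaces=50, order=3, k=3))
```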
Abstract:With the increasing presence of dynamic scenarios, such as Vehicle-to-Vehicle communications, radio propagation modeling tools must adapt to the rapidly changing nature of the radio channel. Recently, both Differentiable and Dynamic Ray Tracing frameworks have emerged to address these challenges. However, there is often confusion about how these approaches differ and which one should be used in specific contexts. In this paper, we provide an overview of these two techniques and a comparative analysis against two state-of-the-art tools: 3DSCAT from UniBo and Sionna from NVIDIA. To provide a more precise characterization of the scope of these methods, we introduce a novel simulation-based metric, the Multipath Lifetime Map, which enables the evaluation of spatial and temporal coherence in radio channels based only on the geometrical description of the environment. Finally, our metrics are evaluated on a classic urban street canyon scenario, yielding results similar to those obtained from measurement campaigns.
Abstract:Recently, many self-supervised learning methods for image reconstruction have been proposed that can learn from noisy data alone, bypassing the need for ground-truth references. Most existing methods cluster around two classes: i) Noise2Self and similar cross-validation methods that require very mild knowledge about the noise distribution, and ii) Stein's Unbiased Risk Estimator (SURE) and similar approaches that assume full knowledge of the distribution. The first class of methods is often suboptimal compared to supervised learning, and the second class is often impractical, as the noise level is generally unknown in real-world applications. In this paper, we provide a theoretical framework that characterizes this expressivity-robustness trade-off and propose a new approach that is based on SURE but, unlike the standard SURE, does not require knowledge of the noise level. Through a series of experiments, we show that the proposed estimator outperforms other existing self-supervised methods on various imaging inverse problems.
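For context, the sketch below implements the standard Monte Carlo SURE for Gaussian denoising, which still requires the noise level sigma; it is meant only to make the baseline concrete and does not reproduce the proposed sigma-free estimator. The toy soft-thresholding denoiser is an arbitrary illustrative choice.

```python
# Minimal sketch of Monte Carlo SURE (Stein's Unbiased Risk Estimator) for a
# Gaussian denoising problem y = x + n, n ~ N(0, sigma^2 I). This is the
# *standard* estimator, which requires sigma; the paper's contribution
# (removing that requirement) is not reproduced here.
import numpy as np

def mc_sure(denoiser, y, sigma, eps=1e-3, rng=None):
    """Unbiased estimate of the per-pixel MSE of `denoiser` at `y`."""
    rng = np.random.default_rng(rng)
    n = y.size
    fy = denoiser(y)
    # Monte Carlo estimate of the divergence of the denoiser at y.
    b = rng.standard_normal(y.shape)
    div = np.sum(b * (denoiser(y + eps * b) - fy)) / eps
    return np.sum((fy - y) ** 2) / n - sigma**2 + 2 * sigma**2 * div / n

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10_000)          # stand-in "clean" signal
    sigma = 0.5
    y = x + sigma * rng.standard_normal(x.shape)
    soft = lambda z: np.sign(z) * np.maximum(np.abs(z) - 0.3, 0.0)  # toy denoiser
    print("SURE estimate:", mc_sure(soft, y, sigma, rng=1))
    print("true MSE:     ", np.mean((soft(y) - x) ** 2))
```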
Abstract:We propose a novel algorithm for distributed stochastic gradient descent (SGD) with compressed gradient communication in the parameter-server framework. Our gradient compression technique, named flattened one-bit stochastic gradient descent (FO-SGD), relies on two simple algorithmic ideas: (i) a one-bit quantization procedure leveraging the technique of dithering, and (ii) a randomized fast Walsh-Hadamard transform to flatten the stochastic gradient before quantization. As a result, the approximation of the true gradient in this scheme is biased, but it prevents commonly encountered algorithmic problems, such as exploding variance in the one-bit compression regime, deterioration of performance in the case of sparse gradients, and restrictive assumptions on the distribution of the stochastic gradients. In fact, we show SGD-like convergence guarantees under mild conditions. The compression technique can be used in both directions of worker-server communication, therefore admitting distributed optimization with full communication compression.
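As a rough illustration of the two algorithmic ingredients named above, the following sketch flattens a sparse gradient with a random-sign diagonal followed by a fast Walsh-Hadamard transform, then applies dithered one-bit quantization. The dither amplitude and the decoder are simplified, illustrative choices, not the exact FO-SGD construction or its convergence setting.

```python
# Minimal sketch of the two ideas named in the abstract: flattening a gradient
# with a randomized fast Walsh-Hadamard transform, then one-bit quantization
# with dithering. Constants and the decoder are illustrative choices only.
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (length must be a power of 2), normalized
    so that the transform is orthonormal and therefore its own inverse."""
    x = x.astype(float).copy()
    h, n = 1, x.shape[0]
    while h < n:
        for i in range(0, n, 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)

def compress(grad, rng):
    signs = rng.choice([-1.0, 1.0], size=grad.shape)   # random sign diagonal
    flat = fwht(signs * grad)                           # flattened coefficients
    scale = np.abs(flat).max()                          # dither amplitude
    dither = rng.uniform(-scale, scale, size=flat.shape)
    bits = np.sign(flat + dither)                       # one bit per coordinate
    return bits, scale, signs

def decompress(bits, scale, signs):
    return signs * fwht(scale * bits)                   # invert transform

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g = np.zeros(1024); g[:8] = 10.0                    # sparse, "spiky" gradient
    bits, scale, signs = compress(g, rng)
    g_hat = decompress(bits, scale, signs)
    print("relative error:", np.linalg.norm(g_hat - g) / np.linalg.norm(g))
```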
Abstract:Recently, Differentiable Ray Tracing has been successfully applied in the field of wireless communications for learning radio materials or optimizing the transmitter orientation. However, in the context of gradient-based optimization, obstruction of the rays by objects can cause sudden variations in the related objective functions or create entire regions where the gradient is zero. As these issues can dramatically impact convergence, this paper presents a novel Ray Tracing framework that is fully differentiable with respect to any scene parameter and also provides a loss function that is continuous everywhere, thanks to specific local smoothing techniques. Previously discontinuous functions are replaced by a smoothing function, which can be exchanged with any function having similar properties. This function is also configurable via a parameter that determines how smooth the approximation should be. The present method is applied to a basic one-transmitter-multi-receiver scenario and shown to successfully find the optimal solution. As a complementary resource, a 2D Python library, DiffeRT2d, is provided in Open Access, with examples and comprehensive documentation.
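The following minimal sketch illustrates the kind of local smoothing described above: a discontinuous visibility-style step is replaced by a sigmoid whose steepness parameter controls how closely it approximates the original step, so that a nonzero gradient exists everywhere. The specific smoothing function and parameterization used in DiffeRT2d may differ; this is a generic stand-in.

```python
# Minimal sketch of the local smoothing idea: a hard (discontinuous) test is
# replaced by a parameterized sigmoid whose steepness `alpha` controls how
# closely the smooth surrogate approximates the original step. The exact
# smoothing function used in DiffeRT2d may differ; this is only illustrative.
import numpy as np

def hard_step(x):
    """Discontinuous test, e.g. 'is the ray unobstructed?'; its gradient is 0 or undefined."""
    return (x >= 0).astype(float)

def smooth_step(x, alpha=10.0):
    """Sigmoid surrogate: continuous everywhere, approaches hard_step as alpha grows."""
    return 1.0 / (1.0 + np.exp(-alpha * x))

def smooth_step_grad(x, alpha=10.0):
    """Analytic derivative of the surrogate, nonzero everywhere."""
    s = smooth_step(x, alpha)
    return alpha * s * (1.0 - s)

if __name__ == "__main__":
    xs = np.linspace(-1, 1, 5)
    print("hard :", hard_step(xs))
    for alpha in (1.0, 10.0, 100.0):
        print(alpha, np.round(smooth_step(xs, alpha), 3), np.round(smooth_step_grad(xs, alpha), 3))
```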
Abstract:We present a demonstration of image classification using a hardware-based echo-state network (ESN) that relies on spintronic nanostructures known as vortex-based spin-torque oscillators (STVOs). Our network is realized using a single STVO multiplexed in time. To circumvent the challenges associated with repeated experimental manipulation of such a nanostructured system, we employ an ultrafast data-driven simulation framework called the data-driven Thiele equation approach (DD-TEA) to simulate the STVO dynamics. We use this approach to efficiently develop, optimize and test an STVO-based ESN for image classification using the MNIST dataset. We showcase the versatility of our solution by successfully applying it to solve classification challenges with the EMNIST-letters and Fashion MNIST datasets. Through our simulations, we determine that within a large ESN the results obtained using the STVO dynamics as an activation function are comparable to those obtained with other conventional nonlinear activation functions, like the ReLU and the sigmoid. While achieving state-of-the-art accuracy levels on the MNIST dataset, our model's performance on EMNIST-letters and Fashion MNIST is lower due to the relative simplicity of the system architecture and the increased complexity of the tasks. We expect that the DD-TEA framework will enable the exploration of more specialized neural architectures, ultimately leading to improved classification accuracy. This approach also holds promise for investigating and developing dedicated learning rules to further enhance classification performance.
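To make the ESN pipeline concrete, the sketch below trains a small software reservoir with a conventional tanh activation standing in for the simulated STVO nonlinearity (the DD-TEA model is not reproduced), and uses a tiny synthetic task instead of MNIST so the example stays self-contained.

```python
# Minimal echo-state-network sketch with a conventional tanh reservoir standing
# in for the STVO nonlinearity (the DD-TEA oscillator model is not reproduced).
# A ridge-regression readout is trained on a tiny synthetic task instead of MNIST.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_cls = 16, 200, 3

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # spectral radius < 1

def reservoir_state(sequence):
    """Drive the reservoir with an input sequence and return its final state."""
    x = np.zeros(n_res)
    for u in sequence:
        x = np.tanh(W_in @ u + W @ x)                   # activation could be swapped
    return x

def make_sample(cls):
    """Synthetic 'image' seen as a sequence of rows, one oscillation frequency per class."""
    t = np.linspace(0, 1, 8)[:, None]
    return np.sin(2 * np.pi * (cls + 1) * t + rng.normal(0, 0.1, (8, n_in)))

X = np.array([reservoir_state(make_sample(c)) for c in range(n_cls) for _ in range(100)])
y = np.repeat(np.eye(n_cls), 100, axis=0)
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_res), X.T @ y)   # ridge readout

test = np.array([reservoir_state(make_sample(c)) for c in range(n_cls) for _ in range(20)])
pred = (test @ W_out).argmax(axis=1)
truth = np.repeat(np.arange(n_cls), 20)
print("accuracy:", (pred == truth).mean())
```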
Abstract:This paper presents a novel signal processing technique, coined grid hopping, as well as an active multistatic Frequency-Modulated Continuous Wave (FMCW) radar system designed to evaluate its performance. The design of grid hopping is motivated by two existing estimation algorithms. The first is the indirect algorithm, which estimates ranges and speeds separately for each received signal before combining them to obtain location and velocity estimates. The second is the direct method, which jointly processes the received signals to directly estimate target location and velocity. While the direct method is known to provide better performance, it is seldom used because of its high computation time. Our grid hopping approach, which relies on interpolation strategies, offers a reduced computation time while its performance stays on par with that of the direct method. We validate the efficiency of this technique on actual FMCW radar measurements and compare it with other methods.
Abstract:Random data sketching (or projection) is now a classical technique enabling, for instance, approximate numerical linear algebra and machine learning algorithms with reduced computational complexity and memory. In this context, the possibility of performing data processing (such as pattern detection or classification) directly in the sketched domain without accessing the original data was previously achieved for linear random sketching methods and compressive sensing. In this work, we show how to perform simple signal processing tasks (such as deducing local variations in an image) directly using random quadratic projections achieved by an optical processing unit. The same approach allows for naive data classification methods operating directly in the sketched domain. We report several experiments confirming the power of our approach.
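The snippet below is a purely numerical stand-in for the optical processing unit: it forms random quadratic (intensity-only) projections y = |Ax|^2 with a complex Gaussian matrix and runs a naive nearest-centroid classifier directly on the sketches. The matrix, the toy classes, and the classifier are illustrative assumptions, not the actual hardware or experiments of the paper.

```python
# Minimal numerical stand-in for an optical processing unit: random quadratic
# projections y = |A x|^2 with a complex Gaussian A, followed by a naive
# nearest-centroid classifier that never touches the original data.
import numpy as np

rng = np.random.default_rng(0)
d, m = 256, 64                                   # signal and sketch dimensions
A = (rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))) / np.sqrt(2 * d)

def sketch(x):
    """Quadratic (intensity-only) random projection of a signal."""
    return np.abs(A @ x) ** 2

def sample(cls):
    """Two toy classes: smooth (random-walk) signals vs. signals with strong local variations."""
    base = rng.standard_normal(d)
    return np.cumsum(base) / np.sqrt(d) if cls == 0 else base

train = {c: np.array([sketch(sample(c)) for _ in range(200)]) for c in (0, 1)}
centroids = {c: s.mean(axis=0) for c, s in train.items()}

correct = 0
for c in (0, 1):
    for _ in range(100):
        y = sketch(sample(c))
        pred = min(centroids, key=lambda k: np.linalg.norm(y - centroids[k]))
        correct += (pred == c)
print("accuracy in the sketched domain:", correct / 200)
```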
Abstract:Lensless illumination single-pixel imaging with a multicore fiber (MCF) is a computational imaging technique that enables potential endoscopic observations of biological samples at cellular scale. In this work, we show that this technique is tantamount to collecting multiple symmetric rank-one projections (SROP) of a Hermitian \emph{interferometric} matrix -- a matrix encoding the spectral content of the sample image. In this model, each SROP is induced by the complex \emph{sketching} vector shaping the incident light wavefront with a spatial light modulator (SLM), while the projected interferometric matrix collects up to $O(Q^2)$ image frequencies for a $Q$-core MCF. While this scheme subsumes previous sensing modalities, such as raster scanning (RS) imaging with beamformed illumination, we demonstrate that collecting the measurements of $M$ random SLM configurations -- and thus acquiring $M$ SROPs -- allows us to estimate an image of interest if $M$ and $Q$ scale linearly (up to log factors) with the image sparsity level, hence requiring far fewer observations than RS imaging or a complete Nyquist sampling of the $Q \times Q$ interferometric matrix. This demonstration is achieved both theoretically, with a specific restricted isometry analysis of the sensing scheme, and with extensive Monte Carlo experiments. Experimental results obtained on an actual MCF system finally demonstrate the effectiveness of this imaging procedure on a benchmark image.
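The sketch below spells out the sensing model stated above on a 1-D toy image: a Hermitian interferometric matrix whose entries are Fourier samples at core-position differences is probed by symmetric rank-one projections with random unit-modulus sketching vectors (an SLM stand-in). Core positions, image, and sizes are illustrative assumptions; the recovery algorithm and the restricted isometry analysis are not reproduced.

```python
# Minimal sketch of the sensing model: a Hermitian "interferometric" matrix H
# whose entries are Fourier samples of a (toy, 1-D) image at core-position
# differences, probed by symmetric rank-one projections y_m = w_m^H H w_m with
# random unit-modulus sketching vectors. Purely illustrative; no reconstruction.
import numpy as np

rng = np.random.default_rng(0)
N, Q, M = 64, 16, 48                        # image size, number of cores, number of SROPs

x = np.zeros(N)
x[rng.choice(N, 5, replace=False)] = rng.uniform(1, 2, 5)    # sparse toy image
x_hat = np.fft.fft(x)

cores = rng.choice(N, Q, replace=False)     # toy 1-D "core positions"
diff = (cores[:, None] - cores[None, :]) % N
H = x_hat[diff]                             # interferometric matrix, Hermitian since x is real
assert np.allclose(H, H.conj().T)

def srop(H, w):
    """Symmetric rank-one projection of a Hermitian matrix (real-valued)."""
    return np.real(w.conj() @ H @ w)

W = np.exp(1j * rng.uniform(0, 2 * np.pi, (M, Q)))   # random SLM phase patterns
y = np.array([srop(H, w) for w in W])                # M scalar measurements
print("first measurements:", y[:4].round(2), "... (M =", M, ")")
```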
Abstract:Lensless illumination single-pixel imaging with a multicore fiber (MCF) is a computational imaging technique that enables potential endoscopic observations of biological samples at cellular scale. In this work, we show that this technique is tantamount to collecting multiple symmetric rank-one projections (SROP) of an interferometric matrix--a matrix encoding the spectral content of the sample image. In this model, each SROP is induced by the complex sketching vector shaping the incident light wavefront with a spatial light modulator (SLM), while the projected interferometric matrix collects up to $O(Q^2)$ image frequencies for a $Q$-core MCF. While this scheme subsumes previous sensing modalities, such as raster scanning (RS) imaging with beamformed illumination, we demonstrate that collecting the measurements of $M$ random SLM configurations--and thus acquiring $M$ SROPs--allows us to estimate an image of interest if $M$ and $Q$ scale log-linearly with the image sparsity level. This demonstration is achieved both theoretically, with a specific restricted isometry analysis of the sensing scheme, and with extensive Monte Carlo experiments. On the practical side, we perform a single calibration of the sensing system that is robust to certain deviations from the theoretical model and independent of the sketching vectors used during the imaging phase. Experimental results obtained on an actual MCF system demonstrate the effectiveness of this imaging procedure on a benchmark image.