Abstract:In multiple-input and multiple-output (MIMO) radar systems based on Doppler-division multiplexing (DDM), phase shifters are employed in the transmit paths and require calibration strategies to maintain optimal performance throughout the radar system's life cycle. In this paper, we propose a novel family of DDM codes that enables online calibration of the phase shifters and scales realistically to any number of simultaneously activated transmit (Tx) channels during the calibration frames. To achieve this goal, we employ the previously developed odd-DDM (ODDM) sequences to design calibration DDM codes with reduced inter-Tx leakage. The proposed calibration sequence is applied to an automotive radar data set modulated with erroneous phase shifters.
Abstract:Orthogonal frequency-division multiplexing (OFDM) is a promising waveform candidate for future joint sensing and communication systems. It is well known that the OFDM waveform is vulnerable to in-phase and quadrature-phase (IQ) imbalance, which increases the noise floor in a range-Doppler map (RDM). A state-of-the-art method for robustifying the OFDM waveform against IQ imbalance avoids an increased noise floor, but it generates additional ghost objects in the RDM [1]. A consequence of these additional ghost objects is a reduction of the maximum unambiguous range. In this work, a novel OFDM-based waveform robust to IQ imbalance is proposed, which neither increases the noise floor nor reduces the maximum unambiguous range. The latter is achieved by shifting the ghost objects in the RDM to different velocities such that their range variations observed over several consecutive RDMs do not correspond to the observed velocity. This allows tracking algorithms to identify them as ghost objects and eliminate them in subsequent processing steps. Moreover, we propose complete communication systems for both the proposed and the state-of-the-art waveform, including methods for channel estimation, synchronization, and data estimation that are specifically designed to deal with the frequency-selective IQ imbalance that occurs in wideband systems. The effectiveness of these communication systems is demonstrated by means of bit error ratio (BER) simulations.
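The mechanism behind IQ-imbalance-induced ghost objects can be illustrated with the widely used frequency-flat baseband model y[n] = α·x[n] + β·x*[n]: the conjugate term produces a mirror-image component that appears as a ghost in the RDM. The following is a minimal toy sketch with assumed gain/phase mismatch values, not the system described in the abstract:

```python
import numpy as np

# Frequency-flat IQ-imbalance model: y[n] = alpha*x[n] + beta*conj(x[n]).
# The conjugate term creates a mirror-image ("ghost") tone at -f0.

N = 256
f0 = 32                                  # occupied frequency bin (illustrative)
n = np.arange(N)
x = np.exp(2j * np.pi * f0 * n / N)      # ideal complex tone

g, phi = 0.9, 0.1                        # gain/phase mismatch (assumed values)
alpha = (1 + g * np.exp(-1j * phi)) / 2
beta = (1 - g * np.exp(1j * phi)) / 2
y = alpha * x + beta * np.conj(x)        # impaired signal

Y = np.abs(np.fft.fft(y)) / N
print(Y[f0], Y[N - f0])                  # desired tone vs. its mirror image
```

The magnitude at the mirrored bin N − f0 is |β|, which vanishes only for perfectly matched I and Q branches (g = 1, φ = 0).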
Abstract:Pipelined analog-to-digital converters (ADCs) are key enablers in many state-of-the-art signal processing systems with high sampling rates. In addition to high sampling rates, such systems often demand a high linearity. To meet these challenging linearity requirements, ADC calibration techniques have been investigated heavily throughout the past decades. One limitation in ADC calibration is the need for a precisely known test signal. In our previous work, we proposed the homogeneity enforced calibration (HEC) approach, which circumvents this need by consecutively feeding a test signal and a scaled version of it into the ADC. The calibration itself is performed using only the corresponding output samples, such that the test signal can remain unknown. On the downside, the HEC approach requires the option to accurately scale the test signal, impeding an on-chip implementation. In this work, we provide a thorough analysis of the HEC approach, including the effects of an inaccurately scaled test signal. Furthermore, the bi-linear homogeneity enforced calibration (BL-HEC) approach is introduced, which accounts for an inaccurate scaling and therefore facilitates an on-chip implementation. In addition, a comprehensive stability and convergence analysis of the BL-HEC approach is carried out. Finally, we verify our concept with simulations.
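The homogeneity principle exploited here can be illustrated with a toy example: an ideal ADC is homogeneous, f(c·x) = c·f(x), so a post-correction can be fitted from the output samples for a test signal and its scaled copy alone, with the test signal itself remaining unknown. The following is a hypothetical least-squares sketch with an assumed polynomial ADC model, not the authors' HEC/BL-HEC algorithm:

```python
import numpy as np

# Toy homogeneity-enforced correction: fit a polynomial post-correction
# p(y) = y + a2*y^2 + a3*y^3 such that p(f(c*x)) = c * p(f(x)), using
# only the ADC outputs -- the test signal x itself stays unknown.

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 5000)             # unknown test signal
c = 0.5                                  # assumed-exact scaling factor

def adc(v):                              # assumed nonlinear ADC model
    return v + 0.05 * v**2 + 0.02 * v**3

y1, y2 = adc(x), adc(c * x)              # outputs for x and its scaled copy

# Enforce p(y2) = c * p(y1) in least squares over (a2, a3):
A = np.column_stack([y2**2 - c * y1**2, y2**3 - c * y1**3])
b = c * y1 - y2
a2, a3 = np.linalg.lstsq(A, b, rcond=None)[0]

corrected = y1 + a2 * y1**2 + a3 * y1**3
print(np.max(np.abs(corrected - x)))     # residual nonlinearity error
```

In this toy setting, enforcing homogeneity alone drives the correction toward the inverse of the ADC nonlinearity (up to the linear gain, which is fixed by constraining the linear coefficient to one).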
Abstract:Equalization is an important task at the receiver side of a digital wireless communication system, which is traditionally conducted with model-based estimation methods. Among the numerous options for model-based equalization, iterative soft interference cancellation (SIC) is a well-performing approach, since it avoids the error propagation caused by hard-decision data symbol estimation during the iterative estimation procedure. However, the model-based method suffers from high computational complexity and performance degradation due to required approximations. In this work, we propose a novel neural network (NN)-based equalization approach, referred to as SICNN, which is designed by deep unfolding of a model-based iterative SIC method, eliminating the main disadvantages of its model-based counterpart. We present different variants of SICNN. SICNNv1 is very similar to the model-based method and is specifically tailored to single carrier frequency domain equalization systems, which is the communication system considered in this work. The second variant, SICNNv2, is more universal and is applicable as an equalizer in any communication system with a block-based data transmission scheme. We highlight the pros and cons of both variants. Moreover, for both SICNNv1 and SICNNv2, we present a version with a highly reduced number of learnable parameters. We compare the achieved bit error ratio performance of the proposed NN-based equalizers with that of state-of-the-art model-based and NN-based approaches, highlighting the superiority of SICNNv1 over all other methods. Also, we present a thorough complexity analysis of the proposed NN-based equalization approaches, and we investigate the influence of the training set size on the performance of NN-based equalizers.
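The model-based iterative SIC principle that SICNN unfolds can be illustrated in a toy setting: soft symbol estimates are formed, used to cancel the interference of the other symbols, and refined over iterations, so no hard decisions enter the loop. A minimal sketch for BPSK in an assumed 2x2 linear model (not the paper's single carrier frequency domain system):

```python
import numpy as np

# Toy iterative soft interference cancellation (SIC) for BPSK symbols in
# a linear model y = H s + n. All parameter values are illustrative.

H = np.array([[1.0, 0.4],
              [0.3, 1.0]])               # assumed channel matrix
s = np.array([1.0, -1.0])                # true BPSK symbols
sigma2 = 0.001                           # noise variance (assumed)
rng = np.random.default_rng(1)
y = H @ s + np.sqrt(sigma2) * rng.standard_normal(2)

s_soft = np.zeros(2)                     # prior soft estimates (no knowledge)
for _ in range(5):                       # SIC iterations
    new = np.empty(2)
    for k in range(2):
        others = [j for j in range(2) if j != k]
        # cancel the current soft estimates of the interfering symbols
        y_clean = y - H[:, others] @ s_soft[others]
        h = H[:, k]
        z = h @ y_clean / (h @ h)        # matched-filter estimate
        llr = 2 * z * (h @ h) / sigma2   # approximate LLR for BPSK
        new[k] = np.tanh(llr / 2)        # soft symbol = E[s_k | z]
    s_soft = new

print(np.sign(s_soft))                   # hard decisions after iterating
```

Deep unfolding turns each such iteration into an NN layer whose fixed quantities (e.g. the LLR scaling) become learnable parameters.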
Abstract:Unique word orthogonal frequency-division multiplexing (UW-OFDM) is known to provide various performance benefits over conventional cyclic prefix (CP) based OFDM. Most importantly, UW-OFDM features excellent spectral sidelobe suppression properties and an outstanding bit error ratio (BER) performance. Carrier frequency offset (CFO) induced impairments pose a challenge to OFDM systems of any kind. In this work we investigate the effects of CFO on UW-OFDM and compare it to conventional multi-carrier and single-carrier systems. Different CFO compensation approaches with different computational complexities are considered throughout this work and assessed against each other. A mean squared error analysis carried out after data estimation reveals a significantly higher robustness of UW-OFDM against CFO effects compared with CP-OFDM. Additionally, the conducted BER simulations generally support this conclusion for various scenarios, ranging from uncoded to coded transmission in a frequency-selective environment.
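The orthogonality loss that a CFO causes in any OFDM-type system can be illustrated directly: a fractional offset attenuates the desired subcarrier and leaks energy into all other bins (inter-carrier interference). A minimal sketch with assumed parameter values:

```python
import numpy as np

# A fractional CFO of eps subcarrier spacings rotates the received
# samples, attenuating the desired bin and spreading inter-carrier
# interference (ICI) across the remaining bins.

N = 64
k0 = 10                                   # occupied subcarrier (illustrative)
n = np.arange(N)
x = np.exp(2j * np.pi * k0 * n / N)       # time-domain OFDM tone

eps = 0.15                                # CFO in subcarrier spacings (assumed)
r = x * np.exp(2j * np.pi * eps * n / N)  # received signal with CFO

R = np.abs(np.fft.fft(r)) / N
print(R[k0], R[k0 + 1])                   # attenuated desired bin vs. ICI leak
```

Without CFO (eps = 0) the FFT concentrates all energy in bin k0; with the offset, every bin receives a nonzero ICI contribution.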
Abstract:Machine learning (ML) and tensor-based methods have been of significant interest to the scientific community for the last few decades. In a previous work, we presented a novel tensor-based system identification framework that eases the computational burden of tensor-only architectures while still achieving exceptionally good performance. However, the derived approach can only process real-valued problems and is therefore not directly applicable to a wide range of signal processing and communications problems, which often deal with complex-valued systems. In this work, we therefore derive two new architectures that allow the processing of complex-valued signals, and we show that these extensions surpass the trivial complex-valued extension of the original architecture in terms of performance, while requiring only a slight overhead in computational resources to allow for complex-valued operations.
Abstract:A promising waveform candidate for future joint sensing and communication systems is orthogonal frequency-division multiplexing (OFDM). For such systems, supporting multiple transmit (Tx) antennas requires multiplexing methods for the generation of orthogonal transmit signals, where equidistant subcarrier interleaving (ESI) is the most popular multiplexing method. In this work, we analyze a multiplexing method called Doppler-division multiplexing (DDM). This method applies a phase shift from OFDM symbol to OFDM symbol to separate signals transmitted by different Tx antennas along the velocity axis of the range-Doppler map. While general properties of DDM for the task of radar sensing are analyzed in this work, the main focus lies on the implications of DDM for the communication task. It will be shown that for DDM, the channels observed in the communication receiver are heavily time-varying, preventing any meaningful transmission of data when not taken into account. In this work, a communication system designed to combat these time-varying channels is proposed, which includes methods for data estimation, synchronization, and channel estimation. Bit error ratio (BER) simulations demonstrate the superiority of this communication system over a system utilizing ESI.
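The DDM separation principle described above can be sketched in a toy slow-time model: each Tx antenna applies a progressive phase shift per OFDM symbol, which moves its contribution to a distinct Doppler bin so the Tx channels separate along the velocity axis. A minimal illustration with assumed parameters (two Tx antennas, one static target):

```python
import numpy as np

# Toy DDM model along slow time only: Tx m applies the phase code
# exp(j*2*pi*m*l/M) on OFDM symbol l, shifting its echo by L/M Doppler
# bins so the Tx contributions separate in the range-Doppler map.

L = 64                                    # number of OFDM symbols (slow time)
M = 2                                     # number of Tx antennas
l = np.arange(L)

tx_codes = [np.exp(2j * np.pi * m * l / M) for m in range(M)]

# Static target with identical unit channel for both Tx (toy assumption):
rx = sum(tx_codes)                        # superimposed coded transmissions

D = np.abs(np.fft.fft(rx)) / L            # Doppler spectrum
print(np.argsort(D)[-M:])                 # two peaks, M bins apart in Doppler
```

A moving target would shift both peaks by the same Doppler, so the Tx responses stay separated as long as the target Doppler spread is below the inter-Tx spacing L/M.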
Abstract:Data estimation has been conducted with model-based estimation methods since the beginning of digital communications. However, motivated by the growing success of machine learning, current research focuses on replacing model-based data estimation methods with data-driven approaches, mainly neural networks (NNs). In this work, we particularly investigate the incorporation of existing model knowledge into data-driven approaches, which is expected to lead to complexity reduction and/or performance enhancement. We describe three different options, namely "model-inspired" pre-processing, choosing an NN architecture motivated by the properties of the underlying communication system, and inferring the layer structure of an NN with the help of model knowledge. Most current publications on NN-based data estimation deal with general multiple-input multiple-output (MIMO) communication systems. In this work, we investigate NN-based data estimation for so-called unique word orthogonal frequency division multiplexing (UW-OFDM) systems. We highlight differences between UW-OFDM systems and general MIMO systems that one has to be aware of when using NNs for data estimation, and we introduce measures for the successful utilization of NN-based data estimators in UW-OFDM systems. Further, we investigate the use of NNs for data estimation when channel-coded data transmission is conducted, and we present adaptations to be made such that NN-based data estimators provide satisfying performance in this case. We compare the presented NNs concerning achieved bit error ratio performance and computational complexity, we show the peculiar distributions of their data estimates, and we also point out their downsides compared to model-based equalizers.
Abstract:There have been several attempts to quantify the diagnostic distortion caused by algorithms that perform low-dimensional electrocardiogram (ECG) representation. However, there is no universally accepted quantitative measure that allows the diagnostic distortion arising from denoising, compression, and ECG beat representation algorithms to be determined. Hence, the main objective of this work was to develop a framework to enable biomedical engineers to efficiently and reliably assess diagnostic distortion resulting from ECG processing algorithms. We propose a semiautomatic framework for quantifying the diagnostic resemblance between original and denoised/reconstructed ECGs. Evaluation of the ECG must be done manually, but is kept simple and does not require medical training. In a case study, we quantified the agreement between raw and reconstructed (denoised) ECG recordings by means of kappa-based statistical tests. The proposed methodology takes into account that the observers may agree by chance alone. Consequently, for the case study, our statistical analysis reports the "true", beyond-chance agreement in contrast to other, less robust measures, such as simple percent agreement calculations. Our framework allows efficient assessment of clinically important diagnostic distortion, a potential side effect of ECG (pre-)processing algorithms. Accurate quantification of a possible diagnostic loss is critical to any subsequent ECG signal analysis, for instance, the detection of ischemic ST episodes in long-term ECG recordings.
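The chance-corrected, "beyond-chance" agreement underlying the described statistical analysis can be illustrated with Cohen's kappa, the basic member of the kappa family: unlike simple percent agreement, it discounts the agreement two observers would reach by chance alone. The observer labels below are hypothetical:

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two equally long label sequences."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    p_o = np.mean(a == b)                         # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c)   # agreement expected by chance
              for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical observers rating ECG beats as normal (0) / abnormal (1):
obs1 = [0, 0, 1, 1, 0, 1, 0, 0]
obs2 = [0, 0, 1, 0, 0, 1, 0, 1]
print(cohens_kappa(obs1, obs2))  # well below the 0.75 percent agreement
```

Here the raw percent agreement is 75%, but kappa is only about 0.47, reflecting that much of the raw agreement is expected by chance.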
Abstract:Pipelined analog-to-digital converters (ADCs) are fundamental components of various signal processing systems requiring high sampling rates and a high linearity. Over the past years, calibration techniques have been intensively investigated to increase the linearity. In this work, we propose an equalization-based calibration technique which does not require knowledge of the ADC input signal for calibration. To this end, a test signal and a scaled version of it are fed into the ADC sequentially, while only the corresponding output samples are used for calibration. Several test signal sources are possible, such as a signal generator (SG) or the system application (SA) itself. In the latter case, the presented method corresponds to a background calibration technique, so slowly changing errors are tracked and calibrated continuously. Because of its low computational complexity, the calibration technique is suitable for an on-chip implementation. Finally, this work contains an analysis of the stability and convergence behavior as well as simulation results.