Abstract: Pipelined analog-to-digital converters (ADCs) are key enablers in many state-of-the-art signal processing systems with high sampling rates. In addition to high sampling rates, such systems often demand high linearity. To meet these challenging linearity requirements, ADC calibration techniques have been investigated heavily over the past decades. One limitation of ADC calibration is the need for a precisely known test signal. In our previous work, we proposed the homogeneity enforced calibration (HEC) approach, which circumvents this need by consecutively feeding a test signal and a scaled version of it into the ADC. The calibration itself is performed using only the corresponding output samples, so that the test signal can remain unknown. On the downside, the HEC approach requires the ability to scale the test signal accurately, impeding an on-chip implementation. In this work, we provide a thorough analysis of the HEC approach, including the effects of an inaccurately scaled test signal. Furthermore, the bi-linear homogeneity enforced calibration (BL-HEC) approach is introduced to account for an inaccurate scaling and, therefore, to facilitate an on-chip implementation. In addition, a comprehensive stability and convergence analysis of the BL-HEC approach is carried out. Finally, we verify our concept with simulations.
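To make the homogeneity principle concrete, one possible formalization (in notation introduced here for illustration, which need not match the paper's own) is the following: an ideal, i.e., linear, ADC transfer function $f(\cdot)$ satisfies $f(\alpha x) = \alpha f(x)$ for any scaling factor $\alpha$. Given the output samples $y[n] = f(x[n])$ and $\tilde{y}[n] = f(\alpha x[n])$ of the test signal and its scaled version, a digital post-correction $p_{\boldsymbol{\theta}}(\cdot)$ can be adapted to minimize the homogeneity error
\[
  e[n] = p_{\boldsymbol{\theta}}\bigl(\tilde{y}[n]\bigr) - \alpha\, p_{\boldsymbol{\theta}}\bigl(y[n]\bigr),
\]
which depends only on output samples and not on the unknown $x[n]$. If the scaling $\alpha$ is inaccurate and has to be estimated jointly with $\boldsymbol{\theta}$, the error becomes bilinear in the unknowns, which is consistent with the inaccurate-scaling setting that BL-HEC is introduced to handle.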
Abstract: Pipelined analog-to-digital converters (ADCs) are fundamental components of various signal processing systems that require high sampling rates and high linearity. Over the past years, calibration techniques have been investigated intensively to increase the linearity. In this work, we propose an equalization-based calibration technique that does not require knowledge of the ADC input signal for calibration. To this end, a test signal and a scaled version of it are fed into the ADC sequentially, while only the corresponding output samples are used for calibration. Several test signal sources are possible, such as a signal generator (SG) or the system application (SA) itself. In the latter case, the presented method corresponds to a background calibration technique, so that slowly changing errors are tracked and calibrated continuously. Owing to its low computational complexity, the calibration technique is suitable for an on-chip implementation. Finally, this work contains an analysis of the stability and convergence behavior as well as simulation results.
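As a rough illustration of such an output-only calibration loop, the following sketch in Python assumes a toy static ADC nonlinearity, a third-order polynomial post-correction with fixed unit linear gain, an accurately known scaling factor alpha, and an LMS-style update with step size mu; none of these choices are taken from the paper, they merely instantiate the homogeneity-enforcement idea described above.

import numpy as np

rng = np.random.default_rng(0)

def adc(x):
    # Toy static nonlinearity standing in for the pipeline stage errors (illustrative only).
    return x + 0.05 * x**2 - 0.03 * x**3

def correct(y, c):
    # Digital post-correction: fixed unit linear gain plus adaptable 2nd- and 3rd-order terms.
    return y + c[0] * y**2 + c[1] * y**3

alpha = 0.5        # assumed (accurately known) scaling between the two test-signal versions
c = np.zeros(2)    # correction coefficients to be adapted
mu = 5e-3          # LMS step size (illustrative)

for _ in range(200_000):
    x = rng.uniform(-1.0, 1.0)           # test sample; its value is never used by the update
    y1, y2 = adc(x), adc(alpha * x)      # ADC outputs for the test signal and its scaled version
    z1, z2 = correct(y1, c), correct(y2, c)
    e = z2 - alpha * z1                  # homogeneity error: zero for a perfectly linearized ADC
    grad = np.array([y2**2 - alpha * y1**2, y2**3 - alpha * y1**3])
    c -= mu * e * grad                   # LMS-style update based on output samples only

print(c)   # converges to roughly [-0.05, 0.03], cancelling the toy nonlinearity

Since each update uses only the two output streams and a handful of multiply-accumulate operations per sample, a loop of this kind suggests why an adaptation with low computational complexity can be attractive for on-chip implementation; the precise update rule, its stability conditions, and its convergence behavior are the subject of the analysis in the paper itself.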