Abstract: In Plug-and-Play (PnP) algorithms, an off-the-shelf denoiser is used for image regularization. PnP yields state-of-the-art results, but its theoretical aspects are not well understood. This work considers the question: Similar to classical compressed sensing (CS), can we theoretically recover the ground-truth via PnP under suitable conditions on the denoiser and the sensing matrix? One hurdle is that since PnP is an algorithmic framework, its solution need not be the minimizer of some objective function. It was recently shown that a convex regularizer $\Phi$ can be associated with a class of linear denoisers such that PnP amounts to solving a convex problem involving $\Phi$. Motivated by this, we consider the PnP analog of CS: minimize $\Phi(x)$ s.t. $Ax=A\xi$, where $A$ is an $m\times n$ random sensing matrix, $\Phi$ is the regularizer associated with a linear denoiser $W$, and $\xi$ is the ground-truth. We prove that if $A$ is Gaussian and $\xi$ is in the range of $W$, then the minimizer is almost surely $\xi$ if $\mathrm{rank}(W)\leq m$, and almost never if $\mathrm{rank}(W)> m$. Thus, the range of the PnP denoiser acts as a signal prior, and its dimension marks a sharp transition from failure to success of exact recovery. We extend the result to subgaussian sensing matrices, except that exact recovery holds only with high probability. For noisy measurements $b = A \xi + \eta$, we consider a robust formulation: minimize $\Phi(x)$ s.t. $\|Ax-b\|\leq\delta$. We prove that for an optimal solution $x^*$, with high probability the distortion $\|x^*-\xi\|$ is bounded by $\|\eta\|$ and $\delta$ if the number of measurements is large enough. In particular, we can derive the sample complexity of CS as a function of distortion error and success rate. We discuss the extension of these results to random Fourier measurements, report numerical experiments, and outline research directions stemming from this work.
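The recovery threshold above can be checked numerically. A minimal sketch, using an orthogonal projection as a toy stand-in for the linear denoiser $W$: when $\xi$ lies in the range of $W$ and $m \geq \mathrm{rank}(W)$, restricting the search to $\mathrm{range}(W)$ under the measurement constraint recovers $\xi$ almost surely, since the columns of $AU$ are almost surely linearly independent.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 50, 10, 15                 # ambient dim, rank(W), #measurements (m >= r)

# Toy rank-r linear denoiser: orthogonal projection onto a random r-dim subspace
U, _ = np.linalg.qr(rng.standard_normal((n, r)))   # orthonormal basis of range(W)
W = U @ U.T

xi = W @ rng.standard_normal(n)      # ground truth lies in range(W)
A = rng.standard_normal((m, n))      # Gaussian sensing matrix
b = A @ xi

# Restrict the search to range(W): x = U c.  Since m >= r, the r columns of A @ U
# are linearly independent almost surely, so the constraint (A U) c = b pins down c.
c, *_ = np.linalg.lstsq(A @ U, b, rcond=None)
x_star = U @ c
recovery_error = np.linalg.norm(x_star - xi)
```

Repeating this with $m < r$ leaves the constraint underdetermined on $\mathrm{range}(W)$, which is the failure side of the transition.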
Abstract: In plug-and-play (PnP) regularization, knowledge of the forward model is combined with a powerful denoiser to obtain state-of-the-art image reconstructions. This is typically done by taking a proximal algorithm such as FISTA or ADMM, and formally replacing the proximal map associated with a regularizer by nonlocal means, BM3D or a CNN denoiser. Each iterate of the resulting PnP algorithm involves some kind of inversion of the forward model followed by denoiser-induced regularization. A natural question in this regard is that of optimality, namely, do the PnP iterations minimize some objective $f+g$, where $f$ is a loss function associated with the forward model and $g$ is a regularizer? This has a straightforward solution if the denoiser can be expressed as a proximal map, as was shown to be the case for a class of linear symmetric denoisers. However, this result excludes kernel denoisers such as nonlocal means that are inherently non-symmetric. In this paper, we prove that a broader class of linear denoisers (including symmetric denoisers and kernel denoisers) can be expressed as a proximal map of some convex regularizer $g$. An algorithmic implication of this result for non-symmetric denoisers is that appropriate modifications of the PnP updates are needed to ensure convergence to a minimum of $f+g$. Apart from the convergence guarantee, the modified PnP algorithms are shown to produce good restorations.
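The "formal replacement" described above can be sketched in a few lines. This is an illustrative PnP-ISTA loop with a toy forward model and a symmetrically normalized Gaussian kernel matrix standing in for the linear denoiser (not the paper's exact construction); the denoiser simply takes the place of the proximal map after each gradient step.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 60, 30
A = rng.standard_normal((m, n)) / np.sqrt(m)   # toy forward model
b = A @ rng.standard_normal(n)

# Stand-in linear denoiser: symmetrically normalized Gaussian kernel matrix,
# symmetric PSD with eigenvalues in (0, 1]
idx = np.arange(n)
K = np.exp(-0.5 * (np.subtract.outer(idx, idx) / 3.0) ** 2)
d = K.sum(axis=1)
W = K / np.sqrt(np.outer(d, d))

# PnP-ISTA: gradient step on f(x) = 0.5*||Ax - b||^2, then denoise
# (the denoiser W formally replaces the proximal map of the regularizer)
gamma = 0.9 / np.linalg.norm(A.T @ A, 2)       # step size below 1/L
x = np.zeros(n)
for _ in range(2000):
    x = W @ (x - gamma * A.T @ (A @ x - b))

# Fixed-point residual of the PnP-ISTA map
res = np.linalg.norm(W @ (x - gamma * A.T @ (A @ x - b)) - x)
```

With a symmetric PSD denoiser as above, the iteration settles to a fixed point; the point of the paper is how to handle the non-symmetric case.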
Abstract: A standard model for image reconstruction involves the minimization of a data-fidelity term along with a regularizer, where the optimization is performed using proximal algorithms such as ISTA and ADMM. In plug-and-play (PnP) regularization, the proximal operator (associated with the regularizer) in ISTA and ADMM is replaced by a powerful image denoiser. Although PnP regularization works surprisingly well in practice, its theoretical convergence -- whether convergence of the PnP iterates is guaranteed and whether they minimize some objective function -- is not completely understood even for simple linear denoisers such as nonlocal means. In particular, while there are works where either iterate or objective convergence is established separately, a simultaneous guarantee on iterate and objective convergence is not available for any denoiser to our knowledge. In this paper, we establish both forms of convergence for a special class of linear denoisers. Notably, unlike existing works where the focus is on symmetric denoisers, our analysis covers non-symmetric denoisers such as nonlocal means and almost any convex data-fidelity term. The novelty in this regard is that we make use of the convergence theory of averaged operators and work with a special inner product (and norm) derived from the linear denoiser; the latter requires us to appropriately define the gradient and proximal operators associated with the data-fidelity term. We validate our convergence results using image reconstruction experiments.
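A short check of why a special inner product helps with non-symmetric kernel denoisers. A row-normalized kernel denoiser $W = D^{-1}K$ (with $K$ symmetric and $D$ the diagonal of row sums, as in nonlocal means) is not symmetric, but it is self-adjoint with respect to the weighted inner product $\langle u,v\rangle_D = u^\top D v$, since $\langle Wx,y\rangle_D = x^\top K y = \langle x,Wy\rangle_D$. A toy kernel matrix stands in for the NLM weights:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
feats = rng.standard_normal((n, 5))   # stand-in for patch features
K = np.exp(-0.5 * ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1))
d = K.sum(axis=1)
W = K / d[:, None]                    # row-normalized kernel denoiser: W = D^{-1} K
D = np.diag(d)

# W is non-symmetric, yet self-adjoint w.r.t. <u, v>_D = u^T D v:
#   <Wx, y>_D = x^T K y = <x, Wy>_D
x, y = rng.standard_normal(n), rng.standard_normal(n)
lhs = (W @ x) @ D @ y
rhs = x @ D @ (W @ y)
```

Working in this inner product is what lets the averaged-operator machinery apply despite the non-symmetry.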
Abstract: The plug-and-play (PnP) method is a recent paradigm for image regularization, where the proximal operator (associated with some given regularizer) in an iterative algorithm is replaced with a powerful denoiser. Algorithmically, this involves repeated inversion (of the forward model) and denoising until convergence. Remarkably, PnP regularization produces promising results for several restoration applications. However, a fundamental question in this regard is the theoretical convergence of the PnP iterations, since the algorithm is not strictly derived from an optimization framework. This question has been investigated in recent works, but there are still many unresolved problems. For example, it is not known if convergence can be guaranteed if we use generic kernel denoisers (e.g., nonlocal means) within the ISTA framework (PnP-ISTA). We prove that, under reasonable assumptions, fixed-point convergence of PnP-ISTA is indeed guaranteed for linear inverse problems such as deblurring, inpainting, and superresolution (the assumptions are verifiable for inpainting). We compare our theoretical findings with existing results, validate them numerically, and explain their practical relevance.
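For the inpainting case, where the assumptions are verifiable, the fixed-point behaviour is easy to observe numerically. A minimal 1-D sketch (toy Gaussian-kernel denoiser standing in for nonlocal means; not the paper's exact setup), with forward model a sampling mask $M$ and step size $1$ since $\|M\|=1$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
t = np.arange(n)
xi = np.sin(2 * np.pi * t / n) + 0.3 * np.sin(6 * np.pi * t / n)   # smooth signal
mask = rng.random(n) < 0.7            # observe ~70% of the samples
b = mask * xi                          # inpainting measurements

# Stand-in smoothing denoiser: row-normalized Gaussian kernel
K = np.exp(-0.5 * (np.subtract.outer(t, t) / 2.0) ** 2)
W = K / K.sum(axis=1, keepdims=True)

# PnP-ISTA: gradient step on 0.5*||Mx - b||^2 (step size 1), then denoise
x = b.astype(float).copy()
diffs = []
for _ in range(500):
    x_new = W @ (x - mask * (mask * x - b))
    diffs.append(np.linalg.norm(x_new - x))
    x = x_new
```

The successive differences shrink geometrically, consistent with fixed-point convergence of the PnP-ISTA map for this masked linear model.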
Abstract: In most state-of-the-art image restoration methods, the sum of a data-fidelity and a regularization term is optimized using an iterative algorithm such as ADMM (alternating direction method of multipliers). In recent years, the possibility of using denoisers for regularization has been explored in several works. A popular approach is to formally replace the proximal operator within the ADMM framework with some powerful denoiser. However, since most state-of-the-art denoisers cannot be posed as a proximal operator, one cannot guarantee the convergence of these so-called plug-and-play (PnP) algorithms. In fact, the theoretical convergence of PnP algorithms is an active research topic. In this letter, we consider the result of Chan et al. (IEEE TCI, 2017), where fixed-point convergence of an ADMM-based PnP algorithm was established for a class of denoisers. We argue that the original proof is incomplete, since convergence is not analyzed for one of the three possible cases outlined in the paper. Moreover, we explain why the argument for the other cases does not carry over to this case. We give a different analysis to fill this gap, which firmly establishes the original convergence theorem.
Abstract: In the classical bilateral filter, a fixed Gaussian range kernel is used along with a spatial kernel for edge-preserving smoothing. We consider a generalization of this filter, the so-called adaptive bilateral filter, where the center and width of the Gaussian range kernel are allowed to change from pixel to pixel. Though this variant was originally proposed for sharpening and noise removal, it can also be used for other applications such as artifact removal and texture filtering. Similar to the bilateral filter, the brute-force implementation of its adaptive counterpart requires intensive computations. While several fast algorithms have been proposed in the literature for bilateral filtering, most of them work only with a fixed range kernel. In this paper, we propose a fast algorithm for adaptive bilateral filtering, whose complexity does not scale with the spatial filter width. This is based on the observation that the concerned filtering can be performed purely in range space using an appropriately defined local histogram. We show that by replacing the histogram with a polynomial and the finite range-space sum with an integral, we can approximate the filter using analytic functions. In particular, an efficient algorithm is derived using the following innovations: the polynomial is fitted by matching its moments to those of the target histogram (this is done using fast convolutions), and the analytic functions are recursively computed using integration-by-parts. Our algorithm can accelerate the brute-force implementation by at least $20 \times$, without perceptible distortions in the visual quality. We demonstrate the effectiveness of our algorithm for sharpening, JPEG deblocking, and texture filtering.
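For reference, the brute-force adaptive bilateral filter that the fast algorithm accelerates can be sketched as follows (hypothetical parameter names; per-pixel center `theta` and width `sigma_r` of the range kernel, as described above). Its cost grows with the window radius, which is precisely what the histogram-based approximation avoids.

```python
import numpy as np

def adaptive_bilateral(img, theta, sigma_r, sigma_s=2.0, radius=5):
    """Brute-force adaptive bilateral filter for a single-channel image.
    theta[i, j]   : per-pixel center of the Gaussian range kernel
    sigma_r[i, j] : per-pixel width of the Gaussian range kernel
    """
    H, Wd = img.shape
    out = np.zeros((H, Wd), dtype=float)
    ax = np.arange(-radius, radius + 1)
    sx, sy = np.meshgrid(ax, ax, indexing="ij")
    spatial = np.exp(-(sx**2 + sy**2) / (2 * sigma_s**2))   # fixed spatial kernel
    pad = np.pad(img, radius, mode="edge")
    for i in range(H):
        for j in range(Wd):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel centered at theta[i, j] with width sigma_r[i, j]
            rng_k = np.exp(-(patch - theta[i, j])**2 / (2 * sigma_r[i, j]**2))
            w = spatial * rng_k
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

Setting `theta = img` with a constant `sigma_r` recovers the classical bilateral filter as a special case.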