Abstract: We consider an unsupervised bilevel optimization strategy for learning regularization parameters for imaging inverse problems in the presence of additive white Gaussian noise. Compared to supervised and semi-supervised metrics relying on prior knowledge of reference data and/or on some (partial) knowledge of the noise statistics, the proposed approach optimizes the whiteness of the residual between the observed data and the observation model, with no need for ground-truth data. We validate the approach on standard Total Variation-regularized image deconvolution problems, showing that the proposed quality metric provides estimates close to the mean-square-error oracle and to discrepancy-based principles.
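The residual-whiteness principle can be sketched numerically: for additive white Gaussian noise, the sample autocorrelation of the residual should be close to zero at every non-zero lag. Below is a minimal NumPy illustration; the function name and the specific normalisation are ours and only approximate in spirit the whiteness functional optimized in the paper.

```python
import numpy as np

def whiteness_measure(residual):
    """Sum of squared, normalised circular autocorrelations of the
    residual over all non-zero lags: small for white noise."""
    r = residual - residual.mean()
    f = np.fft.fft2(r)
    acf = np.real(np.fft.ifft2(f * np.conj(f)))  # circular autocorrelation
    acf /= acf.flat[0]                           # zero-lag value becomes 1
    return float(np.sum(acf**2) - 1.0)           # drop the zero-lag term

rng = np.random.default_rng(0)
white = rng.standard_normal((64, 64))            # white residual
correlated = np.cumsum(white, axis=0)            # strongly correlated residual
# whiteness_measure(white) is much smaller than whiteness_measure(correlated)
```

A parameter choice whose residual minimises such a measure is "whitest", which is the selection criterion the approach builds on.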
Abstract: The unprecedented success of image reconstruction approaches based on deep neural networks has revolutionised both the processing and the analysis paradigms in several applied disciplines. In the field of digital humanities, the task of digitally reconstructing ancient frescoes is particularly challenging due to the scarcity of available training data caused by ageing, wear, tear and retouching over time. To overcome these difficulties, we consider the Deep Image Prior (DIP) inpainting approach, which computes appropriate reconstructions by progressively updating an untrained convolutional neural network so as to match the reliable information in the image at hand while promoting regularisation elsewhere. In comparison with state-of-the-art approaches (based on variational/PDE and patch-based methods), DIP-based inpainting reduces artefacts and better adapts to contextual/non-local information, thus providing a valuable and effective tool for art historians. As a case study, we apply this approach to reconstruct missing image contents in a dataset of highly damaged digital images of medieval paintings located in several chapels in the Mediterranean Alpine Arc, and provide a detailed description of how visible and invisible (e.g., infrared) information can be integrated to identify and reconstruct damaged image regions.
Abstract: The spatial resolution of images of living samples obtained by fluorescence microscopes is physically limited by the diffraction of visible light, which makes the study of entities smaller than the diffraction barrier (around 200 nm in the x-y plane) very challenging. To overcome this limitation, several deconvolution and super-resolution techniques have been proposed. Within the framework of inverse problems, modern approaches in fluorescence microscopy reconstruct a super-resolved image from a temporal stack of frames by carefully designing suitable hand-crafted sparsity-promoting regularisers. Numerically, such approaches are solved by proximal gradient-based iterative schemes. Aiming at a reconstruction better adapted to sample geometries (e.g. thin filaments), we adopt a plug-and-play denoising approach with convergence guarantees and replace the proximity operator associated with the explicit image regulariser by an image denoiser (i.e. a pre-trained network) which, upon appropriate training, mimics the action of an implicit prior. To account for the independence of the fluctuations between molecules, the model relies on second-order statistics. The denoiser is then trained on covariance images coming from data representing sequences of fluctuating fluorescent molecules with filament structure. The method is evaluated on both simulated and real fluorescence microscopy images, showing its ability to correctly reconstruct filament structures with high values of peak signal-to-noise ratio (PSNR).
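The plug-and-play idea described above replaces the proximity operator in a proximal gradient scheme with a denoiser. A toy 1D sketch, with a simple local-averaging filter standing in for the pre-trained network (the operators and parameters below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def pnp_forward_backward(y, A, At, denoise, tau, n_iter=100):
    """Plug-and-play proximal gradient: a gradient step on the data
    fidelity, then a denoiser in place of the regulariser's prox."""
    x = At(y)
    for _ in range(n_iter):
        x = denoise(x - tau * At(A(x) - y))
    return x

# Toy 1D deconvolution: A is a 3-tap blur; the "denoiser" is a mild
# local average standing in for the pre-trained network.
kernel = np.array([0.25, 0.5, 0.25])
A = lambda x: np.convolve(x, kernel, mode="same")
At = A  # the kernel is symmetric, so A is self-adjoint
denoise = lambda x: 0.8 * x + 0.1 * (np.roll(x, 1) + np.roll(x, -1))

x_true = np.zeros(64)
x_true[20:30] = 1.0           # a "filament"-like blocky profile
y = A(x_true)                 # blurred observation
x_hat = pnp_forward_backward(y, A, At, denoise, tau=0.9)
```

The convergence guarantees mentioned in the abstract hinge on conditions relating the denoiser's properties to the step size `tau`; this sketch only illustrates the iteration structure.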
Abstract: To overcome the physical barriers caused by light diffraction, super-resolution techniques are often applied in fluorescence microscopy. State-of-the-art approaches require specific and often demanding acquisition conditions to achieve adequate levels of both spatial and temporal resolution. Analyzing the stochastic fluctuations of the fluorescent molecules provides a solution to the aforementioned limitations, as sufficiently high spatio-temporal resolution for live-cell imaging can be achieved by using common microscopes and conventional fluorescent dyes. Based on this idea, we present COL0RME, a method for COvariance-based $\ell_0$ super-Resolution Microscopy with intensity Estimation, which achieves good spatio-temporal resolution by solving a sparse optimization problem in the covariance domain, and discuss automatic parameter selection strategies. The method is composed of two steps: in the first, both the emitters' independence and the sparse distribution of the fluorescent molecules are exploited to provide accurate localization; in the second, real intensity values are estimated on the computed support. We report several numerical results on both synthetic and real fluorescence microscopy images, together with comparisons with state-of-the-art approaches. Our results show that COL0RME outperforms competing methods that similarly exploit temporal fluctuations; in particular, it achieves better localization, reduces background artifacts and avoids fine parameter tuning.
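The covariance-domain intuition behind the first (localization) step can be illustrated on a toy 1D example: independently blinking emitters make the temporal variance image (the diagonal of the temporal covariance) sparse and peaked at emitter positions. The thresholding below is a drastic simplification of COL0RME's sparse optimization, shown only to convey the idea:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 500, 32                       # frames, pixels (1D toy grid)
support = np.zeros(N, dtype=bool)
support[[5, 17, 26]] = True          # ground-truth emitter positions

frames = np.zeros((T, N))
frames[:, support] = rng.random((T, support.sum())) > 0.5  # blinking on/off
frames += 0.05 * rng.standard_normal((T, N))               # camera noise

variance = frames.var(axis=0)        # diagonal of the temporal covariance
estimated = variance > 0.5 * variance.max()   # crude support estimate
```

In the actual method the full covariance structure and an $\ell_0$-type sparsity-promoting problem replace this naive threshold, and a second step estimates real intensities on the recovered support.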
Abstract: We consider Wilson-Cowan-type models for the mathematical description of orientation-dependent Poggendorff-like illusions. Our modelling improves two previously proposed cortical-inspired approaches by embedding the sub-Riemannian heat kernel into the neuronal interaction term, in agreement with the intrinsically anisotropic functional architecture of V1 based on both local and lateral connections. For the numerical realisation of both models, we consider standard gradient descent algorithms combined with Fourier-based approaches for the efficient computation of the sub-Laplacian evolution. Our numerical results show that the use of the sub-Riemannian kernel allows us to reproduce visual misperceptions and inpainting-type biases more markedly than the previous approaches.
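The Fourier-based computation mentioned above exploits the fact that heat-type semigroups are diagonal in the Fourier domain, so one evolution step costs a single FFT/multiply/IFFT. A sketch with the isotropic 2D Laplacian standing in for the sub-Laplacian (whose SE(2) evolution the paper computes with analogous FFT techniques):

```python
import numpy as np

def heat_evolve(u, t):
    """One heat-semigroup step exp(t*Laplacian) computed in Fourier space."""
    k1 = 2 * np.pi * np.fft.fftfreq(u.shape[0])
    k2 = 2 * np.pi * np.fft.fftfreq(u.shape[1])
    ksq = k1[:, None] ** 2 + k2[None, :] ** 2          # |xi|^2 symbol
    return np.real(np.fft.ifft2(np.fft.fft2(u) * np.exp(-t * ksq)))

rng = np.random.default_rng(2)
u0 = rng.standard_normal((32, 32))
u1 = heat_evolve(u0, t=1.0)   # smoother field: mean preserved, variance reduced
```

In the sub-Riemannian case the symbol is anisotropic and orientation-dependent, but the algorithmic template (diagonalise, damp, invert) is the same.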
Abstract: We propose a non-convex variational model for the super-resolution of Optical Coherence Tomography (OCT) images of the murine eye, by enforcing sparsity with respect to suitable dictionaries learnt from high-resolution OCT data. The statistical characteristics of OCT images motivate the use of $\alpha$-stable distributions for learning dictionaries, by considering the non-Gaussian case $\alpha=1$. The sparsity-promoting cost function relies on a non-convex penalty - Cauchy-based or Minimax Concave Penalty (MCP) - which makes the problem particularly challenging. We propose an efficient algorithm for minimizing the function based on the forward-backward splitting strategy which guarantees at each iteration the existence and uniqueness of the proximal point. Comparisons with standard convex $\ell_1$-based reconstructions show the better performance of non-convex models, especially in view of further OCT image analysis.
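For the MCP case, the proximity operator admits a closed form (firm thresholding), and its uniqueness requires the concavity parameter to exceed the step size, which is the kind of condition guaranteeing existence and uniqueness of the proximal point mentioned above. A sketch under one common parameter convention (not necessarily the paper's):

```python
import numpy as np

def prox_mcp(y, lam, gamma, t=1.0):
    """Proximity operator of t * MCP(lam, gamma): firm thresholding.
    Well-defined and unique only when gamma > t (cf. the step-size
    condition in the forward-backward scheme)."""
    assert gamma > t, "uniqueness of the proximal point requires gamma > t"
    a = np.abs(y)
    out = np.zeros_like(y)
    mid = (a > t * lam) & (a <= gamma * lam)       # firm-shrinkage region
    out[mid] = np.sign(y[mid]) * (a[mid] - t * lam) * gamma / (gamma - t)
    out[a > gamma * lam] = y[a > gamma * lam]      # identity region
    return out

# zeros small inputs, firm-shrinks mid-range ones, leaves large ones fixed
z = prox_mcp(np.array([0.5, 2.0, 5.0]), lam=1.0, gamma=3.0)
```

Unlike the soft thresholding of the convex $\ell_1$ penalty, firm thresholding is unbiased on large coefficients, which is the practical advantage of the non-convex model.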
Abstract: We consider the evolution model proposed in [9, 6] to describe illusory contrast perception phenomena induced by surrounding orientations. Firstly, we highlight its analogies and differences with the widely used Wilson-Cowan equations [48], mainly in terms of efficient representation properties. Then, in order to explicitly encode local directional information, we exploit the model of the primary visual cortex V1 proposed in [20] and largely used over the last years for several image processing problems [24, 38, 28]. The resulting model is capable of describing assimilation and contrast visual biases at the same time, the main novelty being its explicit dependence on local image orientation. We report several numerical tests showing the ability of the model to explain, in particular, orientation-dependent phenomena such as grating induction and a modified version of the Poggendorff illusion. For the latter example, we empirically show the existence of a set of threshold parameters separating inpainting-type from perception-type reconstructions, describing long-range connectivity between different hypercolumns in the primary visual cortex.
Abstract: We propose a new variational model for nonlinear image fusion. Our approach incorporates the osmosis model proposed in Vogel et al. (2013) and Weickert et al. (2013) as an energy term in a variational model. The osmosis energy is known to realize visually plausible image data fusion. As a consequence, our method is invariant to multiplicative brightness changes. On the practical side, it requires minimal supervision and parameter tuning and can encode prior information on the structure of the images to be fused. We develop a primal-dual algorithm for solving this new image fusion model and we apply the resulting minimisation scheme to multi-modal image fusion for face fusion, colour transfer and some cultural heritage conservation challenges. Visual comparisons with state-of-the-art methods demonstrate the quality and flexibility of our approach.
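The primal-dual algorithm referred to is of Chambolle-Pock type. A self-contained 1D illustration of the iteration template on TV-L2 denoising (the osmosis-energy model uses the same template with a different operator and data term; everything below is a generic sketch, not the paper's scheme):

```python
import numpy as np

def grad(u):                     # forward differences (the operator K)
    return np.append(u[1:] - u[:-1], 0.0)

def div(p):                      # negative adjoint of grad
    return np.concatenate(([p[0]], p[1:-1] - p[:-2], [-p[-2]]))

def tv_denoise(f, lam, n_iter=300):
    """Chambolle-Pock iterations for min_u lam*TV(u) + 0.5*||u - f||^2."""
    tau = sigma = 0.35           # tau * sigma * ||K||^2 < 1 (||K||^2 <= 4)
    u, u_bar, p = f.copy(), f.copy(), np.zeros_like(f)
    for _ in range(n_iter):
        p = np.clip(p + sigma * grad(u_bar), -lam, lam)    # dual prox
        u_new = (u + tau * (div(p) + f)) / (1 + tau)       # primal prox
        u_bar = 2 * u_new - u                              # extrapolation
        u = u_new
    return u

rng = np.random.default_rng(3)
x = np.zeros(50)
x[20:35] = 1.0                    # piecewise-constant ground truth
f = x + 0.3 * rng.standard_normal(50)
u = tv_denoise(f, lam=0.3)        # denoised estimate
```

Swapping in the osmosis energy changes the prox steps and the operator K, but not the alternating dual-ascent/primal-descent structure.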
Abstract: In this paper we present a new regularization term for variational image restoration which can be regarded as a space-variant anisotropic extension of the classical isotropic Total Variation (TV) regularizer. The proposed regularizer comes from the statistical assumption that the gradients of the target image are distributed locally according to space-variant bivariate Laplacian distributions. The highly flexible variational structure of the corresponding regularizer encodes several free parameters which hold the potential for faithfully modelling the local geometry in the image and describing local orientation preferences. For an automatic estimation of such parameters, we design a robust maximum likelihood approach and report results on its reliability on synthetic data and natural images. A minimization algorithm based on the Alternating Direction Method of Multipliers (ADMM) is presented for the efficient numerical solution of the proposed variational model. Some experimental results are reported which demonstrate the high quality of the restorations achievable with the proposed model, in particular with respect to classical Total Variation regularization.
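As a simplified illustration of the maximum-likelihood parameter estimation: for a univariate Laplacian with scale b, the MLE of b is the sample mean absolute deviation, so per-patch scale maps can be estimated directly from the gradient fields. This sketch treats the two gradient components independently and ignores the orientation/anisotropy parameters of the full bivariate model:

```python
import numpy as np

def local_laplacian_scales(gx, gy, patch=32):
    """Per-patch ML estimates of Laplacian scale parameters along the
    two axes: for density (1/2b) exp(-|x|/b), the MLE of b is mean(|x|)."""
    h, w = gx.shape
    bx = np.zeros((h // patch, w // patch))
    by = np.zeros_like(bx)
    for i in range(bx.shape[0]):
        for j in range(bx.shape[1]):
            sl = np.s_[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            bx[i, j] = np.abs(gx[sl]).mean()
            by[i, j] = np.abs(gy[sl]).mean()
    return bx, by

rng = np.random.default_rng(4)
gx = rng.laplace(scale=2.0, size=(64, 64))   # synthetic "gradient" fields
gy = rng.laplace(scale=0.5, size=(64, 64))
bx, by = local_laplacian_scales(gx, gy)      # estimates near 2.0 and 0.5
```

The resulting space-variant scale maps are exactly the kind of free parameters that weight the anisotropic TV-type regularizer locally.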
Abstract: We consider a differential model describing neuro-physiological contrast perception phenomena induced by surrounding orientations. The mathematical formulation relies on a cortical-inspired modelling [10] largely used over the last years to describe neuron interactions in the primary visual cortex (V1) and applied to several image processing problems [12, 19, 13]. Our model connects to Wilson-Cowan-type equations [23] and it is analogous to the one used in [3, 2, 14] to describe assimilation and contrast phenomena, the main novelty being its explicit dependence on local image orientation. To confirm the validity of the model, we report some numerical tests showing its ability to explain orientation-dependent phenomena (such as grating induction) and geometric-optical illusions [21, 16] classically explained only by filtering-based techniques [6, 18].