Category Archives: image processing

Relaxations of non-convex functionals

We study different relaxations of non-convex functionals that can be found in image processing. Some problems, such as image segmentation, can indeed be written as the minimization of a functional. The minimizer of the functional represents the segmentation.

Different methods have been proposed to find local or global minima of the non-convex functional of the two-phase piecewise constant Mumford-Shah model. With a convex relaxation of this model we can find a global minimum of the non-convex functional. We present and compare some of these methods, and we propose a new model with a narrow band. This model finds local minima while using robust convex optimization algorithms. Finally, we build a convex relaxation of a two-phase segmentation model that compares two given histograms with those of the two segmented regions.
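To make the convex-relaxation idea concrete, here is a minimal sketch, assuming the classical relaxation of Chan, Esedoglu and Nikolova minimized with the Chambolle-Pock primal-dual algorithm; the region constants c1 and c2, the weight lam and the step sizes are illustrative choices, not the exact models compared in this work:

```python
import numpy as np

def grad(u):
    # forward differences with Neumann boundary conditions
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # divergence, defined as the negative adjoint of grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def two_phase_segmentation(f, c1, c2, lam=1.0, n_iter=300):
    # convex relaxation of two-phase piecewise constant Mumford-Shah:
    #   min_{0 <= u <= 1}  TV(u) + lam * <u, r>,  r = (f - c1)^2 - (f - c2)^2,
    # solved by a primal-dual (Chambolle-Pock) scheme, then thresholded
    r = (f - c1) ** 2 - (f - c2) ** 2
    u = np.full_like(f, 0.5); u_bar = u.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    tau = sigma = 0.25                     # tau * sigma * ||grad||^2 <= 1
    for _ in range(n_iter):
        gx, gy = grad(u_bar)               # dual ascent on p
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px /= norm; py /= norm             # projection onto |p| <= 1
        u_old = u                          # primal descent on u
        u = np.clip(u + tau * (div(px, py) - lam * r), 0.0, 1.0)
        u_bar = 2 * u - u_old              # over-relaxation step
    return u > 0.5                         # threshold gives the segmentation
```

For fixed c1 and c2, thresholding the relaxed solution at almost any level in (0, 1) yields a global minimizer of the original non-convex problem; 0.5 is the usual choice.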

We also study relaxations of high-dimensional multi-label problems such as optical flow computation. A convex relaxation with a new algorithm is proposed; the algorithm is iterative with exact projections. A new algorithm is also given for a relaxation that is convex in each variable but not globally convex. We study the problem of constructing a solution of the original non-convex problem from a solution of the relaxed problem, and we compare existing methods with new ones.

Image colorization

Colorization is an important issue for the restoration of old documents, but also for the entertainment industry. The methods fall into two categories: manual methods, and example-based methods for which the user provides a color image that serves as a source of information. A comparison of textures is used to extract the colors. Regularization is then required, carried out in our case by minimizing a non-convex functional. Proposing a reasonable model for colorization, together with an efficient algorithm to minimize the proposed functional, constituted the bulk of the first part of the thesis.

However, example-based methods have difficulty finding a relevant source image. It is hard to find a relevant source image for a given colorization, and the method frequently fails, for example when a color is not present in the source image, or when two smooth regions with similar textures should receive different colors. To avoid these issues, we proposed a method called collaborative colorization, based on an interactive process in which the user supervises the result. The user can add color points wherever the result appears unsatisfactory, and this new information is integrated into the process by exploiting the non-convexity of the model.

Illustration of colorization

Super-resolution and inpainting

Moncef Hidane completed a post-doctoral fellowship in 2013-2014 funded by the Labex CPU. He previously completed a doctoral thesis at the GREYC laboratory in Caen on the decomposition of signals on graphs. He is currently a faculty member at INSA Centre Val de Loire.

The topic of the post-doc is 3D image reconstruction from a full low-resolution acquisition and a partial very-high-resolution acquisition. This is thus a super-resolution inverse problem.

The model adopted for the low-resolution acquisition is a convolution operator followed by spatial or spectral sub-sampling. In a first step, the impulse response of the sensor is assumed to be known. Additive white noise is included in the model.

The model adopted for the high-resolution acquisition is an occlusion whose support is known. Additive white noise is also included in the model.
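As a concrete illustration, here is a minimal sketch of the two forward models; the PSF h, the sub-sampling factor, the mask and the noise levels are illustrative assumptions, not the actual acquisition parameters:

```python
import numpy as np
from scipy.ndimage import convolve

def low_res_forward(x, h, factor=4, noise_std=0.01, rng=None):
    # low-resolution model: convolution with the sensor PSF h,
    # then spatial sub-sampling by `factor`, plus additive white noise
    rng = np.random.default_rng() if rng is None else rng
    blurred = convolve(x, h, mode='nearest')
    y = blurred[::factor, ::factor]
    return y + noise_std * rng.standard_normal(y.shape)

def high_res_forward(x, mask, noise_std=0.01, rng=None):
    # high-resolution model: occlusion with known support `mask`
    # (1 where observed, 0 where occluded), plus additive white noise
    rng = np.random.default_rng() if rng is None else rng
    return mask * (x + noise_std * rng.standard_normal(x.shape))
```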

The first track is regularization with an isotropic total variation prior. Two versions are considered: one with inequality constraints, the other with equality constraints. In this context, an important part of the work concerns the minimization of the criterion in each case. Due to the non-differentiability of the total variation, algorithms from non-smooth convex optimization are considered. The goal is to obtain fast and robust algorithms for computing solutions.
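To fix ideas, the two versions can plausibly be written as follows, where \mathbf{H} denotes the convolution, \mathbf{S} the sub-sampling, \mathbf{M} the occlusion mask, and y_{\mathrm{LR}}, y_{\mathrm{HR}} the two acquisitions; this notation and the bounds \delta, \varepsilon are our assumptions, not the exact formulation of the work:

\begin{equation*} \underset{x}{\text{min}}\ \text{TV}(x) \text{ s.t. } \|\mathbf{S}\mathbf{H}x-y_{\mathrm{LR}}\|_2\le \delta \text{ and } \|\mathbf{M}x-y_{\mathrm{HR}}\|_2\le \varepsilon \end{equation*}

or, with equality constraints,

\begin{equation*} \underset{x}{\text{min}}\ \text{TV}(x) \text{ s.t. } \mathbf{S}\mathbf{H}x=y_{\mathrm{LR}} \text{ and } \mathbf{M}x=y_{\mathrm{HR}} \end{equation*}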

The second track is regularization with a non-local operator. The idea is to take advantage of both the low-resolution volume and the high-resolution images to construct a non-local total variation prior.

The following figure illustrates the problem and the results obtained with the two previous approaches.

(a) original 256 x 256 image; (b) LR image (factor 4); (c) partial HR image; (d) bicubic interpolation of (b) (PSNR = 24.43); (e) TV reconstruction (PSNR = 29.52); (f) reconstruction by non-local regularization (PSNR = 30.16).


Study of sparse methods for tomography

In tomography with a limited number of angles, the resulting inverse problem is ill-conditioned. The “classical” algorithms, analytic and algebraic, produce many artifacts. Optimization methods address this problem by using a regularization. In my work, I assume that the images are sparse (parsimonious) in a given representation; a sparse image is an image that has few non-zero components. To promote this sparsity, I regularize the problem with an \ell^1 norm and solve one of the two equivalent problems below.

(1)   \begin{equation*} \underset{x\in \mathbb{R}^n}{\text{min}}\ \|x\|_1 \text{ s.t. } \|\mathbf{A}x-y\|_2\le \delta \end{equation*}

(2)   \begin{equation*} \underset{x\in \mathbb{R}^n}{\text{min}}\ \frac{1}{2}\|\mathbf{A}x-y\|_2^2 + \lambda \|x\|_1 \end{equation*}
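As an illustration, here is a minimal ISTA (proximal gradient) sketch for problem (2); this is a standard baseline solver, not the algorithms studied in this work, and the matrix A would in practice be a tomographic projection operator:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    # proximal gradient (ISTA) for problem (2):
    #   min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the data-fidelity term
        x = soft_threshold(x - g / L, lam / L)
    return x
```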

My work is to assess the theoretical limits of these problems while staying as close as possible to real settings. Recent studies [3] have highlighted sufficient conditions ensuring the reconstruction of the image by \ell^1 minimization and provide an error bound for this reconstruction [1]. These conditions require the existence of a particular vector called a dual certificate. Depending on the application, it may be possible to compute a candidate that is a dual certificate under certain conditions [2,4].

This work primarily covers the construction of such candidates. I propose a greedy approach to compute a candidate that is a dual certificate for almost all recoverable images and that minimizes the theoretical reconstruction error.

[1] Charles Dossal and Remi Tesson. Consistency of l1 recovery from noisy deterministic measurements. Applied and Computational Harmonic Analysis, 2013.

[2] Jean-Jacques Fuchs. Recovery of exact sparse representations in the presence of bounded noise. IEEE Transactions on Information Theory, 51(10):3601–3608, 2005.

[3] Markus Grasmair, Otmar Scherzer, and Markus Haltmeier. Necessary and sufficient conditions for linear convergence of l1-regularization. Communications on Pure and Applied Mathematics, 64(2):161–182, 2011.

[4] Samuel Vaiter, Gabriel Peyré, Charles Dossal, and Jalal Fadili. Robust sparse analysis regularization. Preprint, 2011.

Optimal Transport in Image Processing

Optimal transport is nowadays a major statistical tool for computer vision and image processing. It may be used for measuring similarity between features, for matching and averaging features, or for registering images. However, major drawbacks of this framework are the lack of regularity of the transport map and its lack of robustness to outliers. The computational cost associated with the estimation of optimal transport is also very high, which makes the theory difficult to apply to large-dimensional problems. Hence, we are interested in the definition of new algorithms for computing solutions of generalized optimal transport problems that include regularity priors.
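One popular way to regularize optimal transport while drastically reducing its computational cost is entropic regularization, solved by Sinkhorn iterations. The sketch below is a generic illustration of that idea, not the generalized models developed here; a and b are the two histograms, C the ground cost matrix, and eps the regularization strength:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=200):
    # entropy-regularized optimal transport:
    #   min_P <P, C> - eps * H(P)  s.t.  P @ 1 = a,  P.T @ 1 = b,
    # solved by alternately rescaling rows and columns (Sinkhorn iterations)
    K = np.exp(-C / eps)                   # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                  # enforce the column marginals
        u = a / (K @ v)                    # enforce the row marginals
    return u[:, None] * K * v[None, :]     # the transport plan P
```

The plan P can then be used to compute a transport distance (np.sum(P * C)) or a soft matching between the two distributions.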

Non-local models

Several imaging systems provide large amounts of images with complex degradation models and low signal-to-noise ratios, for which specific, adapted restoration methods should be developed. With the computing power currently available, new paradigms emerge, such as the “non-local” one (Buades et al., 2005), with very good performance (see, e.g., Lebrun et al., 2012; Milanfar, 2013). However, extending these methods to a specific image modality might not be trivial. We focus on extensions of such methods for the restoration of non-conventional image modalities (low-light imagery, coherent imagery, tomography, …), which involve complex degradation models (blur, missing data, non-Gaussian, non-stationary and correlated noise) and require fast computation on large volumes of data.
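For reference, here is a naive sketch of the original non-local means filter that this line of work extends; the patch size, search-window size and filtering parameter h are illustrative, and the loops are written for clarity rather than speed:

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.1):
    # non-local means (Buades et al., 2005): each pixel becomes a
    # weighted average of pixels whose surrounding patches look similar
    pad = patch // 2
    padded = np.pad(img, pad + search, mode='reflect')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad + search, j + pad + search
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            num, den = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    cand = padded[ci + di - pad:ci + di + pad + 1,
                                  cj + dj - pad:cj + dj + pad + 1]
                    w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                    num += w * padded[ci + di, cj + dj]
                    den += w
            out[i, j] = num / den
    return out
```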

Model selection for image processing

Parameter tuning is a critical point in image restoration techniques. When degraded data are simulated from a reference image, we can compare the restored image to the reference one and select the set of parameters providing the best restoration quality. This tuning is less trivial in real acquisition settings, for which there are no reference images. In the case of simple degradation models, statistical tools can be used to estimate the squared restoration error even though the reference image is unknown; this is known as “risk estimation”. Optimizing this estimate with respect to the parameters of the method leads to near-optimal calibration. Stein's unbiased risk estimator (SURE, Stein 1981) is one of the most famous examples, successfully applied to calibrate image restoration methods under Gaussian noise degradation (see, e.g., Ramani et al., 2008). We focus on developing new estimators derived from the SURE for the calibration of parameters involved in recent, potentially highly parameterized methods for the restoration of images with complex degradation models (blur, missing data, non-Gaussian, non-stationary and correlated noise).
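The classic closed-form case makes the idea concrete: for soft thresholding of data contaminated by Gaussian noise of known variance, the SURE can be evaluated without the reference image and minimized over the threshold. A textbook sketch, with an illustrative grid search:

```python
import numpy as np

def soft_threshold(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def sure_soft(y, lam, sigma):
    # Stein's unbiased risk estimate for soft thresholding of
    # y = x + noise, noise ~ N(0, sigma^2): unbiased for E||x_hat - x||^2
    x_hat = soft_threshold(y, lam)
    df = np.sum(np.abs(y) > lam)           # divergence of the estimator
    return -y.size * sigma ** 2 + np.sum((x_hat - y) ** 2) + 2 * sigma ** 2 * df

def calibrate(y, sigma):
    # pick the threshold minimizing the SURE -- no reference image needed
    lams = np.linspace(0.0, 3.0 * sigma, 50)
    return min(lams, key=lambda lam: sure_soft(y, lam, sigma))
```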

See:
Stein Unbiased GrAdient estimator of the Risk
Stein Consistent Risk Estimator for hard thresholding
Local Behavior of Sparse Analysis Regularization

Stein Unbiased GrAdient estimator of the Risk

Algorithms to solve variational regularization of ill-posed inverse problems usually involve operators that depend on a collection of continuous parameters. When these operators enjoy some (local) regularity, these parameters can be selected using the so-called Stein Unbiased Risk Estimate (SURE). While this selection is usually performed by exhaustive search, we address in this work the problem of using the SURE to efficiently optimize for a collection of continuous parameters of the model. When considering non-smooth regularizers, such as the popular l1-norm whose proximal mapping is soft-thresholding, the SURE is a discontinuous function of the parameters, preventing the use of gradient descent optimization techniques. Instead, we focus on an approximation of the SURE based on finite differences as proposed in (Ramani et al., 2008). Under mild assumptions on the estimation mapping, we show that this approximation is a weakly differentiable function of the parameters and that its weak gradient, coined the Stein Unbiased GrAdient estimator of the Risk (SUGAR), provides an asymptotically (with respect to the data dimension) unbiased estimate of the gradient of the risk. Moreover, in the particular case of soft-thresholding, the SUGAR is proved to be also a consistent estimator. The SUGAR can then be used as a basis to perform a quasi-Newton optimization. The computation of the SUGAR relies on the closed-form (weak) differentiation of the non-smooth function. We provide its expression for a large class of iterative proximal splitting methods and apply our strategy to regularizations involving non-smooth convex structured penalties. Illustrations on various image restoration and matrix completion problems are given.
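The finite-difference approximation mentioned above can be sketched as follows; this is a Monte Carlo variant in the spirit of Ramani et al. (2008), where f is any estimation mapping, and the random probe direction and step eps are illustrative choices rather than the paper's implementation:

```python
import numpy as np

def mc_sure(f, y, sigma, eps=1e-4, rng=None):
    # Monte Carlo SURE: the divergence of the estimator f at y is
    # approximated by a finite difference along a random direction delta
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.standard_normal(y.shape)
    x_hat = f(y)
    div = delta.ravel() @ (f(y + eps * delta) - x_hat).ravel() / eps
    return -y.size * sigma ** 2 + np.sum((x_hat - y) ** 2) + 2 * sigma ** 2 * div
```

Differentiating such an estimate with respect to the parameters of f, rather than grid-searching them, is in essence what the SUGAR provides.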

Associated publications and source codes:

Charles-Alban Deledalle, Samuel Vaiter, Gabriel Peyré and Jalal Fadili
Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection,
Technical report, HAL hal-00987295

MATLAB source codes available from GitHub.