Category Archives: statistics

Axis 9: Nonparametric estimation and statistical processes

In this axis, one research direction focuses on nonparametric and semi-parametric statistics for the design of optimal estimators (in the minimax sense, or via oracle inequalities) for high-dimensional statistical inference problems (deformable models in signal processing, covariance matrix estimation, inverse problems).

In this context, the first part focuses on the minimization of the Stein unbiased risk estimator (SURE) for variational models. A first theoretical difficulty is to design such estimators when the targeted functionals are non-smooth, non-convex or discontinuous. A second difficulty relates to the development of efficient algorithms for computing and minimizing the SURE when the solutions of these models are themselves produced by an optimization algorithm. A final difficulty concerns the extension of the SURE to complex inference problems (ill-posed problems, non-white Gaussian noise, etc.).
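As a toy illustration of what SURE minimization looks like in the simplest (smooth-risk) case, the SURE of the soft-thresholding estimator under i.i.d. Gaussian noise has a closed form and can be minimized by grid search. All values below (signal, noise level, threshold grid) are illustrative choices, not taken from the axis's work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a sparse signal observed under i.i.d. Gaussian noise.
n, sigma = 10_000, 1.0
x = np.zeros(n)
x[:500] = 5.0                        # a few large coefficients
y = x + sigma * rng.normal(size=n)   # noisy observation

def soft(v, lam):
    """Soft-thresholding, the proximal mapping of the l1-norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sure(y, lam, sigma):
    """Closed-form SURE of soft-thresholding for the loss ||soft(y) - x||^2."""
    df = np.count_nonzero(np.abs(y) > lam)   # divergence of the mapping
    return -n * sigma**2 + np.sum(np.minimum(y**2, lam**2)) + 2 * sigma**2 * df

lams = np.linspace(0.1, 5.0, 50)
sures = np.array([sure(y, lam, sigma) for lam in lams])
losses = np.array([np.sum((soft(y, lam) - x)**2) for lam in lams])

best = lams[np.argmin(sures)]        # selected without ever using x
```

Although `sure` never sees the clean signal `x`, its minimizer over the grid is close to the minimizer of the oracle loss, which is the whole point of risk estimation.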

Another part of this axis concerns semi-parametric regression models in which the regression function is estimated by a recursive Nadaraya-Watson type estimator. In this context, a “région Aquitaine” contract was obtained in 2014 for 3 years. It covers the development of new nonparametric estimation methods with applications in valvometry and environmental sciences.
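A recursive Nadaraya-Watson estimator can be sketched as follows: each new observation updates the numerator and denominator of the kernel ratio in place, with a bandwidth that shrinks as data accumulate. This is a generic textbook version with ad hoc tuning constants (`c`, `alpha`), not the specific estimators studied in the axis:

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

class RecursiveNW:
    """Recursive Nadaraya-Watson regression estimator on a fixed grid.

    Each new pair (X_i, Y_i) updates the numerator and denominator in
    O(len(grid)), with bandwidth h_i = c * i**(-alpha) shrinking over time.
    """
    def __init__(self, grid, c=1.0, alpha=0.25):
        self.grid, self.c, self.alpha = grid, c, alpha
        self.i = 0
        self.num = np.zeros_like(grid)
        self.den = np.zeros_like(grid)

    def update(self, x, y):
        self.i += 1
        h = self.c * self.i**(-self.alpha)
        w = gauss_kernel((self.grid - x) / h) / h
        self.num += y * w
        self.den += w

    def estimate(self):
        return self.num / np.maximum(self.den, 1e-12)

# Toy regression m(x) = sin(x), observed with additive noise.
grid = np.linspace(0.0, np.pi, 50)
est = RecursiveNW(grid, c=0.8, alpha=0.25)
for _ in range(20_000):
    x = rng.uniform(0.0, np.pi)
    est.update(x, np.sin(x) + 0.3 * rng.normal())

max_err = np.max(np.abs(est.estimate() - np.sin(grid)))
```

The online update is what makes the estimator suitable for streaming data such as valvometry recordings.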

Axis 8: Large deviations and concentration inequalities

This research axis consists of two parts. The first part deals with the large deviation properties of quadratic forms of Gaussian processes and Brownian diffusions. One can also cite recent work on large deviations of least squares estimators of the unknown parameters of the Ornstein-Uhlenbeck process with shift. The second part is dedicated to concentration inequalities for sums of independent random variables and martingales. A book is forthcoming, with applications of concentration inequalities in probability and statistics, particularly to autoregressive processes, random permutations and the spectra of random matrices.
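As a minimal numerical illustration of the kind of concentration inequality in question, Hoeffding's bound for sums of bounded i.i.d. variables can be checked empirically (the sample size, threshold and number of trials below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hoeffding's inequality for i.i.d. X_i in [0, 1] with mean mu:
#   P(|S_n / n - mu| >= t) <= 2 * exp(-2 * n * t**2).
# Empirical check on uniform variables (mu = 1/2).
n, t, trials = 200, 0.1, 20_000
samples = rng.uniform(0.0, 1.0, size=(trials, n))
deviations = np.abs(samples.mean(axis=1) - 0.5)
empirical = np.mean(deviations >= t)
bound = 2 * np.exp(-2 * n * t**2)
```

The empirical frequency of large deviations stays below the bound; the gap reflects the fact that Hoeffding's inequality is distribution-free over all [0, 1]-valued variables.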

Axis 7: PDEs: stochastic approaches

This research axis studies the properties of some classes of PDEs (existence, uniqueness, long-time behavior, regularity, …) using stochastic processes. The study of systems of forward-backward stochastic differential equations (FBSDEs) makes it possible, for example, to obtain a probabilistic representation of such PDEs, commonly called a Feynman-Kac formula. This representation also helps to build probabilistic algorithms for numerically solving these PDEs and to study their convergence.

FBSDEs also allow us to model the equations of hydrodynamics, and their approximate solutions yield new simulation methods. Combined with variational methods, they address questions about the existence of generalized flows with prescribed initial and final conditions. Backward SDEs are also a promising tool for smoothing and denoising signals via the design of martingales with a given terminal value.
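The Feynman-Kac representation behind these probabilistic algorithms can be illustrated on the simplest case, the heat equation: u(t, x) = E[f(x + W_t)] solves u_t = (1/2) u_xx with initial data f, and the expectation can be approximated by Monte Carlo. This is a minimal textbook sketch, not one of the axis's algorithms:

```python
import numpy as np

rng = np.random.default_rng(3)

# Feynman-Kac sketch: u(t, x) = E[f(x + W_t)] solves the heat equation
# u_t = (1/2) u_xx, u(0, .) = f.  For f = cos, the exact solution is
# u(t, x) = exp(-t / 2) * cos(x), which lets us check the estimate.
def u_mc(t, x, n_paths=1_000_000):
    w = np.sqrt(t) * rng.normal(size=n_paths)   # W_t ~ N(0, t)
    return np.mean(np.cos(x + w))

t, x = 0.5, 1.0
approx = u_mc(t, x)
exact = np.exp(-t / 2) * np.cos(x)
```

The Monte Carlo error decays like 1/sqrt(n_paths) independently of the spatial dimension, which is precisely why such probabilistic representations are attractive for high-dimensional PDEs.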

Axis 5: Stochastic calculus, probabilities and statistics on manifolds

This axis relates to the use of the methods of stochastic calculus, particularly the detailed analysis of process trajectories, their laws, their variations and their couplings, in order to:

  • analyse diffusion semigroups and evolution equations on manifolds (heat equation, mean curvature equation, Ricci flow), and their use in signal and image processing,
  • derive functional inequalities,
  • study Poisson boundaries,
  • compute price sensitivities in financial models,
  • derive transport inequalities,
  • design search and optimization algorithms on manifolds for images and signals.

Existence and uniqueness problems are also studied for martingales with a given terminal value in manifolds. Several contributions also cover the notion of Fréchet mean, which extends the usual Euclidean barycenter to spaces equipped with non-Euclidean distances. In this context, many statistical properties of the Fréchet mean have been established for deformable models of signals.
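As a minimal illustration of the Fréchet mean (a toy grid search on the circle, unrelated to the deformable-model results above), the mean of a set of angles is the minimizer of the summed squared geodesic distances, and it can differ markedly from the naive arithmetic mean of the raw angle values:

```python
import numpy as np

def geodesic_dist(a, b):
    """Arc-length distance between angles a and b (radians) on the circle."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def frechet_mean_circle(angles, grid_size=100_000):
    """Grid search for the angle minimizing the sum of squared geodesic distances."""
    grid = np.linspace(0.0, 2 * np.pi, grid_size, endpoint=False)
    cost = np.sum(geodesic_dist(grid[:, None], angles[None, :])**2, axis=1)
    return grid[np.argmin(cost)]

angles = np.array([0.1, 0.2, 6.2])   # 6.2 rad lies just below 2*pi, i.e. near 0
m = frechet_mean_circle(angles)      # close to 0: the points cluster around 0
naive = np.mean(angles)              # about 2.17: pulled far away by the wrap-around
```

A grid search keeps the sketch global and simple; on general manifolds the Fréchet mean is instead computed by (Riemannian) gradient descent and need not be unique.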

Estimation of Kullback-Leibler losses

We address the question of estimating Kullback-Leibler losses rather than squared losses in recovery problems where the noise is distributed within the exponential family. We exhibit conditions under which these losses can be estimated unbiasedly, or with a controlled bias. Simulations on parameter selection problems in image denoising applications with Gamma and Poisson noise illustrate the benefit of Kullback-Leibler losses and of the proposed estimators.
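For intuition only (this is the loss itself, not the paper's estimators, which must work without access to the true intensity), the Kullback-Leibler loss associated with Poisson noise is the Bregman divergence of the Poisson likelihood; unlike the squared loss, it weights an error relative to the local intensity:

```python
import numpy as np

# Generalized Kullback-Leibler loss between a true Poisson intensity lam
# and an estimate lam_hat.
def kl_loss(lam, lam_hat):
    return np.sum(lam * np.log(lam / lam_hat) - lam + lam_hat)

# The same absolute error of 1, made at a low and at a high intensity:
kl_low = kl_loss(np.array([1.0]), np.array([2.0]))        # heavily penalized
kl_high = kl_loss(np.array([100.0]), np.array([101.0]))   # nearly negligible
sq = (2.0 - 1.0)**2   # the squared loss is 1 in both cases
```

This scale-awareness is what makes KL losses natural for signal-dependent noises such as Gamma and Poisson.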

Preprint available here

Non-parametric noise estimation

In order to provide a fully automatic denoising algorithm, we have developed an automatic noise estimation method that relies on the non-parametric detection of homogeneous areas. First, the homogeneous regions of the image are detected by computing Kendall’s rank correlation coefficient [1]. Computed on sequences of neighboring pixels, it indicates the dependency between neighbors, and hence reflects the presence of structure inside an image block. This test is non-parametric, so the detection performance is independent of the statistical distribution of the noise. Once the homogeneous areas are detected, the noise level function, i.e., the noise variance as a function of the image intensity, is estimated as a second-order polynomial minimizing the ℓ¹ error on the statistics of these regions.
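The detection step can be sketched as follows (a simplified, single-orientation version with illustrative block sizes, not the full method of the paper): Kendall's tau between sequences of horizontally neighboring pixels is close to zero in a noise-only block, and significantly non-zero when structure is present, whatever the noise distribution:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(4)

def block_tau(block):
    """Kendall's tau between horizontally neighboring pixel sequences.

    In a homogeneous (noise-only) block, neighbors are independent and tau
    is near 0; structure such as a gradient makes neighbors dependent.
    """
    left = block[:, :-1].ravel()
    right = block[:, 1:].ravel()
    tau, p_value = kendalltau(left, right)
    return tau, p_value

flat = rng.poisson(20.0, size=(16, 16)).astype(float)    # pure noise
ramp = np.tile(np.linspace(0.0, 50.0, 16), (16, 1))
textured = ramp + rng.poisson(20.0, size=(16, 16))       # structure + noise

tau_flat, p_flat = block_tau(flat)       # tau near 0: treated as homogeneous
tau_tex, p_tex = block_tau(textured)     # large tau, tiny p-value: rejected
```

Note the test sees only ranks, which is why it behaves identically under Gaussian, Poisson or Gamma noise.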

Matlab implementation of the noise estimation algorithm

Related papers:

– C. Sutour, C.-A. Deledalle and J.-F. Aujol. Estimation of the noise level function based on a non-parametric detection of homogeneous image regions. Submitted to SIAM Journal on Imaging Sciences, 2015.

– C. Sutour, C.-A. Deledalle and J.-F. Aujol. Estimation du niveau de bruit par la détection non paramétrique de zones homogènes. Submitted to GRETSI, 2015.

References

[1] Buades, A., Coll, B., and Morel, J.-M. (2005). A review of image denoising algorithms, with a new one. Multiscale Modeling and Simulation, 4(2): 490–530.

Optimal Transport in Image Processing

Optimal transport is nowadays a major statistical tool for computer vision and image processing. It may be used for measuring similarity between features, for matching and averaging features, or for registering images. However, major drawbacks of this framework are the lack of regularity of the transport map and its sensitivity to outliers. The computational cost of estimating an optimal transport plan is also very high, which makes the theory difficult to apply to high-dimensional problems. Hence, we are interested in the design of new algorithms for computing solutions of generalized optimal transport problems that include regularity priors.
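One reason the general problem is expensive is visible by contrast with the one-dimensional case, where the optimal coupling for convex costs is monotone and therefore reduces to sorting. A generic textbook sketch on Gaussian samples (not one of the group's algorithms):

```python
import numpy as np

rng = np.random.default_rng(5)

# In 1D, the optimal transport plan for convex costs pairs the sorted
# samples, so the empirical 1-Wasserstein distance between two samples of
# equal size is just the mean absolute difference of the order statistics.
def wasserstein_1d(x, y):
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

x = rng.normal(0.0, 1.0, size=100_000)
y = rng.normal(3.0, 1.0, size=100_000)
w1 = wasserstein_1d(x, y)   # close to the mean shift, 3.0
```

Nothing like this sorting shortcut exists in higher dimension, where the coupling must be optimized explicitly; regularized formulations are one way to make that optimization tractable.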

Model selection for image processing

Parameter tuning is a critical point in image restoration techniques. When the degraded data are simulated from a reference image, we can compare the restored image to the reference one, and then select the set of parameters providing the best restoration quality. This tuning is less straightforward in real acquisition settings, for which there is no reference image. For simple degradation models, statistical tools can be used to estimate the squared restoration error even though the reference image is unknown; this is known as “risk estimation”. Optimizing this estimate with respect to the parameters of the method leads to a near-optimal calibration. Stein’s unbiased risk estimator (SURE, Stein 1981) is one of the most famous examples, successfully applied to calibrate image restoration methods under Gaussian noise degradation (see, e.g., Ramani et al., 2008). We focus on developing new estimators derived from the SURE for the calibration of the parameters involved in recent, potentially highly parameterized methods for the restoration of images with complex degradation models (blur, missing data, non-Gaussian, non-stationary and correlated noise).
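The Monte Carlo variant of Ramani et al. (2008) makes the SURE usable with black-box restoration methods by probing them with a random direction to estimate the divergence term. A minimal sketch on a linear filter, chosen so that the exact divergence is known (all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Monte Carlo divergence probe in the spirit of Ramani et al. (2008):
#   div f(y)  ~  delta . (f(y + eps * delta) - f(y)) / eps,   delta ~ N(0, I),
# which is the only quantity the SURE needs beyond the residual.
n, eps = 10_000, 1e-4
y = rng.normal(size=n)

def f(v):
    """Black-box 'restoration' method: a 3-tap moving average."""
    return np.convolve(v, np.ones(3) / 3, mode="same")

delta = rng.normal(size=n)
div_mc = delta @ (f(y + eps * delta) - f(y)) / eps
div_exact = n / 3   # each diagonal entry of the filter matrix is 1/3
```

A single probe already gives a few-percent accurate divergence here; for nonlinear methods the same recipe applies with `f` replaced by the full restoration algorithm.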

See:
Stein Unbiased GrAdient estimator of the Risk
Stein Consistent Risk Estimator for hard thresholding
Local Behavior of Sparse Analysis Regularization

Stein Unbiased GrAdient estimator of the Risk

Algorithms that solve variational regularizations of ill-posed inverse problems usually involve operators that depend on a collection of continuous parameters. When these operators enjoy some (local) regularity, these parameters can be selected using the so-called Stein Unbiased Risk Estimate (SURE). While this selection is usually performed by exhaustive search, we address in this work the problem of using the SURE to efficiently optimize over a collection of continuous parameters of the model. When considering non-smooth regularizers, such as the popular l1-norm corresponding to the soft-thresholding mapping, the SURE is a discontinuous function of the parameters, preventing the use of gradient descent optimization techniques. Instead, we focus on an approximation of the SURE based on finite differences, as proposed in (Ramani et al., 2008). Under mild assumptions on the estimation mapping, we show that this approximation is a weakly differentiable function of the parameters and that its weak gradient, coined the Stein Unbiased GrAdient estimator of the Risk (SUGAR), provides an asymptotically (with respect to the data dimension) unbiased estimate of the gradient of the risk. Moreover, in the particular case of soft-thresholding, the SUGAR is also proved to be a consistent estimator. The SUGAR can then be used as a basis for a quasi-Newton optimization. The computation of the SUGAR relies on the closed-form (weak) differentiation of the non-smooth mapping. We provide its expression for a large class of iterative proximal splitting methods and apply our strategy to regularizations involving non-smooth convex structured penalties. Illustrations on various image restoration and matrix completion problems are given.
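In the special case of soft-thresholding, the construction can be sketched end to end: the finite-difference SURE is differentiated in closed form with respect to the threshold, and the resulting SUGAR drives a plain gradient descent. This is an illustrative toy implementation with ad hoc step size and probe amplitude, not the paper's algorithm (which uses quasi-Newton steps and handles general proximal splittings):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy problem: sparse vector under i.i.d. Gaussian noise, denoised by
# soft-thresholding with threshold lam.
n, sigma = 10_000, 1.0
x = np.zeros(n)
x[:500] = 5.0
y = x + sigma * rng.normal(size=n)
delta = rng.normal(size=n)   # fixed random probe direction
eps = n**-0.3                # probe amplitude (ad hoc choice)

def dsoft_dlam(v, lam):
    """Weak derivative of soft(v, lam) with respect to the threshold lam."""
    return -np.sign(v) * (np.abs(v) > lam)

def sugar(lam):
    """Gradient in lam of the finite-difference SURE of soft-thresholding."""
    # d/dlam ||soft(y, lam) - y||^2 = 2 * lam * #{|y_i| > lam}
    data_term = 2.0 * lam * np.count_nonzero(np.abs(y) > lam)
    # derivative of the Monte Carlo divergence probe
    div_term = (2.0 * sigma**2 / eps) * delta @ (
        dsoft_dlam(y + eps * delta, lam) - dsoft_dlam(y, lam))
    return data_term + div_term

# Plain gradient descent on the threshold; the SUGAR sign changes across
# the risk minimizer, so the iterates settle near it.
lam = 0.5
for _ in range(300):
    lam -= 0.3 * sugar(lam) / n
```

Note that the raw SURE of soft-thresholding jumps each time `lam` crosses a data value; the finite-difference probe is what yields a usable (weak) gradient.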

Associated publications and source codes:

Charles-Alban Deledalle, Samuel Vaiter, Gabriel Peyré and Jalal Fadili
Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection,
Technical report HAL, hal-00987295 (HAL)

MATLAB source codes available from GitHub.