### Abstract

Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimization. However, despite its success and wide adoption, MAP estimation is not theoretically well understood yet. In particular, the prevalent view in the community is that MAP estimation is not proper Bayesian estimation in the sense of Bayesian decision theory because it does not minimize a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator that minimizes the mean squared loss). This paper addresses this theoretical gap by presenting a general decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function to perform Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss coincides with the Bregman divergence associated with the negative log posterior density. Following on from this, we show that the MAP estimator is the only Bayesian estimator that minimizes the expected canonical loss, and that the posterior mean or MMSE estimator minimizes the dual canonical loss. We then study the question of MAP and MMSE estimation performance in high dimensions. Precisely, we establish a universal bound on the expected canonical error as a function of image dimension, providing new insights on the good empirical performance observed in convex problems. Together, these results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple beneficial roles that convex geometry plays in imaging problems. Finally, we illustrate this new theory by analyzing regularization-by-denoising Bayesian models, a class of state-of-the-art imaging models where priors are defined implicitly through image denoising algorithms, and an image denoising model with a wavelet shrinkage prior.

| Original language | English |
| --- | --- |
| Pages (from-to) | 650-670 |
| Number of pages | 21 |
| Journal | SIAM Journal on Imaging Sciences |
| Volume | 12 |
| Issue number | 1 |
| DOIs | https://doi.org/10.1137/18M1174076 |
| Publication status | Published - 28 Mar 2019 |
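To make the decision-theoretic terminology concrete, here is a small illustrative sketch (not from the paper) of a Bregman divergence, the object the abstract identifies as the canonical loss. For the illustrative choice f(x) = ½‖x‖², the Bregman divergence reduces to the squared Euclidean distance, i.e. the loss minimized by the MMSE estimator; the function and test points below are arbitrary examples.

```python
import numpy as np

def bregman_divergence(f, grad_f, x, y):
    # D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>
    return f(x) - f(y) - np.dot(grad_f(y), x - y)

# Illustrative choice: f(x) = 0.5 * ||x||^2, whose Bregman divergence
# is exactly the squared Euclidean distance (the MMSE loss).
f = lambda x: 0.5 * np.dot(x, x)
grad_f = lambda x: x

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
d = bregman_divergence(x=x, y=y, f=f, grad_f=grad_f)
# d equals 0.5 * ||x - y||^2 = 6.5
```

For a general log-concave posterior the paper takes f to be the negative log posterior density, so the divergence is no longer symmetric and MAP (canonical loss) and MMSE (dual canonical loss) cease to coincide.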

### Keywords

- Bayesian inference
- Convex optimization
- Decision theory
- Differential geometry
- Inverse problems
- Mathematical imaging
- Maximum-a-posteriori estimation

### ASJC Scopus subject areas

- Mathematics(all)
- Applied Mathematics

### Cite this

Pereyra, Marcelo. (2019). **Revisiting Maximum-A-Posteriori Estimation in Log-Concave Models.** *SIAM Journal on Imaging Sciences*, vol. 12, no. 1, pp. 650-670. https://doi.org/10.1137/18M1174076

Research output: Contribution to journal › Article

TY - JOUR

T1 - Revisiting Maximum-A-Posteriori Estimation in Log-Concave Models

AU - Pereyra, Marcelo

PY - 2019/3/28

Y1 - 2019/3/28

N2 - Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimization. However, despite its success and wide adoption, MAP estimation is not theoretically well understood yet. In particular, the prevalent view in the community is that MAP estimation is not proper Bayesian estimation in the sense of Bayesian decision theory because it does not minimize a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator that minimizes the mean squared loss). This paper addresses this theoretical gap by presenting a general decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function to perform Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss coincides with the Bregman divergence associated with the negative log posterior density. Following on from this, we show that the MAP estimator is the only Bayesian estimator that minimizes the expected canonical loss, and that the posterior mean or MMSE estimator minimizes the dual canonical loss. We then study the question of MAP and MMSE estimation performance in high dimensions. Precisely, we establish a universal bound on the expected canonical error as a function of image dimension, providing new insights on the good empirical performance observed in convex problems. 
Together, these results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple beneficial roles that convex geometry plays in imaging problems. Finally, we illustrate this new theory by analyzing the regularization-by-denoising Bayesian models, a class of state-of-the-art imaging models where priors are defined implicitly through image denoising algorithms, and an image denoising model with a wavelet shrinkage prior.

AB - Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimization. However, despite its success and wide adoption, MAP estimation is not theoretically well understood yet. In particular, the prevalent view in the community is that MAP estimation is not proper Bayesian estimation in the sense of Bayesian decision theory because it does not minimize a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator that minimizes the mean squared loss). This paper addresses this theoretical gap by presenting a general decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function to perform Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss coincides with the Bregman divergence associated with the negative log posterior density. Following on from this, we show that the MAP estimator is the only Bayesian estimator that minimizes the expected canonical loss, and that the posterior mean or MMSE estimator minimizes the dual canonical loss. We then study the question of MAP and MMSE estimation performance in high dimensions. Precisely, we establish a universal bound on the expected canonical error as a function of image dimension, providing new insights on the good empirical performance observed in convex problems. 
Together, these results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple beneficial roles that convex geometry plays in imaging problems. Finally, we illustrate this new theory by analyzing the regularization-by-denoising Bayesian models, a class of state-of-the-art imaging models where priors are defined implicitly through image denoising algorithms, and an image denoising model with a wavelet shrinkage prior.

KW - Bayesian inference

KW - Convex optimization

KW - Decision theory

KW - Differential geometry

KW - Inverse problems

KW - Mathematical imaging

KW - Maximum-a-posteriori estimation

UR - http://www.scopus.com/inward/record.url?scp=85064202507&partnerID=8YFLogxK

U2 - 10.1137/18M1174076

DO - 10.1137/18M1174076

M3 - Article

VL - 12

SP - 650

EP - 670

JO - SIAM Journal on Imaging Sciences

JF - SIAM Journal on Imaging Sciences

SN - 1936-4954

IS - 1

ER -
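As a concrete instance of MAP estimation by convex optimization in a log-concave model, the sketch below (illustrative, not the paper's code) computes the MAP denoising estimate under a Laplace (ℓ1, wavelet-shrinkage-style) prior: the convex objective ½‖y − x‖² + λ‖x‖₁ is minimized in closed form by elementwise soft thresholding. The signal size, noise level, and λ are arbitrary choices for the example.

```python
import numpy as np

def soft_threshold(v, t):
    # Closed-form minimizer of 0.5*(v - x)^2 + t*|x|, elementwise:
    # the MAP estimate for Gaussian noise with a Laplace (l1) prior.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy denoising problem (all settings here are illustrative):
# observe y = x_true + noise and form the MAP estimate under an l1 prior,
# i.e. argmin_x 0.5*||y - x||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
x_true = np.zeros(100)
x_true[::10] = 5.0                       # sparse ground truth
y = x_true + 0.5 * rng.standard_normal(100)

lam = 1.0
x_map = soft_threshold(y, lam)           # convex problem, closed-form solution
```

Because the objective is strictly convex, the soft-thresholding output is the unique posterior mode; this is the kind of log-concave setting in which the paper's decision-theoretic analysis of MAP versus MMSE estimation applies.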