Revisiting Maximum-A-Posteriori Estimation in Log-Concave Models

Research output: Contribution to journal › Article

Abstract

Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimization. However, despite its success and wide adoption, MAP estimation is not theoretically well understood yet. In particular, the prevalent view in the community is that MAP estimation is not proper Bayesian estimation in the sense of Bayesian decision theory because it does not minimize a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator that minimizes the mean squared loss). This paper addresses this theoretical gap by presenting a general decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function to perform Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss coincides with the Bregman divergence associated with the negative log posterior density. Following on from this, we show that the MAP estimator is the only Bayesian estimator that minimizes the expected canonical loss, and that the posterior mean or MMSE estimator minimizes the dual canonical loss. We then study the question of MAP and MMSE estimation performance in high dimensions. Precisely, we establish a universal bound on the expected canonical error as a function of image dimension, providing new insights on the good empirical performance observed in convex problems. 
Together, these results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple beneficial roles that convex geometry plays in imaging problems. Finally, we illustrate this new theory by analyzing the regularization-by-denoising Bayesian models, a class of state-of-the-art imaging models where priors are defined implicitly through image denoising algorithms, and an image denoising model with a wavelet shrinkage prior.
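The abstract's central result — that the MAP estimator minimizes the expected Bregman divergence of the negative log posterior, while the posterior mean minimizes the dual loss — can be checked numerically. The sketch below is illustrative only and uses a hypothetical 1-D log-concave posterior with potential φ(x) = eˣ − 2x (chosen so that the mode and mean differ); all names and the specific model are assumptions, not from the paper.

```python
import numpy as np

# Hypothetical 1-D log-concave posterior: phi(x) = exp(x) - 2x is strictly
# convex, so p(x) ∝ exp(-phi(x)) is log-concave.  Mode at log(2); mean differs.
def phi(x):  return np.exp(x) - 2.0 * x
def dphi(x): return np.exp(x) - 2.0          # gradient of phi

x = np.linspace(-6.0, 6.0, 24001)            # integration grid
dx = x[1] - x[0]
p = np.exp(-phi(x))
p /= p.sum() * dx                            # normalise the density

x_map  = x[np.argmin(phi(x))]                # posterior mode (MAP)
x_mmse = (x * p).sum() * dx                  # posterior mean (MMSE)

# Bregman divergence generated by phi:
#   D(u, v) = phi(u) - phi(v) - phi'(v) (u - v)
def bregman(u, v):
    return phi(u) - phi(v) - dphi(v) * (u - v)

u = np.linspace(-1.0, 2.5, 3501)             # candidate point estimates
canonical = np.array([(bregman(ui, x) * p).sum() * dx for ui in u])  # E[D(u, X)]
dual      = np.array([(bregman(x, ui) * p).sum() * dx for ui in u])  # E[D(X, u)]

print(x_map,  u[np.argmin(canonical)])   # minimiser of E[D(u, X)] ≈ MAP
print(x_mmse, u[np.argmin(dual)])        # minimiser of E[D(X, u)] ≈ posterior mean
```

On this toy model the minimizer of the expected canonical loss lands on the posterior mode (log 2 ≈ 0.693) and the minimizer of the dual loss lands on the posterior mean, matching the abstract's characterization of the MAP and MMSE estimators.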
Original language: English
Pages (from-to): 650-670
Number of pages: 21
Journal: SIAM Journal on Imaging Sciences
Volume: 12
Issue number: 1
DOIs: 10.1137/18M1174076
Publication status: Published - 28 Mar 2019

Keywords

  • Bayesian inference
  • Convex optimization
  • Decision theory
  • Differential geometry
  • Inverse problems
  • Mathematical imaging
  • Maximum-a-posteriori estimation

ASJC Scopus subject areas

  • Mathematics (all)
  • Applied Mathematics

Cite this

@article{972a5438441541669e5bbcc26d6b0ddf,
title = "Revisiting Maximum-A-Posteriori Estimation in Log-Concave Models",
abstract = "Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimization. However, despite its success and wide adoption, MAP estimation is not theoretically well understood yet. In particular, the prevalent view in the community is that MAP estimation is not proper Bayesian estimation in the sense of Bayesian decision theory because it does not minimize a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator that minimizes the mean squared loss). This paper addresses this theoretical gap by presenting a general decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function to perform Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss coincides with the Bregman divergence associated with the negative log posterior density. Following on from this, we show that the MAP estimator is the only Bayesian estimator that minimizes the expected canonical loss, and that the posterior mean or MMSE estimator minimizes the dual canonical loss. We then study the question of MAP and MMSE estimation performance in high dimensions. Precisely, we establish a universal bound on the expected canonical error as a function of image dimension, providing new insights on the good empirical performance observed in convex problems. 
Together, these results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple beneficial roles that convex geometry plays in imaging problems. Finally, we illustrate this new theory by analyzing the regularization-by-denoising Bayesian models, a class of state-of-the-art imaging models where priors are defined implicitly through image denoising algorithms, and an image denoising model with a wavelet shrinkage prior.",
keywords = "Bayesian inference, Convex optimization, Decision theory, Differential geometry, Inverse problems, Mathematical imaging, Maximum-a-posteriori estimation",
author = "Marcelo Pereyra",
year = "2019",
month = mar,
day = "28",
doi = "10.1137/18M1174076",
language = "English",
volume = "12",
pages = "650--670",
journal = "SIAM Journal on Imaging Sciences",
issn = "1936-4954",
publisher = "Society for Industrial and Applied Mathematics Publications",
number = "1",

}

Revisiting Maximum-A-Posteriori Estimation in Log-Concave Models. / Pereyra, Marcelo.

In: SIAM Journal on Imaging Sciences, Vol. 12, No. 1, 28.03.2019, p. 650-670.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Revisiting Maximum-A-Posteriori Estimation in Log-Concave Models

AU - Pereyra, Marcelo

PY - 2019/3/28

Y1 - 2019/3/28

N2 - Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimization. However, despite its success and wide adoption, MAP estimation is not theoretically well understood yet. In particular, the prevalent view in the community is that MAP estimation is not proper Bayesian estimation in the sense of Bayesian decision theory because it does not minimize a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator that minimizes the mean squared loss). This paper addresses this theoretical gap by presenting a general decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function to perform Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss coincides with the Bregman divergence associated with the negative log posterior density. Following on from this, we show that the MAP estimator is the only Bayesian estimator that minimizes the expected canonical loss, and that the posterior mean or MMSE estimator minimizes the dual canonical loss. We then study the question of MAP and MMSE estimation performance in high dimensions. Precisely, we establish a universal bound on the expected canonical error as a function of image dimension, providing new insights on the good empirical performance observed in convex problems. 
Together, these results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple beneficial roles that convex geometry plays in imaging problems. Finally, we illustrate this new theory by analyzing the regularization-by-denoising Bayesian models, a class of state-of-the-art imaging models where priors are defined implicitly through image denoising algorithms, and an image denoising model with a wavelet shrinkage prior.

AB - Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimization. However, despite its success and wide adoption, MAP estimation is not theoretically well understood yet. In particular, the prevalent view in the community is that MAP estimation is not proper Bayesian estimation in the sense of Bayesian decision theory because it does not minimize a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator that minimizes the mean squared loss). This paper addresses this theoretical gap by presenting a general decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function to perform Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss coincides with the Bregman divergence associated with the negative log posterior density. Following on from this, we show that the MAP estimator is the only Bayesian estimator that minimizes the expected canonical loss, and that the posterior mean or MMSE estimator minimizes the dual canonical loss. We then study the question of MAP and MMSE estimation performance in high dimensions. Precisely, we establish a universal bound on the expected canonical error as a function of image dimension, providing new insights on the good empirical performance observed in convex problems. 
Together, these results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple beneficial roles that convex geometry plays in imaging problems. Finally, we illustrate this new theory by analyzing the regularization-by-denoising Bayesian models, a class of state-of-the-art imaging models where priors are defined implicitly through image denoising algorithms, and an image denoising model with a wavelet shrinkage prior.

KW - Bayesian inference

KW - Convex optimization

KW - Decision theory

KW - Differential geometry

KW - Inverse problems

KW - Mathematical imaging

KW - Maximum-a-posteriori estimation

UR - http://www.scopus.com/inward/record.url?scp=85064202507&partnerID=8YFLogxK

U2 - 10.1137/18M1174076

DO - 10.1137/18M1174076

M3 - Article

VL - 12

SP - 650

EP - 670

JO - SIAM Journal on Imaging Sciences

JF - SIAM Journal on Imaging Sciences

SN - 1936-4954

IS - 1

ER -