Posterior probability

The posterior probability of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence is taken into account.

The posterior probability distribution of one random variable given the value of another can be calculated with Bayes' theorem by multiplying the prior probability distribution by the likelihood function, and then dividing by the normalizing constant, as follows:

f_{X\mid Y=y}(x) = \frac{f_X(x)\, L_{X\mid Y=y}(x)}{\int_{-\infty}^{\infty} f_X(u)\, L_{X\mid Y=y}(u)\,du}

gives the posterior probability density function for a random variable X given the data Y = y, where

  • f_X(x) is the prior density of X,
  • L_{X\mid Y=y}(x) = f_{Y\mid X=x}(y) is the likelihood function as a function of x,
  • \int_{-\infty}^{\infty} f_X(u)\, L_{X\mid Y=y}(u)\,du is the normalizing constant, and
  • f_{X\mid Y=y}(x) is the posterior density of X given the data Y = y.
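
The same calculation can be carried out numerically. The following Python sketch approximates the normalizing integral on a grid; the standard normal prior, the Gaussian likelihood with known noise scale, and the observed value y = 1.2 are illustrative assumptions, not part of the article.

  import numpy as np
  from scipy.stats import norm

  # Observed data and known noise scale (illustrative values, not from the article).
  y = 1.2
  sigma = 1.0

  # Grid over the support of X, truncated to [-6, 6] for numerical purposes.
  x = np.linspace(-6.0, 6.0, 2001)
  dx = x[1] - x[0]

  prior = norm.pdf(x, loc=0.0, scale=1.0)        # f_X(x): standard normal prior (assumed)
  likelihood = norm.pdf(y, loc=x, scale=sigma)   # L_{X|Y=y}(x) = f_{Y|X=x}(y)

  unnormalized = prior * likelihood
  evidence = np.sum(unnormalized) * dx           # approximates the normalizing integral
  posterior = unnormalized / evidence            # f_{X|Y=y}(x) evaluated on the grid

  # With a normal prior and normal likelihood the posterior is also normal;
  # its mean should be close to y/2 = 0.6 and its variance close to 1/2.
  post_mean = np.sum(x * posterior) * dx
  post_var = np.sum((x - post_mean) ** 2 * posterior) * dx
  print(post_mean, post_var)

The grid sum is used only to mirror the integral in the formula above; for this conjugate normal-normal case the result can be checked against the closed-form posterior with mean y/2 and variance 1/2.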