Normal-gamma distribution

From Wikipedia, the free encyclopedia

Normal-gamma
Parameters \mu\, location (real)
\lambda > 0\, (real)
\alpha > 0\, (real)
\beta > 0\, (real)
Support x \in (-\infty, \infty)\,\!, \; \tau \in (0,\infty)
Mean \operatorname{E}[x] = \mu\,\!, \; \operatorname{E}[\tau] = \alpha/\beta
Median \mu\, (for x)
Variance \operatorname{Var}[x] = \frac{\lambda\beta}{\alpha-1}\, (for \alpha > 1), \; \operatorname{Var}[\tau] = \alpha/\beta^2

In probability theory and statistics, the normal-gamma distribution is a four-parameter family of continuous probability distributions. It is the conjugate prior of a normal distribution with unknown mean and precision.

Definition

Suppose

  x|\tau, \mu, \lambda \sim N(\mu,\lambda / \tau) \,\!

has a normal distribution with mean μ and variance λ / τ, where

\tau|\alpha, \beta \sim \mathrm{Gamma}(\alpha,\beta) \!

has a gamma distribution. Then (x,τ) has a normal-gamma distribution, denoted as

 (x,\tau) \sim \mathrm{NormalGamma}(\mu,\lambda,\alpha,\beta) \! .

Characterization

Probability density function

f(x,\tau|\mu,\lambda,\alpha,\beta) = \frac{\beta^\alpha}{\Gamma(\alpha)\sqrt{2\pi\lambda}}  \, \tau^{\alpha-\frac{1}{2}}\,e^{-\beta\tau}\,e^{ -\frac{\tau(x- \mu)^2}{2\lambda}}
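As a check, the joint density above factors into a Normal(\mu, \lambda/\tau) density in x times a Gamma(\alpha, \beta) density in \tau. A minimal sketch in Python (the function name and test values are illustrative, not part of the article):

```python
import math

def normal_gamma_pdf(x, tau, mu, lam, alpha, beta):
    """Joint density of (x, tau) under NormalGamma(mu, lam, alpha, beta),
    with x | tau ~ Normal(mu, variance=lam/tau) and tau ~ Gamma(alpha, rate=beta)."""
    if tau <= 0:
        return 0.0
    norm_const = beta**alpha / (math.gamma(alpha) * math.sqrt(2 * math.pi * lam))
    return (norm_const
            * tau**(alpha - 0.5)
            * math.exp(-beta * tau)
            * math.exp(-tau * (x - mu)**2 / (2 * lam)))

# Sanity check: the joint pdf equals Normal(mu, lam/tau) pdf times Gamma(alpha, beta) pdf.
mu, lam, alpha, beta = 1.0, 2.0, 3.0, 1.5
x, tau = 0.4, 0.9
normal_part = math.sqrt(tau / (2 * math.pi * lam)) * math.exp(-tau * (x - mu)**2 / (2 * lam))
gamma_part = beta**alpha * tau**(alpha - 1) * math.exp(-beta * tau) / math.gamma(alpha)
assert abs(normal_gamma_pdf(x, tau, mu, lam, alpha, beta) - normal_part * gamma_part) < 1e-12
```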

Properties

Scaling

Since the normal-gamma distribution is a joint distribution of the pair (x,τ), scaling transforms both components. For any t > 0,

 (tx,\, \tau/t^2) \sim \mathrm{NormalGamma}(t\mu,\lambda,\alpha,t^2\beta) .
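The scaling property can be checked numerically by a change of variables: the map (x,\tau) \mapsto (tx, \tau/t^2) has Jacobian determinant 1/t, so the density of the transformed pair must equal t times the original density. A short Python sketch (names and test values are illustrative):

```python
import math

def ng_pdf(x, tau, mu, lam, alpha, beta):
    """Joint NormalGamma density, matching the pdf given above."""
    return (beta**alpha / (math.gamma(alpha) * math.sqrt(2 * math.pi * lam))
            * tau**(alpha - 0.5) * math.exp(-beta * tau)
            * math.exp(-tau * (x - mu)**2 / (2 * lam)))

mu, lam, alpha, beta, t = 0.5, 1.3, 2.0, 0.8, 2.5
for x, tau in [(0.1, 0.4), (-1.2, 2.0), (3.0, 0.7)]:
    # Density of (t*x, tau/t^2) under the rescaled parameters ...
    lhs = ng_pdf(t * x, tau / t**2, t * mu, lam, alpha, t**2 * beta)
    # ... equals the original density times the inverse-Jacobian factor t.
    rhs = t * ng_pdf(x, tau, mu, lam, alpha, beta)
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```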


Generating normal-gamma random variates

Generation of random variates is straightforward:

  1. Sample τ from a gamma distribution with parameters α and β
  2. Sample x from a normal distribution with mean μ and variance λ / τ
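The two steps above translate directly into code. A minimal sketch using Python's standard library (note that random.gammavariate takes a shape and a scale, so the rate \beta enters as 1/\beta; the function name is illustrative):

```python
import math
import random

def sample_normal_gamma(mu, lam, alpha, beta, rng=random):
    """Draw one (x, tau) pair from NormalGamma(mu, lam, alpha, beta).

    Step 1: tau ~ Gamma(shape=alpha, rate=beta).
    Step 2: x | tau ~ Normal(mu, variance=lam/tau)."""
    tau = rng.gammavariate(alpha, 1.0 / beta)   # gammavariate takes shape, scale
    x = rng.gauss(mu, math.sqrt(lam / tau))     # gauss takes mean, std dev
    return x, tau

random.seed(42)
draws = [sample_normal_gamma(2.0, 1.5, 3.0, 2.0) for _ in range(50_000)]
mean_x = sum(x for x, _ in draws) / len(draws)
mean_tau = sum(t for _, t in draws) / len(draws)
```

With \mu = 2, \lambda = 1.5, \alpha = 3, \beta = 2, the Monte Carlo means should approach \operatorname{E}[x] = \mu = 2 and \operatorname{E}[\tau] = \alpha/\beta = 1.5.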

References

  • Bernardo, J. M., and A. F. M. Smith. 1994. Bayesian theory. Chichester, UK: Wiley.
  • Dearden, R., N. Friedman, and S. Russell. 1998. Bayesian Q-learning. In Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98).