Characteristic function (probability theory)
From Wikipedia, the free encyclopedia
In probability theory, the characteristic function of any random variable completely defines its probability distribution. On the real line it is given by the following formula, where X is any random variable with the distribution in question:

\varphi_X(t) = \operatorname{E}\left(e^{itX}\right),

where t is a real number, i is the imaginary unit, and E denotes the expected value.
If F_X is the cumulative distribution function, then the characteristic function is given by the Riemann–Stieltjes integral

\varphi_X(t) = \operatorname{E}\left(e^{itX}\right) = \int_{-\infty}^{\infty} e^{itx}\, dF_X(x).
In cases in which there is a probability density function, f_X, this becomes

\varphi_X(t) = \int_{-\infty}^{\infty} e^{itx} f_X(x)\, dx.
If X is a vector-valued random variable, one takes the argument t to be a vector and tX to be the dot product t \cdot X.
Every probability distribution on R or on R^n has a characteristic function, because one is integrating a bounded function over a space whose measure is finite, and for every characteristic function there is exactly one probability distribution.

The characteristic function of a symmetric PDF (that is, one with p(x) = p(−x)) is real, because the imaginary components obtained from x > 0 cancel those from x < 0.
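As an illustrative numerical sketch (assuming, for concreteness, that X is standard normal, with the known characteristic function e^{-t^2/2}), the definition can be checked by quadrature; the vanishing imaginary part reflects the symmetry property just noted:

```python
# Illustrative check of phi_X(t) = integral of e^{itx} f_X(x) dx for X ~ N(0, 1).
# The grid bounds and step size are arbitrary assumptions of this sketch.
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)              # integration grid
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)       # standard normal density

for t in [0.0, 0.5, 1.0, 2.0]:
    phi = np.sum(np.exp(1j * t * x) * f) * dx    # Riemann sum for E(e^{itX})
    print(t, phi, np.exp(-t**2 / 2))             # imaginary part ~ 0 (symmetric density)
```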
Lévy continuity theorem

The core of the Lévy continuity theorem states that a sequence of random variables

(X_n)_{n=1}^{\infty},

where each X_n has characteristic function \varphi_n, will converge in distribution towards a random variable X,

X_n \xrightarrow{\mathcal{D}} X \quad \text{as } n \to \infty,

if

\varphi_n \to \varphi \quad \text{pointwise as } n \to \infty,

\varphi is continuous in t = 0, and \varphi is the characteristic function of X.
The Lévy continuity theorem can be used to prove the weak law of large numbers; see the proof using convergence of characteristic functions.
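As a hedged numerical illustration of the theorem in action (assuming X ~ Exponential(1), whose characteristic function is 1/(1 - it) and whose mean is 1), the characteristic function of the sample mean, φ(t/n)^n (derived under Basic properties below), converges pointwise to e^{it}, the characteristic function of the constant 1; this is the weak law of large numbers in this case:

```python
# Pointwise convergence of the sample-mean characteristic function phi(t/n)^n
# for X ~ Exponential(1); the distribution here is an assumed example.
import numpy as np

def phi(t):
    return 1.0 / (1.0 - 1j * t)     # characteristic function of Exponential(1)

t = 0.7                             # arbitrary test point
for n in [1, 10, 100, 10000]:
    print(n, phi(t / n) ** n)       # tends to exp(0.7j)
print("limit:", np.exp(1j * t))     # cf of the constant 1, evaluated at t = 0.7
```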
The inversion theorem
Moreover, there is a bijection between cumulative distribution functions and characteristic functions: two distinct probability distributions never share the same characteristic function.
Given a characteristic function φ, it is possible to reconstruct the corresponding cumulative distribution function F; for continuity points x < y of F_X,

F_X(y) - F_X(x) = \lim_{\tau \to +\infty} \frac{1}{2\pi} \int_{-\tau}^{+\tau} \frac{e^{-itx} - e^{-ity}}{it}\, \varphi_X(t)\, dt.
In general this is an improper integral; the function being integrated may be only conditionally integrable rather than Lebesgue integrable, i.e. the integral of its absolute value may be infinite.
Reference: P. Lévy, Calcul des probabilités, Gauthier-Villars, Paris, 1925, p. 166.
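As a rough numerical sketch of the inversion formula (assuming φ(t) = e^{-t^2/2}, the standard normal; the truncation point τ = 40 and the grid are arbitrary choices), note that the integrand has a removable singularity at t = 0 with value y − x there:

```python
# Numerical Levy inversion for N(0, 1): F(1) - F(-1) should come out near 0.6827.
# The cf, truncation point and grid are assumptions of this sketch.
import numpy as np

phi = lambda t: np.exp(-t**2 / 2)       # characteristic function of N(0, 1)
xa, ya = -1.0, 1.0
t = np.linspace(-40.0, 40.0, 400001)    # truncation tau = 40; phi is negligible there
g = np.empty(t.shape, dtype=complex)
nz = t != 0.0
g[nz] = (np.exp(-1j * t[nz] * xa) - np.exp(-1j * t[nz] * ya)) / (1j * t[nz]) * phi(t[nz])
g[~nz] = (ya - xa) * phi(0.0)           # removable singularity at t = 0
Fdiff = np.real(np.sum(g) * (t[1] - t[0])) / (2 * np.pi)
print(Fdiff)                            # ~0.6827 = P(-1 < X <= 1)
```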
Bochner–Khinchin theorem

An arbitrary function \varphi : \mathbb{R} \to \mathbb{C} is a characteristic function corresponding to some probability law \mu if and only if the following three conditions are satisfied:

(1) \varphi is continuous;

(2) \varphi(0) = 1;

(3) \varphi is a positive definite function (note that this is a complicated condition which is not equivalent to \varphi > 0).
Uses of characteristic functions
Because of the continuity theorem, characteristic functions are used in the most frequently seen proof of the central limit theorem. The main trick involved in making calculations with a characteristic function is recognizing the function as the characteristic function of a particular distribution.
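For instance, the usual sketch runs as follows (assuming, for simplicity, i.i.d. X_1, X_2, ... with mean 0 and variance 1, and omitting the rigorous control of the error term). With Z_n = (X_1 + ⋯ + X_n)/√n, the basic properties below and a second-order Taylor expansion of φ about 0 give

\varphi_{Z_n}(t) = \left[\varphi\left(\frac{t}{\sqrt{n}}\right)\right]^n = \left[1 - \frac{t^2}{2n} + o\left(\frac{1}{n}\right)\right]^n \to e^{-t^2/2},

the characteristic function of the standard normal distribution; the Lévy continuity theorem then upgrades this pointwise convergence to convergence in distribution.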
Basic properties
Characteristic functions are particularly useful for dealing with functions of independent random variables. For example, if X_1, X_2, ..., X_n is a sequence of independent (and not necessarily identically distributed) random variables, and

S_n = \sum_{i=1}^n a_i X_i,

where the a_i are constants, then the characteristic function for S_n is given by

\varphi_{S_n}(t) = \varphi_{X_1}(a_1 t)\, \varphi_{X_2}(a_2 t) \cdots \varphi_{X_n}(a_n t).

In particular, \varphi_{X+Y}(t) = \varphi_X(t)\, \varphi_Y(t). To see this, write out the definition of characteristic function:

\varphi_{X+Y}(t) = \operatorname{E}\left(e^{it(X+Y)}\right) = \operatorname{E}\left(e^{itX} e^{itY}\right) = \operatorname{E}\left(e^{itX}\right) \operatorname{E}\left(e^{itY}\right) = \varphi_X(t)\, \varphi_Y(t).

Observe that the independence of X and Y is required to establish the equality of the third and fourth expressions.

Another special case of interest is when a_i = 1/n and then S_n is the sample mean. In this case, writing \bar{X} for the mean,

\varphi_{\bar{X}}(t) = \left(\varphi_X(t/n)\right)^n.
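A quick Monte Carlo sanity check of the product rule (assuming, as an arbitrary example, independent X ~ Uniform(0, 1) and Y ~ Exponential(1)) compares the empirical characteristic function of X + Y with the product of the individual empirical characteristic functions:

```python
# Empirical check that phi_{X+Y}(t) = phi_X(t) * phi_Y(t) for independent X, Y.
# The two distributions are assumed examples; any independent pair would do.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 1_000_000)
Y = rng.exponential(1.0, 1_000_000)

def ecf(sample, t):
    return np.mean(np.exp(1j * t * sample))          # empirical E(e^{itX})

for t in [0.5, 1.0, 3.0]:
    print(t, ecf(X + Y, t), ecf(X, t) * ecf(Y, t))   # agree up to Monte Carlo error
```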
Moments
Characteristic functions can also be used to find moments of a random variable. Provided that the nth moment exists, the characteristic function can be differentiated n times and

\operatorname{E}\left(X^n\right) = i^{-n}\, \varphi_X^{(n)}(0) = i^{-n}\, \left[\frac{d^n}{dt^n} \varphi_X(t)\right]_{t=0}.
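As an illustrative symbolic check (assuming X standard normal with φ(t) = e^{-t^2/2}, and using SymPy), differentiation at zero recovers the familiar moments 0, 1, 0, 3:

```python
# Moments of N(0, 1) from derivatives of its characteristic function;
# the choice of distribution is an assumption of this sketch.
import sympy as sp

t = sp.symbols('t', real=True)
phi = sp.exp(-t**2 / 2)                       # cf of the standard normal
for n in range(1, 5):
    moment = sp.I**(-n) * sp.diff(phi, t, n).subs(t, 0)
    print(n, sp.simplify(moment))             # 0, 1, 0, 3
```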
For example, suppose X has a standard Cauchy distribution. Then

\varphi_X(t) = e^{-|t|}.

See how this is not differentiable at t = 0, showing that the Cauchy distribution has no expectation. Also see that the sample mean \bar{X} of n independent observations has characteristic function

\varphi_{\bar{X}}(t) = \left(e^{-|t|/n}\right)^n = e^{-|t|},

using the result from the previous section. This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.
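A small simulation (an assumed illustration; the number of replications and the sample size are arbitrary) makes the point vivid: sample means of standard Cauchy draws do not concentrate, and their quartiles stay near ±1, the quartiles of a single standard Cauchy observation:

```python
# Sample means of Cauchy draws have the same Cauchy distribution as one draw.
import numpy as np

rng = np.random.default_rng(1)
means = rng.standard_cauchy((10_000, 500)).mean(axis=1)  # 10k means, n = 500 each
print(np.percentile(means, [25, 50, 75]))                # roughly [-1, 0, 1]
```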
The logarithm of a characteristic function is a cumulant generating function, which is useful for finding cumulants.
An example
The Gamma distribution with scale parameter θ and a shape parameter k has the characteristic function

\varphi(t) = (1 - \theta i t)^{-k}.

Now suppose that we have

X \sim \Gamma(k_1, \theta) \quad \text{and} \quad Y \sim \Gamma(k_2, \theta),

with X and Y independent from each other, and we wish to know what the distribution of X + Y is. The characteristic functions are

\varphi_X(t) = (1 - \theta i t)^{-k_1}, \qquad \varphi_Y(t) = (1 - \theta i t)^{-k_2},

which by independence and the basic properties of characteristic functions leads to

\varphi_{X+Y}(t) = \varphi_X(t)\, \varphi_Y(t) = (1 - \theta i t)^{-k_1} (1 - \theta i t)^{-k_2} = (1 - \theta i t)^{-(k_1 + k_2)}.

This is the characteristic function of the gamma distribution with scale parameter θ and shape parameter k_1 + k_2, and we therefore conclude

X + Y \sim \Gamma(k_1 + k_2, \theta).

The result can be extended to n independent gamma-distributed random variables with the same scale parameter, and we get

\forall i \in \{1, \ldots, n\} : X_i \sim \Gamma(k_i, \theta) \quad \Rightarrow \quad \sum_{i=1}^n X_i \sim \Gamma\left(\sum_{i=1}^n k_i,\, \theta\right).
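This additivity is easy to confirm numerically (assuming, as arbitrary example values, k_1 = 2, k_2 = 3.5 and θ = 1.5): the empirical characteristic function of X + Y matches (1 - θit)^{-(k_1 + k_2)} up to Monte Carlo error:

```python
# Empirical cf of a sum of independent gamma variables vs. the closed form;
# the parameter values are assumptions of this sketch.
import numpy as np

rng = np.random.default_rng(2)
k1, k2, theta = 2.0, 3.5, 1.5
X = rng.gamma(k1, theta, 1_000_000)     # numpy's gamma takes (shape, scale)
Y = rng.gamma(k2, theta, 1_000_000)
for t in [0.3, 1.0]:
    print(t, np.mean(np.exp(1j * t * (X + Y))), (1 - theta * 1j * t) ** (-(k1 + k2)))
```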
Multivariate characteristic functions
If X is a multivariate random variable, then its characteristic function is defined as

\varphi_X(t) = \operatorname{E}\left(e^{i\, t \cdot X}\right).

Here, the dot signifies the vector dot product (t is in the dual space of x).
Example
If X ~ N(0, Σ) is a multivariate Gaussian with zero mean, then

\varphi_X(t) = \operatorname{E}\left(e^{i\, t \cdot X}\right) = e^{-\frac{1}{2} t^{\top} \Sigma t}.
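As a hedged numerical check (assuming an arbitrary 2 × 2 covariance matrix Σ and test vector t), the empirical mean of e^{i t·X} over Gaussian samples matches exp(-t^T Σ t / 2):

```python
# Empirical vs. exact characteristic function of a zero-mean Gaussian vector;
# Sigma and t below are assumed example values.
import numpy as np

rng = np.random.default_rng(3)
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], Sigma, 500_000)
t = np.array([0.4, -0.7])
print(np.mean(np.exp(1j * (X @ t))), np.exp(-t @ Sigma @ t / 2))
```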
Matrix-valued random variables
If X is a matrix-valued random variable, then the characteristic function is

\varphi_X(T) = \operatorname{E}\left(e^{i\, \operatorname{tr}(TX)}\right).

Here tr is the trace function and matrix multiplication (of T and X) is used. Note that the order of the multiplication is immaterial (TX \neq XT in general, but \operatorname{tr}(XT) = \operatorname{tr}(TX)).

Examples of matrix-valued distributions include the Wishart distribution.
Related concepts
Related concepts include the moment-generating function and the probability-generating function. The characteristic function exists for all probability distributions; this is not the case for the moment-generating function.
The characteristic function is closely related to the Fourier transform: the characteristic function of a probability density function p(x) is the complex conjugate of the continuous Fourier transform of p(x) (according to the usual convention):

\varphi_X(t) = \overline{P(t)} = \int_{-\infty}^{\infty} e^{itx} p(x)\, dx,

where P(t) denotes the continuous Fourier transform of the probability density function p(x),

P(t) = \int_{-\infty}^{\infty} e^{-itx} p(x)\, dx.

Likewise, p(x) may be recovered from \varphi_X(t) through the inverse Fourier transform:

p(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx}\, \varphi_X(t)\, dt.
Indeed, even when the random variable does not have a density, the characteristic function may be seen as the Fourier transform of the measure corresponding to the random variable.