Differential operator

In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation, accepting a function and returning another (in the style of a higher-order function in computer science).

There are certainly reasons not to restrict to linear operators; for instance the Schwarzian derivative is a well-known non-linear operator. Only the linear case will be addressed here.

Notations

The most commonly used differential operator is the action of taking the derivative itself. Common notations for this operator include:

{d \over dx}
D,\, where the variable with respect to which one is differentiating is clear, and
D_x,\, where the variable is declared explicitly.

First derivatives are written as above, but for higher, n-th derivatives, the following alterations are useful:

{d^n \over dx^n}
D^n\,
D^n_x.\,

The creation and use of the D notation is credited to Oliver Heaviside, who considered differential operators of the form

\sum_{k=0}^n c_k D^k

in his study of differential equations.
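
An operator of this form can be applied symbolically; below is a minimal SymPy sketch applying the operator 2 + 3D + D^2 (the coefficients are arbitrary illustrative choices) to an unspecified function f:

import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# Apply 2 + 3D + D^2 to f; c[k] is the coefficient of D^k
c = [2, 3, 1]
Pf = sum(ck * sp.diff(f(x), x, k) for k, ck in enumerate(c))
print(Pf)  # 2*f(x) + 3*Derivative(f(x), x) + Derivative(f(x), (x, 2))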

One of the most frequently seen differential operators is the Laplacian operator, defined by

\Delta=\nabla^{2}=\sum_{k=1}^n {\partial^2\over \partial x_k^2}.
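
The Laplacian is the sum of the second partials in each variable; a minimal SymPy sketch (the sample function e^x sin(y), which happens to be harmonic, is an arbitrary choice):

import sympy as sp

x, y = sp.symbols('x y')
u = sp.exp(x) * sp.sin(y)  # sample function, chosen for illustration

# Laplacian: sum of second partial derivatives in each variable
laplacian = sum(sp.diff(u, v, 2) for v in (x, y))
print(sp.simplify(laplacian))  # 0, so u is harmonic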

Another differential operator is the Θ operator, defined by

\Theta = z {d \over dz}.

This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in z:

\Theta (z^k) = k z^k,\quad k=0,1,2,\dots

In n variables the homogeneity operator is given by

\Theta = \sum_{k=1}^n x_k \frac{\partial}{\partial x_k}.

As in one variable, the eigenspaces of Θ are the spaces of homogeneous polynomials.
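
Both eigenvalue properties can be checked symbolically; a minimal SymPy sketch (the monomial z^k and the homogeneous polynomial x^2 y are illustrative test cases):

import sympy as sp

z = sp.symbols('z')
k = sp.symbols('k', integer=True, positive=True)

# One variable: Theta(z^k) = k z^k
print(z * sp.diff(z**k, z))  # k*z**k

# Two variables: x**2*y is homogeneous of degree 3
x, y = sp.symbols('x y')
p = x**2 * y
print(x * sp.diff(p, x) + y * sp.diff(p, y))  # 3*x**2*y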

Adjoint of an operator

See also: Hermitian adjoint

Given a linear differential operator

Tu = \sum_{k=0}^n a_k(x) D^k u

the adjoint of this operator is defined as the operator T^* such that

\langle Tu,v \rangle = \langle u, T^*v \rangle

where the notation \langle\cdot,\cdot\rangle is used for the scalar product or inner product. This definition therefore depends on the definition of the scalar product.

Formal adjoint in one variable

In the functional space of square-integrable functions on a real interval (a, b), the scalar product is defined by

\langle f, g \rangle = \int_a^b f(x) \, \overline{g(x)} \,dx.

If one moreover adds the condition that f or g vanishes as x \to a and x \to b, one can also define the adjoint of T by

T^*u = \sum_{k=0}^n (-1)^k D^k [a_k(x)u].\,

This formula does not explicitly depend on the definition of the scalar product. It is therefore sometimes chosen as a definition of the adjoint operator. When T^* is defined according to this formula, it is called the formal adjoint of T.

A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint.
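
The formula for the formal adjoint translates directly into symbolic code; a minimal SymPy sketch (the helper formal_adjoint and the coefficients sin(x), x^2, e^x are illustrative choices):

import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')

def formal_adjoint(coeffs, w, x):
    # T* w = sum_k (-1)^k D^k [a_k(x) w]
    return sum((-1)**k * sp.diff(ak * w, x, k) for k, ak in enumerate(coeffs))

# T u = sin(x) u + x**2 u' + exp(x) u''
coeffs = [sp.sin(x), x**2, sp.exp(x)]
print(sp.expand(formal_adjoint(coeffs, u(x), x)))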

Several variables

If Ω is a domain in R^n, and P a differential operator on Ω, then the adjoint of P is defined in L^2(Ω) by duality in the analogous manner:

\langle f, P^* g\rangle_{L^2(\Omega)} = \langle P f, g\rangle_{L^2(\Omega)}

for all smooth L^2 functions f, g. Since smooth functions are dense in L^2, this defines the adjoint on a dense subset of L^2: P^* is a densely defined operator.

Example

The Sturm–Liouville operator is a well-known example of a formally self-adjoint operator. This second-order linear differential operator L can be written in the form

Lu = -(pu')'+qu=-(pu''+p'u')+qu=-pu''-p'u'+qu=(-p) D^2 u +(-p') D u + (q)u.\;\!

This property can be proven using the formal adjoint definition above.

\begin{align}
L^*u & {} = (-1)^2 D^2 [(-p)u] + (-1)^1 D [(-p')u] + (-1)^0 (qu) \\
 & {} = -D^2(pu) + D(p'u)+qu \\
 & {} = -(pu)''+(p'u)'+qu \\
 & {} = -p''u-2p'u'-pu''+p''u+p'u'+qu \\
 & {} = -p'u'-pu''+qu \\
 & {} = -(pu')'+qu \\
 & {} = Lu
\end{align}

This operator is central to Sturm–Liouville theory, where the eigenfunctions (analogues of eigenvectors) of this operator are considered.
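
The computation above can also be verified symbolically; a minimal SymPy check that L agrees with its formal adjoint, with p and q left as unspecified functions:

import sympy as sp

x = sp.symbols('x')
u, p, q = sp.Function('u'), sp.Function('p'), sp.Function('q')

# L u = -(p u')' + q u
Lu = -sp.diff(p(x) * sp.diff(u(x), x), x) + q(x) * u(x)

# Formal adjoint: L* u = D^2[(-p) u] - D[(-p') u] + q u
Lstar = (sp.diff(-p(x) * u(x), x, 2)
         - sp.diff(-sp.diff(p(x), x) * u(x), x)
         + q(x) * u(x))

print(sp.simplify(Lu - Lstar))  # 0, so L is formally self-adjoint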

Properties of differential operators

Differentiation is linear, i.e.,

D(f+g) = (Df)+(Dg)\,
D(af) = a(Df)\,

where f and g are functions, and a is a constant.

Any polynomial in D with function coefficients is also a differential operator. We may also compose differential operators by the rule

(D_1 \circ D_2)(f) = D_1(D_2(f)).\,

Some care is then required: first, any function coefficients in the operator D_2 must be differentiable as many times as the application of D_1 requires. To get a ring of such operators we must assume derivatives of all orders of the coefficients used. Second, this ring will not be commutative: an operator gD is not in general the same as Dg. In fact we have, for example, the relation basic in quantum mechanics:

Dx - xD = 1.\,

The subring of operators that are polynomials in D with constant coefficients is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.
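
The relation Dx - xD = 1 is easy to verify symbolically; a minimal SymPy check, applying both sides to an unspecified function f:

import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# (Dx - xD) f = (x f)' - x f' = f, so the commutator acts as the identity
print(sp.expand(sp.diff(x * f(x), x) - x * sp.diff(f(x), x)))  # f(x)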

The differential operators also obey the shift theorem.
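
The shift theorem states that P(D)(e^{ax} f) = e^{ax} P(D + a) f for any polynomial P in D; a minimal SymPy check for the illustrative choice P(D) = D^2:

import sympy as sp

x, a = sp.symbols('x a')
f = sp.Function('f')

# D^2 (e^{ax} f) should equal e^{ax} (D + a)^2 f = e^{ax} (f'' + 2a f' + a^2 f)
lhs = sp.diff(sp.exp(a*x) * f(x), x, 2)
rhs = sp.exp(a*x) * (sp.diff(f(x), x, 2) + 2*a*sp.diff(f(x), x) + a**2*f(x))
print(sp.simplify(lhs - rhs))  # 0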

Several variables

The same constructions can be carried out with partial derivatives, differentiation with respect to different variables giving rise to operators that commute (see symmetry of second derivatives).
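
A quick symbolic check of this commutativity, on an arbitrarily chosen smooth function:

import sympy as sp

x, y = sp.symbols('x y')
g = sp.exp(x*y) * sp.sin(x + y)  # sample smooth function

# Differentiating in x then y agrees with differentiating in y then x
print(sp.simplify(sp.diff(g, x, y) - sp.diff(g, y, x)))  # 0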

Coordinate-independent description and relation to commutative algebra

In differential geometry and algebraic geometry it is often convenient to have a coordinate-independent description of differential operators between two vector bundles. Let E and F be two vector bundles over a manifold M. An \mathbb{R}-linear mapping of sections P: \Gamma(E) \rightarrow \Gamma(F)\, is said to be a k-th order linear differential operator if it factors through the jet bundle J^k(E)\,. In other words, there exists a linear mapping of vector bundles

i_P: J^k(E) \rightarrow F\,

such that

P = \hat{i}_P\circ j^k

where \hat{i}_P denotes the map induced by i_P\, on sections, and j^k:\Gamma(E)\rightarrow \Gamma(J^k(E))\, is the canonical (or universal) k-th order differential operator.

This just means that for a given section s of E, the value of P(s) at a point x\in M is fully determined by the k-th order infinitesimal behavior of s at x. In particular this implies that P(s)(x) is determined by the germ of s at x, which is expressed by saying that differential operators are local. A foundational result is the Peetre theorem showing that the converse is also true: any local operator is differential.

An equivalent, but purely algebraic description of linear differential operators is as follows: an \mathbb{R}-linear map P is a k-th order linear differential operator if for any k+1 smooth functions f_0,\ldots,f_k \in C^\infty(M) we have

[f_k,[f_{k-1},[\cdots[f_0,P]\cdots]]]=0.

Here the bracket [f,P]:\Gamma(E)\rightarrow \Gamma(F) is defined as the commutator

[f,P](s)=P(f\cdot s)-f\cdot P(s).\,

This characterization of linear differential operators shows that they are particular mappings between modules over a commutative algebra, allowing the concept to be seen as a part of commutative algebra.
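
This characterization can be tested directly; the SymPy sketch below (the helper bracket and the sample functions are illustrative choices) checks that for the second-order operator P(s) = s'' the three-fold iterated bracket vanishes:

import sympy as sp

x = sp.symbols('x')
s = sp.Function('s')
f0, f1, f2 = sp.sin(x), sp.cos(x), sp.exp(x)  # arbitrary smooth functions

P = lambda w: sp.diff(w, x, 2)  # a second-order operator: P(s) = s''

def bracket(f, op):
    # [f, op](w) = op(f w) - f op(w)
    return lambda w: op(f * w) - f * op(w)

# For a 2nd-order operator, the bracket iterated with three functions is zero
nested = bracket(f2, bracket(f1, bracket(f0, P)))
print(sp.simplify(nested(s(x))))  # 0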
