Kernel (linear operator)

Main article: Kernel (mathematics)

In linear algebra and functional analysis, the kernel of a linear operator L is the set of all operands v for which L(v) = 0. That is, if L : V → W, then

\ker(L) = \left\{ v\in V : L(v)=0 \right\}\text{,}

where 0 denotes the null vector in W. The kernel of L is a linear subspace of the domain V.

The kernel of a linear operator R^m → R^n is the same as the null space of the corresponding n × m matrix. Sometimes the kernel of a general linear operator is referred to as the null space of the operator.
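
This correspondence can be checked numerically. The following is a minimal sketch, not part of the original article, using a hypothetical 2 × 3 matrix A and SciPy's null_space routine; it computes an orthonormal basis of the null space and verifies that A maps each basis vector to the zero vector.

    # Minimal sketch: the kernel of the operator v -> A v is the null space of A.
    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 2.0, 3.0],   # hypothetical matrix of an operator R^3 -> R^2
                  [4.0, 5.0, 6.0]])

    K = null_space(A)                # columns form an orthonormal basis of ker(A)
    print(K.shape[1])                # nullity: here 1, since A has rank 2
    print(np.allclose(A @ K, 0))     # True: each basis vector is mapped to 0 in R^2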

Examples

  1. If L : R^m → R^n, then the kernel of L is the solution set to a homogeneous system of linear equations. For example, if L is the operator:
    L(x_1,x_2,x_3) = (2x_1 + 5x_2 - 3x_3,\; 4x_1 + 2x_2 + 7x_3)
    then the kernel of L is the set of solutions to the equations
    \begin{alignat}{7}
 2x_1 &&\; + \;&& 5x_2 &&\; - \;&& 3x_3 &&\; = \;&& 0 \\
 4x_1 &&\; + \;&& 2x_2 &&\; + \;&& 7x_3 &&\; = \;&& 0
\end{alignat}\text{.}
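    (A computational sketch of this system appears after the list.)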
  2. Let C[0,1] denote the vector space of all continuous real-valued functions on the interval [0,1], and define L : C[0,1] → R by the rule
    L(f) = f(0.3)\text{.}\,
    Then the kernel of L consists of all functions f ∈ C[0,1] for which f(0.3) = 0.
  3. Let C^∞(R) be the vector space of all infinitely differentiable functions R → R, and let D : C^∞(R) → C^∞(R) be the differentiation operator:
    D(f) = \frac{df}{dx}\text{.}
    Then the kernel of D consists of all functions in C^∞(R) whose derivatives are zero, i.e. the set of all constant functions.
  4. Let R^∞ be the direct sum of infinitely many copies of R, and let s : R^∞ → R^∞ be the shift operator
    s(x_1,x_2,x_3,x_4,\ldots) = (x_2,x_3,x_4,\ldots)\text{.}
    Then the kernel of s is the one-dimensional subspace consisting of all vectors (x_1, 0, 0, ...). Note that s is onto, despite having nontrivial kernel.
  5. If V is an inner product space and W is a subspace, the kernel of the orthogonal projection V → W is the orthogonal complement to W in V.
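
The kernel in Example 1 can be computed exactly. The following is a minimal sketch, not part of the original article, applying SymPy's nullspace method to the matrix of L with respect to the standard bases.

    # Sketch: exact kernel of the operator L from Example 1.
    from sympy import Matrix

    A = Matrix([[2, 5, -3],      # matrix of L with respect to the standard bases
                [4, 2,  7]])

    basis = A.nullspace()        # basis of {x in R^3 : A x = 0}
    print(len(basis))            # 1: the kernel is a line through the origin
    print(basis[0].T)            # Matrix([[-41/16, 13/8, 1]]), up to scaling
    print((A * basis[0]).T)      # Matrix([[0, 0]]): the zero vector in R^2

Any nonzero scalar multiple of the printed vector, for instance (-41, 26, 16), also spans the kernel.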

Properties

If L : V → W, then two elements of V have the same image in W if and only if their difference lies in the kernel of L:

L(v) = L(w)\;\;\;\;\Leftrightarrow\;\;\;\;L(v-w)=0\text{.}

It follows that the image of L is isomorphic to the quotient of V by the kernel:

\text{im}(L) \cong V / \ker(L)\text{.}

When V is finite dimensional, this implies the rank-nullity theorem:

\dim(\ker L) + \dim(\text{im}\,L) = \dim(V)\text{.}\,
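
A minimal sketch, not from the original article, checking this identity for the operator of Example 1 with SymPy, where rank gives dim(im L) and the number of nullspace basis vectors gives dim(ker L):

    # Sketch: rank-nullity check for the operator of Example 1, L : R^3 -> R^2.
    from sympy import Matrix

    A = Matrix([[2, 5, -3],
                [4, 2,  7]])

    nullity = len(A.nullspace())     # dim ker(L) = 1
    rank = A.rank()                  # dim im(L)  = 2
    print(nullity + rank == A.cols)  # True: 1 + 2 = 3 = dim(R^3)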

When V is an inner product space, the quotient V / ker(L) can be identified with the orthogonal complement in V of ker(L). This is the generalization to linear operators of the row space of a matrix.
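
For a matrix operator the identification is concrete: the orthogonal complement of the kernel is the row space. The following sketch, not part of the original article, checks for the Example 1 matrix that every row is orthogonal to every kernel basis vector.

    # Sketch: the row space of A is orthogonal to ker(A).
    from sympy import Matrix

    A = Matrix([[2, 5, -3],
                [4, 2,  7]])

    kernel_basis = A.nullspace()
    rows = [A.row(i) for i in range(A.rows)]  # spanning set of the row space
    print(all((r * k)[0, 0] == 0              # every dot product vanishes
              for r in rows for k in kernel_basis))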

Kernels in functional analysis

If V and W are topological vector spaces (and W is finite-dimensional), then a linear operator L : V → W is continuous if and only if the kernel of L is a closed subspace of V.

See also