Orthogonality principle and error minimization
From Wikipedia, the free encyclopedia
The orthogonality principle is commonly used as the foundation for solving the following general type of problem:
Given a vector space V, let

    W = \operatorname{span}\{p_1, p_2, \dots, p_n\},

where each p_i \in V. Find a vector \hat{v} \in W, that is, a linear combination of the vectors p_i, that approximates a given vector v \in V.
This type of problem arises in signal processing, for example, when we have a signal in some unknown or unworkable space. We want to estimate this signal with one in a known space or a space that we can work in. This problem also implicitly calls for error minimization.
Geometrically, we can see this problem in the simple case where W is spanned by a single vector:
We want to find the closest approximation to a vector v by a vector \hat{v} in the subspace W. From the geometric interpretation, it is easily seen that the best approximation, or smallest error, occurs when the error vector, e = v - \hat{v}, is orthogonal to the vectors in W. This is the foundation of the orthogonality principle.
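The one-vector case can be sketched numerically. In this sketch (the variable names are our own, and the Euclidean inner product is assumed), the best approximation of v in the span of a single vector p is \hat{v} = (\langle v, p \rangle / \langle p, p \rangle) p, and the resulting error is orthogonal to p:

```python
import numpy as np

# Project v onto the one-dimensional subspace W spanned by p.
v = np.array([3.0, 4.0])
p = np.array([1.0, 0.0])

c = np.dot(v, p) / np.dot(p, p)  # optimal coefficient
vhat = c * p                     # closest vector in W
e = v - vhat                     # error vector

print(np.dot(e, p))  # 0.0: the error is orthogonal to p, hence to W
```

Any other choice of coefficient c would tilt e away from the perpendicular and lengthen it, which is exactly the geometric picture above.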
A solution to error minimization problems
While there are various ways to approach error minimization problems, the following is one way to arrive at a solution.
If possible, we want to approximate a vector v by

    \hat{v} = \sum_{i=1}^n c_i p_i,

where \hat{v} is the approximation of v as a linear combination of the vectors p_i spanning W. Therefore, we want to solve for the coefficients, c_i, so that we may write our approximation in known terms.
By the orthogonality principle, the square norm of the error vector, e = v - \hat{v}, is minimized when, for j = 1, 2, \dots, n,

    \left\langle v - \sum_{i=1}^n c_i p_i,\; p_j \right\rangle = 0.
Developing this equation, we obtain

    \sum_{i=1}^n c_i \langle p_i, p_j \rangle = \langle v, p_j \rangle, \qquad j = 1, 2, \dots, n,

or, in matrix form,

    \begin{bmatrix} \langle p_1, p_1 \rangle & \cdots & \langle p_n, p_1 \rangle \\ \vdots & \ddots & \vdots \\ \langle p_1, p_n \rangle & \cdots & \langle p_n, p_n \rangle \end{bmatrix} \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} = \begin{bmatrix} \langle v, p_1 \rangle \\ \vdots \\ \langle v, p_n \rangle \end{bmatrix},

that is, R c = p, where R is the Grammian matrix and p is the cross-correlation vector. If the Grammian matrix is invertible, then

    c = R^{-1} p.

The inner products in the Grammian matrix and the cross-correlation vector should be defined so that the elements of these matrices are known.
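For the Euclidean inner product, the normal equations R c = p can be solved directly. A minimal sketch (the matrix P and the test vectors are our own illustration): the basis vectors p_1, \dots, p_n are the columns of P, so R = P^T P and p = P^T v.

```python
import numpy as np

# Basis vectors p_1, p_2 of W are the columns of P.
P = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # vectors in R^3, n = 2
v = np.array([1.0, 2.0, 6.0])       # vector to approximate

R = P.T @ P                 # Grammian matrix, R[j, i] = <p_i, p_j>
p = P.T @ v                 # cross-correlation vector, p[j] = <v, p_j>
c = np.linalg.solve(R, p)   # coefficients (Grammian assumed invertible)

vhat = P @ c                        # the approximation of v in W
print(P.T @ (v - vhat))             # [0. 0.]: error orthogonal to each p_j
```

Here c = [2, 3], so \hat{v} = [2, 3, 5] and the error e = [-1, -1, 1] is orthogonal to both basis vectors, confirming the orthogonality condition derived above.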
References
- Moon, Todd K. (2000), Mathematical Methods and Algorithms for Signal Processing, Prentice-Hall, ISBN 0-201-36186-8