Cybenko theorem
The Cybenko theorem, proved by George Cybenko in 1989, states that a single-hidden-layer, feed-forward neural network can approximate any continuous multivariate function to any desired degree of accuracy, and that failure to approximate a function arises from poor choices of the parameters $\mathbf{w}$, $\boldsymbol{\alpha}$, and $\boldsymbol{\theta}$ (defined below) or from an insufficient number of hidden neurons.
Formal statement
Let $\sigma$ be any continuous sigmoid-type function, e.g., $\sigma(\xi) = 1/(1+e^{-\xi})$. Then, given any continuous real-valued function $f$ on $[0,1]^n$ (or any other compact subset of $\mathbb{R}^n$) and $\varepsilon > 0$, there exist vectors $\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_N$, $\boldsymbol{\alpha}$, and $\boldsymbol{\theta}$, and a parameterized function $G(\cdot, \mathbf{w}, \boldsymbol{\alpha}, \boldsymbol{\theta}) : [0,1]^n \to \mathbb{R}$ such that

$$|G(\mathbf{x}, \mathbf{w}, \boldsymbol{\alpha}, \boldsymbol{\theta}) - f(\mathbf{x})| < \varepsilon \quad \text{for all } \mathbf{x} \in [0,1]^n,$$

where

$$G(\mathbf{x}, \mathbf{w}, \boldsymbol{\alpha}, \boldsymbol{\theta}) = \sum_{i=1}^{N} \alpha_i \, \sigma(\mathbf{w}_i^T \mathbf{x} + \theta_i)$$

and $\mathbf{w}_i \in \mathbb{R}^n$, $\alpha_i, \theta_i \in \mathbb{R}$, $\mathbf{w} = (\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_N)$, $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_N)$, and $\boldsymbol{\theta} = (\theta_1, \theta_2, \ldots, \theta_N)$.
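The theorem asserts that suitable parameters exist but does not say how to find them. As a concrete illustration (not part of the original article), the following Python sketch builds a sum of exactly this form with $n = 1$: the inner weights $w_i$ and biases $\theta_i$ are drawn at random, and the outer coefficients $\alpha_i$ are then fitted by least squares, one simple way of choosing parameters of the shape the theorem describes. The target function, the value of $N$, and all numerical ranges here are illustrative assumptions.

```python
import numpy as np

def sigmoid(xi):
    # A continuous sigmoid-type function, as in the theorem's example.
    return 1.0 / (1.0 + np.exp(-xi))

# Target: an arbitrary continuous function on [0, 1] (illustrative choice).
def f(x):
    return np.sin(2 * np.pi * x) + 0.5 * x

N = 50                                   # number of hidden neurons
rng = np.random.default_rng(0)
w = rng.normal(0.0, 10.0, N)             # inner weights w_i (here n = 1)
theta = rng.uniform(-10.0, 10.0, N)      # biases theta_i

x = np.linspace(0.0, 1.0, 200)
# H[k, i] = sigmoid(w_i * x_k + theta_i): the hidden-layer activations.
H = sigmoid(np.outer(x, w) + theta)

# Fit the outer coefficients alpha by least squares.
alpha, *_ = np.linalg.lstsq(H, f(x), rcond=None)

# G(x) = sum_i alpha_i * sigmoid(w_i * x + theta_i)
G = H @ alpha
print("max |G(x) - f(x)| on the grid:", np.max(np.abs(G - f(x))))
```

Increasing $N$ typically drives the approximation error down, mirroring the theorem's guarantee that for any $\varepsilon > 0$ some choice of parameters achieves it.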
References
- Cybenko, G.V. (1989). "Approximation by superpositions of a sigmoidal function". Mathematics of Control, Signals and Systems, 2(4), 303–314.
- Hassoun, M. (1995). Fundamentals of Artificial Neural Networks. MIT Press, p. 48.