Perceptrons (book)

From Wikipedia, the free encyclopedia

Perceptrons is a book by Marvin Minsky and Seymour Papert, published in 1969. An edition with handwritten corrections and additions was released in the early 1970s. An expanded edition followed in 1987, containing a chapter responding to the criticisms made of the book in the 1980s.

The main subject of the book is perceptrons, an important kind of artificial neural network developed in the late 1950s and early 1960s. The leading researcher on perceptrons was Frank Rosenblatt, author of the book Principles of Neurodynamics. Rosenblatt and Minsky had known each other since adolescence, having studied one year apart at the Bronx High School of Science[citation needed]. They later became central figures in a historic debate within the AI research community, and are known to have engaged in heated discussions at conferences[citation needed]. Despite the dispute, the corrected version of the book, released after Rosenblatt's tragic early death, carries a dedication to him.

The book is at the center of a long-standing controversy in the study of artificial intelligence. It has been claimed that the authors' pessimistic predictions were responsible for a misguided change in the direction of AI research, concentrating efforts on so-called "symbolic" systems and contributing to the so-called AI winter. On this account, the shift was shown to be unfortunate in the 1980s, when new discoveries demonstrated that the book's prognoses were wrong.

The book contains a number of mathematical proofs regarding perceptrons, and while it highlights some of their strengths, it also demonstrates some previously unknown limitations. The most important of these concerns the computation of certain predicates, such as the XOR function, as well as the important connectedness predicate. The problem of connectedness is illustrated by the oddly colored cover of the book, intended to show how humans themselves have difficulty computing this predicate[citation needed].
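The XOR limitation for a single threshold unit can be checked directly. The sketch below (an illustration, not taken from the book's proofs) brute-forces a grid of weights and biases: no setting reproduces XOR, while AND is easily found, because XOR's outputs are not linearly separable. The grid is finite, so this only illustrates the result; the actual theorem holds for all real-valued weights.

```python
# A single threshold unit ("perceptron") computes step(w1*x1 + w2*x2 + b).
# Brute-force a grid of weights/biases: none reproduces XOR, but AND is
# easy to find -- XOR's truth table is not linearly separable.
import itertools

def unit(w1, w2, b, x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

grid = [k / 2 for k in range(-8, 9)]  # weights and bias in [-4, 4], step 0.5

def solutions(table):
    return [
        (w1, w2, b)
        for w1, w2, b in itertools.product(grid, repeat=3)
        if all(unit(w1, w2, b, x1, x2) == y for (x1, x2), y in table.items())
    ]

xor_solutions = solutions(XOR)
and_solutions = solutions(AND)
print(len(xor_solutions))  # -> 0: no single unit on this grid computes XOR
print(len(and_solutions) > 0)  # -> True: AND is linearly separable
```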

The XOR affair

Critics of the book state that the authors argued that, since a single artificial neuron is incapable of implementing some functions, such as the XOR logical function, larger networks would have similar limitations and should therefore be abandoned. On this account, later research on three-layered perceptrons finally showed how to implement such functions, saving the technique from obliteration.
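A three-layered feed-forward network of the same threshold units does compute XOR. The sketch below uses hand-chosen weights (one of many possible choices, picked for illustration): the two hidden units compute OR and NAND of the inputs, and the output unit ANDs them together.

```python
# XOR from a small feed-forward network of threshold units.
# Hidden layer: h1 = OR(x1, x2), h2 = NAND(x1, x2); output = AND(h1, h2).
def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # OR(x1, x2)
    h2 = step(-x1 - x2 + 1.5)   # NAND(x1, x2)
    return step(h1 + h2 - 1.5)  # AND(h1, h2)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", xor_net(x1, x2))  # prints 0, 1, 1, 0 in turn
```

Since OR, AND, and NAND are each linearly separable, every gate here is a single threshold unit; composing them yields the non-separable XOR.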

There are many mistakes in this story. Although a single neuron can in fact compute only a small number of logical predicates, it was widely known that networks of such elements can compute any possible boolean function. This was known to Warren McCulloch and Walter Pitts, who even proposed how to create a Turing machine from their formal neurons; it is mentioned in Rosenblatt's book, and is even mentioned in Perceptrons itself.[citation needed] Minsky also makes extensive use of formal neurons to create simple theoretical computers in his book Computation: Finite and Infinite Machines.

What the book does prove is that in three-layered feed-forward perceptrons (with a so-called "hidden" or "intermediary" layer), it is not possible to compute some predicates unless at least one of the neurons in the first layer of neurons (the "intermediary" layer) is connected with a non-null weight to each and every input. This ran contrary to the hope of some researchers to rely mostly on networks with a few layers of "local" neurons, each connected to only a small number of inputs. A feed-forward machine with "local" neurons is much easier to build and use than a larger, recurrent neural network, and research at the time concentrated on such machines rather than on more complicated models.

Analysis of the controversy

Although it is a widely available book, many scientists discuss Perceptrons only by echoing what others have said, which helps spread misconceptions about it. Minsky has even compared the book to the fictional Necronomicon of H. P. Lovecraft's tales, a book known to many but read by few[1]. In the expanded edition, the authors address the criticism of the book that began in the 1980s with the wave of research symbolized by the PDP book.

The singular way in which Perceptrons was used, first by one group of scientists to drive AI research in one direction, and later by a new group to drive it in another, has already been the subject of a peer-reviewed sociological study of scientific development[2].