User:CharlesGillingham/More/AI Winter
From Wikipedia, the free encyclopedia
The abandonment of perceptrons in 1969
- See also: Perceptron and Frank Rosenblatt
A perceptron is a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like many AI researchers, he was optimistic about the power of perceptrons, predicting that the "perceptron may eventually be able to learn, make decisions, and translate languages."[1] An active research program into the paradigm was carried out throughout the 1960s but came to a sudden halt with the publication of Minsky and Papert's 1969 book Perceptrons. The book showed that there were severe limitations to what perceptrons could do and that Rosenblatt's claims had been grossly exaggerated. Its effect was devastating: virtually no research was done in connectionism for ten years.[2]
Eventually, the work of John Hopfield, David Rumelhart and others would revive the field, and connectionism would thereafter become a vital and useful part of artificial intelligence. The specific problems raised by Perceptrons were ultimately addressed using backpropagation, developed by Paul Werbos in 1974 and championed by Rumelhart in the early 1980s, along with other modern machine learning techniques.[3] Rosenblatt did not live to see this: he died in a boating accident shortly after the book was published.[1]
TODO
- Add an earlier episode: the abandonment of AI by IBM in 1960. "The Dartmouth conference represents a change in support for AI, from private industry to government"[4]
- Under DARPA's funding cuts, add an opening paragraph about the Licklider policy of "funding the man" and the freedom it allowed. Add a final paragraph about the success of the DART battle management system (ref: there's one in the timeline of artificial intelligence, and Russell and Norvig mention it.)
- Under The Lighthill report, add a paragraph about Alvey and the revival of research.
- Move information from AI Now into History of artificial intelligence under "AI behind the scenes" (eliminate jargon, shorten). Remove it from here because it's a little off topic. Or better: create an "applications of AI" article, and refer all these other articles to it.
- Add final section on the pathology of AI winters: why do they happen? Optimism & Exaggeration, Technical Problems, Recovery -- use Kurzweil pg. 264. Use material from the new "Mechanisms" sections (and reconsider this prose).
McCarthy against optimism: "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case"[5]
The pathology of economic bubbles in AI
Kurzweil: "While the widespread expectations for revolutionary change are accurate, they are incorrectly timed." (Kurzweil, p. 263)
Kurzweil "AI experienced a ... premature optimism"
Kurzweil:
The technology 'hype cycle' for a paradigm shift (railroads, AI, internet, telecommunications ...) typically starts with a period of unrealistic expectations based on a lack of understanding of all the enabling factors required. Although utilization of the new paradigm does increase exponentially, early growth is slow until the knee of the exponential is realized. While the widespread expectations for revolutionary change are accurate, they are incorrectly timed. When the prospects do not quickly pan out, a period of disillusionment sets in. Nevertheless exponential growth continues unabated, and years later a more mature and realistic transformation does take place. We saw this in the widespread railroad frenzy of the nineteenth century ... and we are still feeling the effects of the e-commerce and telecommunications busts.[6]
Hype
The first generation of AI researchers made these predictions about their work:
- 1958, Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."[7]
- 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do"[8]
- 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."[9]
- 1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being."[10]
Daniel Crevier gives several reasons to explain why AI has always generated (and still generates) such incredible hyperbole: ....
Limits and brick walls
Finally, a case can be made that these researchers genuinely believed that these things were possible. They were unaware of, or did not appreciate, how difficult the problems they faced would be: problems like commonsense reasoning, the limits of raw computer power, intractability and combinatorial explosion, ...
Disappointment and collapse
Recovery
Early episodes
IBM ends support for AI in 1956
The very first computers were referred to in the press as "electronic brains." The public's reaction was one of horror. When IBM researcher Herbert Gelernter told people about his experiments in AI, IBM abruptly shut him down and stopped all research into AI. The executives at IBM realized that, to sell computers, it was important to sell the idea that computers could "only do what they were told." They no longer allowed their machines to be called "giant brains" and instead preferred more innocuous nicknames like "number crunchers."[11]
IBM's thing with Herbert Gelernter
"What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that." A Conversation with Herbert Simon. By Reuben L. Hann. Gateway IX(2): 12-13 (1998).
"Yet, in spite of the early activity of Rochester and other IBM researchers, the corporation's interest in AI cooled. Although work continued on computer-based checkers and chess, an internal report prepared about 1960 took a strong position against broad support for AI." from "Developments in Artificial Intelligence", in National Research Council (1999), Funding a Revolution: Government Support for Computing Research. National Academy Press.
Notes
- ^ a b Crevier 1993, pp. 102–105
- ^ Crevier 1993, pp. 102–105; McCorduck 2004, pp. 104–107; Russell & Norvig 2003, p. 22
- ^ Crevier 1993, pp. 214–216; Russell & Norvig 2003, p. 25
- ^ "Thus, the activities surrounding the Dartmouth workshop were, at the outset, linked with the cutting-edge research at a leading private research laboratory (AT&T Bell Laboratories) and a rapidly emerging industrial giant (IBM). Researchers at Bell Laboratories and IBM nurtured the earliest work in AI and gave young academic researchers like McCarthy and Minsky credibility that might otherwise have been lacking. Moreover, the Dartmouth summer research project in AI was funded by private philanthropy and by industry, not by government. The same is true for much of the research that led up to the summer project." from "Developments in Artificial Intelligence", in NRC 1999
- ^ McCarthy 1974
- ^ Kurzweil 2005, pp. 263–264
- ^ Newell 1958, pp. 7–8
- ^ Simon 1965, p. 96
- ^ Minsky 1967, p. 2
- ^ Crevier 1993, p. 96. Minsky fiercely contended he was misquoted.
- ^ Crevier 1993, pp. 58, 221


