Dreyfus' critique of artificial intelligence
[Image: Book cover of the 1979 paperback edition of What Computers Can't Do]
Hubert Dreyfus argued that human intelligence and expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation, and that these unconscious skills can never be captured in formal rules. His critique was directed at (what John Haugeland calls) good old fashioned artificial intelligence: the first wave of AI research, which used high-level formal symbols to represent reality and tried to reduce intelligence to formal symbol manipulation.
In a series of papers and books, including 1965's Alchemy and AI,[1] 1972's classic What Computers Can't Do[2] and 1986's Mind over Machine,[3] Dreyfus presented a caustic assessment of AI's progress and a careful critique of the philosophical foundations of the field, based on the insights of modern continental philosophers such as Merleau-Ponty and Heidegger.
When Dreyfus' ideas were first introduced in the mid-1960s, they were met with ridicule and outright hostility.[4] By the 1980s, however, many of his perspectives had been rediscovered by researchers working in robotics and the new field of connectionism—approaches now called "subsymbolic" because they eschew early AI research's emphasis on high-level symbols.
Historian and AI researcher Daniel Crevier writes: "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."[5] Dreyfus would say in 2007 "I figure I won and it's over—they've given up."[6]
Dreyfus' critique
The grandiose promises of artificial intelligence
In Alchemy and AI and What Computers Can't Do, Dreyfus summarized the history of artificial intelligence and ridiculed the unwarranted optimism that permeated the field. For example, Herbert Simon, following the success of his General Problem Solver (1957), predicted that by 1967:
- A computer would be world champion in chess.
- A computer would discover and prove an important new mathematical theorem.
- Most theories in psychology would take the form of computer programs.
Chess proved embarrassing for both sides: Simon's prediction failed, but Dreyfus, who had ridiculed the weakness of existing chess programs in Alchemy and AI, was himself defeated by Richard Greenblatt's chess program Mac Hack in 1967, a game that AI researchers publicized widely.
Dreyfus' four assumptions of artificial intelligence research
All work in cognitive simulation and in artificial intelligence is predicated on the assumption that humans, in some fundamental way, process information in ways that computers can emulate. A computer can represent information and reason only by using individual symbols and formal rules (i.e. programs) that operate on those symbols. Dreyfus argued that there was no evidence that human thought consisted of this kind of symbol manipulation, and he identified four assumptions that had misled AI researchers into believing that it did. "In each case," Dreyfus writes, "the assumption is taken by workers in [AI] as an axiom, guaranteeing results, whereas it is, in fact, one hypothesis among others, to be tested by the success of such work."[7]
The biological assumption
- The brain processes information in discrete operations by way of some biological equivalent of on/off switches.
In the early days of research into neurology, scientists realized that neurons fire in all-or-nothing pulses. Several researchers, such as Walter Pitts and Warren McCulloch, argued that neurons functioned similarly to Boolean logic gates and so could be imitated by electronic circuitry at the level of the neuron.[8] When digital computers became widely used in the early 1950s, this argument was extended to suggest that the brain was a vast physical symbol system, manipulating the binary symbols of zero and one. Dreyfus was able to refute the biological assumption by citing research in neurology that suggested that the action and timing of neuron firing had analog components.[9]
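To make the gate analogy concrete, here is a minimal sketch of a McCulloch-Pitts-style threshold unit in Python (the weights and thresholds are illustrative choices, not taken from the 1943 paper): the unit fires in an all-or-nothing way, and with suitable parameters it reproduces a Boolean gate.

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fires (outputs 1) if and only if
    the weighted sum of its binary inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With weights (1, 1) and threshold 2 the unit behaves as an AND gate;
# lowering the threshold to 1 would turn it into an OR gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mcp_neuron((a, b), (1, 1), 2))
```

Wiring such units together yields circuits that compute Boolean functions, which is what made the neuron-as-switch picture so attractive to early AI.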
The psychological assumption
- The mind can be viewed as a device operating on bits of information according to formal rules.
This assumption is closely related to the physical symbol systems hypothesis, proposed by Herbert Simon and Allen Newell in 1963. Dreyfus argued that there was no evidence for it: psychological experiments and introspection do not show people applying formal rules to discrete bits of information when they think, and he held that the assumption gains its plausibility only by confusing the physical level of the brain, where lawful processes may operate, with the psychological level of the mind, which need not be following rules at all.
The epistemological assumption
- All knowledge can be formalized.
This assumption concerns the philosophical issue of epistemology, the study of knowledge. Early AI researchers assumed that all knowledge could be captured in formal, symbolic representations, such as semantic nets, which represent the meaning of a symbol by its connections to other symbols, to English words, and to cameras and other sensors. Dreyfus refuted this by showing that much of what we "know" about the world consists of complex attitudes or tendencies that make us lean towards one interpretation over another. He argued that our commonsense knowledge is largely sub-symbolic and unconscious.
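For a concrete sense of the kind of representation at issue, here is a toy semantic net in Python (the nodes and links are invented for illustration): each symbol's "meaning" is nothing more than its labeled connections to other symbols.

```python
# A toy semantic net: the meaning of a symbol is its links to other symbols.
semantic_net = {
    "canary": [("is-a", "bird"), ("color", "yellow")],
    "bird":   [("is-a", "animal"), ("can", "fly")],
}

def related(symbol, relation):
    """Follow one kind of labeled link out of a node."""
    return [target for rel, target in semantic_net.get(symbol, []) if rel == relation]

print(related("canary", "is-a"))  # ['bird']
```

Dreyfus' objection is that no finite set of such explicit links can capture the attitudes and tendencies that shape how we actually interpret a situation.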
The ontological assumption
- The world consists of independent facts that can be represented by independent symbols.
Dreyfus also identified a subtler assumption about the world. AI researchers (and futurists and science fiction writers) often assume that there is no limit to formal, scientific knowledge, because everything in the world can be described by symbols or scientific theories. This assumes that everything that exists can be understood as objects, properties of objects, classes of objects, relations of objects, and so on: precisely those things that can be described by logic, language and mathematics. The question of what exists is called ontology, and so Dreyfus calls this "the ontological assumption." If it is false, then it raises doubts about what we can ultimately know and about what machines will ultimately be able to help us do.
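The ontological assumption in miniature might look like the following sketch (the facts and predicates here are invented for illustration): the state of the world is encoded as a set of discrete, context-free assertions.

```python
# A world represented as independent facts, each an isolated symbol-tuple.
facts = {
    ("is-a", "cup1", "cup"),
    ("on", "cup1", "table1"),
    ("color", "cup1", "white"),
}

def holds(*fact):
    """Check whether a fact is part of the represented world."""
    return fact in facts

print(holds("on", "cup1", "table1"))  # True
```

Dreyfus' point is that what counts as a "cup", or as being "on" something, depends on human purposes and context, which a list of independent facts leaves out.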
The primacy of unconscious skills
The essential point that underlies all of Dreyfus' critiques is this: human thinking and knowledge do not consist primarily of the manipulation of high-level symbols that represent objects in the world. In What Computers Can't Do, Dreyfus analyzed the "cognitive simulation" school of AI research practiced by Allen Newell and Herbert Simon in the 1960s, and outlined the differences between the way their programs were designed and the way people actually think and solve problems. In 1986's Mind Over Machine, written during the heyday of expert systems, he analyzed the difference between human expertise and the programs that claimed to capture it.
Dreyfus argued that human problem solving and expertise depend on our unconscious sense of the context, of what's important and interesting given the situation, rather than on the process of searching through combinations of possibilities to find what we need. Dreyfus would describe it in 1986 as the difference between "knowing-that" and "knowing-how", based on Heidegger's distinction of present-at-hand and ready-to-hand.[10]
Knowing-that is our conscious, step-by-step problem solving abilities. We use these skills when we encounter a difficult problem that requires us to stop, step back and search through ideas one at a time. At moments like this, the ideas become very precise and simple: they become context-free symbols, which we manipulate using logic and language. These are the skills that Newell and Simon demonstrated with both psychological experiments and computer programs. Dreyfus agreed that their programs adequately imitated the skills he calls "knowing-that."
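A minimal sketch of this kind of explicit, one-step-at-a-time search (a toy illustration, not Newell and Simon's actual programs):

```python
from collections import deque

def solve(start, goal, moves):
    """Breadth-first search: consider alternatives one at a time,
    in the explicit, symbolic style of 'knowing-that'."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, step in moves:
            nxt = step(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Toy puzzle: reach 10 from 1 using "double" and "add one".
print(solve(1, 10, [("double", lambda n: n * 2), ("add1", lambda n: n + 1)]))
```

Every state and every move here is an explicit, context-free symbol, which is exactly the character Dreyfus conceded to "knowing-that" and denied to "knowing-how".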
Knowing-how, on the other hand, is the way we deal with things normally. We take actions without using conscious symbolic reasoning at all, as when we recognize a face, drive ourselves to work or find the right thing to say. We seem to simply jump to the appropriate response, without considering any alternatives. This is the essence of expertise, Dreyfus argued: when our intuitions have been trained to the point that we forget the rules and simply "size up the situation" and react. (Malcolm Gladwell would later call this "fast" process of expert thinking a "blink", in a bestseller of the same name.[11])
Our sense of the situation is based on our goals, our bodies and our culture—all of our unconscious intuitions, attitudes and knowledge about the world. This "context" or "background" (related to Heidegger's Dasein) is a form of knowledge that is not stored in our brains symbolically, but intuitively. It affects what we notice and what we don't notice, what we expect and what possibilities we don't consider: it is how we discriminate between what is essential and what is inessential. (Gladwell calls this "thin-slicing".) The things that are inessential are relegated to our "fringe consciousness" (borrowing a phrase from William James): the millions of things we are aware of but are not really thinking about right now.
Dreyfus claimed that no AI programs, as they were implemented in the 70s and 80s, could capture this background or do the kind of fast problem solving, or blinking, that it allows. He argued that our unconscious knowledge could never be captured symbolically. If AI could not find a way to address these issues, then it was doomed to failure, an exercise in "tree climbing with one's eyes on the moon."[12]
Reaction
Hostility and ridicule
Dreyfus' critique was met with tremendous hostility and, as Joseph Weizenbaum would complain, childishness.
Marvin Minsky said of Dreyfus and Searle that "they misunderstand, and should be ignored."[13] Dreyfus, who taught at MIT, was given the cold shoulder: he later said that AI researchers "dared not be seen having lunch with me."[14]
Dreyfus' critique was based on modern European philosophers such as Heidegger and Merleau-Ponty. AI researchers of the 1960s, by contrast, based their understanding of the human mind on engineering principles and efficient problem solving techniques related to management science. On a fundamental level, they spoke a different language. In 1965, there was simply too large a gap between European philosophy and artificial intelligence, a gap that has since been filled by cognitive science, connectionism and robotics research.
Vindication
The failed promises of AI research
The grandiose predictions of early AI researchers failed to come true, even fifty years on. As Nicholas Fearn wrote in 2007: "AI researchers clearly have some explaining to do."[15] Today researchers are far more reluctant to make the kind of predictions that were made in the early days, although some futurists, such as Ray Kurzweil, are still given to the same kind of optimism.
The biological assumption
The biological assumption, although common in the forties and early fifties,[16] was no longer assumed by most AI researchers by the time Dreyfus was writing. As Crevier (1993) writes, "few still held that belief in the early 1970s, and nobody argued against Dreyfus."[17] Although many still argue that it is essential to reverse-engineer the brain by simulating the action of neurons (such as Ray Kurzweil[18]), they do not assume that neurons are essentially digital, but rather that the action of analog neurons can be simulated by digital machines to a reasonable level of accuracy.[19] (Alan Turing had made the same observation in 1950.)[20]
The psychological and epistemological assumptions: unconscious reasoning and knowledge
Many AI researchers have come to accept that human reasoning does not consist primarily of high-level symbol manipulation.
Progress has been made towards discovering the "rules" that govern unconscious reasoning.[21] The situated movement in robotics research attempts to capture our unconscious skills at perception and attention.[22] Computational intelligence paradigms, such as neural nets and evolutionary algorithms, are mostly directed at simulating unconscious reasoning, and research into commonsense knowledge has focused on reproducing the "background" or context of knowledge. In fact, since Dreyfus first published his critiques in the 60s, AI research in general has moved away from high-level symbol manipulation, or "GOFAI", towards new models that are intended to capture more of our unconscious reasoning.
If human thinking is not, in fact, a kind of symbol processing, there are two possible responses. The first holds that artificial intelligence need not imitate human symbol manipulation at all, so it does not matter whether humans use symbols. The second holds that machines need not be limited to symbol manipulation, since they can also imitate our unconscious, non-symbolic reasoning.
Some, like John McCarthy, believe that artificial intelligence does not need to use the same algorithms that people do. "Artificial intelligence is not, by definition, simulation of human intelligence", writes McCarthy.[23] Russell and Norvig suggest an analogy with the early days of heavier-than-air flight: the problem could not be solved while researchers still insisted on imitating birds.
Others focus on solving the problems we solve unconsciously, like perception, motion, and judgments involving uncertainty. Robotics researchers like Hans Moravec and Rodney Brooks were among the first to realize that unconscious skills would prove to be the most difficult to reverse engineer (see Moravec's paradox). The most promising directions in research into unconscious skills, including neural networks and fuzzy systems, are often collected under the subfield of computational intelligence.[24] Dreyfus himself agrees that these sub-symbolic methods can capture the kind of "tendencies" and "attitudes" that he considers essential for intelligence and expertise.
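For a sense of what "sub-symbolic" means in practice, here is a minimal perceptron sketch (a toy illustration, not a program from the sources cited): whatever the network "knows" lives in numeric weights rather than in explicit symbols or rules.

```python
# A toy perceptron learning OR from examples; its "knowledge" ends up
# as a tendency encoded in weights, not as an explicit symbolic rule.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1  # weights, bias, learning rate

for _ in range(20):  # a few passes over the training data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        w[0] += lr * err * x1  # nudge each weight toward the target
        w[1] += lr * err * x2
        b += lr * err

print(w, b)  # nothing here names "OR"; the behavior is only in the numbers
```

Nothing in the trained parameters corresponds to a rule that can be read off, which is why such systems are described as sub-symbolic.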
This response, that it is possible for digital machines to simulate the complex processes of sub-symbolic thought, was originally proposed by Turing in his 1950 paper Computing machinery and intelligence. Writing more than two decades before What Computers Can't Do, he anticipated the essential points of Dreyfus' critique under the name "the argument from the informality of behavior".[25] Turing argued in response that, just because we do not know the rules that govern a complex behavior, it does not follow that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"[26]
These two options satisfy most of those who work in the field. According to Crevier (1993, p. 126), "most AI researchers ... do not make the psychological assumption."
Classification of intelligent activities
What Computers Can't Do concludes with a chart classifying intelligent activities (quoted):
| | I Associationistic | II Simple-Formal | III Complex-Formal | IV Nonformal |
|---|---|---|---|---|
| Characteristics of Activity | Irrelevance of meaning and situation | Meanings completely explicit and situation independent | In principle, same as II; in practice internally situation-dependent, independent of external situation | Dependent on meaning and situation that are not explicit |
| | Innate or learned by repetition | Learned by rule | Learned by rule and practice | Learned by perspicuous examples |
| Field of activity (and appropriate procedure) | Memory games, e.g. "Geography" (association) | Computable or quasi-computable games, e.g. nim or tic-tac-toe (seek algorithm or count out) | Uncomputable games, e.g. chess or Go (global intuition and detailed counting out) | Ill-defined games, e.g. riddles (perceptive guess) |
| | Maze problems (trial and error) | Combinatorial problems (non-heuristic means-ends analysis) | Complex combinatorial problems (planning and maze calculation) | Open-structured problems (insight) |
| | Word-by-word translation (mechanical dictionary) | Proof of theorems using mechanical proof procedures (seek algorithm) | Proof of theorems where no mechanical proof procedure exists (intuition and calculation) | Translating a natural language (understanding in context of use) |
| | Response to rigid patterns (innate releasers and classical conditioning) | Recognition of simple rigid patterns, e.g. reading typed page (search for traits whose conjunction defines class membership) | Recognition of complex patterns in noise (search for regularities) | Recognition of varied and distorted patterns (recognition of generic or use of paradigm case) |
| Kinds of Program | Decision tree, list search, template | Algorithm | Search-pruning heuristics | None |
See also
- Philosophy of artificial intelligence
- Hubert Dreyfus
- A detailed summary of the book's argument
Notes
- ^ Dreyfus 1965
- ^ Dreyfus 1972, Dreyfus 1979, Dreyfus 1992
- ^ Dreyfus & Dreyfus 1986
- ^ McCorduck 2004, pp. 211-243, Crevier 1993, pp. 120-132
- ^ Crevier 1993, p. 125
- ^ Quoted in Fearn 2007, p. 51
- ^ Dreyfus 1979, p. 157
- ^ McCulloch & Pitts 1943, Crevier 1993
- ^ Dreyfus 1992, pp. 158-62
- ^ Dreyfus & Dreyfus 1986; see also From Socrates to Expert Systems. The "knowing-how"/"knowing-that" terminology was introduced in the 1950s by philosopher Gilbert Ryle.
- ^ Gladwell 2005
- ^ Dreyfus 1992, p. 119
- ^ Crevier 1993, p. 143
- ^ Crevier 1993, p. 122
- ^ Fearn 2007, p. 40
- ^ BIO ASSUMPT IN GEN 0
- ^ Crevier 1993, p. 126
- ^ Kurzweil 2005
- ^ Kurzweil 2005
- ^ Turing 1950 under "(7) Argument from Continuity in the Nervous System."
- ^ Russell & Norvig 2003, p. 52
- ^ See Brooks 1990 and Moravec 1988
- ^ dfdf
- ^ COMPUTATIONAL INTELLIGENCE TEXTBOOK ()
- ^ Russell & Norvig 2003, pp. 950-951 identify Dreyfus' argument as the one that Turing responds to.
- ^ Turing 1950 under "(8) The Argument from the Informality of Behavior"
References
- Brooks, Rodney (1990), “Elephants Don't Play Chess”, Robotics and Autonomous Systems 6: 3-15, <http://people.csail.mit.edu/brooks/papers/elephants.pdf>. Retrieved on 30 August 2007
- Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3
- Dreyfus, Hubert (1965), Alchemy and AI, RAND Corporation
- Dreyfus, Hubert (1972), What Computers Can't Do, New York: MIT Press, ISBN 0-06-090613-8
- Dreyfus, Hubert (1979), What Computers Can't Do, New York: MIT Press, ISBN 0-06-090624-3.
- Dreyfus, Hubert (1992), What Computers Still Can't Do, New York: MIT Press, ISBN 0-262-54067-3.
- Dreyfus, Hubert & Dreyfus, Stuart (1986), Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer, Oxford, U.K.: Blackwell
- Fearn, Nicholas (2007), The Latest Answers to the Oldest Questions: A Philosophical Adventure with the World's Greatest Thinkers, New York: Grove Press
- Gladwell, Malcolm (2005), Blink: The Power of Thinking Without Thinking, Boston: Little, Brown, ISBN 0-316-17232-4.
- Horst, Steven (Fall 2005), “The Computational Theory of Mind”, in Zalta, Edward N., The Stanford Encyclopedia of Philosophy, <http://plato.stanford.edu/archives/fall2005/entries/computational-mind/>.
- Kurzweil, Ray (2005), The Singularity is Near, New York: Viking Press, ISBN 0-670-03384-7.
- McCulloch, Warren & Pitts, Walter (1943), “A Logical Calculus of the Ideas Immanent in Nervous Activity”, Bulletin of Mathematical Biophysics 5: 115-133
- Moravec, Hans (1988), Mind Children, Harvard University Press
- Newell, Allen & Simon, H. A. (1963), “GPS: A Program that Simulates Human Thought”, in Feigenbaum, E.A. & Feldman, J., Computers and Thought, McGraw-Hill
- Russell, Stuart J. & Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, NJ: Prentice Hall, ISBN 0-13-790395-2, <http://aima.cs.berkeley.edu/>
- Turing, Alan (October 1950), “Computing machinery and intelligence”, Mind LIX (236): 433-460, ISSN 0026-4423, doi:10.1093/mind/LIX.236.433, <http://loebner.net/Prizef/TuringArticle.html>

