Talk:Moravec's paradox
Why?
Why is this called a "paradox?" Wikipedia defines paradox as: "an apparently true statement or group of statements that leads to a contradiction or a situation which defies intuition." Where's the contradiction or nonintuitiveness? Moravec's statement strikes me as simply an empirical claim, which may be true or false. How is this any different from saying, "Men are at their worst trying to do the things most natural to women," or similar for birds / fish, cops / robbers, Cowboys / Indians, etc.? Are those paradoxes too?
Also -- does this term "Moravec's paradox" exist anywhere other than on wikipedia? Google shows only 12 links.
Paradox category
Charles G had removed this from the paradoxes cat. I agree it is on the outskirts, but still a paradox. A paradox does not need to be overly mysterious. All that qualifies it is that it can be formulated as "P and not-P", and I think this qualifies. Gregbard 12:19, 30 August 2007 (UTC)
- I don't believe this is a paradox in the strict sense a logician would prefer. It's a profoundly counter-intuitive fact about human skills. I am considering renaming the article because the title seems to have caused confusion. I am not aware if it is known by other, perhaps more suitable, names. More research will sort this out.---- CharlesGillingham 07:24, 31 August 2007 (UTC)
Hey, in case anyone's still watching: what about the fact that it's not even true? There are things that are easy, and hard, for BOTH humans and computers. I made this table because it might be useful somewhere.
| | Easy (computers) | Hard (computers) |
|---|---|---|
| Easy (humans) | add small numbers; apply symbol manipulation rules | recognize faces and speech; identify relevant information |
| Hard (humans) | quickly manipulate large numbers of symbols; repeat a task many times | prove new, useful theorems; defraud humans |
Btw, why does Wikipedia make it so hard to edit tables that are in articles? =( MrVoluntarist (talk) 17:43, 6 February 2008 (UTC)
- You've misread the idea. It's not saying that "everything is reversed", it's saying that "some of the things that are reversed are really unexpected." Obviously, some things are easy for both machines and people, and some things are hard for both machines and people. This is what we expect. What's interesting is the kinds of things that are reversed. For example, in your table above, the human-hard, machine-easy category should also include "winning at Chess" and "doing well on the GRE", things we tend to think of as "highly intelligent" behavior. Also, the human-easy, machine-hard category should include "walking across the park" and "catching a ball", things a five-year-old can do. What's interesting is that "intelligence" isn't the "highest human faculty", contrary to expectation.---- CharlesGillingham (talk) 01:01, 7 February 2008 (UTC)
- Well, I don't mean to be a monkeywrench, but you could also say that "adding one-digit numbers" is something an 8-year-old can do, and, yep, it's easy for computers. Also, "proving a new, useful theorem" would cause a human to be judged "highly intelligent", and, yep, that's hard for computers as well. So is the paradox, as you hint at, just saying, "hey, some things are reversed, and it's unexpected which ones they will be"? Btw, can computers ace all parts of the GRE? Or just certain sections? MrVoluntarist (talk) 16:58, 12 February 2008 (UTC)
- Just parts of the GRE. Several researchers in the early 1960s focused on programming computers to do well on intelligence tests; examples include Daniel Bobrow's STUDENT (which could do algebra word problems), Tom Evans' ANALOGY (which could solve problems like "A is to B as C is to ?" presented as diagrams), Herbert Gelernter's GTP (which could prove theorems in Euclidean geometry), and so on. These programs ran on machines with no more processing power than a modern wristwatch, but they could perform as well as graduate students on these difficult abstract tasks. Researchers today realize that "doing well on an intelligence test" is not a particularly interesting or fruitful area of research: you only learn how to solve a specific problem. And, compared to the difficult problems of mobility, perception and communication, doing well on intelligence tests is relatively easy.
- I guess the key to understanding the significance of this discovery is to look backward to Locke and Shakespeare, where they talk about our "faculty of reason" as being our "highest" ability ("What a piece of work is Man! How noble in reason!" and "In apprehension how like a god"). It's the thing that puts us at the top of the chain of being, the thing that creates this great gulf between us and the "lower" animals. We're "homo sapiens", and our sapience is an almost spiritual ability. Science fiction still often assumes that sapience will change machines into god-like beings. However, the experience of AI research suggests that our ability to reason is not such a big deal after all, and that we should be far more impressed with the evolutionary leaps made by our ancestors hundreds of millions of years ago. ---- CharlesGillingham (talk) 00:41, 14 February 2008 (UTC)
- Okay, that makes more sense. Now, I don't know if this runs afoul of Original Research, but could we maybe reword the opening to, "normal intuitions about which problems are 'easy' or 'hard' do not consistently apply to machines". Moravec would probably agree that the intuitions about ease of adding one-digit numbers, and the difficulty of proving useful theorems, carry over just fine. Btw, correct me if I'm wrong, but weren't the problems fed to ANALOGY already highly constrained? That is, it didn't receive them as picture files, the way humans would take the test. MrVoluntarist (talk) 04:41, 14 February 2008 (UTC)
- I've rewritten the introduction in light of our discussion. Hopefully it's clearer now. ---- CharlesGillingham (talk) 21:32, 14 February 2008 (UTC)
Not a philosophical article
This article should not be categorized as philosophy and is not in the scope of Wikiproject:Philosophy. This is, as it states, a principle in robotics and artificial intelligence. It should be categorized only as artificial intelligence or robotics. (Its only relation to philosophy would be as an example in embodied philosophy or embodiment. This merits a "See also", not a category.) ---- CharlesGillingham 07:24, 31 August 2007 (UTC)
- Well, that's a pretty narrow view there. I think it qualifies on several counts: philosophy of mind, philosophy of science, and logic. I think at least one applies. Fortunately, the field options we have mean we don't have to choose. Greg Bard 13:23, 12 September 2007 (UTC)
More references needed
Most classes in robotics or artificial intelligence mention some version of this principle (at least the ones I took). It is part of the motivation for "Nouvelle AI" (a school of AI research named by Rodney Brooks and also practiced by Hans Moravec and most people on the robotics side of things). More research will show this. ---- CharlesGillingham 07:24, 31 August 2007 (UTC)
- I've found a few: Minsky, McCorduck, etc. ---- CharlesGillingham (talk) 14:54, 4 June 2008 (UTC)

