Talk:Artificial intelligence
Length
Because of their length, the previous discussions on this page have been archived. If further archiving is needed, see Wikipedia:How to archive a talk page.
Archives: Archive 1 (Feb 2004 to Oct 2005), Archive 2 (Oct 2005 to Jul 2006)
Comment moved here
Business part and the number of robots
That 1995 information on the number of robots worldwide should be replaced. —Preceding unsigned comment added by 91.155.157.163 (talk) 18:21, 6 October 2007 (UTC)
Link to peer reviewed paper
Hi, I recently added some new information regarding the comparison of various classification techniques, with a reference to a peer reviewed article. There seems to be some controversy on this subject; the link has been removed several times. I am currently doing my PhD on this topic and I know the information is very relevant.
Is the reference and the external link http://www.pattenrecognition.co.za suitable for this site? If not, what can I do so that this information is not repeatedly removed?
cvdwalt
- There are lots of pages about AI on Wikipedia. I personally believe that the main AI page should be as general and introductory as possible; I wonder if your information should be on the Pattern Recognition page, or under its own subheading? Bugone 08:38, 9 March 2007 (UTC)
- Some minor points: it would have been better to use the "ref" tag so there is a link between your information and the link to your site; look at the reference in the opening paragraph for an example. Perhaps the link to your site should have been added towards the bottom of the list of external links; try to keep the more important links at the top. Having said that, I'm sure your expertise could be very useful around here. I hope you don't get discouraged; plenty of work for everyone. Bugone 08:51, 9 March 2007 (UTC)
- The new information claimed above contains no information about AI - simply a report that there is a paper on this topic (not in itself a remarkable fact). The link to the paper is spam IMHO. (signature added, apologies) Michael Fourman 11:37, 9 March 2007 (UTC)
- I fixed the link so it points directly to the paper, so it's not obvious advertising, but I agree it doesn't add information. It's hard to justify its importance unless it mentions some conclusions or how the research is relevant. Actually, it has been added to the pattern recognition page now, which I suspect is where it belongs, so it really needs to be tied in better here or removed. cvdwalt, can you tie it in better? Bugone 03:48, 10 March 2007 (UTC)
Sept 2006
(originally placed in the middle of another message, Gwernol 07:10, 7 September 2006 (UTC)):
- The following data, available at certain websites (confirmed by the availability of the same data on more than one site), disagree with the information at Wikipedia.
1956: Demonstration of the first running AI program, the Logic Theorist (LT), written by Allen Newell, J.C. Shaw and Herbert Simon at Carnegie Mellon University. 1979: The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab.
No references to Cyc/AM yet?
[edit] "Hello, Professor Falken..."
There is also the AI Joshua in the MGM film WarGames, a very good movie from the '80s; Joshua is a type of AI that learns how to learn. --Seishirou Sakurazuka 23:38, 7 November 2006 (UTC)
Poor definition
1st sentence says AI is "the study of ...". It is not! It is intelligence which is synthetic / manufactured / man-made. The study of AI is not AI! "Life" is not the study of living things; that's "biology". "Intelligence" is not the study of clever and stupid things; it's an ability or a potential for creative thought. And "artificial intelligence" is not the study of anything! The article definition is wrong. Paul Beardsell 09:26, 18 December 2006 (UTC)
Killer example. When someone says that "artificial intelligence is impossible" (or imminent or whatever) it is not the study of something which is being referred to. Paul Beardsell 09:54, 18 December 2006 (UTC)
- We get it! (but why did you write it then?) --moxon 12:55, 19 December 2006 (UTC)
- Too much spare time? On a very similar naming issue at artificial life my argument was not accepted. There it is held that artificial life is the academic discipline, not the thing itself. If you agree so readily with me perhaps you could review artificial life? Paul Beardsell 20:51, 19 December 2006 (UTC)
- Perhaps it is the case that "AI" can refer both to 'intelligence which is synthetic / manufactured / man-made' and the study of this? It does not have to be either one or the other. S Sepp 16:23, 20 December 2006 (UTC)
- I think it is also used that way, but that such usage has arisen out of shorthand or laziness. Context sometimes allows that by AI what is meant is "the study of AI" or "research into AI". But, without context, it means the thing itself. Paul Beardsell 22:58, 20 December 2006 (UTC)
DARPA prize money
DARPA has since secured the prize money for the 2007 Grand Challenge: $2 million for first place, $1 million for second and $500,000 for third. 24.107.153.58 06:06, 30 December 2006 (UTC)
The 1994 VaMP is probably the most influential driverless car ever. Is anybody out there in a legal position to upload an image of the VaMP? Science History 13:37, 15 June 2007 (UTC)
Challenge and Prize - Purpose?
Could somebody explain what the purpose of this section is? Why is it even mentioned? The DARPA Grand Challenge isn't the only challenge of this kind (I believe the UK MOD runs a similar prize with autonomous submarines). The title of the section is misleading and the section seems out of place when describing AI in general. DPMulligan 14:59, 1 January 2007 (UTC)
AI in business
Says 750,000 robots were in use worldwide, 500,000 from Japan. From Japan or in Japan? There's a big difference between the two. Peoplez1k 01:48, 18 January 2007 (UTC)
I think this section needs some work; it reads like a list. Bugone 03:04, 10 March 2007 (UTC)
General Quality
Given the amount of erudition evident on this (talk) page, the page we're talking about is an embarrassment, especially in the history section. The selection of systems discussed seems bizarre (and probably a result of self or friend promotion), and impoverished. For example, a surge in interest in game theory (if we agree that this is one of the major recent features of AI research) certainly is not called "Machine Learning" (corrected). And applications of Bayes theory still seem to be very important (viz the recent development of probabilistic logics) and have little or nothing to do with principal components (that bit I deleted wholesale).
Basically, this page seems to be under attack by vandals or people ignorant of its subject matter (or possibly Stephen Colbert). Witbrock 19:03, 28 January 2007 (UTC)
Done This should be fixed now ---- CharlesGillingham (talk) 01:32, 8 December 2007 (UTC)
Programming language
Computers need help to even get their inner workings right. With every new programming language the computer gets more independent. We have meaningful variables instead of memory locations, and now even garbage collection. But it is Sisyphus-work. With every more complex program, we (and not the computer) invent new IT techniques. Why can the computer not sort its own hard drive? At least it can do defragmentation on its own. But new file systems are really hard work. And math systems like Maple do an amazing job, but they do not reflect on themselves: nothing of the power of Maple or Mathematica is used to improve the program itself. As a typical application, I want the computer to optimize my code for speed and memory footprint. Optimization was done by hand for a long time, and now the optimization techniques are programmed by hand; wow, one step ahead, and the compiler's price is as high as a skyscraper because of the many man-hours. How can we expect that a computer will be able to do something meaningful in the real world? Even solve our problems, care for something? Anyhow, I am still happy with its number-crunching abilities. Arnero 16:27, 9 February 2007 (UTC)
Opening paragraph
On 16 Jan something bad happened to the opening paragraph, which was left unchanged until 12 Feb, when Bugone attempted to "cleanup the rambling". I will try to restore some of the original content. --moxon 14:40, 15 February 2007 (UTC)
- Looks good. I've also been trying to clean up the history section. I believe the history section on this page should just be a summary of what is in the "History of Artificial Intelligence" page, key moments so to speak; for the finer details they can see the main history page. The events I put in this summary may be POV, so perhaps someone else could look over what were some of the key moments -- bugone
In the opening Paragraph it says "AI is studied in overlapping fields of computer science, psychology, philosophy, neuroscience, and engineering,..." I think linguistics is missing there. -- Simon (theother@web.de) —Preceding unsigned comment added by 85.179.207.159 (talk) 20:26, August 28, 2007 (UTC)
External Links
There are a lot of links which I feel are too specific for this page; I doubt they would be useful to someone new to the topic. I think links from this page should be to general introductory AI pages, with more technical links moved to other pages which deal with AI topics in more detail. I will try to clean it up. Bugone
- I also removed a link that required an external applet to view, though this situation may change. It's the link I initially identified as malicious spam, see this edit. Robotman1974 08:10, 8 March 2007 (UTC)
- I don't see any spam when I load that Java applet, but I'm not sure about the link anyway. According to the Manual of Style, embedded links should be redirected via the references section. Perhaps those paragraphs on classifiers in 'schools of thought' need their own subheading, or should be moved into a pattern recognition topic; I'm not sure they belong there (but I don't claim to be an expert) Bugone 08:46, 8 March 2007 (UTC)
I've pretty much finished cleaning up the links. The first group of 5 links on the page now looks reasonable to me; the second group I'm not sure about, but I gave them the benefit of the doubt. I removed a lot of links; was I too savage? I will give everyone a week or so to review my changes, and if I don't hear anything I will remove the "cleanup" notice. Bugone 11:15, 8 March 2007 (UTC)
- The cleanup job looks complete to me, so I've removed the link cleanup tag. Thanks for all the work Bugone. Robotman1974 11:26, 8 March 2007 (UTC)
- Wow! There are way too many external links, so I forbore to add what me thinks is a good one: Machina sapiens and Human Morality. Pawyilee (talk) 16:35, 4 March 2008 (UTC)
- The link doesn't work at all for me... I will try later; maybe it is a server problem... Raffethefirst (talk) 08:23, 5 March 2008 (UTC)
History section
I've put the timeline into table form, but the entries themselves need to be looked at. In particular, it strikes me as odd that nothing of note happened between 1974 and 1997. One would think there must have been some major developments between those years that would fit into this article. Robotman1974 11:42, 8 March 2007 (UTC)
- Perhaps it would be appropriate to put a link to AI_Winter, which talks about the lull in research/funding at that time. It's not actually an event though, more a lack of events, so I'm not sure it belongs. Bugone 23:22, 8 March 2007 (UTC)
Good work with the history section. I suggest we move it to the top. Its closing paragraph can form a prelude to "Schools of thought". And if any of you can think of a way to prevent every second sci-fi fan (including me:) from adding a paragraph about his favourite character to the fiction section, it would help. --moxon 18:41, 9 March 2007 (UTC)
A.I.
Is it not a misery! I think we should add something about criticism to the article, because is it not disappointing that scientists make robots just for testing their ability to coordinate, and make human-looking robots, instead of finding ways to clear mines or sending bots to Mars to explore areas of the planet? 82.114.81.148 21:27, 22 March 2007 (UTC)
robot != ai
What the hell is a picture of ASIMO doing on the front page? It is not even an example of weak AI; all of its moves are prerecorded. Although I admit that it can walk upstairs and do other cool things, it is simply a machine and is about as stupid as any other. If some weirdo really wants to use a picture of a robot (which has nothing to do with AI anyway besides stereotypes), at least use a picture of a developmental robot -> http://en.wikipedia.org/wiki/Developmental_robotics <-
- Cool it. Although I agree that Asimo's prerecorded performances don't cut it, please also refer to [1] and [2]. What counts as intelligence is, unfortunately, still a matter of opinion. However, feel free to change the picture. --moxon 14:58, 10 April 2007 (UTC) P.S. all forms of AI are simply machines
- I agree with the last comment. Any form of AI is a machine. In addition, I suggest you take a look at ASIMO as you appear to underappreciate his capabilities. Even though his movements are prerecorded to some extent - he balances and evaluates the current situation on his own. Similarly, he does facial recognition, posture-, gesture-, general environment-, and object recognition. Oh, and he can distinguish sounds and their direction. ---hthth 12:24, 28 April 2007 (UTC)
- I think the picture of Asimo is fine. However, it is unfortunate that every picture in the article is of a robot, since AI and robotics are distinct fields. The fields of AI and robotics share many things in common, but neither is a superset of the other. We don't want to give people the impression that AI is only useful for the development of robots. Therefore, I replaced the picture of Asimo with the picture shown on the Deep Blue page. I figure it's a good picture to show at the top of the article, since it has both historic significance (many of the first AI programs were chess or checkers players) and current significance (since the goal of beating the human world champion was achieved fairly recently, and a similar goal is used for the RoboCup competition.) However, I'm open to a different picture if someone has something better to suggest. Colin M. 13:57, 13 May 2007 (UTC)
- Good call and argument Colin, I second your suggestion. ---hthth 15:37, 15 May 2007 (UTC)
- But isn't this like putting a picture of Pluto at the top of the Planet article? --moxon 12:54, 16 May 2007 (UTC)
AI languages and programming styles
IMO this section discredited AI with overgeneralized statements and oversimplified examples:
- Bayesian work often uses Matlab or Lush
- real-time systems are likely to use C++
- prototyping rather than bulletproof software engineering
- basic AI program is a single If-Then statement + All programs have If-Then logic
- but actually it is an automated response
- --moxon 10:59, 18 May 2007 (UTC)
Done This is fixed in the current version ---- CharlesGillingham (talk) 02:01, 8 December 2007 (UTC)
Sims as Landmark in AI History
Is The Sims really a landmark in AI history? I think it is not, and its reference in the AI timeline should be removed.
- -- AFG 16:56, 18 May 2007 (UTC)
I agree. Most of the other things in the list are watershed events: major changes in the field of AI. While it's impressive that the Sims is the best-selling computer game of all time, I'm not aware that the AI in The Sims is significant from a standpoint of pushing the field of research forward. Colin M. 03:15, 22 May 2007 (UTC)
Done Sims are long gone. ---- CharlesGillingham (talk) 02:01, 8 December 2007 (UTC)
General comments
The best quality of this article is that it is short. The 'Mechanisms' section needs a complete re-write, however.
The 'if-then' opening paragraph is oversimplifying things to an extreme. I like the idea of talking about inference in the beginning, though. If you can do inference, then you can make decisions. Then the question becomes: what type of inference, how do you choose decisions based on your inference, and how the inference is made. In some cases the inference is trivial, i.e. optimization problems (where it is easy to compute the cost for any given solution and the problem is restricted to searching); in other cases the decision making is trivial, i.e. classification problems (where there is a small number of finite classes with a known cost for misclassification and the difficulty is in inferring the correct model for the probability of a class given some observations).
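To make that last point concrete, here is a minimal Python sketch (not from the discussion; the classes, probabilities and costs are made up) of how, once class probabilities have been inferred, the decision step in classification reduces to picking the class with the lowest expected misclassification cost:

```python
# Sketch: classification with trivial decision-making. The hard part
# (inferring P(class | observations)) is assumed already done; deciding
# is just minimizing expected misclassification cost.

probs = {"spam": 0.7, "ham": 0.3}   # inferred P(class | observations)
cost = {("spam", "ham"): 1.0,       # cost of saying "ham" when truth is "spam"
        ("ham", "spam"): 5.0,       # cost of saying "spam" when truth is "ham"
        ("spam", "spam"): 0.0,
        ("ham", "ham"): 0.0}

def expected_cost(decision):
    return sum(p * cost[(truth, decision)] for truth, p in probs.items())

best = min(probs, key=expected_cost)  # the trivial decision step
print(best)  # "ham": expected cost 0.7, vs 1.5 for "spam"
```

Note that with asymmetric costs the minimum-expected-cost decision ("ham") can differ from the most probable class ("spam").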
The split between the two 'groups' of AI techniques is unfortunate at best. Probabilistic (i.e. Bayesian) techniques can be incremental and distributed. I don't think it is of any value to split models and paradigms into mutually exclusive categories. There are many orthogonal characteristics, e.g. ad-hoc approaches versus principled approaches and biologically-inspired versus mathematically-inspired.
The other splits in those cases seem to be misinformed at best. There are 'neural networks', for example, which can be a type of Bayesian system (if you use Bayesian formalisms), yet Bayesian networks are listed in a different category. Also, expert systems and case-based reasoning systems are applications of AI, not AI models. You can use any model and formalism you want for implementing these systems.
A more sensible split in my opinion would be one that talks about the different research/application fields: modeling, inference and estimation, planning, control and decision-making and optimization. Of course each of these fields incorporates aspects of the other fields. This should also be made clear in this section. --Olethros 10:42, 22 May 2007 (UTC)
- Agreed. It would be nice if the "split" between fields could be something defined in an authoritative source (perhaps the Russell & Norvig book) so that there's an in-print justification for choosing a specific set of categories. Colin M. 12:41, 23 May 2007 (UTC)
- Agreed. I have added an Expert tag until this page can be fixed (see below).CharlesGillingham 23:54, 27 July 2007 (UTC)
Done All these issues have been resolved. ---- CharlesGillingham (talk) 01:36, 8 December 2007 (UTC)
Layout of page
Some of this page's text appears to have disappeared underneath a picture of Kasparov playing chess against Deep Blue - would somebody who understands the formatting better than me be able to tidy it up? Alchemeleon 17:55, 30 May 2007 (UTC)
- I moved the {{portal}} template and it seems to be fixed now. -- Schaefer (talk) 19:06, 30 May 2007 (UTC)
Fiction
I moved this ever growing section to its own sub-article where different plots can be described and compared in more detail with examples. A different article name might be more appropriate. --moxon 13:22, 13 June 2007 (UTC)
Prometheus Project
Was this a challenge? This paragraph might be more appropriately added to the List of notable artificial intelligence projects. Was any AI involved? I didn't see any references to AI. --moxon 16:52, 13 June 2007 (UTC)
Done I agree, and this is gone now. ---- CharlesGillingham (talk) 02:02, 8 December 2007 (UTC)
Bayesian networks: traditional AI or soft computing?
This page lists Bayesian networks as "traditional AI" and the soft computing page claims them as under "soft computing". Which is correct, and which page should be corrected?
Bob Petersen
- This one was wrong. This has been corrected. ---- CharlesGillingham (talk) 01:41, 8 December 2007 (UTC)
[edit] Topic of "problems in creating AI"
Why is there no topic on the problems in creating AI? I talked with an expert in computers who told me AI can't yet reliably understand spoken language. Google search engine programmers say the same: search engines can be very stupid. Where is the Con and the Pro on the success/failure of AI?--Mark v1.0 07:29, 1 July 2007 (UTC)
- Absolutely. If you define the problems, then you can also discuss proposed solutions and how different schools of AI subscribe to very different solutions, for example Neats vs. scruffies, top-down and bottom-up, computational intelligence vs GOFAI. This goes a long way towards explaining how AI research defines itself. Some information about some of the major problems is at
- History of artificial intelligence#The problems
- History of artificial intelligence#Critiques from across campus
- CharlesGillingham 21:45, 27 July 2007 (UTC)
Expert Tag
I have tagged the main section of this article as needing the attention of an expert. The problems, as I see them, are:
- Emphasis: Some subjects that are not key aspects of artificial intelligence (like classifiers) are given too much emphasis. Key areas like natural language processing are not even mentioned outside the lead paragraph.
- Organization: The article could be reorganized by "schools of thought" or "approaches", or by applications (i.e. computer vision, natural language processing, expert systems, etc.), but it needs some kind of consistent organization.
- Misinformation: The article incorrectly states that AI programs are "generally" production systems. The description of "fuzzy logic" is inaccurate. I'm not sure the division of AI into two competing subfields has been done accurately, or if it's even true that there are exactly two subfields. For example, who would consider Rodney Brooks "conventional AI"?
This article needs some reference that could define the "key areas of AI research", commercially and academically. There needs to be some expert source that determines what should and should not be included in this article.CharlesGillingham 23:51, 27 July 2007 (UTC)
Done All these issues have been resolved. ---- CharlesGillingham (talk) 01:41, 8 December 2007 (UTC)
VaMP (1995) vs 2005 DARPA Grand Challenge
We had better delete most of the references to this heavily promoted DARPA race, which actually did not achieve any AI milestone at all. The milestones in autonomous driving were set 10 years earlier by the much more impressive car robots of Mercedes-Benz and Ernst Dickmanns. Let us compare his VaMP robot car to the five cars that finished the course of the 2005 DARPA Grand Challenge: Stanley, Sandstorm, H1ghlander, TerraMax and Kat-5. My sources are mostly Wikipedia and the rest of the web.
- Distance: in 2005, the DARPA cars drove 212 km without human intervention; in 1995, the VaMP drove up to 158 km without human intervention.
- Road: the DARPA cars drove on a dirt road flattened by a steamroller; the VaMP drove on the Autobahn. In both cases the road boundaries were easily identifiable by computer vision.
- Navigation: like many commercial cars, the DARPA cars used GPS navigation, essentially driving from one waypoint to the next (almost 3000 waypoints for the entire course, several waypoints per curve); like humans, the VaMP drove by vision only.
- Speed: the DARPA cars reached speeds up to 40 km/h; the VaMP reached speeds up to 180 km/h. So the VaMP was more than four times faster, although its computer processors apparently were 1000 times slower.
- Traffic: the DARPA cars did not encounter any traffic, only a few stationary obstacles; the VaMP drove in traffic around moving obstacles, passing other cars.
Interestingly, the 2007 Urban Grand Challenge is trying to repeat something the VaMP was already able to do 12 years ago. Willingandable 18:07, 24 August 2007 (UTC)
- I would like to see some technical comparison of the hard and soft computational abilities between these two approaches. --moxon 15:00, 27 August 2007 (UTC)
[edit] Topic of "problems in creating AI"
It seems to me that an encyclopedic article on AI must address the persistent challenges and setbacks. "AI Winter" is briefly mentioned, and is better described in the separate page on the "History of AI." Yet, here, the problem of unfulfilled expectations is glossed over. In fact, an example of overstatement is found in this very article, where it says: "AI logistics systems deployed in the first Gulf War save the US more money than spent on all AI research since 1950[citation needed]." Why place an unsubstantiated assertion in an article? I suggest that the article should stick to the verifiable facts. Timlevin 03:47, 27 August 2007 (UTC)
Image: Kasparov vs Deep Blue
I see the image at the top of the article, depicting Garry Kasparov vs Deep Blue, is flagged for speedy deletion. I hope it stays; despite the ups and downs, this match was historic in the (short) history of AI. For many years many people thought that a computer could never play chess effectively, and the struggle was long, starting at the very beginning of AI with von Neumann. Later, Ken Thompson (builder of Belle) said that the way chess computing developed was a little disappointing in terms of AI, but with all due respect I disagree: cognitive neuroscience shows us massively parallel computation of jillions of individually simplistic (pattern matching) heuristics, similar to Belle computing 100,000 positions per second with "class C" heuristics. This is getting us somewhere :-) Pete St.John 18:14, 5 September 2007 (UTC)
- Permission obtained. Many thanks to Murray Campbell at IBM. Pgr94 (talk) 16:55, 29 February 2008 (UTC)
AI in Films
Any mention of AI in films?CoolRanch3 17:07, 17 September 2007 (UTC)
- See artificial intelligence in fiction ---- CharlesGillingham 20:30, 17 September 2007 (UTC)
AI in Fiction
Added many of the classic fictional examples of AI to Artificial intelligence in fiction. Listed them thematically. Two concerns I have:
- 1) Robot rights should probably come back to the main page
- 2) I think a lot of people will miss the fiction page based on the current main page layout. Perhaps the fiction subsection on the main page should be rewritten to be less of a list and more of an intro. Any other thoughts?
Thomas Kist 23:59, 23 September 2007 (UTC)
Reorganization proposal
I would like to re-organize this article into a tight summary style and bring it closer to sources like Artificial Intelligence: A Modern Approach. If you'd like to help (or have some objection) please see Talk:Artificial intelligence/Structure proposal. ---- CharlesGillingham 23:27, 2 October 2007 (UTC)
Here's the latest version of the proposal, as it is now, transcluded:
- Perspectives on AI (is there a better title?)
  - AI in myth and fiction
    - More or less as written in artificial intelligence#Fiction
  - The rise and fall of AI in public perception
    - See also: AI Winter
    - More or less as written in artificial intelligence#History
  - The philosophy of AI
    - More or less as written in artificial intelligence#Philosophy
  - The future of AI
    - Very short description of futurists like Ray Kurzweil and the possibility of Strong AI
- AI research
  - Approaches to AI research
    - Short bulleted list of neats vs. scruffies, GOFAI vs computational intelligence, GOFAI vs embodied/situated/Nouvelle AI, strong AI vs. "Applied AI"/intelligent agent paradigm, etc.
  - Problems of AI research
    - A bulleted list of all the major sub-problems of AI, with short paragraphs on deduction, natural language processing, computer vision, machine learning, automated planning and scheduling, commonsense knowledge/knowledge representation, and so on, as described in Russell and Norvig or other major AI textbooks.
  - Tools of AI research
    - A bulleted list, with short paragraphs on logic programming, search algorithms, optimization, constraint satisfaction, evolutionary algorithms, neural networks, fuzzy logic, production systems, etc., as described in Russell and Norvig or other major AI textbooks.
- Applications of artificial intelligence
  - A bulleted list with short paragraphs or sections on some of the applications: expert systems, data mining, driverless cars, game artificial intelligence, etc.
As an alternative to "Perspectives on AI", how about "AI in popular discourse" or something along those lines? Because it seems to me the unifying characteristic of the heading's subtopics is that they are nontechnical and described mostly in writings intended for general lay audiences, rather than for other specialist academics. This applies least for the philosophy of AI, which does have a foundation of rigorous academic work on which to write an article, so perhaps it should be pulled out as a new top-level section between the popular stuff (SF, history, and predictions) and the gritty details of modern AI methods. -- Schaefer (talk) 16:57, 3 October 2007 (UTC)
- I was thinking that history, philosophy, fiction and futurism are all subjects where people talk about AI but don't do AI. That way the next section can be about how AI is done. ---- CharlesGillingham 19:05, 3 October 2007 (UTC)
I think descriptive paragraphs are essential for a good article; bulleted lists might be a bit too tight. They also tend to grow uncontrollably. You might have some resistance if subtopics like challenges and AI in business and toys are completely dropped; I guess they could form part of "Applied AI"? I am a bit concerned that major AI textbooks primarily focus on symbolic AI. These are not the sources used in electronic engineering (my POV). But do the restructuring and let's see what happens :-) --moxon 12:42, 5 October 2007 (UTC)
- Absolutely. A paragraph for each sub-sub-topic (like "natural language processing"). And I want to move the paragraphs about games, business, driverless cars, etc. into the last section, "Applications of AI". Nothing would be lost except the middle section now called Artificial intelligence#Mechanisms.
- I'm very into including the perspectives from EECS.
- Would you be interested in helping me write some paragraphs? I am very familiar with older topics, like knowledge representation or logic but I'm a little fuzzy on newer ones, like Bayesian networks. I'll put the current draft at Talk:Artificial intelligence/AI research (draft) ---- CharlesGillingham 00:52, 7 October 2007 (UTC)
I'll help where I can and when I have the time (sorry about the long silence). I'd be more comfortable with a migration of the article rather than a replacement. It would keep more editors involved in the process. It wouldn't be too difficult to implement most of the suggested structural changes --moxon 20:52, 10 November 2007 (UTC)
I agree with your proposal on the "Approaches to AI research" section. Currently that section is confusing, trying to fit everything into two categories, which is simply not the case. Classifying them from different perspectives would be a much better choice, as you proposed. Also some statements there seem problematic to me. For example: "Conventional AI mostly involves methods now classified as machine learning..." I think machine learning was never the mainstream in GOFAI. Instead, knowledge representation and reasoning is an important topic for GOFAI, but it is not mentioned at all in that section. Took (talk) 10:42, 23 November 2007 (UTC)
- I have carried out most of this plan. See notes below. ---- CharlesGillingham (talk) 01:54, 29 November 2007 (UTC)
[edit] Section "Why Artificial Intelligence Needs Philosophy"
The section newly added by Pooya.babahajyani (talk) on 6 October 2007 seems to be copied from "Some philosophical problems from the standpoint of artificial intelligence" by John McCarthy and Patrick J. Hayes, see http://www-formal.stanford.edu/jmc/mcchay69/node3.html. Therefore I will remove it. --Ioverka 02:19, 7 October 2007 (UTC)
Loebner Prize in "Research Challenges" section
Hi, I think the Loebner Prize should be included in the "Research Challenges" section. Any comments about this? —Preceding unsigned comment added by 148.87.1.172 (talk) 16:54, 19 November 2007 (UTC)
Done ---- CharlesGillingham (talk) 01:38, 8 December 2007 (UTC)
Major expansion
I have expanded this article to include most of the topics covered by major AI textbooks and online summaries of AI. (I've posted my research at Talk:Artificial intelligence/Textbook survey). I'm hoping that this brings the article within striking distance of the featured article criteria.
The only thing I deleted was the original "approaches to AI" section, which had too many problems to be saved. I expanded and added sources to some of the existing sections, but they are mostly intact. ---- CharlesGillingham (talk) 03:02, 29 November 2007 (UTC)
Todo
- The article is a bit too long. (My printer puts the text at just under 10 pages, not counting external links, footnotes and references). I'd rather cut more technical stuff, since this is of less interest to the general reader.
- Check for errors, especially on technical subjects.
Done Unfinished sections: learning, perception, robotics.
Done Add sources to philosophy of AI section. Borrow them from philosophy of AI.
Done Add Galatea, Talos and Frankenstein to the new history section. Use McCorduck as a source. Move it up to the top, so there's a historical progression through the section.
Done Write AI in the future. Mention Moore's Law. Don't mention anything from science fiction. Kurzweil is a good source; references are already in place.
- Look into ways to make the applications section comprehensive, rather than scattershot. I think we need an article applications of AI technology that contains a complete, huge list of all AI's successes. The section in this article should just highlight the various categories.
Little improvements in comprehensiveness (skipping these wouldn't have any effect on WP:GACR or WP:FACR)
- Could use a list of architectures in the section on bringing approaches together, such as subsumption architecture, three tiered, etc.
- Which learning algorithms use search? Out of my depth here.
- For completeness, it should have a tiny section on symbolic learning methods, such as explanation-based learning, relevance-based learning, inductive logic programming, case-based reasoning.
- Could use a tiny section on other knowledge representation tools besides logic, like semantic nets, frames, etc.
- Similarly for tools used in robotics, maybe replacing the control theory section.
Updated --- CharlesGillingham (talk) 03:02, 29 November 2007 (UTC)
Updated --- CharlesGillingham (talk) 23:59, 29 November 2007 (UTC)
Updated --- CharlesGillingham (talk) 13:39, 6 December 2007 (UTC)
Updated --- CharlesGillingham (talk) 02:53, 22 February 2008 (UTC)
The recent changes by CharlesGillingham (talk) are a great improvement. The previous version was a disgrace.
However, I think it is wrong to characterise production systems as Horn clauses, although I admit that you might get such an impression from the Russell-Norvig AI book. But even Russell and Norvig limit production systems to forward reasoning. The production system article is much more accurate in this respect. Two important characteristics of production systems, which distinguish them from logic, are their use of "actions" (often expressed in the imperative mood) instead of "conclusions" and conflict resolution, which can eliminate logical consequences of the "conditions" part of a rule. Arguably, production systems were a precursor of the intelligent agent movement in AI, because production rules can be used to sense changes in the environment and perform actions to change the environment.
One other, small comment: The example "If shiny then diamond" "If shiny then pick up" shows both the relationship and the differences between logic and production rules. An even more interesting example is "If shiny then diamond" "If diamond then pick up", from which the original example could be derived if only production rules could truly be expressed in logical form. Robert Kowalski (talk) 13:25, 8 December 2007 (UTC)
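To make the contrast concrete, here is a toy Python sketch (the rules and facts are illustrative, not from the article): a logical rule adds a conclusion to the set of known facts, while a production rule fires an action when its condition matches.

```python
# Toy illustration of Kowalski's distinction: logical rules yield
# conclusions (new facts); production rules yield actions.

facts = {"shiny"}
actions_taken = []

logical_rules = [("shiny", "diamond")]     # if shiny then conclude "diamond"
production_rules = [("shiny", "pick up")]  # if shiny then do "pick up"

# Forward chaining over logical rules: derive new facts to a fixed point.
changed = True
while changed:
    changed = False
    for condition, conclusion in logical_rules:
        if condition in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

# Production rules: a matching condition triggers an action, not a fact.
for condition, action in production_rules:
    if condition in facts:
        actions_taken.append(action)

print(facts)          # {'shiny', 'diamond'} (set order may vary)
print(actions_taken)  # ['pick up']
```

This sketch leaves out conflict resolution, which, as noted above, is the other feature distinguishing production systems from pure logic.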
- Thank you! I've made the changes you recommended. As I mention in the Todo list above, checking for errors and misunderstandings is a top priority and I appreciate the help. ---- CharlesGillingham (talk) 03:10, 9 December 2007 (UTC)
Rewrite of AI in myth and fiction
I've been considering a re-write of the fiction section in the main article (still working on the 2nd and 4th paragraphs), but I thought I'd post where I am after I saw the ToDo list.
Reasons:
- Don't want the main AI page to have lists of fictional AI - I'd like to direct the reader to more specific lists on other pages
- Portrayals of AI are broader than just "an upcoming power"
Beings created by man have existed in mythology long before their currently imagined embodiment in electronics (and, to a lesser extent, biochemistry). Notable examples include golems and Frankenstein. These, and our modern science fiction stories, enable us to imagine that the fundamental problems of perception, knowledge representation, common sense reasoning, and learning have been solved, and let us consider the technology's impact on society. With artificial intelligence's theorized potential equal to or greater than our own, the impact can range from service (R2D2), cooperation (Lt. Commander Data), and/or human enhancement (Ghost in the Shell) to our domination (With Folded Hands) or extermination (Terminator (series), The Matrix (series), Battlestar Galactica (re-imagining)). Given the negative consequences, ranging from fear of losing one's job to an AI, to the clouding of our self-image, to the extreme of the AI apocalypse, it is not surprising that the Frankenstein complex would be a common reaction. Subconsciously we demonstrate this same fear in the Uncanny Valley hypothesis. See AI and Society in fiction for more ...
With the capabilities of a human, an AI can play any of the roles normally ascribed to humans: protagonist, antagonist, comedic relief. The creation of sentient machines, with human-level intelligence, is the holy grail of AI. The following stories deal with the birth of artificial consciousness and the resulting consequences. (This section deals more with the personal struggles of the AIs and humans than the previous one.) See Sentient AI in fiction ...
While most portrayals of AI in science fiction deal with sentient AIs, many imagined futures incorporate AI subsystems in their vision, such as self-navigating cars and speech recognition systems. See non-sentient AI in fiction for more ...
The inevitability of the integration of AI into human society is also argued by some science/futurist writers such as Kevin Warwick and Hans Moravec, and by the manga Ghost in the Shell.
Thomas Kist (talk) 17:00, 29 November 2007 (UTC)
- I'm not sure what you're saying. This all looks good. You should add it. (The article artificial intelligence in fiction could also use more analysis and fewer lists). ---- CharlesGillingham (talk) 21:50, 29 November 2007 (UTC)
New topic
- Does not Wikipedia constitute a basis for a hybrid human-artificial intelligence? It already uses tactics that are used in CYC. It draws reflexively on its own references as well as others. It is competitive. I saw it bump a university off Google's listings for first place on the subject of artificial intelligence. It grows sometimes by big leaps, but more often by a disarmingly productive global growth in small increments, each cunningly designed. Etc. It is clearly on Earth's side, and fits into the evolutionary emergence of silicon alongside diatoms and Equisetum. What if Wikipedia is not within the destiny of human control? SyntheticET (talk) 18:03, 7 December 2007 (UTC)
- Forgive me for moving this to a new topic. ---- CharlesGillingham (talk) 01:37, 8 December 2007 (UTC)
- A program devised by Evgeniy Gabrilovich and Shaul Markovitch of the Technion Faculty of Computer Science at the Technion - Israel Institute of Technology helps computers map single words and larger fragments of text to a database of concepts built from the online encyclopedia Wikipedia; this then helps in making broad-based connections between topics, to aid in filtering e-mail spam, performing searches and conducting electronic intelligence gathering at a more sophisticated level, according to this ScienceDaily release. Pawyilee (talk) 15:02, 9 December 2007 (UTC)
Samuel Butler
I'm not sure I agree that "Samuel Butler first raised the possibility of mechanical consciousness", since there are conscious machines described by Homer and other classical sources. Is there a source that claims this? ---- CharlesGillingham (talk) 00:04, 22 December 2007 (UTC)
- I wrote Samuel Butler first raised the possibility of "mechanical consciousness"..., NOT was the first to, because I have no way of knowing just who was the first. I'm even treading on thin ice there as I have not read the article he wrote for The Press and am only going by the heading it was given, Darwin Among the Machines. I frankly expected an objection to including Butler because he referred, in Erewhon, to emergence by means of Darwinian Evolution, specifically by Natural selection, so that would make his "mechanical consciousness" a form of natural, not artificial, intelligence. How did Homer handle it? Pawyilee (talk) 14:26, 22 December 2007 (UTC)
- Hephaestus' golden robots are mentioned in Homer's Iliad, and I think Homer may mention Galatea as well, although I can't remember. So, in answer to your question, classical sources imagine that artificial intelligence can be created by the craftsmanship of masters, like Hephaestus, Pygmalion or Daedalus. Samuel Butler's ideas are fascinating. My only question is about how his ideas fit into the history of AI in general—I just want to find the proper place to introduce him ---- CharlesGillingham (talk) 16:07, 23 December 2007 (UTC)
- Butler's vision makes the difference between artificial and natural an artificial one. To get to the root of the matter, check out ar- and gen, then find art and nature. I'm happy you are fascinated: that was my purpose! Pawyilee (talk) 09:31, 24 December 2007 (UTC)
- PPS How about moving the bulk of the entry on Butler to Consciousness, then referring to it in this and the other AI articles as sort of an honorable mention? Pawyilee (talk) 12:42, 28 December 2007 (UTC)
- My inclination would be to mention him as a precursor to Kurzweil, Hans Moravec and other Transhumanists: he's really an early futurist. I think the article needs a short section about futures studies and AI (see my original outline above), mentioning Moore's Law, transhumanism, the ethics of artificial intelligence, and other (serious) speculations about what AI will mean to society in the future. We could even use an entire article on the future of AI. ---- CharlesGillingham (talk) 17:07, 30 December 2007 (UTC)
- I've tucked him into a new section that covers futurists, ethics and fiction. I'm sorry that I've cut him down to one sentence—I wanted to keep this section down to one page. There's just way too much to cover. Maybe you would consider adding more material to the Samuel Butler article? ---- CharlesGillingham (talk) 02:44, 22 February 2008 (UTC)
- Somebody copied what I'd written into a new article, Darwin Among the Machines. To it I added a NZ link to Butler's letter to the editor, plus a new paragraph on George B. Dyson's 1998 book of the same name. I then cut the reference here to a couple of clauses ...expressing an idea first proposed by Samuel Butler's Darwin Among the Machines (1863), and expanded upon by George Dyson (science historian) in his book of the same name (1998). OK? Pawyilee (talk) 15:20, 27 February 2008 (UTC)
stupidity, ignorance, and laziness - unverifiable
The following section has been removed from the article because it could not be verified. Pgr94 (talk) 23:36, 28 January 2008 (UTC)
- "There are three general limitations in AI,"
- You should write a theory, become notable, and then put a small link in this article, like: general limitations in AI - to your theory.
- Notable here means generally accepted by AI researchers. I don't agree with you, so all the more I don't think this is a good place for this to stay. Raffethefirst (talk) 11:20, 29 January 2008 (UTC)
- What specific facts claimed in that section do you believe need verification? Sai Emrys ¿? ✍ 01:17, 29 January 2008 (UTC)
- That "stupidity, ignorance, and laziness" has something to do with AI. Pgr94 (talk) 02:15, 29 January 2008 (UTC)
- Then I cite Professor Russell of UCB, whose AI class I have taken and who said exactly that. Presumably he's an authority (having written the main book about it). WP:V only demands verifi*ability*, not online access to the source. My paraphrase below is reasonably accurate. Sai Emrys ¿? ✍ 16:47, 29 January 2008 (UTC)
- I said that his *class* (specifically, CS 188, two years ago - though he still runs it TTBOMK) is the source, not the book (which AFAIK doesn't mention it). The book simply establishes him as an authoritative source. If you want to verify it, take his class or email him. You're welcome to reword it from "limitation" to "approach" or somesuch; I don't recall how he framed that part. But stupidity, ignorance, and laziness is a direct quote, and most of the examples I used are from ones brought up in class. A friend who took a class with Prof. Denzinger said that he talked about them also. Sai Emrys ¿? ✍ 19:11, 6 February 2008 (UTC)
- To be verifiable, we need to be able to check what you are claiming is true. According to WP:V the burden of proof lies with the contributor. You should at least be able to find it in the course notes. At any rate I doubt it is a widely held opinion and I'd have reservations about adding it to the article unless there is a peer-reviewed article that covers the topic. Pgr94 (talk) 23:46, 6 February 2008 (UTC)
- You *can* check it easily: email him. The address is russell@cs.berkeley.edu. The burden on me is only to provide a source (which I have done), not to check it for you or to prove that the matter claimed is true - the criterion is reader-verifiability, not proven truth, as it says in the WP:V header. It is verifi*able*, therefore it meets the standard.
- I didn't know how to add a "said in class multiple times" cite, so I left it as a comment. AFAIK it was not in the course notes or slides. Not being readily available online does not make that source any less valid.
- As for "opinion", I don't think that's an appropriate framing; it's more that it is a way of describing problems than some sort of strong statement about "limits" in AI. As i said, I have no problem with my paraphrasing being changed to make that clearer. And you can remove "widely held" too; that's just explanatory prose to me. Sai Emrys ¿? ✍ 02:29, 7 February 2008 (UTC)
- From WP:V "readers should be able to check that material added to Wikipedia has already been published by a reliable source" (my emphasis). Perhaps I've misunderstood you, but at the moment it seems the only evidence for verifiability is unpublished. Pgr94 (talk) 04:51, 7 February 2008 (UTC)
- I'm not an AI person. However, if these are indeed "commonly stated" (as stated above), then it should be simple to find them stated in an AI textbook (or similar), and use that as the source. Without such a source, there's no way of knowing that it isn't one specific lecturer's in-joke Bluap (talk) 05:05, 7 February 2008 (UTC)
- That "stupidity, ignorance, and laziness" has something to do with AI. Pgr94 (talk) 02:15, 29 January 2008 (UTC)
- (Coming from the village pump) If it's not published then it's not verified. Yes, in theory one could go to the horse's mouth to ask if he said it (assuming that he is a reliable source). This is not a feasible burden to place on readers, who must be provided realistic access to the source to check its accuracy. And in the absence of actually asking this living source, there is a useful word to describe your reporting of what this other person said: hearsay.--Fuhghettaboutit (talk) 15:42, 7 February 2008 (UTC)
- I checked back with my friend - he talked to Prof. Denzinger. I was confused; Denzinger is actually at Calgary, not Berkeley. He checked by email, and Denzinger responded that it was not originally his and he didn't know the source - but he used it himself as a description. Two profs in two unrelated universities in different countries using the same rubric (if that's the right word) seems rather more than just "one specific lecturer's in-joke". I would guess that because it is somewhat tongue-in-cheek, it's unlikely to be published in an AI textbook for cultural reasons. This doesn't prevent it from being worth including in a Wikipedia article. Additionally, I have verified its existence at least in these two instances, so I don't believe that it's such a burden to ask - or that only *published* material counts as verifiable. If that were the case, then WP:V would say so. It doesn't. Like I said, email him and you'll find out for yourself. It's rather easier IMO to verify than most books. Sai Emrys ¿? ✍ 18:56, 7 February 2008 (UTC)
- A couple sources:
- CS188 notes from a TA - Laziness (p 10, uncertainty)
- CS 188 lecture slides expectimax search 2006 and 2007 - Ignorance
- Test for Intelligent Systems (2II40), TUE (sql format) Ch 2 Q3 Q: Does probability provide a way of summarizing the uncertainty that comes from our 'laziness' and 'ignorance'? A: Probability provides a way of summarizing the uncertainty that comes from our 'laziness' and 'ignorance'. It deals with certain 'degrees of belief' of the agent. Probability for an event changes when new evidence appears for an event. We can divide this into 'prior probability', which stands for the probability before having evidence, and 'posterior probability', which stands for the probability after having gained evidence. - note that this isn't in her lecture slides, which are ripped from CS188's. And again, different university, different lecturer. Nevertheless, it's on the test... (A numeric illustration of this prior/posterior update appears below.)
- Which supports what I said - this isn't something you're likely to find published "officially", since it's a humorous way to explain what's normally called just "intelligent agents and uncertainty". Sai Emrys ¿? ✍ 20:12, 7 February 2008 (UTC)
- Google Scholar also turns up some links, e.g. these lecture notes (again, different university - tcd.ie) that include 'ignorance and laziness' in the same context. (Many of the hits are irrelevant, from social commentaries and the like talking about humans, but not all.) This chapter from some random book appears to refer to this concept as well from the google snippet, but I can't access the full text. Sai Emrys ¿? ✍ 20:26, 7 February 2008 (UTC)
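Regarding the prior/posterior update quoted in the test question above, here is a tiny numeric illustration in Python; the scenario and numbers are entirely made up:

```python
# Made-up numbers illustrating Bayes' rule: a prior belief is revised
# into a posterior when new evidence appears.

prior = 0.01                  # P(disease): prior probability
p_pos_given_d = 0.9           # P(positive test | disease)
p_pos_given_h = 0.05          # P(positive test | no disease)

# P(disease | positive) = P(pos | d) * P(d) / P(pos)
p_pos = p_pos_given_d * prior + p_pos_given_h * (1 - prior)
posterior = p_pos_given_d * prior / p_pos
print(round(posterior, 3))    # 0.154: the evidence raised 0.01 to ~0.15
```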
- 1. Stupidity: One does not always know how to compute a perfect solution.
The perfect solution is the one that gives the right answer in the smallest amount of time. Any method can be used here, and the one that gives the best answer in little time is definitely not stupidity. It can be trial and error, logical deduction, or something else. The approaches used to solve a problem can be very different according to the known data and the problem.
- 2. Ignorance: One does not always have the necessary information to compute a perfect solution.
This is an approach to solving a problem. It is not a limitation of AI. There can be many other approaches to solving problems.
- 3. Laziness: One does not always have the time to compute a perfect solution.
The perfect solution is the one that gives the right answer in a small amount of time. To get the perfect solution you can try to reduce the unnecessary checks. This must be done in order to get efficient solutions, but there are other things to do as well to solve problems efficiently.
So the things you said are no limitation of AI; they are not even a classification of the same kind of things. Items 1 and 2 are methods of solving problems, and 3 is a speed-up method applicable to other problem-solving methods.
Also, even if your list is trying to be a list of problem-solving methods, things don't work this way. There is no such list... you can observe such methods in humans, but to apply those methods strictly in AI is wrong without understanding the background mechanism.
So I kindly ask you to let your professor make a theory, let this theory become accepted, and then try to put it on Wikipedia.
Raffethefirst (talk) 08:27, 8 February 2008 (UTC)
- I don't think you understood what I wrote.
- I've said already that "limitation" is poor wording; "complication" is perhaps a better one. Nobody is arguing that fast, correct, certain etc answers aren't better. Sure they are. But sometimes one can't have all of those things, because of practical limitations - such as in the examples I gave. E.g. chess is simply not within current computers' ability to fully process, even though the means of doing so is quite simple if one did have that much computing power. Thus, one has to use more sophisticated methods - heuristics, better search methods, etc. I said nothing about this being not true of humans as well.
- In any case, this section is not primarily about algorithms or ways to *address* the problems (except for completeness of the examples) - it's about listing the *problem*.
- P.S. Please don't leave notes on my talk page that are just a rehash of discussion here. I monitor this page and will respond here. Thanks. Sai Emrys ¿? ✍ 02:19, 10 February 2008 (UTC)
Yes, I agree with you: I did not understand what you meant. I still don't understand.
As you say, your professor phrased it somewhat differently; the idea would be more intelligible if you could quote him directly.
I was trying to talk on your user page to understand exactly what you mean, not to change the subject here. So please explain what you mean, wherever you want. Raffethefirst (talk) 17:02, 10 February 2008 (UTC)
Three general problems in AI can be stated as stupidity, ignorance, and laziness.[citation needed] Most real-world problems have one or more of these factors.
- Stupidity: One does not always know how to compute a perfect solution.
- E.g. there is no known method to directly factor the product of two primes.
- The solution to stupidity is generally to use an alternative method to approach the answer, or one that results in an answer that is "good enough". E.g. for prime factorization, there are various heuristics to determine whether a large number is prime.
- Ignorance: One does not always have the necessary information to compute a perfect solution.
- E.g. in the game Stratego, the opponent's pieces are in known positions, but start with unknown identities. In Texas hold 'em poker, the order of the deck, and thus the other players' cards as well as the flop cards, are unknown.
- The solution to ignorance is generally the strategic discovery of new information or acceptance of unknowns - e.g. in Stratego one can bait or attack pieces to uncover their identity, or guess that the opponent's flag is in a well-protected location rather than in an easily reachable one. In poker, one can try to determine the other players' cards by their reactions during bidding, as well as knowing the simple probability of various flop cards and going with whatever is most likely to succeed overall.
- Laziness: One does not always have the time to compute a perfect solution.
- E.g. in chess, though the state is entirely known, as well as the rules of the game and the value of its outcomes, there is not enough computing power available to exhaustively go through all possible games. Checkers, however, has been solved relatively recently by exactly this method.[1]
- The solution to laziness is generally a utility heuristic - e.g. in chess, one can take a guess at how likely a certain move is to result in a win or a loss even without having fully computed its outcomes, based on generalized ideas such as defensive positions, numeric piece values, etc.
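For what it's worth, the "laziness" workaround described in that last bullet can be sketched in a few lines of Python; the toy game and heuristic below are entirely made up for illustration:

```python
# Sketch of depth-limited minimax: search only `depth` plies, then fall
# back on a heuristic guess instead of computing the exact game value.

def minimax(state, depth, maximizing, moves, evaluate):
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)  # heuristic stands in for the true value
    values = [minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in options]
    return max(values) if maximizing else min(values)

# Toy game: players alternately add 1 or 2 to a counter up to 10; the
# heuristic simply prefers even counters.
value = minimax(0, 3, True,
                moves=lambda n: [n + 1, n + 2] if n < 10 else [],
                evaluate=lambda n: 1 if n % 2 == 0 else -1)
print(value)  # position value assuming only 3 plies of lookahead
```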
- I support User:Pgr94's interpretation of WP:VER. What user Sai Emrys advocates would obligate each user to email Dr. Russell for verification of the claim. While that would be nice and all, the world is a big place, and I'm sure Mr. Russell has better things to do with his time.--Sparkygravity (talk) 14:51, 21 February 2008 (UTC)
Peer Review on pushing AI to GA status
I'd like to see if we can get a PR on this article so we could possibly push it to GA class. It is a very nice article at the moment with extensive inline citations and references. - Jameson L. Tai talk ♦ contribs 16:57, 19 February 2008 (UTC)
Support. The only real criticism I can offer is that it's a little long, but other than that I've read the article twice and most of the references are great. It's definitely higher than B-class. I support, and you can notify me if you need that support.--Sparkygravity (talk) 14:44, 21 February 2008 (UTC)
- I am not a PR or FAC regular, but coming across this section, here are a few pre-PR notes:
- Quotations in the first sentence (and any subsequent ones) must have an inline citation directly after the quote (see Wikipedia:Citing sources#When quoting someone);
Done. CharlesGillingham (talk) 04:24, 22 February 2008 (UTC)
- Ontology randomly capitalized in lead; lead does not seem to broadly summarize article as a whole and is a bit short given the length of the article;
Done. CharlesGillingham (talk) 04:24, 22 February 2008 (UTC)
- "Craftsman from ancient times to the present have created humanoid robots" unclear; plain statement conveys the meaning that humanoid robots were actually made, which is not what is meant;
Done. They were actually made; tried to make it clear that we're not talking about intelligent robots. CharlesGillingham (talk) 04:24, 22 February 2008 (UTC)
- Image:P11 kasparov breakout.jpg, if it can be properly used in this article under fair use, has no required fair-use rationale for inclusion;
- Does anyone else know how to do this (so it sticks)?
Done. Permission granted by IBM. Pgr94 (talk) 20:03, 2 March 2008 (UTC)
- The article is quite listy--there are many sections where, instead of prose neatly summarizing the section that's linked with {{main}}, there are bulleted points. I see this as a constant source of criticism at FAC.--Fuhghettaboutit (talk) 03:24, 22 February 2008 (UTC)
- Knocked out two or three of the lists yesterday. Weaving these into shorter paragraphs is also a good way to address the length issue. ---- CharlesGillingham (talk) 19:44, 22 February 2008 (UTC)
- I think Image:P11 kasparov breakout.jpg is going to have to be replaced, if the article is going to reach GA status.--Sparkygravity (talk) 02:19, 28 February 2008 (UTC)
- k, so I removed the Kasparov image here http://en.wikipedia.org/w/index.php?title=Artificial_intelligence&diff=prev&oldid=194573913 - personally, I hate my change. I really would much rather have the Kasparov image. Too many people think that AI is represented by robots, which is completely untrue... Most likely a strong AI will be a box, and look like a box... I'd rather show HAL from 2001: A Space Odyssey or WOPR from WarGames, since they're better examples, but of course I can't do that, cuz they're fictional... ASIMO has limited AI, but it's only in spatial reasoning, very shallow... I think we should work on the Deep Blue-Kasparov image, because it's a better intro picture. Once we get the copyright issues taken care of, I feel we should revert my change.--Sparkygravity (talk) 11:24, 28 February 2008 (UTC)
[edit] Article getting too big
I think the article is starting to become too long and it's perhaps time to discuss what should be chopped (or rather moved into other articles). It's obviously a big subject, so there is a lot to cover. Pgr94 (talk) 03:11, 22 February 2008 (UTC) Suggestion:
- shrink Applications of artificial intelligence to a paragraph and start a new article with that name.
- shrink Competitions and prizes to a paragraph and start a new article Competitions and prizes in artificial intelligence
- I agree that the article is too long, and, yes, it's my fault. I'm open to any suggestion. As I said in the "todo" list above, I think I would prefer to cut really technical material, since this is of less interest to the general reader. Your suggestion seems sensible, as a start. ---- CharlesGillingham (talk) 03:41, 22 February 2008 (UTC)
- Well, I knocked out a few lines here and there and removed several bullet lists. There's much more to do. ---- CharlesGillingham (talk) 07:16, 22 February 2008 (UTC)
- It wasn't criticism, it's just the nature of the field - it's so wide-ranging and at the same time fragmented and dissociated. I like to think that we're waiting for the Einstein of AI to provide us with the unifying theory.
- I have noticed that most sections lead to a more detailed article; here are a few sections that don't. These are candidates for creating separate articles:
- Traditional symbolic AI
- Approaches to AI
- Sub-symbolic AI
- Intelligent agent paradigm
- Evaluating artificial intelligence
- Competitions and prizes (done)
- Applications of artificial intelligence (done)
- Have I chopped too much here?
- Pgr94 (talk) 11:31, 22 February 2008 (UTC)
- Great work.
- But 'Approaches to AI' should include 'Traditional symbolic AI', 'Sub-symbolic AI', and 'Intelligent agent paradigm', so I don't think we need those separate articles (yet). 'Approaches to AI' would make a great article and also help this page. I don't know (yet) how to do it, so I hope somebody else will do it.
- 'Evaluating artificial intelligence' is too small to be a new article, and I think it looks good here. Raffethefirst (talk) 11:49, 22 February 2008 (UTC)
- Also, it should not contain only links to other articles, because then it would have the functionality of a portal... Raffethefirst (talk) 12:09, 22 February 2008 (UTC)
- Thanks. Yes I agree with your suggestion on an Approaches to artificial intelligence article. Pgr94 (talk) 14:04, 22 February 2008 (UTC)
- I think a lot of the length is due to just the amount of information we have on the research being done, and that has been done, in AI. If we were to create a page on AI research (named either AI research or Research done on AI, etc.), I really think we could cut the article in half. The only problem I see is how to summarize the topic of AI research correctly, efficiently, and in a way that makes it easy to read.--Sparkygravity (talk) 18:23, 22 February 2008 (UTC)
[edit] Weird newlines in the article
Does anyone understand why all the footnotes in the article have multiple newlines in them? It looks like they were introduced by some kind of bot. Is there any reason not to fix them? ---- CharlesGillingham (talk) 04:03, 22 February 2008 (UTC)
- Are you talking about the Nilsson 1998, Russell & Norvig 2003 and all that?--Sparkygravity (talk) 18:17, 22 February 2008 (UTC)
- Yes. The footnotes throughout the article have been "spread out" vertically for some reason. I've put them back together by hand in the sections I edited yesterday. ---- CharlesGillingham (talk) 18:50, 22 February 2008 (UTC)
[edit] Not necessary to call it a "definition" ?
Saying what AI is, without qualification, is like saying that we know indubitably what AI is.
Saying it is the modern definition lets people perfectly understand that it is a definition that might not be perfect, but is the best we have at the moment.
And beginning with a 'definition', I think, gives a more professional look. Raffethefirst (talk) 07:38, 22 February 2008 (UTC)
- reverted ----CharlesGillingham (talk) 08:28, 22 February 2008 (UTC)
[edit] Readability
http://en.wikipedia.org/w/index.php?title=Artificial_intelligence&diff=next&oldid=193220693 - here I notice that you mention that the technical details are pretty boring. I was thinking, since AI is such a broad topic, it might be good to set a target audience age. That way there could be a standard for technical details and how they could be sorted and assessed for placement on the page. For instance, if the subject is too complex in detail, the article would give a basic lay-audience intro and then direct the user to a sub-page for more details.
(On a side note, Colorblind is my fav. Counting Crows song)--Sparkygravity (talk) 17:28, 22 February 2008 (UTC)
- The target audience is the general, educated reader: not someone who is already familiar with computer science. It should also be useful to computer scientists coming from other fields. The technical subjects have to be mentioned (i.e., each of the tools has to be mentioned) to properly define the field. I'm in the process of removing the names of specific sub-sub-problems or algorithms that can't be properly explained in the room available ---- CharlesGillingham (talk) 18:42, 22 February 2008 (UTC)
- Well, I have to say, for an educated reader familiar with the way wikipedia works, the article is fine. But for a general reader who is looking for a brief summary, even the most complete sections can leave more questions than answers. For instance, many of the summarizing descriptions completely rely on the reader's patience and willingness to follow the connecting wikilinks.
- Examples where I think the average reader might have to follow wikilinks to understand what was being said:
- "In the middle of the 20th century, a handful of scientists began a new approach to building intelligent machines, based on recent discoveries in neurology" (History of AI research). The word neurology, is probably understood by someone with a GED, but not a 12 year old. suggested fix neurology|brain, so it reads "recent discoveries about the brain".
- "Brain" is inaccurate. The discoveries that are being alluded to are about how individual neurons work, not brains, specifically that they fire in all-or-nothing pulses, and this seemed (to Walter Pitts and Warren McCullough) similar to digital circuitry, and inspired them to develop the first theory of neural nets. I think "neurology" was fine. As you say, even a high-school graduate probably knows what neurology is. One could argue that this whole paragraph should be cut, since describing their interests and inspirations so briefly can't do them justice, and since these themes are really precursors to AI research, rather than AI research proper. The paragraph is just to give the impression that there was this milieu of (largely unspecified) ideas that gave birth to modern AI. Cut it? Revert it? Which do you think?---- CharlesGillingham (talk) 04:10, 23 February 2008 (UTC)
- "By the middle 60s their research was heavily funded by DARPA." (History of AI research) not a huge issue. But it could read DARPA|U.S. Department of Defense "By the middle 60s their research was heavily funded by the U.S. Department of Defense."
- This reads fine your way. ---- 04:10, 23 February 2008 (UTC)
- "... no silver bullet..." (Traditional symbolic AI#"Scruffy" symbolic AI) relies on user knowing what the slang "sivler bullet" means. Suggested fix silver bullet|easy answer.
- What English speaker hasn't heard of a silver bullet? ---- CharlesGillingham (talk) 04:10, 23 February 2008 (UTC)
- ""Naive" search algorithms"(Tools of AI#Search) What is a Naive search algorithms? Suggested fix rewrite sentence.
- "Heuristic or "informed" search. The naive algorithms quickly..." (Tools of AI#Search) What percentage of the population knows what Heuristic means? Answer: 4%and really you know that half of those people are pompous insufferable know-it-alls Suggested fix rewrite sentence (which leads met too say.....)
- Rather than continue, I've decided to just make the changes, and then you and User:Pgr94, or whoever, can revert what you don't like. But again, this seems to be a common problem throughout the entire article.--Sparkygravity (talk) 20:25, 22 February 2008 (UTC)
- Rewriting the "naive" algorithms passage to be easier to read. I used a link to Computation time, but DTIME may be better; please review--Sparkygravity (talk) 21:07, 22 February 2008 (UTC)
- The changes I've seen seem fine. I'm not familiar with the term "naive" search though; is it just a synonym for exhaustive search? If so, I confess to having a slight preference for the latter as it's what I've come across most in the literature. Pgr94 (talk) 01:20, 23 February 2008 (UTC)
- pfft, I have no idea either. I think the difference is the nature of the operation: one works with a search space, the other tends to be a recursive operation. But I don't really know.--Sparkygravity (talk) 02:42, 23 February 2008 (UTC)
- Yeah, "naive" was a poor choice. Nilsson and Russell/Norvig call them "uninformed" searches, a designation I find, well, uninformative. This section should probably be rewritten without the bullets (or the titles naive, heuristic, local, etc.). Two paragraphs: (1) Explain what combinatorial explosion is, define what a heuristic is, and how heuristics help solve the problem (2) explain what an optimization search is (as opposed to listing types of opt. searches.) Note that genetic algorithms are, technically, a form of optimization search, explain what genetic algorithms are.
- By the way, in the rewrite I'm looking at, method (computer science) is used inaccurately. ---- CharlesGillingham (talk) 04:10, 23 February 2008 (UTC)
- hmm, ok, I'll look at method again. I actually feel that "...heuristic methods to eliminate choices..." is currently a fine description of what a heuristic does for computer searches: it eliminates choices and reduces possibilities. So I actually feel this sentence is quite a bit clearer than it was, but if you want to take a shot at it, cool.
- I like the bullet style, but I understand it doesn't really meet WP:STYLE standards and probably needs to be rewritten so that it isn't a WP:Embedded list, which can cause problems later. I probably won't do it, just because I don't think I'm the best person for the bullet-to-prose rewrite.
- I'll look at the genetic algorithms to see what I can do. I made the changes I suggested yesterday, and then got pulled away by other chores.--Sparkygravity (talk) 16:18, 23 February 2008 (UTC)
[edit] Philosophy of AI
Section 1.3 (Philosophy of AI) states this:
- Gödel's incompleteness theorem: There are statements that no physical symbol system can prove.
As far as I know, this is flat-out wrong. Firstly, Gödel's incompleteness theorem has no direct connection to "physical" symbol systems (whatever that means). But more importantly, Gödel's incompleteness theorem does not show that there are statements that no "symbol system" (or formal system, to be clearer) can prove. Gödel's incompleteness theorem shows that for each (sufficiently expressive) formal system there are true statements that it cannot prove - but this does not preclude other formal systems from proving those statements. This is evident from the simple fact that you can always take a formal system and add to it one of its unprovable true statements as an axiom; that statement is then provable within the new, one-axiom-larger formal system (which incidentally has its own new set of unprovable true statements). Can anyone with knowledge of the subject comment on whether I am correct in my understanding of this? Remy B (talk) 09:00, 27 February 2008 (UTC)
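A compact way to put the key step (a sketch in standard notation; F and G_F are my symbols, not the article's):

    For any consistent, sufficiently expressive formal system F, Gödel gives a
    true sentence G_F with  F ⊬ G_F  (and  F ⊬ ¬G_F);
    yet the enlarged system  F' = F ∪ {G_F}  proves G_F trivially as an axiom,
    and the theorem then applies afresh to F', yielding a new unprovable G_F'.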
- I don't have the patience to wait for confirmation - I am confident I am right - so I have changed the article accordingly. If anyone disagrees and I somehow have this whole topic back to front then we can revert. Remy B (talk) 09:23, 27 February 2008 (UTC)
- Your revision is correct. The original was wrong. ---- CharlesGillingham (talk) 05:34, 1 March 2008 (UTC)
[edit] AI military
A really interesting look at what the military is doing with AI: http://www.aaai.org/AITopics/html/military.html. I'm currently using links from this and other sources to develop an article on the Dynamic Analysis and Replanning Tool, a program that uses intelligent agents to optimize and schedule supplies, transport, and other logistics.--Sparkygravity (talk) 12:33, 28 February 2008 (UTC)
[edit] Donald McKay
There are a couple of links to an AI researcher of this name, but the link takes you to the famous clipper ship builder. I do not know how to find the link you meant.
--AJim (talk) 04:32, 8 May 2008 (UTC)

