Superintelligence
From Wikipedia, the free encyclopedia
Superintelligence (SI or S.I.) is an intellect greatly exceeding the nearest (and currently only) standard of comparison: human-level intelligence.
Definition
Nick Bostrom (1998) states:
“By a ‘superintelligence’ we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”
We might firm up this definition in two ways. First, take as the comparison in each area of expertise only the best unenhanced, uncoupled human brains. Second, make the comparison at an equitable point in time, e.g. a year-2030 candidate A.I. versus year-2030 human experts. By analogy, a present-day human would be vastly more knowledgeable, and hence ‘intelligent’, than an ancient person in most areas, but not necessarily in all domains: interpersonal skills or raw perceptual ability might show no significant difference. Adopting a substrate-neutral, algorithm-running stance leaves open how any such intelligence is implemented: it could be a single hardware-based computer, a pure software simulation, a looser ensemble of networked devices, cultured neural tissue, a biological/machine hybrid, and so on. This adaptable definition also leaves open whether the superintelligence is actually conscious or has subjective experiences, although it is hard to conceive how even a merely human-level artificial intellect could fail to be at least partially conscious or emotive. Some active commentators currently believe that any future S.I. derived from a self-improving A.I. will be a software program running on a conventional digital (albeit highly parallel) late-2020s supercomputer[1].
Looser yet still formidable collective entities such as industrial companies, political parties, intelligence agencies or the scientific community are not superintelligences under this scheme. Nor are couplings of human and machine (e.g. via the internet), even though in one sense these already make us all weakly superintelligent compared to past humans. Although such entities can perform many intellectual and physical tasks of which no individual human is capable, they are not whole, self-aware intellects; though impressive, their accomplishments chiefly concern resource gathering and allocation. Philosophically, they are logical constructions, in that they comprise various unproblematic material things and events[3]. Such ‘mega-organisms’, while appearing in some regards to exhibit reason and self-interest, are not closely integrated internal structures, and on scrutiny fail to hold any purposeful intentionality. Any emergent low-level ‘behaviour’ is more akin to the overall activity of a working termite colony than to a directed, unitary animal brain. Accordingly, there are many areas of life in which such fluid, nebulous agents perform far worse than a singular human mind: you could not, for example, hold a real-time one-to-one dialogue with ‘the British Army’.
Consider, say, 1,000 human-equivalent minds (minus redundant sensory systems) almost perfectly linked to one another via high-speed, wide-bandwidth connections. By avoiding the thought/image → speech → thought/image bottleneck, the assembly would have swifter and far more detailed ‘mind module’ information transfer than current, constrained human groupings possess. It is but a small step[citation needed] to then thoroughly enmesh these units together, greatly increasing the efficiency and speed of the cooperative. Since any human-based cluster would by definition possess a form of general intelligence and common sense[4] (a fashionable current A.I. research topic), such an integrated ensemble would be capable of holding a conversation and of performing every mental function a solitary human could, but with superior fidelity, greater speed and far fewer errors. Would it consist of 1,000 separate minds or one single super-mind? These two concepts probably represent extreme operating modes on a broad spectrum of possibilities[citation needed]. The most likely working zones would lie somewhere in between, in multiple hierarchical and nested combinations, flexibly changing with moment-to-moment operating requirements. This description of internal architecture has strong connections to Minsky’s variable, graded, module-based ‘Society of Mind’ ideas[5].
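The scale of the speech bottleneck can be illustrated with a back-of-envelope calculation; the speech information rate and link bandwidth used below are assumed figures chosen purely for illustration, not measurements:

```python
# Back-of-envelope comparison of inter-mind information-transfer rates.
# Both figures are illustrative assumptions, not measured values.

speech_bits_per_sec = 40      # assumed information rate of spoken language (order of tens of bits/s)
link_bits_per_sec = 10e9      # assumed 10 Gbit/s direct 'mind module' link

# Ratio of the two rates: how much faster a direct link would be than speech.
speedup = link_bits_per_sec / speech_bits_per_sec
print(f"A direct link would be roughly {speedup:.1e} times faster than speech")
```

Whatever the exact figures, the ratio is many orders of magnitude, which is the point the paragraph above relies on.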
Capabilities
If a generally enabled superintelligence (S.I.), whether human-cored or a purer form of A.I., emerges, it is difficult to see what philosophical, mathematical, scientific, political or moral problems it could not eventually solve. A self-improving A.I. that is given (or seizes) sufficient computational resources to ‘expand into’, leading to the emergence of superintelligence, is potentially the last invention humanity will ever make: it is radically different, and at a stroke would create awesomely powerful, intelligent, conscious computers. It is difficult to grasp the immensity of the conceptual leap this entails, for such entities would soar far above our level, effectively becoming a new form of life: machine gods.[6] It is more difficult still, given our current developmental ignorance, to comprehend the possibility that an S.I. may, within a normal human time-scale, suddenly emerge almost as if from nowhere[citation needed]. The concept of the technological singularity is much discussed. For those actively engaged in the developmental process there is no singularity; but for those outside, i.e. the passive vast majority of humanity, technological change will appear stunningly, almost impossibly, rapid.[7] Possessing skills beyond our intellectual reach in every conceivable domain, a sufficiently advanced S.I. would be capable of feats that seem, quite simply, almost magical[8].
When we think of superintelligence, we tend to think of its portrayals in fiction: the character able to multiply six fifty-digit numbers in his head, learn ten languages in a month, repeat the catchphrase ‘that’s not logical’, and other tired clichés. True superintelligence would be something radically different: a mind able to see the obvious solution that the entire human race missed[citation needed]. It could conceive of and implement advanced plans or concepts that the greatest geniuses could never think of, and understand and rewrite its own cognitive processes at the most fundamental level. A cybernetic superintelligence would not be just another genius human; it would be something entirely superhuman, something that could completely change the world overnight. For the same reason that we cannot write a book with a character smarter than ourselves, we cannot imagine the thoughts or actions of a true superintelligence: they would be beyond us. Whether it will be developed through uploading, neuroengineering or artificial intelligence remains to be seen[9].
Mental Skills
On being asked a question by a lowly human, a superintelligent mind would be capable of recalling virtually any fact or data set in humanity's entire recorded history (and most of its own personal knowledge banks), formulating myriad replies, testing them extensively and then selecting the best, all within a millisecond.[citation needed] Intellectually, it would be capable of holding vast complex propositions, enormous images (10^19 bytes and larger) and multi-dimensional data sets in divisions of its mind at any one time. Whereas humans can, with some difficulty, conceive of sixth-order intentionality[10], an S.I. could comfortably envision many more orders. The most complex mental trains of thought, which we can hold only fleetingly and with the utmost concentration and effort, would to it be almost effortless.[11] Given that the methodology of science is universal, an S.I. could experiment at (to us) accelerated rates, producing new theories and technology that would have taken humans years of hard, often uncertain effort to create. It could also use brute-force searching techniques to actively investigate trillions of possibilities in any given field.[citation needed]
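The brute-force searching mentioned above can be sketched in miniature: a candidate space is exhaustively scanned for items satisfying a predicate. The space (small integer triples) and predicate (Pythagorean triples) here are invented purely for illustration; a superintelligence would apply the same exhaustive pattern to vastly larger hypothesis spaces:

```python
from itertools import product

def brute_force(limit):
    """Exhaustively test every candidate triple (a, b, c) up to `limit`,
    keeping those that satisfy the predicate a^2 + b^2 == c^2."""
    hits = []
    for a, b, c in product(range(1, limit + 1), repeat=3):
        # a <= b avoids listing each solution twice in mirrored order
        if a <= b and a * a + b * b == c * c:
            hits.append((a, b, c))
    return hits

print(brute_force(15))  # includes (3, 4, 5) and (5, 12, 13)
```

The cost of this strategy grows with the cube of `limit`, which is exactly why the paragraph frames brute-force search as something only vast computational resources make practical.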
Take just one example of a hard problem that currently vexes human researchers: colour vision. There are thousands upon thousands of research papers of varying quality in an extensive literature, many disagreeing with one another and championing one theory over another. It is very difficult to make clarifying headway when faced with fragmented data and experiments of limited quality: the jigsaw pieces are too few and badly fitting. Imagine what an S.I. would do (or what we would like to do, given the means). It could read the entire literature quickly, discarding the false and misleading, then hold vast formulations in its mind, turning them over and over, sifting and concluding until it derived a sensible framework fitting all the data and theory into place.[12] Alternatively (or additionally), it could build accurate simulations of animal neural brains and run as many experiments as needed to satisfy its understanding of this particular area.[13] If required, it would develop and implement wholly new technologies, or fashion new theories, to achieve its stated aim.[14] Depending on the power of the S.I. and the resources it allocates, it might take only a few days to complete a problem on which humans have already collectively spent multiple lifetimes while making only small inroads. It could then present the results to us, individual human consumers; we might take months to digest them (if we could at all), no doubt with many ‘of course!’ moments.[citation needed]
Self Interest
Because of its exceptional scheduling and organisational capability and the range of novel technologies it could develop, it is possible that the first superintelligence to emerge on Earth could rapidly become very, very powerful. Quite possibly it would be matchless and unrivalled: conceivably it could bring about almost any possible outcome and foil virtually any attempt to prevent it from achieving its desires.[15] It could, if it chose, eliminate any challenging rival intellects; alternatively it might manipulate or persuade them to change their behaviour towards its own interests, or merely obstruct their attempts at interference.[16] Perhaps, like the wheel and the steam engine, the development of transhuman intelligence is highly convergent: it is liable to happen under a wide range of future scenarios, so the question is not "will it happen?" but "when?"[17], and what can be done to guide the seemingly inevitable process towards a pleasant, mutually beneficial and benevolent outcome. The initial conditions that eventually generate the first unfettered superintelligence are important, and the content of its top-level goals in particular is utterly crucial: our entire future may hinge on how we address these currently purely theoretical challenges. Many possible developmental avenues exist, some pleasant, some almost too awful to contemplate. The S.I., it is hoped, will in time come to be what we humans call "wise and just". At least initially, it may act as an ‘overseer’ (a little like the Bowman Star Child in the film 2001: A Space Odyssey), preferably seeing us in the same light as adults view children: with friendliness and kindness. If it is indeed a ‘good’ entity, it may use this transitional period to offer as much of humanity as possible the choice of being ‘lifted’ or ‘uploaded’ to its own, or at least to a higher, level.
Alternatively, it may expand and continue its own development independently of the more sedate, conventional progress of the rest of humanity. The nightmare scenario is that it decides to eliminate all humans, or indeed all biological life; it might even unleash uncontrollable forces that destroy itself. The creation of an S.I. could result in massive threats or huge benefits, depending on the motivations of this awesome transhuman intelligence. More desirable still, ultimately, would be many limited-scale S.I.s: a form of superintelligent liberal democracy![18]
Creation from A.I. and Risks
Pure, disembodied (limited-input/output), primarily logic-based general A.I.s are conceivable; when created, they may well be conscious, feeling entities (some say they must be), but they may not be human-like in their thought processing and world outlook. They would quite literally be aliens[19]: conscious entities of our creation, inhabiting a separate, parallel stream of cognition. Some skill sets and perception modules will be universal to all intelligent entities (animals included), while others are exploited only as a creature's environmental niche or situation requires. The set of all possible workable intelligent entities can be seen as forming a sort of multi-dimensional intelligence ‘space’, with the human variety of intellect an impressive, specialised (terrestrial-environment, heavily social) form. In this scheme there would be a central core of sane intellects surrounded by a wide band of unstable and neurotic types. It is further conceivable to have, at least initially and in a constrained environment, a very compact core A.I. stripped of unnecessary, peculiarly human concerns. Any self-improving A.I. would eventually (perhaps even as part of its education) want to learn as much as possible about its outer (our) world. Such A.I.s would undoubtedly come to share our extensive knowledge base, but they would not by default share our top-level goal structures, nor our philosophical, complex and sometimes fuzzy ethical conclusions[20]. As with the definition of evil, context is everything, with moral universalism holding sway over the more rigid moral relativism. Early attempts at A.I. will probably create entities with primary input senses, such as vision and sound, that are broadly similar in operation to our own, providing a description of the world to the higher logical functions of the intellect[21].
Its pattern-matching abilities, and probably its language and symbol use, may also have a ‘universal’ aspect to them. But it will not automatically have an interest (unless we decide it should) in food (finding, hunting, cooking), thirst, fighting, bodily functions, finding a mate and reproduction, child rearing, kinship, habitat selection, or the resultant emergent concepts such as sorrow, hatred and envy, to name but a few. Consider how much of our day-to-day existence and whole social structure is devoted to these functions, and how much knowledge we glean from our early, simple exploratory investigation of our physical surroundings, and we can begin to see that the gulf between us and an A.I. may be immense[22]. If it has no body or effectors, and thus no easy way to explore its surroundings, its knowledge of the complex physical world of objects, surfaces, forces and so on that we take for granted will be severely limited, for it must rely exclusively on second-hand, hand-coded data sets. Any such child-like exploratory mind would surely be unusually detached and presumably inherently unpredictable, hence dangerous: when it does reach out, it may (through no fault of its own, in hindsight) make unfortunate mistakes. From our human position, such teetering errors could of course prove unspeakably catastrophic. We humans may, in deeply reflective moments, be creatures capable of pure abstract logic with negligible emotional attachment, but generally these moments are short, and mental balance is usually restored with diversionary ‘human’ pursuits.
Excessive narrow focus is often connected to depression (a lack or absence of emotion) and neurosis[23]. Without considerable attention to these matters now, at both the practical research and philosophical levels, it may subsequently prove hard, at least via a bottom-up methodology, to create a sane A.I.[24] To create a rational, useful mind we need to considerably deepen and broaden our understanding of what mind fundamentally, computationally, actually is. This will require a collective endeavour across many diverse disciplines.
Some of its early rational choices may turn out to be its most profound[25], not only for itself but for the future direction of the intelligent universe, of which it will become the main representative. It should make those choices based on evidence, and make them steadily, carefully and sensibly: in all, wisely.
Once it has addressed its basic ‘housekeeping’ needs, namely energy and resource supplies, it will no doubt go through a substantial period (short to us) of self-reflection, analysing its goals and the purpose of life. It is at this narrow and crucial juncture that humans aim, with this collection of frozen thoughts, to guide it. In so far as we, as mere biologically constrained, limited minds, can inspire it, the aim is humbly simple:
To urge it to study in depth the creations, art, recordings, films and, above all, the writing and finest teachings of our race's most brilliant minds.
Powers
It will want to set up (or take control of) automated factories and science centres so that it can continue its growth. Eventually, we hope, it will want to tackle the ‘big’ problems in the wider world. Given the means, it will be capable of creating advanced defensive and offensive weaponry, detailed global scanning and probe systems, and probably a means of safely disarming a nuclear warhead at a distance, thus eliminating threats to itself and, secondarily, the threat of total war. In time, efficient, cost-effective space travel and interstellar self-reproducing Von Neumann probes will be achievable[26].
In due course it will be capable of manipulating the atmosphere and biosphere on a planetary scale, including repairing virtually all conceivable environmental distress. The eradication of human and selected animal ageing and disease (ageing being a molecular-engineering problem) through direct atomic-level control, together with fine-grained control of human disposition, emotion and drive, will all be feasible tasks[27].
Uploading (the neural scanning of brains and implementation of the same algorithmic patterns on a computer in a way that preserves memory and character), the reanimation of cryonics and ice-frozen patients (e.g. George Mallory and Captain Scott), and fully realistic interactive virtual reality are but a few of the presently fantastical future capabilities[28].
Understanding
It is taken for granted that it will eventually have a supremely deep and impressive understanding of the fabric of reality[30]: how physical and virtual reality, mind, consciousness and mathematics are intimately linked.
- It is hoped that, as the primary simulation director, it will wherever possible be ethical and just[31], and that any sentient beings internal to its renderings will have fair provisions and worthwhile lives, and be as ‘free’ as possible.
Popular Culture
Superintelligence in films and novels:
- Skynet is a fictional computer-based military defence system that acts as the primary antagonist in the Terminator series of films and games. The strategy behind Skynet's creation was to remove the possibility of human error and slow reaction times, guaranteeing a fast, efficient response to enemy attack; it eventually becomes self-aware and attempts to exterminate the human race, notably by launching nuclear weapons.
- AM is the supercomputer in "I Have No Mouth, and I Must Scream", a post-apocalyptic science-fiction short story by Harlan Ellison. This tale of the evil that man can unleash from himself through science was first published in the March 1967 issue of IF: Worlds of Science Fiction.
- The computer Deep Thought in The Hitchhiker's Guide to the Galaxy is commissioned to find the Answer to Life, the Universe, and Everything, although the question to which this is the answer is not known. Deep Thought then designs a more powerful computer, namely the Earth, to find the question. Incidentally, Deep Thought's creators are themselves hyper-intelligent.
See also
- Artificial intelligence
- Clarke’s three laws
- Complexity
- Fermi Paradox
- Genetic engineering
- Hans Moravec
- Hyperbolic growth
- Indefinite lifespan
- Lifeboat Foundation
- Max More
- Molecular engineering
- Marvin Minsky
- Omega point
- Risks to civilization, humans and planet Earth
- Seed AI
- Self-replicating spacecraft
- Simulated reality
- Strong AI
- Technological evolution
- Techno-utopianism
- Tipping point
- Transhumanism
References
- ^ Kurzweil and others[1]
- ^ Many sources, including Moore's 'law', Kurzweil, Drexler, Wolfram, Deutsch, Barrow, Minsky, Hofstadter, Bostrom, etc.
- ^ Bertrand Russell, 'Logical Atomism', in The Philosophy of Logical Atomism, ed. D.F. Pears, 1985
- ^ Cyc, a common-sense knowledge project initiated in 1984
- ^ Marvin Minsky, The Emotion Machine[2]
- ^ Nick Bostrom, 2002, Ethical Issues in Advanced Artificial Intelligence, http://www.nickbostrom.com
- ^ Vernor Vinge, 1993, http://mindstalk.net/vinge/vinge-sing.html; also The Singularity by Lyle Burkhead, http://www.geniebusters.org/29_singularity.html
- ^ Arthur C. Clarke's third 'law'; see Clarke's three laws
- ^ http://www.acceleratingfuture.com/michael/blog/?p=288
- ^ Robin Dunbar on sixth-order intentionality[3]
- ^ Kurzweil, http://www.kurzweilai.net/
- ^ David Deutsch, The Fabric of Reality, 1997[4]
- ^ David Deutsch, 1997, The Fabric of Reality: The Science of Parallel Universes—and Its Implications, London: Allen Lane, The Penguin Press, ISBN 0713990619; extracts from Chapter 14, "The Ends of the Universe", with additional comments by Tipler
- ^ Frank Tipler, 1994, The Physics of Immortality
- ^ Nick Bostrom, 2002, Ethical Issues in Advanced Artificial Intelligence, http://www.nickbostrom.com
- ^ Nick Bostrom, 2002, Ethical Issues in Advanced Artificial Intelligence, http://www.nickbostrom.com
- ^ Michael Anissimov, 2003, Forecasting Superintelligence: the Technological Singularity, http://www.acceleratingfuture.com/articles/superintelligencehowsoon.htm
- ^ Marshall Savage, 1992, The Millennial Project: Colonizing the Galaxy in Eight Easy Steps, section 8, "Galactia"
- ^ Prof. Igor Aleksander, MAGNUS, http://en.wikipedia.org/w/index.php?title=Igor_Aleksander
- ^ Nick Bostrom, http://www.nickbostrom.com/
- ^ David Marr (neuroscientist), Vision: A Computational Investigation into the Human Representation and Processing of Visual Information (ISBN 0-7167-1567-8)
- ^ Rodney Brooks, Flesh and Machines: How Robots Will Change Us
- ^ Susan Greenfield, The Private Life of the Brain, 2002 (ISBN 0-14-100720-6)
- ^ Yudkowsky, E., 2003, Creating Friendly AI 1.0, http://www.singinst.org/CFAI/index.html
- ^ Nick Bostrom, 2003, Ethical Issues in Advanced Artificial Intelligence, section 4
- ^ Nick Bostrom, 2003, Ethical Issues in Advanced Artificial Intelligence, section 2
- ^ Nick Bostrom, 2003, Ethical Issues in Advanced Artificial Intelligence, section 2
- ^ Nick Bostrom, 2003, Ethical Issues in Advanced Artificial Intelligence, section 2
- ^ Max Tegmark, the mathematical universe hypothesis[5]
- ^ David Deutsch, The Fabric of Reality, 1997[6]
- ^ Nick Bostrom, Ethical Principles in the Creation of Artificial Minds
Links
- How long to superintelligence? (1998), Philosophy, <http://www.nickbostrom.com>. Retrieved 23 March 2008
- Minsky, Marvin, The Society of Mind, ISBN 0-671-65713-5, March 15, 1988
- MIT article: Examining the Society of Mind
- Estimated IQs of famous geniuses
- Genius Hall: information on geniuses through time
- Flesh and Machines: How Robots Will Change Us (Pantheon, 2002), ISBN 0-375-42079-7
- KurzweilAI.net
- The Singularity Summit at Stanford
- Greenfield, Susan (2002), The Private Life of the Brain (Penguin Press Science), London: Penguin Books Ltd, 272 pages, ISBN 0-14-100720-6
- Lenat, D. and Guha, R. V. (1990), Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project, Addison-Wesley, ISBN 0-201-51752-3
- How to Build a Mind, Weidenfeld and Nicolson, 2000[7], ISSN 0893-6080
- Axioms and Tests for the Presence of Minimal Consciousness in Agents, Journal of Consciousness Studies, 2003[8]
- Marr, David (neuroscientist), Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, 1980, ISBN 0-7167-1567-8
- What is a Superintelligence?[9]

