Superintelligence

A superintelligence or hyperintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent.

Oxford futurist Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."[1] The chess program Fritz falls short of superintelligence even though it is much better than humans at chess, because Fritz cannot outperform humans in other tasks.[2] Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first sentient machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity, either as a single being or as a new species, to become much more powerful than humans and to displace them.[3]

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.[4]

Contents

  • 1 Feasibility of artificial superintelligence
  • 2 Feasibility of biological superintelligence
  • 3 Forecasts (when)
  • 4 Design considerations
  • 5 Potential danger to human survival
  • 6 Citations
  • 7 Bibliography
  • 8 External links

Feasibility of artificial superintelligence

[Image: Progress in machine classification of images. The chart plots the error rate of AI by year; the red line shows the error rate of a trained human.]

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.[5]

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulable by synthetic materials.[6] He also notes that human intelligence was able to evolve biologically, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI.[7] Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.[8]
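
Chalmers's appeal to evolutionary algorithms can be made concrete with a toy program. The Python sketch below is a minimal illustration of the general technique, blind variation plus selection climbing toward a fitness target; the bit-string genome, the target, the population size, and the mutation rate are all assumptions chosen for this illustration, and it shows only the shape of the method, not a path to human-level AI.

    import random

    TARGET = [1] * 32  # an arbitrary "maximally fit" genome, an assumption of this toy

    def fitness(genome):
        # Number of positions matching the target; higher is fitter.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.02):
        # Flip each bit independently with small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    # Random initial population, then repeated selection and mutation.
    population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break  # variation and selection alone found a perfect genome
        parents = population[:10]
        # Keep the parents (elitism) and refill the rest with mutated copies.
        population = parents + [mutate(random.choice(parents)) for _ in range(40)]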

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself – a feature called "recursive self-improvement". It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.
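
A toy model can show why recursive self-improvement is argued to behave so differently from ordinary engineering progress. The sketch below is an illustrative assumption, not a model drawn from the sources cited here: one system receives a fixed capability gain per step, while the other's gain scales with its current capability, a crude stand-in for improvements that improve the improver.

    def externally_improved(steps=30, rate=1.0):
        capability = 1.0
        for _ in range(steps):
            capability += rate  # fixed gains supplied by outside engineers
        return capability

    def self_improving(steps=30, rate=0.05):
        capability = 1.0
        for _ in range(steps):
            capability += rate * capability ** 2  # gains scale with current capability
            if capability > 1e9:
                break  # growth has effectively diverged
        return capability

    print(externally_improved())  # 31.0: steady, linear progress
    print(self_improving())       # blows past 1e9 within ~27 steps: the "explosion" regime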

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)."[9] Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long chains of actions.
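
The arithmetic behind these comparisons is easy to check; the snippet below simply recomputes the two ratios quoted above.

    neuron_hz = 200                  # peak neuron firing rate quoted above
    cpu_hz = 2e9                     # a modern ~2 GHz microprocessor
    print(cpu_hz / neuron_hz)        # 10000000.0: seven orders of magnitude

    axon_speed = 120                 # m/s, peak axonal signal speed quoted above
    light_speed = 3e8                # m/s, optical signalling between cores
    print(light_speed / axon_speed)  # 2500000.0: a factor of 2.5 million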

Another advantage of computers is modularity: their size and computational capacity can be increased. A non-human (or modified human) brain, like many supercomputers, could become much larger than a present-day human brain. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[10] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.[11]

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.[12]

Feasibility of biological superintelligence

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[13] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a collective superintelligence.[15][16]

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved by means such as a direct brain–computer interface, though designing such an interface may itself be an AI-complete problem.[17]

Forecasts (when)

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on timescales. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[18] In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft Academic Search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.[19]
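
The survey's three confidence levels are three points on each respondent's subjective cumulative distribution over arrival years. As a reading aid, the sketch below linearly interpolates between the median years reported above; the interpolation scheme is an assumption of this illustration, not part of the survey's methodology.

    # Median years at which respondents placed 10%, 50%, and 90% probability.
    points = [(2024, 0.10), (2050, 0.50), (2070, 0.90)]

    def prob_by(year):
        # Piecewise-linear interpolation between the three reported points.
        if year <= points[0][0]:
            return points[0][1]
        for (y0, p0), (y1, p1) in zip(points, points[1:]):
            if year <= y1:
                return p0 + (p1 - p0) * (year - y0) / (y1 - y0)
        return points[-1][1]

    print(round(prob_by(2040), 2))  # 0.35 under this linear reading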

Design considerations

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:[20]

  • The coherent extrapolated volition (CEV) proposal is that it should have the values upon which humans would converge.
  • The moral rightness (MR) proposal is that it should value moral rightness.
  • The moral permissibility (MP) proposal is that it should value staying within the bounds of moral permissibility (and otherwise have CEV values).

Responding to Bostrom, Santos-Lang raised the concern that developers may attempt to start with a single kind of superintelligence.[21]

Potential danger to human survival

It has been suggested that learning computers that rapidly become superintelligent may take unforeseen actions, or that robots would out-compete humanity (one technological singularity scenario).[22] Researchers have argued that, by way of an "intelligence explosion" sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[23]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[24]

Eliezer Yudkowsky explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[25]

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.[26]

Citations

  1. ^  
  2. ^ Bostrom 2014, p. 22.
  3. ^  
  4. ^ Legg 2008, pp. 135-137.
  5. ^ Chalmers 2010, p. 7.
  6. ^ Chalmers 2010, pp. 7-9.
  7. ^ Chalmers 2010, pp. 10-11.
  8. ^ Chalmers 2010, pp. 11-13.
  9. ^ Bostrom 2014, p. 59.
  10. ^  
  11. ^ Bostrom 2014, pp. 56-57.
  12. ^ Bostrom 2014, pp. 52, 59-61.
  13. ^  
  14. ^ Bostrom 2014, pp. 37-39.
  15. ^ Bostrom 2014, p. 39.
  16. ^ Bostrom 2014, pp. 48-49.
  17. ^ Bostrom 2014, pp. 36-37, 42, 47.
  18. ^ Maker, Meg Houston (July 13, 2006). "AI@50: First Poll". Archived from the original on 2014-05-13.
  19. ^ Müller & Bostrom 2014, pp. 3-4, 6, 9-12.
  20. ^ Bostrom 2014, pp. 209-221.
  21. ^ Santos-Lang 2014, pp. 16-19.
  22. ^ Bill Joy, "Why the future doesn't need us", Wired magazine. See also technological singularity. Nick Bostrom (2002), "Ethical Issues in Advanced Artificial Intelligence", http://www.nickbostrom.com.
  23. ^ Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics." In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.
  24. ^ Bostrom, Nick. 2003. "Ethical Issues in Advanced Artificial Intelligence." In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.
  25. ^ Eliezer Yudkowsky (2008), Artificial Intelligence as a Positive and Negative Factor in Global Risk.
  26. ^ Hibbard 2002, pp. 155-163.

Bibliography

  • Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
  • Chalmers, David (2010). "The Singularity: A Philosophical Analysis". Journal of Consciousness Studies 17: 7-65.
  • Hibbard, Bill (2002). Super-Intelligent Machines. New York: Kluwer Academic/Plenum Publishers.
  • Legg, Shane (2008). Machine Super Intelligence (PDF) (PhD). Department of Informatics, University of Lugano. Retrieved September 19, 2014.
  • Müller, Vincent C.; Bostrom, Nick (2014). "Future Progress in Artificial Intelligence: A Survey of Expert Opinion".
  • Santos-Lang, Christopher (2014). "Our responsibility to manage evaluative diversity" (PDF). ACM SIGCAS Computers & Society 44 (2): 16-19.

External links

  • Bill Gates Joins Stephen Hawking in Fears of a Coming Threat from "Superintelligence"
  • Will Superintelligent Machines Destroy Humanity?
  • Apple Co-founder Has Sense of Foreboding About Artificial Superintelligence
