  • On the promotion of safe and socially beneficial artificial intelligence. Seth D. Baum - 2017 - AI and Society 32 (4):543-551.
    This paper discusses means for promoting artificial intelligence that is designed to be safe and beneficial for society. The promotion of beneficial AI is a social challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently, the AI field is focused mainly on building AIs that are more capable, with little regard to social impacts. Two types of measures are available for encouraging the AI field to shift more toward building beneficial AI. Extrinsic measures impose constraints (...)
  • Human Enhancement. Nick Bostrom & Julian Savulescu (eds.) - 2009 - Oxford University Press.
  • Global technology regulation and potentially apocalyptic technological threats. James J. Hughes - 2007 - In Fritz Allhoff, Patrick Lin, James Moor & John Weckert (eds.), Nanoethics: The Ethical and Social Implications of Nanotechnology. New York: Wiley. pp. 201-214.
    In 2000 Bill Joy proposed that the best way to prevent technological apocalypse was to "relinquish" emerging bio-, info- and nanotechnologies. His essay introduced many watchdog groups to the dangers that futurists had been warning of for decades. One such group, ETC, has called for a moratorium on all nanotechnological research until all safety issues can be investigated and social impacts ameliorated. In this essay I discuss the differences and similarities of regulating bio- and nanotechnological innovation to the efforts to (...)
  • Should Humanity Build a Global AI Nanny to Delay the Singularity Until It's Better Understood? Ben Goertzel - 2012 - Journal of Consciousness Studies 19 (1-2):96.
    Chalmers suggests that, if a Singularity fails to occur in the next few centuries, the most likely reason will be 'motivational defeaters', i.e. at some point humanity or human-level AI may abandon the effort to create dramatically superhuman artificial general intelligence. Here I explore one plausible way in which that might happen: the deliberate human creation of an 'AI Nanny' with mildly superhuman intelligence and surveillance powers, designed either to forestall the Singularity eternally, or to delay the Singularity until humanity more (...)