  • The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having (...)
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Why change your beliefs rather than your desires? Two puzzles. Olav Benjamin Vassend - 2021 - Analysis 81 (2):275-281.
    In standard decision theory, the probability function ought to be updated in light of evidence, but the utility function generally stays fixed. However, there is nothing in the formal theory that prevents one from instead updating the utility function, while keeping the probability function fixed. Moreover, there are good arguments for updating the utilities and not just the probabilities. Hence, the first puzzle is whether there is anything that justifies updating beliefs, but not desires, in light of evidence. The paper (...)
  • An AGI Modifying Its Utility Function in Violation of the Strong Orthogonality Thesis. James D. Miller, Roman Yampolskiy & Olle Häggström - 2020 - Philosophies 5 (4):40.
    An artificial general intelligence (AGI) might have an instrumental drive to modify its utility function to improve its ability to cooperate, bargain, promise, threaten, and resist and engage in blackmail. Such an AGI would necessarily have a utility function that was at least partially observable and that was influenced by how other agents chose to interact with it. This instrumental drive would conflict with the strong orthogonality thesis since the modifications would be influenced by the AGI’s intelligence. AGIs in highly (...)
  • Responsibility and Control. [REVIEW] Michael McKenna - 2001 - Journal of Philosophy 98 (2):93-100.
  • Universal intelligence: A definition of machine intelligence. Shane Legg & Marcus Hutter - 2007 - Minds and Machines 17 (4):391-444.
    A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: we take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. (...)
  • Existential risks: Analyzing human extinction scenarios and related hazards. Nick Bostrom - 2002 - Journal of Evolution and Technology 9 (1).
    Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from (...)
  • The singularity: A philosophical analysis. David J. Chalmers - 2010 - Journal of Consciousness Studies 17 (9-10):7-65.
    What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far (...)