  • Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  • The Ethics of AI Ethics: An Evaluation of Guidelines. Thilo Hagendorff - 2020 - Minds and Machines 30 (1):99-120.
    Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, (...)
  • Forbidden knowledge in machine learning – reflections on the limits of research and publication. Thilo Hagendorff - 2021 - AI and Society 36 (3):767-781.
    Certain research strands can yield “forbidden knowledge”. This term refers to knowledge that is considered too sensitive, dangerous or taboo to be produced or shared. Discourses about such publication restrictions are already entrenched in scientific fields like IT security, synthetic biology or nuclear physics research. This paper makes the case for transferring this discourse to machine learning research. Some machine learning applications can very easily be misused and unfold harmful consequences, for instance, with regard to generative video or text synthesis, (...)
  • Provably Safe Artificial General Intelligence via Interactive Proofs. Kristen Carlson - 2021 - Philosophies 6 (4):83.
    Methods are currently lacking to _prove_ artificial general intelligence (AGI) safety. An AGI ‘hard takeoff’ is possible, in which first-generation AGI_1 rapidly triggers a succession of more powerful AGI_n that differ dramatically in their computational capabilities (AGI_n ≪ AGI_n+1). No proof exists that AGI will benefit humans or of a sound value-alignment method. Numerous paths toward human extinction or subjugation have been identified. We suggest that probabilistic proof methods are the fundamental paradigm for (...)
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Eight Kinds of Critters: A Moral Taxonomy for the Twenty-Second Century. Michael Bess - 2018 - Journal of Medicine and Philosophy 43 (5):585-612.
    Over the coming century, the accelerating advance of bioenhancement technologies, robotics, and artificial intelligence (AI) may significantly broaden the qualitative range of sentient and intelligent beings. This article proposes a taxonomy of such beings, ranging from modified animals to bioenhanced humans to advanced forms of robots and AI. It divides these diverse beings into three moral and legal categories—animals, persons, and presumed persons—describing the moral attributes and legal rights of each category. In so doing, the article sets forth a framework (...)