  • Computational Goals, Values and Decision-Making.Louise A. Dennis - 2020 - Science and Engineering Ethics 26 (5):2487-2495.
    Considering the popular framing of an artificial intelligence as a rational agent that always seeks to maximise its expected utility, referred to as its goal, one of the features attributed to such rational agents is that they will never select an action which will change their goal. Therefore, if such an agent is to be friendly towards humanity, one argument goes, we must understand how to specify this friendliness in terms of a utility function. Wolfhart Totschnig argues, in contrast, that (...)
  • Polity Without Politics? Artificial Intelligence Versus Democracy: Lessons From Neal Asher’s Polity Universe.Ivana Damnjanović - 2015 - Bulletin of Science, Technology and Society 35 (3-4):76-83.
    Is it time for politics and political theory to face the challenge of artificial intelligence (AI)? It seems to be the case that political theory constantly lags behind technological developments. With rapid developments in the field of AI, a common estimate is that technological singularity will probably happen in the next 50 to 200 years. Even regardless of the time frame, the very possibility of superhumanly smart AIs poses serious political questions and calls for some serious political decisions. Luckily, some (...)
  • The problem of superintelligence: political, not technological.Wolfhart Totschnig - 2019 - AI and Society 34 (4):907-920.
    The thinkers who have reflected on the problem of a coming superintelligence have generally seen the issue as a technological problem, a problem of how to control what the superintelligence will do. I argue that this approach is probably mistaken because it is based on questionable assumptions about the behavior of intelligent agents and, moreover, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem posed by (...)
  • Fully Autonomous AI.Wolfhart Totschnig - 2020 - Science and Engineering Ethics 26 (5):2473-2485.
    In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely the (...)
  • Implementations in Machine Ethics: A Survey.Suzanne Tolmeijer, Markus Kneer, Cristina Sarasua, Markus Christen & Abraham Bernstein - 2020 - ACM Computing Surveys 53 (6):1-38.
    Increasingly complex and autonomous systems require machine ethics to maximize the benefits and minimize the risks to society arising from the new technology. It is challenging to decide which type of ethical theory to employ and how to implement it effectively. This survey provides a threefold contribution. First, it introduces a trimorphic taxonomy to analyze machine ethics implementations with respect to their object (ethical theories), as well as their nontechnical and technical aspects. Second, an exhaustive selection and description of relevant (...)
  • Leakproofing the Singularity.Roman V. Yampolskiy - 2012 - Journal of Consciousness Studies 19 (1-2):194-214.
    This paper attempts to formalize and to address the ‘leakproofing’ of the Singularity problem presented by David Chalmers. The paper begins with the definition of the Artificial Intelligence Confinement Problem. After analysis of existing solutions and their shortcomings, a protocol is proposed aimed at making a more secure confinement environment which might delay potential negative effect from the technological singularity while allowing humanity to benefit from the superintelligence.
  • Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92-109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral Agents may consider certain philosophical (...)