Citations of:

The Singularity and Machine Ethics

In Amnon H. Eden & James H. Moor (eds.), Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer. pp. 101-125 (2012)

  • Robots, Law and the Retribution Gap. John Danaher - 2016 - Ethics and Information Technology 18 (4):299–309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • Why AI Doomsayers are Like Sceptical Theists and Why it Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
  • Advantages of artificial intelligences, uploads, and digital minds. Kaj Sotala - 2012 - International Journal of Machine Consciousness 4 (1):275-291.
    I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. Hardware advantages include greater serial speeds and greater parallel speeds. Self-improvement advantages include improvement of algorithms, design of new mental modules, and modification of motivational system. Co-operative advantages include copyability, perfect co-operation, improved communication, and transfer of skills. Human handicaps include computational limitations and faulty heuristics, human-centric biases, and socially motivated cognition. The shape of hardware (...)
  • Artificial Intelligence: Arguments for Catastrophic Risk. Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)
  • Why Machines Will Never Rule the World: Artificial Intelligence without Fear. Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence (...)
  • Interdependence as the key for an ethical artificial autonomy. Filippo Pianca & Vieri Giuliano Santucci - forthcoming - AI and Society:1-15.
    Currently, the autonomy of artificial systems, robotic systems in particular, is certainly one of the most debated issues, both from the perspective of technological development and its social impact and ethical repercussions. While theoretical considerations often focus on scenarios far beyond what can be concretely hypothesized from the current state of the art, the term autonomy is still used in a vague or overly general way. This limits the possibility of a precise analysis of such an important issue, thus leading (...)
  • An argument for the impossibility of machine intelligence (preprint). Jobst Landgrebe & Barry Smith - 2021 - arXiv.
    Since the noun phrase 'artificial intelligence' (AI) was coined, it has been debated whether humans are able to create intelligence using technology. We shed new light on this question from the point of view of thermodynamics and mathematics. First, we define what it is to be an agent (device) that could be the bearer of AI. Then we show that the mainstream definitions of 'intelligence' proposed by Hutter and others and still accepted by the AI community are too weak even (...)
  • AI-Completeness: Using Deep Learning to Eliminate the Human Factor. Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e., categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine, while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
  • Foundations of an Ethical Framework for AI Entities: the Ethics of Systems. Andrej Dameski - 2020 - Dissertation, University of Luxembourg.
    The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what (...)
  • Long-Term Trajectories of Human Civilization. Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin & Roman V. Yampolskiy - 2019 - Foresight 21 (1):53-83.
    Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe (...)
  • Social choice ethics in artificial intelligence. Seth D. Baum - 2020 - AI and Society 35 (1):165-176.
    A major approach to the ethics of artificial intelligence is to use social choice, in which the AI is designed to act according to the aggregate views of society. This is found in the AI ethics of “coherent extrapolated volition” and “bottom-up ethics”. This paper shows that the normative basis of AI social choice ethics is weak because there is no single aggregate ethical view of society. Instead, the design of social choice AI faces three (...)
  • The problem of superintelligence: political, not technological. Wolfhart Totschnig - 2019 - AI and Society 34 (4):907-920.
    The thinkers who have reflected on the problem of a coming superintelligence have generally seen the issue as a technological problem, a problem of how to control what the superintelligence will do. I argue that this approach is probably mistaken because it is based on questionable assumptions about the behavior of intelligent agents and, moreover, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem posed by (...)
  • Aligning artificial intelligence with human values: reflections from a phenomenological perspective. Shengnan Han, Eugene Kelly, Shahrokh Nikou & Eric-Oluf Svee - 2022 - AI and Society 37 (4):1383-1395.
    Artificial Intelligence (AI) must be directed at humane ends. The development of AI has produced great uncertainty about how to ensure AI alignment with human values (AI value alignment) throughout AI operations, from design to use. To address this problem, we adopt the phenomenological theories of material values and technological mediation as a first step. In this paper, we first discuss AI value alignment in the relevant AI studies. Second, we briefly present what material values are and (...)
  • Using Dreyfus’ legacy to understand justice in algorithm-based processes. David Casacuberta & Ariel Guersenzvaig - 2019 - AI and Society 34 (2):313-319.
    As AI is linked to more and more aspects of our lives, the need for algorithms that can make decisions that are not only accurate but also fair becomes apparent. This can be seen both in discussions of future trends such as autonomous vehicles or the issue of superintelligence, and in actual implementations of machine learning used to decide whether a person should be admitted to a certain university or will be able to repay a credit. In this paper, we (...)
  • 4.5 AI (Artificial Intelligence) and Ultimate Reality. Anthony Cristiano - 2020 - Ultimate Reality and Meaning 36 (3-4):127-143.