Citations of:

Future progress in artificial intelligence: A survey of expert opinion

In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 553-571 (2016)

  • Inteligencia artificial y ética de la responsabilidad.Antonio Luis Terrones Rodriguez - 2018 - Cuestiones de Filosofía 22 (4):141-170.
    Artificial intelligence (AI) has brought great advances for humanity in many fields; however, this does not mean that its activity is exempt from ethical reflection. Humanity is facing, and will continue to face, numerous challenges that will force it to develop new ideas in order to live up to the times. Among these challenges are the labour and economic challenge, human enhancement, the military and security challenge, and the political and legal challenge, among (...)
  • Singularitarianism and Schizophrenia.Vasileios Galanos - 2016 - AI and Society:1-18.
    Given the contemporary ambivalent standpoints toward the future of artificial intelligence, recently denoted as the phenomenon of Singularitarianism, Gregory Bateson’s core theories of ecology of mind, schismogenesis, and double bind, are hereby revisited, taken out of their respective sociological, anthropological, and psychotherapeutic contexts and recontextualized in the field of Roboethics as to a twofold aim: (a) the proposal of a rigid ethical standpoint toward both artificial and non-artificial agents, and (b) an explanatory analysis of the reasons bringing about such a (...)
  • Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History.Phil Torres - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press.
    This chapter argues that dual-use emerging technologies are distributing unprecedented offensive capabilities to nonstate actors. To counteract this trend, some scholars have proposed that states become a little “less liberal” by implementing large-scale surveillance policies to monitor the actions of citizens. This is problematic, though, because the distribution of offensive capabilities is also undermining states’ capacity to enforce the rule of law. I will suggest that the only plausible escape from this conundrum, at least from our present vantage point, is (...)
  • The Challenges of Artificial Judicial Decision-Making for Liberal Democracy.Christoph Winter - 2022 - In P. Bystranowski, Bartosz Janik & M. Prochnicki (eds.), Judicial Decision-Making: Integrating Empirical and Theoretical Perspectives. Springer Nature. pp. 179-204.
    The application of artificial intelligence (AI) to judicial decision-making has already begun in many jurisdictions around the world. While AI seems to promise greater fairness, access to justice, and legal certainty, issues of discrimination and transparency have emerged and put liberal democratic principles under pressure, most notably in the context of bail decisions. Despite this, there has been no systematic analysis of the risks to liberal democratic values from implementing AI into judicial decision-making. This article sets out to fill this (...)
  • Measuring progress in robotics: Benchmarking and the ‘measure-target confusion’.Vincent C. Müller - 2019 - In Fabio Bonsignorio, John Hallam, Elena Messina & Angel P. Del Pobil (eds.), Metrics of sensory motor coordination and integration in robots and animals. Springer. pp. 169-179.
    While it is often said that robotics should aspire to reproducible and measurable results that allow benchmarking, I argue that a focus on benchmarking can be a hindrance for progress in robotics. The reason is what I call the ‘measure-target confusion’, the confusion between a measure of progress and the target of progress. Progress on a benchmark (the measure) is not identical to scientific or technological progress (the target). In the past, several academic disciplines have been led into pursuing only (...)
  • Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Agency, qualia and life: connecting mind and body biologically.David Longinotti - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 43-56.
    Many believe that a suitably programmed computer could act for its own goals and experience feelings. I challenge this view and argue that agency, mental causation and qualia are all founded in the unique, homeostatic nature of living matter. The theory was formulated for coherence with the concept of an agent, neuroscientific data and laws of physics. By this method, I infer that a successful action is homeostatic for its agent and can be caused by a feeling - which does (...)
  • How Philosophy of Mind Can Shape the Future.Susan Schneider & Pete Mandik - 2018 - In Amy Kind (ed.), Philosophy of Mind in the Twentieth and Twenty-First Centuries: The History of the Philosophy of Mind, Volume 6. New York: Routledge. pp. 303-319.
  • Editorial: Risks of artificial intelligence.Vincent C. Müller - 2015 - In Vincent C. Müller (ed.), Risks of Artificial Intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically (...)
  • Ethical Issues for Autonomous Trading Agents.Michael P. Wellman & Uday Rajan - 2017 - Minds and Machines 27 (4):609-624.
    The rapid advancement of algorithmic trading has demonstrated the success of AI automation, as well as gaps in our understanding of the implications of this technology proliferation. We explore ethical issues in the context of autonomous trading agents, both to address problems in this domain and as a case study for regulating autonomous agents more generally. We argue that increasingly competent trading agents will be capable of initiative at wider levels, necessitating clarification of ethical and legal boundaries, and corresponding development (...)
  • Introduction: The prize essays.Emil Višňovský - 2021 - Human Affairs 31 (1):3-5.
  • Imaginative Value Sensitive Design: Using Moral Imagination Theory to Inform Responsible Technology Design.Steven Umbrello - 2020 - Science and Engineering Ethics 26 (2):575-595.
    Safe-by-Design (SBD) frameworks for the development of emerging technologies have become an ever more popular means by which scholars argue that transformative emerging technologies can safely incorporate human values. One such popular SBD methodology is called Value Sensitive Design (VSD). A central tenet of this design methodology is to investigate stakeholder values and design those values into technologies during early stage research and development (R&D). To accomplish this, the VSD framework mandates that designers consult the philosophical and ethical literature to (...)
  • Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence.Steven Umbrello & Stefan Lorenz Sorgner - 2019 - Philosophies 4 (2):24.
    Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in (...)
  • God-like robots: the semantic overlap between representation of divine and artificial entities.Nicolas Spatola & Karolina Urbanska - 2020 - AI and Society 35 (2):329-341.
    Artificial intelligence and robots may progressively take a more and more prominent place in our daily environment. Interestingly, in the study of how humans perceive these artificial entities, science has mainly taken an anthropocentric perspective (i.e., how distant from humans these agents are). Considering people’s fears of and expectations about robots and artificial intelligence, they tend to be simultaneously afraid of and allured by them, much as they would be by conceptualisations related to divine entities (e.g., gods). In two experiments, (...)
  • Out of the laboratory and into the classroom: the future of artificial intelligence in education.Daniel Schiff - 2021 - AI and Society 36 (1):331-348.
    Like previous educational technologies, artificial intelligence in education (AIEd) threatens to disrupt the status quo, with proponents highlighting the potential for efficiency and democratization, and skeptics warning of industrialization and alienation. However, unlike frequently discussed applications of AI in autonomous vehicles, military and cybersecurity concerns, and healthcare, AI’s impacts on education policy and practice have not yet captured the public’s attention. This paper, therefore, evaluates the status of AIEd, with special attention to intelligent tutoring systems and anthropomorphized artificial educational agents. I (...)
  • AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines.Scott Robbins - 2020 - AI and Society 35 (2):391-400.
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI as well as the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
  • Friendly Superintelligent AI: All You Need is Love.Michael Prinzing - 2017 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Berlin: Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure-long before one arrives-that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in (...)
  • The right to refuse diagnostics and treatment planning by artificial intelligence.Thomas Ploug & Søren Holm - 2020 - Medicine, Health Care and Philosophy 23 (1):107-114.
    In an analysis of artificially intelligent systems for medical diagnostics and treatment planning we argue that patients should be able to exercise a right to withdraw from AI diagnostics and treatment planning for reasons related to (1) the physician’s role in the patients’ formation of and acting on personal preferences and values, (2) the bias and opacity problem of AI systems, and (3) rational concerns about the future societal effects of introducing AI systems in the health care sector.
  • On the moral status of social robots: considering the consciousness criterion.Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with (...)
  • Will intelligent machines become moral patients?Parisa Moosavi - forthcoming - Philosophy and Phenomenological Research.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are (...)
  • Is it time for robot rights? Moral status in artificial entities.Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  • The Ethics of AI Ethics: An Evaluation of Guidelines.Thilo Hagendorff - 2020 - Minds and Machines 30 (1):99-120.
    Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, (...)
  • Discourse analysis of academic debate of ethics for AGI.Ross Graham - 2022 - AI and Society 37 (4):1519-1532.
    Artificial general intelligence (AGI), defined as machine intelligence with competence as great as or greater than that of humans, is a greatly anticipated technology with non-trivial existential risks. To date, social scientists have dedicated little effort to the ethics of AGI or AGI researchers. This paper employs inductive discourse analysis of the academic literature of two intellectual groups writing on the ethics of AGI—applied and/or ‘basic’ scientific disciplines henceforth referred to as technicians (e.g., computer science, electrical engineering, physics), and philosophy-adjacent disciplines henceforth referred to as PADs (...)
  • Moral Status and Intelligent Robots.John-Stewart Gordon & David J. Gunkel - 2021 - Southern Journal of Philosophy 60 (1):88-117.
  • What’s Wrong with Designing People to Serve?Bartek Chomanski - 2019 - Ethical Theory and Moral Practice 22 (4):993-1015.
    In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, (...)
  • Teasing out Artificial Intelligence in Medicine: An Ethical Critique of Artificial Intelligence and Machine Learning in Medicine.Mark Henderson Arnold - 2021 - Journal of Bioethical Inquiry 18 (1):121-139.
    The rapid adoption and implementation of artificial intelligence in medicine creates an ontologically distinct situation from prior care models. There are both potential advantages and disadvantages with such technology in advancing the interests of patients, with resultant ontological and epistemic concerns for physicians and patients relating to the instantiation of AI as a dependent, semi- or fully-autonomous agent in the encounter. The concept of libertarian paternalism potentially exercised by AI (and those who control it) has created challenges to conventional assessments (...)
  • Zukunft.Anja Leser - 2016 - Swiss Philosophical Preprints.
    What future awaits humanity? Or should we rather ask: which possible futures lie within humanity’s power to shape? What happens if we invent a superintelligent machine that even allows us to escape mortality? What influence do these technologies have on work, or on our perception of time?
  • AI-Completeness: Using Deep Learning to Eliminate the Human Factor.Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
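    The abstract above summarizes the P/NP distinction; the following is a minimal, hedged Python sketch (purely illustrative, not from Šekrst's chapter) of the verification half of that distinction: a candidate assignment for a Boolean satisfiability (SAT) instance, an NP problem, can be checked in time linear in the size of the formula, even though no polynomial-time method for finding such an assignment is known.

        # Illustrative sketch only: verifying an NP certificate in polynomial time,
        # here for Boolean satisfiability (SAT).
        from typing import Dict, List, Tuple

        # A clause is a list of (variable, is_positive) literals; a formula is a conjunction of clauses.
        Clause = List[Tuple[str, bool]]
        Formula = List[Clause]

        def verify_sat_certificate(formula: Formula, assignment: Dict[str, bool]) -> bool:
            """Return True iff `assignment` satisfies every clause; runs in time linear in the formula size."""
            for clause in formula:
                if not any(assignment.get(var, False) == positive for var, positive in clause):
                    return False  # an unsatisfied clause means the proposed certificate fails
            return True

        # Example: (x OR NOT y) AND (y OR z); the assignment below is a satisfying certificate.
        formula = [[("x", True), ("y", False)], [("y", True), ("z", True)]]
        print(verify_sat_certificate(formula, {"x": True, "y": True, "z": False}))  # prints True

    Finding such an assignment is, in general, the hard direction; checking one, as above, is easy — which is exactly the asymmetry the P-versus-NP question turns on.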
  • Robust Computer Algebra, Theorem Proving, and Oracle AI.Gopal P. Sarma & Nick J. Hay - unknown
    In the context of superintelligent AI systems, the term “oracle” has two meanings. One refers to modular systems queried for domain-specific tasks. Another usage, referring to a class of systems which may be useful for addressing the value alignment and AI control problems, is a superintelligent AI system that only answers questions. The aim of this manuscript is to survey contemporary research problems related to oracles which align with long-term research goals of AI safety. We examine existing question answering systems (...)
  • Mammalian Value Systems.Gopal P. Sarma & Nick J. Hay - 2016 - arXiv preprint arXiv:1607.08289.
    Characterizing human values is a topic deeply interwoven with the sciences, humanities, political philosophy, art, and many other human endeavors. In recent years, a number of thinkers have argued that accelerating trends in computer science, cognitive science, and related disciplines foreshadow the creation of intelligent machines which meet and ultimately surpass the cognitive abilities of human beings, thereby entangling an understanding of human values with future technological development. Contemporary research accomplishments suggest increasingly sophisticated AI systems becoming widespread and responsible for (...)
  • Singularity and Coordination Problems: Pandemic Lessons from 2020.Nicholas Kluge Corrêa & Nythamar De Oliveira - 2021 - Journal of Futures Studies 26 (1):61-74.
    One of the strands of the Transhumanist movement, Singularitarianism, studies the possibility that high-level artificial intelligence may be created in the future, debating ways to ensure that the interaction between human society and advanced artificial intelligence can occur safely and beneficially. But how can we guarantee this safe interaction? Are there any indications that a Singularity may be on the horizon? In trying to answer these questions, we give a brief introduction to the area of safety research in artificial intelligence. (...)
  • Agential Risks: A Comprehensive Introduction.Phil Torres - 2016 - Journal of Evolution and Technology 26 (2):31-47.
    The greatest existential threats to humanity stem from increasingly powerful advanced technologies. Yet the “risk potential” of such tools can only be realized when coupled with a suitable agent who, through error or terror, could use the tool to bring about an existential catastrophe. While the existential risk literature has provided many accounts of how advanced technologies might be misused and abused to cause unprecedented harm, no scholar has yet explored the other half of the agent-tool coupling, namely the agent. (...)
  • Autonomous killer robots are probably good news.Vincent C. Müller - 2016 - In Ezio Di Nucci & Filippo Santoni de Sio (eds.), Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons. London: Ashgate. pp. 67-81.
    Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, they do not take responsibility away (...)