Results for 'future of AI'

1000+ found
  1. The Future of War: The Ethical Potential of Leaving War to Lethal Autonomous Weapons.Steven Umbrello, Phil Torres & Angelo F. De Bellis - 2020 - AI and Society 35 (1):273-282.
    Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Specific interest is given particularly to ethical LAWs, which are artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a wide, but not exhaustive, survey (...)
    1 citation
  2. Artificial Intelligence and the Body: Dreyfus, Bickhard, and the Future of AI.Daniel Susser - 2013 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence. Berlin: Springer. pp. 277-287.
    For those who find Dreyfus’s critique of AI compelling, the prospects for producing true artificial human intelligence are bleak. An important question thus becomes, what are the prospects for producing artificial non-human intelligence? Applying Dreyfus’s work to this question is difficult, however, because his work is so thoroughly human-centered. Granting Dreyfus that the body is fundamental to intelligence, how are we to conceive of non-human bodies? In this paper, I argue that bringing Dreyfus’s work into conversation with the work of (...)
  3. Is There a Future for AI Without Representation?Vincent C. Müller - 2007 - Minds and Machines 17 (1):101-115.
    This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “New AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents, which, however, turns out to be crucial. Some of the more recent cognitive science suggests that we might do well to dispose of (...)
    7 citations
  4. What the Near Future of Artificial Intelligence Could Be.Luciano Floridi - 2019 - Philosophy and Technology 32 (1):1-15.
    In this article, I shall argue that AI’s likely developments and possible challenges are best understood if we interpret AI not as a marriage between some biological-like intelligence and engineered artefacts, but as a divorce between agency and intelligence, that is, the ability to solve problems successfully and the necessity of being intelligent in doing so. I shall then look at five developments: (1) the growing shift from logic to statistics, (2) the progressive adaptation of the environment to AI rather (...)
    12 citations
  5. The Ethics of AI in Health Care: A Mapping Review.Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - 2020 - Social Science & Medicine 260.
    This article presents a mapping review of the literature concerning the ethics of artificial intelligence (AI) in health care. The goal of this review is to summarise current debates and identify open questions for future research. Five literature databases were searched to support the following research question: how can the primary ethical risks presented by AI-health be categorised, and what issues must policymakers, regulators and developers consider in order to be ‘ethically mindful’? A series of screening stages were carried (...)
  6. Future Progress in Artificial Intelligence: A Survey of Expert Opinion.Vincent C. Müller & Nick Bostrom - 2016 - In Vincent Müller (ed.), Fundamental Issues of Artificial Intelligence. Springer. pp. 553-571.
    There is, in some quarters, concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development, and how fast they see these developing. We thus (...)
    24 citations
  7. Artificial Intelligence, Deepfakes and a Future of Ectypes.Luciano Floridi - 2018 - Philosophy and Technology 31 (3):317-321.
    AI, especially in the case of Deepfakes, has the capacity to undermine our confidence in the original, genuine, authentic nature of what we see and hear. And yet digital technologies, in the form of databases and other detection tools, also make it easier to spot forgeries and to establish the authenticity of a work. Using the notion of ectypes, this paper discusses current conceptions of authenticity and reproduction and examines how, in the future, these might be adapted for use (...)
    5 citations
  8. The Promise and Perils of AI in Medicine.Robert Sparrow & Joshua James Hatherley - 2019 - International Journal of Chinese and Comparative Philosophy of Medicine 17 (2):79-109.
    What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It’s also highly likely to impact on the organisational and business practices (...)
  9. How to Design AI for Social Good: Seven Essential Factors.Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771-1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
    10 citations
  10. The Future of Human-Artificial Intelligence Nexus and its Environmental Costs.Petr Spelda & Vit Stritecky - 2020 - Futures 117.
    The environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion on environmental impacts of ML/AI lacks a perspective reaching beyond quantitative measurements of the energy-related research costs. Building on the foundations laid down by Schwartz et al., 2019 in the GreenAI initiative, our argument considers two interlinked phenomena, the gratuitous generalisation capability and the future where ML/AI performs the majority of quantifiable inductive (...)
  11. Theopolis Monk: Envisioning a Future of A.I. Public Service.Scott H. Hawley - 2019 - In Newton Lee (ed.), The Transhumanism Handbook. Springer Verlag. pp. 271-300.
    Visions of future applications of artificial intelligence tend to veer toward the naively optimistic or frighteningly dystopian, neglecting the numerous human factors necessarily involved in the design, deployment and oversight of such systems. The dream that AI systems may somehow replace the irregularities and struggles of human governance with unbiased efficiency is seen to be non-scientific and akin to a religious hope, whereas the current trajectory of AI development indicates that it will increasingly serve as a tool by which (...)
  12. A Unified Framework of Five Principles for AI in Society.Luciano Floridi & Josh Cowls - 2019 - Harvard Data Science Review 1 (1).
    Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these (...)
    11 citations
  13. When AI Meets PC: Exploring the Implications of Workplace Social Robots and a Human-Robot Psychological Contract.Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
    1 citation
  14. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is to analyse the convergent drives of any future AI, an approach started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its (...)
    2 citations
  15. From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles Into Practices.Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
    14 citations
  16. Assessing the Future Plausibility of Catastrophically Dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation, which various polls predict will fall in the second half of the 21st century, is often taken as a reference point; but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real world (...)
  17. Debate: What is Personhood in the Age of AI?David J. Gunkel & Jordan Joseph Wales - forthcoming - AI and Society.
    In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that (...)
  18. Aiming AI at a Moving Target: Health.Mihai Nadin - 2020 - AI and Society 35 (4):841-849.
    Justified by spectacular achievements facilitated through applied deep learning methodology, the “Everything is possible” view dominates this new hour in the “boom and bust” curve of AI performance. The optimistic view collides head on with the “It is not possible”—ascertainments often originating in a skewed understanding of both AI and medicine. The meaning of the conflicting views can be assessed only by addressing the nature of medicine. Specifically: Which part of medicine, if any, can and should be entrusted to AI—now (...)
    1 citation
  19. Good AI for the Present of Humanity: Democratizing AI Governance.Nicholas Kluge Corrêa - manuscript
    What do Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been pointed (...)
  20. Evaluating Future Nanotechnology: The Net Societal Impacts of Atomically Precise Manufacturing.Steven Umbrello & Seth D. Baum - 2018 - Futures 100:63-73.
    Atomically precise manufacturing (APM) is the assembly of materials with atomic precision. APM does not currently exist, and may not be feasible, but if it is feasible, then the societal impacts could be dramatic. This paper assesses the net societal impacts of APM across the full range of important APM sectors: general material wealth, environmental issues, military affairs, surveillance, artificial intelligence, and space travel. Positive effects were found for material wealth, the environment, military affairs (specifically nuclear disarmament), and space travel. (...)
    1 citation
  21. Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is (...)
  22. Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward Zalta (ed.), Stanford Encyclopedia of Philosophy. Palo Alto, Cal.: CSLI, Stanford University. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and (...)
    8 citations
  23. AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk.Brandon Perry & Risto Uuk - 2019 - Big Data and Cognitive Computing 3 (2):1-17.
    This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are created (...)
  24. Risks of Artificial General Intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good, Steve Omohundro, pages 303-315 - The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342 - (...)
    3 citations
  25. Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions.Thomas C. King, Nikita Aggarwal, Mariarosaria Taddeo & Luciano Floridi - 2020 - Science and Engineering Ethics 26 (1):89-120.
    Artificial intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently (...)
    7 citations
  26. Back to the Future: Curing Past Sufferings and S-Risks Via Indexical Uncertainty.Alexey Turchin - manuscript
    The long unbearable sufferings in the past and agonies experienced in some future timelines in which a malevolent AI could torture people for some idiosyncratic reasons (s-risks) are a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse and thus it seems unlikely that we can do anything about it. However, at least one purely theoretical way to cure past sufferings exists. If we assume that there is no stable substrate (...)
  27. Future Progress in Artificial Intelligence: A Poll Among Experts.Vincent C. Müller & Nick Bostrom - 2014 - AI Matters 1 (1):9-11.
    [This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered (...)
    2 citations
  28. Risks of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
    1 citation
  29. Fundamental Issues of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues (...)
    4 citations
  30. Editorial: Risks of General Artificial Intelligence.Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what (...)
    3 citations
  31. From Human to Artificial Cognition and Back: New Perspectives on Cognitively Inspired AI Systems.Antonio Lieto & Daniele Radicioni - 2016 - Cognitive Systems Research 39 (c):1-3.
    We overview the main historical and technological elements characterising the rise, the fall and the recent renaissance of the cognitive approaches to Artificial Intelligence and provide some insights and suggestions about the future directions and challenges that, in our opinion, this discipline needs to face in the next years.
    2 citations
  32. On Controllability of Artificial Intelligence.Roman Yampolskiy - manuscript
    Invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid pitfalls of such powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments as well as supporting evidence from multiple domains indicating that advanced AI can’t be fully controlled. Consequences of (...)
    2 citations
  33. Friendly Superintelligent AI: All You Need is Love.Michael Prinzing - 2017 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Berlin: Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure-long before one arrives-that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge (...)
  34. Do Androids Dream of Normative Endorsement? On the Fallibility of Artificial Moral Agents.Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules as action-guiding. They need to (...)
    3 citations
  35. When Should Co-Authorship Be Given to AI?G. P. Transformer Jr, End X. Note, M. S. Spellchecker & Roman Yampolskiy - manuscript
    If an AI makes a significant contribution to a research paper, should it be listed as a co-author? The current guidelines in the field have been created to reduce duplication of credit between two different authors in scientific articles. A new computer program could be identified and credited for its impact in an AI research paper that discusses an early artificial intelligence system which is currently under development at Lawrence Berkeley National. One way to imagine the future of artificial (...)
  36. An Ethically Mindful Approach to AI for Health Care.Jessica Morley & Luciano Floridi - 2020 - The Lancet 395 (10220):254-255.
    Health-care systems worldwide face increasing demand, a rise in chronic disease, and resource constraints. At the same time, the use of digital health technologies in all care settings has led to an expansion of data. For this reason, policy makers, politicians, clinical entrepreneurs, and computer and data scientists argue that a key part of health-care solutions will be artificial intelligence (AI), particularly machine learning. AI forms a key part of the National Health Service (NHS) Long-Term Plan (2019) in England, the (...)
  37. Global Solutions Vs. Local Solutions for the AI Safety Problem.Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
  38. Supporting Human Autonomy in AI Systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
  39. Minding the Future: Artificial Intelligence, Philosophical Visions and Science Fiction.Barry Francis Dainton, Will Slocombe & Attila Tanyi (eds.) - forthcoming - Springer.
    Bringing together literary scholars, computer scientists, ethicists, philosophers of mind, and scholars from affiliated disciplines, this collection of essays offers important and timely insights into the pasts, presents, and, above all, possible futures of Artificial Intelligence. This book covers topics such as ethics and morality, identity and selfhood, and broader issues about AI, addressing questions about the individual, social, and existential impacts of such technologies. Through the works of science fiction authors such as Isaac Asimov, Stanislaw Lem, Ann Leckie, Iain (...)
  40. Editorial: Risks of Artificial Intelligence.Vincent C. Müller - 2016 - In Risks of artificial intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and (...)
  41. The Discovery of the Artificial: Some Protocybernetic Developments 1930-1940.Roberto Cordeschi - 1991 - Artificial Intelligence and Society 5 (3):218-238.
    In this paper I start from a definition of “culture of the artificial” which might be stated by referring to the background of philosophical, methodological, pragmatical assumptions which characterizes the development of the information processing analysis of mental processes and of some trends in contemporary cognitive science: in a word, the development of AI as a candidate science of mind. The aim of this paper is to show how (with which plausibility and limitations) the discovery of the mentioned background might (...)
    6 citations
  42. Toward an Ethics of AI Assistants: An Initial Framework.John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
    10 citations
  43. A Counterexample To All Future Dynamic Systems Theories of Cognition.Eric Dietrich - 2000 - J. of Experimental and Theoretical AI 12 (2):377-382.
    Years ago, when I was an undergraduate math major at the University of Wyoming, I came across an interesting book in our library. It was a book of counterexamples to propositions in real analysis (the mathematics of the real numbers). Mathematicians work more or less like the rest of us. They consider propositions. If one seems to them to be plausibly true, then they set about to prove it, to establish the proposition as a theorem. Instead of setting (...)
  44. Algorithm Exploitation: Humans Are Keen to Exploit Benevolent AI.Jurgis Karpus, Adrian Krüger, Julia Tovar Verba, Bahador Bahrami & Ophelia Deroy - 2021 - iScience 24 (6):102679.
    We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis (...)
  45. One Decade of Universal Artificial Intelligence.Marcus Hutter - 2012 - In Pei Wang & Ben Goertzel (eds.), Theoretical Foundations of Artificial General Intelligence. Springer. pp. 67--88.
    The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in the book (Hutter, 2005), an exciting sound and complete mathematical model for a super intelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) (...)
    3 citations
  46. The Rise of the Robots and the Crisis of Moral Patiency.John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral (...)
    11 citations
  47. The Whiteness of AI.Stephen Cave & Kanta Dihal - 2020 - Philosophy and Technology 33 (4):685-703.
    This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this (...)
    1 citation
  48. Ethics of Artificial Intelligence.Vincent C. Müller - forthcoming - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 1-20.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made (...)
  49. Machines as Moral Patients We Shouldn’t Care About: The Interests and Welfare of Current Machines.John Basl - 2014 - Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, (...)
    9 citations
  50. Imagination and Actionability: Reflections on the Future of Interdisciplinarity, Inspired by Julie Thompson Klein.Machiel Keestra - 2019 - Issues in Interdisciplinary Studies 2 (37):110-129.
    When introduced around 1925, interdisciplinarity, grounded in the notion of the unity of knowledge, was meant to reconnect the fragmented and specialized disciplines of academia. However, interdisciplinary research became more and more challenging as the plurality and heterogeneity of disciplinary perspectives and insights increased. Insisting on this divergence and diversity, Julie Thompson Klein has nonetheless contributed in important ways to convergence in interdisciplinarity with her work on the process of integration as interdisciplinarity's defining feature. Of course, she is aware that (...)
    1 citation
1 — 50 / 1000