293 found
1 — 50 / 293
Material to categorize
  1. Measuring Information Deprivation: A Democratic Proposal. Adrian K. Yee - forthcoming - Philosophy of Science.
    There remains no consensus among social scientists as to how to measure and understand forms of information deprivation such as misinformation. Machine learning and statistical analyses of information deprivation typically contain problematic operationalizations which are too often biased towards epistemic elites' conceptions, which can undermine their empirical adequacy. A mature science of information deprivation should include considerable citizen involvement and be sensitive to the value-ladenness of information quality; doing so may improve the predictive and explanatory power of extant (...)
  2. Institutional Trust in Medicine in the Age of Artificial Intelligence. Michał Klincewicz - forthcoming - In David Collins, Mark Alfano & Iris Jovanovic (eds.), The Moral Psychology of Trust. Rowman and Littlefield/Lexington Books.
    It is easier to talk frankly to a person whom one trusts. It is also easier to agree with a scientist whom one trusts. Even though in both cases the psychological state that underlies the behavior is called ‘trust’, it is controversial whether it is a token of the same psychological type. Trust can serve an affective, epistemic, or other social function, and comes to interact with other psychological states in a variety of ways. The way that the functional role (...)
  3. Algorithmic Political Bias in Artificial Intelligence Systems. Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in (...)
  4. Can Artificial Intelligence (Re)Define Creativity? Dessislava Fessenko - 2022 - In EthicAI=LABS Project. Sofia: DA LAB Foundation / Goethe-Institut Sofia. pp. 34-48.
    What is the essential ingredient of creativity that only humans – and not machines – possess? Can artificial intelligence help refine the notion of creativity by reference to that essential ingredient? How, if at all, do we need to redefine our conceptual and legal frameworks for rewarding creativity because of this new qualifying – actually creatively significant – factor? Those are the questions tackled in this essay. The author’s conclusion is that consciousness, experiential states (such as a raw feel of what (...)
  5. The Effective and Ethical Development of Artificial Intelligence: An Opportunity to Improve Our Wellbeing. James Maclaurin, Toby Walsh, Neil Levy, Genevieve Bell, Fiona Wood, Anthony Elliott & Iven Mareels - 2019 - Melbourne VIC, Australia: Australian Council of Learned Academies.
    This project has been supported by the Australian Government through the Australian Research Council (project number CS170100008); the Department of Industry, Innovation and Science; and the Department of Prime Minister and Cabinet. ACOLA collaborates with the Australian Academy of Health and Medical Sciences and the New Zealand Royal Society Te Apārangi to deliver the interdisciplinary Horizon Scanning reports to government. The aims of the project which produced this report are: 1. Examine the transformative role that artificial intelligence may play in (...)
    4 citations
  6. Metaphysics, Meaning, and Morality: A Theological Reflection on A.I. Jordan Joseph Wales - 2022 - Journal of Moral Theology 11 (Special Issue 1):157-181.
    Theologians often reflect on the ethical uses and impacts of artificial intelligence, but when it comes to artificial intelligence techniques themselves, some have questioned whether much exists to discuss in the first place. If the significance of computational operations is attributed rather than intrinsic, what are we to say about them? Ancient thinkers—namely Augustine of Hippo (lived 354–430)—break the impasse, enabling us to draw forth the moral and metaphysical significance of current developments like the “deep neural networks” that are responsible (...)
  7. The Kantian Notion of Freedom and Autonomy of Artificial Agency. Manas Sahu - 2021 - Prometeica - Revista De Filosofía Y Ciencias 23:136-149.
    The objective of this paper is to provide a critical analysis of the Kantian notion of freedom; its significance in the contemporary debate on free-will and determinism; and the possibility of autonomy of artificial agency in the Kantian paradigm of autonomy. Kant's resolution of the third antinomy by positing the ground in the noumenal self resolves the problem of antinomies; however, it invites an explanatory gap between phenomenality and the noumenal self; even if he has successfully established the compatibility of (...)
  8. Walking Through the Turing Wall. Albert Efimov - forthcoming - In Teces.
    Can the machines that play board games or recognize images only in the comfort of the virtual world be intelligent? To become reliable and convenient assistants to humans, machines need to learn how to act and communicate in the physical reality, just like people do. The authors propose two novel ways of designing and building Artificial General Intelligence (AGI). The first one seeks to unify all participants at any instance of the Turing test – the judge, the machine, the human (...)
  9. Tecno-especies: la humanidad que se hace a sí misma y los desechables [Techno-species: the humanity that makes itself, and the disposables]. Mateja Kovacic & María G. Navarro - 2021 - Bajo Palabra. Revista de Filosofía 27 (II Epoca):45-62.
    Popular culture continues fuelling public imagination with things, human and non-human, that we might become or confront. Besides robots, other significant tropes in popular fiction that generated images include non-human humans and cyborgs, wired into historically varying sociocultural realities. Robots and artificial intelligence are re-defining the natural order and its hierarchical structure. This is not surprising, as natural order is always in flux, shaped by new scientific discoveries, especially the reading of the genetic code, that reveal and redefine relationships between (...)
  10. Neural Chitchat. Barry Smith - 2021 - The Sherry Turkle Miracle.
    A constant theme in Sherry Turkle’s work is the idea that computers shape our social and psychological lives. This idea is of course in a sense trivial, as can be observed when walking down any city street and noting how many of the passers-by have their heads buried in screens. In The Second Self, however, Turkle makes a stronger claim, to the effect that where people confront machines that seem to think, this suggests a new way for us to think (...)
  11. Saint Thomas d'Aquin contre les robots. Pistes pour une approche philosophique de l'Intelligence Artificielle [Saint Thomas Aquinas against the robots: paths towards a philosophical approach to Artificial Intelligence]. Matthieu Raffray - 2019 - Angelicum 4 (96):553-572.
    In light of the pervasive developments of new technologies such as NBIC (nanotechnology, biotechnology, information technology, and cognitive science), it is imperative to produce a coherent and deep reflection on human nature, on human intelligence, and on the limits of both, in order to successfully respond to certain technical arguments that strive to depict humanity as a purely mechanical system. For this purpose, it is interesting to refer to the epistemology and metaphysics of Thomas Aquinas as a (...)
  12. Playing the Blame Game with Robots. Markus Kneer & Michael T. Stuart - 2021 - In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI’21 Companion). New York, NY, USA.
    Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system (...)
    2 citations
  13. Privacy and Digital Ethics After the Pandemic. Carissa Véliz - 2021 - Nature Electronics 4:10-11.
    The increasingly prominent role of digital technologies during the coronavirus pandemic has been accompanied by concerning trends in privacy and digital ethics. But more robust protection of our rights in the digital realm is possible in the future. After surveying some of the challenges we face, I argue for the importance of diplomacy. Democratic countries must try to come together and reach agreements on minimum standards and rules regarding cybersecurity, privacy and the governance of AI.
  14. Consequences of Unexplainable Machine Learning for the Notions of a Trusted Doctor and Patient Autonomy. Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics–the trusted doctor and patient autonomy–can be undermined by the use of machine learning (ML) algorithms and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as to what (...)
  15. AI Extenders and the Ethics of Mental Health. Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Artificial Intelligence in Brain and Mental Health: Philosophical, Ethical & Policy Issues. Springer International Publishing.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
    1 citation
  16. Deepfakes and the Epistemic Backstop. Regina Rini - 2020 - Philosophers' Imprint 20 (24):1-16.
    Deepfake technology uses machine learning to fabricate video and audio recordings that represent people doing and saying things they've never done. In coming years, malicious actors will likely use this technology in attempts to manipulate public discourse. This paper prepares for that danger by explicating the unappreciated way in which recordings have so far provided an epistemic backstop to our testimonial practices. Our reasonable trust in the testimony of others depends, to a surprising extent, on the regulative effects of the (...)
    13 citations
  17. Computational Models (of Narrative) for Literary Studies. Antonio Lieto - 2015 - Semicerchio, Rivista di Poesia Comparata 2 (LIII):38-44.
    In the last decades a growing body of literature in Artificial Intelligence (AI) and Cognitive Science (CS) has approached the problem of narrative understanding by means of computational systems. Narrative, in fact, is a ubiquitous element in our everyday activity, and the ability to generate and understand stories, and their structures, is a crucial cue of our intelligence. However, despite the fact that - from a historical standpoint - narrative (and narrative structures) have been an important topic of investigation in (...)
  18. Legal Fictions and the Essence of Robots: Thoughts on Essentialism and Pragmatism in the Regulation of Robotics. Fabio Fossa - 2018 - In Mark Coeckelbergh, Janina Loh, Michael Funk, Joanna Seibt & Marco Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. Amsterdam: pp. 103-111.
    The purpose of this paper is to offer some critical remarks on the so-called pragmatist approach to the regulation of robotics. To this end, the article mainly reviews the work of Jack Balkin and Joanna Bryson, who have taken up such an approach with interestingly similar outcomes. Moreover, special attention will be paid to the discussion concerning the legal fiction of ‘electronic personality’. This will help shed light on the opposition between essentialist and pragmatist methodologies. After a brief introduction (1.), (...)
  19. Making Metaethics Work for AI: Realism and Anti-Realism. Michal Klincewicz & Lily E. Frank - 2018 - In Mark Coeckelbergh, M. Loh, J. Funk, M. Seibt & J. Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. Amsterdam, Netherlands: IOS Press. pp. 311-318.
    Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, but, on the other hand, anti-realism undermines the motivation for engineering a moral AI in the first place.
    1 citation
  20. Philosophy and Theory of Artificial Intelligence 2017. Vincent C. Müller (ed.) - 2017 - Berlin: Springer.
    This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4 - 5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; (...)
    1 citation
Artificial Intelligence Safety
  1. Engineered Wisdom for Learning Machines. Brett Karlan & Colin Allen - 2022 - Journal of Experimental and Theoretical Artificial Intelligence.
    We argue that the concept of practical wisdom is particularly useful for organizing, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into the development of wisdom. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioral scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as (...)
  2. Basic Issues in AI Policy. Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Cham: Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  3. The Ghost in the Machine has an American Accent: Value Conflict in GPT-3. Rebecca Johnson, Giada Pistilli, Natalia Menedez-Gonzalez, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene & Donald Jay Bertulfo - manuscript
    The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world’s cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts large language models (LLMs). (...)
  4. Ethical Issues with Artificial Ethics Assistants. Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - forthcoming - In Oxford Handbook of Digital Ethics. Oxford: Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  5. Zero Tolerance Policy for Autonomous Weapons: Why? Birgitta Dresp-Langley - manuscript
    A brief overview of Autonomous Weapon Systems (AWS) and their different levels of autonomy is provided, followed by a discussion of the risks represented by these systems under the light of the just war principles and insights from research in cybersecurity. Technological progress has brought about the emergence of machines that have the capacity to take human lives without human control. These represent an unprecedented threat to humankind. This commentary starts from the example of chemical weapons, now banned worldwide by (...)
  6. Designing AI with Rights, Consciousness, Self-Respect, and Freedom. Eric Schwitzgebel & Mara Garza - 2020 - In Ethics of Artificial Intelligence. New York, NY, USA: pp. 459-479.
    We propose four policies of ethical design of human-grade Artificial Intelligence. Two of our policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. Two of our policies concern respect and freedom. If we design AI that deserves moral consideration equivalent to that of human beings, (...)
    1 citation
  7. Existential Risk From AI and Orthogonality: Can We Have It Both Ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  8. Quantum of Wisdom. Brett Karlan & Colin Allen - forthcoming - In Greg Viggiano (ed.), Quantum Computing and AI: Social, Ethical, and Geo-Political Implications. Toronto, ON, Canada: University of Toronto Press. pp. 1-6.
    Practical quantum computing devices and their applications to AI in particular are presently mostly speculative. Nevertheless, questions about whether this future technology, if achieved, presents any special ethical issues are beginning to take shape. As with any novel technology, one can be reasonably confident that the challenges presented by "quantum AI" will be a mixture of something new and something old. Other commentators (Sevilla & Moreno 2019) have emphasized continuity, arguing that quantum computing does not substantially affect approaches to value (...)
  9. Epistemological Solipsism as a Route to External World Skepticism. Grace Helton - 2021 - Philosophical Perspectives 35 (1):229-250.
    I show that some of the most initially attractive routes of refuting epistemological solipsism face serious obstacles. I also argue that for creatures like ourselves, solipsism is a genuine form of external world skepticism. I suggest that together these claims suggest the following morals: No proposed solution to external world skepticism can succeed which does not also solve the problem of epistemological solipsism. And, more tentatively: In assessing proposed solutions to external world skepticism, epistemologists should explicitly consider whether those solutions (...)
  10. Hey, Google, leave those kids alone: Against hypernudging children in the age of big data. James Smith & Tanya de Villiers-Botha - forthcoming - AI and Society:1-11.
    Children continue to be overlooked as a topic of concern in discussions around the ethical use of people’s data and information. Where children are the subject of such discussions, the focus is often primarily on privacy concerns and consent relating to the use of their data. This paper highlights the unique challenges children face when it comes to online interferences with their decision-making, primarily due to their vulnerability, impressionability, the increased likelihood of disclosing personal information online, and their developmental capacities. (...)
  11. The Emperor is Naked: Moral Diplomacies and the Ethics of AI. Constantin Vica, Cristina Voinea & Radu Uszkai - 2021 - Információs Társadalom 21 (2):83-96.
    With AI permeating our lives, there is widespread concern regarding the proper framework needed to morally assess and regulate it. This has given rise to many attempts to devise ethical guidelines that infuse guidance for both AI development and deployment. Our main concern is that, instead of a genuine ethical interest for AI, we are witnessing moral diplomacies resulting in moral bureaucracies battling for moral supremacy and political domination. After providing a short overview of what we term ‘ethics washing’ in (...)
    1 citation
  12. Combating Disinformation with AI: Epistemic and Ethical Challenges. Benjamin Lange & Ted Lechterman - 2021 - IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) 1:1-5.
    AI-supported methods for identifying and combating disinformation are progressing in their development and application. However, these methods face a litany of epistemic and ethical challenges. These include (1) robustly defining disinformation, (2) reliably classifying data according to this definition, and (3) navigating ethical risks in the deployment of countermeasures, which involve a mixture of harms and benefits. This paper seeks to expose and offer preliminary analysis of these challenges.
  13. Who Should Bear the Risk When Self-Driving Vehicles Crash? Antti Kauppinen - 2021 - Journal of Applied Philosophy 38 (4):630-645.
    The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But liability matters a great deal to just distribution of risk of harm. While morality sometimes requires simply minimizing relevant harms, this is not so when one party is liable to harm in virtue of voluntarily engaging in activity that foreseeably creates a risky situation, while having reasonable alternatives. On plausible (...)
    3 citations
  14. Inscrutable Processes: Algorithms, Agency, and Divisions of Deliberative Labour. Marinus Ferreira - 2021 - Journal of Applied Philosophy 38 (4):646-661.
    As the use of algorithmic decision‐making becomes more commonplace, so too does the worry that these algorithms are often inscrutable and our use of them is a threat to our agency. Since we do not understand why an inscrutable process recommends one option over another, we lose our ability to judge whether the guidance is appropriate and are vulnerable to being led astray. In response, I claim that a process being inscrutable does not automatically make its guidance inappropriate. This phenomenon (...)
    1 citation
  15. The Value Alignment Problem. Dan J. Bruiger - manuscript
    The Value Alignment Problem (VAP) presupposes that artificial general intelligence (AGI) is desirable and perhaps inevitable. As usually conceived, it is one side of the more general issue of mutual control between agonistic agents. To be fully autonomous, an AI must be an autopoietic system (an agent), with its own purposiveness. In the case of such systems, Bostrom’s orthogonality thesis is untrue. The VAP reflects the more general problem of interfering in complex systems, entraining the possibility of unforeseen consequences. Instead (...)
  16. The Unfounded Bias Against Autonomous Weapons Systems. Áron Dombrovszki - 2021 - Információs Társadalom 21 (2):13–28.
    Autonomous Weapons Systems (AWS) have not gained a good reputation in the past. This attitude is odd if we look at the discussion of other, usually highly anticipated, AI technologies, like autonomous vehicles (AVs); even though these machines evoke very similar ethical issues, philosophers' attitudes towards them are constructive. In this article, I try to prove that there is an unjust bias against AWS, because almost every argument against them is effective against AVs too. I start with the definition of "AWS." Then, (...)
  17. Catching Treacherous Turn: A Model of the Multilevel AI Boxing. Alexey Turchin - manuscript
    With the fast pace of AI development, the problem of preventing its global catastrophic risks arises. However, no satisfactory solution has been found. Among several possibilities, the confinement of AI in a box is considered a low-quality solution for AI safety. However, some treacherous AIs can be stopped by effective confinement if it is used as an additional measure. Here, we propose an idealized model of the best possible confinement by aggregating all known ideas in the field of (...)
  18. How Does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - forthcoming - In Carissa Véliz (ed.), Oxford Handbook of Digital Ethics.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
  19. AI Risk Denialism. Roman V. Yampolskiy - manuscript
    In this work, we survey skepticism regarding AI risk and show parallels with other types of scientific skepticism. We start by classifying different types of AI Risk skepticism and analyze their root causes. We conclude by suggesting some intervention approaches, which may be successful in reducing AI risk skepticism, at least amongst artificial intelligence researchers.
  20. Machine Morality, Moral Progress, and the Looming Environmental Disaster. Ben Kenward & Thomas Sinclair - forthcoming - Cognitive Computation and Systems.
    The creation of artificial moral systems requires us to make difficult choices about which of varying human value sets should be instantiated. The industry-standard approach is to seek and encode moral consensus. Here we argue, based on evidence from empirical psychology, that encoding current moral consensus risks reinforcing current norms, and thus inhibiting moral progress. However, so do efforts to encode progressive norms. Machine ethics is thus caught between a rock and a hard place. The problem is particularly acute when (...)
  21. Sztuczna Inteligencja: Bezpieczeństwo I Zabezpieczenia [Artificial Intelligence: Safety and Security]. Roman Yampolskiy (ed.) - 2020 - Warszawa: Wydawnictwo Naukowe PWN.
  22. Autonomous Weapon Systems, Asymmetrical Warfare, and Myth. Michal Klincewicz - 2018 - Civitas. Studia Z Filozofii Polityki 23:179-195.
    Predictions about autonomous weapon systems are typically thought to channel fears that drove all the myths about intelligence embodied in matter. One of these is the idea that the technology can get out of control and ultimately lead to horrific consequences, as is the case in Mary Shelley’s classic Frankenstein. Given this, predictions about AWS are sometimes dismissed as science-fiction fear-mongering. This paper considers several analogies between AWS and other weapon systems and ultimately offers an argument that nuclear weapons (...)
  23. On the Logical Impossibility of Solving the Control Problem.Caleb Rudnick - manuscript
    In the philosophy of artificial intelligence (AI) we are often warned of machines built with the best possible intentions, killing everyone on the planet and in some cases, everything in our light cone. At the same time, however, we are also told of the utopian worlds that could be created with just a single superintelligent mind. If we’re ever to live in that utopia (or just avoid dystopia) it’s necessary we solve the control problem. The control problem asks how humans (...)
  24. Dynamic Cognition Applied to Value Learning in Artificial Intelligence.Nythamar De Oliveira & Nicholas Corrêa - 2021 - Aoristo - International Journal of Phenomenology, Hermeneutics and Metaphysics 4 (2):185-199.
    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such an advance is not made prudently, it can result in negative outcomes for humanity. For this reason, several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence. Currently, several of the open problems in the field of AI research arise from the difficulty of avoiding unwanted (...)
  25. Modelos Dinâmicos Aplicados à Aprendizagem de Valores em Inteligência Artificial.Nicholas Kluge Corrêa & Nythamar De Oliveira - 2020 - Veritas – Revista de Filosofia da Pucrs 2 (65):1-15.
    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such an advance is not made prudently and critically-reflexively, it can result in negative outcomes for humanity. For this reason, several researchers in the area have developed a robust, beneficial, and safe concept of AI for the preservation of humanity and the environment. Currently, several of the open problems in the field of AI research (...)
  26. On Controllability of Artificial Intelligence.Roman Yampolskiy - manuscript
    The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments as well as supporting evidence from multiple domains indicating that advanced AI cannot be fully controlled. Consequences of (...)
    3 citations
  27. Human ≠ AGI.Roman Yampolskiy - manuscript
    The terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research: the creation of a machine capable of achieving goals in a wide range of environments. However, the widespread implicit assumption of equivalence between the capabilities of AGI and HLAI appears to be unjustified, as humans are not general intelligences. In this paper, we will prove this distinction.
  28. Improve Alignment of Research Policy and Societal Values.Peter Novitzky, Michael J. Bernstein, Vincent Blok, Robert Braun, Tung Tung Chan, Wout Lamers, Anne Loeber, Ingeborg Meijer, Ralf Lindner & Erich Griessler - 2020 - Science 369 (6499):39-41.
    Historically, scientific and engineering expertise has been key in shaping research and innovation policies, with benefits presumed to accrue to society more broadly over time. But there is persistent and growing concern about whether and how ethical and societal values are integrated into R&I policies and governance, as we confront public disbelief in science and political suspicion toward evidence-based policy-making. Erosion of such a social contract with science limits the ability of democratic societies to deal with challenges presented by new, (...)
    1 citation
  29. Machines Learning Values.Steve Petersen - 2020 - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. New York, USA: Oxford University Press.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: first, (...)
    1 citation
  30. How to design AI for social good: seven essential factors.Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771–1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
    24 citations