References
  • Charting the Terrain of Artificial Intelligence: A Multidimensional Exploration of Ethics, Agency, and Future Directions. Partha Pratim Ray & Pradip Kumar Das - 2023 - Philosophy and Technology 36 (2):1-7.
    This comprehensive analysis dives deep into the intricate interplay between artificial intelligence (AI) and human agency, examining the remarkable capabilities and inherent limitations of large language models (LLMs) such as GPT-3 and ChatGPT. The paper traces the complex trajectory of AI's evolution, highlighting its operation based on statistical pattern recognition, devoid of self-consciousness or innate comprehension. As AI permeates multiple spheres of human life, it raises substantial ethical, legal, and societal concerns that demand immediate attention and deliberation. The metaphorical illustration (...)
  • Generative AI entails a credit–blame asymmetry. Sebastian Porsdam Mann, Brian D. Earp, Sven Nyholm, John Danaher, Nikolaj Møller, Hilary Bowman-Smart, Joshua Hatherley, Julian Koplin, Monika Plozza, Daniel Rodger, Peter V. Treit, Gregory Renard, John McMillan & Julian Savulescu - 2023 - Nature Machine Intelligence 5 (5):472-475.
    Generative AI programs can produce high-quality written and visual content that may be used for good or ill. We argue that a credit–blame asymmetry arises for assigning responsibility for these outputs and discuss urgent ethical and policy implications focused on large-scale language models.
  • Ethics of generative AI. Hazem Zohny, John McMillan & Mike King - 2023 - Journal of Medical Ethics 49 (2):79-80.
    Artificial intelligence (AI) and its introduction into clinical pathways presents an array of ethical issues that are being discussed in the JME. 1–7 The development of AI technologies that can produce text that will pass plagiarism detectors 8 and are capable of appearing to be written by a human author 9 present new issues for medical ethics. One set of worries concerns authorship and whether it will now be possible to know that an author or student in fact produced submitted (...)
  • In Conversation with Artificial Intelligence: Aligning Language Models with Human Values. Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this (...)
  • Ethical implications of text generation in the age of artificial intelligence. Laura Illia, Elanor Colleoni & Stelios Zyglidopoulos - 2022 - Business Ethics, the Environment and Responsibility 32 (1):201-210.
    We are at a turning point in the debate on the ethics of Artificial Intelligence (AI) because we are witnessing the rise of general-purpose AI text agents such as GPT-3 that can generate large-scale highly refined content that appears to have been written by a human. Yet, a discussion on the ethical issues related to the blurring of the roles between humans and machines in the production of content in the business arena is lacking. In this conceptual paper, drawing on (...)
  • AI ethics: the case for including animals. Peter Singer - 2022 - AI and Ethics 2 (3).
    The ethics of artificial intelligence, or AI ethics, is a rapidly growing field, and rightly so. While the range of issues and groups of stakeholders concerned by the field of AI ethics is expanding, with speculation about whether it extends even to the machines themselves, there is a group of sentient beings who are also affected by AI, but are rarely mentioned within the field of AI ethics—the nonhuman animals. This paper seeks to explore the kinds of impact AI has (...)
  • A Virtue-Based Framework to Support Putting AI Ethics into Practice. Thilo Hagendorff - 2022 - Philosophy and Technology 35 (3):1-24.
    Many ethics initiatives have stipulated sets of principles and standards for good technology development in the AI sector. However, several AI ethics researchers have pointed out a lack of practical realization of these principles. Following that, AI ethics underwent a practical turn, but without deviating from the principled approach. This paper proposes a complementary to the principled approach that is based on virtue ethics. It defines four “basic AI virtues”, namely justice, honesty, responsibility and care, all of which represent specific (...)
  • Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Jakob Mökander, Maria Axente, Federico Casolari & Luciano Floridi - 2022 - Minds and Machines 32 (2):241-268.
    The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the _conformity assessments_ that providers of high-risk AI systems are expected to conduct, and (...)
  • AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
  • (1 other version) Cognitive load selectively interferes with utilitarian moral judgment. Joshua D. Greene, Sylvia A. Morelli, Kelly Lowenberg, Leigh E. Nystrom & Jonathan D. Cohen - 2008 - Cognition 107 (3):1144-1154.
    Traditional theories of moral development emphasize the role of controlled cognition in mature moral judgment, while a more recent trend emphasizes intuitive and emotional processes. Here we test a dual-process theory synthesizing these perspectives. More specifically, our theory associates utilitarian moral judgment (approving of harmful actions that maximize good consequences) with controlled cognitive processes and associates non-utilitarian moral judgment with automatic emotional responses. Consistent with this theory, we find that a cognitive load manipulation selectively interferes with utilitarian judgment. This interference (...)
  • Moral Reasoning: Hints and Allegations. Joseph M. Paxton & Joshua D. Greene - 2010 - Topics in Cognitive Science 2 (3):511-527.
    Recent research in moral psychology highlights the role of emotion and intuition in moral judgment. In the wake of these findings, the role and significance of moral reasoning remain uncertain. In this article, we distinguish among different kinds of moral reasoning and review evidence suggesting that at least some kinds of moral reasoning play significant roles in moral judgment, including roles in abandoning moral intuitions in the absence of justifying reasons, applying both deontological and utilitarian moral principles, and counteracting automatic (...)
  • The global landscape of AI ethics guidelines. A. Jobin, M. Ienca & E. Vayena - 2019 - Nature Machine Intelligence 1.
  • Consent-GPT: is it ethical to delegate procedural consent to conversational AI? Jemima Winifred Allen, Brian D. Earp, Julian Koplin & Dominic Wilkinson - 2024 - Journal of Medical Ethics 50 (2):77-83.
    Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (eg, junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number of (...)
  • Current cases of AI misalignment and their implications for future risks. Leonard Dung - 2023 - Synthese 202 (5):1-23.
    How can one build AI systems such that they pursue the goals their designers want them to pursue? This is the alignment problem. Numerous authors have raised concerns that, as research advances and systems become more powerful over time, misalignment might lead to catastrophic outcomes, perhaps even to the extinction or permanent disempowerment of humanity. In this paper, I analyze the severity of this risk based on current instances of misalignment. More specifically, I argue that contemporary large language models and (...)
  • Why we should (not) worry about generative AI in medical ethics teaching. Seppe Segers - 2024 - International Journal of Ethics Education 9 (1):57-63.
    In this article I discuss the ethical ramifications for medical ethics training of the availability of large language models (LLMs) for medical students. My focus is on the practical ethical consequences for what we should expect of medical students in terms of medical professionalism and ethical reasoning, and how this can be tested in a context where LLMs are relatively easy available. If we continue to expect ethical competences of medical professionalism of future physicians, how much – if at all (...)
  • Large Language Models and Biorisk. William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and access (...)
  • Calibrating machine behavior: a challenge for AI alignment. Erez Firt - 2023 - Ethics and Information Technology 25 (3):1-8.
    When discussing AI alignment, we usually refer to the problem of teaching or training advanced autonomous AI systems to make decisions that are aligned with human values or preferences. Proponents of this approach believe it can be employed as means to stay in control over sophisticated intelligent systems, thus avoiding certain existential risks. We identify three general obstacles on the path to implementation of value alignment: a technological/technical obstacle, a normative obstacle, and a calibration problem. Presupposing, for the purposes of (...)
  • The argument for near-term human disempowerment through AI. Leonard Dung - 2024 - AI and Society:1-14.
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically came without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
  • The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Mohammad Hosseini, David B. Resnik & Kristi Holmes - 2023 - Research Ethics 19 (4):449-465.
    In this article, we discuss ethical issues related to using and disclosing artificial intelligence (AI) tools, such as ChatGPT and other systems based on large language models (LLMs), to write or edit scholarly manuscripts. Some journals, such as Science, have banned the use of LLMs because of the ethical problems they raise concerning responsible authorship. We argue that this is not a reasonable response to the moral conundrums created by the use of LLMs because bans are unenforceable and would encourage (...)
  • Principles alone cannot guarantee ethical AI. Brent Mittelstadt - 2019 - Nature Machine Intelligence 1 (11):501-507.
  • To Each Technology Its Own Ethics: The Problem of Ethical Proliferation. Henrik Skaug Sætra & John Danaher - 2022 - Philosophy and Technology 35 (4):1-26.
    Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and (...)
  • Publisher Correction to: The Ethics of AI Ethics: An Evaluation of Guidelines. Thilo Hagendorff - 2020 - Minds and Machines 30 (3):457-461.
    In the original publication of this article, the Table 1 has been published in a low resolution. Now a larger version of Table 1 is published in this correction. The publisher apologizes for the error made during production.
  • Blind spots in AI ethics. Thilo Hagendorff - 2022 - AI and Ethics 2 (4):851-867.
  • The uselessness of AI ethics. Luke Munn - 2023 - AI and Ethics 3 (3):869-877.
  • Generative AI models should include detection mechanisms as a condition for public release. Alistair Knott, Dino Pedreschi, Raja Chatila, Tapabrata Chakraborti, Susan Leavy, Ricardo Baeza-Yates, David Eyers, Andrew Trotman, Paul D. Teal, Przemyslaw Biecek, Stuart Russell & Yoshua Bengio - 2023 - Ethics and Information Technology 25 (4):1-7.
    The new wave of ‘foundation models’—general-purpose generative AI models, for production of text (e.g., ChatGPT) or images (e.g., MidJourney)—represent a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: that any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the (...)
  • Open AI meets open notes: surveillance capitalism, patient privacy and online record access. Charlotte Blease - 2024 - Journal of Medical Ethics 50 (2):84-89.
    Patient online record access (ORA) is spreading worldwide, and in some countries, including Sweden, and the USA, access is advanced with patients obtaining rapid access to their full records. In the UK context, from 31 October 2023 as part of the new NHS England general practitioner (GP) contract it will be mandatory for GPs to offer ORA to patients aged 16 and older. Patients report many benefits from reading their clinical records including feeling more empowered, better understanding and remembering their (...)