  • The Ethics of AI Ethics: An Evaluation of Guidelines. Thilo Hagendorff - 2020 - Minds and Machines 30 (1):99-120.
    Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, (...)
  • Principles of Biomedical Ethics: Marking Its Fortieth Anniversary. James Childress & Tom Beauchamp - 2019 - American Journal of Bioethics 19 (11):9-12.
  • AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
  • The global landscape of AI ethics guidelines. A. Jobin, M. Ienca & E. Vayena - 2019 - Nature Machine Intelligence 1.
  • Explanation in artificial intelligence: Insights from the social sciences. Tim Miller - 2019 - Artificial Intelligence 267 (C):1-38.
  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132:741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
  • Principles alone cannot guarantee ethical AI. Brent Mittelstadt - 2019 - Nature Machine Intelligence 1 (11):501-507.
  • Informed Consent: What Must Be Disclosed and What Must Be Understood? Joseph Millum & Danielle Bromwich - 2021 - American Journal of Bioethics 21 (5):46-58.
    Over the last few decades, multiple studies have examined the understanding of participants in clinical research. They show variable and often poor understanding of key elements of disclosure, such as expected risks and the experimental nature of treatments. Did the participants in these studies give valid consent? According to the standard view of informed consent they did not. The standard view holds that the recipient of consent has a duty to disclose certain information to the profferer of consent because valid (...)
  • Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability. Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or (...)
  • Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning. Maya Krishnan - 2020 - Philosophy and Technology 33 (3):487-502.
    The usefulness of machine learning algorithms has led to their widespread adoption prior to the development of a conceptual framework for making sense of them. One common response to this situation is to say that machine learning suffers from a “black box problem.” That is, machine learning algorithms are “opaque” to human users, failing to be “interpretable” or “explicable” in terms that would render categorization procedures “understandable.” The purpose of this paper is to challenge the widespread agreement about the existence (...)
  • Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts. Hendrik Kempt & Saskia K. Nagel - 2022 - Journal of Medical Ethics 48 (4):222-229.
    In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing them in a clinical context. Second, we discuss how artificial intelligence could replace the human cognitive labour of providing such second opinions, and find that several AI systems reach the levels of accuracy and efficiency needed to make their use an urgent ethical issue. Third, we outline the normative conditions of how AI may be used as second opinion (...)
  • Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary? Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general ethical (...)
  • A Misdirected Principle with a Catch: Explicability for AI. Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. in Minds and Machines 28:689–707, 2018). There (...)
  • Identifying Ethical Considerations for Machine Learning Healthcare Applications. Danton S. Char, Michael D. Abràmoff & Chris Feudtner - 2020 - American Journal of Bioethics 20 (11):7-17.
    Along with potential benefits to healthcare delivery, machine learning healthcare applications raise a number of ethical concerns. Ethical evaluations of ML-HCAs will need to structure th...
  • Defining the undefinable: the black box problem in healthcare artificial intelligence. Jordan Joseph Wadden - 2022 - Journal of Medical Ethics 48 (10):764-768.
    The ‘black box problem’ is a long-standing talking point in debates about artificial intelligence. This is a significant point of tension between ethicists, programmers, clinicians and anyone else working on developing AI for healthcare applications. However, the precise definition of these systems is often left undefined, vague or unclear, or is assumed to be standardised within AI circles. This leads to situations where individuals working on AI talk past each other, and it has been invoked in numerous debates between opaque and explainable (...)
  • Beyond explainability: justifiability and contestability of algorithmic decision systems. Clément Henin & Daniel Le Métayer - 2022 - AI and Society 37 (4):1397-1410.
    In this paper, we point out that explainability is useful but not sufficient to ensure the legitimacy of algorithmic decision systems. We argue that the key requirements for high-stakes decision systems should be justifiability and contestability. We highlight the conceptual differences between explanations and justifications, provide dual definitions of justifications and contestations, and suggest different ways to operationalize justifiability and contestability.
  • The right not to know: an autonomy based approach. R. Andorno - 2004 - Journal of Medical Ethics 30 (5):435-439.
    The emerging international biomedical law tends to recognise the right not to know one’s genetic status. However, the basis and conditions for the exercise of this right remain unclear in domestic laws. In addition to this, such a right has been criticised at the theoretical level as being in contradiction with patient’s autonomy, with doctors’ duty to inform patients, and with solidarity with family members. This happens especially when non-disclosure poses a risk of serious harm to the patient’s relatives who, (...)
  • Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. A. Barredo Arrieta, N. Díaz-Rodríguez, J. Ser, A. Bennetot, S. Tabik & A. Barbado - 2020 - Information Fusion 58.
  • Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations. Kristina Astromskė, Eimantas Peičius & Paulius Astromskis - 2021 - AI and Society 36 (2):509-520.
    This paper inquiries into the complex issue of informed consent applying artificial intelligence in medical diagnostic consultations. The aim is to expose the main ethical and legal concerns of the New Health phenomenon, powered by intelligent machines. To achieve this objective, the first part of the paper analyzes ethical aspects of the alleged right to explanation, privacy, and informed consent, applying artificial intelligence in medical diagnostic consultations. This analysis is followed by a legal analysis of the limits and requirements for (...)
  • Diagnosing Diabetic Retinopathy With Artificial Intelligence: What Information Should Be Included to Ensure Ethical Informed Consent? Frank Ursin, Cristian Timmermann, Marcin Orzechowski & Florian Steger - 2021 - Frontiers in Medicine 8:695217.
    Purpose: The method of diagnosing diabetic retinopathy (DR) through artificial intelligence (AI)-based systems has been commercially available since 2018. This introduces new ethical challenges with regard to obtaining informed consent from patients. The purpose of this work is to develop a checklist of items to be disclosed when diagnosing DR with AI systems in a primary care setting. Methods: Two systematic literature searches were conducted in PubMed and Web of Science databases: a narrow search focusing on DR and a (...)
  • Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). A. Adadi & M. Berrada - 2018 - IEEE Access 6.
  • Believing in black boxes: machine learning for healthcare does not need explainability to be evidence-based. Liam G. McCoy, Connor T. A. Brenna, Stacy S. Chen, Karina Vold & Sunit Das - 2022 - Journal of Clinical Epidemiology 142:252-257.
    Objective: To examine the role of explainability in machine learning for healthcare (MLHC), and its necessity and significance with respect to effective and ethical MLHC application. Study Design and Setting: This commentary engages with the growing and dynamic corpus of literature on the use of MLHC and artificial intelligence (AI) in medicine, which provide the context for a focused narrative review of arguments presented in favour of and opposition to explainability in MLHC. Results: We find that concerns regarding explainability are (...)
  • Accuracy and Interpretability: Struggling with the Epistemic Foundations of Machine Learning-Generated Medical Information and Their Practical Implications for the Doctor-Patient Relationship. Florian Funer - 2022 - Philosophy and Technology 35 (1):1-20.
    The initial successes in recent years in harnessing machine learning technologies to improve medical practice and benefit patients have attracted attention in a wide range of healthcare fields. In particular, this improvement is to be achieved by providing automated decision recommendations to the treating clinician. Some hopes placed in such ML-based systems for healthcare, however, seem to be unwarranted, at least partially because of their inherent lack of transparency, although their results seem convincing in accuracy and reliability. Skepticism arises when the physician as (...)
  • XPLAIN: a system for creating and explaining expert consulting programs. William R. Swartout - 1983 - Artificial Intelligence 21 (3):285-325.
  • Ethical Implications of Alzheimer’s Disease Prediction in Asymptomatic Individuals Through Artificial Intelligence. Frank Ursin, Cristian Timmermann & Florian Steger - 2021 - Diagnostics 11 (3):440.
    Biomarker-based predictive tests for subjectively asymptomatic Alzheimer’s disease (AD) are utilized in research today. Novel applications of artificial intelligence (AI) promise to predict the onset of AD several years in advance without determining biomarker thresholds. Until now, little attention has been paid to the new ethical challenges that AI brings to the early diagnosis in asymptomatic individuals, beyond contributing to research purposes, when we still lack adequate treatment. The aim of this paper is to explore the ethical arguments put forward (...)
  • Patientenautonomie Und Informierte Einwilligung: Schlüssel Und Barriere Medizinischer Behandlungen. Pia Becker - 2019 - J.B. Metzler.
    Pia Becker develops a conception of patient autonomy that, in contrast to the conceptions that have so far dominated medical ethics, is oriented toward the patient's fundamental capacity for autonomy. Its starting point is the necessity of informed consent, which protects not only patient autonomy but above all also the patient's bodily integrity. The two normative functions of patient autonomy, as barrier to and key for medical treatment, serve as conditions of adequacy. This conception of patient autonomy has the advantage of better protecting patients from being overburdened and their need for support services (...)
  • Causability and explainability of artificial intelligence in medicine. Andreas Holzinger, Georg Langs, Helmut Denk, Kurt Zatloukal & Heimo Müller - 2019 - WIREs Data Mining and Knowledge Discovery 9 (4):e1312.