References
  • Legal requirements on explainability in machine learning. Adrien Bibal, Michael Lognoul, Alexandre de Streel & Benoît Frénay - 2021 - Artificial Intelligence and Law 29 (2):149-169.
    Deep learning and other black-box models are becoming more and more popular today. Despite their high performance, they may not be accepted ethically or legally because of their lack of explainability. This paper presents the increasing number of legal requirements on machine learning model interpretability and explainability in the context of private and public decision making. It then explains how those legal requirements can be implemented into machine-learning models and concludes with a call for more inter-disciplinary research on explainability.
  • Artificial intelligence, transparency, and public decision-making. Karl de Fine Licht & Jenny de Fine Licht - 2020 - AI and Society 35 (4):917-926.
    The increasing use of Artificial Intelligence for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from the hopes of fully informed and objectively taken decisions to fear for the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on (...)
  • Agency Laundering and Information Technologies. Alan Rubel, Clinton Castro & Adam Pham - 2019 - Ethical Theory and Moral Practice 22 (4):1017-1041.
    When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number (...)
  • In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Real engines of the artificial intelligence revolution, machine learning models and algorithms are embedded nowadays in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
  • Should We Have a Right to Refuse Diagnostics and Treatment Planning by Artificial Intelligence? Iñigo de Miguel Beriain - 2020 - Medicine, Health Care and Philosophy 23 (2):247-252.
    Should we be allowed to refuse any involvement of artificial intelligence technology in diagnosis and treatment planning? This is the relevant question posed by Ploug and Holm in a recent article in Medicine, Health Care and Philosophy. In this article, I adhere to their conclusions, but not necessarily to the rationale that supports them. First, I argue that the idea that we should recognize this right on the basis of a rational interest defence is not plausible, unless we are willing (...)
  • Socio-Ethical Implications of Using AI in Accelerating SDG3 in Least Developed Countries. Kutoma Wakunuma, Tilimbe Jiya & Suleiman Aliyu - 2020 - Journal of Responsible Technology 4:100006.
  • The Explanation Game: A Formal Framework for Interpretable Machine Learning. David S. Watson & Luciano Floridi - 2021 - Synthese 198 (10):9211-9242.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal (...)
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2019 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • How to design AI for social good: seven essential factors. Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771-1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
  • Will Big Data and personalized medicine do the gender dimension justice? Antonio Carnevale, Emanuela A. Tangari, Andrea Iannone & Elena Sartini - forthcoming - AI and Society:1-13.
    Over the last decade, humans have produced each year as much data as were produced throughout the entire history of humankind. These data, in quantities that exceed current analytical capabilities, have been described as “the new oil,” an incomparable source of value. This is true for healthcare, as well. Conducting analyses of large, diverse, medical datasets promises the detection of previously unnoticed clinical correlations and new diagnostic or even therapeutic possibilities. However, using Big Data poses several problems, especially in terms (...)
  • Four Responsibility Gaps with Artificial Intelligence: Why They Matter and How to Address Them. Filippo Santoni de Sio & Giulio Mecacci - forthcoming - Philosophy and Technology:1-28.
    The notion of “responsibility gap” with artificial intelligence was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems (...)
  • Is explainable artificial intelligence intrinsically valuable? Nathan Colaner - forthcoming - AI and Society:1-8.
    There is general consensus that explainable artificial intelligence is valuable, but there is significant divergence when we try to articulate why, exactly, it is desirable. This question must be distinguished from two other kinds of questions in the XAI literature that are sometimes asked and addressed simultaneously. The first and most obvious is the ‘how’ question—some version of: ‘how do we develop technical strategies to achieve XAI?’ Another question is specifying what kind of explanation is worth having in the (...)
  • “Strongly Recommended” Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies. Marjolein Lanzing - 2019 - Philosophy and Technology 32 (3):549-568.
    This paper explores and rehabilitates the value of decisional privacy as a conceptual tool, complementary to informational privacy, for critiquing personalized choice architectures employed by self-tracking technologies. Self-tracking technologies are promoted and used as a means to self-improvement. Based on large aggregates of personal data and the data of other users, self-tracking technologies offer personalized feedback that nudges the user into behavioral change. The real-time personalization of choice architectures requires continuous surveillance and is a very powerful technology, recently coined as (...)
  • Towards Transparency by Design for Artificial Intelligence. Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz & Aurelia Tamò-Larrieux - 2020 - Science and Engineering Ethics 26 (6):3333-3361.
    In this article, we develop the concept of Transparency by Design that serves as practical guidance in helping promote the beneficial functions of transparency while mitigating its challenges in automated-decision making environments. With the rise of artificial intelligence and the ability of AI systems to make automated and self-learned decisions, a call for transparency of how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different (...)
  • Ethical and Legal Challenges of Informed Consent Applying Artificial Intelligence in Medical Diagnostic Consultations. Kristina Astromskė, Eimantas Peičius & Paulius Astromskis - forthcoming - AI and Society.
    This paper inquires into the complex issue of informed consent applying artificial intelligence in medical diagnostic consultations. The aim is to expose the main ethical and legal concerns of the New Health phenomenon, powered by intelligent machines. To achieve this objective, the first part of the paper analyzes ethical aspects of the alleged right to explanation, privacy, and informed consent, applying artificial intelligence in medical diagnostic consultations. This analysis is followed by a legal analysis of the limits and requirements for (...)
  • Algo-Rhythms and the Beat of the Legal Drum. Ugo Pagallo - 2018 - Philosophy and Technology 31 (4):507-524.
    The paper focuses on concerns and legal challenges brought on by the use of algorithms. A particular class of algorithms that augment or replace analysis and decision-making by humans, i.e. data analytics and machine learning, is under scrutiny. Taking into account Balkin’s work on “the laws of an algorithmic society”, attention is drawn to obligations of transparency, matters of due process, and accountability. This US-centric analysis on drawbacks and loopholes of current legal systems is complemented with the analysis of norms (...)
  • Algorithmic Accountability and Public Reason. Reuben Binns - 2018 - Philosophy and Technology 31 (4):543-556.
    The ever-increasing application of algorithms to decision-making in a range of social contexts has prompted demands for algorithmic accountability. Accountable decision-makers must provide their decision-subjects with justifications for their automated system’s outputs, but what kinds of broader principles should we expect such justifications to appeal to? Drawing from political philosophy, I present an account of algorithmic accountability in terms of the democratic ideal of ‘public reason’. I argue that situating demands for algorithmic accountability within this justificatory framework enables us to (...)
  • From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles Into Practices. Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132(3429):741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward Zalta (ed.), Stanford Encyclopedia of Philosophy. Palo Alto, Cal.: CSLI, Stanford University. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Dissecting the Algorithmic Leviathan: On the Socio-Political Anatomy of Algorithmic Governance. Pascal D. König - 2020 - Philosophy and Technology 33 (3):467-485.
    A growing literature is taking an institutionalist and governance perspective on how algorithms shape society based on unprecedented capacities for managing social complexity. Algorithmic governance altogether emerges as a novel and distinctive kind of societal steering. It appears to transcend established categories and modes of governance—and thus seems to call for new ways of thinking about how social relations can be regulated and ordered. However, as this paper argues, despite its novel way of realizing outcomes of collective steering and coordination, (...)
  • Ethics of the Health-Related Internet of Things: A Narrative Review. Brent Mittelstadt - 2017 - Ethics and Information Technology 19 (3):1-19.
    The internet of things is increasingly spreading into the domain of medical and social care. Internet-enabled devices for monitoring and managing the health and well-being of users outside of traditional medical institutions have rapidly become common tools to support healthcare. Health-related internet of things (H-IoT) technologies increasingly play a key role in health management, for purposes including disease prevention, real-time tele-monitoring of patient’s functions, testing of treatments, fitness and well-being monitoring, medication dispensation, and health research data collection. H-IoT promises many (...)