  • Hiring, Algorithms, and Choice: Why Interviews Still Matter. Vikram R. Bhargava & Pooria Assadi - 2024 - Business Ethics Quarterly 34 (2):201-230.
    Why do organizations conduct job interviews? The traditional view of interviewing holds that interviews are conducted, despite their steep costs, to predict a candidate’s future performance and fit. This view faces a twofold threat: the behavioral and algorithmic threats. Specifically, an overwhelming body of behavioral research suggests that we are bad at predicting performance and fit; furthermore, algorithms are already better than us at making these predictions in various domains. If the traditional view captures the whole story, then interviews seem (...)
  • Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach. Filippo Santoni de Sio, Giulio Mecacci, Simeon Calvert, Daniel Heikoop, Marjan Hagenzieker & Bart van Arem - 2023 - Minds and Machines 33 (4):587-611.
    The paper presents a framework to realise “meaningful human control” over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project “Meaningful Human Control over Automated Driving Systems” led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not (...)
  • The Challenges of Artificial Judicial Decision-Making for Liberal Democracy. Christoph Winter - 2022 - In P. Bystranowski, Bartosz Janik & M. Prochnicki (eds.), Judicial Decision-Making: Integrating Empirical and Theoretical Perspectives. Springer Nature. pp. 179-204.
    The application of artificial intelligence (AI) to judicial decision-making has already begun in many jurisdictions around the world. While AI seems to promise greater fairness, access to justice, and legal certainty, issues of discrimination and transparency have emerged and put liberal democratic principles under pressure, most notably in the context of bail decisions. Despite this, there has been no systematic analysis of the risks to liberal democratic values from implementing AI into judicial decision-making. This article sets out to fill this (...)
  • Varieties of (Extended) Thought Manipulation. J. Adam Carter - 2020 - In Mark Blitz & Christoph Bublitz (eds.), The Future of Freedom of Thought: Liberty, Technology, and Neuroscience. Palgrave Macmillan.
    Our understanding of what exactly needs to be protected against in order to safeguard a plausible construal of our ‘freedom of thought’ is changing. And this is because the recent influx of cognitive offloading and outsourcing—and the fast-evolving technologies that enable this—generate radical new possibilities for freedom-of-thought violating thought manipulation. This paper does three main things. First, I briefly overview how recent thinking in the philosophy of mind and cognitive science recognises—contrary to traditional Cartesian ‘internalist’ assumptions—ways in which our cognitive faculties, and (...)
  • The Blueprint for an AI Bill of Rights: In Search of Enaction, at Risk of Inaction. Emmie Hine & Luciano Floridi - 2023 - Minds and Machines 33 (2):285-292.
    The US is promoting a new vision of a “Good AI Society” through its recent AI Bill of Rights. This offers a promising vision of community-oriented equity unique amongst peer countries. However, it leaves the door open for potential rights violations. Furthermore, it may have some federal impact, but it is non-binding, and without concrete legislation, the private sector is likely to ignore it.
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • “Please understand we cannot provide further information”: evaluating content and transparency of GDPR-mandated AI disclosures. Alexander J. Wulf & Ognyan Seizov - 2024 - AI and Society 39 (1):235-256.
    The General Data Protection Regulation (GDPR) of the EU confirms the protection of personal data as a fundamental human right and affords data subjects more control over the way their personal information is processed, shared, and analyzed. However, where data are processed by artificial intelligence (AI) algorithms, asserting control and providing adequate explanations is a challenge. Due to massive increases in computing power and big data processing, modern AI algorithms are too complex and opaque to be understood by most data (...)
  • Value Alignment for Advanced Artificial Judicial Intelligence. Christoph Winter, Nicholas Hollman & David Manheim - 2023 - American Philosophical Quarterly 60 (2):187-203.
    This paper considers challenges resulting from the use of advanced artificial judicial intelligence (AAJI). We argue that these challenges should be considered through the lens of value alignment. Instead of discussing why specific goals and values, such as fairness and nondiscrimination, ought to be implemented, we consider the question of how AAJI can be aligned with goals and values more generally, in order to be reliably integrated into legal and judicial systems. This value alignment framing draws on AI safety and (...)
  • The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1–32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal (...)
  • Conceptual challenges for interpretable machine learning. David S. Watson - 2022 - Synthese 200 (2):1-33.
    As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning (IML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that (...)
  • Socio-ethical implications of using AI in accelerating SDG3 in Least Developed Countries. Kutoma Wakunuma, Tilimbe Jiya & Suleiman Aliyu - 2020 - Journal of Responsible Technology 4:100006.
  • Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers. Daniel W. Tigard - 2021 - Science and Engineering Ethics 27 (5):1-20.
    Artificial intelligence and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the ‘severance problem’—the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave (...)
  • Lost in Transduction: From Law and Code’s Intra-actions to the Right to Explanation in the European Data Protection Regulations. Miriam Tedeschi & Mika Viljanen - forthcoming - Law and Critique:1-18.
    Recent algorithmic technologies have challenged law’s anthropocentric assumptions. In this article, we develop a set of theoretical tools drawn from new materialisms and the philosophy of information to unravel the complex intra-actions between law and computer code. Accordingly, we first propose a framework for understanding the enmeshing of law and code based on a diffractive reading of Barad’s agential realism and Simondon’s theory of information. We argue that once law and code are understood as material entities that intra-act through in-formation, (...)
  • Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Aurelia Tamò-Larrieux, Christoph Lutz, Eduard Fosch Villaronga & Heike Felzmann - 2019 - Big Data and Society 6 (1).
    Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect (...)
  • The Automated Laplacean Demon: How ML Challenges Our Views on Prediction and Explanation. Sanja Srećković, Andrea Berber & Nenad Filipović - 2021 - Minds and Machines 32 (1):159-183.
    Certain characteristics make machine learning a powerful tool for processing large amounts of data, and also particularly unsuitable for explanatory purposes. There are worries that its increasing use in science may sideline the explanatory goals of research. We analyze the key characteristics of ML that might have implications for the future directions in scientific research: epistemic opacity and the ‘theory-agnostic’ modeling. These characteristics are further analyzed in a comparison of ML with the traditional statistical methods, in order to demonstrate what (...)
  • The European Commission report on ethics of connected and automated vehicles and the future of ethics of transportation. Filippo Santoni de Sio - 2021 - Ethics and Information Technology 23 (4):713-726.
    The paper has two goals. The first is presenting the main results of the recent report Ethics of Connected and Automated Vehicles: recommendations on road safety, privacy, fairness, explainability and responsibility written by the Horizon 2020 European Commission Expert Group to advise on specific ethical issues raised by driverless mobility, of which the author of this paper has been member and rapporteur. The second is presenting some broader ethical and philosophical implications of these recommendations, and using these to contribute to (...)
  • Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy and Technology 34 (4):1057-1084.
    The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected (...)
  • Agency Laundering and Information Technologies. Alan Rubel, Clinton Castro & Adam Pham - 2019 - Ethical Theory and Moral Practice 22 (4):1017-1041.
    When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number (...)
  • A Misdirected Principle with a Catch: Explicability for AI. Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al., Minds and Machines 28:689–707, 2018). There (...)
  • AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines. Scott Robbins - 2020 - AI and Society 35 (2):391-400.
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI as well as the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
  • AI-Enhanced Healthcare: Not a new Paradigm for Informed Consent. M. Pruski - forthcoming - Journal of Bioethical Inquiry:1-15.
    With the increasing prevalence of artificial intelligence (AI) and other digital technologies in healthcare, the ethical debate surrounding their adoption is becoming more prominent. Here I consider the issue of gaining informed patient consent to AI-enhanced care from the vantage point of the United Kingdom’s National Health Service setting. I build my discussion around two claims from the World Health Organization: that healthcare services should not be denied to individuals who refuse AI-enhanced care and that there is no precedent to (...)
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather (...)
  • Algo-Rhythms and the Beat of the Legal Drum. Ugo Pagallo - 2018 - Philosophy and Technology 31 (4):507-524.
    The paper focuses on concerns and legal challenges brought on by the use of algorithms. A particular class of algorithms that augment or replace analysis and decision-making by humans, i.e. data analytics and machine learning, is under scrutiny. Taking into account Balkin’s work on “the laws of an algorithmic society”, attention is drawn to obligations of transparency, matters of due process, and accountability. This US-centric analysis on drawbacks and loopholes of current legal systems is complemented with the analysis of norms (...)
  • From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132:741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
  • Ethics of the health-related internet of things: a narrative review. Brent Mittelstadt - 2017 - Ethics and Information Technology 19 (3):1-19.
    The internet of things is increasingly spreading into the domain of medical and social care. Internet-enabled devices for monitoring and managing the health and well-being of users outside of traditional medical institutions have rapidly become common tools to support healthcare. Health-related internet of things (H-IoT) technologies increasingly play a key role in health management, for purposes including disease prevention, real-time tele-monitoring of patient’s functions, testing of treatments, fitness and well-being monitoring, medication dispensation, and health research data collection. H-IoT promises many (...)
  • Challenging algorithmic profiling: The limits of data protection and anti-discrimination in responding to emergent discrimination. Tobias Matzner & Monique Mann - 2019 - Big Data and Society 6 (2).
    The potential for biases being built into algorithms has been known for some time, yet literature has only recently demonstrated the ways algorithmic profiling can result in social sorting and harm marginalised groups. We contend that with increased algorithmic complexity, biases will become more sophisticated and difficult to identify, control for, or contest. Our argument has four steps: first, we show how harnessing algorithms means that data gathered at a particular place and time relating to specific persons, can be used (...)
  • Artificial intelligence, transparency, and public decision-making. Karl de Fine Licht & Jenny de Fine Licht - 2020 - AI and Society 35 (4):917-926.
    The increasing use of Artificial Intelligence for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from the hopes of fully informed and objectively taken decisions to fear for the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on (...)
  • What Can Deep Neural Networks Teach Us About Embodied Bounded Rationality. Edward A. Lee - 2022 - Frontiers in Psychology 13.
    “Rationality” in Simon's “bounded rationality” is the principle that humans make decisions on the basis of step-by-step reasoning using systematic rules of logic to maximize utility. “Bounded rationality” is the observation that the ability of a human brain to handle algorithmic complexity and large quantities of data is limited. Bounded rationality, in other words, treats a decision maker as a machine carrying out computations with limited resources. Under the principle of embodied cognition, a cognitive mind is an interactive machine. Turing-Church (...)
  • “Strongly Recommended” Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies. Marjolein Lanzing - 2019 - Philosophy and Technology 32 (3):549-568.
    This paper explores and rehabilitates the value of decisional privacy as a conceptual tool, complementary to informational privacy, for critiquing personalized choice architectures employed by self-tracking technologies. Self-tracking technologies are promoted and used as a means to self-improvement. Based on large aggregates of personal data and the data of other users, self-tracking technologies offer personalized feedback that nudges the user into behavioral change. The real-time personalization of choice architectures requires continuous surveillance and is a very powerful technology, recently coined as (...)
  • Dissecting the Algorithmic Leviathan: On the Socio-Political Anatomy of Algorithmic Governance. Pascal D. König - 2020 - Philosophy and Technology 33 (3):467-485.
    A growing literature is taking an institutionalist and governance perspective on how algorithms shape society based on unprecedented capacities for managing social complexity. Algorithmic governance altogether emerges as a novel and distinctive kind of societal steering. It appears to transcend established categories and modes of governance—and thus seems to call for new ways of thinking about how social relations can be regulated and ordered. However, as this paper argues, despite its novel way of realizing outcomes of collective steering and coordination, (...)
  • Expertise, a Framework for our Most Characteristic Asset and Most Basic Inequality. Cliff Hooker, Claire Hooker & Giles Hooker - 2022 - Spontaneous Generations 10 (1):27-35.
    This essay provides a framework of concepts and principles suitable for systematic discussion of issues surrounding expertise. Expertise creates inequality. Its multiple benefits and the creativity of technology lead to a society replete with expertises. The basic binds of expertise derive from the desire of non-experts to be able to both enjoy what expertise offers and insure that it is exercised in the social interest. This involves trusting the exercise of expertise, involuntarily or voluntarily. A healthy society provides various means (...)
  • Reflection machines: increasing meaningful human control over Decision Support Systems. W. F. G. Haselager, H. K. Schraffenberger, R. J. M. van Eerdt & N. A. J. Cornelissen - 2022 - Ethics and Information Technology 24 (2).
    Rapid developments in Artificial Intelligence are leading to an increasing human reliance on machine decision making. Even in collaborative efforts with Decision Support Systems (DSSs), where a human expert is expected to make the final decisions, it can be hard to keep the expert actively involved throughout the decision process. DSSs suggest their own solutions and thus invite passive decision making. To keep humans actively ‘on’ the decision-making loop and counter overreliance on machines, we propose a ‘reflection machine’ (RM). This (...)
  • Reflection Machines: Supporting Effective Human Oversight Over Medical Decision Support Systems. Pim Haselager, Hanna Schraffenberger, Serge Thill, Simon Fischer, Pablo Lanillos, Sebastiaan van de Groes & Miranda van Hooff - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Human decisions are increasingly supported by decision support systems (DSS). Humans are required to remain “on the loop,” by monitoring and approving/rejecting machine recommendations. However, use of DSS can lead to overreliance on machines, reducing human oversight. This paper proposes “reflection machines” (RM) to increase meaningful human control. An RM provides a medical expert not with suggestions for a decision, but with questions that stimulate reflection about decisions. It can refer to data points or suggest counterarguments that are less compatible (...)
  • Can AI-Based Decisions be Genuinely Public? On the Limits of Using AI-Algorithms in Public Institutions. Alon Harel & Gadi Perl - 2024 - Jus Cogens 6 (1):47-64.
    AI-based algorithms are used extensively by public institutions. Thus, for instance, AI algorithms have been used in making decisions concerning punishment, providing welfare payments, making decisions concerning parole, and many other tasks which have traditionally been assigned to public officials and/or public entities. We develop a novel argument against the use of AI algorithms, in particular with respect to decisions made by public officials and public entities. We argue that decisions made by AI algorithms cannot count as public decisions, namely (...)
  • Explainable AI under contract and tort law: legal incentives and technical challenges. Philipp Hacker, Ralf Krestel, Stefan Grundmann & Felix Naumann - 2020 - Artificial Intelligence and Law 28 (4):415-439.
    This paper shows that the law, in subtle ways, may set hitherto unrecognized incentives for the adoption of explainable machine learning applications. In doing so, we make two novel contributions. First, on the legal side, we show that to avoid liability, professional actors, such as doctors and managers, may soon be legally compelled to use explainable ML models. We argue that the importance of explainability reaches far beyond data protection law, and crucially influences questions of contractual and tort liability for (...)
  • Mapping the Issues of Automated Legal Systems: Why Worry About Automatically Processable Regulation? Clement Guitton, Aurelia Tamò-Larrieux & Simon Mayer - 2022 - Artificial Intelligence and Law 31 (3):571-599.
    The field of computational law has increasingly moved into the focus of the scientific community, with recent research analysing its issues and risks. In this article, we seek to draw a structured and comprehensive list of societal issues that the deployment of automatically processable regulation could entail. We do this by systematically exploring attributes of the law that are being challenged through its encoding and by taking stock of what issues current projects in this field raise. This article adds to (...)
  • How to protect privacy in a datafied society? A presentation of multiple legal and conceptual approaches. Oskar J. Gstrein & Anne Beaulieu - 2022 - Philosophy and Technology 35 (1):1-38.
    The United Nations confirmed that privacy remains a human right in the digital age, but our daily digital experiences and seemingly ever-increasing amounts of data suggest that privacy is a mundane, distributed and technologically mediated concept. This article explores privacy by mapping out different legal and conceptual approaches to privacy protection in the context of datafication. It provides an essential starting point to explore the entwinement of technological, ethical and regulatory dynamics. It clarifies why each of the presented approaches emphasises (...)
  • Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice.Thomas Grote & Philipp Berens - 2023 - Journal of Medicine and Philosophy 48 (1):84-97.
    In light of recent advances in machine learning for medical applications, the automation of medical diagnostics is imminent. That said, before machine learning algorithms find their way into clinical practice, various problems at the epistemic level need to be overcome. In this paper, we discuss different sources of uncertainty arising for clinicians trying to evaluate the trustworthiness of algorithmic evidence when making diagnostic judgments. Thereby, we examine many of the limitations of current machine learning algorithms (with deep learning in particular) (...)
  • How to design AI for social good: seven essential factors.Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771–1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
  • In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions.Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Machine learning models and algorithms, the real engines of the artificial intelligence revolution, are nowadays embedded in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
  • Towards Transparency by Design for Artificial Intelligence.Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz & Aurelia Tamò-Larrieux - 2020 - Science and Engineering Ethics 26 (6):3333-3361.
    In this article, we develop the concept of Transparency by Design that serves as practical guidance in helping promote the beneficial functions of transparency while mitigating its challenges in automated-decision making environments. With the rise of artificial intelligence and the ability of AI systems to make automated and self-learned decisions, a call for transparency of how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different (...)
  • The Global Governance of Artificial Intelligence: Some Normative Concerns.Eva Erman & Markus Furendal - 2022 - Moral Philosophy and Politics 9 (2):267-291.
    The creation of increasingly complex artificial intelligence (AI) systems raises urgent questions about their ethical and social impact on society. Since this impact ultimately depends on political decisions about normative issues, political philosophers can make valuable contributions by addressing such questions. Currently, AI development and application are to a large extent regulated through non-binding ethics guidelines penned by transnational entities. Assuming that the global governance of AI should be at least minimally democratic and fair, this paper sets out three desiderata (...)
  • Should we have a right to refuse diagnostics and treatment planning by artificial intelligence?Iñigo de Miguel Beriain - 2020 - Medicine, Health Care and Philosophy 23 (2):247-252.
    Should we be allowed to refuse any involvement of artificial intelligence technology in diagnosis and treatment planning? This is the relevant question posed by Ploug and Holm in a recent article in Medicine, Health Care and Philosophy. In this article, I adhere to their conclusions, but not necessarily to the rationale that supports them. First, I argue that the idea that we should recognize this right on the basis of a rational interest defence is not plausible, unless we are willing (...)
  • Algorithmic decision-making employing profiling: will trade secrecy protection render the right to explanation toothless?Paul B. de Laat - 2022 - Ethics and Information Technology 24 (2).
    Algorithmic decision-making based on profiling may significantly affect people’s destinies. As a rule, however, explanations for such decisions are lacking. What are the chances for a “right to explanation” to be realized soon? After an exploration of the regulatory efforts that are currently pushing for such a right it is concluded that, at the moment, the GDPR stands out as the main force to be reckoned with. In cases of profiling, data subjects are granted the right to receive meaningful information (...)
  • Is explainable artificial intelligence intrinsically valuable?Nathan Colaner - 2022 - AI and Society 37 (1):231-238.
    There is general consensus that explainable artificial intelligence is valuable, but there is significant divergence when we try to articulate why, exactly, it is desirable. This question must be distinguished from two other kinds of questions in the XAI literature that are sometimes asked and addressed simultaneously. The first and most obvious is the ‘how’ question—some version of: ‘how do we develop technical strategies to achieve XAI?’ Another question is specifying what kind of explanation is worth having in the (...)
  • Will Big Data and personalized medicine do the gender dimension justice?Antonio Carnevale, Emanuela A. Tangari, Andrea Iannone & Elena Sartini - 2021 - AI and Society:1-13.
    Over the last decade, humans have produced each year as much data as were produced throughout the entire history of humankind. These data, in quantities that exceed current analytical capabilities, have been described as “the new oil,” an incomparable source of value. This is true for healthcare, as well. Conducting analyses of large, diverse, medical datasets promises the detection of previously unnoticed clinical correlations and new diagnostic or even therapeutic possibilities. However, using Big Data poses several problems, especially in terms (...)
  • AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)