  • A Justifiable Investment in AI for Healthcare: Aligning Ambition with Reality. Kassandra Karpathakis, Jessica Morley & Luciano Floridi - 2024 - Minds and Machines 34 (4):1-40.
    Healthcare systems are grappling with critical challenges, including chronic diseases in aging populations, unprecedented health care staffing shortages and turnover, scarce resources, unprecedented demands and wait times, escalating healthcare expenditure, and declining health outcomes. As a result, policymakers and healthcare executives are investing in artificial intelligence (AI) solutions to increase operational efficiency, lower health care costs, and improve patient care. However, the current level of investment in developing healthcare AI among members of the Global Digital Health Partnership does not seem to (...)
  • A Genealogical Approach to Algorithmic Bias. Marta Ziosi, David Watson & Luciano Floridi - 2024 - Minds and Machines 34 (2):1-17.
    The Fairness, Accountability, and Transparency (FAccT) literature tends to focus on bias as a problem that requires ex post solutions (e.g. fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions (...)
  • Adaptable robots, ethics, and trust: a qualitative and philosophical exploration of the individual experience of trustworthy AI. Stephanie Sheir, Arianna Manzini, Helen Smith & Jonathan Ives - forthcoming - AI and Society:1-14.
    Much has been written about the need for trustworthy artificial intelligence (AI), but the underlying meaning of trust and trustworthiness can vary or be used in confusing ways. It is not always clear whether individuals are speaking of a technology’s trustworthiness, a developer’s trustworthiness, or simply of gaining the trust of users by any means. In sociotechnical circles, trustworthiness is often used as a proxy for ‘the good’, illustrating the moral heights to which technologies and developers ought to aspire, at (...)
  • Ethical governance of artificial intelligence for defence: normative tradeoffs for principle to practice guidance. Alexander Blanchard, Christopher Thomas & Mariarosaria Taddeo - forthcoming - AI and Society:1-14.
    The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance of these systems. A recent shift from the what to the how of AI ethics sees a nascent body of literature published by defence organisations focussed on guidance to implement AI ethics principles. These efforts have neglected a crucial intermediate step between principles and guidance concerning the elicitation of ethical requirements for specifying the guidance. In this article, we outline the key normative (...)
  • Ethics of Artificial Intelligence. Stefan Buijsman, Michael Klenk & Jeroen van den Hoven - forthcoming - In Nathalie Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of AI. Cambridge University Press.
    Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice due to the context (...)
  • Connecting ethics and epistemology of AI. Federica Russo, Eric Schliesser & Jean Wagemans - forthcoming - AI and Society:1-19.
    The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and (...)
  • Artificial intelligence in support of the circular economy: ethical considerations and a path forward. Huw Roberts, Joyce Zhang, Ben Bariach, Josh Cowls, Ben Gilburt, Prathm Juneja, Andreas Tsamados, Marta Ziosi, Mariarosaria Taddeo & Luciano Floridi - forthcoming - AI and Society:1-14.
    The world’s current model for economic development is unsustainable. It encourages high levels of resource extraction, consumption, and waste that undermine positive environmental outcomes. Transitioning to a circular economy (CE) model of development has been proposed as a sustainable alternative. Artificial intelligence (AI) is a crucial enabler for CE. It can aid in designing robust and sustainable products, facilitate new circular business models, and support the broader infrastructures needed to scale circularity. However, to date, considerations of the ethical implications of (...)
  • “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
  • Trust and ethics in AI. Hyesun Choung, Prabu David & Arun Ross - 2023 - AI and Society 38 (2):733-745.
    With the growing influence of artificial intelligence (AI) in our lives, the ethical implications of AI have received attention from various communities. Building on previous work on trust in people and technology, we advance a multidimensional, multilevel conceptualization of trust in AI and examine the relationship between trust and ethics using the data from a survey of a national sample in the U.S. This paper offers two key dimensions of trust in AI—human-like trust and functionality trust—and presents a multilevel conceptualization (...)
  • Ethics-based auditing of automated decision-making systems: intervention points and policy implications. Jakob Mökander & Maria Axente - 2023 - AI and Society 38 (1):153-171.
    Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS. In previous work, we proposed that ethics-based auditing (EBA)—that is, a structured process by which ADMS are assessed for consistency with relevant principles or norms—can (a) help organisations (...)
  • Transparency and the Black Box Problem: Why We Do Not Trust AI. Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)
  • Towards a Value Sensitive Design Framework for Attaining Meaningful Human Control over Autonomous Weapons Systems. Steven Umbrello - 2021 - Dissertation, Consortium Fino
    The international debate on the ethics and legality of autonomous weapon systems (AWS) as well as the call for a ban are primarily focused on the nebulous concept of fully autonomous AWS. More specifically, on AWS that are capable of target selection and engagement without human supervision or control. This thesis argues that such a conception of autonomy is divorced both from military planning and decision-making operations as well as the design requirements that govern AWS engineering and subsequently the tracking (...)
  • Ethics-based auditing of automated decision-making systems: nature, scope, and limitations. Jakob Mökander, Jessica Morley, Mariarosaria Taddeo & Luciano Floridi - 2021 - Science and Engineering Ethics 27 (4):1-30.
    Important decisions that impact humans lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of (...)
  • The European legislation on AI: a brief analysis of its philosophical approach. Luciano Floridi - 2021 - Philosophy and Technology 34 (2):215-222.
    On 21 April 2021, the European Commission published the proposal of the new EU Artificial Intelligence Act (AIA) — one of the most influential steps taken so far to regulate AI internationally. This article highlights some foundational aspects of the Act and analyses the philosophy behind its proposal.
  • The Principle-at-Risk Analysis (PaRA): Operationalising Digital Ethics by Bridging Principles and Operations of a Digital Ethics Advisory Panel. André T. Nemat, Sarah J. Becker, Simon Lucas, Sean Thomas, Isabel Gadea & Jean Enno Charton - 2023 - Minds and Machines 33 (4):737-760.
    Recent attempts to develop and apply digital ethics principles to address the challenges of the digital transformation leave organisations with an operationalisation gap. To successfully implement such guidance, they must find ways to translate high-level ethics frameworks into practical methods and tools that match their specific workflows and needs. Here, we describe the development of a standardised risk assessment tool, the Principle-at-Risk Analysis (PaRA), as a means to close this operationalisation gap for a key level of the ethics infrastructure at (...)
  • The latent space of data ethics. Enrico Panai - forthcoming - AI and Society:1-19.
    In informationally mature societies, almost all organisations record, generate, process, use, share and disseminate data. In particular, the rise of AI and autonomous systems has corresponded to an improvement in computational power and in solving complex problems. However, the resulting possibilities have been coupled with an upsurge of ethical risks. To avoid the misuse, underuse, and harmful use of data and data-based systems like AI, we should use an ethical framework appropriate to the object of its reasoning. Unfortunately, in recent (...)
  • Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit. Mariarosaria Taddeo & Alexander Blanchard - 2022 - Philosophy and Technology 35 (3):1-24.
    In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility attributed to individuals in a justified and fair way and which is accepted by individuals as an assessment of their own moral character. We argue that, given the unpredictability of AWS, meaningful (...)
  • What about investors? ESG analyses as tools for ethics-based AI auditing. Matti Minkkinen, Anniina Niukkanen & Matti Mäntymäki - 2024 - AI and Society 39 (1):329-343.
    Artificial intelligence (AI) governance and auditing promise to bridge the gap between AI ethics principles and the responsible use of AI systems, but they require assessment mechanisms and metrics. Effective AI governance is not only about legal compliance; organizations can strive to go beyond legal requirements by proactively considering the risks inherent in their AI systems. In the past decade, investors have become increasingly active in advancing corporate social responsibility and sustainability practices. Including nonfinancial information related to environmental, social, and (...)
  • Operationalising AI ethics: barriers, enablers and next steps. Jessica Morley, Libby Kinsey, Anat Elhalal, Francesca Garcia, Marta Ziosi & Luciano Floridi - 2023 - AI and Society 38 (1):411-423.
    By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part, this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research sought to start closing this gap between the ‘what’ and the ‘how’ of AI ethics through the creation of a searchable typology (...)
  • Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Jakob Mökander, Maria Axente, Federico Casolari & Luciano Floridi - 2022 - Minds and Machines 32 (2):241-268.
    The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the _conformity assessments_ that providers of high-risk AI systems are expected to conduct, and (...)
  • Ethical Principles for Artificial Intelligence in National Defence. Mariarosaria Taddeo, David McNeish, Alexander Blanchard & Elizabeth Edgar - 2021 - Philosophy and Technology 34 (4):1707-1729.
    Defence agencies across the globe identify artificial intelligence as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles—justified and overridable uses, just and transparent systems and processes, human moral responsibility, meaningful human control (...)