  • The Ethics of AI Ethics: An Evaluation of Guidelines. Thilo Hagendorff - 2020 - Minds and Machines 30 (1):99-120.
    Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, (...)
  • The ethics of algorithms: key problems and solutions. Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2022 - AI and Society 37 (1):215-230.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...)
  • Operationalising AI ethics: barriers, enablers and next steps. Jessica Morley, Libby Kinsey, Anat Elhalal, Francesca Garcia, Marta Ziosi & Luciano Floridi - 2023 - AI and Society 38 (1):411-423.
    By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part, this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research sought to start closing this gap between the ‘what’ and the ‘how’ of AI ethics through the creation of a searchable typology (...)
  • Ethics as a service: a pragmatic operationalisation of AI ethics. Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):239-256.
    As the range of potential uses for Artificial Intelligence, in particular machine learning, has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of (...)
  • On Predicting Recidivism: Epistemic Risk, Tradeoffs, and Values in Machine Learning. Justin B. Biddle - 2022 - Canadian Journal of Philosophy 52 (3):321-341.
    Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of ML systems (...)
  • Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  • A Virtue-Based Framework to Support Putting AI Ethics into Practice. Thilo Hagendorff - 2022 - Philosophy and Technology 35 (3):1-24.
    Many ethics initiatives have stipulated sets of principles and standards for good technology development in the AI sector. However, several AI ethics researchers have pointed out a lack of practical realization of these principles. Following that, AI ethics underwent a practical turn, but without deviating from the principled approach. This paper proposes an approach complementary to the principled one, based on virtue ethics. It defines four “basic AI virtues”, namely justice, honesty, responsibility and care, all of which represent specific (...)
  • Ethics-based auditing of automated decision-making systems: nature, scope, and limitations. Jakob Mökander, Jessica Morley, Mariarosaria Taddeo & Luciano Floridi - 2021 - Science and Engineering Ethics 27 (4):1-30.
    Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of (...)
  • Defending explicability as a principle for the ethics of artificial intelligence in medicine. Jonathan Adams - 2023 - Medicine, Health Care and Philosophy 26 (4):615-623.
    The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of ‘principlism’. It specifically responds to a recent set of criticisms that (...)
  • AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind. Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be derived (...)
  • Ethical governance of artificial intelligence for defence: normative tradeoffs for principle to practice guidance. Alexander Blanchard, Christopher Thomas & Mariarosaria Taddeo - forthcoming - AI and Society:1-14.
    The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance of these systems. A recent shift from the what to the how of AI ethics sees a nascent body of literature published by defence organisations focussed on guidance to implement AI ethics principles. These efforts have neglected a crucial intermediate step between principles and guidance concerning the elicitation of ethical requirements for specifying the guidance. In this article, we outline the key normative (...)
  • Artificial intelligence ethics has a black box problem. Jean-Christophe Bélisle-Pipon, Erica Monteferrante, Marie-Christine Roy & Vincent Couture - 2023 - AI and Society 38 (4):1507-1522.
    It has become a truism that the ethics of artificial intelligence (AI) is necessary and must help guide technological developments. Numerous ethical guidelines have emerged from academia, industry, government and civil society in recent years. While they provide a basis for discussion on appropriate regulation of AI, it is not always clear how these ethical guidelines were developed, and by whom. Using content analysis, we surveyed a sample of the major documents (n = 47) and analyzed the accessible information regarding (...)
  • Operationalising AI ethics: how are companies bridging the gap between practice and principles? An exploratory study. Javier Camacho Ibáñez & Mónica Villas Olmeda - 2022 - AI and Society 37 (4):1663-1687.
    Despite the increase in the research field of ethics in artificial intelligence, most efforts have focused on the debate about principles and guidelines for responsible AI, but not enough attention has been given to the “how” of applied ethics. This paper aims to advance the research exploring the gap between practice and principles in AI ethics by identifying how companies are applying those guidelines and principles in practice. Through a qualitative methodology based on 22 semi-structured interviews and two focus groups, (...)
  • Reflections on Putting AI Ethics into Practice: How Three AI Ethics Approaches Conceptualize Theory and Practice. Hannah Bleher & Matthias Braun - 2023 - Science and Engineering Ethics 29 (3):1-21.
    Critics currently argue that applied ethics approaches to artificial intelligence (AI) are too principles-oriented and entail a theory–practice gap. Several applied ethical approaches try to prevent such a gap by conceptually translating ethical theory into practice. In this article, we explore how the currently most prominent approaches of AI ethics translate ethics into practice. Therefore, we examine three approaches to applied AI ethics: the embedded ethics approach, the ethically aligned approach, and the Value Sensitive Design (VSD) approach. We analyze each (...)
  • Organisational responses to the ethical issues of artificial intelligence. Bernd Carsten Stahl, Josephina Antoniou, Mark Ryan, Kevin Macnish & Tilimbe Jiya - 2022 - AI and Society 37 (1):23-37.
    The ethics of artificial intelligence is a widely discussed topic. There are numerous initiatives that aim to develop the principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in our current (...)
  • AI for the public. How public interest theory shifts the discourse on AI. Theresa Züger & Hadi Asghari - 2023 - AI and Society 38 (2):815-828.
    AI for social good is a thriving research topic and a frequently declared goal of AI strategies and regulation. This article investigates the requirements necessary in order for AI to actually serve a public interest, and hence be socially good. The authors propose shifting the focus of the discourse towards democratic governance processes when developing and deploying AI systems. The article draws from the rich history of public interest theory in political philosophy and law, and develops a framework for ‘public (...)
  • Beyond explainability: justifiability and contestability of algorithmic decision systems. Clément Henin & Daniel Le Métayer - 2022 - AI and Society 37 (4):1397-1410.
    In this paper, we point out that explainability is useful but not sufficient to ensure the legitimacy of algorithmic decision systems. We argue that the key requirements for high-stakes decision systems should be justifiability and contestability. We highlight the conceptual differences between explanations and justifications, provide dual definitions of justifications and contestations, and suggest different ways to operationalize justifiability and contestability.
  • Augmenting Morality through Ethics Education: the ACTWith model. Jeffrey White - 2024 - AI and Society:1-20.
    Recently in this journal, Jessica Morley and colleagues (AI & SOC 2023 38:411–423) review AI ethics and education, suggesting that a cultural shift is necessary in order to prepare students for their responsibilities in developing technology infrastructure that should shape ways of life for many generations. Current AI ethics guidelines are abstract and difficult to implement as practical moral concerns proliferate. They call for improvements in ethics course design, focusing on real-world cases and perspective-taking tools to immerse students in challenging (...)
  • Before and beyond trust: reliance in medical AI. Charalampia Kerasidou, Angeliki Kerasidou, Monika Buscher & Stephen Wilkinson - 2021 - Journal of Medical Ethics 48 (11):852-856.
    Artificial intelligence is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, are contributing to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. As a response to these concerns there has been a concentrated effort from public bodies, policy-makers and technology companies leading the way in AI to address what is identified as a "public trust deficit". This paper argues that a focus on trust (...)
  • AI ethics as subordinated innovation network. James Steinhoff - forthcoming - AI and Society:1-13.
    AI ethics is proposed, by the Big Tech companies which lead AI research and development, as the cure for diverse social problems posed by the commercialization of data-intensive technologies. It aims to reconcile capitalist AI production with ethics. However, AI ethics is itself now the subject of wide criticism; most notably, it is accused of being no more than “ethics washing”, a cynical means of dissimulation for Big Tech, while it continues its business operations unchanged. This paper aims to critically (...)
  • What does it mean to embed ethics in data science? An integrative approach based on the microethics and virtues. Louise Bezuidenhout & Emanuele Ratti - 2021 - AI and Society 36:939-953.
    In the past few years, scholars have been questioning whether the current approach in data ethics, based on high-level case studies and general principles, is effective. In particular, some have complained that such an approach to ethics is difficult to apply and to teach in the context of data science. In response to these concerns, there have been discussions about how ethics should be “embedded” in the practice of data science, in the sense of showing (...)
  • Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective. Erik Hermann - 2022 - Journal of Business Ethics 179 (1):43-61.
    Artificial intelligence is shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawbacks of the substantial opportunities AI systems and applications provide in marketing are ethical controversies. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To (...)
  • Connecting ethics and epistemology of AI. Federica Russo, Eric Schliesser & Jean Wagemans - forthcoming - AI and Society:1-19.
    The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and (...)
  • Cultivating Moral Attention: a Virtue-Oriented Approach to Responsible Data Science in Healthcare. Emanuele Ratti & Mark Graves - 2021 - Philosophy and Technology 34 (4):1819-1846.
    In the past few years, the ethical ramifications of AI technologies have been at the center of intense debates. Considerable attention has been devoted to understanding how a morally responsible practice of data science can be promoted and which values have to shape it. In this context, ethics and moral responsibility have been mainly conceptualized as compliance to widely shared principles. However, several scholars have highlighted the limitations of such a principled approach. Drawing from microethics and the virtue theory tradition, (...)
  • Ethics-based auditing of automated decision-making systems: intervention points and policy implications. Jakob Mökander & Maria Axente - 2023 - AI and Society 38 (1):153-171.
    Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS. In previous work, we proposed that ethics-based auditing (EBA)—that is, a structured process by which ADMS are assessed for consistency with relevant principles or norms—can (a) help organisations (...)
  • Investing in AI for social good: an analysis of European national strategies. Francesca Foffano, Teresa Scantamburlo & Atia Cortés - 2023 - AI and Society 38 (2):479-500.
    Artificial Intelligence (AI) has become a driving force in modern research, industry and public administration and the European Union (EU) is embracing this technology with a view to creating societal, as well as economic, value. This effort has been shared by EU Member States which were all encouraged to develop their own national AI strategies outlining policies and investment levels. This study focuses on how EU Member States are approaching the promise to develop and use AI for the good of (...)
  • Actionable Principles for Artificial Intelligence Policy: Three Pathways. Charlotte Stix - 2021 - Science and Engineering Ethics 27 (1):1-17.
    In the development of governmental policy for artificial intelligence that is informed by ethics, one avenue currently pursued is that of drawing on “AI Ethics Principles”. However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of ‘Actionable Principles for AI’. The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements (...)
  • Contextual Integrity as a General Conceptual Tool for Evaluating Technological Change. Elizabeth O’Neill - 2022 - Philosophy and Technology 35 (3):1-25.
    The fast pace of technological change necessitates new evaluative and deliberative tools. This article develops a general, functional approach to evaluating technological change, inspired by Nissenbaum’s theory of contextual integrity. Nissenbaum introduced the concept of contextual integrity to help analyze how technological changes can produce privacy problems. Reinterpreted, the concept of contextual integrity can aid our thinking about how technological changes affect the full range of human concerns and values—not only privacy. I propose a generalized concept of contextual integrity that (...)
  • From Reality to World. A Critical Perspective on AI Fairness. Jean-Marie John-Mathews, Dominique Cardon & Christine Balagué - 2022 - Journal of Business Ethics 178 (4):945-959.
    Fairness of Artificial Intelligence decisions has become a big challenge for governments, companies, and societies. We offer a theoretical contribution to consider AI ethics outside of high-level and top-down approaches, based on the distinction between “reality” and “world” from Luc Boltanski. To do so, we provide a new perspective on the debate on AI fairness and show that criticism of ML unfairness is “realist”, in other words, grounded in an already instituted reality based on demographic categories produced by institutions. Second, (...)
  • What about investors? ESG analyses as tools for ethics-based AI auditing. Matti Minkkinen, Anniina Niukkanen & Matti Mäntymäki - 2024 - AI and Society 39 (1):329-343.
    Artificial intelligence (AI) governance and auditing promise to bridge the gap between AI ethics principles and the responsible use of AI systems, but they require assessment mechanisms and metrics. Effective AI governance is not only about legal compliance; organizations can strive to go beyond legal requirements by proactively considering the risks inherent in their AI systems. In the past decade, investors have become increasingly active in advancing corporate social responsibility and sustainability practices. Including nonfinancial information related to environmental, social, and (...)
  • AI research ethics is in its infancy: the EU’s AI Act can make it a grown-up. Anaïs Resseguier & Fabienne Ufert - 2024 - Research Ethics 20 (2):143-155.
    As the artificial intelligence (AI) ethics field is currently working towards its operationalisation, ethics review as carried out by research ethics committees (RECs) constitutes a powerful, but so far underdeveloped, framework to make AI ethics effective in practice at the research level. This article contributes to the elaboration of research ethics frameworks for research projects developing and/or using AI. It highlights that these frameworks are still in their infancy and in need of a structure and criteria to ensure AI research (...)
  • The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence. Jake B. Telkamp & Marc H. Anderson - 2022 - Journal of Business Ethics 178 (4):961-976.
    Organizations are making massive investments in artificial intelligence, and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different (...)
  • Supporting Trustworthy AI Through Machine Unlearning. Emmie Hine, Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2024 - Science and Engineering Ethics 30 (5):1-13.
    Machine unlearning (MU) is often analyzed in terms of how it can facilitate the “right to be forgotten.” In this commentary, we show that MU can support the OECD’s five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy (...)
  • From AI Ethics Principles to Practices: A Teleological Methodology to Apply AI Ethics Principles in The Defence Domain. Christopher Thomas, Alexander Blanchard & Mariarosaria Taddeo - 2024 - Philosophy and Technology 37 (1):1-21.
    This article provides a methodology for the interpretation of AI ethics principles to specify ethical criteria for the development and deployment of AI systems in high-risk domains. The methodology consists of a three-step process deployed by an independent, multi-stakeholder ethics board to: (1) identify the appropriate level of abstraction for modelling the AI lifecycle; (2) interpret prescribed principles to extract specific requirements to be met at each step of the AI lifecycle; and (3) define the criteria to inform purpose- and (...)
  • Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence. Alexander Buhmann & Christian Fieseler - forthcoming - Business Ethics Quarterly:1-34.
    Responsible innovation in artificial intelligence calls for public deliberation: well-informed “deep democratic” debate that involves actors from the public, private, and civil society sectors in joint efforts to critically address the goals and means of AI. Adopting such an approach constitutes a challenge, however, due to the opacity of AI and strong knowledge boundaries between experts and citizens. This undermines trust in AI and undercuts key conditions for deliberation. We approach this challenge as a problem of situating the knowledge of (...)
  • The case for a broader approach to AI assurance: addressing “hidden” harms in the development of artificial intelligence. Christopher Thomas, Huw Roberts, Jakob Mökander, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) assurance is an umbrella term describing many approaches—such as impact assessment, audit, and certification procedures—used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance approaches largely focus on two overlapping categories of harms: deployment harms that emerge at, or after, the point of use, and individual harms that directly impact a person as an individual. Current approaches generally overlook upstream collective and societal harms associated with the development of systems, such as (...)
  • Operationalizing the Ethics of Connected and Automated Vehicles. An Engineering Perspective. Fabio Fossa - 2022 - International Journal of Technoethics 13 (1):1-20.
    In response to the many social impacts of automated mobility, in September 2020 the European Commission published Ethics of Connected and Automated Vehicles, a report in which recommendations on road safety, privacy, fairness, explainability, and responsibility are drawn from a set of eight overarching principles. This paper presents the results of an interdisciplinary research where philosophers and engineers joined efforts to operationalize the guidelines advanced in the report. To this aim, we endorse a function-based working approach to support the implementation (...)
  • Implementing Ethics in Healthcare AI-Based Applications: A Scoping Review. Robyn Clay-Williams, Elizabeth Austin & Magali Goirand - 2021 - Science and Engineering Ethics 27 (5):1-53.
    A number of Artificial Intelligence (AI) ethics frameworks have been published in the last 6 years in response to the growing concerns posed by the adoption of AI in different sectors, including healthcare. While there is a strong culture of medical ethics in healthcare applications, AI-based Healthcare Applications (AIHA) are challenging the existing ethics and regulatory frameworks. This scoping review explores how ethics frameworks have been implemented in AIHA, how these implementations have been evaluated and whether they have been successful. (...)
  • The limitation of ethics-based approaches to regulating artificial intelligence: regulatory gifting in the context of Russia. Gleb Papyshev & Masaru Yarime - forthcoming - AI and Society:1-16.
    The effects that artificial intelligence (AI) technologies will have on society in the short- and long-term are inherently uncertain. For this reason, many governments are avoiding strict command and control regulations for this technology and instead rely on softer ethics-based approaches. The Russian approach to regulating AI is characterized by the prevalence of unenforceable ethical principles implemented via industry self-regulation. We analyze the emergence of the regulatory regime for AI in Russia to illustrate the limitations of this approach. The article (...)
  • Tailoring responsible research and innovation to the translational context: the case of AI-supported exergaming. Sabrina Blank, Celeste Mason, Frank Steinicke & Christian Herzog - 2024 - Ethics and Information Technology 26 (2):1-16.
    We discuss the implementation of Responsible Research and Innovation (RRI) within a project for the development of an AI-supported exergame for assisted movement training, outline outcomes and reflect on methodological opportunities and limitations. We adopted the responsibility-by-design (RbD) standard (CEN CWA 17796:2021) supplemented by methods for collaborative, ethical reflection to foster and support a shift towards a culture of trustworthiness inherent to the entire development process. An embedded ethicist organised the procedure to instantiate a collaborative learning effort and implement RRI (...)
  • Employee Perceptions of the Effective Adoption of AI Principles. Stephanie Kelley - 2022 - Journal of Business Ethics 178 (4):871-893.
    This study examines employee perceptions on the effective adoption of artificial intelligence principles in their organizations. 49 interviews were conducted with employees of 24 organizations across 11 countries. Participants worked directly with AI across a range of positions, from junior data scientist to Chief Analytics Officer. The study found that there are eleven components that could impact the effective adoption of AI principles in organizations: communication, management support, training, an ethics office, a reporting mechanism, enforcement, measurement, accompanying technical processes, a (...)
  • AI led ethical digital transformation: framework, research and managerial implications. Kumar Saurabh, Ridhi Arora, Neelam Rani, Debasisha Mishra & M. Ramkumar - 2022 - Journal of Information, Communication and Ethics in Society 20 (2):229-256.
    Purpose: Digital transformation (DT) leverages digital technologies to change current processes and introduce new processes in any organisation’s business model, customer/user experience and operational processes. Artificial intelligence (AI) plays a significant role in achieving DT. As DT touches every sphere of humanity, AI-led DT raises many fundamental questions. These questions raise concerns for the systems deployed, how they should behave, what risks they carry, the monitoring and evaluation control we have in hand, etc. These issues call for the need (...)
  • Getting into the engine room: a blueprint to investigate the shadowy steps of AI ethics. Johan Rochel & Florian Evéquoz - 2021 - AI and Society 36 (2):609-622.
    Enacting an AI system typically requires three iterative phases where AI engineers are in command: selection and preparation of the data, selection and configuration of algorithmic tools, and fine-tuning of the different parameters on the basis of intermediate results. Our main hypothesis is that these phases involve practices with ethical questions. This paper maps these ethical questions and proposes a way to address them in light of a neo-republican understanding of freedom, defined as absence of domination. We thereby identify different (...)
  • The teaching of computer ethics on computer science and related degree programmes. A European survey. Ioannis Stavrakakis, Damian Gordon, Brendan Tierney, Anna Becevel, Emma Murphy, Gordana Dodig-Crnkovic, Radu Dobrin, Viola Schiaffonati, Cristina Pereira, Svetlana Tikhonenko, J. Paul Gibson, Stephane Maag, Francesco Agresta, Andrea Curley, Michael Collins & Dympna O’Sullivan - 2021 - International Journal of Ethics Education 7 (1):101-129.
    Within the Computer Science community, many ethical issues have emerged as significant and critical concerns. Computer ethics is an academic field in its own right and there are unique ethical issues associated with information technology. It encompasses a range of issues and concerns including privacy and agency around personal information, Artificial Intelligence and pervasive technology, the Internet of Things and surveillance applications. As computing technology impacts society at an ever growing pace, there are growing calls for more computer ethics content (...)
  • Contestable AI by Design: Towards a Framework. Kars Alfrink, Ianus Keller, Gerd Kortuem & Neelke Doorn - 2023 - Minds and Machines 33 (4):613-639.
    As the use of AI systems continues to increase, so do concerns over their lack of fairness, legitimacy and accountability. Such harmful automated decision-making can be guarded against by ensuring AI systems are contestable by design: responsive to human intervention throughout the system lifecycle. Contestable AI by design is a small but growing field of research. However, most available knowledge requires a significant amount of translation to be applicable in practice. A proven way of conveying intermediate-level, generative design knowledge is (...)
  • Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it. Alice Liefgreen, Netta Weinstein, Sandra Wachter & Brent Mittelstadt - 2024 - AI and Society 39 (5):2183-2199.
    Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values (...)
  • In Defence of Principlism in AI Ethics and Governance. Elizabeth Seger - 2022 - Philosophy and Technology 35 (2):1-7.
    It is widely acknowledged that high-level AI principles are difficult to translate into practices via explicit rules and design guidelines. Consequently, many AI research and development groups that claim to adopt ethics principles have been accused of unwarranted “ethics washing”. Accordingly, there remains a question as to if and how high-level principles should be expected to influence the development of safe and beneficial AI. In this short commentary I discuss two roles high-level principles might play in AI ethics and governance. (...)
  • Ethics in the Software Development Process: from Codes of Conduct to Ethical Deliberation. Jan Gogoll, Niina Zuber, Severin Kacianka, Timo Greger, Alexander Pretschner & Julian Nida-Rümelin - 2021 - Philosophy and Technology 34 (4):1085-1108.
    Software systems play an ever more important role in our lives and software engineers and their companies find themselves in a position where they are held responsible for ethical issues that may arise. In this paper, we try to disentangle ethical considerations that can be performed at the level of the software engineer from those that belong in the wider domain of business ethics. The handling of ethical problems that fall into the responsibility of the engineer has traditionally been addressed (...)