  • Analyzing the justification for using generative AI technology to generate judgments based on the virtue jurisprudence theory.Shilun Zhou - 2024 - Journal of Decision Systems 1:1-24.
    This paper responds to the question of whether judgements generated by judges using ChatGPT can be directly adopted. It posits that, on virtue jurisprudence theory, it is unjust for judges to rely on and directly adopt ChatGPT-generated judgements. This paper innovatively applies case-based empirical analysis and is the first to use the virtue jurisprudence approach to analyse the question and support its argument. The first section reveals the use of generative AI-based tools in judicial practice and the existence of erroneous (...)
  • AI through the looking glass: an empirical study of structural social and ethical challenges in AI.Mark Ryan, Nina De Roo, Hao Wang, Vincent Blok & Can Atik - 2024 - AI and Society 1 (1):1-17.
    This paper examines how professionals (N = 32) working on artificial intelligence (AI) view structural AI ethics challenges like injustices and inequalities beyond individual agents' direct intention and control. This paper answers the research question: What are professionals’ perceptions of the structural challenges of AI (in the agri-food sector)? This empirical paper shows that it is essential to broaden the scope of ethics of AI beyond micro- and meso-levels. While ethics guidelines and AI ethics often focus on the responsibility of (...)
  • Towards a Human Rights-Based Approach to Ethical AI Governance in Europe.Linda Hogan & Marta Lasek-Markey - 2024 - Philosophies 9 (6):181.
    As AI-driven solutions continue to revolutionise the tech industry, scholars have rightly cautioned about the risks of ‘ethics washing’. In this paper, we make a case for adopting a human rights-based ethical framework for regulating AI. We argue that human rights frameworks can be regarded as the common denominator between law and ethics and have a crucial role to play in the ethics-based legal governance of AI. This article examines the extent to which human rights-based regulation has been achieved in (...)
  • Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence.Salla Westerstrand - 2024 - Science and Engineering Ethics 30 (5):1-21.
    The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for the future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack ethical justifications. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of western democracies (...)
  • Mapping the landscape of ethical considerations in explainable AI research.Luca Nannini, Marta Marchiori Manerba & Isacco Beretta - 2024 - Ethics and Information Technology 26 (3):1-22.
    With its potential to contribute to the ethical governance of AI, eXplainable AI (XAI) research frequently asserts its relevance to ethical considerations. Yet, the substantiation of these claims with rigorous ethical analysis and reflection remains largely unexamined. This contribution endeavors to scrutinize the relationship between XAI and ethical considerations. By systematically reviewing research papers mentioning ethical terms in XAI frameworks and tools, we investigate the extent and depth of ethical discussions in scholarly research. We observe a limited and often superficial (...)
  • Mapping the Ethics of Generative AI: A Comprehensive Scoping Review.Thilo Hagendorff - 2024 - Minds and Machines 34 (4):1-27.
    The advent of generative artificial intelligence and the widespread adoption of it in society engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them (...)
  • A values-based approach to designing military autonomous systems.Christine Boshuijzen-van Burken, Shannon Spruit, Tom Geijsen & Lotte Fillerup - 2024 - Ethics and Information Technology 26 (3):1-14.
    Our research applies a value sensitive design approach to designing autonomous systems in a military context. Value sensitive design is an iterative process of conceptual, empirical and technical considerations. We enhance value sensitive design with Participatory Value Evaluation. This allows us to mine the values of a large, unorganized stakeholder group relevant to our context of research, namely Australian citizens. We found that value prioritizations differ depending on the context of use and that no one value fits all autonomous systems. General (...)
  • Assessing dual use risks in AI research: necessity, challenges and mitigation strategies.Andreas Brenneis - forthcoming - Research Ethics.
    This article argues that due to the difficulty in governing AI, it is essential to develop measures implemented early in the AI research process. The goal of dual use considerations is to create robust strategies that uphold AI’s integrity while protecting societal interests. The challenges of applying dual use frameworks to AI research are examined and dual use and dual use research of concern (DURC) are defined while highlighting the difficulties in balancing the technology’s benefits and risks. AI’s dual use (...)
  • Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of “Responsibility” in Artificial Intelligence within the Healthcare Context.Sarah Bouhouita-Guermech & Hazar Haidar - 2024 - Asian Bioethics Review 16 (3):315-344.
    The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges prompt various studies proposing frameworks and guidelines to tackle these issues, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, (...)
  • Take five? A coherentist argument why medical AI does not require a new ethical principle.Seppe Segers & Michiel De Proost - 2024 - Theoretical Medicine and Bioethics 45 (5):387-400.
    With the growing application of machine learning models in medicine, principlist bioethics has been put forward as needing revision. This paper reflects on the dominant trope in AI ethics to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of principlism. It specifically suggests that these four principles are sufficient and challenges the relevance of explicability as a separate ethical principle by emphasizing the coherentist affinity of principlism. We argue that, through (...)
  • Moral Engagement and Disengagement in Health Care AI Development.Ariadne A. Nichol, Meghan Halley, Carole Federico, Mildred K. Cho & Pamela L. Sankar - 2024 - AJOB Empirical Bioethics 15 (4):291-300.
    Background Machine learning (ML) is utilized increasingly in health care, and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility to ML developers for mitigating potential harms. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known regarding the perspectives of developers themselves regarding their obligations to mitigate harms.Methods We conducted 40 semi-structured interviews with (...)
  • Cultivating Dignity in Intelligent Systems.Adeniyi Fasoro - 2024 - Philosophies 9 (2):46.
    As artificial intelligence (AI) integrates across social domains, prevailing technical paradigms often overlook human relational needs vital for cooperative resilience. Alternative pathways consciously supporting dignity and wisdom warrant consideration. Integrating seminal insights from virtue and care ethics, this article delineates the following four cardinal design principles prioritizing communal health: (1) affirming the sanctity of life; (2) nurturing healthy attachment; (3) facilitating communal wholeness; and (4) safeguarding societal resilience. Grounding my analysis in the rich traditions of moral philosophy, I argue that (...)
  • Tailoring responsible research and innovation to the translational context: the case of AI-supported exergaming.Sabrina Blank, Celeste Mason, Frank Steinicke & Christian Herzog - 2024 - Ethics and Information Technology 26 (2):1-16.
    We discuss the implementation of Responsible Research and Innovation (RRI) within a project for the development of an AI-supported exergame for assisted movement training, outline outcomes and reflect on methodological opportunities and limitations. We adopted the responsibility-by-design (RbD) standard (CEN CWA 17796:2021) supplemented by methods for collaborative, ethical reflection to foster and support a shift towards a culture of trustworthiness inherent to the entire development process. An embedded ethicist organised the procedure to instantiate a collaborative learning effort and implement RRI (...)
  • What Do We Teach to Engineering Students: Embedded Ethics, Morality, and Politics.Avigail Ferdman & Emanuele Ratti - 2024 - Science and Engineering Ethics 30 (1):1-26.
    In the past few years, calls for integrating ethics modules in engineering curricula have multiplied. Despite this positive trend, a number of issues with these ‘embedded’ programs remain. First, learning goals are underspecified. A second limitation is the conflation of different dimensions under the same banner, in particular confusion between ethics curricula geared towards addressing the ethics of individual conduct and curricula geared towards addressing ethics at the societal level. In this article, we propose a tripartite framework to overcome these (...)
  • AI for crisis decisions.Tina Comes - 2024 - Ethics and Information Technology 26 (1):1-14.
    Increasingly, our cities are confronted with crises. Fuelled by climate change and a loss of biodiversity, increasing inequalities and fragmentation, challenges range from social unrest and outbursts of violence to heatwaves, torrential rainfall, or epidemics. As crises require rapid interventions that overwhelm human decision-making capacity, AI has been portrayed as a potential avenue to support or even automate decision-making. In this paper, I analyse the specific challenges of AI in urban crisis management as an example and test case for many (...)
  • Integrating ethics in AI development: a qualitative study.Laura Arbelaez Ossa, Giorgia Lorenzini, Stephen R. Milford, David Shaw, Bernice S. Elger & Michael Rost - 2024 - BMC Medical Ethics 25 (1):1-11.
    Background While the theoretical benefits and harms of Artificial Intelligence (AI) have been widely discussed in academic literature, empirical evidence remains elusive regarding the practical ethical challenges of developing AI for healthcare. Bridging the gap between theory and practice is an essential step in understanding how to ethically align AI for healthcare. Therefore, this research examines the concerns and challenges perceived by experts in developing ethical AI that addresses the healthcare context and needs. Methods We conducted semi-structured interviews with 41 (...)
  • AI research ethics is in its infancy: the EU’s AI Act can make it a grown-up.Anaïs Resseguier & Fabienne Ufert - 2024 - Research Ethics 20 (2):143-155.
    As the artificial intelligence (AI) ethics field is currently working towards its operationalisation, ethics review as carried out by research ethics committees (RECs) constitutes a powerful, but so far underdeveloped, framework to make AI ethics effective in practice at the research level. This article contributes to the elaboration of research ethics frameworks for research projects developing and/or using AI. It highlights that these frameworks are still in their infancy and in need of a structure and criteria to ensure AI research (...)
  • AI-Based Medical Solutions Can Threaten Physicians’ Ethical Obligations Only If Allowed to Do So.Benjamin Gregg - 2023 - American Journal of Bioethics 23 (9):84-86.
    Mildred Cho and Nicole Martinez-Martin (2023) distinguish between two of the ways in which humans can be represented in medical contexts. One is technical: a digital model of aspects of a person’s...
  • Prospects for Overcoming the Contradictions of the Development of Artificial Intelligence.Dmitry Viktorovich Gluzdov - forthcoming - Philosophy and Culture (Russian Journal).
    The subject of this study is a set of alleged contradictions in the development of artificial intelligence, examined with a view to overcoming them. Philosophical anthropology contains the potential to analyse the complex interactions, and to articulate the problems, that arise between artificial intelligence and humans. The philosophical and anthropological analysis of artificial intelligence is aimed at understanding this human phenomenon, human presence and its experience. The article is an attempt to identify and outline trajectories for the possible resolution of the (...)
  • AI ethics as subordinated innovation network.James Steinhoff - forthcoming - AI and Society:1-13.
    AI ethics is proposed, by the Big Tech companies which lead AI research and development, as the cure for diverse social problems posed by the commercialization of data-intensive technologies. It aims to reconcile capitalist AI production with ethics. However, AI ethics is itself now the subject of wide criticism; most notably, it is accused of being no more than “ethics washing”, a cynical means of dissimulation for Big Tech while it continues its business operations unchanged. This paper aims to critically (...)
  • Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach?Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  • Moral distance, AI, and the ethics of care.Carolina Villegas-Galaviz & Kirsten Martin - forthcoming - AI and Society:1-12.
    This paper investigates how the introduction of AI to decision making increases moral distance and recommends the ethics of care to augment the ethical examination of AI decision making. With AI decision making, face-to-face interactions are minimized, and decisions are part of a more opaque process that humans do not always understand. Within decision-making research, the concept of moral distance is used to explain why individuals behave unethically towards those who are not seen. Moral distance abstracts those who are impacted (...)
  • Hard choices in artificial intelligence.Roel Dobbe, Thomas Krendl Gilbert & Yonatan Mintz - 2021 - Artificial Intelligence 300 (C):103555.
  • Domesticating Artificial Intelligence.Luise Müller - 2022 - Moral Philosophy and Politics 9 (2):219-237.
    For their deployment in human societies to be safe, AI agents need to be aligned with value-laden cooperative human life. One way of solving this “problem of value alignment” is to build moral machines. I argue that the goal of building moral machines aims at the wrong kind of ideal, and that instead, we need an approach to value alignment that takes seriously the categorically different cognitive and moral capabilities between human and AI agents, a condition I call deep agential (...)
  • A Hippocratic Oath for mathematicians? Mapping the landscape of ethics in mathematics.Dennis Müller, Maurice Chiodo & James Franklin - 2022 - Science and Engineering Ethics 28 (5):1-30.
    While the consequences of mathematically-based software, algorithms and strategies have become ever wider and better appreciated, ethical reflection on mathematics has remained primitive. We review the somewhat disconnected suggestions of commentators in recent decades with a view to piecing together a coherent approach to ethics in mathematics. Calls for a Hippocratic Oath for mathematicians are examined and it is concluded that while lessons can be learned from the medical profession, the relation of mathematicians to those affected by their work is (...)
  • Contextual Integrity as a General Conceptual Tool for Evaluating Technological Change.Elizabeth O’Neill - 2022 - Philosophy and Technology 35 (3):1-25.
    The fast pace of technological change necessitates new evaluative and deliberative tools. This article develops a general, functional approach to evaluating technological change, inspired by Nissenbaum’s theory of contextual integrity. Nissenbaum introduced the concept of contextual integrity to help analyze how technological changes can produce privacy problems. Reinterpreted, the concept of contextual integrity can aid our thinking about how technological changes affect the full range of human concerns and values—not only privacy. I propose a generalized concept of contextual integrity that (...)
  • The tragedy of the AI commons.Travis LaCroix & Aydin Mohseni - 2022 - Synthese 200 (4):1-33.
    Policy and guideline proposals for ethical artificial intelligence research have proliferated in recent years. These are supposed to guide the socially responsible development of AI for a common good. However, there typically exist incentives for non-cooperation; and these proposals often lack effective mechanisms to enforce their own normative claims. The situation just described constitutes a social dilemma—namely, a situation where no one has an individual incentive to cooperate, though mutual cooperation would lead to the best outcome for all involved. In (...)
  • Fostering ethical reflection on health data research through co-design: A pilot study.Joanna Sleigh & Julia Amann - 2022 - International Journal of Ethics Education 7 (2):325-342.
    Health research ethics training is highly variable, with some researchers receiving little to none, which is why ethical frameworks represent critical tools for ethical deliberation and guiding responsible practice. However, these documents' voluntary and abstract nature can leave health researchers seeking more operationalised guidance, such as in the form of checklists, even though this approach does not support reflection on the meaning of principles nor their implications. In search of more reflective and participatory practices in a pandemic context with distance (...)
  • A Virtue-Based Framework to Support Putting AI Ethics into Practice.Thilo Hagendorff - 2022 - Philosophy and Technology 35 (3):1-24.
    Many ethics initiatives have stipulated sets of principles and standards for good technology development in the AI sector. However, several AI ethics researchers have pointed out a lack of practical realization of these principles. Following that, AI ethics underwent a practical turn, but without deviating from the principled approach. This paper proposes a complement to the principled approach that is based on virtue ethics. It defines four “basic AI virtues”, namely justice, honesty, responsibility and care, all of which represent specific (...)
  • AI for the public. How public interest theory shifts the discourse on AI.Theresa Züger & Hadi Asghari - 2023 - AI and Society 38 (2):815-828.
    AI for social good is a thriving research topic and a frequently declared goal of AI strategies and regulation. This article investigates the requirements necessary in order for AI to actually serve a public interest, and hence be socially good. The authors propose shifting the focus of the discourse towards democratic governance processes when developing and deploying AI systems. The article draws from the rich history of public interest theory in political philosophy and law, and develops a framework for ‘public (...)
  • Trust and ethics in AI.Hyesun Choung, Prabu David & Arun Ross - 2023 - AI and Society 38 (2):733-745.
    With the growing influence of artificial intelligence (AI) in our lives, the ethical implications of AI have received attention from various communities. Building on previous work on trust in people and technology, we advance a multidimensional, multilevel conceptualization of trust in AI and examine the relationship between trust and ethics using the data from a survey of a national sample in the U.S. This paper offers two key dimensions of trust in AI—human-like trust and functionality trust—and presents a multilevel conceptualization (...)
  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines.Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)
  • Why AI Ethics Is a Critical Theory.Rosalie Waelen - 2022 - Philosophy and Technology 35 (1):1-16.
    The ethics of artificial intelligence is an upcoming field of research that deals with the ethical assessment of emerging AI applications and addresses the new kinds of moral questions that the advent of AI raises. The argument presented in this article is that, even though there exist different approaches and subfields within the ethics of AI, the field resembles a critical theory. Just like a critical theory, the ethics of AI aims to diagnose as well as change society and is (...)
  • Ethics of AI-Enabled Recruiting and Selection: A Review and Research Agenda.Anna Lena Hunkenschroer & Christoph Luetge - 2022 - Journal of Business Ethics 178 (4):977-1007.
    Companies increasingly deploy artificial intelligence technologies in their personnel recruiting and selection process to streamline it, making it faster and more efficient. AI applications can be found in various stages of recruiting, such as writing job ads, screening of applicant resumes, and analyzing video interviews via face recognition software. As these new technologies significantly impact people’s lives and careers but often trigger ethical concerns, the ethicality of these AI applications needs to be comprehensively understood. However, given the novelty of AI (...)
  • Artificial intelligence ethics has a black box problem.Jean-Christophe Bélisle-Pipon, Erica Monteferrante, Marie-Christine Roy & Vincent Couture - 2023 - AI and Society 38 (4):1507-1522.
    It has become a truism that the ethics of artificial intelligence (AI) is necessary and must help guide technological developments. Numerous ethical guidelines have emerged from academia, industry, government and civil society in recent years. While they provide a basis for discussion on appropriate regulation of AI, it is not always clear how these ethical guidelines were developed, and by whom. Using content analysis, we surveyed a sample of the major documents (n = 47) and analyzed the accessible information regarding (...)
  • (1 other version)Artificial virtuous agents: from theory to machine implementation.Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context.Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
  • Companies Committed to Responsible AI: From Principles towards Implementation and Regulation?Paul B. de Laat - 2021 - Philosophy and Technology 34 (4):1135-1193.
    The term ‘responsible AI’ has been coined to denote AI that is fair and non-biased, transparent and explainable, secure and safe, privacy-proof, accountable, and to the benefit of mankind. Since 2016, a great many organizations have pledged allegiance to such principles. Amongst them are 24 AI companies that did so by posting a commitment of the kind on their website and/or by joining the ‘Partnership on AI’. By means of a comprehensive web search, two questions are addressed by this study: (...)
  • Operationalising AI ethics: how are companies bridging the gap between practice and principles? An exploratory study.Javier Camacho Ibáñez & Mónica Villas Olmeda - 2022 - AI and Society 37 (4):1663-1687.
    Despite the increase in the research field of ethics in artificial intelligence, most efforts have focused on the debate about principles and guidelines for responsible AI, but not enough attention has been given to the “how” of applied ethics. This paper aims to advance the research exploring the gap between practice and principles in AI ethics by identifying how companies are applying those guidelines and principles in practice. Through a qualitative methodology based on 22 semi-structured interviews and two focus groups, (...)
  • What does it mean to embed ethics in data science? An integrative approach based on the microethics and virtues.Louise Bezuidenhout & Emanuele Ratti - 2021 - AI and Society 36:939–953.
    In the past few years, scholars have been questioning whether the current approach in data ethics, based on higher-level case studies and general principles, is effective. In particular, some have complained that such an approach to ethics is difficult to apply and to teach in the context of data science. In response to these concerns, there have been discussions about how ethics should be “embedded” in the practice of data science, in the sense of showing (...)
  • Artificial Intelligence, Values, and Alignment.Iason Gabriel - 2020 - Minds and Machines 30 (3):411-437.
    This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements (...)
  • AI Ethics: how can information ethics provide a framework to avoid usual conceptual pitfalls? An Overview.Frédérick Bruneault & Andréane Sabourin Laflamme - forthcoming - AI and Society:1-10.
    Artificial intelligence plays an important role in current discussions on information and communication technologies and new modes of algorithmic governance. It is an unavoidable dimension of what social mediations and modes of reproduction of our information societies will be in the future. While several works in artificial intelligence ethics address ethical issues specific to certain areas of expertise, these ethical reflections often remain confined to narrow areas of application, without considering the global ethical issues in which they are embedded. We, (...)
  • The Ethics of AI Ethics: An Evaluation of Guidelines. Thilo Hagendorff - 2020 - Minds and Machines 30 (1):99-120.
    Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, (...)
  • What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids. Sabine Salloch & Andreas Eriksen - 2024 - American Journal of Bioethics 24 (9):67-78.
    Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as “human in the loop” or “meaningful human control” are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion of how to interpret the (...)
  • Challenges of Responsible AI in Practice: Scoping Review and Recommended Actions. Malak Sadek, Emma Kallina, Thomas Bohné, Céline Mougenot, Rafael A. Calvo & Stephen Cave - forthcoming - AI and Society:1-17.
    Responsible AI (RAI) guidelines aim to ensure that AI systems respect democratic values. While a step in the right direction, they currently fail to impact practice. Our work discusses reasons for this lack of impact and clusters them into five areas: (1) the abstract nature of RAI guidelines, (2) the problem of selecting and reconciling values, (3) the difficulty of operationalising RAI success metrics, (4) the fragmentation of the AI pipeline, and (5) the lack of internal advocacy and accountability. Afterwards, (...)
  • Economics of AI Behavior: Nudging the Digital Minds Toward Greater Societal Benefit. Emre Sezgin - 2024 - AI and Society 39 (6):3031-3032.
  • Ethics of Using Artificial Intelligence (AI) in Veterinary Medicine. Simon Coghlan & Thomas Quinn - 2023 - AI and Society (5):2337-2348.
    This paper provides the first comprehensive analysis of ethical issues raised by artificial intelligence (AI) in veterinary medicine for companion animals. Veterinary medicine is a socially valued service, which, like human medicine, will likely be significantly affected by AI. Veterinary AI raises some unique ethical issues because of the nature of the client–patient–practitioner relationship, society’s relatively minimal valuation and protection of nonhuman animals and differences in opinion about responsibilities to animal patients and human clients. The paper examines how these distinctive (...)
  • More Process, Less Principles: The Ethics of Deploying AI and Robotics in Medicine. Amitabha Palmer & David Schwan - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (1):121-134.
    Current national and international guidelines for the ethical design and development of artificial intelligence (AI) and robotics emphasize ethical theory. Various governing and advisory bodies have generated sets of broad ethical principles, which institutional decisionmakers are encouraged to apply to particular practical decisions. Although much of this literature examines the ethics of designing and developing AI and robotics, medical institutions typically must make purchase and deployment decisions about technologies that have already been designed and developed. The primary problem facing medical (...)
  • Before and Beyond Trust: Reliance in Medical AI. Charalampia Kerasidou, Angeliki Kerasidou, Monika Buscher & Stephen Wilkinson - 2021 - Journal of Medical Ethics 48 (11):852-856.
    Artificial intelligence is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, are contributing to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. As a response to these concerns there has been a concentrated effort from public bodies, policy-makers and technology companies leading the way in AI to address what is identified as a "public trust deficit". This paper argues that a focus on trust (...)
  • The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence. Jake B. Telkamp & Marc H. Anderson - 2022 - Journal of Business Ethics 178 (4):961-976.
    Organizations are making massive investments in artificial intelligence, and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different (...)