  • The Ethics of AI Ethics: An Evaluation of Guidelines. Thilo Hagendorff - 2020 - Minds and Machines 30 (1):99-120.
    Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, (...)
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept. Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)
  • Artificial Intelligence, Values, and Alignment. Iason Gabriel - 2020 - Minds and Machines 30 (3):411-437.
    This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements (...)
  • Tragic Choices and the Virtue of Techno-Responsibility Gaps. John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence. Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • A Cultural Species and its Cognitive Phenotypes: Implications for Philosophy. Joseph Henrich, Damián E. Blasi, Cameron M. Curtin, Helen Elizabeth Davis, Ze Hong, Daniel Kelly & Ivan Kroupin - 2022 - Review of Philosophy and Psychology 14 (2):349-386.
    After introducing the new field of cultural evolution, we review a growing body of empirical evidence suggesting that culture shapes what people attend to, perceive and remember as well as how they think, feel and reason. Focusing on perception, spatial navigation, mentalizing, thinking styles, reasoning (epistemic norms) and language, we discuss not only important variation in these domains, but emphasize that most researchers (including philosophers) and research participants are psychologically peculiar within a global and historical context. This rising tide of (...)
  • Why machines cannot be moral. Robert Sparrow - 2021 - AI and Society (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force of (...)
  • Against “Democratizing AI”. Johannes Himmelreich - 2023 - AI and Society 38 (4):1333-1346.
    This paper argues against the call to democratize artificial intelligence (AI). Several authors demand to reap purported benefits that rest in direct and broad participation: In the governance of AI, more people should be more involved in more decisions about AI—from development and design to deployment. This paper opposes this call. The paper presents five objections against broadening and deepening public participation in the governance of AI. The paper begins by reviewing the literature and carving out a set of claims (...)
  • Artificial Intelligence as a Socratic Assistant for Moral Enhancement. Francisco Lara & Jan Deckers - 2019 - Neuroethics 13 (3):275-287.
    The moral enhancement of human beings is a constant theme in the history of humanity. Today, faced with the threats of a new, globalised world, concern over this matter is more pressing. For this reason, the use of biotechnology to make human beings more moral has been considered. However, this approach is dangerous and very controversial. The purpose of this article is to argue that the use of another new technology, AI, would be preferable to achieve this goal. Whilst several (...)
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context. Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
  • Techno-optimism: an Analysis, an Evaluation and a Modest Defence. John Danaher - 2022 - Philosophy and Technology 35 (2):1-29.
    What is techno-optimism and how can it be defended? Although techno-optimist views are widely espoused and critiqued, there have been few attempts to systematically analyse what it means to be a techno-optimist and how one might defend this view. This paper attempts to address this oversight by providing a comprehensive analysis and evaluation of techno-optimism. It is argued that techno-optimism is a pluralistic stance that comes in weak and strong forms. These vary along a number of key dimensions but each (...)
  • Speciesism and tribalism: Embarrassing origins. François Jaquet - 2022 - Philosophical Studies 179 (3):933-954.
    Animal ethicists have been debating the morality of speciesism for over forty years. Despite rather persuasive arguments against this form of discrimination, many philosophers continue to assign humans a higher moral status than nonhuman animals. The primary source of evidence for this position is our intuition that humans’ interests matter more than the similar interests of other animals. And it must be acknowledged that this intuition is both powerful and widespread. But should we trust it for all that? The present (...)
  • Harm to Nonhuman Animals from AI: a Systematic Account and Framework. Simon Coghlan & Christine Parker - 2023 - Philosophy and Technology 36 (2):1-34.
    This paper provides a systematic account of how artificial intelligence (AI) technologies could harm nonhuman animals and explains why animal harms, often neglected in AI ethics, should be better recognised. After giving reasons for caring about animals and outlining the nature of animal harm, interests, and wellbeing, the paper develops a comprehensive ‘harms framework’ which draws on scientist David Fraser’s influential mapping of human activities that impact on sentient animals. The harms framework is fleshed out with examples inspired by both (...)
  • Experimental Philosophy of Technology. Steven R. Kraaijeveld - 2021 - Philosophy and Technology 34:993-1012.
    Experimental philosophy is a relatively recent discipline that employs experimental methods to investigate the intuitions, concepts, and assumptions behind traditional philosophical arguments, problems, and theories. While experimental philosophy initially served to interrogate the role that intuitions play in philosophy, it has since branched out to bring empirical methods to bear on problems within a variety of traditional areas of philosophy—including metaphysics, philosophy of language, philosophy of mind, and epistemology. To date, no connection has been made between developments in experimental philosophy (...)
  • Design Bioethics: A Theoretical Framework and Argument for Innovation in Bioethics Research. Gabriela Pavarini, Robyn McMillan, Abigail Robinson & Ilina Singh - 2021 - American Journal of Bioethics 21 (6):37-50.
    Empirical research in bioethics has developed rapidly over the past decade, but has largely eschewed the use of technology-driven methodologies. We propose “design bioethics” as an area of conjoined theoretical and methodological innovation in the field, working across bioethics, health sciences and human-centred technological design. We demonstrate the potential of digital tools, particularly purpose-built digital games, to align with theoretical frameworks in bioethics for empirical research, integrating context, narrative and embodiment in moral decision-making. Purpose-built digital tools can engender situated engagement (...)
  • Autonomous Driving and Public Reason: a Rawlsian Approach. Claudia Brändle & Michael W. Schmidt - 2021 - Philosophy and Technology 34 (4):1475-1499.
    In this paper, we argue that solutions to normative challenges associated with autonomous driving, such as real-world trolley cases or distributions of risk in mundane driving situations, face the problem of reasonable pluralism: Reasonable pluralism refers to the fact that there exists a plurality of reasonable yet incompatible comprehensive moral doctrines within liberal democracies. The corresponding problem is that a politically acceptable solution cannot refer to only one of these comprehensive doctrines. Yet a politically adequate solution to the normative challenges (...)
  • Philosophical Investigations into AI Alignment: A Wittgensteinian Framework. José Antonio Pérez-Escobar & Deniz Sarikaya - 2024 - Philosophy and Technology 37 (3):1-25.
    We argue that the later Wittgenstein’s philosophy of language and mathematics, substantially focused on rule-following, is relevant to understand and improve on the Artificial Intelligence (AI) alignment problem: his discussions on the categories that influence alignment between humans can inform about the categories that should be controlled to improve on the alignment problem when creating large data sets to be used by supervised and unsupervised learning algorithms, as well as when introducing hard coded guardrails for AI models. We cast these (...)
  • Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance. Seán S. ÓhÉigeartaigh, Jess Whittlestone, Yang Liu, Yi Zeng & Zhe Liu - 2020 - Philosophy and Technology 33 (4):571-593.
    Achieving the global benefits of artificial intelligence (AI) will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives and priorities. There are many barriers to achieving this at present, including mistrust between cultures, and more practical challenges of coordinating across different locations. This paper focuses particularly on barriers to cooperation between Europe and North America on the one hand and East Asia on the other, as regions which currently have an outsized impact (...)
  • Are the folk utilitarian about animals? Guy Kahane & Lucius Caviola - 2022 - Philosophical Studies 180 (4):1081-1103.
    Robert Nozick famously raised the possibility that there is a sense in which both deontology and utilitarianism are true: deontology applies to humans while utilitarianism applies to animals. In recent years, there has been increasing interest in such hybrid views of ethics. Discussions of this Nozickian Hybrid View, and similar approaches to animal ethics, often assume that such an approach reflects the commonsense view, and best captures common moral intuitions. However, recent psychological work challenges this empirical assumption. We review (...)
  • Safety requirements vs. crashing ethically: what matters most for policies on autonomous vehicles. Björn Lundgren - forthcoming - AI and Society:1-11.
    The philosophical–ethical literature and the public debate on autonomous vehicles have been obsessed with ethical issues related to crashing. In this article, these discussions, including more empirical investigations, will be critically assessed. It is argued that a related and more pressing issue is questions concerning safety. For example, what should we require from autonomous vehicles when it comes to safety? What do we mean by ‘safety’? How do we measure it? In response to these questions, the article will present a (...)
  • Do Automated Vehicles Face Moral Dilemmas? A Plea for a Political Approach. Javier Rodríguez-Alcázar, Lilian Bermejo-Luque & Alberto Molina-Pérez - 2020 - Philosophy and Technology 34:811-832.
    How should automated vehicles (AVs) react in emergency circumstances? Most research projects and scientific literature deal with this question from a moral perspective. In particular, it is customary to treat emergencies involving AVs as instances of moral dilemmas and to use the trolley problem as a framework to address such alleged dilemmas. Some critics have pointed out some shortcomings of this strategy and have urged to focus on mundane traffic situations instead of trolley cases involving AVs. Besides, these authors rightly (...)
  • Disagreement, AI alignment, and bargaining. Harry R. Lloyd - forthcoming - Philosophical Studies:1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of society, and (...)
  • Critically engaging the ethics of AI for a global audience. Samuel T. Segun - 2021 - Ethics and Information Technology 23 (2):99-105.
    This article introduces readers to the special issue on Selected Issues in the Ethics of Artificial Intelligence. In this paper, I make a case for a wider outlook on the ethics of AI. So far, much of the engagements with the subject have come from Euro-American scholars with obvious influences from Western epistemic traditions. I demonstrate that socio-cultural features influence our conceptions of ethics and in this case the ethics of AI. The goal of this special issue is to entertain (...)
  • The epistemic opacity of autonomous systems and the ethical consequences. Mihály Héder - 2023 - AI and Society 38 (5):1819-1827.
    This paper takes stock of all the various factors that cause the design-time opacity of autonomous systems behaviour. The factors include embodiment effects, design-time knowledge gap, human factors, emergent behaviour and tacit knowledge. This situation is contrasted with the usual representation of moral dilemmas that assume perfect information. Since perfect information is not achievable, the traditional moral dilemma representations are not valid and the whole problem of ethical autonomous systems design proves to be way more empirical than previously understood.
  • Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI. Marilyn Giroux, Jungkeun Kim, Jacob C. Lee & Jongwon Park - 2022 - Journal of Business Ethics 178 (4):1027-1041.
    Several technological developments, such as self-service technologies and artificial intelligence, are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns (...)
  • Autonomous Driving Ethics: from Trolley Problem to Ethics of Risk. Maximilian Geisslinger, Franziska Poszler, Johannes Betz, Christoph Lütge & Markus Lienkamp - 2021 - Philosophy and Technology 34 (4):1033-1055.
    In 2017, the German ethics commission for automated and connected driving released 20 ethical guidelines for autonomous vehicles. It is now up to the research and industrial sectors to enhance the development of autonomous vehicles based on such guidelines. In the current state of the art, we find studies on how ethical theories can be integrated. To the best of the authors’ knowledge, no framework for motion planning has yet been published which allows for the true implementation of any practical (...)
  • Turning the trolley with reflective equilibrium. Tanja Rechnitzer - 2022 - Synthese 200 (4):1-28.
    Reflective equilibrium (RE)—the idea that we have to justify our judgments and principles through a process of mutual adjustment—is taken to be a central method in philosophy. Nonetheless, conceptions of RE often stay sketchy, and there is a striking lack of explicit and traceable applications of it. This paper presents an explicit case study for the application of an elaborate RE conception. RE is used to reconstruct the arguments from Thomson’s paper “Turning the Trolley” for why a bystander must not (...)
  • People's judgments of humans and robots in a classic moral dilemma. Bertram F. Malle, Matthias Scheutz, Corey Cusimano, John Voiklis, Takanori Komatsu, Stuti Thapa & Salomi Aladia - 2025 - Cognition 254 (C):105958.
  • Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles. Veljko Dubljević - 2020 - Science and Engineering Ethics 26 (5):2461-2472.
    Autonomous vehicles (AVs)—and accidents they are involved in—attest to the urgent need to consider the ethics of artificial intelligence. The question dominating the discussion so far has been whether we want AVs to behave in a ‘selfish’ or utilitarian manner. Rather than considering modeling self-driving cars on a single moral system like utilitarianism, one possible way to approach programming for AI would be to reflect recent work in neuroethics. The agent–deed–consequence (ADC) model (Dubljević and Racine 2014a, 2014b) provides a (...)
  • Solving the Single-Vehicle Self-Driving Car Trolley Problem Using Risk Theory and Vehicle Dynamics. Rebecca Davnall - 2020 - Science and Engineering Ethics 26 (1):431-449.
    Questions of what a self-driving car ought to do if it encounters a situation analogous to the ‘trolley problem’ have dominated recent discussion of the ethics of self-driving cars. This paper argues that this interest is misplaced. If a trolley-style dilemma situation actually occurs, given the limits on what information will be available to the car, the dynamics of braking and tyre traction determine that, irrespective of outcome, it is always least risky for the car to brake in a straight (...)
  • Moral dynamics: Grounding moral judgment in intuitive physics and intuitive psychology. Felix A. Sosa, Tomer Ullman, Joshua B. Tenenbaum, Samuel J. Gershman & Tobias Gerstenberg - 2021 - Cognition 217 (C):104890.
  • Ethical Decision Making in Autonomous Vehicles: The AV Ethics Project. Katherine Evans, Nelson de Moura, Stéphane Chauvier, Raja Chatila & Ebru Dogan - 2020 - Science and Engineering Ethics 26 (6):3285-3312.
    The ethics of autonomous vehicles has received a great amount of attention in recent years, specifically in regard to their decisional policies in accident situations in which human harm is a likely consequence. Starting from the assumption that human harm is unavoidable, many authors have developed differing accounts of what morality requires in these situations. In this article, a strategy for AV decision-making is proposed, the Ethical Valence Theory, which paints AV decision-making as a type of claim mitigation: different road (...)
  • The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence. Jake B. Telkamp & Marc H. Anderson - 2022 - Journal of Business Ethics 178 (4):961-976.
    Organizations are making massive investments in artificial intelligence, and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different (...)
  • Applying a principle of explicability to AI research in Africa: should we do it? Mary Carman & Benjamin Rosman - 2020 - Ethics and Information Technology 23 (2):107-117.
    Developing and implementing artificial intelligence (AI) systems in an ethical manner faces several challenges specific to the kind of technology at hand, including ensuring that decision-making systems making use of machine learning are just, fair, and intelligible, and are aligned with our human values. Given that values vary across cultures, an additional ethical challenge is to ensure that these AI systems are not developed according to some unquestioned but questionable assumption of universal norms but are in fact compatible with the (...)
  • Moral disagreement and artificial intelligence. Pamela Robinson - 2024 - AI and Society 39 (5):2425-2438.
    Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. Moral solutions apply a moral theory or related principles and largely ignore the details of the disagreement. (...)
  • Of trolleys and self-driving cars: What machine ethicists can and cannot learn from trolleyology. Peter Königs - 2023 - Utilitas 35 (1):70-87.
    Crashes involving self-driving cars at least superficially resemble trolley dilemmas. This article discusses what lessons machine ethicists working on the ethics of self-driving cars can learn from trolleyology. The article proceeds by providing an account of the trolley problem as a paradox and by distinguishing two types of solutions to the trolley problem. According to an optimistic solution, our case intuitions about trolley dilemmas are responding to morally relevant differences. The pessimistic solution denies that this is the case. An optimistic (...)
  • Ethical dilemmas are really important to potential adopters of autonomous vehicles. Tripat Gill - 2021 - Ethics and Information Technology 23 (4):657-673.
    The ethical dilemma (ED) of whether autonomous vehicles (AVs) should protect the passengers or pedestrians when harm is unavoidable has been widely researched and debated. Several behavioral scientists have sought public opinion on this issue, based on the premise that EDs are critical to resolve for AV adoption. However, many scholars and industry participants have downplayed the importance of these edge cases. Policy makers also advocate a focus on higher level ethical principles rather than on a specific solution to EDs. But conspicuously (...)
  • AI led ethical digital transformation: framework, research and managerial implications. Kumar Saurabh, Ridhi Arora, Neelam Rani, Debasisha Mishra & M. Ramkumar - 2022 - Journal of Information, Communication and Ethics in Society 20 (2):229-256.
    Purpose: Digital transformation (DT) leverages digital technologies to change current processes and introduce new processes in any organisation’s business model, customer/user experience and operational processes. Artificial intelligence (AI) plays a significant role in achieving DT. As DT is touching each sphere of humanity, AI-led DT is raising many fundamental questions. These questions raise concerns for the systems deployed, how they should behave, what risks they carry, the monitoring and evaluation control we have in hand, etc. These issues call for the need (...)
  • Human–Autonomy Teaming: Definitions, Debates, and Directions. Joseph B. Lyons, Katia Sycara, Michael Lewis & August Capiola - 2021 - Frontiers in Psychology 12:589585.
    Researchers are beginning to transition from studying human–automation interaction to human–autonomy teaming. This distinction has been highlighted in recent literature, and theoretical reasons why the psychological experience of humans interacting with autonomy may vary and affect subsequent collaboration outcomes are beginning to emerge (de Visser et al., 2018; Wynne and Lyons, 2018). In this review, we do a deep dive into human–autonomy teams (HATs) by explaining the differences between automation and autonomy and by reviewing the domain of human–human teaming to make (...)
  • Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective. Simon Burton, Ibrahim Habli, Tom Lawton, John McDermid, Phillip Morgan & Zoe Porter - 2020 - Artificial Intelligence 279 (C):103201.
  • Morals, ethics, and the technology capabilities and limitations of automated and self-driving vehicles. Joshua Siegel & Georgios Pappas - 2023 - AI and Society 38 (1):213-226.
    We motivate the desire for self-driving and explain its potential and limitations, and explore the need for—and potential implementation of—morals, ethics, and other value systems as complementary “capabilities” to the Deep Technologies behind self-driving. We consider how the incorporation of such systems may drive or slow adoption of high automation within vehicles. First, we explore the role for morals, ethics, and other value systems in self-driving through a representative hypothetical dilemma faced by a self-driving car. Through the lens of engineering, (...)
  • Machines and humans in sacrificial moral dilemmas: Required similarly but judged differently? Yueying Chu & Peng Liu - 2023 - Cognition 239 (C):105575.
  • Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4. Michael Balas, Jordan Joseph Wadden, Philip C. Hébert, Eric Mathison, Marika D. Warren, Victoria Seavilleklein, Daniel Wyzynski, Alison Callahan, Sean A. Crawford, Parnian Arjmand & Edsel B. Ing - 2024 - Journal of Medical Ethics 50 (2):90-96.
    Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to eight (...)
  • Keeping the organization in the loop: a socio-technical extension of human-centered artificial intelligence. Thomas Herrmann & Sabine Pfeiffer - forthcoming - AI and Society:1-20.
    The human-centered AI approach posits a future in which the work done by humans and machines will become ever more interactive and integrated. This article takes human-centered AI one step further. It argues that the integration of human and machine intelligence is achievable only if human organizations—not just individual human workers—are kept “in the loop.” We support this argument with evidence of two case studies in the area of predictive maintenance, by which we show how organizational practices are needed and (...)
  • Embedding artificial intelligence in society: looking beyond the EU AI master plan using the culture cycle. Simone Borsci, Ville V. Lehtola, Francesco Nex, Michael Ying Yang, Ellen-Wien Augustijn, Leila Bagheriye, Christoph Brune, Ourania Kounadi, Jamy Li, Joao Moreira, Joanne Van Der Nagel, Bernard Veldkamp, Duc V. Le, Mingshu Wang, Fons Wijnhoven, Jelmer M. Wolterink & Raul Zurita-Milla - forthcoming - AI and Society:1-20.
    The European Union Commission’s whitepaper on Artificial Intelligence proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the masterplan, from a culture cycle perspective, to reflect on its potential clashes with current societal, technical, and methodological constraints. We identify two main obstacles in the implementation of this plan: the lack of a coherent EU vision to drive future (...)
  • Autonomous vehicles: How perspective-taking accessibility alters moral judgments and consumer purchasing behavior. Rose Martin, Petko Kusev & Paul van Schaik - 2021 - Cognition 212 (C):104666.
  • Computer Says I Don't Know: An Empirical Approach to Capture Moral Uncertainty in Artificial Intelligence. Andreia Martinho, Maarten Kroesen & Caspar Chorus - 2021 - Minds and Machines 31 (2):215-237.
    As AI Systems become increasingly autonomous, they are expected to engage in decision-making processes that have moral implications. In this research we integrate theoretical and empirical lines of thought to address the matters of moral reasoning and moral uncertainty in AI Systems. We reconceptualize the metanormative framework for decision-making under moral uncertainty and we operationalize it through a latent class choice model. The core idea being that moral heterogeneity in society can be codified in terms of a small number of (...)
  • Ethical Decision-Making for Self-Driving Vehicles: A Proposed Model & List of Value-Laden Terms that Warrant (Technical) Specification. Franziska Poszler, Maximilian Geisslinger & Christoph Lütge - 2024 - Science and Engineering Ethics 30 (5):1-31.
    Self-driving vehicles (SDVs) will need to make decisions that carry ethical dimensions and are of normative significance. For example, by choosing a specific trajectory, they determine how risks are distributed among traffic participants. Accordingly, policymakers, standardization organizations and scholars have conceptualized what (shall) constitute(s) ethical decision-making for SDVs. Eventually, these conceptualizations must be converted into specific system requirements to ensure proper technical implementation. Therefore, this article aims to translate critical requirements recently formulated in scholarly work, existing standards, regulatory drafts and (...)
  • Ethics of Self-driving Cars: A Naturalistic Approach. Selene Arfini, Davide Spinelli & Daniele Chiffi - 2022 - Minds and Machines 32 (4):717-734.
    The potential development of self-driving cars (also known as autonomous vehicles or AVs – particularly Level 5 AVs) has called the attention of different interested parties. Yet, there are still only a few relevant international regulations on them, no emergency patterns accepted by communities and Original Equipment Manufacturers (OEMs), and no publicly accepted solutions to some of their pending ethical problems. Thus, this paper aims to provide some possible answers to these moral and practical dilemmas. In particular, we focus on (...)