  • Self-Driving Vehicles—an Ethical Overview. Sven Ove Hansson, Matts-Åke Belin & Björn Lundgren - 2021 - Philosophy and Technology 34 (4):1383-1408.
    The introduction of self-driving vehicles gives rise to a large number of ethical issues that go beyond the common, extremely narrow, focus on improbable dilemma-like scenarios. This article provides a broad overview of realistic ethical issues related to self-driving vehicles. Some of the major topics covered are as follows: Strong opinions for and against driverless cars may give rise to severe social and political conflicts. A low tolerance for accidents caused by driverless vehicles may delay the introduction of driverless systems (...)
  • Organisational responses to the ethical issues of artificial intelligence. Bernd Carsten Stahl, Josephina Antoniou, Mark Ryan, Kevin Macnish & Tilimbe Jiya - 2022 - AI and Society 37 (1):23-37.
    The ethics of artificial intelligence is a widely discussed topic. There are numerous initiatives that aim to develop the principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in our current (...)
  • In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
  • Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications. Mark Ryan & Bernd Carsten Stahl - 2021 - Journal of Information, Communication and Ethics in Society 19 (1):61-86.
    Purpose: The purpose of this paper is to clearly illustrate this convergence and the prescriptive recommendations that such documents entail. There is a significant amount of research into the ethical consequences of artificial intelligence. This is reflected by many outputs across academia, policy and the media. Many of these outputs aim to provide guidance to particular stakeholder groups. It has recently been shown that there is a large degree of convergence in terms of the principles upon which these guidance documents are (...)
  • Is tomorrow’s car appealing today? Ethical issues and user attitudes beyond automation. Darja Vrščaj, Sven Nyholm & Geert P. J. Verbong - 2020 - AI and Society 35 (4):1033-1046.
    The literature on ethics and user attitudes towards AVs discusses user concerns in relation to automation; however, we show that there are additional relevant issues at stake. To assess adolescents’ attitudes regarding the ‘car of the future’ as presented by car manufacturers, we conducted two studies with over 400 participants altogether. We used a mixed methods approach in which we combined qualitative and quantitative methods. In the first study, our respondents appeared to be more concerned about other aspects of AVs (...)
  • Morals, ethics, and the technology capabilities and limitations of automated and self-driving vehicles. Joshua Siegel & Georgios Pappas - 2023 - AI and Society 38 (1):213-226.
    We motivate the desire for self-driving and explain its potential and limitations, and explore the need for—and potential implementation of—morals, ethics, and other value systems as complementary “capabilities” to the Deep Technologies behind self-driving. We consider how the incorporation of such systems may drive or slow adoption of high automation within vehicles. First, we explore the role for morals, ethics, and other value systems in self-driving through a representative hypothetical dilemma faced by a self-driving car. Through the lens of engineering, (...)
  • No wheel but a dial: why and how passengers in self-driving cars should decide how their car drives. Johannes Himmelreich - 2022 - Ethics and Information Technology 24 (4):1-12.
    Much of the debate on the ethics of self-driving cars has revolved around trolley scenarios. This paper instead takes up the political or institutional question of who should decide how a self-driving car drives. Specifically, this paper is on the question of whether and why passengers should be able to control how their car drives. The paper reviews existing arguments—those for passenger ethics settings and for mandatory ethics settings respectively—and argues that they fail. Although the arguments are not successful, they (...)
  • Ethical machine decisions and the input-selection problem. Björn Lundgren - 2021 - Synthese 199 (3-4):11423-11443.
    This article is about the role of factual uncertainty for moral decision-making as it concerns the ethics of machine decision-making. The view defended here is that factual uncertainties require a normative evaluation and that the ethics of machine decision-making faces a triple-edged problem, which concerns what a machine ought to do given its technical constraints, what decisional uncertainty is acceptable, and what trade-offs are acceptable to decrease the decisional uncertainty.
  • Programming Away Human Rights and Responsibilities? “The Moral Machine Experiment” and the Need for a More “Humane” AV Future. Mrinalini Kochupillai, Christoph Lütge & Franziska Poszler - 2020 - NanoEthics 14 (3):285-299.
    Dilemma situations involving the choice of which human life to save in the case of unavoidable accidents are expected to arise only rarely in the context of autonomous vehicles. Nonetheless, the scientific community has devoted significant attention to finding appropriate and acceptable automated decisions in the event that AVs or drivers of AVs were indeed to face such situations. Awad and colleagues, in their now famous paper “The Moral Machine Experiment”, used a “multilingual online ‘serious game’ for collecting large-scale data (...)