  • As-if trust. Michael K. MacKenzie & Alfred Moore - forthcoming - Critical Review of International Social and Political Philosophy.
    A lot of what we understand to be trust is not trust; it is, instead, an active and conscious decision to feign trust. We call this ‘as-if’ trust. If trust involves taking on risks and vulnerabilities, as-if trust involves taking on surplus risks and vulnerabilities. People may decide to act as if they trust in many situations, even when they do not have sufficient warrant to trust – which is to say even when they do not trust. Likewise, people might (...)
  • Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI and thus (...)
  • AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony. Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting between Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  • Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem. Juri Viehoff - 2023 - Philosophy and Technology 36 (4):1-29.
    Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. After setting up a (...)
  • Medical AI: is trust really the issue? Jakob Thrane Mainz - 2024 - Journal of Medical Ethics 50 (5):349-350.
    I discuss an influential argument put forward by Hatherley in the Journal of Medical Ethics. Drawing on influential philosophical accounts of interpersonal trust, Hatherley claims that medical artificial intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician for her to do what she is entrusted to do. I make three objections (...)
  • Can robots be trustworthy? Ines Schröder, Oliver Müller, Helena Scholl, Shelly Levy-Tzedek & Philipp Kellmeyer - 2023 - Ethik in der Medizin 35 (2):221-246.
    Definition of the problem: This article critically addresses the conceptualization of trust in the ethical discussion on artificial intelligence (AI) in the specific context of social robots in care. First, we attempt to define in which respect we can speak of ‘social’ robots and how their ‘social affordances’ affect the human propensity to trust in human–robot interaction. Against this background, we examine the use of the concept of ‘trust’ and ‘trustworthiness’ with respect to the guidelines and recommendations of the High-Level (...)
  • Analysis of Beliefs Acquired from a Conversational AI: Instruments-based Beliefs, Testimony-based Beliefs, and Technology-based Beliefs. Ori Freiman - 2024 - Episteme 21 (3):1031-1047.
    Speaking with conversational AIs, technologies whose interfaces enable human-like interaction based on natural language, has become a common phenomenon. During these interactions, people form their beliefs due to the say-so of conversational AIs. In this paper, I consider, and then reject, the concepts of testimony-based beliefs and instrument-based beliefs as suitable for analysis of beliefs acquired from these technologies. I argue that the concept of instrument-based beliefs acknowledges the non-human agency of the source of the belief. However, the analysis focuses (...)
  • Artificial Agency and the Game of Semantic Extension. Fossa Fabio - 2021 - Interdisciplinary Science Reviews 46 (4):440-457.
    Artificial agents are commonly described by using words that traditionally belong to the semantic field of organisms, particularly of animal and human life. I call this phenomenon the game of semantic extension. However, the semantic extension of words as crucial as “autonomous”, “intelligent”, “creative”, “moral”, and so on, is often perceived as unsatisfactory, which is signalled with the extensive use of inverted commas or other syntactical cues. Such practice, in turn, has provoked harsh criticism that usually refers back to the (...)
  • What does it mean to trust blockchain technology? Yan Teng - 2022 - Metaphilosophy 54 (1):145-160.
    This paper argues that the widespread belief that interactions between blockchains and their users are trust-free is inaccurate and misleading, since this belief not only overlooks the vital role played by trust in the lack of knowledge and control but also conceals the moral and normative relevance of relying on blockchain applications. The paper reaches this argument by providing a close philosophical examination of the concept referred to as trust in blockchain technology, clarifying the trustor group, the structure, and the (...)
  • Making Sense of the Conceptual Nonsense 'Trustworthy AI'. Ori Freiman - 2022 - AI and Ethics 4.
    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms about 'Trustworthy AI' and the consequence of ignoring (...)
  • (E)‐Trust and Its Function: Why We Shouldn't Apply Trust and Trustworthiness to Human–AI Relations. Pepijn Al - 2023 - Journal of Applied Philosophy 40 (1):95-108.
    With an increasing use of artificial intelligence (AI) systems, theorists have analyzed and argued for the promotion of trust in AI and trustworthy AI. Critics have objected that AI does not have the characteristics to be an appropriate subject for trust. However, this argumentation is open to counterarguments. Firstly, rejecting trust in AI denies the trust attitudes that some people experience. Secondly, we can trust other non‐human entities, such as animals and institutions, so why can we not trust AI systems? (...)
  • Philosophical evaluation of the conceptualisation of trust in the NHS Code of Conduct for artificial intelligence-driven technology. Soogeun Samuel Lee - 2022 - Journal of Medical Ethics 48 (4):272-277.
    The UK Government’s Code of Conduct for data-driven health and care technologies, specifically artificial intelligence -driven technologies, comprises 10 principles that outline a gold-standard of ethical conduct for AI developers and implementers within the National Health Service. Considering the importance of trust in medicine, in this essay I aim to evaluate the conceptualisation of trust within this piece of ethical governance. I examine the Code of Conduct, specifically Principle 7, and extract two positions: a principle of rationally justified trust that (...)
  • Trust in technology: interlocking trust concepts for privacy respecting video surveillance. Sebastian Weydner-Volkmann & Linus Feiten - 2021 - Journal of Information, Communication and Ethics in Society 19 (4):506-520.
    Purpose: The purpose of this paper is to defend the notion of “trust in technology” against the philosophical view that this concept is misled and unsuitable for ethical evaluation. In contrast, it is shown that “trustworthy technology” addresses a critical societal need in the digital age as it is inclusive of IT-security risks not only from a technical but also from a public layperson perspective. Design/methodology/approach: From an interdisciplinary perspective between philosophy and IT-security, the authors discuss a potential instantiation of a (...)
  • Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Melvin Chen - 2021 - Philosophy and Technology 34 (4):1429-1447.
    In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will suggest how trust-engineering, a (...)
  • The ethics of information warfare. Luciano Floridi & Mariarosaria Taddeo (eds.) - 2014 - Springer International Publishing.
    This book offers an overview of the ethical problems posed by Information Warfare, and of the different approaches and methods used to solve them, in order to provide the reader with a better grasp of the ethical conundrums posed by this new form of warfare. The volume is divided into three parts, each comprising four chapters. The first part focuses on issues pertaining to the concept of Information Warfare and the clarifications that need to be made in order to (...)
  • Moral difference between humans and robots: paternalism and human-relative reason. Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  • A Leap of Faith: Is There a Formula for “Trustworthy” AI? Matthias Braun, Hannah Bleher & Patrik Hummel - 2021 - Hastings Center Report 51 (3):17-22.
    Trust is one of the big buzzwords in debates about the shaping of society, democracy, and emerging technologies. For example, one prominent idea put forward by the High‐Level Expert Group on Artificial Intelligence appointed by the European Commission is that artificial intelligence should be trustworthy. In this essay, we explore the notion of trust and argue that both proponents and critics of trustworthy AI have flawed pictures of the nature of trust. We develop an approach to understanding trust in AI (...)
  • Trust in engineering. Philip J. Nickel - 2021 - In Diane P. Michelfelder & Neelke Doorn (eds.), Routledge Handbook of Philosophy of Engineering. Taylor & Francis Ltd. pp. 494-505.
    Engineers are traditionally regarded as trustworthy professionals who meet exacting standards. In this chapter I begin by explicating our trust relationship towards engineers, arguing that it is a linear but indirect relationship in which engineers “stand behind” the artifacts and technological systems that we rely on directly. The chapter goes on to explain how this relationship has become more complex as engineers have taken on two additional aims: the aim of social engineering to create and steer trust between people, and (...)
  • In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
  • A Typology of Posthumanism: A Framework for Differentiating Analytic, Synthetic, Theoretical, and Practical Posthumanisms. Matthew E. Gladden - 2016 - In Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization. Defragmenter Media. pp. 31-91.
    The term ‘posthumanism’ has been employed to describe a diverse array of phenomena ranging from academic disciplines and artistic movements to political advocacy campaigns and the development of commercial technologies. Such phenomena differ widely in their subject matter, purpose, and methodology, raising the question of whether it is possible to fashion a coherent definition of posthumanism that encompasses all phenomena thus labelled. In this text, we seek to bring greater clarity to this discussion by formulating a novel conceptual framework for (...)
  • Trust. Carolyn McLeod - 2020 - Stanford Encyclopedia of Philosophy.
    A summary of the philosophical literature on trust.
  • Adaptable robots, ethics, and trust: a qualitative and philosophical exploration of the individual experience of trustworthy AI. Stephanie Sheir, Arianna Manzini, Helen Smith & Jonathan Ives - forthcoming - AI and Society:1-14.
    Much has been written about the need for trustworthy artificial intelligence (AI), but the underlying meaning of trust and trustworthiness can vary or be used in confusing ways. It is not always clear whether individuals are speaking of a technology’s trustworthiness, a developer’s trustworthiness, or simply of gaining the trust of users by any means. In sociotechnical circles, trustworthiness is often used as a proxy for ‘the good’, illustrating the moral heights to which technologies and developers ought to aspire, at (...)
  • Towards trustworthy blockchains: normative reflections on blockchain-enabled virtual institutions. Yan Teng - 2021 - Ethics and Information Technology 23 (3):385-397.
    This paper proposes a novel way to understand trust in blockchain technology by analogy with trust placed in institutions. In support of the analysis, a detailed investigation of institutional trust is provided, which is then used as the basis for understanding the nature and ethical limits of blockchain trust. Two interrelated arguments are presented. First, given blockchains’ capacity for being institution-like entities by inviting expectations similar to those invited by traditional institutions, blockchain trust is argued to be best conceptualized as (...)
  • Transformative Choice, Practical Reasons and Trust. Rob Compaijen - 2018 - International Journal of Philosophical Studies 26 (2):275-292.
    In this article I reflect on the question of whether we can have reason to make transformative choices. In attempting to answer it, I do three things. First, I bring forward an internalist account of practical reasons which entails the idea that agents should deliberate to the best of their ability. Second, I discuss L.A. Paul’s views on transformative choice, arguing that, although they present a real problem, the problem is not as profound as she believes it is. Third, I (...)
  • Skillful coping with and through technologies. Mark Coeckelbergh - 2019 - AI and Society 34 (2):269-287.
    Dreyfus’s work is widely known for its critique of artificial intelligence and still stands as an example of how to do excellent philosophical work that is at the same time relevant to contemporary technological and scientific developments. But for philosophers of technology, especially for those sympathetic to using Heidegger, Merleau-Ponty, and Wittgenstein as sources of inspiration, it has much more to offer. This paper outlines Dreyfus’s account of skillful coping and critically evaluates its potential for thinking about technology. First, it (...)
  • How the EU AI Act Seeks to Establish an Epistemic Environment of Trust. Calvin Wai-Loon Ho & Karel Caals - 2024 - Asian Bioethics Review 16 (3):345-372.
    With focus on the development and use of artificial intelligence (AI) systems in the digital health context, we consider the following questions: How does the European Union (EU) seek to facilitate the development and uptake of trustworthy AI systems through the AI Act? What does trustworthiness and trust mean in the AI Act, and how are they linked to some of the ongoing discussions of these terms in bioethics, law, and philosophy? What are the normative components of trustworthiness? And how (...)
  • La libertad como el punto de encuentro para la construcción de la confianza en las relaciones humanas [Freedom as the meeting point for building trust in human relationships]. Carlos Vargas-González & Iván-Darío Toro-Jaramillo - 2021 - Isegoría 65:09-09.
    This paper proposes freedom as the condition of possibility for the construction of trust in human relationships. The methodology used is a review of the scientific literature of the most recent moral and political philosophy. As a result of the dialogue between different positions, it is discovered that freedom, despite being present in the act of trust, is forgotten in the discussion around trust, a forgetfulness that has as its main causes the assumption that trust is natural and the confusion (...)
  • Intentional machines: A defence of trust in medical artificial intelligence. Georg Starke, Rik van den Brule, Bernice Simone Elger & Pim Haselager - 2021 - Bioethics 36 (2):154-161.
    Trust constitutes a fundamental strategy to deal with risks and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor–patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet, this approach has come under charge from different angles. At least two lines of thought can be distinguished: (1) that trusting AI is conceptually confused, that is, that we cannot trust AI; and (2) (...)
  • Organizational Posthumanism. Matthew E. Gladden - 2016 - In Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization. Defragmenter Media. pp. 93-131.
    Building on existing forms of critical, cultural, biopolitical, and sociopolitical posthumanism, in this text a new framework is developed for understanding and guiding the forces of technologization and posthumanization that are reshaping contemporary organizations. This ‘organizational posthumanism’ is an approach to analyzing, creating, and managing organizations that employs a post-dualistic and post-anthropocentric perspective and which recognizes that emerging technologies will increasingly transform the kinds of members, structures, systems, processes, physical and virtual spaces, and external ecosystems that are available for organizations (...)
  • Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence. Georg Starke & Marcello Ienca - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):360-369.
    Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance, deep learning-based applications using multilayered Artificial Neural Nets, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet, in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talking about trust (...)
  • The Implementation of Ethical Decision Procedures in Autonomous Systems: The Case of the Autonomous Vehicle. Katherine Evans - 2021 - Dissertation, Sorbonne Université.
    The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. A great deal of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with which they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring (...)
  • Can a Robot Be a Good Colleague? Sven Nyholm & Jilles Smids - 2020 - Science and Engineering Ethics 26 (4):2169-2188.
    This paper discusses the robotization of the workplace, and particularly the question of whether robots can be good colleagues. This might appear to be a strange question at first glance, but it is worth asking for two reasons. Firstly, some people already treat robots they work alongside as if the robots are valuable colleagues. It is worth reflecting on whether such people are making a mistake. Secondly, having good colleagues is widely regarded as a key aspect of what can make (...)
  • Autonomous Systems in Society and War: Philosophical Inquiries. Linda Johansson - 2013 - Dissertation, Royal Institute of Technology, Stockholm.
    The overall aim of this thesis is to look at some philosophical issues surrounding autonomous systems in society and war. These issues can be divided into three main categories. The first, discussed in papers I and II, concerns ethical issues surrounding the use of autonomous systems – where the focus in this thesis is on military robots. The second issue, discussed in paper III, concerns how to make sure that advanced robots behave ethically adequate. The third issue, discussed in papers (...)
  • Trust in farm data sharing: reflections on the EU code of conduct for agricultural data sharing. Simone van der Burg, Leanne Wiseman & Jovana Krkeljas - 2020 - Ethics and Information Technology 23 (3):185-198.
    Digital farming technologies promise to help farmers make well-informed decisions that improve the quality and quantity of their production, with less labour and less impact on the environment. This future, however, can only become a reality if farmers are willing to share their data with agribusinesses that develop digital technologies. To foster trust in data sharing, in Europe the EU Code of Conduct for agricultural data sharing by contractual agreement was launched in 2018 which encourages transparency about data use. This (...)
  • Personalized medicine, digital technology and trust: a Kantian account. Bjørn K. Myskja & Kristin S. Steinsbekk - 2020 - Medicine, Health Care and Philosophy 23 (4):577-587.
    Trust relations in the health services have changed from asymmetrical paternalism to symmetrical autonomy-based participation, according to a common account. The promises of personalized medicine emphasizing empowerment of the individual through active participation in managing her health, disease and well-being, is characteristic of symmetrical trust. In the influential Kantian account of autonomy, active participation in management of own health is not only an opportunity, but an obligation. Personalized medicine is made possible by the digitalization of medicine with an ensuing increased (...)
  • Responsible domestic robotics: exploring ethical implications of robots in the home. Lachlan Urquhart, Dominic Reedman-Flint & Natalie Leesakul - forthcoming - Journal of Information, Communication and Ethics in Society.
  • A Survey of Expectations About the Role of Robots in Robot-Assisted Therapy for Children with ASD: Ethical Acceptability, Trust, Sociability, Appearance, and Attachment. Mark Coeckelbergh, Cristina Pop, Ramona Simut, Andreea Peca, Sebastian Pintea, Daniel David & Bram Vanderborght - 2016 - Science and Engineering Ethics 22 (1):47-65.
    The use of robots in therapy for children with autism spectrum disorder raises issues concerning the ethical and social acceptability of this technology and, more generally, about human–robot interaction. However, usually philosophical papers on the ethics of human–robot-interaction do not take into account stakeholders’ views; yet it is important to involve stakeholders in order to render the research responsive to concerns within the autism and autism therapy community. To support responsible research and innovation in this field, this paper identifies a (...)
  • Levels of Trust in the Context of Machine Ethics. Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
    Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (2011, pp. 39–51), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)
  • Trust and resilient autonomous driving systems. Adam Henschke - 2020 - Ethics and Information Technology 22 (1):81-92.
    Autonomous vehicles, and the larger socio-technical systems that they are a part of are likely to have a deep and lasting impact on our societies. Trust is a key value that will play a role in the development of autonomous driving systems. This paper suggests that trust of autonomous driving systems will impact the ways that these systems are taken up, the norms and laws that guide them and the design of the systems themselves. Further to this, in order to (...)
  • Evil and roboethics in management studies. Enrico Beltramini - 2019 - AI and Society 34 (4):921-929.
    In this article, I address the issue of evil and roboethics in the context of management studies and suggest that management scholars should locate evil in the realm of the human rather than of the artificial. After discussing the possibility of addressing the reality of evil machines in ontological terms, I explore users’ reaction to robots in a social context. I conclude that the issue of evil machines in management is more precisely a case of technology anthropomorphization.