  • Just an artifact: why machines are perceived as moral agents.Joanna J. Bryson & Philip P. Kime - manuscript
    How obliged can we be to AI, and how much danger does it pose us? A surprising proportion of our society holds exaggerated fears or hopes for AI, such as the fear of robot world conquest, or the hope that AI will indefinitely perpetuate our culture. These misapprehensions are symptomatic of a larger problem—a confusion about the nature and origins of ethics and its role in society. While AI technologies do pose promises and threats, these are not qualitatively different (...)
  • "I don't trust you, you faker!" On Trust, Reliance, and Artificial Agency.Fabio Fossa - 2019 - Teoria 39 (1):63-80.
    The aim of this paper is to clarify the extent to which relationships between Human Agents (HAs) and Artificial Agents (AAs) can be adequately defined in terms of trust. Since such relationships consist mostly in the allocation of tasks to technological products, particular attention is paid to the notion of delegation. In short, I argue that it would be more accurate to describe direct relationships between HAs and AAs in terms of reliance, rather than in terms of trust. However, as (...)
  • The Future of Transportation: Ethical, Legal, Social and Economic Impacts of Self-driving Vehicles in the Year 2025.Mark Ryan - 2020 - Science and Engineering Ethics 26 (3):1185-1208.
    Self-driving vehicles offer great potential to improve efficiency on roads, reduce traffic accidents, increase productivity, and minimise our environmental impact in the process. However, they have also seen resistance from different groups claiming that they are unsafe, pose a risk of being hacked, will threaten jobs, and increase environmental pollution from increased driving as a result of their convenience. In order to reap the benefits of SDVs, while avoiding some of the many pitfalls, it is important to effectively determine what (...)
  • When Do Robots Have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will.Eddy Nahmias, Corey Allen & Bradley Loveall - 2019 - In Bernard Feltz, Marcus Missal & Andrew Sims (eds.), Free Will, Causality, and Neuroscience. Leiden: Brill.
    While philosophers and scientists sometimes suggest (or take for granted) that consciousness is an essential condition for free will and moral responsibility, there is surprisingly little discussion of why consciousness (and what sorts of conscious experience) is important. We discuss some of the proposals that have been offered. We then discuss our studies using descriptions of humanoid robots to explore people’s attributions of free will and responsibility, of various kinds of conscious sensations and emotions, and of reasoning capacities, and examine (...)
  • Patiency is not a virtue: the design of intelligent systems and systems of ethics.Joanna J. Bryson - 2018 - Ethics and Information Technology 20 (1):15-26.
    The question of whether AI systems such as robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI systems in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the (...)
  • Autonomy and Trust in Bioethics.Onora O'Neill - 2002 - New York: Cambridge University Press.
    Why has autonomy been a leading idea in philosophical writing on bioethics, and why has trust been marginal? In this important book, Onora O'Neill suggests that the conceptions of individual autonomy so widely relied on in bioethics are philosophically and ethically inadequate, and that they undermine rather than support relations of trust. She shows how Kant's non-individualistic view of autonomy provides a stronger basis for an approach to medicine, science and biotechnology, and does not marginalize untrustworthiness, while also explaining why (...)
  • Moral Repair: Reconstructing Moral Relations After Wrongdoing.Margaret Urban Walker - 2006 - Cambridge University Press.
    Moral Repair examines the ethics and moral psychology of responses to wrongdoing. Explaining the emotional bonds and normative expectations that keep human beings responsive to moral standards and responsible to each other, Margaret Urban Walker uses realistic examples of both personal betrayal and political violence to analyze how moral bonds are damaged by serious wrongs and what must be done to repair the damage. Focusing on victims of wrong, their right to validation, and their sense of justice, Walker presents a (...)
  • Can We Make Sense of the Notion of Trustworthy Technology?Philip J. Nickel, Maarten Franssen & Peter Kroes - 2010 - Knowledge, Technology & Policy 23 (3):429-444.
    In this paper we raise the question whether technological artifacts can properly speaking be trusted or said to be trustworthy. First, we set out some prevalent accounts of trust and trustworthiness and explain how they compare with the engineer’s notion of reliability. We distinguish between pure rational-choice accounts of trust, which do not differ in principle from mere judgments of reliability, and what we call “motivation-attributing” accounts of trust, which attribute specific motivations to trustworthy entities. Then we consider some examples (...)
  • The Machine Question: Critical Perspectives on Ai, Robots, and Ethics.David J. Gunkel - 2012 - MIT Press.
    One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Much recent attention has been devoted to the "animal question" -- consideration of the moral status of nonhuman animals. In this book, David Gunkel takes up the "machine question": whether and to what extent intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration. The machine question poses a fundamental (...)
  • What Is Trust?Thomas W. Simpson - 2012 - Pacific Philosophical Quarterly 93 (4):550-569.
    Trust is difficult to define. Instead of doing so, I propose that the best way to understand the concept is through a genealogical account. I show how a root notion of trust arises out of some basic features of what it is for humans to live socially, in which we rely on others to act cooperatively. I explore how this concept acquires resonances of hope and threat, and how we analogically apply this in related but different contexts. The genealogical account (...)
  • Trust and the trickster problem.Zac Cogley - 2012 - Analytic Philosophy 53 (1):30-47.
    In this paper, I articulate and defend a conception of trust that solves what I call “the trickster problem.” The problem results from the fact that many accounts of trust treat it similar to, or identical with, relying on someone’s good will. But a trickster could rely on your good will to get you to go along with his scheme, without trusting you to do so. Recent philosophical accounts of trust aim to characterize what it is for one person to (...)
  • Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. [REVIEW]Mark Coeckelbergh - 2009 - AI and Society 24 (2):181-189.
  • Trust: Reason, Routine, Reflexivity.Guido Möllering - 2006 - Elsevier.
    What makes trust such a powerful concept? Is it merely that in trust the whole range of social forces that we know play together? Or is it that trust involves a peculiar element beyond those we can account for? While trust is an attractive and evocative concept that has gained increasing popularity across the social sciences, it remains elusive, its many facets and applications obscuring a clear overall vision of its essence. In this book, Guido Möllering reviews a broad range (...)
  • Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? [REVIEW]Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
    In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories (...)
  • Trust.Carolyn McLeod - 2020 - Stanford Encyclopedia of Philosophy.
    A summary of the philosophical literature on trust.
  • Trust, hope and empowerment.Victoria McGeer - 2008 - Australasian Journal of Philosophy 86 (2):237–254.
    Philosophers and social scientists have focussed a great deal of attention on our human capacity to trust, but relatively little on the capacity to hope. This is a significant oversight, as hope and trust are importantly interconnected. This paper argues that, even though trust can and does feed our hopes, it is our empowering capacity to hope that significantly underwrites—and makes rational—our capacity to trust.
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. [REVIEW]Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
  • Trust as an affective attitude.Karen Jones - 1996 - Ethics 107 (1):4-25.
  • Deciding to trust, coming to believe.Richard Holton - 1994 - Australasian Journal of Philosophy 72 (1):63–76.
    Can we decide to trust? Sometimes, yes. And when we do, we need not believe that our trust will be vindicated. This paper is motivated by the need to incorporate these facts into an account of trust. Trust involves reliance; and in addition it requires the taking of a reactive attitude to that reliance. I explain how the states involved here differ from belief. And I explore the limits of our ability to trust. I then turn to the idea of (...)
  • Trust and antitrust.Annette Baier - 1986 - Ethics 96 (2):231-260.
  • Trust on the line: a philosophical exploration of trust in the networked era.Esther Keymolen - 2016 - Oisterwijk, Netherlands: Wolf Legal Publishers.
    Governments, companies, and citizens all think trust is important. Especially today, in the networked era, where we make use of all sorts of e-services and increasingly interact and buy online, trust has become a necessary condition for society to thrive. But what do we mean when we talk about trust and how does the rise of the Internet transform the functioning of trust? This books starts off with a thorough conceptual analysis of trust, drawing on insights from - amongst others (...)
  • Agricultural Big Data Analytics and the Ethics of Power.Mark Ryan - 2020 - Journal of Agricultural and Environmental Ethics 33 (1):49-69.
    Agricultural Big Data analytics (ABDA) is being proposed to ensure better farming practices, decision-making, and a sustainable future for humankind. However, the use and adoption of these technologies may bring about potentially undesirable consequences, such as exercises of power. This paper will analyse Brey’s five distinctions of power relationships (manipulative, seductive, leadership, coercive, and forceful power) and apply them to the use of agricultural Big Data. It will be shown that ABDA can be used as a form of manipulative power to (...)
  • How Can I Be Trusted?: A Virtue Theory of Trustworthiness.Nancy Nyquist Potter - 2002 - Rowman & Littlefield Publishers.
    This work examines the concept of trust in the light of virtue theory, and takes our responsibility to be trustworthy as central. Rather than thinking of trust as risk-taking, Potter views it as equally a matter of responsibility-taking. Her work illustrates that relations of trust are never independent from considerations of power, and that asking ourselves what we can do to be trustworthy allows us to move beyond adversarial trust relationships and toward a more democratic, just, and peaceful society.
  • Trust and multi-agent systems: applying the diffuse, default model of trust to experiments involving artificial agents. [REVIEW]Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1):39-51.
    We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to other non-human entities. (...)
  • Can we trust robots?Mark Coeckelbergh - 2012 - Ethics and Information Technology 14 (1):53-60.
    Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis on individual choice and control than the contractarian-individualist approach. In addition, the argument is made that while robots are neither human (...)
  • Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust.Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
    This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust describing its relevance for the contemporary society and then presents a new theoretical analysis of this phenomenon. (...)
  • The ethics of trust.H. J. N. Horsburgh - 1960 - Philosophical Quarterly 10 (41):343-354.
  • Levels of Trust in the Context of Machine Ethics.Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
    Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (Ethics and Information Technology 13:39–51, 2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)
  • Just an artifact: Why machines are perceived as moral agents.Joanna J. Bryson & Philip P. Kime - 2011 - IJCAI Proceedings-International Joint Conference on Artificial Intelligence 22:1641.
  • Computer systems: Moral entities but not moral agents. [REVIEW]Deborah G. Johnson - 2006 - Ethics and Information Technology 8 (4):195-204.
    After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, (...)
  • Simulating rational social normative trust, predictive trust, and predictive reliance between agents.Maj Tuomela & Solveig Hofmann - 2003 - Ethics and Information Technology 5 (3):163-176.
    A program for the simulation of rational social normative trust, predictive `trust,' and predictive reliance between agents will be introduced. It offers a tool for social scientists or a trust component for multi-agent simulations/multi-agent systems, which need to include trust between agents to guide the decisions about the course of action. It is based on an analysis of rational social normative trust (RSNTR) (revised version of M. Tuomela 2002), which is presented and briefly argued. For collective agents, belief conditions for (...)