  • Technology and the character of contemporary life: a philosophical inquiry. Albert Borgmann - 1984 - Chicago: University of Chicago Press.
    Blending social analysis and philosophy, Albert Borgmann maintains that technology creates a controlling pattern in our lives.
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
  • Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability. Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or (...)
  • Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Shannon Vallor - 2016 - New York, NY: Oxford University Press USA.
    New technologies, from artificial intelligence to drones and biomedical enhancement, make the future of the human family increasingly hard to predict and protect. This book explores how the philosophical tradition of virtue ethics can help us to cultivate the moral wisdom we need to live wisely and well with emerging technologies.
  • Agency and answerability: selected essays. Gary Watson - 2004 - New York: Oxford University Press.
    Since the 1970s Gary Watson has published a series of brilliant and highly influential essays on human action, examining such questions as: in what ways are we free and not free, rational and irrational, responsible or not for what we do? Moral philosophers and philosophers of action will welcome this collection, representing one of the most important bodies of work in the field.
  • Freedom and Resentment. Peter Strawson - 1993 - In John Martin Fischer & Mark Ravizza (eds.), Perspectives on moral responsibility. Ithaca, NY: Cornell University Press. pp. 1-25.
  • The Threat of Algocracy: Reality, Resistance and Accommodation. John Danaher - 2016 - Philosophy and Technology 29 (3):245-268.
    One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic governance does pose a significant threat (...)
  • The rise of the robots and the crisis of moral patiency. John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, (...)
  • Robots, Law and the Retribution Gap. John Danaher - 2016 - Ethics and Information Technology 18 (4):299–309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • Automation and Utopia: Human Flourishing in an Age Without Work. John Danaher - 2019 - Cambridge, MA: Harvard University Press.
    Human obsolescence is imminent. We are living through an era in which our activity is becoming less and less relevant to our well-being and to the fate of our planet. This trend toward increased obsolescence is likely to continue in the future, and we must do our best to prepare ourselves and our societies for this reality. Far from being a cause for despair, this is in fact an opportunity for optimism. Harnessed in the right way, the technology that hastens (...)
  • Moral appearances: emotions, robots, and human morality. Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):235-241.
    Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • Patiency is not a virtue: the design of intelligent systems and systems of ethics. Joanna J. Bryson - 2018 - Ethics and Information Technology 20 (1):15-26.
    The question of whether AI systems such as robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI systems in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the (...)
  • The Value of Achievements. Gwen Bradford - 2013 - Pacific Philosophical Quarterly 94 (2):204-224.
    This article gives an account of what makes achievements valuable. Although the natural thought is that achievements are valuable because of the product, such as a cure for cancer or a work of art, I argue that the value of the product of an achievement is not sufficient to account for its overall value. Rather, I argue that achievements are valuable in virtue of their difficulty. I propose a new perfectionist theory of value that acknowledges the will as a characteristic (...)
  • Between Strict Liability and Blameworthy Quality of Will: Taking Responsibility. Elinor Mason - 2019 - In David Shoemaker (ed.), Oxford Studies in Agency and Responsibility Volume 6. Oxford University Press. pp. 241-264.
    This chapter discusses blameworthiness for problematic acts that an agent does inadvertently. Blameworthiness, as opposed to liability, is difficult to make sense of in this sort of case, as there is usually thought to be a tight connection between blameworthiness and something in the agent’s quality of will. This chapter argues that in personal relationships we should sometimes take responsibility for inadvertent actions. Taking on responsibility when we inadvertently fail in our duties to our loved ones assures them that we (...)
  • Privacy Is Power. Carissa Véliz - 2020 - London, UK: Penguin (Bantam Press).
    Selected by the Economist as one of the best books of 2020. Privacy Is Power argues that people should protect their personal data because privacy is a kind of power. If we give too much of our data to corporations, the wealthy will rule. If we give too much personal data to governments, we risk sliding into authoritarianism. For democracy to be strong, the bulk of power needs to be with the citizenry, and whoever has the data will have (...)
  • Humans and Robots: Ethics, Agency, and Anthropomorphism. Sven Nyholm - 2020 - Rowman & Littlefield International.
    This book argues that we need to explore how human beings can best coordinate and collaborate with robots in responsible ways. It investigates ethically important differences between human agency and robot agency to work towards an ethics of responsible human-robot interaction.
  • Responsibility and the Moral Sentiments. R. Jay Wallace - 1994 - Cambridge, Mass.: Harvard University Press.
    R. Jay Wallace argues in this book that moral accountability hinges on questions of fairness: When is it fair to hold people morally responsible for what they do? Would it be fair to do so even in a deterministic world? To answer these questions, we need to understand what we are doing when we hold people morally responsible, a stance that Wallace connects with a central class of moral sentiments, those of resentment, indignation, and guilt. To hold someone responsible, he (...)
  • Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character. Shannon Vallor - 2015 - Philosophy and Technology 28 (1):107-124.
    This paper explores the ambiguous impact of new information and communications technologies on the cultivation of moral skills in human beings. Just as twentieth century advances in machine automation resulted in the economic devaluation of practical knowledge and skillsets historically cultivated by machinists, artisans, and other highly trained workers, while also driving the cultivation of new skills in a variety of engineering and white collar occupations, ICTs are also recognized as potential causes of a complex pattern of economic deskilling, (...)
  • Taking the blame: appropriate responses to medical error. Daniel W. Tigard - 2019 - Journal of Medical Ethics 45 (2):101-105.
    Medical errors are all too common. Ever since a report issued by the Institute of Medicine raised awareness of this unfortunate reality, an emerging theme has gained prominence in the literature on medical error. Fears of blame and punishment, it is often claimed, allow errors to remain undisclosed. Accordingly, modern healthcare must shift away from blame towards a culture of safety in order to effectively reduce the occurrence of error. Against this shift, I argue that it would serve the medical (...)
  • Socially responsive technologies: toward a co-developmental path. Daniel W. Tigard, Niël H. Conradie & Saskia K. Nagel - 2020 - AI and Society 35 (4):885-893.
    Robotic and artificially intelligent (AI) systems are becoming prevalent in our day-to-day lives. As human interaction is increasingly replaced by human–computer and human–robot interaction (HCI and HRI), we occasionally speak and act as though we are blaming or praising various technological devices. While such responses may arise naturally, they are still unusual. Indeed, for some authors, it is the programmers or users—and not the system itself—that we properly hold responsible in these cases. Furthermore, some argue that since directing blame or (...)
  • There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2020 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the optimists (...)
  • Killer robots. Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62–77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
  • A Misdirected Principle with a Catch: Explicability for AI. Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Minds and Machines 28:689–707, 2018). There (...)
  • Society-in-the-loop: programming the algorithmic social contract. Iyad Rahwan - 2018 - Ethics and Information Technology 20 (1):5-14.
    Recent rapid advances in Artificial Intelligence and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve (...)
  • Autonomous Machines, Moral Judgment, and Acting for the Right Reasons. Duncan Purves, Ryan Jenkins & Bradley J. Strawser - 2015 - Ethical Theory and Moral Practice 18 (4):851-872.
    We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral (...)
  • Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci. Sven Nyholm - 2018 - Science and Engineering Ethics 24 (4):1201-1219.
    Many ethicists writing about automated systems attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed (...)
  • Accountability in a computerized society. Helen Nissenbaum - 1996 - Science and Engineering Ethics 2 (1):25-42.
    This essay warns of eroding accountability in computerized societies. It argues that assumptions about computing and features of situations in which computers are produced create barriers to accountability. Drawing on philosophical analyses of moral blame and responsibility, four barriers are identified: 1) the problem of many hands, 2) the problem of bugs, 3) blaming the computer, and 4) software ownership without liability. The paper concludes with ideas on how to reverse this trend.
  • Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Lily Frank & Sven Nyholm - 2017 - Artificial Intelligence and Law 25 (3):305-323.
    The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three (...)
  • Towards Transparency by Design for Artificial Intelligence. Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz & Aurelia Tamò-Larrieux - 2020 - Science and Engineering Ethics 26 (6):3333-3361.
    In this article, we develop the concept of Transparency by Design that serves as practical guidance in helping promote the beneficial functions of transparency while mitigating its challenges in automated-decision making environments. With the rise of artificial intelligence and the ability of AI systems to make automated and self-learned decisions, a call for transparency of how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different (...)
  • What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In (...)
  • Conversation and Responsibility. Michael McKenna - 2011 - Oxford University Press USA.
    In this book Michael McKenna advances a new theory of moral responsibility, one that builds upon the work of P. F. Strawson. As McKenna demonstrates, moral responsibility can be explained on analogy with a conversation. The relation between a morally responsible agent and those who hold her morally responsible is similar to the relation between a speaker and her audience. A responsible agent's actions are bearers of meaning--agent meaning--just as a speaker's utterances are bearers of speaker meaning. Agent meaning is (...)
  • Talking to Our Selves: Reflection, Ignorance, and Agency. John M. Doris - 2015 - New York: Oxford University Press.
    Do we know what we're doing, and why? Psychological research seems to suggest not: reflection and self-awareness are surprisingly uncommon and inaccurate. John M. Doris presents a new account of agency and responsibility, which reconciles our understanding of ourselves as moral agents with empirical work on the unconscious mind.
  • Responsibility From the Margins. David Shoemaker - 2015 - Oxford, GB: Oxford University Press.
    David Shoemaker presents a new pluralistic theory of responsibility, based on the idea of quality of will. His approach is motivated by our ambivalence to real-life cases of marginal agency, such as those caused by clinical depression, dementia, scrupulosity, psychopathy, autism, intellectual disability, and poor formative circumstances. Our ambivalent responses suggest that such agents are responsible in some ways but not others. Shoemaker develops a theory to account for our ambivalence, via close examination of several categories of pancultural emotional responsibility (...)
  • Freedom and Resentment. Peter Strawson - 1962 - Proceedings of the British Academy 48:187-211.
    The doyen of living English philosophers, by these reflections, took hold of and changed the outlook of a good many other philosophers, if not quite enough. He did so, essentially, by assuming that talk of freedom and responsibility is talk not of facts or truths, in a certain sense, but of our attitudes. His more explicit concern was to look again at the question of whether determinism and freedom are consistent with one another -- by shifting attention to certain personal (...)
  • Automation, Work and the Achievement Gap. John Danaher & Sven Nyholm - 2021 - AI and Ethics 1 (3):227–237.
    Rapid advances in AI-based automation have led to a number of existential and economic concerns. In particular, as automating technologies develop enhanced competency they seem to threaten the values associated with meaningful work. In this article, we focus on one such value: the value of achievement. We argue that achievement is a key part of what makes work meaningful and that advances in AI and automation give rise to a number of achievement gaps in the workplace. This could limit people’s ability (...)
  • Freedom and Resentment. Peter Strawson - 2003 - In Gary Watson (ed.), Free Will. Oxford University Press.
  • Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - International Data Privacy Law 1 (2):76-99.
    Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the (...)