  • Committing Crimes with BCIs: How Brain-Computer Interface Users can Satisfy Actus Reus and be Criminally Responsible. Kramer Thompson - 2021 - Neuroethics 14 (S3):311-322.
    Brain-computer interfaces allow agents to control computers without moving their bodies. The agents imagine certain things and the brain-computer interfaces read the concomitant neural activity and operate the computer accordingly. But the use of brain-computer interfaces is problematic for criminal law, which requires that someone can only be found criminally responsible if they have satisfied the actus reus requirement: that the agent has performed some (suitably specified) conduct. Agents who affect the world using brain-computer interfaces do not obviously perform any (...)
  • Hiring, Algorithms, and Choice: Why Interviews Still Matter. Vikram R. Bhargava & Pooria Assadi - 2024 - Business Ethics Quarterly 34 (2):201-230.
    Why do organizations conduct job interviews? The traditional view of interviewing holds that interviews are conducted, despite their steep costs, to predict a candidate’s future performance and fit. This view faces a twofold threat: the behavioral and algorithmic threats. Specifically, an overwhelming body of behavioral research suggests that we are bad at predicting performance and fit; furthermore, algorithms are already better than us at making these predictions in various domains. If the traditional view captures the whole story, then interviews seem (...)
  • Robots: ethical by design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective. Simon Burton, Ibrahim Habli, Tom Lawton, John McDermid, Phillip Morgan & Zoe Porter - 2020 - Artificial Intelligence 279 (C):103201.
  • Risk and Responsibility in Context. Adriana Placani & Stearns Broadhead (eds.) - 2023 - New York: Routledge.
    This volume bridges contemporary philosophical conceptions of risk and responsibility and offers an extensive examination of the topic. It shows that risk and responsibility combine in ways that give rise to new philosophical questions and problems. Philosophical interest in the relationship between risk and responsibility continues to rise, due in no small part to environmental crises, emerging technologies, legal developments, and new medical advances. Despite such interest, scholars are just now working out how to conceive of the links between (...)
  • Jaz u odgovornosti u informatičkoj eri [The Responsibility Gap in the Information Age]. Jelena Mijić - 2023 - Društvo I Politika 4 (4):25-38.
    We ascribe responsibility with the intention of achieving some goal. A commonplace in the philosophical literature is that we can ascribe moral responsibility to a person if at least two conditions are met: that the agent has control over their actions and that they are able to give reasons in support of what they did. However, the fourth industrial revolution is characterized by socio-technological phenomena that potentially confront us with the so-called responsibility gap problem. Debates about responsibility in the context of artificial intelligence are marked by an unclear and indeterminate use of this notion. In order to (...)
  • Responsibility and Robot Ethics: A Critical Overview. Janina Loh - 2019 - Philosophies 4 (4):58.
    This paper has three concerns: first, it represents an etymological and genealogical study of the phenomenon of responsibility. Secondly, it gives an overview of the three fields of robot ethics as a philosophical discipline and discusses the fundamental questions that arise within these three fields. Thirdly, it explains how responsibility is spoken about in these three fields of robot ethics and how responsibility is attributed in general. As a philosophical paper, it presents a theoretical approach and no (...)
  • Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach. Filippo Santoni de Sio, Giulio Mecacci, Simeon Calvert, Daniel Heikoop, Marjan Hagenzieker & Bart van Arem - 2023 - Minds and Machines 33 (4):587-611.
    The paper presents a framework to realise “meaningful human control” over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project “Meaningful Human Control over Automated Driving Systems” led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not (...)
  • Strictly Human: Limitations of Autonomous Systems. Sadjad Soltanzadeh - 2022 - Minds and Machines 32 (2):269-288.
    Can autonomous systems replace humans in the performance of their activities? How does the answer to this question inform the design of autonomous systems? The study of technical systems and their features should be preceded by the study of the activities in which they play roles. Each activity can be described by its overall goals, governing norms and the intermediate steps which are taken to achieve the goals and to follow the norms. This paper uses the activity realist approach to (...)
  • Bringing older people’s perspectives on consumer socially assistive robots into debates about the future of privacy protection and AI governance. Andrea Slane & Isabel Pedersen - forthcoming - AI and Society:1-20.
    A growing number of consumer technology companies are aiming to convince older people that humanoid robots make helpful tools to support aging-in-place. As hybrid devices, socially assistive robots (SARs) are situated between health monitoring tools, familiar digital assistants, security aids, and more advanced AI-powered devices. Consequently, they implicate older people’s privacy in complex ways. Such devices are marketed to perform functions common to smart speakers (e.g., Amazon Echo) and smart home platforms (e.g., Google Home), while other functions are more specific (...)
  • Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral (...)
  • Service robots in the mirror of reflective research. Michael Decker - 2012 - Poiesis and Praxis 9 (3-4):181-200.
    Service robotics has increasingly become the focus of reflective research on new technologies over the last decade. The current state of technology is characterized by prototypical robot systems developed for specific application scenarios outside factories. This has enabled context-based Science and Technology Studies and technology assessments of service robotic systems. This contribution describes the status quo of this reflective research as the starting point for interdisciplinary technology assessment (TA), taking account of TA studies and, in particular, of publications from the (...)
  • The Ethics of Virtual Sexual Assault. John Danaher - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter addresses the growing problem of unwanted sexual interactions in virtual environments. It reviews the available evidence regarding the prevalence and severity of this problem. It then argues that due to the potential harms of such interactions, as well as their nonconsensual nature, there is a good prima facie argument for viewing them as serious moral wrongs. Does this prima facie argument hold up to scrutiny? After considering three major objections – the ‘it’s not real’ objection; the ‘it’s just (...)
  • Experimental Philosophy of Technology. Steven R. Kraaijeveld - 2021 - Philosophy and Technology 34:993-1012.
    Experimental philosophy is a relatively recent discipline that employs experimental methods to investigate the intuitions, concepts, and assumptions behind traditional philosophical arguments, problems, and theories. While experimental philosophy initially served to interrogate the role that intuitions play in philosophy, it has since branched out to bring empirical methods to bear on problems within a variety of traditional areas of philosophy—including metaphysics, philosophy of language, philosophy of mind, and epistemology. To date, no connection has been made between developments in experimental philosophy (...)
  • Introduction to the Topical Collection on AI and Responsibility. Niël Conradie, Hendrik Kempt & Peter Königs - 2022 - Philosophy and Technology 35 (4):1-6.
  • Fairness in Algorithmic Policing. Duncan Purves - 2022 - Journal of the American Philosophical Association 8 (4):741-761.
    Predictive policing, the practice of using algorithmic systems to forecast crime, is heralded by police departments as the new frontier of crime analysis. At the same time, it is opposed by civil rights groups, academics, and media outlets for being ‘biased’ and therefore discriminatory against communities of color. This paper argues that the prevailing focus on racial bias has overshadowed two normative factors that are essential to a full assessment of the moral permissibility of predictive policing: fairness in the (...)
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg.
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • Modernity and Contemporaneity. Evangelos D. Protopapadakis & Georgios Arabatzis (eds.) - 2022 - The NKUA Applied Philosophy Research Lab Press.
    Modernity and Contemporaneity is the 3rd volume in the Hellenic-Serbian Philosophical Dialogue Series, a project that was initiated as an emphatic token of the will and commitment to establish permanent and fruitful collaboration between two strongly bonded Departments of Philosophy, that of the National and Kapodistrian University of Athens and that of the University of Novi Sad respectively. This collaboration was founded from the very beginning upon friendship, mutual respect and strong engagement, as well as upon our firm resolution to (...)
  • The ethics of information warfare. Luciano Floridi & Mariarosaria Taddeo (eds.) - 2014 - Springer International Publishing.
    This book offers an overview of the ethical problems posed by Information Warfare, and of the different approaches and methods used to solve them, in order to provide the reader with a better grasp of the ethical conundrums posed by this new form of warfare. The volume is divided into three parts, each comprising four chapters. The first part focuses on issues pertaining to the concept of Information Warfare and the clarifications that need to be made in order to (...)
  • In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned? Chang Ho Yoon, Robert Torrance & Naomi Scheinerman - 2022 - Journal of Medical Ethics 48 (9):581-585.
    We argue why interpretability should have primacy alongside empiricism for several reasons: first, if machine learning models are beginning to render some of the high-risk healthcare decisions instead of clinicians, these models pose a novel medicolegal and ethical frontier that is incompletely addressed by current methods of appraising medical interventions like pharmacological therapies; second, a number of judicial precedents underpinning medical liability and negligence are compromised when ‘autonomous’ ML recommendations are considered to be on par with human instruction in specific (...)
  • Autonomous weapon systems and responsibility gaps: a taxonomy. Nathan Gabriel Wood - 2023 - Ethics and Information Technology 25 (1):1-14.
    A classic objection to autonomous weapon systems (AWS) is that these could create so-called responsibility gaps, where it is unclear who should be held responsible in the event that an AWS were to violate some portion of the law of armed conflict (LOAC). However, those who raise this objection generally do so presenting it as a problem for AWS as a whole class of weapons. Yet there exists a rather wide range of systems that can be counted as “autonomous weapon (...)
  • Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those identified patterns and correlations. With the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers have (...)
  • Brain–Computer Interfaces: Lessons to Be Learned from the Ethics of Algorithms. Andreas Wolkenstein, Ralf J. Jox & Orsolya Friedrich - 2018 - Cambridge Quarterly of Healthcare Ethics 27 (4):635-646.
    Brain–computer interfaces are driven essentially by algorithms; however, the ethical role of such algorithms has so far been neglected in the ethical assessment of BCIs. The goal of this article is therefore twofold: First, it aims to offer insights into whether the problems related to the ethics of BCIs can be better grasped with the help of already existing work on the ethics of algorithms. As a second goal, the article explores what kinds of solutions are available in that body (...)
  • Safety by simulation: theorizing the future of robot regulation. Mika Viljanen - 2024 - AI and Society 39 (1):139-154.
    Mobility robots may soon be among us, triggering a need for safety regulation. Robot safety regulation, however, remains underexplored, with only a few articles analyzing what regulatory approaches could be feasible. This article offers an account of the available regulatory strategies and attempts to theorize the effects of simulation-based safety regulation. The article first discusses the distinctive features of mobility robots as regulatory targets and argues that emergent behavior constitutes the key regulatory concern in designing robot safety regulation regimes. In (...)
  • Accountability and Control Over Autonomous Weapon Systems: A Framework for Comprehensive Human Oversight. Ilse Verdiesen, Filippo Santoni de Sio & Virginia Dignum - 2020 - Minds and Machines 31 (1):137-163.
    Accountability and responsibility are key concepts in the academic and societal debate on Autonomous Weapon Systems, but these notions are often used as high-level overarching constructs and are not operationalised to be useful in practice. “Meaningful Human Control” is often mentioned as a requirement for the deployment of Autonomous Weapon Systems, but a common definition of what this notion means in practice, and a clear understanding of its relation with responsibility and accountability is also lacking. In this paper, we present (...)
  • Technology as Driver for Morally Motivated Conceptual Engineering. Herman Veluwenkamp, Marianna Capasso, Jonne Maas & Lavinia Marin - 2022 - Philosophy and Technology 35 (3):1-25.
    New technologies are the source of uncertainties about the applicability of moral and morally connotated concepts. These uncertainties sometimes call for conceptual engineering, but it is not often recognized when this is the case. We take this to be a missed opportunity, as a recognition that different researchers are working on the same kind of project can help solve methodological questions that one is likely to encounter. In this paper, we present three case studies where philosophers of technology implicitly engage (...)
  • Reasons for Meaningful Human Control. Herman Veluwenkamp - 2022 - Ethics and Information Technology 24 (4):1-9.
    “Meaningful human control” is a term invented in the political and legal debate on autonomous weapons systems, but it is nowadays also used in many other contexts. It is supposed to specify conditions under which an artificial system is under the right kind of control to avoid responsibility gaps: that is, situations in which no moral agent is responsible. Santoni de Sio and Van den Hoven have recently suggested a framework that can be used by system designers to operationalize this (...)
  • Do Others Mind? Moral Agents Without Mental States. Fabio Tollon - 2021 - South African Journal of Philosophy 40 (2):182-194.
    As technology advances and artificial agents (AAs) become increasingly autonomous, start to embody morally relevant values and act on those values, there arises the issue of whether these entities should be considered artificial moral agents (AMAs). There are two main ways in which one could argue for AMA: using intentional criteria or using functional criteria. In this article, I provide an exposition and critique of “intentional” accounts of AMA. These accounts claim that moral agency should only be accorded to entities (...)
  • Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers. Daniel W. Tigard - 2021 - Science and Engineering Ethics 27 (5):1-20.
    Artificial intelligence and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the ‘severance problem’—the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave (...)
  • There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2020 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the optimists (...)
  • Collective Responsibility and Artificial Intelligence. Isaac Taylor - 2024 - Philosophy and Technology 37 (1):1-18.
    The use of artificial intelligence (AI) to make high-stakes decisions is sometimes thought to create a troubling responsibility gap – that is, a situation where nobody can be held morally responsible for the outcomes that are brought about. However, philosophers and practitioners have recently claimed that, even though no individual can be held morally responsible, groups of individuals might be. Consequently, they think, we have less to fear from the use of AI than might appear to be the case. This (...)
  • Brain to computer communication: Ethical perspectives on interaction models. [REVIEW] Guglielmo Tamburrini - 2009 - Neuroethics 2 (3):137-149.
    Brain Computer Interfaces (BCIs) enable one to control peripheral ICT and robotic devices by processing brain activity on-line. The potential usefulness of BCI systems, initially demonstrated in rehabilitation medicine, is now being explored in education, entertainment, intensive workflow monitoring, security, and training. Ethical issues arising in connection with these investigations are triaged taking into account technological imminence and pervasiveness of BCI technologies. By focussing on imminent technological developments, ethical reflection is informatively grounded into realistic protocols of brain-to-computer communication. In particular, (...)
  • Information Warfare: A Philosophical Perspective. [REVIEW] Mariarosaria Taddeo - 2012 - Philosophy and Technology 25 (1):105-120.
    This paper focuses on Information Warfare—the warfare characterised by the use of information and communication technologies. This is a fast growing phenomenon, which poses a number of issues ranging from the military use of such technologies to its political and ethical implications. The paper presents a conceptual analysis of this phenomenon with the goal of investigating its nature. Such an analysis is deemed to be necessary in order to lay the groundwork for future investigations into this topic, addressing the ethical (...)
  • Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit. Mariarosaria Taddeo & Alexander Blanchard - 2022 - Philosophy and Technology 35 (3):1-24.
    In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility attributed to individuals in a justified and fair way and which is accepted by individuals as an assessment of their own moral character. We argue that, given the unpredictability of AWS, meaningful (...)
  • To Each Technology Its Own Ethics: The Problem of Ethical Proliferation. Henrik Skaug Sætra & John Danaher - 2022 - Philosophy and Technology 35 (4):1-26.
    Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and (...)
  • The Automated Laplacean Demon: How ML Challenges Our Views on Prediction and Explanation. Sanja Srećković, Andrea Berber & Nenad Filipović - 2021 - Minds and Machines 32 (1):159-183.
    Certain characteristics make machine learning a powerful tool for processing large amounts of data, and also particularly unsuitable for explanatory purposes. There are worries that its increasing use in science may sideline the explanatory goals of research. We analyze the key characteristics of ML that might have implications for the future directions in scientific research: epistemic opacity and the ‘theory-agnostic’ modeling. These characteristics are further analyzed in a comparison of ML with the traditional statistical methods, in order to demonstrate what (...)
  • Robots and Respect: Assessing the Case Against Autonomous Weapon Systems. Robert Sparrow - 2016 - Ethics and International Affairs 30 (1):93-116.
    There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have acknowledged. Second, I will attempt to uncover, explicate, and defend the intuition that (...)
  • Customizable Ethics Settings for Building Resilience and Narrowing the Responsibility Gap: Case Studies in the Socio-Ethical Engineering of Autonomous Systems. Sadjad Soltanzadeh, Jai Galliott & Natalia Jevglevskaja - 2020 - Science and Engineering Ethics 26 (5):2693-2708.
    Ethics settings allow for morally significant decisions made by humans to be programmed into autonomous machines, such as autonomous vehicles or autonomous weapons. Customizable ethics settings are a type of ethics setting in which the users of autonomous machines make such decisions. Here two arguments are provided in defence of customizable ethics settings. Firstly, by approaching ethics settings in the context of failure management, it is argued that customizable ethics settings are instrumentally and inherently valuable for building resilience into the (...)
  • Statistically responsible artificial intelligences. Nicholas Smith & Darby Vickers - 2021 - Ethics and Information Technology 23 (3):483-493.
    As artificial intelligence becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that (...)
  • Contemporary Technologies and the Morality of Warfare: The War of the Machines. Brian Smith - 2022 - Journal of Military Ethics 21 (1):88-92.
    The belief that automated technologies will have a salutary effect on war goes back to the late nineteenth century. In 1898, at Madison Square Garden, Nikola Tesla famously showcased the first radi...
  • Autonomous weapons systems and the moral equality of combatants. Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 22 (3):197-209.
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. We (...)
  • Preventing Optimific Wrongings. Thomas Sinclair - 2017 - Utilitas 29 (4):453-473.
    Most people believe that the rights of others sometimes require us to act in ways that have even substantially sub-optimal outcomes, as viewed from an axiological perspective that ranks outcomes objectively. Bringing about the optimal outcome, contrary to such a requirement, is an ‘optimific wronging’. It is less clear, however, that we are required to prevent optimific wrongings. Perhaps the value of the outcome, combined with the relative weakness of prohibitions on allowing harm as compared to those against doing harm, (...)
  • Just war and robots’ killings. Thomas W. Simpson & Vincent C. Müller - 2016 - Philosophical Quarterly 66 (263):302-22.
    May lethal autonomous weapons systems—‘killer robots ’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the (...)
  • A taxonomy of human–machine collaboration: capturing automation and technical autonomy. Monika Simmler & Ruth Frischknecht - 2021 - AI and Society 36 (1):239-250.
    Due to the ongoing advancements in technology, socio-technical collaboration has become increasingly prevalent. This poses challenges in terms of governance and accountability, as well as issues in various other fields. Therefore, it is crucial to familiarize decision-makers and researchers with the core of human–machine collaboration. This study introduces a taxonomy that enables identification of the very nature of human–machine interaction. A literature review has revealed that automation and technical autonomy are main parameters for describing and understanding such interaction. Both aspects (...)
  • The case of classroom robots: teachers’ deliberations on the ethical tensions. Sofia Serholt, Wolmet Barendregt, Asimina Vasalou, Patrícia Alves-Oliveira, Aidan Jones, Sofia Petisca & Ana Paiva - 2017 - AI and Society 32 (4):613-631.
    Robots are increasingly being studied for use in education. It is expected that robots will have the potential to facilitate children’s learning and function autonomously within real classrooms in the near future. Previous research has raised the importance of designing acceptable robots for different practices. In parallel, scholars have raised ethical concerns surrounding children interacting with robots. Drawing on a Responsible Research and Innovation perspective, our goal is to move away from research concerned with designing features that will render robots (...)
  • The achievement gap thesis reconsidered: artificial intelligence, automation, and meaningful work. Lucas Scripter - forthcoming - AI and Society:1-14.
    John Danaher and Sven Nyholm have argued that automation, especially of the sort powered by artificial intelligence, poses a threat to meaningful work by diminishing the chances for meaning-conferring workplace achievement, what they call “achievement gaps”. In this paper, I argue that Danaher and Nyholm’s achievement gap thesis suffers from an ambiguity. The weak version of the thesis holds that automation may result in the appearance of achievement gaps, whereas the strong version holds that automation may result in an on-balance loss (...)
  • The Spectrum of Responsibility Ascription for End Users of Neurotechnologies. Andreas Schönau - 2021 - Neuroethics 14 (3):423-435.
    Invasive neural devices offer novel prospects for motor rehabilitation on different levels of agentive behavior. From a functional perspective, they interact with, support, or enable human intentional actions in such a way that movement capabilities are regained. However, when there is a technical malfunction resulting in an unintended movement, the complexity of the relationship between the end user and the device sometimes makes it difficult to determine who is responsible for the outcome – a circumstance that has been coined as (...)
  • Autonomous Weapons and Distributed Responsibility. Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
    The possibility that autonomous weapons will be deployed on the battlefields of the future raises the challenge of determining who can be held responsible for how these weapons act. Robert Sparrow has argued that it would be impossible to attribute responsibility for autonomous robots' actions to their creators, their commanders, or the robots themselves. This essay reaches a much different conclusion. It argues that the problem of determining responsibility for autonomous robots can be solved by addressing it within the context (...)