  • Meaning in Life in AI Ethics—Some Trends and Perspectives.Sven Nyholm & Markus Rüther - 2023 - Philosophy and Technology 36 (2):1-24.
    In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold, namely, to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small, but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We (...)
  • The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work.Sarah Bankins & Paul Formosa - 2023 - Journal of Business Ethics (4):1-16.
    The increasing workplace use of artificially intelligent (AI) technologies has implications for the experience of meaningful human work. Meaningful work refers to the perception that one’s work has worth, significance, or a higher purpose. The development and organisational deployment of AI is accelerating, but the ways in which this will support or diminish opportunities for meaningful work and the ethical implications of these changes remain under-explored. This conceptual paper is positioned at the intersection of the meaningful work and ethical AI (...)
  • A plea for integrated empirical and philosophical research on the impacts of feminized AI workers.Hannah Read, Javier Gomez-Lavin, Andrea Beltrama & Lisa Miracchi Titus - 2022 - Analysis 999 (1):89-97.
    Feminist philosophers have long emphasized the ways in which women’s oppression takes a variety of forms depending on complex combinations of factors. These include women’s objectification, dehumanization and unjust gendered divisions of labour caused in part by sexist ideologies regarding women’s social role. This paper argues that feminized artificial intelligence (feminized AI) poses new and important challenges to these perennial feminist philosophical issues. Despite the recent surge in theoretical and empirical attention paid to the ethics of AI in general, a (...)
  • A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI software driven entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  • Endangered Experiences: Skipping Newfangled Technologies and Sticking to Real Life.Marc Champagne - manuscript
  • Social Media and its Negative Impacts on Autonomy.Siavosh Sahebi & Paul Formosa - 2022 - Philosophy and Technology 35 (3):1-24.
    How social media impacts the autonomy of its users is a topic of increasing focus. However, much of the literature that explores these impacts fails to engage in depth with the philosophical literature on autonomy. This has resulted in a failure to consider the full range of impacts that social media might have on autonomy. A deeper consideration of these impacts is thus needed, given the importance of both autonomy as a moral concept and social media as a feature of (...)
  • Techno-optimism: an Analysis, an Evaluation and a Modest Defence.John Danaher - 2022 - Philosophy and Technology 35 (2):1-29.
    What is techno-optimism and how can it be defended? Although techno-optimist views are widely espoused and critiqued, there have been few attempts to systematically analyse what it means to be a techno-optimist and how one might defend this view. This paper attempts to address this oversight by providing a comprehensive analysis and evaluation of techno-optimism. It is argued that techno-optimism is a pluralistic stance that comes in weak and strong forms. These vary along a number of key dimensions but each (...)
  • Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors.Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
  • Ethical Issues with Artificial Ethics Assistants.Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  • Tragic Choices and the Virtue of Techno-Responsibility Gaps.John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace (...)
  • Trust in Medical Artificial Intelligence: A Discretionary Account.Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI (...)
  • Robot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy.Paul Formosa - 2021 - Minds and Machines 31 (4):595-616.
    Social robots are robots that can interact socially with humans. As social robots and the artificial intelligence that powers them becomes more advanced, they will likely take on more social and work roles. This has many important ethical implications. In this paper, we focus on one of the most central of these, the impacts that social robots can have on human autonomy. We argue that, due to their physical presence and social capacities, there is a strong potential for social robots (...)
  • A neo-Aristotelian perspective on the need for artificial moral agents (AMAs).Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) essay nor Formosa and Ryan's (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for (...)
  • Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers.Daniel W. Tigard - 2021 - Science and Engineering Ethics 27 (5):1-20.
    Artificial intelligence and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the ‘severance problem’—the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave (...)
  • Sex Robots and Views from Nowhere: A Commentary on Jecker, Howard and Sparrow, and Wang.Kelly Kate Evans - 2021 - In Ruiping Fan & Mark J. Cherry (eds.), Sex Robots: Social Impact and the Future of Human Relations. Springer.
    This article explores the implications of what it means to moralize about future technological innovations. Specifically, I have been invited to comment on three papers that attempt to think about what seems to be an impending social reality: the availability of life-like sex robots. In response, I explore what it means to moralize about future technological innovations from a secular perspective, i.e., a perspective grounded in an immanent, socio-historically contingent view. I review the arguments of Nancy Jecker, Mark Howard and (...)
  • A dialogue on the ethics of science: Henri Poincaré and Pope Francis.Nicholas Matthew Danne - 2021 - European Journal for Philosophy of Science 11 (3):1-12.
    To teach the ethics of science to science majors, I follow several teachers in the literature who recommend “persona” writing, or the student construction of dialogues between ethical thinkers of interest. To engage science majors in particular, and especially those new to academic philosophy, I recommend constructing persona dialogues from Henri Poincaré’s essay, “Ethics and Science”, and the non-theological third chapter of Pope Francis’s encyclical on the environment, Laudato si. This pairing of interlocutors offers two advantages. The first is that (...)
  • The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems.Atoosa Kasirzadeh & Colin Klein - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more (...)
  • Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup.Simon N. Balle - 2022 - AI and Society 37 (2):535-548.
    Empirical research on human–robot interaction has demonstrated how humans tend to react to social robots with empathic responses and moral behavior. How should we ethically evaluate such responses to robots? Are people wrong to treat non-sentient artefacts as moral patients, since this rests on anthropomorphism and ‘over-identification’, or correct, since spontaneous moral intuition and behavior toward nonhumans is indicative of moral patienthood, such that social robots become our ‘Others’? In this research paper, I weave extant HRI studies that demonstrate empathic (...)
  • Prolegómenos a una ética para la robótica social [Prolegomena to an Ethics for Social Robotics].Júlia Pareto Boada - 2021 - Dilemata 34:71-87.
    Social robotics has a high disruptive potential, for it expands the field of application of intelligent technology to practical contexts of a relational nature. Due to their capacity to “intersubjectively” interact with people, social robots can take over new roles in our daily activities, multiplying the ethical implications of intelligent robotics. In this paper, we offer some preliminary considerations for the ethical reflection on social robotics, so as to clarify how to orient critical-normative thinking correctly in this arduous task. (...)
  • The promise and perils of AI in medicine.Robert Sparrow & Joshua James Hatherley - 2019 - International Journal of Chinese and Comparative Philosophy of Medicine 17 (2):79-109.
    What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It’s also highly likely to impact on the organisational and business practices (...)
  • Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  • Self-building technologies.François Kammerer - 2020 - AI and Society 35 (4):901-915.
    On the basis of two thought experiments, I argue that self-building technologies are possible given our current level of technological progress. We could already use technology to make us instantiate selfhood in a more perfect, complete manner. I then examine possible extensions of this thesis, regarding more radical self-building technologies which might become available in a distant future. I also discuss objections and reservations one might have about this view.
  • Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition.Massimiliano L. Cappuccio, Anco Peeters & William McDonald - 2019 - Philosophy and Technology 33 (1):9-31.
    This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice to both the robustness of the social responses solicited in humans by robots and the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental nature of robots and (...)
  • Stable Strategies for Personal Development: On the Prudential Value of Radical Enhancement and the Philosophical Value of Speculative Fiction.Ian Stoner - 2020 - Metaphilosophy 51 (1):128-150.
    In her short story “Stable Strategies for Middle Management,” Eileen Gunn imagines a future in which Margaret, an office worker, seeks radical genetic enhancements intended to help her secure the middle-management job she wants. One source of the story’s tension and dark humor is dramatic irony: readers can see that the enhancements Margaret buys stand little chance of making her life go better for her; enhancing is, for Margaret, probably a prudential mistake. This paper argues that our positions in the (...)
  • Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • E. M. Forster’s ‘The Machine Stops’: humans, technology and dialogue.Ana Cristina Zimmermann & W. John Morgan - 2019 - AI and Society 34 (1):37-45.
    The article explores E.M. Forster’s story The Machine Stops as an example of dystopian literature and its possible associations with the use of technology and with today’s cyber culture. Dystopian societies are often characterized by dehumanization and Forster’s novel raises questions about how we live in time and space; and how we establish relationships with the Other and with the world through technology. We suggest that the fear of technology depicted in dystopian literature indicates a fear that machines are mimicking (...)
  • The “enhanced” warrior: drone warfare and the problematics of separation.Danial Qaurooni & Hamid Ekbia - 2017 - Phenomenology and the Cognitive Sciences 16 (1):53-73.
    Unmanned Aerial Vehicles, or drones, are increasingly employed for military purposes. They are extolled for improving operational endurance and targeting precision on the one hand and keeping drone crew from harm on the other. In the midst of such praise, what falls by the wayside is an entangled set of concerns about the ways in which the relationship between the pilots and their operational environment is being reconfigured. This paper traces the various manifestations of this reconfiguration and goes on to (...)
  • The use of software tools and autonomous bots against vandalism: eroding Wikipedia’s moral order?Paul B. de Laat - 2015 - Ethics and Information Technology 17 (3):175-188.
    English-language Wikipedia is constantly being plagued by vandalistic contributions on a massive scale. In order to fight them, its volunteer contributors deploy an array of software tools and autonomous bots. After an analysis of their functioning and the ‘coactivity’ in use between humans and bots, this research ‘discloses’ the moral issues that emerge from the combined patrolling by humans and bots. Administrators provide the stronger tools only to trusted users, thereby creating a new hierarchical (...)
  • The impact of intelligent decision-support systems on humans’ ethical decision-making: A systematic literature review and an integrated framework.Franziska Poszler & Benjamin Lange - forthcoming - Technological Forecasting and Social Change.
    With the rise and public accessibility of AI-enabled decision-support systems, individuals outsource increasingly more of their decisions, even those that carry ethical dimensions. Considering this trend, scholars have highlighted that uncritical deference to these systems would be problematic and consequently called for investigations of the impact of pertinent technology on humans’ ethical decision-making. To this end, this article conducts a systematic review of existing scholarship and derives an integrated framework that demonstrates how intelligent decision-support systems (IDSSs) shape humans’ ethical decision-making. (...)
  • On the computational complexity of ethics: moral tractability for minds and machines.Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • Human Flourishing and Technology Affordances.Avigail Ferdman - 2023 - Philosophy and Technology 37 (1):1-28.
    Amid the growing interest in the relationship between technology and human flourishing, philosophical perfectionism can serve as a fruitful lens through which to normatively evaluate technology. This paper offers an analytic framework that explains the relationship between technology and flourishing by way of innate human capacities. According to perfectionism, our human flourishing is determined by how well we exercise our human capacities to know, create, be sociable, use our bodies and exercise the will, by engaging in activities that ultimately produce (...)
  • AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement.Richard Volkman & Katleen Gabriels - 2023 - Science and Engineering Ethics 29 (2):1-14.
    Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the ‘right’ answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process (...)
  • Digital sovereignty and smart wearables: Three moral calculi for the distribution of legitimate control over the digital.Niël Henk Conradie & Saskia K. Nagel - 2022 - Journal of Responsible Technology 12 (C):100053.
  • Satellites, war, climate change, and the environment: are we at risk for environmental deskilling?Samantha Jo Fried - 2020 - AI and Society:1-9.
    Currently, we find ourselves in a paradigm in which we believe that accepting climate change data will lead to a kind of automatic action toward the preservation of our environment. I have argued elsewhere (Fried 2020) that this lack of civic action on climate data is significant when placed in the historical, military context of the technologies that collect this data––Earth remote sensing technologies. However, I have not yet discussed the phenomenological or moral implications of this context, which are deeply (...)
  • There Is No Techno-Responsibility Gap.Daniel W. Tigard - 2021 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent (AI) systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the (...)
  • Moral Gridworlds: A Theoretical Proposal for Modeling Artificial Moral Cognition.Julia Haas - 2020 - Minds and Machines 30 (2):219-246.
    I describe a suite of reinforcement learning environments in which artificial agents learn to value and respond to moral content and contexts. I illustrate the core principles of the framework by characterizing one such environment, or “gridworld,” in which an agent learns to trade-off between monetary profit and fair dealing, as applied in a standard behavioral economic paradigm. I then highlight the core technical and philosophical advantages of the learning approach for modeling moral cognition, and for addressing the so-called value (...)
  • Attention, moral skill, and algorithmic recommendation.Nick Schuster & Seth Lazar - forthcoming - Philosophical Studies:1-26.
    Recommender systems are artificial intelligence technologies, deployed by online platforms, that model our individual preferences and direct our attention to content we’re likely to engage with. As the digital world has become increasingly saturated with information, we’ve become ever more reliant on these tools to efficiently allocate our attention. And our reliance on algorithmic recommendation may, in turn, reshape us as moral agents. While recommender systems could in principle enhance our moral agency by enabling us to cut through the information (...)
  • Artificial Intelligence and Human Enhancement: Can AI Technologies Make Us More (Artificially) Intelligent?Sven Nyholm - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (1):76-88.
    This paper discusses two opposing views about the relation between artificial intelligence (AI) and human intelligence: on the one hand, a worry that heavy reliance on AI technologies might make people less intelligent and, on the other, a hope that AI technologies might serve as a form of cognitive enhancement. The worry relates to the notion that if we hand over too many intelligence-requiring tasks to AI technologies, we might end up with fewer opportunities to train our own intelligence. Concerning (...)
  • Are superintelligent robots entitled to human rights?John-Stewart Gordon - 2022 - Ratio 35 (3):181-193.
  • Drones in humanitarian contexts, robot ethics, and the human–robot interaction.Aimee van Wynsberghe & Tina Comes - 2020 - Ethics and Information Technology 22 (1):43-53.
    There are two dominant trends in the humanitarian care of 2019: the ‘technologizing of care’ and the centrality of the humanitarian principles. The concern, however, is that these two trends may conflict with one another. Faced with the growing use of drones in the humanitarian space there is need for ethical reflection to understand if this technology undermines humanitarian care. In the humanitarian space, few agree over the value of drone deployment; one school of thought believes drones can provide a (...)
  • A paradigm shift for robot ethics: from HRI to human–robot–system interaction.Aimee van Wynsberghe & Shuhong Li - 2019 - Medicolegal and Bioethics:11-21.
  • Rituals and Machines: A Confucian Response to Technology-Driven Moral Deskilling.Pak-Hang Wong - 2019 - Philosophies 4 (4):59.
    Robots and other smart machines are increasingly interwoven into the social fabric of our society, with the area and scope of their application continuing to expand. As we become accustomed to interacting through and with robots, we also begin to supplement or replace existing human–human interactions with human–machine interactions. This article aims to discuss the impacts of the shift from human–human interactions to human–machine interactions in one facet of our self-constitution, i.e., morality. More specifically, it sets out to explore whether (...)
  • Dewey’s Notion of Intelligent Habit as a Basis for Ethical Assessment of Technology.Michał Wieczorek - 2023 - Contemporary Pragmatism 20 (4):356-377.
    This paper discusses how John Dewey’s notion of intelligent habit could contribute to technology ethics. For Dewey, intelligent (i.e., desirable) habits are reflective – arising from inquiry into the appropriate courses of action in each situation – and flexible – easily adaptable to the changing circumstances. We should strive to develop intelligent habits as they are the best tools for the achievement of our goals and are necessary for individual and societal flourishing. I argue that Dewey’s notion of intelligent habit (...)
  • Neuroenhancement, the Criminal Justice System, and the Problem of Alienation.Jukka Varelius - 2019 - Neuroethics 13 (3):325-335.
    It has been suggested that neuroenhancements could be used to improve the abilities of criminal justice authorities. Judges could be made more able to make adequately informed and unbiased decisions, for example. Yet, while such a prospect appears appealing, the views of neuroenhanced criminal justice authorities could also be alien to the unenhanced public. This could compromise the legitimacy and functioning of the criminal justice system. In this article, I assess possible solutions to this problem. I maintain that none of (...)
  • A Taste of Armageddon: A Virtue Ethics Perspective on Autonomous Weapons and Moral Injury.Massimiliano Lorenzo Cappuccio, Jai Christian Galliott & Fady Shibata Alnajjar - 2022 - Journal of Military Ethics 21 (1):19-38.
    Autonomous weapon systems could in principle release military personnel from the onus of killing during combat missions, reducing the related risk of suffering a moral injury and its debilita...
  • Technomoral Civic Virtues: a Critical Appreciation of Shannon Vallor’s Technology and the Virtues.Don Howard - 2018 - Philosophy and Technology 31 (2):293-304.
    This paper begins by summarizing the chief, original contributions to technology ethics in Shannon Vallor’s recent book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, highlighting especially the book’s distinctive inclusion of not only the western virtue ethics tradition but also the analogous traditions in Buddhist and Confucian ethics. But the main point of the paper is to suggest that the theoretical framework developed in the book be extended to include an analysis of the distinctive civic (...)
  • Robotification & ethical cleansing.Marco Nørskov - 2022 - AI and Society 37 (2):425-441.
    Robotics is currently not only a cutting-edge research area, but is potentially disruptive to all domains of our lives—for better and worse. While legislation is struggling to keep pace with the development of these new artifacts, our intellectual limitations and physical laws seem to present the only hard demarcation lines, when it comes to state-of-the-art R&D. To better understand the possible implications, the paper at hand critically investigates underlying processes and structures of robotics in the context of Heidegger’s and Nishitani’s (...)
  • The AI Commander Problem: Ethical, Political, and Psychological Dilemmas of Human-Machine Interactions in AI-enabled Warfare.James Johnson - 2022 - Journal of Military Ethics 21 (3):246-271.
    Can AI solve the ethical, moral, and political dilemmas of warfare? How is artificial intelligence (AI)-enabled warfare changing the way we think about the ethical-political dilemmas and practice of war? This article explores the key elements of the ethical, moral, and political dilemmas of human-machine interactions in modern digitized warfare. It provides a counterpoint to the argument that AI “rational” efficiency can simultaneously offer a viable solution to human psychological and biological fallibility in combat while retaining “meaningful” human control over (...)