  • AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement. Richard Volkman & Katleen Gabriels - 2023 - Science and Engineering Ethics 29 (2):1-14.
    Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the ‘right’ answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process (...)
  • Understanding Emotions and Their Significance through Social Robots, and Vice Versa. Johanna Seibt & Raffaele Rodogno - 2019 - Techné: Research in Philosophy and Technology 23 (3):257-269.
  • Can a robot invigilator prevent cheating? Omar Mubin, Massimiliano Cappuccio, Fady Alnajjar, Muneeb Imtiaz Ahmad & Suleman Shahid - 2020 - AI and Society 35 (4):981-989.
    One of the open questions in educational robotics is the role a robot should take in the classroom. The current focus in this area is on employing robots as a tool or in an assistive capacity, such as the invigilator of an exam. With robots becoming commonplace in the classroom, inquiries will be raised regarding not only their suitability but also their ability to influence and control the morality and behaviour of the students via their presence. Therefore, as a means (...)
  • Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? Francisco Lara - 2021 - Science and Engineering Ethics 27 (4):1-27.
    Can Artificial Intelligence be more effective than human instruction for the moral enhancement of people? The author argues that it would be only if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these (...)
  • Socratic nudges, virtual moral assistants and the problem of autonomy. Francisco Lara & Blanca Rodríguez-López - forthcoming - AI and Society:1-13.
    Many of our daily activities are now made more convenient and efficient by virtual assistants, and the day when they can be designed to instruct us in certain skills, such as those needed to make moral judgements, is not far off. In this paper we ask to what extent it would be ethically acceptable for these so-called virtual assistants for moral enhancement to use subtle strategies, known as “nudges”, to influence our decisions. To achieve our goal, we will first characterise (...)
  • Technologies of self-cultivation. How to improve Stoic self-care apps. Matthew Dennis - 2020 - Human Affairs 30 (4):549-558.
    Self-care apps are booming. Early iterations of this technology focused on tracking health and fitness routines, but recently some developers have turned their attention to the cultivation of character, basing their conceptual resources on the Hellenistic tradition (Stoic Meditations™, Stoa™, Stoic Mental Health Tracker™). Those familiar with the final writings of Michel Foucault will notice an intriguing coincidence between the development of these products and his claims that the Hellenistic tradition of self-cultivation has much to offer contemporary life. In this (...)
  • Social robots and digital well-being: how to design future artificial agents. Matthew J. Dennis - 2021 - Mind and Society 21 (1):37-50.
    Value-sensitive design theorists propose a range of values that should inform how future social robots are engineered. This article explores a new value, digital well-being, and proposes that the next generation of social robots should be designed to facilitate this value in those who use or come into contact with these machines. To do this, I explore how the morphology of social robots is closely connected to digital well-being. I argue that a key decision is whether social robots are (...)
  • Trust as a Test for Unethical Persuasive Design. Johnny Brennan - 2020 - Philosophy and Technology 34 (4):767-783.
    Persuasive design (PD) draws on our basic psychological makeup to build products that make our engagement with them habitual. It uses variable rewards, creates Fear of Missing Out, and leverages social approval to incrementally increase and maintain user engagement. Social media and networking platforms, video games, and slot machines are all examples of persuasive technologies. Recent attention has focused on the dangers of PD: it can deceptively prod users into forming habits that help the company's bottom line but not the user's (...)