References
  • AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA).Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more detail. (...)
  • Living with AI personal assistant: an ethical appraisal.Lorraine K. C. Yeung, Cecilia S. Y. Tam, Sam S. S. Lau & Mandy M. Ko - forthcoming - AI and Society:1-16.
    Mark Coeckelbergh (Int J Soc Robot 1:217–221, 2009) argues that robot ethics should investigate what interaction with robots can do to humans rather than focusing on the robot’s moral status. We should ask what robots do to our sociality and whether human–robot interaction can contribute to the human good and human flourishing. This paper extends Coeckelbergh’s call and investigates what it means to live with disembodied AI-powered agents. We address the following question: Can human–AI interaction contribute to our moral (...)
  • Ethical Issues with Artificial Ethics Assistants.Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  • Interperforming in AI: question of ‘natural’ in machine learning and recurrent neural networks.Tolga Yalur - 2020 - AI and Society 35 (3):737-745.
    This article offers a critical inquiry of contemporary neural network models as an instance of machine learning, from an interdisciplinary perspective of AI studies and performativity. It shows the limits on the architecture of these network systems due to the misemployment of ‘natural’ performance, and it offers ‘context’ as a variable from a performative approach, instead of a constant. The article begins with a brief review of machine learning-based natural language processing systems and continues with a concentration on the relevant (...)
  • Is tomorrow’s car appealing today? Ethical issues and user attitudes beyond automation.Darja Vrščaj, Sven Nyholm & Geert P. J. Verbong - 2020 - AI and Society 35 (4):1033-1046.
    The literature on ethics and user attitudes towards AVs discusses user concerns in relation to automation; however, we show that there are additional relevant issues at stake. To assess adolescents’ attitudes regarding the ‘car of the future’ as presented by car manufacturers, we conducted two studies with over 400 participants altogether. We used a mixed methods approach in which we combined qualitative and quantitative methods. In the first study, our respondents appeared to be more concerned about other aspects of AVs (...)
  • There Is No Techno-Responsibility Gap.Daniel W. Tigard - 2020 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the optimists (...)
  • Achieving Operational Excellence Through Artificial Intelligence: Driving Forces and Barriers.Muhammad Usman Tariq, Marc Poulin & Abdullah A. Abonamah - 2021 - Frontiers in Psychology 12.
    This paper presents an in-depth literature review on the driving forces and barriers for achieving operational excellence through artificial intelligence. Artificial intelligence is a technological concept spanning operational management, philosophy, humanities, statistics, mathematics, computer sciences, and social sciences. AI refers to machines mimicking human behavior in terms of cognitive functions. The evolution of new technological procedures and advancements in producing intelligence for machines create a positive impact on decisions, operations, strategies, and management incorporated in the production process of goods and (...)
  • Investigating user perceptions of commercial virtual assistants: A qualitative study.Leilasadat Mirghaderi, Monika Sziron & Elisabeth Hildt - 2022 - Frontiers in Psychology 13.
    As commercial virtual assistants become an integrated part of almost every smart device that we use on a daily basis, including but not limited to smartphones, speakers, personal computers, watches, TVs, and TV sticks, there are pressing questions that call for the study of how participants perceive commercial virtual assistants and what relational roles they assign to them. Furthermore, it is crucial to study which characteristics of commercial virtual assistants are perceived as important for establishing affective interaction with commercial virtual (...)
  • Artificial intelligence, culture and education.Sergey B. Kulikov & Anastasiya V. Shirokova - 2021 - AI and Society 36 (1):305-318.
    Sequential transformative design of research :224–235, 2015; Groleau et al. in J Mental Health 16:731–741, 2007; Robson and McCartan in Real world research: a resource for users of social research methods in applied settings, Wiley, Chichester, 2016) allows testing a group of theoretical assumptions about the connections of artificial intelligence with culture and education. In the course of research, semiotics ensures the description of self-organizing systems of cultural signs and symbols in terms of artificial intelligence as a special set of (...)
  • Linking Human And Machine Behavior: A New Approach to Evaluate Training Data Quality for Beneficial Machine Learning.Thilo Hagendorff - 2021 - Minds and Machines 31 (4):563-593.
    Machine behavior that is based on learning algorithms can be significantly influenced by the exposure to data of different qualities. Up to now, those qualities are solely measured in technical terms, but not in ethical ones, despite the significant role of training and annotation data in supervised machine learning. This is the first study to fill this gap by describing new dimensions of data quality for supervised machine learning applications. Based on the rationale that different social and psychological backgrounds of (...)
  • What ethics can say on artificial intelligence: Insights from a systematic literature review.Francesco Vincenzo Giarmoleo, Ignacio Ferrero, Marta Rocchi & Massimiliano Matteo Pellegrini - forthcoming - Business and Society Review.
    The abundance of literature on ethical concerns regarding artificial intelligence (AI) highlights the need to systematize, integrate, and categorize existing efforts through a systematic literature review. The article aims to investigate prevalent concerns, proposed solutions, and prominent ethical approaches within the field. Considering 309 articles from the beginning of the publications in this field up until December 2021, this systematic literature review clarifies what the ethical concerns regarding AI are, and it charts them into two groups: (i) ethical concerns that (...)
  • Social robots and digital well-being: how to design future artificial agents.Matthew J. Dennis - 2021 - Mind and Society 21 (1):37-50.
    Value-sensitive design theorists propose a range of values that should inform how future social robots are engineered. This article explores a new value: digital well-being, and proposes that the next generation of social robots should be designed to facilitate this value in those who use or come into contact with these machines. To do this, I explore how the morphology of social robots is closely connected to digital well-being. I argue that a key decision is whether social robots are (...)
  • AI-powered recommender systems and the preservation of personal autonomy.Juan Ignacio del Valle & Francisco Lara - forthcoming - AI and Society:1-13.
    Recommender Systems (RecSys) have been around since the early days of the Internet, helping users navigate the vast ocean of information and the ever-expanding range of options available to us since then. The range of tasks for which one could use a RecSys is expanding as the technical capabilities grow, with the disruption of Machine Learning representing a tipping point in this domain, as in many others. However, the increase in the technical capabilities of AI-powered RecSys did not (...)
  • Mechanisms of Techno-Moral Change: A Taxonomy and Overview.John Danaher & Henrik Skaug Sætra - 2023 - Ethical Theory and Moral Practice 26 (5):763-784.
    The idea that technologies can change moral beliefs and practices is an old one. But how, exactly, does this happen? This paper builds on an emerging field of inquiry by developing a synoptic taxonomy of the mechanisms of techno-moral change. It argues that technology affects moral beliefs and practices in three main domains: decisional (how we make morally loaded decisions), relational (how we relate to others) and perceptual (how we perceive situations). It argues that across these three domains there are (...)
  • Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors.Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
  • Expanding Nallur's Landscape of Machine Implemented Ethics.William A. Bauer - 2020 - Science and Engineering Ethics 26 (5):2401-2410.
    What ethical principles should autonomous machines follow? How do we implement these principles, and how do we evaluate these implementations? These are some of the critical questions Vivek Nallur asks in his essay “Landscape of Machine Implemented Ethics” (2020). He provides a broad, insightful survey of answers to these questions, especially focused on the implementation question. In this commentary, I will first critically summarize the main themes and conclusions of Nallur’s essay and then expand upon the landscape that Nallur presents (...)
  • AI Assistants and the Paradox of Internal Automaticity.William A. Bauer & Veljko Dubljević - 2019 - Neuroethics 13 (3):303-310.
    What is the ethical impact of artificial intelligence assistants on human lives, and specifically how much do they threaten our individual autonomy? Recently, as part of forming an ethical framework for thinking about the impact of AI assistants on our lives, John Danaher claims that if the external automaticity generated by the use of AI assistants threatens our autonomy and is therefore ethically problematic, then the internal automaticity we already live with should be viewed in the same way. He takes (...)
  • Virtual Reality and the Meaning of Life.John Danaher - forthcoming - In Oxford Handbook on Meaning in Life.
    It is commonly assumed that a virtual life would be less meaningful (perhaps even meaningless). As virtual reality technologies develop and become more integrated into our everyday lives, this poses a challenge for those who care about meaning in life. In this chapter, it is argued that the common assumption about meaninglessness and virtuality is mistaken. After clarifying the distinction between two different visions of virtual reality, four arguments are presented for thinking that meaning is possible in virtual reality. Following (...)
  • AI Systems and Respect for Human Autonomy.Arto Laitinen & Otto Sahlgren - 2021 - Frontiers in Artificial Intelligence.
    This study concerns the sociotechnical bases of human autonomy. Drawing on recent literature on AI ethics, philosophical literature on dimensions of autonomy, and on independent philosophical scrutiny, we first propose a multi-dimensional model of human autonomy and then discuss how AI systems can support or hinder human autonomy. What emerges is a philosophically motivated picture of autonomy and of the normative requirements personal autonomy poses in the context of algorithmic systems. Ranging from consent to data collection and processing, to computational (...)