  • Subjectness of Intelligence: Quantum-Theoretic Analysis and Ethical Perspective. Ilya A. Surov & Elena N. Melnikova - forthcoming - Foundations of Science.
  • Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for (...)
  • Generative AI and human–robot interaction: implications and future agenda for business, society and ethics. Bojan Obrenovic, Xiao Gu, Guoyu Wang, Danijela Godinic & Ilimdorjon Jakhongirov - forthcoming - AI and Society:1-14.
    The revolution of artificial intelligence (AI), particularly generative AI, and its implications for human–robot interaction (HRI) opened up the debate on crucial regulatory, business, societal, and ethical considerations. This paper explores essential issues from the anthropomorphic perspective, examining the complex interplay between humans and AI models in societal and corporate contexts. We provided a comprehensive review of existing literature on HRI, with a special emphasis on the impact of generative models such as ChatGPT. The scientometric study posits that due to (...)
  • The Importance of Understanding Language in Large Language Models. Alaa Youssef, Samantha Stein, Justin Clapp & David Magnus - 2023 - American Journal of Bioethics 23 (10):6-7.
    Recent advancements in large language models (LLMs) have ushered in a transformative phase in artificial intelligence (AI). Unlike conventional AI, LLMs excel in facilitating fluid human–computer d...
  • Understanding and Avoiding AI Failures: A Practical Guide. Robert Williams & Roman Yampolskiy - 2021 - Philosophies 6 (3):53.
    As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we also use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields give (...)
  • Minding the gap(s): public perceptions of AI and socio-technical imaginaries. Laura Sartori & Giulia Bocca - 2023 - AI and Society 38 (2):443-458.
    Deepening and digging into the social side of AI is a novel but emerging requirement within the AI community. Future research should invest in an “AI for people”, going beyond the undoubtedly much-needed efforts into ethics, explainability and responsible AI. The article addresses this challenge by problematizing the discussion around AI shifting the attention to individuals and their awareness, knowledge and emotional response to AI. First, we outline our main argument relative to the need for a socio-technical perspective in the (...)
  • Truth, Lies and New Weapons Technologies: Prospects for Jus in Silico? Esther D. Reed - 2022 - Studies in Christian Ethics 35 (1):68-86.
    This article tests the proposition that new weapons technology requires Christian ethics to dispense with the just war tradition (JWT) and argues for its development rather than dissolution. Those working in the JWT should be under no illusions, however, that new weapons technologies could (or do already) represent threats to the doing of justice in the theatre of war. These threats include weapons systems that deliver indiscriminate, disproportionate or otherwise unjust outcomes, or that are operated within (quasi-)legal frameworks marked by (...)
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather (...)
  • At the intersection of humanity and technology: a technofeminist intersectional critical discourse analysis of gender and race biases in the natural language processing model GPT-3. M. A. Palacios Barea, D. Boeren & J. F. Ferreira Goncalves - forthcoming - AI and Society:1-19.
    Algorithmic biases, or algorithmic unfairness, have been a topic of public and scientific scrutiny for the past years, as increasing evidence suggests the pervasive assimilation of human cognitive biases and stereotypes in such systems. This research is specifically concerned with analyzing the presence of discursive biases in the text generated by GPT-3, an NLPM which has been praised in recent years for resembling human language so closely that it is becoming difficult to differentiate between the human and the algorithm. The (...)
  • Investigating user perceptions of commercial virtual assistants: A qualitative study. Leilasadat Mirghaderi, Monika Sziron & Elisabeth Hildt - 2022 - Frontiers in Psychology 13.
    As commercial virtual assistants become an integrated part of almost every smart device that we use on a daily basis, including but not limited to smartphones, speakers, personal computers, watches, TVs, and TV sticks, there are pressing questions that call for the study of how participants perceive commercial virtual assistants and what relational roles they assign to them. Furthermore, it is crucial to study which characteristics of commercial virtual assistants are perceived as important for establishing affective interaction with commercial virtual (...)
  • Public perception of military AI in the context of techno-optimistic society. Eleri Lillemäe, Kairi Talves & Wolfgang Wagner - forthcoming - AI and Society:1-15.
    In this study, we analyse the public perception of military AI in Estonia, a techno-optimistic country with high support for science and technology. This study involved quantitative survey data from 2021 on the public’s attitudes towards AI-based technology in general, and AI in developing and using weaponised unmanned ground systems (UGS) in particular. UGS are a technology that has been tested in militaries in recent years with the expectation of increasing effectiveness and saving manpower in dangerous military tasks. However, developing (...)
  • “I Am Not Your Robot:” the metaphysical challenge of humanity’s AIS ownership. Tyler L. Jaynes - 2021 - AI and Society 37 (4):1689-1702.
    Despite the reality that self-learning artificial intelligence systems (SLAIS) are gaining in sophistication, humanity’s focus regarding SLAIS-human interactions are unnervingly centred upon transnational commercial sectors and, most generally, around issues of intellectual property law. But as SLAIS gain greater environmental interaction capabilities in digital spaces, or the ability to self-author code to drive their development as algorithmic models, a concern arises as to whether a system that displays a “deceptive” level of human-like engagement with users in our physical world ought (...)
  • The Prospects of Artificial Consciousness: Ethical Dimensions and Concerns. Elisabeth Hildt - 2023 - American Journal of Bioethics Neuroscience 14 (2):58-71.
    Can machines be conscious and what would be the ethical implications? This article gives an overview of current robotics approaches toward machine consciousness and considers factors that hamper an understanding of machine consciousness. After addressing the epistemological question of how we would know whether a machine is conscious and discussing potential advantages of potential future machine consciousness, it outlines the role of consciousness for ascribing moral status. As machine consciousness would most probably differ considerably from human consciousness, several complex questions (...)
  • On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence. Michele Farisco, Kathinka Evers & Arleen Salles - 2022 - Neuroethics 15 (1):1-12.
    Contemporary ethical analysis of Artificial Intelligence is growing rapidly. One of its most recognizable outcomes is the publication of a number of ethics guidelines that, intended to guide governmental policy, address issues raised by AI design, development, and implementation and generally present a set of recommendations. Here we propose two things: first, regarding content, since some of the applied issues raised by AI are related to fundamental questions about topics like intelligence, consciousness, and the ontological and ethical status of humans, (...)
  • Epistemic Rights and Responsibilities of Digital Simulacra for Biomedicine. Mildred K. Cho & Nicole Martinez-Martin - 2022 - American Journal of Bioethics 23 (9):43-54.
    Big data and artificial intelligence (“AI”) promise to transform virtually all aspects of biomedical research and health care (Matheny et al. 2019), through facilitation of drug development, diagno...
  • Cognitive architectures for artificial intelligence ethics. Steve J. Bickley & Benno Torgler - 2023 - AI and Society 38 (2):501-519.
    As artificial intelligence (AI) thrives and propagates through modern life, a key question to ask is how to include humans in future AI? Despite human involvement at every stage of the production process from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics. Sometimes, we do not know what really goes on inside or how and why certain conclusions are met. Future AI will face many dilemmas and ethical issues unforeseen by their (...)
  • Can Artificial Intelligence Make Art? Elzė Sigutė Mikalonytė & Markus Kneer - 2022 - ACM Transactions on Human-Robot Interaction.
    In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human painting as art to roughly the same extent. However, people are much less (...)
  • Playing the Blame Game with Robots. Markus Kneer & Michael T. Stuart - 2021 - In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI’21 Companion). New York, NY, USA.
    Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system (...)
  • Epistemic Challenges of Digital Twins & Virtual Brains: Perspectives from Fundamental Neuroethics. Kathinka Evers & Arleen Salles - 2021 - SCIO: Revista de Filosofía 21.
    In this article, we present and analyse the concept of Digital Twin linked to distinct types of objects and examine the challenges involved in creating them from a fundamental neuroethics approach that emphasises conceptual analyses. We begin by providing a brief description of DTs and their initial development as models of artefacts and physical inanimate objects, identifying core challenges in building these tools and noting their intended benefits. Next, we describe attempts to build DTs of model living entities, such as (...)