  • Connectionism about human agency: responsible AI and the social lifeworld.Jörg Noller - forthcoming - AI and Society:1-10.
    This paper analyzes responsible human–machine interaction concerning artificial neural networks (ANNs) and large language models (LLMs) by considering the extension of human agency and autonomy by means of artificial intelligence (AI). To do so, the paper draws on the sociological concept of “interobjectivity,” first introduced by Bruno Latour, and applies it to technologically situated and interconnected agency. Drawing on Don Ihde’s phenomenology of human-technology relations, this interobjective account of AI allows us to understand human–machine interaction as embedded in the social lifeworld. Finally, the (...)
  • Machine agency and representation.Beba Cibralic & James Mattingly - 2024 - AI and Society 39 (1):345-352.
    Theories of action tend to require agents to have mental representations. A common trope in discussions of artificial intelligence (AI) is that they do not, and so cannot be agents. Properly understood there may be something to the requirement, but the trope is badly misguided. Here we provide an account of representation for AI that is sufficient to underwrite attributions to these systems of ownership, action, and responsibility. Existing accounts of mental representation tend to be too demanding and unparsimonious. We (...)
  • (1 other version)Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  • The value of responsibility gaps in algorithmic decision-making.Lauritz Munch, Jakob Mainz & Jens Christian Bjerring - 2023 - Ethics and Information Technology 25 (1):1-11.
    Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to (...)
  • Distributed cognition and distributed morality: Agency, artifacts and systems.Richard Heersmink - 2017 - Science and Engineering Ethics 23 (2):431-448.
    There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. (...)
  • Artificial moral agents are infeasible with foreseeable technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
  • (1 other version)Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.Markus Kneer & Markus Christen - 2024 - Science and Engineering Ethics 30 (6):1-19.
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (1) people manifest a considerable willingness (...)
  • The decision-point-dilemma: Yet another problem of responsibility in human-AI interaction.Laura Crompton - 2021 - Journal of Responsible Technology 7:100013.
    AI as decision support supposedly helps human agents make ‘better’ decisions more efficiently. However, research shows that it can, sometimes greatly, influence the decisions of its human users. While there has been a fair amount of research on intended AI influence, there seem to be great gaps within both theoretical and practical studies concerning unintended AI influence. In this paper I aim to address some of these gaps, and hope to shed some light on the ethical and moral concerns that arise (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability.Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • Decentered ethics in the machine era and guidance for AI regulation.Christian Hugo Hoffmann & Benjamin Hahn - 2020 - AI and Society 35 (3):635-644.
    Recent advancements in AI have prompted a large number of AI ethics guidelines published by governments and nonprofits. While many of these papers propose concrete or seemingly applicable ideas, few philosophically sound proposals are made. In particular, we observe that the line of questioning has often not been examined critically and underlying conceptual problems not always dealt with at the root. In this paper, we investigate the nature of ethical AI systems and what their moral status might be by first (...)
  • Mind the gap: responsible robotics and the problem of responsibility.David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • Not Relational Enough? Towards an Eco-Relational Approach in Robot Ethics.Anna Puzio - 2024 - Philosophy and Technology 37 (2):1-24.
    With robots increasingly integrated into various areas of life, the question of relationships with them is gaining prominence. Are friendship and partnership with robots possible? While there is already extensive research on relationships with robots, this article critically examines whether the relationship with non-human entities is sufficiently explored on a deeper level, especially in terms of ethical concepts such as autonomy, agency, and responsibility. In robot ethics, ethical concepts and considerations often presuppose properties such as consciousness, sentience, and intelligence, which (...)
  • Agency, social relations, and order: Media sociology’s shift into the digital.Andreas Hepp - 2022 - Communications 47 (3):470-493.
    Until the end of the last century, media sociology was synonymous with the investigation of mass media as a social domain. Today, media sociology needs to address a much higher level of complexity, that is, a deeply mediatized world in which all human practices, social relations, and social order are entangled with digital media and their infrastructures. This article discusses this shift from a sociology of mass communication to the sociology of a deeply mediatized world. The principal aim of the (...)
  • There Is No Techno-Responsibility Gap.Daniel W. Tigard - 2021 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent (AI) systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the (...)
  • Automated decision-making and the problem of evil.Andrea Berber - 2023 - AI and Society:1-10.
    The intention of this paper is to point to the dilemma humanity may face in light of AI advancements. The dilemma is whether to create a world with less evil or maintain the human status of moral agents. This dilemma may arise as a consequence of using automated decision-making systems for high-stakes decisions. The use of automated decision-making bears the risk of eliminating human moral agency and autonomy and reducing humans to mere moral patients. On the other hand, it also (...)
  • Time of the End? More-Than-Human Humanism and Artificial Intelligence.Massimo Lollini - 2022 - Humanist Studies and the Digital Age 7 (1).
    The first part (“Is there a future?”), discusses the idea of the future in the context of Carl Schmitt’s vision for the spatial revolutions of modernity, and then the idea of Anthropocene, as a synonym for an environmental crisis endangering the very survival of humankind. From this point of view, the conquest of space and the colonization of Mars at the center of futuristic and technocratic visions appear to be an attempt to escape from human responsibilities on Earth. The second (...)
  • Moral Responsibility of Robots and Hybrid Agents.Raul Hakli & Pekka Mäkelä - 2019 - The Monist 102 (2):259-275.
    We study whether robots can satisfy the conditions of an agent fit to be held morally responsible, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. We employ Mele’s history-sensitive account of autonomy and responsibility to argue that even if robots were to have all the capacities required of moral agency, their history would deprive them from autonomy in a responsibility-undermining way. (...)
  • Man as ‘aggregate of data’.Sjoukje van der Meulen & Max Bruinsma - 2019 - AI and Society 34 (2):343-354.
    Since the emergence of the innovative field of artificial intelligence in the 1960s, the late Hubert Dreyfus insisted on the ontological distinction between man and machine, human and artificial intelligence. In the different editions of his classic and influential book What computers can’t do, he posits that an algorithmic machine can never fully simulate the complex functioning of the human mind—not now, nor in the future. Dreyfus’ categorical distinctions between man and machine are still relevant today, but their relation has (...)
  • “An Eye Turned into a Weapon”: a Philosophical Investigation of Remote Controlled, Automated, and Autonomous Drone Warfare.Oliver Müller - 2020 - Philosophy and Technology 34 (4):875-896.
    Military drones combine surveillance technology with missile equipment in a far-reaching way. In this article, I argue that military drones could and should be object for a philosophical investigation, referring in particular on Chamayou’s theory of the drone, who also coined the term “an eye turned into a weapon.” Focusing on issues of human self-understanding, agency, and alterity, I examine the intricate human-technology relations in the context of designing and deploying military drones. For that purpose, I am drawing on the (...)
  • Intentionality gap and preter-intentionality in generative artificial intelligence.Roberto Redaelli - forthcoming - AI and Society:1-8.
    The emergence of generative artificial intelligence, such as large language models and text-to-image models, has had a profound impact on society. The ability of these systems to simulate human capabilities such as text writing and image creation is radically redefining a wide range of practices, from artistic production to education. While there is no doubt that these innovations are beneficial to our lives, the pervasiveness of these technologies should not be underestimated, and they raise increasingly pressing ethical questions that require a (...)
  • Responsibility and Robot Ethics: A Critical Overview.Janina Loh - 2019 - Philosophies 4 (4):58.
    This paper has three concerns: first, it represents an etymological and genealogical study of the phenomenon of responsibility. Secondly, it gives an overview of the three fields of robot ethics as a philosophical discipline and discusses the fundamental questions that arise within these three fields. Thirdly, it explains how responsibility is spoken about and attributed within these three fields of robot ethics. As a philosophical paper, it presents a theoretical approach and no (...)