  • What kinds of groups are group agents? Jimmy Lewis-Martin - 2022 - Synthese 200 (4):1-19.
    For a group to be an agent, it must be individuated from its environment and other systems. It must, in other words, be an individual. Despite the central importance of individuality for understanding group agency, the concept has been significantly overlooked. I propose to fill this gap in our understanding of group individuality by arguing that agents are autonomous as it is commonly understood in the enactive literature. According to this autonomous individuation account, an autonomous system is one wherein the (...)
  • Can we Bridge AI’s Responsibility Gap at Will? Maximilian Kiener - 2022 - Ethical Theory and Moral Practice 25 (4):575-593.
    Artificial intelligence increasingly executes tasks that previously only humans could do, such as driving a car, fighting in war, or performing a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars have argued that humans can no longer be morally responsible for some AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of (...)
  • Autonomous Artificial Intelligence and Liability: A Comment on List. Michael Da Silva - 2022 - Philosophy and Technology 35 (2):1-6.
    Christian List argues that responsibility gaps created by viewing artificial intelligence as intentional agents are problematic enough that regulators should only permit the use of autonomous AI in high-stakes settings where AI is designed to be moral or a liability transfer agreement will fill any gaps. This work challenges List’s proposed condition. A requirement for “moral” AI is too onerous given technical challenges and other ways to check AI quality. Moreover, transfer agreements only plausibly fill responsibility gaps by applying independently (...)
  • Tragic Choices and the Virtue of Techno-Responsibility Gaps. John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace (...)
  • Techno-optimism: An Analysis, an Evaluation and a Modest Defence. John Danaher - 2022 - Philosophy and Technology 35 (2):1-29.
    What is techno-optimism and how can it be defended? Although techno-optimist views are widely espoused and critiqued, there have been few attempts to systematically analyse what it means to be a techno-optimist and how one might defend this view. This paper attempts to address this oversight by providing a comprehensive analysis and evaluation of techno-optimism. It is argued that techno-optimism is a pluralistic stance that comes in weak and strong forms. These vary along a number of key dimensions but each (...)
  • Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have demonstrated a wide capacity to take over human activities such as car driving, medical diagnosis, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question of whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
  • The Democratic Inclusion of Artificial Intelligence? Exploring the Patiency, Agency and Relational Conditions for Demos Membership. Ludvig Beckman & Jonas Hultin Rosenberg - 2022 - Philosophy and Technology 35 (2):1-24.
    Should artificial intelligences ever be included as co-authors of democratic decisions? According to the conventional view in democratic theory, the answer depends on the relationship between the political unit and the entity that is either affected or subjected to its decisions. The relational conditions for inclusion as stipulated by the all-affected and all-subjected principles determine the spatial extension of democratic inclusion. Thus, AI qualifies for democratic inclusion if and only if AI is either affected or subjected to decisions by the (...)
  • Group Responsibility. Christian List - forthcoming - In Dana Nelkin & Derk Pereboom (eds.), Oxford Handbook of Moral Responsibility. Oxford: Oxford University Press.
    Are groups ever capable of bearing responsibility, over and above their individual members? This chapter discusses and defends the view that certain organized collectives – namely, those that qualify as group moral agents – can be held responsible for their actions, and that group responsibility is not reducible to individual responsibility. This view has important implications. It supports the recognition of corporate civil and even criminal liability in our legal systems, and it suggests that, by recognizing group agents as loci (...)