Citations of:

Collective Agency and Cooperation in Natural and Artificial Systems

Springer Verlag (1st ed. 2015)

  • Integrative social robotics, value-driven design, and transdisciplinarity. Johanna Seibt, Malene Flensborg Damholdt & Christina Vestergaard - 2020 - Interaction Studies 21 (1):111-144.
    Abstract: “Integrative Social Robotics” (ISR) is a new approach or general method for generating social robotics applications in a responsible and “culturally sustainable” fashion. Currently social robotics is caught in a basic difficulty we call the “triple gridlock of description, evaluation, and regulation”. We briefly recapitulate this problem and then present the core ideas of ISR in the form of five principles that should guide the development of applications in social robotics. Characteristic of ISR is to intertwine a mixed method approach (...)
    2 citations
  • Beyond congruence: evidential integration and inferring the best evolutionary scenario. Arsham Nejad Kourki - 2022 - Biology and Philosophy 37 (5):1-25.
    Molecular methods have revolutionised virtually every area of biology, and metazoan phylogenetics is no exception: molecular phylogenies, molecular clocks, comparative phylogenomics, and developmental genetics have generated a plethora of molecular data spanning numerous taxa and collectively transformed our understanding of the evolutionary history of animals, often corroborating but at times opposing results of more traditional approaches. Moreover, the diversity of methods and models within molecular phylogenetics has resulted in significant disagreement among molecular phylogenies as well as between these and earlier (...)
    1 citation
  • What kinds of groups are group agents? Jimmy Lewis-Martin - 2022 - Synthese 200 (4):1-19.
    For a group to be an agent, it must be individuated from its environment and other systems. It must, in other words, be an individual. Despite the central importance of individuality for understanding group agency, the concept has been significantly overlooked. I propose to fill this gap in our understanding of group individuality by arguing that agents are autonomous as it is commonly understood in the enactive literature. According to this autonomous individuation account, an autonomous system is one wherein the (...)
    1 citation
  • What dangers lurk in the development of emotionally competent artificial intelligence, especially regarding the trend towards sex robots? A review of Catrin Misselhorn’s most recent book. Janina Luise Samuel & André Schmiljun - 2023 - AI and Society 38 (6):2717-2721.
    The discussion around artificial empathy and its ethics is not a new one. This concept can be found in classic science fiction media such as Star Trek and Blade Runner and is also pondered on in more recent interactive media such as the video game Detroit: Become Human. In most depictions, emotions and empathy are presented as the key to being human. Misselhorn's new publication shows that these futuristic stories are becoming more and more relevant today. We must ask ourselves (...)
  • Collective forward-looking responsibility of patient advocacy organizations: conceptual and ethical analysis. Sabine Salloch, Christoph Rach & Regina Müller - 2021 - BMC Medical Ethics 22 (1):1-11.
    Background: Patient advocacy organizations (PAOs) have an increasing influence on health policy and biomedical research; therefore, questions about the specific character of their responsibility arise: Can PAOs bear moral responsibility and, if so, to whom are they responsible, for what and on which normative basis? Although the concept of responsibility in healthcare is strongly discussed, PAOs particularly have rarely been systematically analyzed as morally responsible agents. The aim of the current paper is to analyze the character of PAOs’ responsibility to provide (...)
    2 citations
  • (1 other version) Towards a new scale for assessing attitudes towards social robots. Malene Flensborg Damholdt, Christina Vestergaard, Marco Nørskov, Raul Hakli, Stefan Larsen & Johanna Seibt - 2020 - Interaction Studies 21 (1):24-56.
    Background: The surge in the development of social robots gives rise to an increased need for systematic methods of assessing attitudes towards robots. Aim: This study presents the development of a questionnaire for assessing attitudinal stance towards social robots: the ASOR. Methods: The 37-item ASOR questionnaire was developed by a task-force with members from different disciplines. It was founded on theoretical considerations of how social robots could influence five different aspects of relatedness. Results: Three hundred thirty-nine people responded to the survey. Factor analysis of the ASOR yielded (...)
    1 citation
  • A taxonomy of human–machine collaboration: capturing automation and technical autonomy. Monika Simmler & Ruth Frischknecht - 2021 - AI and Society 36 (1):239-250.
    Due to the ongoing advancements in technology, socio-technical collaboration has become increasingly prevalent. This poses challenges in terms of governance and accountability, as well as issues in various other fields. Therefore, it is crucial to familiarize decision-makers and researchers with the core of human–machine collaboration. This study introduces a taxonomy that enables identification of the very nature of human–machine interaction. A literature review has revealed that automation and technical autonomy are main parameters for describing and understanding such interaction. Both aspects (...)
    4 citations
  • A Softwaremodule for an Ethical Elder Care Robot. Design and Implementation. Catrin Misselhorn - 2019 - Ethics in Progress 10 (2):68-81.
    The development of increasingly intelligent and autonomous technologies will eventually lead to these systems having to face morally problematic situations. This is particularly true of artificial systems that are used in geriatric care environments. The goal of this article is to describe how one can approach the design of an elder care robot which is capable of moral decision-making and moral learning. A conceptual design for the development of such a system is provided and the steps that are necessary to (...)
    1 citation
  • Distributive justice as an ethical principle for autonomous vehicle behavior beyond hazard scenarios. Manuel Dietrich & Thomas H. Weisswange - 2019 - Ethics and Information Technology 21 (3):227-239.
    Through modern driver assistant systems, algorithmic decisions already have a significant impact on the behavior of vehicles in everyday traffic. This will become even more prominent in the near future considering the development of autonomous driving functionality. The need to consider ethical principles in the design of such systems is generally acknowledged. However, scope, principles and strategies for their implementations are not yet clear. Most of the current discussions concentrate on situations of unavoidable crashes in which the life of human (...)
    7 citations
  • (2 other versions) The ethics of crashes with self‐driving cars: A roadmap, II. Sven Nyholm - 2018 - Philosophy Compass 13 (7):e12506.
    Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, we need to think about who should be held responsible when self‐driving cars crash and people are injured or killed. We also need to examine what new ethical obligations might be created for car users by the safety potential of self‐driving cars. The article first considers what lessons might be learned from the growing legal literature on responsibility for crashes with (...)
    28 citations
  • (2 other versions) The ethics of crashes with self‐driving cars: A roadmap, I. Sven Nyholm - 2018 - Philosophy Compass 13 (7):e12507.
    Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, they need to be programmed for how to deal with crash scenarios. Should cars be programmed to always prioritize their owners, to minimize harm, or to respond to crashes on the basis of some other type of principle? The article first discusses whether everyone should have the same “ethics settings.” Next, the oft‐made analogy with the trolley problem is examined. Then (...)
    33 citations
  • Machine Ethics in Care: Could a Moral Avatar Enhance the Autonomy of Care-Dependent Persons? Catrin Misselhorn - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):346-359.
    It is a common view that artificial systems could play an important role in dealing with the shortage of caregivers due to demographic change. One argument to show that this is also in the interest of care-dependent persons is that artificial systems might significantly enhance user autonomy since they might stay longer in their homes. This argument presupposes that the artificial systems in question do not require permanent supervision and control by human caregivers. For this reason, they need the capacity (...)
  • Artificial systems with moral capacities? A research design and its implementation in a geriatric care system. Catrin Misselhorn - 2020 - Artificial Intelligence 278 (C):103179.
    The development of increasingly intelligent and autonomous technologies will eventually lead to these systems having to face morally problematic situations. This gave rise to the development of artificial morality, an emerging field in artificial intelligence which explores whether and how artificial systems can be furnished with moral capacities. This will have a deep impact on our lives. Yet, the methodological foundations of artificial morality are still sketchy and often far off from possible applications. One important area of application of artificial (...)
    7 citations