  • Jaz u odgovornosti u informatičkoj eri. Jelena Mijić - 2023 - Društvo I Politika 4 (4):25-38.
    We ascribe responsibility with the intention of achieving some goal. One of the commonplaces in the philosophical literature is that we can ascribe moral responsibility to a person if at least two conditions are met: the agent must have control over their actions, and they must be able to give reasons in favor of what they did. However, the fourth industrial revolution is characterized by socio-technological phenomena that potentially confront us with the so-called responsibility gap problem. Debates about responsibility in the context of artificial intelligence are marked by an unclear and indeterminate use of this concept. In order to (...)
  • A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot Harm. Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  • Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have demonstrated a broad capacity to take over human-related activities such as car driving, medical diagnosis, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question of whether highly autonomous AI may qualify as a morally responsible agent. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
    5 citations
  • Computing and moral responsibility. Merel Noorman - forthcoming - Stanford Encyclopedia of Philosophy.
    18 citations
  • Computing and moral responsibility. Kari Gwen Coleman - 2008 - Stanford Encyclopedia of Philosophy.
    9 citations
  • (1 other version) Correction to: The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-2.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy of some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
    1 citation
  • There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2020 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the optimists (...)
    36 citations
  • Socially responsive technologies: toward a co-developmental path. Daniel W. Tigard, Niël H. Conradie & Saskia K. Nagel - 2020 - AI and Society 35 (4):885-893.
    Robotic and artificially intelligent (AI) systems are becoming prevalent in our day-to-day lives. As human interaction is increasingly replaced by human–computer and human–robot interaction (HCI and HRI), we occasionally speak and act as though we are blaming or praising various technological devices. While such responses may arise naturally, they are still unusual. Indeed, for some authors, it is the programmers or users—and not the system itself—that we properly hold responsible in these cases. Furthermore, some argue that since directing blame or (...)
    2 citations
  • Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals’ preferences and concerns. Florian Funer, Wenke Liedtke, Sara Tinnemeyer, Andrea Diana Klausen, Diana Schneider, Helena U. Zacharias, Martin Langanke & Sabine Salloch - 2024 - Journal of Medical Ethics 50 (1):6-11.
    Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals’ attitudes to potential changes of responsibility and decision-making (...)
    1 citation
  • Toward Relational Diversity for AI in Psychotherapy. Daniel W. Tigard - 2023 - American Journal of Bioethics 23 (5):64-66.
    It is an understatement to say we live in an exciting time considering the increasingly widespread applications of artificial intelligence (AI). This observation is brought to the fore by Sedlakova...
  • (1 other version) The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-22.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy of some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
    7 citations
  • Artificial Intelligence in medicine: reshaping the face of medical practice. Max Tretter, David Samhammer & Peter Dabrock - 2023 - Ethik in der Medizin 36 (1):7-29.
    Background The use of Artificial Intelligence (AI) has the potential to provide relief in the challenging and often stressful clinical setting for physicians. So far, however, the actual changes in work for physicians remain a prediction for the future, including new demands on the social level of medical practice. Thus, the question of how the requirements for physicians will change due to the implementation of AI is addressed. Methods The question is approached through conceptual considerations based on the potentials that (...)
    1 citation
  • Hiperética artificial: crítica a la colonización algorítmica de lo moral. Patrici Calvo - 2022 - Revista de Filosofía (Madrid) 49 (1):1-21.
    This study aims to reflect critically on the possibility of a datafied, hyperconnected, and algorithmized approach to the clarification, grounding, and application of morality: artificial hyperethics. To this end, ethics is presented as a practical form of knowledge which, concerned with the rationalization of free behaviour, has found in the dialogue among those affected the criterion of morality from which both knowledge and behaviour can be criticized. The study then examines ethification, the attempt to establish processes of transformation of (...)