  • Why Won’t You Listen To Me? Predictive Neurotechnology and Epistemic Authority. Alessio Tacca & Frederic Gilbert - 2023 - Neuroethics 16 (3):1-12.
    From epileptic seizures to depressive symptoms, predictive neurotechnologies are used for a large range of applications. In this article we focus on advisory devices; namely, predictive neurotechnology programmed to detect specific neural events (e.g., epileptic seizure) and advise users to take necessary steps to reduce or avoid the impact of the forecasted neuroevent. Receiving advice from a predictive device is not without ethical concerns. The problem with predictive neural devices, in particular advisory ones, is the risk of seeing one’s autonomous (...)
  • Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  • Explanation and Agency: exploring the normative-epistemic landscape of the “Right to Explanation”. Esther Keymolen & Fleur Jongepier - 2022 - Ethics and Information Technology 24 (4):1-11.
    A large part of the explainable AI literature focuses on what explanations are in general, what algorithmic explainability is more specifically, and how to code these principles of explainability into AI systems. Much less attention has been devoted to the question of why algorithmic decisions and systems should be explainable and whether there ought to be a right to explanation and why. We therefore explore the normative landscape of the need for AI to be explainable and individuals having a right (...)
  • Epistemic (in)justice, social identity and the Black Box problem in patient care. Muneerah Khan & Cornelius Ewuoso - 2024 - Medicine, Health Care and Philosophy 27 (2):227-240.
    This manuscript draws on the moral norms arising from the nuanced accounts of epistemic (in)justice and social identity in relational autonomy to normatively assess and articulate the ethical problems associated with using AI in patient care in light of the Black Box problem. The article also describes how black-boxed AI may be used within the healthcare system. The manuscript highlights what needs to happen to align AI with the moral norms it draws on. Deeper thinking – from other backgrounds other (...)
  • Review of Digital Lethargy: Dispatches from an Age of Disconnection, by Tung-Hui Hu (MIT Press, 2022). [REVIEW] Siddharthiya Pillay - 2023 - Journal of Responsible Technology 15 (C):100067.
  • Justice and the Normative Standards of Explainability in Healthcare. Saskia K. Nagel, Nils Freyer & Hendrik Kempt - 2022 - Philosophy and Technology 35 (4):1-19.
    Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of (...)
  • Navigating the uncommon: challenges in applying evidence-based medicine to rare diseases and the prospects of artificial intelligence solutions. Olivia Rennie - forthcoming - Medicine, Health Care and Philosophy:1-16.
    The study of rare diseases has long been an area of challenge for medical researchers, with agonizingly slow movement towards improved understanding of pathophysiology and treatments compared with more common illnesses. The push towards evidence-based medicine (EBM), which prioritizes certain types of evidence over others, poses a particular issue when mapped onto rare diseases, which may not be feasibly investigated using the methodologies endorsed by EBM, due to a number of constraints. While other trial designs have been suggested to overcome (...)