  1. Artificial Intelligence, Social Media and Depression. A New Concept of Health-Related Digital Autonomy. Sebastian Laacke, Regina Mueller, Georg Schomerus & Sabine Salloch - 2021 - American Journal of Bioethics 21 (7): 4-20.
    The development of artificial intelligence (AI) in medicine raises fundamental ethical issues. As one example, AI systems in the field of mental health successfully detect signs of mental disorders, such as depression, by using data from social media. These AI depression detectors (AIDDs) identify users who are at risk of depression prior to any contact with the healthcare system. The article focuses on the ethical implications of AIDDs regarding affected users’ health-related autonomy. Firstly, it presents the (ethical) discussion of AI (...)
  2. Forced to be free? Increasing patient autonomy by constraining it. Neil Levy - 2014 - Journal of Medical Ethics 40 (5): 293-300.
    It is universally accepted in bioethics that doctors and other medical professionals have an obligation to procure the informed consent of their patients. Informed consent is required because patients have the moral right to autonomy in furthering the pursuit of their most important goals. In the present work, it is argued that evidence from psychology shows that human beings are subject to a number of biases and limitations as reasoners, which can be expected to lower the quality of their decisions (...)
  3. Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3): 165-174.
    Artificial intelligence (AI) systems are quickly gaining ground in healthcare and clinical decision-making. However, it is still unclear in what way AI can or should support decision-making that is based on incapacitated patients’ values and goals of care, which often requires input from clinicians and loved ones. Although the use of algorithms to predict patients’ most likely preferred treatment has been discussed in the medical ethics literature, no example has been realised in clinical practice. This is due, arguably, to the (...)