  • Disrupted self, therapy, and the limits of conversational AI. Dina Babushkina & Bas de Boer - forthcoming - Philosophical Psychology.
    Conversational agents (CA) are thought to be promising for psychotherapy because they give the impression of being able to engage in conversations with human users. However, given the high risk for therapy patients who are already in a vulnerable situation, there is a need to investigate the extent to which CA are able to contribute to therapy goals and to discuss CA’s limitations, especially in complex cases. In this paper, we understand psychotherapy as a way of dealing with existential situations (...)
  • What ethics can say on artificial intelligence: Insights from a systematic literature review. Francesco Vincenzo Giarmoleo, Ignacio Ferrero, Marta Rocchi & Massimiliano Matteo Pellegrini - 2024 - Business and Society Review 129 (2):258-292.
    The abundance of literature on ethical concerns regarding artificial intelligence (AI) highlights the need to systematize, integrate, and categorize existing efforts through a systematic literature review. The article aims to investigate prevalent concerns, proposed solutions, and prominent ethical approaches within the field. Considering 309 articles from the beginning of the publications in this field up until December 2021, this systematic literature review clarifies what the ethical concerns regarding AI are, and it charts them into two groups: (i) ethical concerns that (...)
  • Unintended Harms of Novel Predictive Technologies in Mental Disorder Treatment. Şerife Tekin - 2024 - American Journal of Bioethics Neuroscience 15 (1):46-48.
    Words we use to characterize mental states matter; they affect, for better or worse, the individual whose mental states are in question. For example, referring to a child whose behavior seems a bit...
  • Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent? Jana Sedlakova & Manuel Trachsel - 2023 - American Journal of Bioethics 23 (5):4-13.
    Conversational artificial intelligence (CAI) presents many opportunities in the psychotherapeutic landscape—such as therapeutic support for people with mental health problems and without access to care. The adoption of CAI poses many risks that need in-depth ethical scrutiny. The objective of this paper is to complement current research on the ethics of AI for mental health by proposing a holistic, ethical, and epistemic analysis of CAI adoption. First, we focus on the question of whether CAI is rather a tool or an (...)
  • Mind the Gaps: Ethical and Epistemic Issues in the Digital Mental Health Response to Covid-19. Joshua August Skorburg & Phoebe Friesen - 2021 - Hastings Center Report 51 (6):23-26.
    Well before the COVID-19 pandemic, proponents of digital psychiatry were touting the promise of various digital tools and techniques to revolutionize mental healthcare. As social distancing and its knock-on effects have strained existing mental health infrastructures, calls have grown louder for implementing various digital mental health solutions at scale. Decisions made today will shape the future of mental healthcare for the foreseeable future. We argue that bioethicists are uniquely positioned to cut through the hype surrounding digital mental health, which can (...)
  • Is There an App for That?: Ethical Issues in the Digital Mental Health Response to COVID-19. Joshua August Skorburg & Josephine Yam - 2022 - American Journal of Bioethics Neuroscience 13 (3):177-190.
    As COVID-19 spread, clinicians warned of mental illness epidemics within the coronavirus pandemic. Funding for digital mental health is surging, and researchers are calling for widespread adoption to address the mental health sequelae of COVID-19. We consider whether these technologies improve mental health outcomes and whether they exacerbate existing health inequalities laid bare by the pandemic. We argue the evidence for efficacy is weak and the likelihood of increasing inequalities is high. First, we review recent trends in digital (...)
  • Detecting your depression with your smartphone? – An ethical analysis of epistemic injustice in passive self-tracking apps. Mirjam Faissner, Eva Kuhn, Regina Müller & Sebastian Laacke - 2024 - Ethics and Information Technology 26 (2):1-14.
    Smartphone apps might offer a low-threshold approach to the detection of mental health conditions, such as depression. Based on the gathering of ‘passive data,’ some apps generate a user’s ‘digital phenotype,’ compare it to those of users with clinically confirmed depression and issue a warning if a depressive episode is likely. These apps can, thus, serve as epistemic tools for affected users. From an ethical perspective, it is crucial to consider epistemic injustice to promote socially responsible innovations within digital mental (...)
  • The Affective Scaffolding of Grief in the Digital Age: The Case of Deathbots. Regina E. Fabry & Mark Alfano - forthcoming - Topoi:1-13.
    Contemporary and emerging chatbots can be fine-tuned to imitate the style, tenor, and knowledge of a corpus, including the corpus of a particular individual. This makes it possible to build chatbots that imitate people who are no longer alive — deathbots. Such deathbots can be used in many ways, but one prominent way is to facilitate the process of grieving. In this paper, we present a framework that helps make sense of this process. In particular, we argue that deathbots can (...)
  • Digital Phenotyping: an Epistemic and Methodological Analysis. Simon Coghlan & Simon D’Alfonso - 2021 - Philosophy and Technology 34 (4):1905-1928.
    Some claim that digital phenotyping will revolutionize understanding of human psychology and experience and significantly promote human wellbeing. This paper investigates the nature of digital phenotyping in relation to its alleged promise. Unlike most of the literature to date on philosophy and digital phenotyping, which has focused on its ethical aspects, this paper focuses on its epistemic and methodological aspects. The paper advances a tetra-taxonomy involving four scenario types in which knowledge may be acquired from human “digitypes” by digital phenotyping. (...)
  • Justice, Vulnerable Populations, and the Use of Conversational AI in Psychotherapy. Bennett Knox, Pierce Christoffersen, Kalista Leggitt, Zeia Woodruff & Matthew H. Haber - 2023 - American Journal of Bioethics 23 (5):48-50.
    Sedlakova and Trachsel (2023) identify a major benefit of conversational artificial intelligence (CAI) in psychotherapy as its ability to expand access to mental healthcare for vulnerable populatio...
  • Conversational Artificial Intelligence and Distortions of the Psychotherapeutic Frame: Issues of Boundaries, Responsibility, and Industry Interests. Meghana Kasturi Vagwala & Rachel Asher - 2023 - American Journal of Bioethics 23 (5):28-30.
    Sedlakova and Trachsel argue that conversational artificial intelligence (CAI) is more than a mere tool, but not quite an agent, as it “simulates having a therapeutic conversation [but] does not re...
  • Countering essentialism in psychiatric narratives. Marianne D. Broeker & Sarah Arnaud - forthcoming - Philosophical Psychology.
    The practice of self-diagnosing, amplified by the spread of psychiatric knowledge through social media, has grown rapidly. Yet, the motivations behind this trend, and, critically, its psychological repercussions remain poorly understood. Self-ascribing a psychiatric label always occurs within a broader narrative context, with narratives serving as essential interpretive tools for understanding oneself and others. In this paper, we identify four principal motivators for people pursuing self-diagnosis, pertaining to 1. waiting time and cost of mental health resources, 2. recognition, 3. identity formation, (...)
  • Can Public Health Investment and Oversight save Digital Mental Health? Anita Ho - 2022 - American Journal of Bioethics Neuroscience 13 (3):201-203.