  1. Why algorithmic speed can be more important than algorithmic accuracy. Jakob Mainz, Lauritz Munch, Jens Christian Bjerring & Sissel Godtfredsen - 2023 - Clinical Ethics 18 (2):161-164.
    Artificial Intelligence (AI) often outperforms human doctors in terms of decisional speed. For some diseases, the expected benefit of a fast but less accurate decision exceeds the benefit of a slow but more accurate one. In such cases, we argue, it is often justified to rely on a medical AI to maximise decision speed – even if the AI is less accurate than human doctors.
  2. ChatGPT’s Relevance for Bioethics: A Novel Challenge to the Intrinsically Relational, Critical, and Reason-Giving Aspect of Healthcare. Ramón Alvarado & Nicolae Morar - 2023 - American Journal of Bioethics 23 (10):71-73.
    The rapid development of large language models (LLMs) and of their associated interfaces such as ChatGPT has brought forth a wave of epistemic and moral concerns in a variety of domains of inquiry...
  3. AI as an Epistemic Technology. Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5):1-30.
    In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (including other (...)
  4. Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  5. Epistemic injustice and data science technologies. John Symons & Ramón Alvarado - 2022 - Synthese 200 (2):1-26.
    Technologies that deploy data science methods are liable to result in epistemic harms involving the diminution of individuals with respect to their standing as knowers or their credibility as sources of testimony. Not all harms of this kind are unjust but when they are we ought to try to prevent or correct them. Epistemically unjust harms will typically intersect with other more familiar and well-studied kinds of harm that result from the design, development, and use of data science technologies. However, (...)
  6. AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such (...)
  7. AI and society: a virtue ethics approach. Mirko Farina, Petr Zhdanov, Artur Karimov & Andrea Lavazza - forthcoming - AI and Society:1-14.
    Advances in artificial intelligence and robotics stand to change many aspects of our lives, including our values. If trends continue as expected, many industries will undergo automation in the near future, calling into question whether we can still value the sense of identity and security our occupations once provided us with. Likewise, the advent of social robots driven by AI appears to be shifting the meaning of numerous, long-standing values associated with interpersonal relationships, like friendship. Furthermore, powerful actors’ and institutions’ (...)
  8. Two Reasons for Subjecting Medical AI Systems to Lower Standards than Humans. Jakob Mainz, Jens Christian Bjerring & Lauritz Munch - 2023 - Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT) 2023 1 (1):44-49.
    This paper concerns the double standard debate in the ethics of AI literature. This debate essentially revolves around the question of whether we should subject AI systems to different normative standards than humans. So far, the debate has centered around the desideratum of transparency. That is, the debate has focused on whether AI systems must be more transparent than humans in their decision-making processes in order for it to be morally permissible to use such systems. Some have argued that the (...)