  • Should We Discourage AI Extension? Epistemic Responsibility and AI. Hadeel Naeem & Julian Hauser - 2024 - Philosophy and Technology 37 (3):1-17.
    We might worry that our seamless reliance on AI systems makes us prone to adopting the strange errors that these systems commit. One proposed solution is to design AI systems so that they are not phenomenally transparent to their users. This stops cognitive extension and the automatic uptake of errors. Although we acknowledge that some aspects of AI extension are concerning, we can address these concerns without discouraging transparent employment altogether. First, we believe that the potential danger should be put (...)
  • Reliability in Machine Learning. Thomas Grote, Konstantin Genin & Emily Sullivan - 2024 - Philosophy Compass 19 (5):e12974.
    Issues of reliability are claiming center stage in the epistemology of machine learning. This paper unifies different branches of the literature and points to promising research directions, while also providing an accessible introduction to key concepts in statistics and machine learning insofar as they concern reliability.
  • Percentages and Reasons: AI Explainability and Ultimate Human Responsibility Within the Medical Field. Eva Winkler, Andreas Wabro & Markus Herrmann - 2024 - Ethics and Information Technology 26 (2):1-10.
    In current debates on the ethical implementation of AI, two demands in particular are linked: the call for explainability and the call for ultimate human responsibility. In the medical field, both are condensed into the role of one person: it is the physician to whom AI output should be explainable and who should thus bear ultimate responsibility for diagnostic or treatment decisions based on such AI output. In this article, we argue that a black box AI indeed creates a (...)
  • Can Large Language Models Help Solve the Cost Problem for the Right to Explanation? Lauritz Munch & Jens Christian Bjerring - forthcoming - Journal of Medical Ethics.
    By now a consensus has emerged that people, when subjected to high-stakes decisions by automated decision systems, have a moral right to have these decisions explained to them. However, furnishing such explanations can be costly. The right to an explanation thus creates what we call the cost problem: providing subjects of automated decisions with appropriate explanations of the grounds of those decisions can be costly for the companies and organisations that use such systems. In this paper, we explore (...)
  • Fragility, Robustness and Antifragility in Deep Learning. Chandresh Pravin, Ivan Martino, Giuseppe Nicosia & Varun Ojha - 2024 - Artificial Intelligence 327 (C):104060.