  • The Turing test. Graham Oppy & D. Dowe - 2003 - Stanford Encyclopedia of Philosophy.
    This paper provides a survey of philosophical discussion of "the Turing Test". In particular, it provides a very careful and thorough discussion of the famous 1950 paper that was published in Mind.
  • Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • Do Computers "Have Syntax, But No Semantics"? Jaroslav Peregrin - 2021 - Minds and Machines 31 (2):305-321.
    The heyday of discussions initiated by Searle's claim that computers have syntax, but no semantics has now passed, yet philosophers and scientists still tend to frame their views on artificial intelligence in terms of syntax and semantics. In this paper I do not intend to take part in these discussions; my aim is more fundamental, viz. to ask what claims about syntax and semantics in this context can mean in the first place. And I argue that their sense is so (...)
  • Black Boxes or Unflattering Mirrors? Comparative Bias in the Science of Machine Behaviour. Cameron Buckner - 2023 - British Journal for the Philosophy of Science 74 (3):681-712.
    The last 5 years have seen a series of remarkable achievements in deep-neural-network-based artificial intelligence research, and some modellers have argued that their performance compares favourably to human cognition. Critics, however, have argued that processing in deep neural networks is unlike human cognition for four reasons: they are (i) data-hungry, (ii) brittle, and (iii) inscrutable black boxes that merely (iv) reward-hack rather than learn real solutions to problems. This article rebuts these criticisms by exposing comparative bias within them, in the (...)