  1. Propositional interpretability in artificial intelligence. David J. Chalmers - manuscript
    Mechanistic interpretability is the program of explaining what AI systems are doing in terms of their internal mechanisms. I analyze some aspects of the program, along with setting out some concrete challenges and assessing progress to date. I argue for the importance of propositional interpretability, which involves interpreting a system’s mechanisms and behavior in terms of propositional attitudes: attitudes (such as belief, desire, or subjective probability) to propositions (e.g. the proposition that it is hot outside). Propositional attitudes are (...)
  2. How Can We Tell if a Machine is Conscious? Michael Tye - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This essay is concerned to show that a clear methodology exists for answering the question “How Can We Tell if a Machine is Conscious?” The methodology does not deliver certainty but rather rational preference.