Why computers can't feel pain

Minds and Machines 19 (4):507-516 (2009)

Abstract

The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers, especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam’s monograph, “Representation & Reality” (Bradford Books, Cambridge, 1988), which, if correct, has important implications for Turing machine functionalism and the prospect of ‘conscious’ machines. In the paper, instead of seeking to develop Putnam’s claim that “everything implements every finite state automaton”, I will try to establish the weaker result that “everything implements the specific machine Q on a particular input set (x)”. Then, equating Q(x) to any putative AI program, I will show that conceding the ‘strong AI’ thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks, etc.) must instantiate conscious experience and hence that disembodied minds lurk everywhere.
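The core move described in the abstract is a realisation mapping: any open system observed over enough instants can have its physical states relabelled so that its state trajectory matches the run of a specific finite state automaton Q on a fixed input x. The sketch below is not from the paper; the machine Q, the input x, and the toy "rock states" are illustrative assumptions intended only to make the mapping step concrete.

```python
# Illustrative sketch (hypothetical names) of a Putnam-style realisation map:
# pair each observed physical state of an arbitrary system with the
# corresponding computational state of a specific FSA Q run on input x.

# The specific machine Q, given as a transition table:
# (current state, input symbol) -> next state.
Q_TRANSITIONS = {
    ("q0", "a"): "q1",
    ("q1", "b"): "q2",
    ("q2", "a"): "q0",
}
Q_START = "q0"
X = ["a", "b", "a"]  # the particular input x

def run_Q(x):
    """Return the sequence of Q's states while consuming x."""
    state, trace = Q_START, [Q_START]
    for symbol in x:
        state = Q_TRANSITIONS[(state, symbol)]
        trace.append(state)
    return trace  # ['q0', 'q1', 'q2', 'q0']

# Any open system observed at len(x)+1 instants yields a sequence of
# physical states -- arbitrary labels standing in for, say, successive
# microstates of a rock.
rock_states = ["s17", "s42", "s05", "s99"]

# The mapping step: pair the i-th physical state with the i-th
# computational state of Q's run on x.
realisation = dict(zip(rock_states, run_Q(X)))

# Under this relabelling the rock's trajectory just is Q's run on x,
# which is the sense of "implements" the argument targets.
assert [realisation[s] for s in rock_states] == run_Q(X)
print(realisation)  # {'s17': 'q0', 's42': 'q1', 's05': 'q2', 's99': 'q0'}
```

If "implementing Q(x)" in this thin sense sufficed for instantiating the mental states attributed to an AI program, the same attribution would extend to the rock, which is the panpsychist consequence the abstract draws out.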

Author's Profile

John Mark Bishop
Goldsmiths College, University of London

Analytics

Added to PP
2009-12-02

Downloads (all time)
881 (#14,827)

Downloads (last 6 months)
147 (#20,403)
