Abstract
The notion of Simulative Reasoning in the study of propositional attitudes within Artificial
Intelligence (AI) is strongly related to the Simulation Theory of mental ascription in Philosophy.
Roughly speaking, when an AI system engages in Simulative Reasoning about a target agent, it
reasons with that agent’s beliefs as temporary hypotheses of its own, thereby coming to conclusions
about what the agent might conclude or might have concluded. The contrast is with non-simulative
meta-reasoning, where the AI system reasons within a detailed theory about the agent’s (conjectured) reasoning acts. The motive within AI for preferring Simulative Reasoning is that it is more
convenient and efficient, because of a simplification of the representations and reasoning processes.
The chapter discusses this advantage in detail. It also sketches the use of Simulative Reasoning
in an AI natural language processing system, ATT-Meta, that is currently being implemented.
This system is directed at the understanding of propositional attitude reports. In ATT-Meta,
Simulative Reasoning is yoked to a somewhat independent set of ideas about how attitude reports
should be treated. Central here are the claims that (a) speakers often employ commonsense (and
largely metaphorical) models of mind in describing agents’ attitudes, (b) the listener
accordingly often needs to reason within the terms of such models, rather than on the basis of any objectively
justifiable characterization of the mind, and (c) the commonsense models filter the suggestions that
Simulative Reasoning comes up with concerning target agents’ reasoning conclusions. There is an
even tighter connection between the commonsense models and Simulative Reasoning. It turns
out that Simulative Reasoning can be rationally reconstructed in terms of a more general type
of reasoning about the possibly-counterfactual “world” that the target agent believes in, together
with an assumption that the agent has a faithful representation of that world. In the ATT-Meta
approach, the reasoner adopts that assumption when it views the target agent through a particular
commonsense model (called IDEAS-AS-MODELS).