When Explanations "Cause" Error: A Look at Representations and Compressions

Abstract

We depend upon explanation in order to “make sense” of our world, and making sense is all the more important when dealing with change. But what happens if our explanations are wrong? This question is examined with respect to two types of explanatory model. Models based on labels and categories we shall refer to as “representations.” More complex models involving stories, multiple algorithms, rules of thumb, questions, and ambiguity we shall refer to as “compressions.” Both compressions and representations are reductions, but representations are far more reductive than compressions. Representations can be treated as a set of defined meanings – coherence with regard to a representation is the degree of fidelity between the item in question and the definition of the representation, of the label. By contrast, compressions contain enough degrees of freedom and ambiguity to allow us to make internal predictions so that we may determine our potential actions in the possibility space. Compressions are explanatory via mechanism; representations are explanatory via category. Managers often confuse their evocation of a representation with the creation of a context of compression. When this type of explanatory error occurs, more errors follow. In the drive for efficiency such substitutions are all too often proclaimed – at the manager’s peril.

Author's Profile

Michael Lissack
American Society for Cybernetics

Analytics

Added to PP
2012-01-10
