Mathematicians distinguish between proofs that explain their results and those that merely prove. This paper explores the nature of explanatory proofs, their role in mathematical practice, and some of the reasons why philosophers should care about them. Among the questions addressed are the following: what kinds of proofs are generally explanatory (or not)? What makes a proof explanatory? Do all mathematical explanations involve proof in an essential way? Are there really such things as explanatory proofs, and if so, how do they relate to the sorts of explanation encountered in philosophy of science and metaphysics?
In this paper, I propose that applying the methods of data science to “the problem of whether mathematical explanations occur within mathematics itself” (Mancosu 2018) might be a fruitful way to shed new light on the problem. By carefully selecting indicator words for explanation and justification, and then systematically searching for these indicators in databases of scholarly works in mathematics, we can get an idea of how mathematicians use these terms in mathematical practice and with what frequency. The results of this empirical study suggest that mathematical explanations do occur in research articles published in mathematics journals, as indicated by the occurrence of explanation indicators. When compared with the use of justification indicators, however, the data suggest that justifications occur much more frequently than explanations in scholarly mathematical practice. The results also suggest that justificatory proofs occur much more frequently than explanatory proofs, thus suggesting that proof may be playing a larger justificatory role than an explanatory role in scholarly mathematical practice.
This essay addresses one of the open questions of proof-theoretic semantics: how to understand the semantic values of atomic sentences. I embed a revised version of the explanatory proof system of Millson and Straßer (2019) into the proof-theoretic semantics of Francez (2015) and show how to specify (part of) the intended interpretation of atomic sentences on the basis of their occurrences in the premises and conclusions of inferences to and from best explanations.
Philosophers are divided on whether the proof- or truth-theoretic approach to logic is more fruitful. The paper demonstrates the considerable explanatory power of a truth-based approach to logic by showing that and how it can provide (i) an explanatory characterization—both semantic and proof-theoretical—of logical inference, (ii) an explanatory criterion for logical constants and operators, (iii) an explanatory account of logic’s role (function) in knowledge, as well as explanations of (iv) the characteristic features of logic—formality, strong modal force, generality, topic neutrality, basicness, and (quasi-)apriority, (v) the veridicality of logic and its applicability to science, (vi) the normativity of logic, (vii) error, revision, and expansion in/of logic, and (viii) the relation between logic and mathematics. The high explanatory power of the truth-theoretic approach does not rule out an equal or even higher explanatory power of the proof-theoretic approach. But to the extent that the truth-theoretic approach is shown to be highly explanatory, it sets a standard for other approaches to logic, including the proof-theoretic approach.
Edward Feser defends the ‘Aristotelian proof’ for the existence of God, which reasons that the only adequate explanation of the existence of change is in terms of an unchangeable, purely actual being. His argument, however, relies on the falsity of the Existential Inertia Thesis, according to which concrete objects tend to persist in existence without requiring an existential sustaining cause. In this article, I first characterize the dialectical context of Feser’s Aristotelian proof, paying special attention to EIT and its rival thesis—the Existential Expiration Thesis. Next, I provide a more precise characterization of EIT, after which I outline two metaphysical accounts of existential inertia. I then develop new lines of reasoning in favor of EIT that appeal to the theoretical virtues of explanatory power and simplicity. Finally, I address the predominant criticisms of EIT in the literature.
Gauss’s quadratic reciprocity theorem is among the most important results in the history of number theory. It’s also among the most mysterious: since its discovery in the late 18th century, mathematicians have regarded reciprocity as a deeply surprising fact in need of explanation. Intriguingly, though, there’s little agreement on how the theorem is best explained. Two quite different kinds of proof are most often praised as explanatory: an elementary argument that gives the theorem an intuitive geometric interpretation, due to Gauss and Eisenstein, and a sophisticated proof using algebraic number theory, due to Hilbert. Philosophers have yet to look carefully at such explanatory disagreements in mathematics. I do so here. According to the view I defend, there are two important explanatory virtues—depth and transparency—which different proofs (and other potential explanations) possess to different degrees. Although not mutually exclusive in principle, the packages of features associated with the two stand in some tension with one another, so that very deep explanations are rarely transparent, and vice versa. After developing the theory of depth and transparency and applying it to the case of quadratic reciprocity, I draw some morals about the nature of mathematical explanation.
According to a widespread view in metaphysics and philosophy of science, all explanations involve relations of ontic dependence between the items appearing in the explanandum and the items appearing in the explanans. I argue that a family of mathematical cases, which I call “viewing-as explanations”, are incompatible with the Dependence Thesis. These cases, I claim, feature genuine explanations that aren’t supported by ontic dependence relations. Hence the thesis isn’t true in general. The first part of the paper defends this claim and discusses its significance. The second part of the paper considers whether viewing-as explanations occur in the empirical sciences, focusing on the case of so-called fictional models. It’s sometimes suggested that fictional models can be explanatory even though they fail to represent actual worldly dependence relations. Whether or not such models explain, I suggest, depends on whether we think scientific explanations necessarily give information relevant to intervention and control. Finally, I argue that counterfactual approaches to explanation also have trouble accommodating viewing-as cases.
In this paper I argue for an association between impurity and explanatory power in contemporary mathematics. This proposal is defended against the ancient and influential idea that purity and explanation go hand-in-hand (Aristotle, Bolzano) and recent suggestions that purity/impurity ascriptions and explanatory power are more or less distinct (Section 1). This is done by analyzing a central and deep result of additive number theory, Szemerédi’s theorem, and various of its proofs (Section 2). In particular, I focus upon the radically impure (ergodic) proof due to Furstenberg (Section 3). Furstenberg’s ergodic proof is striking because it utilizes intuitively foreign and infinitary resources to prove a finitary combinatorial result and does so in a perspicuous fashion. I claim that Furstenberg’s proof is explanatory in light of its clear expression of a crucial structural result, which provides the “reason why” Szemerédi’s theorem is true. This is, however, rather surprising: how can such intuitively different conceptual resources “get a grip on” the theorem to be proved? I account for this phenomenon by articulating a new construal of the content of a mathematical statement, which I call structural content (Section 4). I argue that the availability of structural content saves intuitive epistemic distinctions made in mathematical practice and simultaneously explicates the intervention of surprising and explanatorily rich conceptual resources. Structural content also disarms general arguments for thinking that impurity and explanatory power might come apart. Finally, I sketch a proposal that, once structural content is in hand, impure resources lead to explanatory proofs via suitably understood varieties of simplification and unification (Section 5).
In a recent article forthcoming in *Mind*, Leech (2020) presents a challenge for essentialist accounts of metaphysical modality: why should it be that essences imply corresponding necessities? Leech’s main focus is to argue that one cannot overcome the challenge by utilizing an account of essence in terms of generalized identity due to Correia and Skiles (2019), on pain of circularity. In this reply, we will show how to use identity-based essentialism to bridge ‘epistemic’ and ‘explanatory’ understandings of this alleged essence-to-necessity gap without circularity, Leech’s arguments notwithstanding. We do so by first presenting a novel proof that generalized identities imply corresponding necessities. We then propose several substantive identity-based explanations of how it is, exactly, that essences imply necessities.
What Bar-On and Simmons call 'Conceptual Deflationism' is the thesis that truth is a 'thin' concept in the sense that it is not suited to play any explanatory role in our scientific theorizing. One obvious place it might play such a role is in semantics, so disquotationalists have been widely concerned to argue that 'compositional principles', such as

(C) A conjunction is true iff its conjuncts are true

are ultimately quite trivial and, more generally, that semantic theorists have misconceived the relation between truth, meaning, and logic. This paper argues, to the contrary, that even such simple compositional principles as (C) have substantial content that cannot be captured by deflationist 'proofs' of them. The key thought is that (C) is supposed, among other things, to affirm the truth-functionality of conjunction and that disquotationalists cannot, ultimately, make sense of truth-functionality. This paper is something of a companion to "The Logical Strength of Compositional Principles".
While there has been much discussion about what makes some mathematical proofs more explanatory than others, and what are mathematical coincidences, in this article I explore the distinct phenomenon of mathematical facts that call for explanation. The existence of mathematical facts that call for explanation stands in tension with virtually all existing accounts of “calling for explanation”, which imply that necessary facts cannot call for explanation. In this paper I explore what theoretical revisions are needed in order to accommodate this phenomenon. One of the important upshots is that, contrary to the current consensus, low prior probability is not a necessary condition for calling for explanation. In the final section I explain how the results of this inquiry help us make progress in assessing Hartry Field’s style of reliability argument against mathematical Platonism and against robust realism in other domains of necessary facts, such as ethics.
Karl Popper’s “Indeterminism in Quantum Physics and in Classical Physics” suffers unjust neglect. He judged determinism false: the future is open. In principle, replacing Laplace's variant of predetermination with predictable predetermination renders “scientific” determinism scientific and so refutable. Popper claimed that he had refuted it. Now a metaphysical system may have an extension—in the mathematical sense—that may render it explanatory and testable. If it exists, then it is not unique but has many alternative extensions. Popper’s proof is then inconclusive.
In this chapter we use methods of corpus linguistics to investigate the ways in which mathematicians describe their work as explanatory in their research papers. We analyse use of the words explain/explanation (and various related words and expressions) in a large corpus of texts containing research papers in mathematics and in physical sciences, comparing this with their use in corpora of general, day-to-day English. We find that although mathematicians do use this family of words, such use is considerably less prevalent in mathematics papers than in physics papers or in general English. Furthermore, we find that the proportion with which mathematicians use expressions related to ‘explaining why’ and ‘explaining how’ is significantly different to the equivalent proportion in physics and in general English. We discuss possible accounts for these differences.
For more than one decade, Andy Clark has defended the now-famous extended mind thesis, the idea that cognitive processes leak into the world. In this paper I analyse Clark’s theoretical justification for the thesis: explanatory simplicity. I argue that his way of justifying the thesis leads into contradiction, either at the level of propositional attitude ascriptions or at the theoretical level. I evaluate three possible strategies of dealing with this issue, concluding that they are all likely to fail and that therefore, as regards explanatory simplicity, the burden of proof is on Clark’s side. The paper divides into two main sections: in “Simplicity and Coherence”, I define the two concepts that are important in this context (simplicity and explanatory coherence). In “How to Cope with Coherence”, these two concepts are applied to the central thought experiment, the Inga/Otto case. It will be shown that justifying the extended mind thesis by reference to simplicity may cause trouble, because ‘extended’ behavioural descriptions are likely to yield rather complicated explanations.
The paper examines Posterior Analytics II 11, 94a20-36 and makes three points. (1) The confusing formula ‘given what things, is it necessary for this to be’ [τίνων ὄντων ἀνάγκη τοῦτ᾿ εἶναι] at a21-22 introduces material cause, not syllogistic necessity. (2) When biological material necessitation is the only causal factor, Aristotle is reluctant to formalize it in syllogistic terms, and this helps to explain why, in II 11, he turns to geometry in order to illustrate a kind of material cause that can be expressed as the middle term of an explanatory syllogism. (3) If geometrical proof is viewed as a complex construction built on simpler constructions, it can in effect be described as a case of purely material constitution.
The history of the relationship between Christian theology and the natural sciences has been conditioned by the initial decision of the masters of the "first scientific revolution" to disregard any necessary explanatory premiss to account for the constituting organization and the framing of naturally occurring entities. Not paying any attention to hierarchical control, they ended up disseminating a vision and understanding in which it was no longer possible for a theology of nature to send questions in the direction of the experimental sciences, as was done in the past between theology and many philosophically-based thought-systems. Presenting the history of some hinge-periods in the development of the Western-world sciences, this book first sets out to consider the conceptual revolution which has, in the 20th Century, related consciousness, physical laws and levels of organization, in order to show that a new chance existed then for theology. This discourse was invited to revise its language to open it up to the quest for meaning which we find on the periphery of the project of the experimental sciences. The Century-old reflection on the foundations of probability had prepared the ground for the introduction of the concept of information, at first linked to an effort aimed at maximizing the efficiency of electromagnetic communications. Taking the full measure of the questions that information theory poses to the biological sciences, this work attempts to identify the areas of convergence setting the stage for general systems theory, while it also tries to identify the insufficiencies of this recent vision and to highlight the questions left unanswered.
Re-reading some of the traditional proofs of God's existence from the order of the world, relying on some pioneering insights of Ludwig von Bertalanffy and Norbert Wiener, the author brings those proofs and insights in contact with the fascinating initial project of cybernetics and the elements of a "mythical" nature which, from its inception, it could never entirely eliminate. This book ends with the confrontation between the conceptually most extended regulation factors in the history of Western thought. It articulates the poetic utopia concerned with an immediate grasp of the world in its "deictic" character with the concurrent one aimed at the domination over matter and energy expressed by technology's driving rational utopia.
Are there arguments in mathematics? Are there explanations in mathematics? Are there any connections between argument, proof and explanation? Highly controversial answers and arguments are reviewed. The main point is that in the case of a mathematical proof, the pragmatic criterion used to make a distinction between argument and explanation is likely to be insufficient: one may grant the conclusion of a proof and yet continue to think that the proof is not explanatory.
Explanatory pluralism holds that the sorts of comprehensive theoretical and ontological economies, which microreductionists and New Wave reductionists envision and which antireductionists fear, offer misleading views of both scientific practice and scientific progress. Both advocates and foes of employing reductionist strategies at the interface of psychology and neuroscience have overplayed the alleged economies that interlevel connections (including identities) justify while overlooking their fundamental role in promoting scientific research. A brief review of research on visual processing provides support for the explanatory pluralist's general model of cross-scientific relations and discloses the valuable heuristic role hypothetical identities play in cross-scientific research. That model also supplies grounds for hesitation about the correlation objection to the psycho-physical identity theory and complaints about an explanatory gap in physicalist accounts of consciousness. These takes on psycho-neural connections miss both the sorts of considerations that motivate hypothetical identities in science and their fundamental contribution to progressive research. Thus, their focus on the contributions of research at multiple levels of analysis does not bar explanatory pluralists from considering heuristic identity theory (HIT). Arguably, it encourages it.
A number of philosophers have recently suggested that some abstract, plausibly non-causal and/or mathematical, explanations explain in a way that is radically different from the way causal explanations explain. Namely, while causal explanations explain by providing information about causal dependence, allegedly some abstract explanations explain in a way tied to the independence of the explanandum from the microdetails, or causal laws, for example. We oppose this recent trend to regard abstractions as explanatory in some sui generis way, and argue that a prominent account of causal explanation can be naturally extended to capture explanations that radically abstract away from microphysical and causal-nomological details. To this end, we distinguish different senses in which an explanation can be more or less abstract, and analyse the connection between explanations’ abstractness and their explanatory power. According to our analysis abstract explanations have much in common with counterfactual causal explanations.
Courtesy of its free energy formulation, the hierarchical predictive processing theory of the brain (PTB) is often claimed to be a grand unifying theory. To test this claim, we examine a central case: activity of mesocorticolimbic dopaminergic (DA) systems. After reviewing the three most prominent hypotheses of DA activity—the anhedonia, incentive salience, and reward prediction error hypotheses—we conclude that the evidence currently vindicates explanatory pluralism. This vindication implies that the grand unifying claims of advocates of PTB are unwarranted. More generally, we suggest that the form of scientific progress in the cognitive sciences is unlikely to be a single overarching grand unifying theory.
This paper develops and motivates a unification theory of metaphysical explanation, or as I will call it, Metaphysical Unificationism. The theory’s main inspiration is the unification account of scientific explanation, according to which explanatoriness is a holistic feature of theories that derive a large number of explananda from a meager set of explanantia, using a small number of argument patterns. In developing Metaphysical Unificationism, I will point out that it has a number of interesting consequences. The view offers a novel conception of metaphysical explanation that doesn’t rely on the notion of a “determinative” or “explanatory” relation; it allows us to draw a principled distinction between metaphysical and scientific explanations; it implies that naturalness and fundamentality are distinct but intimately related notions; and perhaps most importantly, it re-establishes the unduly neglected link between explanation and understanding in the metaphysical realm. A number of objections can be raised against the view, but I will argue that none of these is conclusive. The upshot is that Metaphysical Unificationism provides a powerful and hitherto overlooked alternative to extant theories of metaphysical explanation.
There are several important arguments in metaethics that rely on explanatory considerations. Gilbert Harman has presented a challenge to the existence of moral facts that depends on the claim that the best explanation of our moral beliefs does not involve moral facts. The Reliability Challenge against moral realism depends on the claim that moral realism is incompatible with there being a satisfying explanation of our reliability about moral truths. The purpose of this chapter is to examine these and related arguments. In particular, this chapter will discuss four kinds of arguments – Harman’s Challenge, evolutionary debunking arguments, irrelevant influence arguments, and the Reliability Challenge – understood as arguments against moral realism. The main goals of this chapter are (i) to articulate the strongest version of these arguments; (ii) to present and assess the central epistemological principles underlying these arguments; and (iii) to determine what a realist would have to do to adequately respond to these arguments.
The coherence of independent reports provides a strong reason to believe that the reports are true. This plausible claim has come under attack from recent work in Bayesian epistemology. This work shows that, under certain probabilistic conditions, coherence cannot increase the probability of the target claim. These theorems are taken to demonstrate that epistemic coherentism is untenable. To date no one has investigated how these results bear on different conceptions of coherence. I investigate this situation using Thagard’s ECHO model of explanatory coherence. Thagard’s ECHO model provides a natural representation of the evidential significance of multiple independent reports.
In this note, I discuss David Enoch's influential deliberative indispensability argument for metanormative realism, and contend that the argument fails. In doing so, I uncover an important disanalogy between explanatory indispensability arguments and deliberative indispensability arguments, one that explains how we could accept the former without accepting the latter.
Which rules for aggregating judgments on logically connected propositions are manipulable and which not? In this paper, we introduce a preference-free concept of non-manipulability and contrast it with a preference-theoretic concept of strategy-proofness. We characterize all non-manipulable and all strategy-proof judgment aggregation rules and prove an impossibility theorem similar to the Gibbard–Satterthwaite theorem. We also discuss weaker forms of non-manipulability and strategy-proofness. Comparing two frequently discussed aggregation rules, we show that “conclusion-based voting” is less vulnerable to manipulation than “premise-based voting”, which is strategy-proof only for “reason-oriented” individuals. Surprisingly, for “outcome-oriented” individuals, the two rules are strategically equivalent, generating identical judgments in equilibrium. Our results introduce game-theoretic considerations into judgment aggregation and have implications for debates on deliberative democracy.
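The contrast between premise-based and conclusion-based voting can be illustrated with the classic discursive dilemma, in which three individuals judge two premises p, q and the conclusion c = p ∧ q. The following toy sketch (my construction for illustration; the function names and profile are not from the paper) shows how the two rules can diverge on one and the same profile of individually consistent judgments:

```python
# Toy illustration of premise-based vs conclusion-based judgment aggregation
# on the discursive dilemma: three individuals judge premises p, q and the
# conclusion c = p and q. Each individual's judgment set is consistent.

def premise_based(profile):
    """Take a majority vote on each premise, then derive c logically."""
    n = len(profile)
    p = sum(j["p"] for j in profile) > n / 2
    q = sum(j["q"] for j in profile) > n / 2
    return {"p": p, "q": q, "c": p and q}

def conclusion_based(profile):
    """Take a majority vote on the conclusion only; premises stay undecided."""
    n = len(profile)
    c = sum(j["c"] for j in profile) > n / 2
    return {"c": c}

# A classic discursive-dilemma profile: each judge is individually consistent.
judges = [
    {"p": True,  "q": True,  "c": True},
    {"p": True,  "q": False, "c": False},
    {"p": False, "q": True,  "c": False},
]

print(premise_based(judges))     # majorities accept both premises, so c holds
print(conclusion_based(judges))  # a majority rejects c, so c fails
```

The divergence on this single profile is what opens the door to strategic behavior: an individual who cares about the outcome on c may have an incentive to misrepresent a premise judgment under one rule but not the other.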
In science and everyday life, we often infer that something is true because it would explain some set of facts better than any other hypothesis we can think of. But what if we have reason to believe that there is a better way to explain these facts that we just haven't thought of? Wouldn't that undermine our warrant for believing the best available explanation? Many philosophers have assumed that we can solve such underconsideration problems by stipulating that a hypothesis should not only be 'the best' explanation available; rather, it should also be 'good enough'. Unfortunately, however, the only current suggestion for what it might mean to say that an explanation is 'good enough' is, well, not good enough. This paper aims to provide a better account of what is required for an explanatory hypothesis to be considered 'good enough'. In brief, the account holds that a 'good enough' hypothesis is one that has gone through a process that I call explanatory consolidation, in which accumulating evidence and failed attempts to formulate better alternatives gradually make it more plausible that the explanation we currently have is better than any other that could be formulated.
In this paper, we do three things. First, we put forth a novel hypothesis about judgments of moral responsibility according to which such judgments are a species of explanatory judgments. Second, we argue that this hypothesis explains both some general features of everyday thinking about responsibility and the appeal of skeptical arguments against moral responsibility. Finally, we argue that, if correct, the hypothesis provides a defense against these skeptical arguments.
In the world of philosophy of science, the dominant theory of confirmation is Bayesian. In the wider philosophical world, the idea of inference to the best explanation exerts a considerable influence. Here we place the two worlds in collision, using Bayesian confirmation theory to argue that explanatoriness is evidentially irrelevant.
Smith argues that, unlike other forms of evidence, naked statistical evidence fails to satisfy normic support. This is his solution to the puzzles of statistical evidence in legal proof. This paper focuses on Smith’s claim that DNA evidence in cold-hit cases does not satisfy normic support. I argue that if this claim is correct, virtually no other form of evidence used at trial can satisfy normic support. This is troublesome. I discuss a few ways in which Smith can respond.
We argue elsewhere that explanatoriness is evidentially irrelevant. Let H be some hypothesis, O some observation, and E the proposition that H would explain O if H and O were true. Then O screens-off E from H: Pr(H | O & E) = Pr(H | O). This thesis, hereafter “SOT”, is defended by appeal to a representative case. The case concerns smoking and lung cancer. McCain and Poston grant that SOT holds in cases, like our case concerning smoking and lung cancer, that involve frequency data. However, McCain and Poston contend that there is a wider sense of evidential relevance—wider than the sense at play in SOT—on which explanatoriness is evidentially relevant even in cases involving frequency data. This is their main point, but they also contend that SOT does not hold in certain cases not involving frequency data. We reply to each of these points and conclude with some general remarks on screening-off as a test of evidential relevance.
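The screening-off condition can be checked numerically on a toy model. The following sketch (my construction; the particular probability values are arbitrary assumptions, not from the paper) builds a joint distribution over H, O, E in which E is conditionally independent of H given O, and verifies that Pr(H | O & E) = Pr(H | O):

```python
# Toy numerical check of screening-off: if E is conditionally independent of
# H given O, then Pr(H | O and E) = Pr(H | O), i.e. O screens off E from H.
from itertools import product

pr_H = 0.3
pr_O_given_H = {True: 0.8, False: 0.2}   # Pr(O | H), Pr(O | not-H)
pr_E_given_O = {True: 0.9, False: 0.1}   # Pr(E | O), same whatever H is

# Joint probability of each truth-value assignment to (H, O, E).
joint = {}
for h, o, e in product([True, False], repeat=3):
    p = pr_H if h else 1 - pr_H
    p *= pr_O_given_H[h] if o else 1 - pr_O_given_H[h]
    p *= pr_E_given_O[o] if e else 1 - pr_E_given_O[o]
    joint[(h, o, e)] = p

def pr(pred):
    """Probability of the event picked out by a predicate on (H, O, E)."""
    return sum(p for k, p in joint.items() if pred(*k))

pr_H_given_OE = pr(lambda h, o, e: h and o and e) / pr(lambda h, o, e: o and e)
pr_H_given_O = pr(lambda h, o, e: h and o) / pr(lambda h, o, e: o)
assert abs(pr_H_given_OE - pr_H_given_O) < 1e-12  # screening-off holds
```

In this construction learning E on top of O leaves the probability of H unchanged, which is exactly what SOT asserts of explanatoriness claims.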
The proof theory of many-valued systems has not been investigated to an extent comparable to the work done on axiomatizability of many-valued logics. Proof theory requires appropriate formalisms, such as sequent calculus, natural deduction, and tableaux for classical (and intuitionistic) logic. One particular method for systematically obtaining calculi for all finite-valued logics was invented independently by several researchers, with slight variations in design and presentation. The main aim of this report is to develop the proof theory of finite-valued first order logics in a general way, and to present some of the more important results in this area. Systems covered are the resolution calculus, sequent calculus, tableaux, and natural deduction. This report is actually a template, from which all results can be specialized to particular logics.
The value of optimality modeling has long been a source of contention amongst population biologists. Here I present a view of the optimality approach as at once playing a crucial explanatory role and yet also depending on external sources of confirmation. Optimality models are not alone in facing this tension between their explanatory value and their dependence on other approaches; I suspect that the scenario is quite common in science. This investigation of the optimality approach thus serves as a case study, on the basis of which I suggest that there is a widely felt tension in science between explanatory independence and broad epistemic interdependence, and that this tension influences scientific methodology.
This paper discusses proof-theoretic semantics, the project of specifying the meanings of the logical constants in terms of rules of inference governing them. I concentrate on Michael Dummett’s and Dag Prawitz’ philosophical motivations and give precise characterisations of the crucial notions of harmony and stability, placed in the context of proving normalisation results in systems of natural deduction. I point out a problem for defining the meaning of negation in this framework and prospects for an account of the meanings of modal operators in terms of rules of inference.
Starting from the plurality of explanatory strategies in the actual practice of social scientists, I introduce a framework for explanatory pluralism – a normative endorsement of the plurality of forms and levels of explanation used by social scientists. Equipped with this framework, central issues in the individualism/holism debate are revisited, namely emergence, reduction and the idea of microfoundations. Discussing these issues, we notice that in recent contributions the focus has been shifting towards relationism, pluralism and interaction, away from dichotomous individualism/holism thinking and a winner-takes-all approach. Then, the challenge of the debate is no longer to develop the ultimate individualistic approach or defend the holist approach, but rather how to combine individualism and holism; how can they co-exist, interact, be integrated or develop some division of labour, while making the best out of the strengths and limitations of the respective explanatory strategies of holists and individualists? Thus, the debate shifts to how exactly pluralism should be understood as the next leading question, going beyond the current individualism/holism debate. The paper ends with a discussion and evaluation of different understandings of explanatory pluralism defended in the literature.
The claim defended in the paper is that the mechanistic account of explanation can easily embrace idealization in big-scale brain simulations, and that only causally relevant detail should be present in explanatory models. The claim is illustrated with two methodologically different models: Blue Brain, used for particular simulations of the cortical column in hybrid models, and Eliasmith’s SPAUN model, which is both biologically realistic and able to explain eight different tasks. By drawing on the mechanistic theory of computational explanation, I argue that large-scale simulations require that the explanandum phenomenon is identified; otherwise, the explanatory value of such simulations is difficult to establish, and testing the model empirically by comparing its behavior with the explanandum remains practically impossible. The completeness of the explanation, and hence the explanatory value of the model, is to be assessed vis-à-vis the explanandum phenomenon, which is not to be conflated with raw observational data and may be idealized. I argue that idealizations, which include building models of a single phenomenon displayed by multi-functional mechanisms, lumping together multiple factors in a single causal variable, simplifying the causal structure of the mechanisms, and multi-model integration, are indispensable for complex systems such as brains; otherwise, the model may be as complex as the explanandum phenomenon, which would make it prone to the so-called Bonini paradox. I conclude by enumerating dimensions of empirical validation of explanatory models according to new mechanism, given in the form of a “checklist” for a modeler.
When a train operator tells us that our train will be late ‘because of delays’, their attempt at explanation fails because there is insufficient distance between the explanans and the explanandum. In this paper, I motivate and defend an account of ‘explanatory distance’, based on the idea that explanations give information about dependence. I show that this account offers useful resources for addressing problem cases, including recent debates about grounding explanation, and the historical case of Molière’s dormitive virtue.
We argued that explanatoriness is evidentially irrelevant in the following sense: Let H be a hypothesis, O an observation, and E the proposition that H would explain O if H and O were true. Then our claim is that Pr(H | O & E) = Pr(H | O). We defended this screening-off thesis (SOT) by discussing an example concerning smoking and cancer. Climenhaga argues that SOT is mistaken because it delivers the wrong verdict about a slightly different smoking-and-cancer case. He also considers a variant of SOT, called “SOT*”, and contends that it too gives the wrong result. We here reply to Climenhaga’s arguments and suggest that SOT provides a criticism of the widely held theory of inference called “inference to the best explanation”.
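The screening-off thesis in the abstract above can be illustrated with a toy joint distribution over H, O, and E. The numbers below are purely hypothetical and chosen only so that E is conditionally independent of H given O; the point of the sketch is that under such a construction the two conditional probabilities coincide, as the thesis asserts:

```python
from itertools import product

# Hypothetical joint distribution over (H, O, E), built so that
# Pr(E | H, O) depends only on O, i.e. O screens off H from E.
p_h = 0.3                              # Pr(H)
p_o_given_h = {True: 0.8, False: 0.2}  # Pr(O | H)
p_e_given_o = {True: 0.9, False: 0.1}  # Pr(E | O), independent of H

joint = {}
for h, o, e in product([True, False], repeat=3):
    p = p_h if h else 1 - p_h
    p *= p_o_given_h[h] if o else 1 - p_o_given_h[h]
    p *= p_e_given_o[o] if e else 1 - p_e_given_o[o]
    joint[(h, o, e)] = p

def pr(pred):
    """Probability of the event picked out by pred(h, o, e)."""
    return sum(p for k, p in joint.items() if pred(*k))

# Pr(H | O) and Pr(H | O & E) agree, as the screening-off thesis claims.
pr_h_given_o = pr(lambda h, o, e: h and o) / pr(lambda h, o, e: o)
pr_h_given_oe = pr(lambda h, o, e: h and o and e) / pr(lambda h, o, e: o and e)
print(pr_h_given_o, pr_h_given_oe)
```

Whether real smoking-and-cancer cases satisfy this conditional-independence structure is exactly what the exchange with Climenhaga disputes; the sketch only shows what the identity amounts to when the structure is stipulated.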
We demonstrate how real progress can be made in the debate surrounding the enhanced indispensability argument. Drawing on a counterfactual theory of explanation, well-motivated independently of the debate, we provide a novel analysis of ‘explanatory generality’ and how mathematics is involved in its procurement. On our analysis, mathematics’ sole explanatory contribution to the procurement of explanatory generality is to make counterfactual information about physical dependencies easier to grasp and reason with for creatures like us. This gives precise content to key intuitions traded in the debate, regarding mathematics’ procurement of explanatory generality, and adjudicates unambiguously in favour of the nominalist, at least as far as explanatory generality is concerned.
Many philosophers are sceptical about the power of philosophy to refute commonsensical claims. They look at the famous attempts and judge them inconclusive. I prove that, even if those famous attempts are failures, there are alternative successful philosophical proofs against commonsensical claims. After presenting the proofs I briefly comment on their significance.
Although many aspects of Inference to the Best Explanation have been extensively discussed, very little has so far been said about what it takes for a hypothesis to count as a rival explanatory hypothesis in the context of IBE. The primary aim of this article is to rectify this situation by arguing for a specific account of explanatory rivalry. On this account, explanatory rivals are complete explanations of a given explanandum. When explanatory rivals are conceived of in this way, I argue that IBE is a more plausible and defensible rule of inference than it would otherwise be. The secondary aim of the article is to demonstrate the importance of accounts of explanatory rivalry by examining a prominent philosophical argument in which IBE is employed, viz. the so-called Ultimate Argument for scientific realism. In short, I argue that a well-known objection to the Ultimate Argument due to Arthur Fine fails in virtue of tacitly assuming an account of explanatory rivalry that we have independent reasons to reject.
When scientists seek further confirmation of their results, they often attempt to duplicate the results using diverse means. To the extent that they are successful in doing so, their results are said to be robust. This paper investigates the logic of such "robustness analysis" [RA]. The most important and challenging question an account of RA can answer is what sense of evidential diversity is involved in RAs. I argue that prevailing formal explications of such diversity are unsatisfactory. I propose a unified, explanatory account of diversity in RAs. The resulting account is, I argue, truer to actual cases of RA in science; moreover, this account affords us a helpful, new foothold on the logic undergirding RAs.
Boris Kment takes a new approach to the study of modality that emphasises the origin of modal notions in everyday thought. He argues that the concepts of necessity and possibility originate in counterfactual reasoning, which allows us to investigate explanatory connections. Contrary to accepted views, explanation is more fundamental than modality.
This brief commentary has three goals. The first is to argue that ‘‘framework debate’’ in cognitive science is unresolvable. The idea that one theory or framework can singly account for the vast complexity and variety of cognitive processes seems unlikely if not impossible. The second goal is a consequence of this: We should consider how the various theories on offer work together in diverse contexts of investigation. A final goal is to supply a brief review for readers who are compelled by these points to explore existing literature on the topic. Despite this literature, pluralism has garnered very little attention from broader cognitive science. We end by briefly considering what it might mean for theoretical cognitive science.
Is epistemic inconsistency a mere symptom of having violated other requirements of rationality—notably, reasons-responsiveness requirements? Or is inconsistency irrational on its own? This question has important implications for the debate on the normativity of epistemic rationality. In this paper, I defend a new account of the explanatory role of the requirement of epistemic consistency. Roughly, I will argue that, in cases where an epistemically rational agent is permitted to believe P and also permitted to disbelieve P, the consistency requirement plays a distinct explanatory role. I will also argue that such a type of permissiveness is a live possibility when it comes to rational epistemic standards.
A question, long discussed by legal scholars, has recently provoked a considerable amount of philosophical attention: ‘Is it ever appropriate to base a legal verdict on statistical evidence alone?’ Many philosophers who have considered this question reject legal reliance on bare statistics, even when the odds of error are extremely low. This paper develops a puzzle for the dominant theories concerning why we should eschew bare statistics. Namely, there seem to be compelling scenarios in which there are multiple sources of incriminating statistical evidence. As we conjoin together different types of statistical evidence, it becomes increasingly incredible to suppose that a positive verdict would be impermissible. I suggest that none of the dominant views in the literature can easily accommodate such cases, and close by offering a diagnosis of my own.
The traditional view of evidence in mathematics is that evidence is just proof and proof is just derivation. There are good reasons for thinking that this view should be rejected: it misrepresents both historical and current mathematical practice. Nonetheless, evidence, proof, and derivation are closely intertwined. This paper seeks to tease these concepts apart. It emphasizes the role of argumentation as a context shared by evidence, proofs, and derivations. The utility of argumentation theory, in general, and argumentation schemes, in particular, as a methodology for the study of mathematical practice is thereby demonstrated. Argumentation schemes represent an almost untapped resource for mathematics education. Notably, they provide a consistent treatment of rigorous and non-rigorous argumentation, thereby working to exhibit the continuity of reasoning in mathematics with reasoning in other areas. Moreover, since argumentation schemes are a comparatively mature methodology, there is a substantial body of existing work to draw upon, including some increasingly sophisticated software tools. Such tools have significant potential for the analysis and evaluation of mathematical argumentation. The first four sections of the paper address the relationships of evidence to proof, proof to derivation, argument to proof, and argument to evidence, respectively. The final section directly addresses some of the educational implications of an argumentation scheme account of mathematical reasoning.
During the first half of the twentieth century, many philosophers of memory opposed the postulation of memory traces based on the claim that a satisfactory account of remembering need not include references to causal processes involved in recollection. However, in 1966, an influential paper by Martin and Deutscher showed that causal claims are indeed necessary for a proper account of remembering. This, however, did not settle the issue, as in 1977 Malcolm argued that even if one were to buy Martin and Deutscher’s argument for causal claims, we still don’t need to postulate the existence of memory traces. This paper reconstructs the dialectic between realists and anti-realists about memory traces, suggesting that ultimately realists’ arguments amount to inferences to the best explanation. I then argue that Malcolm’s anti-realist strategy consists in the suggestion that causal explanations that do not invoke memory traces are at least as good as those that do. Against Malcolm, I argue that there are a large number of memory phenomena for which explanations that do not postulate the existence of memory traces are definitively worse than explanations that do postulate them. Next, I offer a causal model based on an interventionist framework to illustrate when memory traces can help to explain memory phenomena and proceed to substantiate the model with details coming from extant findings in the neuroscience of memory.
In order to perform certain actions – such as incarcerating a person or revoking parental rights – the state must establish certain facts to a particular standard of proof. These standards – such as preponderance of evidence and beyond reasonable doubt – are often interpreted as likelihoods or epistemic confidences. Many theorists construe them numerically; beyond reasonable doubt, for example, is often construed as 90 to 95% confidence in the guilt of the defendant. A family of influential cases suggests standards of proof should not be interpreted numerically. These ‘proof paradoxes’ illustrate that purely statistical evidence can warrant high credence in a disputed fact without satisfying the relevant legal standard. In this essay I evaluate three influential attempts to explain why merely statistical evidence cannot satisfy legal standards.
Some explanations are relatively abstract: they abstract away from the idiosyncratic or messy details of the case in hand. The received wisdom in philosophy is that this is a virtue for any explanation to possess. I argue that the apparent consensus on this point is illusory. When philosophers make this claim, they differ on which of four alternative varieties of abstractness they have in mind. What’s more, for each variety of abstractness there are several alternative reasons to think that the variety of abstractness in question is a virtue. I identify the most promising reasons, and dismiss some others. The paper concludes by relating this discussion to the idea that explanations in biology, psychology and social science cannot be replaced by relatively micro explanations without loss of understanding.