One of the main challenges confronting Humean accounts of natural law is that Humean laws appear to be unable to play the explanatory role of laws in scientific practice. The worry is roughly that if the laws are just regularities in the particular matters of fact (as the Humean would have it), then they cannot also explain the particular matters of fact, on pain of circularity. Loewer (2012) has defended Humeanism, arguing that this worry only arises if we fail to distinguish between scientific and metaphysical explanations. However, Lange (2013, 2018) has argued that scientific and metaphysical explanations are linked by a transitivity principle, which would undercut Loewer's defense and re-ignite the circularity worry for the Humean. I argue here that the Humean has antecedent reasons to doubt that there are any systematic connections between scientific and metaphysical explanations. The reason is that the Humean should think that scientific and metaphysical explanation have disparate aims, and therefore that neither form of explanation is beholden to the other in its pronouncements about what explains what. Consequently, the Humean has every reason to doubt that Lange's transitivity principle obtains.
Peter Godfrey-Smith and Nicholas Shea have argued that standard versions of teleosemantics render explanations of successful behavior by appealing to true beliefs circular and, consequently, non-explanatory. As an alternative, Shea has recently suggested an original teleosemantic account (that he calls ‘Infotel-semantics’), which is supposed to be immune to the problem of circularity. The paper argues that the standard version of teleosemantics has a satisfactory reply to the circularity objection and that, in any case, Infotel-semantics is not better off than standard teleosemantics.
We argue that explanationist views in epistemology continue to face persistent challenges to both their necessity and their sufficiency. This is so despite arguments offered by Kevin McCain in a paper recently published in this journal which attempt to show otherwise. We highlight ways in which McCain’s attempted solutions to problems we had previously raised go awry, while also presenting a novel challenge for all contemporary explanationist views.
Few have given an extended treatment of the non-statistical sense of normality: a sense captured in sentences like “dogs have four legs,” or “hammers normally have metal heads,” or “it is normal for badgers to take dust baths.” The most direct extant treatment is Bernhard Nickel’s Between Logic and the World, where he claims that the normal or characteristic for a kind is what we can explain by appeal to the right sorts of explanations. Just which explanatory strategies can ground normalities, though, is difficult to determine without inviting circularity into the account. After raising this and other worries for Nickel’s account, I develop my own account according to which normal features are those which are explained by the kind of thing involved.
Humeanism about laws of nature — the view that the laws reduce to the Humean mosaic — is a popular view, but currently existing versions face powerful objections. The non-supervenience objection, the non-fundamentality objection and the explanatory circularity objection have all been thought to cause problems for the Humean. However, these objections share a guiding thought — they are all based on the idea that there is a certain kind of divergence between the practice of science and the metaphysical picture suggested by Humeanism. I suggest that the Humean can respond to these objections not by rejecting this divergence, but by arguing that it is appropriate. In particular the Humean can, in the spirit of Loewer (2012), distinguish between scientific and metaphysical explanation — this is motivated by the differing aims of explanation in science and metaphysics. And they can further leverage this into distinctions between scientific and metaphysical fundamentality and scientific and metaphysical possibility. We can use these distinctions to respond to the objections that the Humean faces.
There is a tension in our theorizing about laws of nature: our practice of using and reasoning with laws of nature suggests that laws are universal generalizations, but if laws are universal generalizations then we face the problem of explanatory circularity. In this paper I elucidate this tension and show how it motivates a view of laws that I call Minimal Anti-Humeanism. This view says that the laws are the universal generalizations that are not grounded in their instances. I argue that this view has a variety of advantages that could make it attractive to people with both Humean and anti-Humean inclinations.
We demonstrate how real progress can be made in the debate surrounding the enhanced indispensability argument. Drawing on a counterfactual theory of explanation, well-motivated independently of the debate, we provide a novel analysis of ‘explanatory generality’ and how mathematics is involved in its procurement. On our analysis, mathematics’ sole explanatory contribution to the procurement of explanatory generality is to make counterfactual information about physical dependencies easier to grasp and reason with for creatures like us. This gives precise content to key intuitions traded in the debate, regarding mathematics’ procurement of explanatory generality, and adjudicates unambiguously in favour of the nominalist, at least as far as explanatory generality is concerned.
The present PhD thesis is concerned with the question whether good reasoning requires that the subject has some cognitive grip on the relation between premises and conclusion. One consideration in favor of such a requirement goes as follows: In order for my belief-formation to be an instance of reasoning, and not merely a causally related sequence of beliefs, the process must be guided by my endorsement of a rule of reasoning. Therefore I must have justified beliefs about the relation between my premises and my conclusion. The rationality of a belief often depends on whether it is rightly connected to other beliefs, or more generally to other mental states — the states capable of providing a reason to hold the belief in question. For instance, some rational beliefs are connected to other beliefs by being inferred from them. It is often accepted that the connection implies that the subject in some sense ‘takes the mental states in question to be reason-providing’. But views on how exactly this is to be understood differ widely. They range from interpretations according to which ‘taking a mental state to be reason-providing’ imposes a mere causal sustaining relation between belief and reason-providing state to interpretations according to which one ‘takes a mental state to be reason-providing’ only if one believes that the state is reason-providing. The most common worry about the latter view is that it faces a vicious regress. In this thesis a different but in some respects similar interpretation of ‘taking something as reason-providing’ is given. It is argued to consist of a disposition to react in certain ways to information that challenges the reason-providing capacity of the allegedly reason-providing state. For instance, that one has inferred A from B partly consists in being disposed to suspend judgment about A if one obtains a reason to believe that B does not render A probable.
The account is defended against regress-objections and the suspicion of explanatory circularity.
Debate about cognitive science explanations has been formulated in terms of identifying the proper level(s) of explanation. Views range from reductionist, favoring only neuroscience explanations, to mechanist, favoring the integration of multiple levels, to pluralist, favoring the preservation of even the most general, high-level explanations, such as those provided by embodied or dynamical approaches. In this paper, we challenge this framing. We suggest that these are not different levels of explanation at all but, rather, different styles of explanation that capture different, cross-cutting patterns in cognitive phenomena. Which pattern is explanatory depends on both the cognitive phenomenon under investigation and the research interests occasioning the explanation. This reframing changes how we should answer the basic questions of which cognitive science approaches explain and how these explanations relate to one another. On this view, we should expect different approaches to offer independent explanations in terms of their different focal patterns and the value of those explanations to partly derive from the broad patterns they feature.
In this paper, I present a general theory of topological explanations, and illustrate its fruitfulness by showing how it accounts for explanatory asymmetry. My argument is developed in three steps. In the first step, I show what it is for some topological property A to explain some physical or dynamical property B. Based on that, I derive three key criteria of successful topological explanations: a criterion concerning the facticity of topological explanations, i.e. what makes it true of a particular system; a criterion for describing counterfactual dependencies in two explanatory modes, i.e. the vertical and the horizontal; and, finally, a third perspectival one that tells us when to use the vertical and when to use the horizontal mode. In the second step, I show how this general theory of topological explanations accounts for explanatory asymmetry in both the vertical and horizontal explanatory modes. Finally, in the third step, I argue that this theory is universally applicable across biological sciences, which helps to unify essential concepts of biological networks.
When scientists seek further confirmation of their results, they often attempt to duplicate the results using diverse means. To the extent that they are successful in doing so, their results are said to be robust. This paper investigates the logic of such "robustness analysis" [RA]. The most important and challenging question an account of RA can answer is what sense of evidential diversity is involved in RAs. I argue that prevailing formal explications of such diversity are unsatisfactory. I propose a unified, explanatory account of diversity in RAs. The resulting account is, I argue, truer to actual cases of RA in science; moreover, this account affords us a helpful, new foothold on the logic undergirding RAs.
Courtesy of its free energy formulation, the hierarchical predictive processing theory of the brain (PTB) is often claimed to be a grand unifying theory. To test this claim, we examine a central case: activity of mesocorticolimbic dopaminergic (DA) systems. After reviewing the three most prominent hypotheses of DA activity—the anhedonia, incentive salience, and reward prediction error hypotheses—we conclude that the evidence currently vindicates explanatory pluralism. This vindication implies that the grand unifying claims of advocates of PTB are unwarranted. More generally, we suggest that the form of scientific progress in the cognitive sciences is unlikely to be a single overarching grand unifying theory.
If the reliability of a source of testimony is open to question, it seems epistemically illegitimate to verify the source’s reliability by appealing to that source’s own testimony. Is this because it is illegitimate to trust a questionable source’s testimony on any matter whatsoever? Or is there a distinctive problem with appealing to the source’s testimony on the matter of that source’s own reliability? After distinguishing between two kinds of epistemically illegitimate circularity—bootstrapping and self-verification—I argue for a qualified version of the claim that there is nothing especially illegitimate about using a questionable source to evaluate its own reliability. Instead, it is illegitimate to appeal to a questionable source’s testimony on any matter whatsoever, with the matter of the source’s own reliability serving only as a special case.
Joseph Levine is generally credited with the invention of the term ‘explanatory gap’ to describe our ignorance about the relationship between consciousness and the physical structures which sustain it. Levine’s account of the problem of the explanatory gap in his book Purple Haze (2001) may be summarized in terms of three theses, which I will describe and name as follows...
The paper investigates measures of explanatory power and how to define the inference schema “Inference to the Best Explanation”. It argues that these measures can also be used to quantify the systematic power of a hypothesis, and the inference schema “Inference to the Best Systematization” is defined. It demonstrates that systematic power is a fruitful criterion for theory choice and IBS is truth-conducive. It also shows that even radical Bayesians must admit that systematic power is an integral component of Bayesian reasoning. Finally, the paper puts the achieved results in perspective with van Fraassen’s famous criticism of IBE.
This paper examines the explanatory gap account. The key notions for its proper understanding are analysed. In particular, the analysis is concerned with the role of “thick” and “thin” modes of presentation and “thick” and “thin” concepts which are relevant for the notions of “thick” and “thin” conceivability, and to that effect relevant for the gappy and non-gappy identities. The last section of the paper discusses the issue of the intelligibility of explanations. One of the conclusions is that the explanatory gap account only succeeds in establishing the epistemic gap. The claim that psychophysical identity is not intelligibly explicable, and thus opens the explanatory gap, would require an independent argument which would prove that intelligible explanations stem only from conceptual analysis. This, I argue, is not the case.
The explanatory role of natural selection is one of the long-term debates in evolutionary biology. Nevertheless, consensus has been elusive because of conceptual confusions and the absence of a unified, formal causal model that integrates different explanatory scopes of natural selection. In this study we attempt to examine two questions: (i) What can the theory of natural selection explain? and (ii) Is there a causal or explanatory model that integrates all natural selection explananda? For the first question, we argue that five explananda have been assigned to the theory of natural selection and that four of them may actually be considered explananda of natural selection. For the second question, we claim that a probabilistic conception of causality and the statistical relevance concept of explanation are both good models for understanding the explanatory role of natural selection. We review the biological and philosophical disputes about the explanatory role of natural selection and formalize some explananda in probabilistic terms using classical results from population genetics. Most of these explananda have been discussed in philosophical terms but some of them have been mixed up and confused. We analyze and set the limits of these problems.
I assume that there exists a general phenomenon, the phenomenon of the explanatory gap, surrounding consciousness, normativity, intentionality, and more. Explanatory gaps are often thought to foreclose reductive possibilities wherever they appear. In response, reductivists who grant the existence of these gaps have offered countless local solutions. But typically such reductivist responses have had a serious shortcoming: because they appeal to essentially domain-specific features, they cannot be fully generalized, and in this sense these responses have been not just local but parochial. Here I do better. Taking for granted that the explanatory gap is a genuine phenomenon, I offer a fully general diagnosis that unifies these previously fragmented reductivist responses.
In this note, I discuss David Enoch's influential deliberative indispensability argument for metanormative realism, and contend that the argument fails. In doing so, I uncover an important disanalogy between explanatory indispensability arguments and deliberative indispensability arguments, one that explains how we could accept the former without accepting the latter.
According to a standard criticism, Robert Brandom's “normative pragmatics”, i.e. his attempt to explain normative statuses in terms of practical attitudes, faces a dilemma. If practical attitudes and their interactions are specified in purely non-normative terms, then they underdetermine normative statuses; but if normative terms are allowed into the account, then the account becomes viciously circular. This paper argues that there is no dilemma, because the feared circularity is not vicious. While normative claims do exhibit their respective authors' practical attitudes and thereby contribute towards establishing the normative statuses they are about, this circularity is not a mark of Brandom's explanatory strategy but a feature of social practice of which we theorists partake.
Metaphysical theories rely heavily on primitives, to which they typically appeal. I will start by examining and evaluating some traditional, well-known theories, and I will discuss the role of primitives in metaphysical theories in general. I will then turn to a discussion of claims of equivalence between theories that, I think, depend on equivalences of primitives, and I will explore the nature of primitives. I will then claim that almost all explanatory power of metaphysical theories comes from their primitives, and so I will turn to scrutinize the notions of "power" and "explanatory".
This paper examines a paradigm case of allegedly successful reductive explanation, viz. the explanation of the fact that water boils at 100°C based on facts about H2O. The case figures prominently in Joseph Levine’s explanatory gap argument against physicalism. The paper studies the way the argument evolved in the writings of Levine, focusing especially on the question of how the reductive explanation of boiling water figures in the argument. It will turn out that there are two versions of the explanatory gap argument to be found in Levine’s writings. The earlier version relies heavily on conceptual analysis and construes reductive explanation as a process of deduction. The later version makes do without conceptual analysis and understands reductive explanations as based on theoretic reductions that are justified by explanatory power. Along the way it will be shown that the bridge principles — which have been neglected in the explanatory gap literature — play a crucial role in the explanatory gap argument.
In this paper, I outline a heuristic for thinking about the relation between explanation and understanding that can be used to capture various levels of “intimacy” between them. I argue that the level of complexity in the structure of explanation is inversely proportional to the level of intimacy between explanation and understanding, i.e. the more complexity the less intimacy. I further argue that the level of complexity in the structure of explanation also affects explanatory depth in a similar way to intimacy between explanation and understanding, i.e. the less complexity the greater the explanatory depth and vice versa.
In this paper, I start by describing and examining the main results about the option of formalizing the Yablo Paradox in arithmetic. As is known, although it is natural to assume that there is a right representation of that paradox in first order arithmetic, there are some technical results that give rise to doubts about this possibility. Then, I present some arguments that have challenged the claim that Yablo’s construction is non-circular. Thus, Priest (1997) has argued that such formalization shows that Yablo’s Paradox involves implicit circularity. In the same direction, Beall (2001) has introduced epistemic factors into this discussion. Moreover, Priest has also argued that the introduction of infinitary reasoning would be of little help. Finally, one could reject definitions of circularity in terms of fixed points by adopting non-well-founded set theory. Then, one could hold that the Yablo paradox and the Liar paradox share the same non-well-founded structure. So, if the latter is circular, the first is too. In all such cases, I survey Cook’s approach (2006, forthcoming) to those arguments for the charge of circularity. In the end, I present my position and summarize the discussion involved in this volume.
Epistemically circular arguments have been receiving quite a bit of attention in the literature for the past decade or so. Often the goal is to determine whether reliabilists (or other foundationalists) are committed to the legitimacy of epistemically circular arguments. It is often assumed that epistemic circularity is objectionable, though sometimes reliabilists accept that their position entails the legitimacy of some epistemically circular arguments, and then go on to affirm that such arguments really are good ones. My goal in this paper is to argue against the legitimacy of epistemically circular arguments. My strategy is to give a direct argument against the legitimacy of epistemically circular arguments, which rests on a principle of basis-relative safety, and then to argue that reliabilists do not have the resources to resist the argument. I argue that even if the premises of an epistemically circular argument enjoy reliabilist justification, the argument does not transmit that justification to its conclusion. The main goal of my argument is to show that epistemic circularity is always a bad thing, but it also has the positive consequence that reliabilists are freed from an awkward commitment to the legitimacy of some intuitively bad arguments.
In this paper I discuss the ontological status of actants. Actants are argued to be the basic constituting entities of networks in the framework of Actor Network Theory (Latour, 2007). I introduce two problems concerning actants that have been pointed out by Collin (2010). The first problem concerns the explanatory role of actants. According to Collin, actants cannot play the role of explanans of networks and products of the same network at the same time, on pain of circularity. The second problem is that if actants are, as suggested by Latour, fundamentally propertyless, then it is unclear how they combine into networks. This makes the nature of actants inexplicable. I suggest that both problems rest on the assumption of a form of object ontology, i.e. the assumption that the ontological basis of reality consists in discrete individual entities that have intrinsic properties. I argue that the solution to this problem consists in the assumption of an ontology of relations, as suggested within the framework of Ontic Structural Realism (Ladyman & Ross, 2007). Ontic Structural Realism is a theory concerning the ontology of science that claims that scientific theories represent a reality consisting of only relations, and no individual entities. Furthermore I argue that the employment of OSR can, at the price of little modification for both theories, solve both of the problems identified by Collin concerning ANT. Throughout the text I seek support for my claims by referring to examples of application of ANT to the context of networked learning. As I argue, the complexity of the phenomenon of networked learning gives us a convenient vantage point from which we can clearly understand many important aspects of both ANT and OSR. While my proposal can be considered as an attempt to solve Collin's problems, it is also an experiment of reconciliation between analytic and constructivist philosophy of science.
In fact I point out that on the one hand Actor Network Theory and Ontic Structural Realism show an interesting number of points of agreement, such as their naturalistic character and their focus on relationality. On the other hand, I argue that all the intuitive discrepancies that originate from the Science and Technology Studies’ criticism against analytic philosophy of science are at a closer look only apparent.
Ted Poston's book Reason and Explanation: A Defense of Explanatory Coherentism is a book worthy of careful study. Poston develops and defends an explanationist theory of (epistemic) justification on which justification is a matter of explanatory coherence which in turn is a matter of conservativeness, explanatory power, and simplicity. He argues that his theory is consistent with Bayesianism. He argues, moreover, that his theory is needed as a supplement to Bayesianism. There are seven chapters. I provide a chapter-by-chapter summary along with some substantive concerns.
Some explanations are relatively abstract: they abstract away from the idiosyncratic or messy details of the case in hand. The received wisdom in philosophy is that this is a virtue for any explanation to possess. I argue that the apparent consensus on this point is illusory. When philosophers make this claim, they differ on which of four alternative varieties of abstractness they have in mind. What’s more, for each variety of abstractness there are several alternative reasons to think that the variety of abstractness in question is a virtue. I identify the most promising reasons, and dismiss some others. The paper concludes by relating this discussion to the idea that explanations in biology, psychology and social science cannot be replaced by relatively micro explanations without loss of understanding.
Among the factors necessary for the occurrence of some event, which of these are selectively highlighted in its explanation and labeled as causes — and which are explanatorily omitted, or relegated to the status of background conditions? Following J. S. Mill, most have thought that only a pragmatic answer to this question was possible. In this paper I suggest we understand this ‘causal selection problem’ in causal-explanatory terms, and propose that explanatory trade-offs between abstraction and stability can provide a principled solution to it. After sketching that solution, it is applied to a few biological examples, including to a debate concerning the ‘causal democracy’ of organismal development, with an anti-democratic (though not a gene-centric) moral.
This paper aims to build a bridge between two areas of philosophical research, the structure of kinds and metaphysical modality. Our central thesis is that kinds typically involve super-explanatory properties, and that these properties are therefore metaphysically essential to natural kinds. Philosophers of science who work on kinds tend to emphasize their complexity, and are generally resistant to any suggestion that they have “essences”. The complexities are real enough, but they should not be allowed to obscure the way that kinds are typically unified by certain core properties. We shall show how this unifying role offers a natural account of why certain properties are metaphysically essential to kinds.
Phenomenal knowledge is knowledge of what it is like to be in conscious states, such as seeing red or being in pain. According to the knowledge argument (Jackson 1982, 1986), phenomenal knowledge is knowledge that, i.e., knowledge of phenomenal facts. According to the ability hypothesis (Nemirow 1979; Lewis 1983), phenomenal knowledge is mere practical knowledge how, i.e., the mere possession of abilities. However, some phenomenal knowledge also seems to be knowledge why, i.e., knowledge of explanatory facts. For example, someone who has just experienced pain for the first time learns not only that this is what pain is like, but also why people tend to avoid it. Some philosophers have claimed that experiencing pain gives knowledge why in a normative sense: it tells us why pain is bad and why inflicting it is wrong (Kahane 2010). But phenomenal knowledge seems to explain not (only) why people should avoid pain, but why they in fact tend to do so. In this paper, I will explicate and defend a precise version of this claim and use it as a basis for a new version of the knowledge argument, which I call the explanatory knowledge argument. According to the argument, some phenomenal knowledge (1) explains regularities in a distinctive, ultimate or regress-ending way, and (2) predicts them without induction. No physical knowledge explains and predicts regularities in the same way. This implies the existence of distinctive, phenomenal explanatory facts, which cannot be identified with physical facts. I will show that this argument can be defended against the main objections to the original knowledge argument, the ability hypothesis and the phenomenal concept strategy, even if it turns out that the original cannot. In this way, the explanatory knowledge argument further strengthens the case against physicalism.
Robust virtue epistemology holds that knowledge is true belief obtained through cognitive ability. In this essay I explain that robust virtue epistemology faces a dilemma, and the viability of the theory depends on an adequate understanding of the ‘through’ relation. Greco interprets this ‘through’ relation as one of causal explanation; the success is through the agent’s abilities iff the abilities play a sufficiently salient role in a causal explanation of why she possesses a true belief. In this paper I argue that Greco’s account of the ‘through’ relation is inadequate. I describe kinds of counterexample and explain why salience is the wrong kind of property to track epistemically relevant conditions or to capture the nature of knowledge. Advocates of robust virtue epistemology should develop an alternative account of the ‘through’ relation. I also argue that virtue epistemology should employ an environment-relative interpretation of epistemic virtue.
Pretheoretically we hold that we cannot gain justification or knowledge through an epistemically circular reasoning process. Epistemically circular reasoning occurs when a subject forms the belief that p on the basis of an argument A, where at least one of the premises of A already presupposes the truth of p. It has often been argued that process reliabilism does not rule out that this kind of reasoning leads to justification or knowledge. For some philosophers, this is a reason to reject reliabilism. Those who try to defend reliabilism have two basic options: (I) accept that reliabilism does not rule out circular reasoning, but argue that this kind of reasoning is not as epistemically “bad” as it seems, or (II) hold on to the view that circular reasoning is epistemically “bad”, but deny that reliabilism really allows this kind of reasoning. Option (I) has been spelled out in several ways, all of which have found to be problematic. Option (II) has not been discussed very widely. Vogel considers and quickly dismisses it on the basis of three reasons. Weisberg has shown in detail that one of these reasons is unconvincing. In this paper I argue that the other two reasons are unconvincing as well and that therefore option (II) might in fact be a more promising starting point to defend reliabilism than option (I).
Boris Kment takes a new approach to the study of modality that emphasises the origin of modal notions in everyday thought. He argues that the concepts of necessity and possibility originate in counterfactual reasoning, which allows us to investigate explanatory connections. Contrary to accepted views, explanation is more fundamental than modality.
Is perception cognitively penetrable, and what are the epistemological consequences if it is? I address the latter of these two questions, partly by reference to recent work by Athanassios Raftopoulos and Susanna Siegel. Against the usual circularity readings of cognitive penetrability, I argue that cognitive penetration can be epistemically virtuous, when---and only when---it increases the reliability of perception.
The literature on the indispensability argument for mathematical realism often refers to the ‘indispensable explanatory role’ of mathematics. I argue that we should examine the notion of explanatory indispensability from the point of view of specific conceptions of scientific explanation. The reason is that explanatory indispensability in and of itself turns out to be insufficient for justifying the ontological conclusions at stake. To show this I introduce a distinction between different kinds of explanatory roles—some ‘thick’ and ontologically committing, others ‘thin’ and ontologically peripheral—and examine this distinction in relation to some notable ‘ontic’ accounts of explanation. I also discuss the issue in the broader context of other ‘explanationist’ realist arguments.
Public discourse is often caustic and conflict-filled. This trend seems to be particularly evident when the content of such discourse is around moral issues (broadly defined) and when the discourse occurs on social media. Several explanatory mechanisms for such conflict have been explored in recent psychological and social-science literatures. The present work sought to examine a potentially novel explanatory mechanism defined in philosophical literature: Moral Grandstanding. According to philosophical accounts, Moral Grandstanding is the use of moral talk to seek social status. For the present work, we conducted six studies, using two undergraduate samples (Study 1, N = 361; Study 2, N = 356); a sample matched to U.S. norms for age, gender, race, income, Census region (Study 3, N = 1,063); a YouGov sample matched to U.S. demographic norms (Study 4, N = 2,000); a brief, one-month longitudinal study of Mechanical Turk workers in the U.S. (Study 5, Baseline N = 499, follow-up n = 296); and a large, one-week YouGov sample matched to U.S. demographic norms (Baseline N = 2,519, follow-up n = 1,776). Across studies, we found initial support for the validity of Moral Grandstanding as a construct. Specifically, moral grandstanding motivation was associated with status-seeking personality traits, as well as greater political and moral conflict in daily life.
Jonathan Kvanvig has argued that “objectual” understanding, i.e. the understanding we have of a large body of information, cannot be reduced to explanatory concepts. In this paper, I show that Kvanvig fails to establish this point, and then propose a framework for reducing objectual understanding to explanatory understanding.
Using the general investigative framework offered by the cognitive science of religion (CSR), I analyse religion as a necessary condition for the evolutionary path of knowledge. The main argument is the "paradox of the birth of knowledge": in order to grasp the meaning of a part, a context of sense is needed; but the sense of the whole presupposes the meaning of the parts. Religion proposes solutions for escaping this paradox, based on the imagination of contexts of sense and, respectively, on the closure of these contexts through meta-senses. What matters is the practical effectiveness of the solutions proposed by religion, taking into account both the costs of faith and the costs of the absence of religious belief. The hypothesis has the following consequences: religion is a necessary condition for the initial evolution of knowledge, and the emergence of religion is determined by the evolution of knowledge. The resolution of the paradox then continues in a Bayesian manner, through exploration: a sense of the whole allows cognitive arrangements of the parts, which in turn open the possibility of a rearrangement of the whole. The contribution of religion to the emergence of meaning could be governed by the rule: any map of the world is more useful than no map; any meaning of life is better than no meaning. The human mind fills in perceptual and cognitive gaps, and some (religious) filling solutions are true keystones of the entire cognitive construction called the world. Knowledge is conditioned by the existence of an organized context: the cosmos created by religion through explanatory meta-theories supports knowledge by closing the cognitive context and using networks of meaning. The proposed analysis is consistent with a redefinition of rationality from the perspective of evolution: the importance and relevance of knowledge are determined by its practical outcome, survival.
In the context of useful fictions, it does not matter what God actually does, but what we have done by believing in God. Existence has provided a pragmatic verification of the cognitive solutions that underlie the survival strategies promoted by religions.
This paper critiques the new mechanistic explanatory program on the grounds that, even when applied to the kinds of examples that it was originally designed to treat, it does not distinguish correct explanations from those that blunder. First, I offer a systematization of the explanatory account, one according to which explanations are mechanistic models that satisfy three desiderata: they must (1) represent causal relations, (2) describe the proper parts, and (3) depict the system at the right ‘level.’ Second, I argue that even the most developed attempts to fulfill these desiderata fall short by failing to appropriately constrain explanatorily apt mechanistic models. This paper used to be called "The Emperor's New Mechanisms".
Mayr’s proximate–ultimate distinction has received renewed interest in recent years. Here we discuss its role in arguments about the relevance of developmental to evolutionary biology. We show that two recent critiques of the proximate–ultimate distinction fail to explain why developmental processes in particular should be of interest to evolutionary biologists. We trace these failures to a common problem: both critiques take the proximate–ultimate distinction to neglect specific causal interactions in nature. We argue that this is implausible, and that the distinction should instead be understood in the context of explanatory abstractions in complete causal models of evolutionary change. Once the debate is reframed in this way, the proximate–ultimate distinction’s role in arguments against the theoretical significance of evo-devo is seen to rely on a generally implicit premise: that the variation produced by development is abundant, small and undirected. We show that a “lean version” of the proximate–ultimate distinction can be maintained even when this isotropy assumption does not hold. Finally, we connect these considerations to biological practice. We show that the investigation of developmental constraints in evolutionary transitions has long relied on a methodology which foregrounds the explanatory role of developmental processes. It is, however, entirely compatible with the lean version of the proximate–ultimate distinction.
A number of philosophers have recently suggested that some abstract, plausibly non-causal and/or mathematical, explanations explain in a way that is radically different from the way causal explanations explain. Namely, while causal explanations explain by providing information about causal dependence, allegedly some abstract explanations explain in a way tied to the independence of the explanandum from the microdetails, or causal laws, for example. We oppose this recent trend to regard abstractions as explanatory in some sui generis way, and argue that a prominent account of causal explanation can be naturally extended to capture explanations that radically abstract away from microphysical and causal-nomological details. To this end, we distinguish different senses in which an explanation can be more or less abstract, and analyse the connection between explanations’ abstractness and their explanatory power. According to our analysis, abstract explanations have much in common with counterfactual causal explanations.
The claim defended in the paper is that the mechanistic account of explanation can easily embrace idealization in large-scale brain simulations, and that only causally relevant detail should be present in explanatory models. The claim is illustrated with two methodologically different models: Blue Brain, used for particular simulations of the cortical column in hybrid models, and Eliasmith’s SPAUN model, which is both biologically realistic and able to explain eight different tasks. By drawing on the mechanistic theory of computational explanation, I argue that large-scale simulations require that the explanandum phenomenon be identified; otherwise, the explanatory value of such simulations is difficult to establish, and testing the model empirically by comparing its behavior with the explanandum remains practically impossible. The completeness of the explanation, and hence the explanatory value of the explanatory model, is to be assessed vis-à-vis the explanandum phenomenon, which is not to be conflated with raw observational data and may be idealized. I argue that idealizations, which include building models of a single phenomenon displayed by multi-functional mechanisms, lumping together multiple factors in a single causal variable, simplifying the causal structure of the mechanisms, and multi-model integration, are indispensable for complex systems such as brains; otherwise, the model may be as complex as the explanandum phenomenon, which would make it prone to the so-called Bonini paradox. I conclude by enumerating the dimensions of empirical validation of explanatory models according to new mechanism, given in the form of a “checklist” for the modeler.
Recent literature on noncausal explanation raises the question as to whether explanatory monism, the thesis that all explanations submit to the same analysis, is true. The leading monist proposal holds that all explanations support change-relating counterfactuals. We provide several objections to this monist position.
There are several important arguments in metaethics that rely on explanatory considerations. Gilbert Harman has presented a challenge to the existence of moral facts that depends on the claim that the best explanation of our moral beliefs does not involve moral facts. The Reliability Challenge against moral realism depends on the claim that moral realism is incompatible with there being a satisfying explanation of our reliability about moral truths. The purpose of this chapter is to examine these and related arguments. In particular, this chapter will discuss four kinds of arguments – Harman’s Challenge, evolutionary debunking arguments, irrelevant influence arguments, and the Reliability Challenge – understood as arguments against moral realism. The main goals of this chapter are (i) to articulate the strongest version of these arguments; (ii) to present and assess the central epistemological principles underlying these arguments; and (iii) to determine what a realist would have to do to adequately respond to these arguments.
Philosophical and scientific investigations of the proprietary aspects of self—mineness or mental ownership—often presuppose that searching for unique constituents is a productive strategy. But there seem not to be any unique constituents. Here, it is argued that the “self-specificity” paradigm, which emphasizes subjective perspective, fails. Previously, it was argued that mode of access also fails to explain mineness. Fortunately, these failures, when leavened by other findings (those that exhibit varieties and vagaries of mineness), intimate an approach better suited to searching for an explanation. Having an alternative in hand, one that shows promise of achieving explanatory adequacy, provides an additional reason to suspend the search for unique constituents. In short, a negative and a positive thesis are developed: we should cease looking for unique constituents and should seek to explain mineness in accord with the model developed here. This model rejects attempts to explain the phenomenon in terms of either a narrative or a minimal sense of self; it seeks to explain at a “molecular” level, one that appeals to multiple, interacting dimensions. The molecular-level model allows for the possibility that subjective perspective is distinct from a stark perspective (one that does not imply mineness). It proposes that the confounding of tacit expectations plays an important role in explaining mental ownership and its complement, disownership. But the confounding of tacit expectations is not sufficient. Because we are able to be aware of the existence of mental states that do not belong to self, we require a mechanism for determining degree of self-relatedness. One such mechanism is proposed here, and it is shown how this mechanism can be integrated into a general model of mental ownership.
In the spirit of suggesting how this model might be able to help resolve outstanding problems, the question as to whether inserted thoughts belong to the patient who reports them is also considered.
The value of optimality modeling has long been a source of contention amongst population biologists. Here I present a view of the optimality approach as at once playing a crucial explanatory role and yet also depending on external sources of confirmation. Optimality models are not alone in facing this tension between their explanatory value and their dependence on other approaches; I suspect that the scenario is quite common in science. This investigation of the optimality approach thus serves as a case study, on the basis of which I suggest that there is a widely felt tension in science between explanatory independence and broad epistemic interdependence, and that this tension influences scientific methodology.
This paper develops and motivates a unification theory of metaphysical explanation, or as I will call it, Metaphysical Unificationism. The theory’s main inspiration is the unification account of scientific explanation, according to which explanatoriness is a holistic feature of theories that derive a large number of explananda from a meager set of explanantia, using a small number of argument patterns. In developing Metaphysical Unificationism, I will point out that it has a number of interesting consequences. The view offers a novel conception of metaphysical explanation that doesn’t rely on the notion of a “determinative” or “explanatory” relation; it allows us to draw a principled distinction between metaphysical and scientific explanations; it implies that naturalness and fundamentality are distinct but intimately related notions; and perhaps most importantly, it re-establishes the unduly neglected link between explanation and understanding in the metaphysical realm. A number of objections can be raised against the view, but I will argue that none of these is conclusive. The upshot is that Metaphysical Unificationism provides a powerful and hitherto overlooked alternative to extant theories of metaphysical explanation.
The Special Composition Question may be formulated as follows: for any xs whatsoever, what are the metaphysically necessary and jointly sufficient conditions in virtue of which there is a y such that those xs compose y? But what is the scope of the sought after explanation? Should an answer merely explain compositional facts, or should it explain certain ontological facts as well? On one natural reading, the question seeks an explanation of both the compositional facts and the ontological; the question seeks to explain how composite objects exist; how there is a y such that the xs compose y. But it turns out that some answers to the Special Composition Question presuppose those ontological facts rather than explain those ontological facts. In this paper, I will indicate what I take to be the different explanatory demands met by the representative answers. I will argue that the wide scope explanatory demands can’t be satisfied. I will also show that this result has bearing on the current debate about composition.