Is there some general reason to expect organisms that have beliefs to have false beliefs? And after you observe that an organism occasionally occupies a given neural state that you think encodes a perceptual belief, how do you evaluate hypotheses about the semantic content that that state has, where some of those hypotheses attribute beliefs that are sometimes false while others attribute beliefs that are always true? To address the first of these questions, we discuss evolution by natural selection and show how organisms that are risk-prone in the beliefs they form can be fitter than organisms that are risk-averse. To address the second question, we discuss a problem that is widely recognized in statistics – the problem of over-fitting – and one influential device for addressing that problem, the Akaike Information Criterion (AIC). We then use AIC to solve epistemological versions of the disjunction and distality problems, which are two key problems concerning what it is for a belief state to have one semantic content rather than another.
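For readers unfamiliar with AIC, the standard formula (a textbook statement, not quoted from the abstract above) is

\[
\mathrm{AIC}(M) \;=\; 2k \;-\; 2\ln \hat{L}_M ,
\]

where $k$ is the number of adjustable parameters in model $M$ and $\hat{L}_M$ is $M$'s likelihood at its maximum-likelihood parameter values. Lower scores are better, and the $2k$ penalty is what guards against over-fitting: a model that improves its fit to the data only by adding parameters can end up with a worse AIC score.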
Michael Scriven’s (1959) example of identical twins (who are said to be equal in fitness but unequal in their reproductive success) has been used by many philosophers of biology to discuss how fitness should be defined, how selection should be distinguished from drift, and how the environment in which a selection process occurs should be conceptualized. Here it is argued that evolutionary theory has no commitment, one way or the other, as to whether the twins are equally fit. This is because the theory of natural selection is fundamentally about the fitnesses of traits, not the fitnesses of token individuals. A plausible philosophical thesis about supervenience entails that the twins are equally fit if they live in identical environments, but evolutionary biology is not committed to the thesis that the twins live in identical environments. Evolutionary theory is right to focus on traits, rather than on token individuals, because the fitnesses of token organisms (as opposed to their actual survivorship and degree of reproductive success) are almost always unknowable. This point has ramifications for the question of how Darwin’s theory of evolution and R. A. Fisher’s are conceptually different.
This article reviews two standard criticisms of creationism/intelligent design (ID): it is unfalsifiable, and it is refuted by the many imperfect adaptations found in nature. Problems with both criticisms are discussed. A conception of testability is described that avoids the defects in Karl Popper’s falsifiability criterion. Although ID comes in multiple forms, which call for different criticisms, it emerges that ID fails to constitute a serious alternative to evolutionary theory.
We argue elsewhere that explanatoriness is evidentially irrelevant. Let H be some hypothesis, O some observation, and E the proposition that H would explain O if H and O were true. Then O screens-off E from H: Pr(H | O & E) = Pr(H | O). This thesis, hereafter “SOT”, is defended by appeal to a representative case. The case concerns smoking and lung cancer. McCain and Poston grant that SOT holds in cases, like our case concerning smoking and lung cancer, that involve frequency data. However, McCain and Poston contend that there is a wider sense of evidential relevance—wider than the sense at play in SOT—on which explanatoriness is evidentially relevant even in cases involving frequency data. This is their main point, but they also contend that SOT does not hold in certain cases not involving frequency data. We reply to each of these points and conclude with some general remarks on screening-off as a test of evidential relevance.
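As a toy numerical illustration of what the screening-off equality asserts (the numbers below are made up and are not the smoking/lung-cancer data discussed in the exchange), the following sketch builds a small joint distribution over H, O, and E in which E is independent of H conditional on O, and then verifies that conditioning on E leaves Pr(H | O) unchanged:

```python
# Toy check of the screening-off thesis SOT: Pr(H | O & E) = Pr(H | O).
# The joint distribution is constructed so that E depends only on O, not on H,
# which is exactly the conditional independence that SOT asserts.

from itertools import product

p_H = 0.4                              # Pr(H)
p_O_given_H = {True: 0.8, False: 0.3}  # Pr(O | H) and Pr(O | ~H)
p_E_given_O = {True: 0.9, False: 0.2}  # Pr(E | O) and Pr(E | ~O), same whether H holds or not

joint = {}
for h, o, e in product([True, False], repeat=3):
    p = p_H if h else 1 - p_H
    p *= p_O_given_H[h] if o else 1 - p_O_given_H[h]
    p *= p_E_given_O[o] if e else 1 - p_E_given_O[o]
    joint[(h, o, e)] = p

def pr(event):
    """Probability of the event picked out by a predicate over (h, o, e)."""
    return sum(p for cell, p in joint.items() if event(*cell))

pr_H_given_O  = pr(lambda h, o, e: h and o) / pr(lambda h, o, e: o)
pr_H_given_OE = pr(lambda h, o, e: h and o and e) / pr(lambda h, o, e: o and e)
print(pr_H_given_O, pr_H_given_OE)     # both 0.64: learning E adds nothing once O is known
```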
The chapter discusses the principle of conservatism and traces how the general principle is related to the specific one. This tracing suggests that the principle of conservatism needs to be refined. Connecting the principle in cognitive science to more general questions about scientific inference also allows us to revisit the question of realism versus instrumentalism. The framework deployed in model selection theory is very general; it is not specific to the subject matter of science. The chapter outlines some non-Bayesian ideas that have been developed in model selection theory. The principle of conservatism, like C. Lloyd Morgan's canon, describes a preference concerning kinds of parameters. It says that a model that postulates only lower-level intentionality is preferable to one that postulates higher-level intentionality if both fit the data equally well. The model selection approach to parsimony helps explain why unification is a theoretical virtue.
In their 2010 book, Biology’s First Law, D. McShea and R. Brandon present a principle that they call ‘‘ZFEL,’’ the zero force evolutionary law. ZFEL says (roughly) that when there are no evolutionary forces acting on a population, the population’s complexity (i.e., how diverse its member organisms are) will increase. Here we develop criticisms of ZFEL and describe a different law of evolution; it says that diversity and complexity do not change when there are no evolutionary causes.
In their book What Darwin Got Wrong, Jerry Fodor and Massimo Piattelli-Palmarini construct an a priori philosophical argument and an empirical biological argument. The biological argument aims to show that natural selection is much less important in the evolutionary process than many biologists maintain. The a priori argument begins with the claim that there cannot be selection for one but not the other of two traits that are perfectly correlated in a population; it concludes that there cannot be an evolutionary theory of adaptation. This article focuses mainly on the a priori argument.
We argued that explanatoriness is evidentially irrelevant in the following sense: Let H be a hypothesis, O an observation, and E the proposition that H would explain O if H and O were true. Then our claim is that Pr(H | O & E) = Pr(H | O). We defended this screening-off thesis (SOT) by discussing an example concerning smoking and cancer. Climenhaga argues that SOT is mistaken because it delivers the wrong verdict about a slightly different smoking-and-cancer case. He also considers a variant of SOT, called “SOT*”, and contends that it too gives the wrong result. We here reply to Climenhaga’s arguments and suggest that SOT provides a criticism of the widely held theory of inference called “inference to the best explanation”.
Developing a definition of group selection, and applying that definition to the dispute in the social sciences between methodological holists and methodological individualists, are the two goals of this paper. The definition proposed distinguishes between changes in groups that are due to group selection and changes in groups that are artefacts of selection processes occurring at lower levels of organization. It also explains why the existence of group selection is not implied by the mere fact that fitness values of organisms are sensitive to the composition of groups. And, lastly, the definition explains why group selection need not involve selection for altruism. Group selection is thereby seen as an evolutionary force which is objectively distinct from other evolutionary forces. Applying the distinction between group and individual selection to the holism/individualism dispute has the desirable result that the dispute is not decidable a priori. This way of looking at the dispute yields a conception of individualism which is untainted by atomism and a conception of holism which is unspoiled by hypostasis.
Maximum Parsimony (MP) and Maximum Likelihood (ML) are two methods for evaluating which phylogenetic tree is best supported by data on the characteristics of leaf objects (which may be species, populations, or individual organisms). MP has been criticized for assuming that evolution proceeds parsimoniously -- that if a lineage begins in state i and ends in state j, the way it got from i to j is by the smallest number of changes. ML has been criticized for needing to assume some model or other of the evolutionary process even though biologists often do not know what that process really is. This paper critically evaluates both criticisms.
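To make concrete what "the smallest number of changes" means, here is a minimal sketch (not from the paper) of how a parsimony score for a single character on one fixed, rooted binary tree can be computed using Fitch's small-parsimony algorithm; the tree shape and the tip states are hypothetical:

```python
# Fitch's small-parsimony algorithm: count the minimum number of character-state
# changes needed to account for the observed tip states on a given rooted binary tree.

def fitch_score(tree, tip_states):
    """`tree` is a nested tuple: a leaf is a tip name (string), an internal node
    is a pair (left_subtree, right_subtree). `tip_states` maps tip names to states."""
    changes = 0

    def state_set(node):
        nonlocal changes
        if isinstance(node, str):            # leaf: state set is just the observed state
            return {tip_states[node]}
        left, right = node
        s_left, s_right = state_set(left), state_set(right)
        common = s_left & s_right
        if common:                           # nonempty intersection: no change charged here
            return common
        changes += 1                         # disjoint sets: at least one change is required
        return s_left | s_right

    state_set(tree)
    return changes

# Hypothetical example: four taxa, one binary character with states 0/1.
tree = (("A", "B"), ("C", "D"))
tip_states = {"A": 0, "B": 0, "C": 1, "D": 0}
print(fitch_score(tree, tip_states))         # -> 1: a single change suffices on this tree
```

MP would score every candidate tree this way (summed over characters) and prefer the tree with the lowest total.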
All extant purely probabilistic measures of explanatory power satisfy the following technical condition: if Pr(E | H1) > Pr(E | H2) and Pr(E | ~H1) < Pr(E | ~H2), then H1’s explanatory power with respect to E is greater than H2’s explanatory power with respect to E. We argue that any measure satisfying this condition faces three serious problems – the Problem of Temporal Shallowness, the Problem of Negative Causal Interactions, and the Problem of Non-Explanations. We further argue that many such measures face a fourth problem – the Problem of Explanatory Irrelevance.
Sober [2011] argues that some causal statements are a priori true and that a priori causal truths are central to explanations in the theory of natural selection. Lange and Rosenberg [2011] criticize Sober's argument. They concede that there are a priori causal truths, but maintain that those truths are only ‘minimally causal’. They also argue that explanations that are built around a priori causal truths are not causal explanations, properly speaking. Here we criticize both of Lange and Rosenberg's claims.
Is all explanation causal explanation? Puzzles about barometer readings "explaining" storms and shadow lengths "explaining" flagpole heights make it attractive to think so. Wesley Salmon (1984) has endorsed this causal thesis. One way to test this thesis is to assess the explanatory import of pseudo-processes. I do so by discussing the concept of heritability, which measures a pseudo-process, and one role it played in the theory of natural selection: explaining response to selection. This will show, not just that heritability has heuristic or predictive utility, but that it can be explanatory. Possible responses to this case are also discussed.
The logical empiricists said some good things about epistemology and scientific method. However, they associated those epistemological ideas with some rather less good ideas about philosophy of language. There is something epistemologically suspect about statements that cannot be tested. But to say that those statements are meaningless is to go too far. And there is something impossible about trying to figure out which of two empirically equivalent theories is true. But to say that those theories are synonymous is also to go too far. My goal in this paper is not to resuscitate all these positivist ideas, but to revisit just one of them. Instrumentalism is the idea that theories are instruments for making predictions. Of course, no one would disagree that this is one of the things we use theories to do. In just the same way, no one could disagree with the emotivist claim that one of the things we do with ethical terms like "good" and "right" is to express our feelings of approval and disapproval. Instrumentalism and emotivism become contentious, and therefore interesting, when these claims are supplemented.
Decision theory requires agents to assign probabilities to states of the world and utilities to the possible outcomes of different actions. When agents commit to having the probabilities and/or utilities in a decision problem defined by objective features of the world, they may find themselves unable to decide which actions maximize expected utility. Decision theory has long recognized that work-around strategies are available in special cases; this is where dominance reasoning, minimax, and maximin play a role. Here we describe a different work-around, wherein a rational decision about one decision problem can be reached by “interpolating” information from another problem that the agent believes has already been rationally solved.
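For readers who want the baseline calculation the abstract presupposes, here is a toy sketch (with made-up states, probabilities, and utilities, not an example from the paper) of choosing the action that maximizes expected utility:

```python
# Expected-utility maximization over a tiny, hypothetical decision table.

probs = {"rain": 0.3, "shine": 0.7}                       # agent's probabilities over states
utils = {                                                 # utility of each (action, state) outcome
    ("take umbrella", "rain"): 5, ("take umbrella", "shine"): 3,
    ("leave umbrella", "rain"): 0, ("leave umbrella", "shine"): 6,
}

def expected_utility(action):
    return sum(probs[state] * utils[(action, state)] for state in probs)

actions = {action for action, _ in utils}
best = max(actions, key=expected_utility)
print(best, expected_utility(best))                       # -> 'leave umbrella', 4.2
```

The paper's point is about cases where the probabilities and/or utilities in such a table are not objectively defined, so this straightforward calculation cannot get started.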
Do traits evolve because they are good for the group, or do they evolve because they are good for the individual organisms that have them? The question is whether groups, rather than individual organisms, are ever “units of selection.” My exposition begins with the 1960s, when the idea that traits evolve because they are good for the group was criticized, not just for being factually mistaken, but for embodying a kind of confused thinking that is fundamentally at odds with the logic that Darwin’s theory requires. A counter-movement has arisen since the 1960s, called multi-level selection theory, according to which selection acts at multiple levels, including the level of the group. After discussing the 1960s attack on group selection and the concept’s subsequent revival, I examine Darwin’s views on the subject. I discuss what Darwin says about four examples: human morality, the barbed stinger of the honeybee, neuter workers in species of social insect, and the sterility of many interspecies hybrids. I argue that Darwin defended group selection hypotheses for the first three examples, but rejected group selection for the fourth. I also discuss Darwin’s general views about the role of group selection in evolution.
Philosophers have tended to discuss essentialism as if it were a global doctrine, a philosophy that, for some uniform reason, ought to be adopted by all sciences or by none. Popper (1972) has taken a globally negative stance, because he sees essentialism as a fundamental obstacle to scientific rationality. Quine (1953b, 1960), too, for a combination of semantic and epistemological reasons, wants to banish essentialism from the whole of scientific discourse. More recently, however, Putnam (1975) and Kripke (1972) have advocated essentialist doctrines and have claimed that it is the task of each science to investigate the essential properties of its constitutive natural kinds.
We conceptualize observation selection effects (OSEs) by considering how a shift from one process of observation to another affects discrimination-conduciveness, by which we mean the degree to which possible observations discriminate between hypotheses, given the observation process at work. OSEs in this sense come in degrees and are causal, where the cause is the shift in process, and the effect is a change in degree of discrimination-conduciveness. We contrast our understanding of OSEs with others that have appeared in the literature. After describing conditions of adequacy that an acceptable measure of degree of discrimination-conduciveness must satisfy, we use those conditions of adequacy to evaluate several possible measures. We also discuss how the effect of shifting from one observation process to another might be measured. We apply our framework to several examples, including the ravens paradox and the phenomenon of publication bias.
We argue in Roche and Sober (2013) that explanatoriness is evidentially irrelevant in that Pr(H | O&EXPL) = Pr(H | O), where H is a hypothesis, O is an observation, and EXPL is the proposition that if H and O were true, then H would explain O. This is a “screening-off” thesis. Here we clarify that thesis, reply to criticisms advanced by Lange (2017), consider alternative formulations of Inference to the Best Explanation, discuss a strengthened screening-off thesis, and consider how it bears on the claim that unification is evidentially relevant.
Carl Hempel (1965) argued that probabilistic hypotheses are limited in what they can explain. He contended that a hypothesis cannot explain why E is true if the hypothesis says that E has a probability less than 0.5. Wesley Salmon (1971, 1984, 1990, 1998) and Richard Jeffrey (1969) argued to the contrary, contending that P can explain why E is true even when P says that E’s probability is very low. This debate concerned noncontrastive explananda. Here, a view of contrastive causal explanation is described and defended. It provides a new limit on what probabilistic hypotheses can explain; the limitation is that P cannot explain why E is true rather than A if P assigns E a probability that is less than or equal to the probability that P assigns to A. The view entails that a true deterministic theory and a true probabilistic theory that apply to the same explanandum partition are such that the deterministic theory explains all the true contrastive propositions constructable from that partition, whereas the probabilistic theory often fails to do so.
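In symbols, the stated limitation (a paraphrase of the sentence above, not a formula quoted from the paper) is

\[
\Pr_{P}(E) \le \Pr_{P}(A) \;\Longrightarrow\; P \text{ cannot explain why } E \text{ rather than } A,
\]

where $\Pr_{P}(\cdot)$ is the probability that the hypothesis $P$ assigns.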