Some scientific categories seem to correspond to genuine features of the world and are indispensable for successful science in some domain; in short, they are natural kinds. This book gives a general account of what it is to be a natural kind and puts the account to work illuminating numerous specific examples.
The no-miracles argument and the pessimistic induction are arguably the main considerations for and against scientific realism. Recently, these arguments have been accused of embodying a familiar, seductive fallacy: in each case, we are tricked by a base rate fallacy, one much discussed in the psychological literature. In this paper we consider this accusation and use it as an explanation for why the two most prominent `wholesale' arguments in the literature seem irresolvable. Framed probabilistically, we can see very clearly why realists and anti-realists have been talking past one another. We then formulate a dilemma for advocates of either argument, answer potential objections to our criticism, discuss what remains (if anything) of these two major arguments, and then speculate about a future philosophy of science freed from them. In so doing, we connect the point about base rates to the wholesale/retail distinction; we believe it hints at how to distinguish profitable from unprofitable realism debates. In short, we offer a probabilistic analysis of the feeling of ennui afflicting contemporary philosophy of science.
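The base-rate point admits a compact Bayesian rendering (a minimal sketch in my own notation, not necessarily the authors'): write T for `the theory is approximately true' and S for `the theory is successful'. Then
\[
P(T \mid S) \;=\; \frac{P(S \mid T)\,P(T)}{P(S \mid T)\,P(T) + P(S \mid \neg T)\,P(\neg T)}.
\]
Even if success is far more likely given truth than given falsehood, a low base rate P(T) can leave P(T | S) low; inferring truth from success while ignoring P(T) is precisely the base rate fallacy.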
There is considerable disagreement about the epistemic value of novel predictive success, i.e., when a scientist predicts an unexpected phenomenon, experiments are conducted, and the prediction proves to be accurate. We survey the field on this question, noting both fully articulated views such as weak and strong predictivism, and more nascent views, such as pluralist reasons for the instrumental value of prediction. By examining the various reasons offered for the value of prediction across a range of inferential contexts, we can see that neither weak nor strong predictivism captures all of the available reasons for valuing prediction. A third path is presented: Pluralist Instrumental Predictivism, or PIP for short.
NK≠HPC. P. D. Magnus - 2014 - Philosophical Quarterly 64 (256): 471-477.
The Homeostatic Property Cluster (HPC) account of natural kinds has become popular since it was proposed by Richard Boyd in the late 1980s. Although it is often taken as defining natural kinds as such, it is easy enough to see that something's being a natural kind is neither necessary nor sufficient for its being an HPC. This paper argues that it is better not to understand HPCs as defining what it is to be a natural kind but instead as providing the ontological realization of (some) natural kinds.
Kyle Stanford has recently claimed to offer a new challenge to scientific realism. Taking his inspiration from the familiar Pessimistic Induction (PI), Stanford proposes a New Induction (NI). Contra Anjan Chakravartty’s suggestion that the NI is a ‘red herring’, I argue that it reveals something deep and important about science. The Problem of Unconceived Alternatives, which lies at the heart of the NI, yields a richer anti-realism than the PI. It explains why science falls short when it falls short, and so it might figure in the most coherent account of scientific practice. However, this best account will be anti-realist in some respects and about some theories. It will not be a sweeping anti-realism about all or most of science.
There are two senses of ‘what scientists know’: An individual sense (the separate opinions of individual scientists) and a collective sense (the state of the discipline). The latter is what matters for policy and planning, but it is not something that can be directly observed or reported. A function can be defined to map individual judgments onto an aggregate judgment. I argue that such a function cannot effectively capture community opinion, especially in cases that matter to us.
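As a toy illustration of such an aggregation function (an example of mine, not the paper's), simple majority rule maps individual verdicts $j_1, \ldots, j_n \in \{\text{accept}, \text{reject}\}$ to a collective verdict:
\[
f(j_1, \ldots, j_n) =
\begin{cases}
\text{accept} & \text{if } |\{\, i : j_i = \text{accept} \,\}| > n/2,\\
\text{reject} & \text{otherwise.}
\end{cases}
\]
The abstract's claim is that no function of this general form, however sophisticated, effectively captures community opinion in the cases that matter.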
There is a long tradition of trying to analyze art either by providing a definition (essentialism) or by tracing its contours as an indefinable, open concept (anti-essentialism). Both art essentialists and art anti-essentialists share an implicit assumption of art concept monism. This article argues that this assumption is a mistake. Species concept pluralism—a well-explored position in philosophy of biology—provides a model for art concept pluralism. The article explores the conditions under which concept pluralism is appropriate, and argues that they obtain for art. Art concept pluralism allows us to recognize that different art concepts are useful for different purposes, and what have been feuding definitions can be seen as characterizations of specific art concepts.
The accepted narrative treats John Stuart Mill’s Kinds as the historical prototype for our natural kinds, but Mill actually employs two separate notions: Kinds and natural groups. Considering these, along with the accounts of Mill’s nineteenth-century interlocutors, forces us to recognize two distinct questions. First, what marks a natural kind as worthy of inclusion in taxonomy? Second, what exists in the world that makes a category meet that criterion? Mill’s two notions offer separate answers to the two questions: natural groups for taxonomy and Kinds for ontology. This distinction is ignored in many contemporary debates about natural kinds and is obscured by the standard narrative that treats our natural kinds just as a development of Mill’s Kinds.
Homeostatic property clusters (HPCs) are offered as a way of understanding natural kinds, especially biological species. I review the HPC approach and then discuss an objection by Ereshefsky and Matthen, to the effect that an HPC qua cluster seems ill-fitted as a description of a polymorphic species. The standard response by champions of the HPC approach is to say that all members of a polymorphic species have things in common, namely dispositions or conditional properties. I argue that this response fails. Instances of an HPC kind need not all be similar in their exhibited properties. Instead, HPCs should be understood as unified by the underlying causal mechanism that maintains them. The causal mechanism can both produce and explain some systematic differences between a kind’s members. An HPC kind is best understood not as a single cluster of properties maintained in stasis by causal forces, but as a complex of related property clusters kept in relation by an underlying causal process. This approach requires recognizing that taxonomic systems serve both explanatory and inductive purposes.
forall x: Calgary is a full-featured textbook on formal logic. It covers key notions of logic such as consequence and validity of arguments, the syntax of truth-functional propositional logic TFL and truth-table semantics, the syntax of first-order (predicate) logic FOL with identity (first-order interpretations), translating (formalizing) English into TFL and FOL, and Fitch-style natural deduction proof systems for both TFL and FOL. It also deals with some advanced topics such as truth-functional completeness and modal logic. Exercises with solutions are available. It is provided in PDF (for screen reading and printing, plus a special version for dyslexics) and in LaTeX source code.
The problem of underdetermination is thought to hold important lessons for philosophy of science. Yet, as Kyle Stanford has recently argued, typical treatments of it offer only restatements of familiar philosophical problems. Following suggestions in Duhem and Sklar, Stanford calls for a New Induction from the history of science. It will provide proof, he thinks, of "the kind of underdetermination that the history of science reveals to be a distinctive and genuine threat to even our best scientific theories". This paper examines Stanford's New Induction and argues that it -- like the other forms of underdetermination that he criticizes -- merely recapitulates familiar philosophical conundra.
Given the fact that many people use Wikipedia, we should ask: Can we trust it? The empirical evidence suggests that Wikipedia articles are sometimes quite good but that they vary a great deal. As such, it is wrong to ask for a monolithic verdict on Wikipedia. Interacting with Wikipedia involves assessing where it is likely to be reliable and where not. I identify five strategies that we use to assess claims from other sources and argue that, to a greater or lesser degree, Wikipedia frustrates all of them. Interacting responsibly with something like Wikipedia requires new epistemic methods and strategies.
It is now commonly held that values play a role in scientific judgment, but many arguments for that conclusion are limited. First, many arguments do not show that values are, strictly speaking, indispensable. The role of values could in principle be filled by a random or arbitrary decision. Second, many arguments concern scientific theories and concepts which have obvious practical consequences, thus suggesting or at least leaving open the possibility that abstruse sciences without such a connection could be value-free. Third, many arguments concern the role values play in inferring from evidence, thus taking evidence as given. This paper argues that these limitations do not hold in general. There are values involved in every scientific judgment. They cannot even conceivably be replaced by a coin toss, they arise as much for exotic as for practical sciences, and they are at issue as much for observation as for explicit inference.
The underdetermination of theory by evidence is supposed to be a reason to rethink science. It is not. Many authors claim that underdetermination has momentous consequences for the status of scientific claims, but such claims are hidden in an umbra of obscurity and a penumbra of equivocation. So many different phenomena pass for `underdetermination' that it is tempting to think there is no unified phenomenon at all, so I begin by providing a framework within which all these worries can be seen as species of one genus: A claim of underdetermination involves (at least implicitly) a set of rival theories, a standard of responsible judgment, and a scope of circumstances in which responsible choice between the rivals is impossible. Within this framework, I show that one variety of underdetermination motivated modern scepticism and thus is a familiar problem at the heart of epistemology. I survey arguments that infer from underdetermination to some reëvaluation of science: top-down arguments infer a priori from the ubiquity of underdetermination to some conclusion about science; bottom-up arguments infer from specific instances of underdetermination to the claim that underdetermination is widespread, and then to some conclusion about science. The top-down arguments either fail to deliver underdetermination of any great significance or (as with modern scepticism) deliver some well-worn epistemic concern. The bottom-up arguments must rely on cases. I consider several promising cases and find them either to be so specialized that they cannot underwrite conclusions about science in general or not to be underdetermined at all. Neither top-down nor bottom-up arguments can motivate any deep reconsideration of science.
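One way to render that framework schematically (my notation, not the paper's): an underdetermination claim can be written as a triple, e.g.
\[
\mathrm{UD}(\mathcal{T}, s, \mathcal{C}) \quad\text{iff}\quad \text{for every circumstance } c \in \mathcal{C},\ \text{standard } s \text{ licenses no responsible choice among the rivals in } \mathcal{T}.
\]
Different worries about underdetermination then differ in which rivals $\mathcal{T}$, which standard $s$, and which scope of circumstances $\mathcal{C}$ they have in view.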
According to the standard narrative, natural kind is a technical notion that was introduced by John Stuart Mill in the 1840s, and the recent craze for natural kinds, launched by Putnam and Kripke, is a continuation of that tradition. I argue that the standard narrative is mistaken. The Millian tradition of kinds was not particularly influential in the 20th century, and the Putnam-Kripke revolution did not clearly engage with even the remnants that were left of it. The presently active tradition of natural kinds is less than half a century old. Recognizing this might help us better appreciate both Mill and natural kinds.
William James’ argument against William Clifford in The Will to Believe is often understood in terms of doxastic efficacy, the power of belief to influence an outcome. Although that is one strand of James’ argument, there is another which is driven by ampliative risk. The second strand of James’ argument, when applied to scientific cases, is tantamount to what is now called the Argument from Inductive Risk. Either strand of James’ argument is sufficient to rebut Clifford's strong evidentialism and show that it is sometimes permissible to believe in the absence of compelling evidence. However, the two considerations have different scope and force. Doxastic efficacy applies in only some cases but allows any values to play a role in determining belief; risk applies in all cases but only allows particular conditional values to play a role.
Cover versions form a loose but identifiable category of tracks and performances. We distinguish four kinds of covers and argue that they mark important differences in the modes of evaluation that are possible or appropriate for each: mimic covers, which aim merely to echo the canonical track; rendition covers, which change the sound of the canonical track; transformative covers, which diverge so much as to instantiate a distinct, albeit derivative song; and referential covers, which not only instantiate a distinct song, but for which the new song is in part about the original song. In order to allow for the very possibility of transformative and referential covers, we argue that a cover is characterized by relation to a canonical track rather than merely by being a new instance of a song that had been recorded previously.
This paper offers a general characterization of underdetermination and gives a prima facie case for the underdetermination of the topology of the universe. A survey of several philosophical approaches to the problem fails to resolve the issue: the case involves the possibility of massive reduplication, but Strawson on massive reduplication provides no help here; it is not obvious that any of the rival theories are to be preferred on grounds of simplicity; and the usual talk of empirically equivalent theories misses the point entirely. (If the choice is underdetermined, then the theories are not empirically equivalent!) Yet the thought experiment is analogous to a live scientific possibility, and actual astronomy faces underdetermination of this kind. This paper concludes by suggesting how the matter can be resolved, either by localizing the underdetermination or by defeating it entirely. Outline: Introduction; A brief preliminary; Around the universe in 80 days; Some attempts at resolving the problem (4.1 Indexicality, 4.2 Simplicity, 4.3 Empirical equivalence, 4.4 Is this just a philosophers' fantasy?); Move along...; ...nothing to see here (6.1 Rules of repetition, 6.2 Some possible replies); Conclusion.
Nelson Goodman's distinction between autographic and allographic arts is appealing, we suggest, because it promises to resolve several prima facie puzzles. We consider and rebut a recent argument that alleges that digital images explode the autographic/allographic distinction. Regardless, there is another familiar problem with the distinction, especially as Goodman formulates it: it seems to entirely ignore an important sense in which all artworks are historical. We note in reply that some artworks can be considered both as historical products and as formal structures. Talk about such works is ambiguous between the two conceptions. This allows us to recover Goodman's distinction: art forms that are ambiguous in this way are allographic. With that formulation settled, we argue that digital images are allographic. We conclude by considering the objection that digital photographs, unlike other digital images, would count as autographic by our criterion; we reply that this points to the vexed nature of photography rather than any problem with the distinction.
The underdetermination of theory by data obtains when, inescapably, evidence is insufficient to allow scientists to decide responsibly between rival theories. One response to would-be underdetermination is to deny that the rival theories are distinct theories at all, insisting instead that they are just different formulations of the same underlying theory; we call this the identical rivals response. An argument adapted from John Norton suggests that the response is presumptively always appropriate, while another from Larry Laudan and Jarrett Leplin suggests that the response is never appropriate. Arguments from Einstein for the special and general theories of relativity may fruitfully be seen as instances of the identical rivals response; since Einstein’s arguments are generally accepted, the response is at least sometimes appropriate. But when is it appropriate? We attempt to steer a middle course between Norton’s view and that of Laudan and Leplin: the identical rivals response is appropriate when there is good reason for adopting a parsimonious ontology. Although in simple cases the identical rivals response need not involve any ontological difference between the theories, in actual scientific cases it typically requires treating apparent posits of the various theories as mere verbal ornaments or computational conveniences. Since these would-be posits are not now detectable, there is no perfectly reliable way to decide whether we should eliminate them or not. As such, there is no rule for deciding whether the identical rivals response is appropriate or not. Nevertheless, there are considerations that suggest for and against the response; we conclude by suggesting two of them.
It seems obvious that a community of one thousand scientists working together to make discoveries and solve puzzles should arrange itself differently than would one thousand scientist-hermits working independently. Because of limited time, resources, and attention, an independent scientist can explore only some of the possible approaches to a problem. Working alone, each hermit would explore the most promising approaches. They would needlessly duplicate the work of others and would be unlikely to develop approaches which look unpromising but really have tremendous potential. Contrariwise, a large community can more rigorously explore the space of possible approaches. Most scientists should work on the most promising approaches, but a smaller number can be committed to approaches that initially look less promising. Exploratory work can reveal if one of those initially unpromising approaches has unrealized potential, and more scientists can adopt it once its potential becomes more apparent.
According to many philosophers, psychological explanation can legitimately be given in terms of belief and desire, but not in terms of knowledge. To explain why someone does what they do (so the common wisdom holds) you can appeal to what they think or what they want, but not to what they know. Timothy Williamson has recently argued against this view. Knowledge, Williamson insists, plays an essential role in ordinary psychological explanation. Williamson's argument works on two fronts. First, he argues against the claim that, unlike knowledge, belief is "composite" (representable as a conjunction of a narrow and a broad condition). Belief's failure to be composite, Williamson thinks, undermines the usual motivations for psychological explanation in terms of belief rather than knowledge. Unfortunately, we claim, the motivations Williamson argues against do not depend on the claim that belief is composite, so what he says leaves the case for a psychology of belief unscathed. Second, Williamson argues that knowledge can sometimes provide a better explanation of action than belief can. We argue that, in the cases considered, explanations that cite beliefs (but not knowledge) are no less successful than explanations that cite knowledge. Thus, we conclude that Williamson's arguments fail both coming and going: they fail to undermine a psychology of belief, and they fail to motivate a psychology of knowledge.
If two theory formulations are merely different expressions of the same theory, then any problem of choosing between them cannot be due to the underdetermination of theories by data. So one might suspect that we need to be able to tell distinct theories from mere alternate formulations before we can say anything substantive about underdetermination, that we need to solve the problem of identical rivals before addressing the problem of underdetermination. Here I consider two possible solutions: Quine proposes that we call two theories identical if they are equivalent under a reconstrual of predicates, but this would mishandle important cases. Another proposal is to defer to the particular judgements of actual scientists. Consideration of an historical episode, the alleged equivalence of wave and matrix mechanics, shows that this second proposal also fails. Nevertheless, I suggest, the original suspicion is wrong; there are ways to enquire into underdetermination without having solved the problem of identical rivals.
Philip Kitcher develops the Galilean Strategy to defend realism against its many opponents. I explore the structure of the Galilean Strategy and consider it specifically as an instrument against constructive empiricism. Kitcher claims that the Galilean Strategy underwrites an inference from success to truth. We should resist that conclusion, I argue, but the Galilean Strategy should lead us by other routes to believe in many things about which the empiricist would rather remain agnostic. Outline: 1 Target: empiricism; 2 The Galilean Strategy; 3 Strengthening the argument; 4 Success and truth; 5 Conclusion.
This discussion note addresses Caleb Hazelwood’s ‘Practice-Centered Pluralism and a Disjunctive Theory of Art’. Hazelwood advances a disjunctive definition of art on the basis of an analogy with species concept pluralism in the philosophy of biology. We recognize the analogy between species and art, we applaud attention to practice, and we are bullish on pluralism—but it is a mistake to take these as the basis for a disjunctive definition.
In this paper, I explore and defend the idea that musical works are historical individuals. Guy Rohrbaugh (2003) proposes this for works of art in general. Julian Dodd (2007) objects that the whole idea is outré metaphysics, that it is too far beyond the pale to be taken seriously. Their disagreement could be seen as a skirmish in the broader war between revisionists and reactionaries, a conflict about which of metaphysics and art should trump the other when there is a conflict. That dispute is a matter of philosophical methodology as much as it is a dispute about art. I argue that the ontology of works as individuals need not be dunked in that morass. My primary strategy is to show, contra Dodd's accusation, that historical individuals are familiar parts of the world. Although the ontological details are open to debate, it is the standard opinion of biologists that biological species are historical individuals. So there is no conflict here between fidelity to art and respectable metaphysics. What suits species will fit musical works as well.
There are two ways that we might respond to the underdetermination of theory by data. One response, which we can call the agnostic response, is to suspend judgment: "Where scientific standards cannot guide us, we should believe nothing". Another response, which we can call the fideist response, is to believe whatever we would like to believe: "If science cannot speak to the question, then we may believe anything without science ever contradicting us". C.S. Peirce recognized these options and suggested evading the dilemma. It is a Logical Maxim, he suggests, that there could be no genuine underdetermination. This is no longer a viable option in the wake of developments in modern physics, so we must face the dilemma head on. The agnostic and fideist responses to underdetermination represent fundamentally different epistemic viewpoints. Nevertheless, the choice between them is not an unresolvable struggle between incommensurable worldviews. There are legitimate considerations tugging in each direction. Given the balance of these considerations, there should be a modest presumption of agnosticism. This may conflict with Peirce's Logical Maxim, but it preserves all that we can preserve of the Peircean motivation.
The Argument from Inductive Risk (AIR) is taken to show that values are inevitably involved in making judgements or forming beliefs. After reviewing this conclusion, I pose cases which are prima facie counterexamples: the unreflective application of conventions, use of black-boxed instruments, reliance on opaque algorithms, and unskilled observation reports. These cases are counterexamples to the AIR posed in ethical terms as a matter of personal values. Nevertheless, it need not be understood in those terms. The values which load a theory choice may be those of institutions or past actors. This means that the challenge of responsibly handling inductive risk is not merely an ethical issue, but is also social, political, and historical.
In late 2014, the jazz combo Mostly Other People Do the Killing released Blue—an album that is a note-for-note remake of Miles Davis's 1959 landmark album Kind of Blue. This is a thought experiment made concrete, raising metaphysical puzzles familiar from discussion of indiscernible counterparts. It is an actual album, rather than merely a concept, and so poses the aesthetic puzzle of why one would ever actually listen to it.
It has been common wisdom for centuries that scientific inference cannot be deductive; if it is inference at all, it must be a distinctive kind of inductive inference. According to demonstrative theories of induction, however, important scientific inferences are not inductive in the sense of requiring ampliative inference rules at all. Rather, they are deductive inferences with sufficiently strong premises. General considerations about inferences suffice to show that there is no difference in justification between an inference construed demonstratively and one construed ampliatively. The inductive risk may be shouldered by premises or rules, but it cannot be shirked. Demonstrative theories of induction might, nevertheless, better describe scientific practice. And there may be good methodological reasons for constructing our inferences one way rather than the other. By exploring the limits of these possible advantages, I argue that scientific inference is neither of essence deductive nor of essence inductive.
Peter Baumann offers the tantalizing suggestion that Thomas Reid is almost, but not quite, a pragmatist. He motivates this claim by posing a dilemma for common sense philosophy: Will it be dogmatism or scepticism? Baumann claims that Reid points to but does not embrace a pragmatist third way between these unsavory options. If we understand `pragmatism' differently than Baumann does, however, we need not be so equivocal in attributing it to Reid. Reid makes what we could call an argument from practical commitment, and this is plausibly an instance of what William James calls the pragmatic method.
Background theories in science are used both to prove and to disprove that theory choice is underdetermined by data. The alleged proof appeals to the fact that experiments to decide between theories typically require auxiliary assumptions from other theories. If this generates a kind of underdetermination, it shows that standards of scientific inference are fallible and must be appropriately contextualized. The alleged disproof appeals to the possibility of suitable background theories to show that no theory choice can be timelessly or noncontextually underdetermined: Foreground theories might be distinguished against different backgrounds. Philosophers have often replied to such a disproof by focussing their attention not on theories but on Total Sciences. If empirically equivalent Total Sciences were at stake, then there would be no background against which they could be differentiated. I offer several reasons to think that Total Science is a philosophers' fiction. No respectable underdetermination can be based on it.
Thomas Reid is often misread as defending common sense, if at all, only by relying on illicit premises about God or our natural faculties. On these theological or reliabilist misreadings, Reid makes common sense assertions where he cannot give arguments. This paper attempts to untangle Reid's defense of common sense by distinguishing four arguments: (a) the argument from madness, (b) the argument from natural faculties, (c) the argument from impotence, and (d) the argument from practical commitment. Of these, (a) and (c) do rely on problematic premises that are no more secure than claims of common sense itself. Yet (b) and (d) do not. This conclusion can be established directly by considering the arguments informally, but one might still worry that there is an implicit premise in them. In order to address this concern, I reconstruct the arguments in the framework of subjective Bayesianism. The worry becomes this: Do the arguments rely on specific values for the prior probability of some premises? Reid's appeals to our prior cognitive and practical commitments do not. Rather than relying on specific probability assignments, they draw on things that are part of the Bayesian framework itself, such as the nature of observation and the connection between belief and action. Contra the theological or reliabilist readings, the defense of common sense does not require indefensible premises.
Christy Mag Uidhir has recently argued (a) that there is no in principle aesthetic difference between a live performance and a recording of that performance, and (b) that the proper aesthetic object is a type which is instantiated by the performance and potentially repeatable when recordings are played back. This paper considers several objections to (a) and finds them lacking. I then consider improvised music, a subject that Mag Uidhir explicitly brackets in his discussion. Improvisation reveals problems with (b), because the performance-event and the performance-type are distinct but equally proper aesthetic objects.
One approach to science treats science as a cognitive accomplishment of individuals and defines a scientific community as an aggregate of individual inquirers. Another treats science as a fundamentally collective endeavor and defines a scientist as a member of a scientific community. Distributed cognition has been offered as a framework that could be used to reconcile these two approaches. Adam Toon has recently asked if the cognitive and the social can be friends at last. He answers that they probably cannot, posing objections to the would-be rapprochement. We clarify both the animosity and the tonic proposed to resolve it, ultimately arguing that worries raised by Toon and others are uncompelling.
This paper argues against the common, often implicit view that theories are some specific kind of thing. Instead, I argue for theory concept pluralism: There are multiple distinct theory concepts which we legitimately use in different domains and for different purposes, and we should not expect this to change. The argument goes by analogy with species concept pluralism, a familiar position in philosophy of biology. I conclude by considering some consequences for philosophy of science if theory concept pluralism is correct.
Typical discussions of virtual reality (VR) fixate on technology for providing sensory stimulation of a certain kind. They thus fail to understand reality as the place wherein we live and work, misunderstanding it instead as merely a sort of presentation. The first half of the paper examines popular conceptions of VR. The most common conception is a shallow one according to which VR is a matter of simulating appearances. Yet there is, even in popular depictions, a second, more subtle conception according to which VR is a matter of facilitating new kinds of interaction. The latter half of the paper turns to questions about the contemporary technology of Internet chatrooms. The fact that chatrooms can be used in certain ways suggests something about the prospects for VR. The penultimate section asks whether chatrooms may legitimately be thought of as places. (In a sense, they may.) The final section asks whether cybersex may legitimately be thought of as sex. (Again, yes.) Chatroom technology thus provides an argument for the second conception of VR over its much ballyhooed rival.
Some philosophers think that there is a gap between is and ought which necessarily makes normative enquiry a different kind of thing than empirical science. This position gains support from our ability to explicate our inferential practices in a way that makes it impermissible to move from descriptive premises to a normative conclusion. But we can also explicate them in a way that allows such moves. So there is no categorical answer as to whether there is or is not a gap. The question of an is-ought gap is a practical and strategic matter rather than a logical one, and it may properly be answered in different ways for different questions or at different times.
A considerable literature has grown up around the claim of Uniqueness, according to which evidence rationally determines belief. It is opposed to Permissivism, according to which evidence underdetermines belief. This paper highlights an overlooked third possibility, according to which there is no rational doxastic attitude. I call this 'Nihilism'. I argue that adherents of the other two positions ought to reject it but that it might, nevertheless, obtain at least sometimes.
Sol LeWitt is probably most famous for his wall drawings. They are an extension of work he had done in sculpture and on paper, in which a simple rule specifies permutations and variations of elements. With wall drawings, the rule is given for marks to be made on a wall. We should distinguish these algorithmic works from impossible-to-implement instruction works and works realized by following preparatory sketches. If the core feature of a wall drawing is that it is algorithmic, then some of LeWitt's later works are wall drawings in name only.
Part of a book symposium on Anjan Chakravartty's Scientific ontology: integrating naturalized metaphysics and voluntarist epistemology (Oxford University Press, 2017).
Although some authors hold that natural kinds are necessarily relative to disciplinary domains, many authors presume that natural kinds must be absolute, categorical features of reality, often assuming that without even mentioning the alternative. Recognizing both possibilities, one may ask whether the difference really matters. I argue that it does. Looking at recent arguments about natural kind realism, I argue that we can best make sense of the realism question by thinking of natural kindness as a relation that holds between a category and a domain.
Eric Barnes’ The Paradox of Predictivism is concerned primarily with two facts: predictivism and pluralism. In the middle part of the book, he peers through these two lenses at the tired realist scarecrow of the no-miracles argument. He attempts to reanimate this weatherworn realist argument, contra suggestions by people like me that it should be abandoned. In this paper, I want to get clear on Barnes’ contribution to the debate. He focuses on what he calls the miraculous endorsement argument, which explains not the success of a specific theory but instead the history of successes for an entire research program. The history of successes is explained by reliable and improving methods, which are the flipside of approximately true background theories. Yet, as Barnes notes, the whole story must begin with methods that are at least minimally reliable. Barnes demands that the realist explain the origin of the minimally reliable take-off point, and he suggests a way that the realist might do so. I contend that his explanation still relies on contingent developments and so fails to completely explain the development of take-off theories. However, this line of argument digs into familiar details of the no-miracles argument and overlooks what’s new in Barnes’ approach. By calling attention to pluralism, he reminds us that we need an account of scientific expertise. This is important, I suggest, because expertise is not indefinite. We do not trust specific experts for everything, but only for things within the bounds of their expertise. Drawing these boundaries relies on our own background theories and is only likely to be reliable if our background theories are approximately true. I argue, then, that pluralism gives us reason to be realists.
A discussion and qualified defense of Philip Kitcher on scientific significance and ‘well-ordered science.’ (Qualified because I argue that Kitcher’s position is made unstable by his reliance on the largely unanalyzed notion of natural curiosity.)
Within philosophy of science, debates about realism often turn on whether posited entities exist or whether scientific claims are true. Natural kinds tend to be investigated by philosophers of language or metaphysicians, for whom semantic or ontological considerations can overshadow scientific ones. Since science crucially involves dividing the world up into categories of things, however, issues concerning classification ought to be central for philosophy of science. Muhammad Ali Khalidi's book fills that gap, and I commend it to readers with an interest in scientific taxonomy and natural kinds. He works through general issues to craft a useful philosophical conception and uses the account to think through a wide range of specific examples. Although there are differences in the details, that one-sentence summary of Khalidi's book could just as well describe my own recent monograph on natural kinds.
Debates about the underdetermination of theory by data often turn on specific examples. Cases invoked often enough become familiar, even well worn. Since Helen Longino’s discussion of the case, the connection between prenatal hormone levels and gender-linked childhood behaviour has become one of these stock examples. However, as I argue here, the case is not genuinely underdetermined. We can easily imagine a possible experiment to decide the question. The fact that we would not perform this experiment is a moral, rather than epistemic, point. Finally, I suggest that the ‘underdetermination’ of the case may be inessential for Longino to establish her central claim about it.
The Bare Theory was offered by David Albert as a way of standing by the completeness of quantum mechanics in the face of the measurement problem. This paper surveys objections to the Bare Theory that recur in the literature: what will here be called the oddity objection, the coherence objection, and the context-of-the-universe objection. Critics usually take the Bare Theory to have unacceptably bizarre consequences, but to be free from internal contradiction. Bizarre consequences need not be decisive against the Bare Theory, but a further objection—dubbed here the calibration objection—has been underestimated. This paper argues that the Bare Theory is not only odd but also inconsistent. We can imagine a successor to the Bare Theory—the Stripped Theory—which avoids the objections and fulfills the original promise of the Bare Theory, but at the cost of amplifying the bizarre consequences. The Stripped Theory is either a stunning development in our understanding of the world or a reductio disproving the completeness of quantum mechanics. Outline: The Bare Theory; The usual objections; The calibration objection; Beyond the Bare Theory.
An introduction to sentential logic and first-order predicate logic with identity, logical systems that significantly influenced twentieth-century analytic philosophy. After working through the material in this book, a student should be able to understand most quantified expressions that arise in their philosophical reading. This book treats symbolization, formal semantics, and proof theory for each language. The discussion of formal semantics is more direct than in many introductory texts. Although forall x does not contain proofs of soundness and completeness, it lays the groundwork for understanding why these are things that need to be proven. The book highlights the choices involved in developing sentential and predicate logic. Students should realize that these two are not the only possible formal languages. In translating to a formal language, we simplify and profit in clarity. The simplification comes at a cost, and different formal languages are suited to translating different parts of natural language.