The connection between whole and part is intimate: not only can we share the same space, but I’m incapable of leaving my parts behind; settle the nonmereological facts and you thereby settle what is a part of what; wholes don’t seem to be an additional ontological commitment over their parts. Composition as identity promises to explain this intimacy. But it threatens to make the connection too intimate, for surely the parts could have made a different whole and the whole have had different parts. In this paper I attempt to offer an account of parthood that is intimate enough but not too intimate: the parts generate the whole, but they are not themselves the whole.
Abstract: “The Obvious Invisibility of the Relationship Between Technology and Social Values”

We all too often assume that technology is the product of objective scientific research. And we assume that technology’s moral value lies only in the moral character of its user. Yet, in order to objectify technology in a manner that removes it from a moral realm, we rely on the assumption that technology is value neutral, i.e., it is independent of all contexts other than the context in which it is used morally or immorally by someone. However, there is a power to technology. At the very least, technology should be seen as reflecting the values that it arises from: a developmental context. Consequently, we can make the invisible moral realm visible as we investigate the relationship between culture and technology. Pragmatically, we can trace how cultural values inform what counts as a scientific question. When we look at the 1980s epidemiological model for AIDS, we see how the presuppositions within a culture frame the type of question asked, i.e., in what population do we see AIDS occurring? This question initially leads scientific research away from actual individual behaviors, until a different question is asked that leads research toward behavioral epidemiological models rather than population models. Secondly, we can make visible the circumstance that what constitutes a scientific problem has a direct relationship to the types of technologies that are developed. We can more clearly see this circular relationship between culture and technology by looking at clinical research and practice regarding heart disease. Men who were dying of diagnosed heart disease were showing symptoms of problems in the large arteries of the heart. Women were also dying of heart disease, but since their large arteries were not overwhelmingly compromised, they were not being properly diagnosed until recently. Different technologies and protocols were needed to “see” different symptoms in the smaller vessels of women. Thirdly, I show how the types of technology developed can, in turn, create new values or enhance preexisting values within a culture. DNA research can be used to investigate long-unanswered questions about qualitative dimensions of life in quantitative manners. Yet long-held unanswered questions about a biological basis of race or intelligence are themselves rarely challenged as uninteresting or misguided, even when they are tied to prejudicial qualitative assumptions, if the chance of a “scientific” answer is possible. When existing technologies or modifications of existing technologies can be used, they seem to direct human reactions and behaviors in predictable ways. The technologies themselves may be said to direct culture in ethically valenced manners.
Chapter: WHITE PRIVILEGE AND THE COLOR OF FEAR. This chapter focuses on the role that power, innocence, and ignorance play in maintaining the position of white privilege. There are times when white people use their privilege in ways that overtly attempt to put and keep people of color in their places, but more often white privilege is less obvious. White privilege does not stand out in white people’s behavior at all times. When white behavior is normalized, it is masked. At these times, white privilege and power hide behind the masks of innocence and the masks of ignorance. White people can mask from themselves and others their location in relation to power. In the film The Color of Fear, David C. hides his power. As he hides his power, he keeps his privilege invisible, that is, behind a mask. In this chapter, we focus on the masking and unmasking of innocence and ignorance to get a better look at how the process of normalization of these masks makes whiteness powerful and consequently hides white privilege. The logic of power and privilege is reflected in the following relationship: innocence + ignorance = invisibility (of white privilege).
How can researchers use race, as they do now, to conduct health-care studies when its very definition is in question? The belief that race is a social construct without “biological authenticity,” though widely shared across disciplines in social science, is not subscribed to by traditional science. Yet with an interdisciplinary approach, the two horns of the social construct/genetics dilemma of race are not mutually exclusive. We can use traditional science to provide a rigorous framework and use a social-science approach so that “invisible” factors are used to adjust the design of studies on an as-needed basis. One approach is to first observe health-care outcomes and then categorize the outcomes, thus removing genetic differences as racial proxies from the design of the study. From the outcomes, we can then determine if there is a pattern of conceivable racial categories. If needed, we can apply dynamic notions of race to acknowledge bias without prejudice. We can use them constructively to improve outcomes and reduce racial disparities. Another approach is nearly identical but does not consider race at all: while analyzing outcomes, we can determine if there are biological differences significant enough to identify classifications of humans. That is, we look for genetic patterns in the outcomes and classify only those patterns. There is no attempt to link those patterns to race.
This paper compares two alternative explanations of pragmatic encroachment on knowledge (i.e., the claim that whether an agent knows that p can depend on pragmatic factors). After reviewing the evidence for such pragmatic encroachment, we ask how it is best explained, assuming it obtains. Several authors have recently argued that the best explanation is provided by a particular account of belief, which we call pragmatic credal reductivism. On this view, what it is for an agent to believe a proposition is for her credence in this proposition to be above a certain threshold, a threshold that varies depending on pragmatic factors. We show that while this account of belief can provide an elegant explanation of pragmatic encroachment on knowledge, it is not alone in doing so, for an alternative account of belief, which we call the reasoning disposition account, can do so as well. And the latter account, we argue, is far more plausible than pragmatic credal reductivism, since it accords far better with a number of claims about belief that are very hard to deny.
Ross Cameron proposes to reconcile presentism and truth-maker theory by invoking temporal distributional properties, instantiated by present entities, as the truth-makers for truths about the past. This chapter argues that Cameron's proposal fails because objects can change which temporal distributional properties they instantiate, and this entails that the truth-values of truths about the past can change in an objectionable way.
Ross Cameron's The Moving Spotlight argues that of the three most common dynamical theories of time – presentism, the growing block theory and the moving spotlight theory (MST) – his version of the MST is the best. This paper focuses on Cameron's response to the epistemic objection. It considers two of Cameron's arguments: that a standard version of the MST can successfully resist the epistemic objection, and that Cameron's preferred version of the MST has an additional avenue open to it for resisting the objection, one that is consistent with an appealing account of truthmaking. I argue that neither argument succeeds.
According to truthmaker theory, particular truths are true in virtue of the existence of particular entities. Truthmaker maximalism holds that this is so for all truths. Negative existential and other ‘negative’ truths threaten the position. Despite this, maximalism is an appealing thesis for truthmaker theorists. This motivates interest in parsimonious maximalist theories, which do not posit extra entities for truthmaker duty. Such theories have been offered by David Lewis and Gideon Rosen, Ross Cameron, and Jonathan Schaffer. But these theories cannot be sustained, I’ll argue, and hence maximalism comes with a serious ontological cost. Neither Armstrong’s invocation of totality facts nor the Martin-Kukso line on absences can meet this cost satisfactorily. I’ll claim that negative facts are the best (and perhaps only) way out of the problem for the truthmaker maximalist.
I apply the notion of truthmaking to the topic of fundamentality by articulating a truthmaker theory of fundamentality, according to which some truths are truth-grounded in certain entities, while the ones that are not stand in a metaphysical-semantic relation to the truths that are. I motivate this view by critically discussing two problems with Ross Cameron's truthmaker theory of fundamentality. I then defend this view against Theodore Sider's objection that the truthmaking approach to fundamentality violates the purity constraint. Truthmaker theorists can have a trouble-free theory of fundamentality.
Presentists face a familiar problem. If only present objects exist, then what 'makes true' our true claims about the past? According to Ross Cameron, the 'truth-makers' for past and future tensed propositions are presently instantiated Temporal Distributional Properties. We present an argument against Cameron's view. There are two ways that we might understand the term 'distribute' as it appears. On one reading, the resulting properties are not up to the task of playing the truth-maker role; on the other, the properties are incompatible with presentism.
The Epistemic Objection says that certain theories of time imply that it is impossible to know which time is absolutely present. Standard presentations of the Epistemic Objection are elliptical—and some of the most natural premises one might fill in to complete the argument end up leading to radical skepticism. But there is a way of filling in the details which avoids this problem, using epistemic safety. The new version has two interesting upshots. First, while Ross Cameron alleges that the Epistemic Objection applies to presentism as much as to theories like the growing block, the safety version does not overgeneralize this way. Second, the Epistemic Objection does generalize in a different, overlooked way. The safety objection is a serious problem for a widely held combination of views: “propositional temporalism” together with “metaphysical eternalism”.
The Growing Block Theory of time says that the metaphysical openness of the future should be understood in terms of there not being any future objects or events. But in a series of works, Ross Cameron, Elizabeth Barnes, and Robbie Williams have developed a competing view that understands metaphysical openness in terms of it being indeterminate whether there exist future objects or events. I argue that the three reasons they give for preferring their account are not compelling. And since the notion of “indeterminate existence” suffers conceptual problems, the Growing Block is the preferable view.
This paper defends Priorianism, a theory in the philosophy of time which combines three theses: first, that there is a metaphysical distinction between the present time and non-present times; second, that there are temporary propositions, that is, propositions that change in truth-value simpliciter over time; and third, that there is change over time only if there are temporary propositions. Priorianism is accepted by many Presentists, Growing Block Theorists, and Moving Spotlight Theorists. However, it is difficult to defend the view without appealing to premises that those who reject the view find controversial. My aim in this paper is to defend Priorianism in a way that largely avoids appealing to such premises. I do three things: first (Section 1), I describe the component theses of Priorianism and the relations between them. Next (Section 2), I show how Priorians can respond to the argument that the B-theory implies that there are temporary propositions, and therefore satisfies the Priorian condition for there being change over time. Finally (Section 3), I defend the Priorian thesis that there is change over time only if there are temporary propositions against an alternative principle of change defended by Ross Cameron (The Moving Spotlight, 2015).
Truthmaker says that things, broadly construed, are the ontological grounds of truth and, therefore, that things make truths true. Recently, there have been a number of arguments purporting to show that if one embraces Truthmaker, then one ought to embrace Truthmaker Maximalism—the view that all non-analytic propositions have truthmakers. But then if one embraces Truthmaker, one ought to think that negative existentials have truthmakers. I argue that this is false. I begin by arguing that recent attempts by Ross Cameron and Jonathan Schaffer to provide negative existentials with truthmakers fail. I then argue that the conditional—if one embraces Truthmaker, then one ought to embrace Truthmaker Maximalism—is false by considering worlds where very little, if anything at all, exists. The conclusion is that thinking that negative existentials do not have truthmakers, and therefore rejecting Truthmaker Maximalism, need not worry Truthmaker embracers.
Some facts ground other facts. Some fact is fundamental iff there are no other facts which partially or fully ground that fact. According to metaphysical foundationalism, every non-fundamental fact is fully grounded by some fundamental fact. In this paper I examine and defend some neglected considerations which might be made in favor of metaphysical foundationalism. Building off of work by Ross Cameron, I suggest that foundationalist theories are more unified than, and so in one important respect simpler than, non-foundationalist theories, insofar as foundationalist theories allow us to derive all non-fundamental facts from some fundamental fact. Non-foundationalist theories can enjoy a similar sort of theoretical unification only by taking on objectionable metaphysical laws.
In 2009, we celebrated the bicentennial of the birth of Charles Darwin and the sesquicentennial of the publication of his book The Origin of Species. This seems to be a good opportunity to evaluate the importance of Darwin’s work for the social sciences, mainly for philosophical anthropology. The aim of this paper is to discuss the traditional anthropocentric conceptions of man, which consider our biological species to be exceptional – qualitatively higher than other living organisms. Over the course of the 20th century, philosophers have argued for the claim in a number of ways: man is supposed to be the only animal capable of laughter, love, thought or language. It has also been claimed that only members of Homo sapiens have free will, morality or religion. The paper refutes these arguments on the basis of contemporary studies by M. Davila Ross, H. Fischer, K. Arnold, K. Zuberbühler, G. Konopka, B. Libet, F. De Waal, M. Bekoff, P. Boyer, B. Hood, G. Paul and others. The author argues that the only differences between man and other animals are quantitative, and therefore the nature of humans should be studied using the methods of naturalized philosophy, with respect to the natural sciences.
Is there a distinctively epistemic kind of blame? It has become commonplace for epistemologists to talk about epistemic blame, and to rely on this notion for theoretical purposes. But not everyone is convinced. Some of the most compelling reasons for skepticism about epistemic blame focus on disanalogies, or asymmetries, between the moral and epistemic domains. In this paper, I defend the idea that there is a distinctively epistemic kind of blame. I do so primarily by developing an account of the nature of epistemic blame. My account draws on a prominent line of theorizing in moral philosophy that ties blame to our relationships with one another. I argue that with my account of epistemic blame on hand, the most compelling worries about epistemic blame can be deflated. There is a distinctively epistemic kind of blame.
In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call “transformational abstraction”. Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to “nuisance variation” in input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that rather than simply implementing 80s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing ripe with philosophical and psychological significance, because it is significantly better suited to depict the generic mechanism responsible for this important kind of psychological processing in the brain.
This paper provides a critical overview of recent work on epistemic blame. The paper identifies key features of the concept of epistemic blame and discusses two ways of motivating the importance of this concept. Four different approaches to the nature of epistemic blame are examined. Central issues surrounding the ethics and value of epistemic blame are identified and briefly explored. In addition to providing an overview of the state of the art of this growing but controversial field, the paper highlights areas where future work is needed.
One challenge in developing an account of the nature of epistemic blame is to explain what differentiates epistemic blame from mere negative epistemic evaluation. The challenge is to explain the difference, without invoking practices or behaviors that seem out of place in the epistemic domain. In this paper, I examine whether the most sophisticated recent account of the nature of epistemic blame—due to Jessica Brown—is up for the challenge. I argue that the account ultimately falls short, but does so in an instructive way. Drawing on the lessons learned, I put forward an alternative approach to the nature of epistemic blame. My account understands epistemic blame in terms of modifications to the intentions and expectations that comprise our “epistemic relationships” with one another. This approach has a number of attractions shared by Brown’s account, but it can also explain the significance of epistemic blame.
Our prominent definitions of cognition are too vague and lack empirical grounding. They have not kept up with recent developments, and cannot bear the weight placed on them across many different debates. I here articulate and defend a more adequate theory. On this theory, behaviors under the control of cognition tend to display a cluster of characteristic properties, a cluster which tends to be absent from behaviors produced by non-cognitive processes. This cluster is reverse-engineered from the empirical tests that comparative psychologists use to determine whether a behavior was generated by a cognitive or a non-cognitive process. Cognition should be understood as the natural kind of psychological process that non-accidentally exhibits the properties assessed by these tests (as well as others we have not yet discovered). Finally, I review two plausible neural accounts of cognition's underlying mechanisms, one based in localization of function to particular brain regions and another based in the more recent distributed networks approach to neuroscience, which would explain why these properties non-accidentally cluster. While this notion of cognition may be useful for a number of debates, I here focus on its application to a recent crisis over the distinction between cognition and association in comparative psychology.
How should we determine the distribution of psychological traits—such as Theory of Mind, episodic memory, and metacognition—throughout the Animal kingdom? Researchers have long worried about the distorting effects of anthropomorphic bias on this comparative project. A purported corrective against this bias was offered as a cornerstone of comparative psychology by C. Lloyd Morgan in his famous “Canon”. Also dangerous, however, is a distinct bias that loads the deck against animal mentality: our tendency to tie the competence criteria for cognitive capacities to an exaggerated sense of typical human performance. I dub this error “anthropofabulation”, since it combines anthropocentrism with confabulation about our own prowess. Anthropofabulation has long distorted the debate about animal minds, but it is a bias that has been little discussed and against which the Canon provides no protection. Luckily, there is a venerable corrective against anthropofabulation: a principle offered long ago by David Hume, which I call “Hume’s Dictum”. In this paper, I argue that Hume’s Dictum deserves a privileged place next to Morgan’s Canon in the methodology of comparative psychology, illustrating my point through a discussion of the debate over Theory of Mind in nonhuman animals.
The paper critically examines recent work on justifications and excuses in epistemology. I start with a discussion of Gerken’s claim that the “excuse maneuver” is ad hoc. Recent work from Timothy Williamson and Clayton Littlejohn provides resources to advance the debate. Focusing in particular on a key insight in Williamson’s view, I then consider an additional worry for the so-called excuse maneuver. I call it the “excuses are not enough” objection. Dealing with this objection generates pressure in two directions: one is to show that excuses are a positive enough normative standing to help certain externalists with important cases; the other is to do so in a way that does not lead back to Gerken’s objection. I show how a Williamson-inspired framework is flexible enough to deal with both sources of pressure. Perhaps surprisingly, I draw on recent virtue epistemology.
Philosophers and cognitive scientists have worried that research on animal mind-reading faces a ‘logical problem’: the difficulty of experimentally determining whether animals represent mental states (e.g. seeing) or merely the observable evidence (e.g. line-of-gaze) for those mental states. The most impressive attempt to confront this problem has been mounted recently by Robert Lurz. However, Lurz' approach faces its own logical problem, revealing this challenge to be a special case of the more general problem of distal content. Moreover, participants in this debate do not agree on criteria for representation. As such, future debate should either abandon the representational idiom or confront underlying semantic disagreements.
This chapter aims to explore the intersection of Christian theism, a neo-Aristotelian gloss on metaphysical grounding, and creaturely participation in God. In section one, I aim to develop several core tenets at the heart of a theistic participatory ontology as it is found in the Christian tradition, what I call minimal participatory ontology. In section two, I examine the contemporary notion of metaphysical grounding, namely the formal and structural features of the grounding relation, and offer a grounding-theoretic framework for understanding a minimal participatory ontology. Finally, in section three, I put forward a neo-Aristotelian account of metaphysical grounding in particular, one that is uniquely suited to capture the central tenets of minimal participatory ontology.
A plausible condition on having the standing to blame someone is that the target of blame's wrongdoing must in some sense be your “business”—the wrong must in some sense harm or affect you, or others close to you. This is known as the business condition on standing to blame. Many cases of epistemic blame discussed in the literature do not obviously involve examples of someone harming or affecting another. As such, not enough has been said about how an individual's epistemic failing can really count as another person's business. In this paper, I deploy a relationship-based account of epistemic blame to clarify the conditions under which the business condition can be met in the epistemic domain. The basic idea is that one person's epistemic failing can be another's business in virtue of the way it impairs their epistemic relationship.
This is a transcript of a conversation between P F Strawson and Gareth Evans in 1973, filmed for The Open University. Under the title 'Truth', Strawson and Evans discuss the question as to whether the distinction between genuinely fact-stating uses of language and other uses can be grounded on a theory of truth, especially a 'thin' notion of truth in the tradition of F P Ramsey.
In De Anima 2.4, Aristotle claims that nutritive soul encompasses two distinct biological functions: nutrition and reproduction. We challenge a pervasive interpretation which posits ‘nutrients’ as the correlative object of the nutritive capacity. Instead, the shared object of nutrition and reproduction is that which is nourished and reproduced: the ensouled body, qua ensouled. Both functions aim at preserving this object, and thus at preserving the form, life, and being of the individual organism. In each case, we show how Aristotle’s detailed biological analysis supports this ontological argument.
Evaluating counterfactuals in worlds with deterministic laws poses a puzzle. In a wide array of cases, it does not seem plausible that if a non-actual event were to occur that either the past would be different or that the laws would be different. But it’s also difficult to see how we can avoid this result. Some philosophers have argued that we can avoid this dilemma by allowing that a proposition can be a law even though it has violations. On this view, for the relevant cases, the past and the laws would still hold, but the laws would have a violation. In this paper, I raise a problem for the claim that the laws and the past are preserved for all of the relevant counterfactual antecedents. I further argue that this problem undermines motivating the possibility of violations on the grounds that they allow us to hold that the past and the laws are typically counterfactually preserved, even if they are not always preserved.
The functionalist approach to kinds has suffered recently due to its association with law-based approaches to induction and explanation. Philosophers of science increasingly view nomological approaches as inappropriate for the special sciences like psychology and biology, which has led to a surge of interest in approaches to natural kinds that are more obviously compatible with mechanistic and model-based methods, especially homeostatic property cluster theory. But can the functionalist approach to kinds be weaned off its dependency on laws? Dan Weiskopf has recently offered a reboot of the functionalist program by replacing its nomological commitments with a model-based approach more closely derived from practice in psychology. Roughly, Weiskopf holds that the natural kinds of psychology will be the functional properties that feature in many empirically successful cognitive models, and that those properties need not be localized to parts of an underlying mechanism. I here skeptically examine the three modeling practices that Weiskopf thinks introduce such non-localizable properties: fictionalization, reification, and functional abstraction. In each case, I argue that recognizing functional properties introduced by these practices as autonomous kinds comes at clear cost to those explanations’ counterfactual explanatory power. At each step, a tempting functionalist response is parochialism: to hold that the false or omitted counterfactuals fall outside the modeler’s explanatory aims, and so should not be counted against functional kinds. I conclude by noting the dangers this attitude poses to scientific disagreement, inviting functionalists to better articulate how the individuation conditions for functional kinds might outstrip the perspective of a single modeler.
Languages vary in their semantic partitioning of the world. This has led to speculation that language might shape basic cognitive processes. Spatial cognition has been an area of research in which linguistic relativity – the effect of language on thought – has both been proposed and rejected. Prior studies have been inconclusive, lacking experimental rigor or appropriate research design. Lacking detailed ethnographic knowledge as well as failing to pay attention to intralanguage variations, these studies often fall short of defining an appropriate concept of language, culture, and cognition. Our study constitutes the first research exploring (1) individuals speaking different languages yet living (for generations) in the same immediate environment and (2) systematic intralanguage variation. Results show that language does not shape spatial cognition and plays at best the secondary role of foregrounding alternative possibilities for encoding spatial arrangements.
The chapter develops a taxonomy of views about the epistemic responsibilities of citizens in a democracy. Prominent approaches to epistemic democracy, epistocracy, epistemic libertarianism, and pure proceduralism are examined through the lens of this taxonomy. The primary aim is to explore options for developing an account of the epistemic responsibilities of citizens in a democracy. The chapter also argues that a number of recent attacks on democracy may not adequately register the availability of a minimal approach to the epistemic responsibilities of citizens in a democracy.
I here critique the application of the traditional, similarity-based account of natural kinds to debates in psychology. A challenge to such accounts of kindhood—familiar from the study of biological species—is a metaphysical phenomenon that I call ‘transitional gradation’: the systematic progression of slightly modified transitional forms between related candidate kinds. Where such gradation proliferates, it renders the selection of similarity criteria for kinds arbitrary. Reflection on general features of learning—especially on the gradual revision of concepts throughout the acquisition of expertise—shows that even the strongest candidates for similarity-based kinds in psychology exhibit systematic transitional gradation. As a result, philosophers of psychology should abandon discussion of kindhood, or explore non-similarity based accounts.
There is a distinction between merely having the right belief, and further basing that belief on the right reasons. Any adequate epistemology needs to be able to accommodate the basing relation that marks this distinction. However, trouble arises for Bayesianism. I argue that when we combine Bayesianism with the standard approaches to the basing relation, we get the result that no agent forms their credences in the right way; indeed, no agent even gets close. This is a serious problem, for it prevents us from making epistemic distinctions between agents that are doing a reasonably good job at forming their credences and those that are forming them in clearly bad ways. I argue that if this result holds, then we have a problem for Bayesianism. However, I show how the Bayesian can avoid this problem by rejecting the standard approaches to the basing relation. By drawing on recent work on the basing relation, we can develop an account of the relation that allows us to avoid the result that no agent comes close to forming their credences in the right way. The Bayesian can successfully accommodate the basing relation.
Epistemological Disjunctivism is a view about paradigm cases of perceptual knowledge. Duncan Pritchard claims that it is particularly well suited to accounting for internalist and externalist intuitions. A number of authors have disputed this claim, arguing that there are problems for Pritchard’s way with internalist intuitions. I share the worry. However, I don’t think it has been expressed as effectively as it can be. My aim in this paper is to present a new way of formulating the worry, in terms of an “explanatory challenge”. The explanatory challenge is a simple, yet powerful and illuminating challenge for Epistemological Disjunctivism. It is illuminating in the sense that it shows us why Epistemological Disjunctivism must take on certain internalistically problematic commitments. A secondary aim of this paper is to examine whether the recently much-discussed distinction between justifications and excuses in epistemology can support an adequate response. I will argue that it cannot.
Plenitude, roughly, the thesis that for any non-empty region of spacetime there is a material object that is exactly located at that region, is often thought to be part and parcel of the standard Lewisian package in the metaphysics of persistence. While the wedding of plenitude and Lewisian four-dimensionalism is a natural one indeed, there are a handful of dissenters who argue against the notion that Lewisian four-dimensionalism has exclusive rights to plenitude. These ‘promiscuous’ three-dimensionalists argue that a temporalized version of plenitude is entirely compatible with a three-dimensional ontology of enduring entities. While few would deny the coherence of such a position, and much work has been done by its proponents to appease critics, there has been surprisingly little by way of exploring the various forms such an ontology might take, as well as the potential advantages of one plenitudinous three-dimensional ontology over another. Here I develop a novel form of plenitudinous three-dimensionalism, what John Hawthorne (Metaphysical essays, 2006a, b) has called “Neo-Aristotelian Plenitude,” and argue that if one is inclined to endorse an abundant three-dimensional ontology, one is wise to opt for a plenitude of accidental unities.
Is authoritarian power ever legitimate? The contemporary political theory literature—which largely conceptualizes legitimacy in terms of democracy or basic rights—would seem to suggest not. I argue, however, that there exists another, overlooked aspect of legitimacy concerning a government’s ability to ensure safety and security. While, under normal conditions, maintaining democracy and rights is typically compatible with guaranteeing safety, in emergency situations, conflicts between these two aspects of legitimacy can and often do arise. A salient example of this is the COVID-19 pandemic, during which severe limitations on free movement and association have become legitimate techniques of government. Climate change poses an even graver threat to public safety. Consequently, I argue, legitimacy may require a similarly authoritarian approach. While unsettling, this suggests the political importance of climate action. For if we wish to avoid legitimating authoritarian power, we must act to prevent crises from arising that can only be resolved by such means.
I first offer a broad taxonomy of models of divine omnipresence in the Christian tradition, both past and present. I then examine the recent model proposed by Hud Hudson (2009, 2014) and Alexander Pruss (2013)—ubiquitous entension—and flag a worry with their account that stems from predominant analyses of the concept of ‘material object’. I then attempt to show that ubiquitous entension has a rich Latin medieval precedent in the work of Augustine and Anselm. I argue that the model of omnipresence explicated by Augustine and Anselm has the resources to avoid the noted worry by offering an alternative account of the divide between the immaterial and the material. I conclude by considering a few alternative analyses of ‘material object’ that make conceptual room for a contemporary Christian theist to follow suit in thinking that at least some immaterial entities are literally spatially located when relating to the denizens of spacetime.
Causal essentialists hold that a property essentially bears its causal and nomic relations. Further, as many causal essentialists have noted, the main motivations for causal essentialism also motivate holding that properties are individuated in terms of their causal and nomic relations. This amounts to a kind of identity of indiscernibles thesis; properties that are indiscernible with respect to their causal and nomic relations are identical. This can be compared with the more well-known identity of indiscernibles thesis, according to which particulars that are qualitatively indiscernible are identical. Robert Adams has developed a well-known objection to this thesis by considering a series of possibilities involving nearly qualitatively indiscernible particulars that naturally leads to a possibility involving qualitatively indiscernible particulars. I argue that we can construct parallel cases involving a series of possibilities involving properties that are nearly indiscernible with respect to their causal and nomic relations that naturally lead to possibilities involving properties that are indiscernible with respect to their causal and nomic relations. The same features that make Adams’ argument forceful also carry over to my cases, giving us a powerful objection to the causal essentialist identity of indiscernibles thesis.
The main objection to pragmatism about knowledge is that it entails that truth-irrelevant factors can make a difference to knowledge. Blake Roeber (2018) has recently argued that this objection fails. I agree with Roeber. But in this paper, I present another way of thinking about the dispute between purists and pragmatists about knowledge. I do so by formulating a new objection to pragmatism about knowledge. This is that pragmatism about knowledge entails that factors irrelevant to both truth and “cognitive agency” can make a difference to knowledge. An interesting additional upshot of my argument is the connection revealed between the debate between pragmatists and purists about knowledge, and the debate between “alethists” and pragmatists about reasons for belief.
Plato’s references to Empedocles in the myth of the Statesman perform a crucial role in the overarching political argument of the dialogue. Empedocles conceives of the cosmos as structured like a democracy, where the constituent powers ‘rule in turn’, sharing the offices of rulership equally via a cyclical exchange of power. In a complex act of philosophical appropriation, Plato takes up Empedocles’ cosmic cycles of rule in order to ‘correct’ them: instead of a democracy in which rule is shared cyclically amongst equal constituents, Plato’s cosmos undergoes cycles of the presence and absence of a single cosmic monarch who possesses ‘kingly epistēmē’. By means of a revision of Empedocles’ democratic cosmology, Plato’s richly woven myth is designed precisely to reject the appropriateness of democracy as a form of human political association and legitimate monarchy in its stead.
A prominent objection to the knowledge norm of belief is that it is too demanding or too strong. The objection is commonly framed in terms of the idea that there is a tight connection between norm violation and the appropriateness of criticism or blame. In this paper I do two things. First, I argue that this way of motivating the objection leads to an impasse in the epistemic norms debate. It leads to an impasse when knowledge normers invoke excuses to explain away a prima facie connection between blamelessness and justified belief. Second, I argue that a way out of the impasse becomes available when we take a closer look at some distinctions in the theory of responsibility. There are at least three different notions of responsibility relevant here. I argue that a weaker notion of responsibility – attributability – should be used to motivate the objection that the knowledge norm of belief is too strong. Insofar as the proposal motivates the objection without appeal to blamelessness, it opens up space to move beyond the impasse.
Recent work by Stout and colleagues indicates that the neural correlates of language and Early Stone Age toolmaking overlap significantly. The aim of this paper is to add computational detail to their findings. I use an error minimisation model to outline where the information processing overlap between toolmaking and language lies. I argue that the Early Stone Age signals the emergence of complex structured representations. I then highlight a feature of my account: It allows us to understand the early evolution of syntax in terms of an increase in the number and complexity of models in a cognitive system, rather than the development of new types of processing.
The free energy principle is notoriously difficult to understand. In this paper, we relate the principle to a framework that philosophers of biology are familiar with: Ruth Millikan's teleosemantics. We argue that: (i) systems that minimise free energy are systems with a proper function; and (ii) Karl Friston's notion of *implicit modelling* can be understood in terms of Millikan's notion of *mapping relations*. Our analysis reveals some surprising formal similarities between the two frameworks, and suggests interesting lines of future research. We hope this will aid further philosophical evaluation of the free energy principle.
One typical aim of responsibilist virtue epistemology is to employ the notion of intellectual virtue in pursuit of an ameliorative epistemology. This paper focuses on “political inquiry” as a case study for examining the ameliorative value of intellectual virtue. My main claim is that the case of political inquiry threatens to expose responsibilist virtue epistemology in a general way as focusing too narrowly on the role of individual intellectual character traits in attempting to improve our epistemic practices.
This paper deals with a collection of concerns that, over a period of time, led the author away from the Routley–Meyer semantics, and towards proof-theoretic approaches to relevant logics, and indeed to the weak relevant logic MC of meaning containment.
No matter how it is viewed, as a plausible version of anti-utilitarianism, of non-consequentialism, or even of deontology, the theory of prima facie duties certainly makes W. D. Ross one of the most important moral philosophers of the twentieth century. By outlining his pluralistic deontology, this paper attempts to argue for a positive answer to the question of whether Ross’s theory can offer a solution to the issue of conflicting duties. If such a solution is convincing, as I believe it is, it would indicate the possibility of justifying within the deontological framework, i.e., without committing to the principle of good-maximizing, those “hard cases” where people should break a promise or other (prima facie) duty in order to prevent a disastrous outcome. The theory of prima facie duties might then suggest that deontology and utilitarianism are likely reconcilable.
Over the last fifteen years, an ambitious explanatory framework has been proposed to unify explanations across biology and cognitive science. Active inference, whose most famous tenet is the free energy principle, has inspired excitement and confusion in equal measure. Here, we lay the ground for proper critical analysis of active inference, in three ways. First, we give simplified versions of its core mathematical models. Second, we outline the historical development of active inference and its relationship to other theoretical approaches. Third, we describe three different kinds of claim—labelled mathematical, empirical and general—routinely made by proponents of the framework, and suggest dialectical links between them. Overall, we aim to increase philosophical understanding of active inference so that it may be more readily evaluated. This is the final submitted version of the Introduction to the Topical Collection "The Free Energy Principle: From Biology to Cognition", forthcoming in Biology & Philosophy.