Rational agents have consistent beliefs. Bayesianism is a theory of consistency for partial belief states. Rational agents also respond appropriately to experience. Dogmatism is a theory of how to respond appropriately to experience. Hence, Dogmatism and Bayesianism are theories of two very different aspects of rationality. It's surprising, then, that in recent years it has become common to claim that Dogmatism and Bayesianism are jointly inconsistent: how can two independently consistent theories with distinct subject matter be jointly inconsistent? In this essay I argue that Bayesianism and Dogmatism are inconsistent only with the addition of a specific hypothesis about how the appropriate responses to perceptual experience are to be incorporated into the formal models of the Bayesian. That hypothesis isn't essential either to Bayesianism or to Dogmatism, and so Bayesianism and Dogmatism are jointly consistent. That leaves the matter of how experiences and credences are related, a...
Seeing a red hat can (i) increase my credence in ‘the hat is red’, and (ii) introduce a negative dependence between that proposition and potential undermining defeaters such as ‘the light is red’. The rigidity of Jeffrey Conditionalization makes this awkward, as rigidity preserves independence. The picture is less awkward given ‘Holistic Conditionalization’, or so it is claimed. I defend Jeffrey Conditionalization’s consistency with underminable perceptual learning and its superiority to Holistic Conditionalization, arguing that the latter is merely a special case of the former, is itself rigid, and is committed to implausible accounts of perceptual confirmation and of undermining defeat.
As I head home from work, I’m not sure whether my daughter’s new bike is green, and I’m also not sure whether I’m on drugs that distort my color perception. One thing that I am sure about is that my attitudes towards those possibilities are evidentially independent of one another, in the sense that changing my confidence in one shouldn’t affect my confidence in the other. When I get home and see the bike it looks green, so I increase my confidence that it is green. But something else has changed: now an increase in my confidence that I’m on color-drugs would undermine my confidence that the bike is green. Jonathan Weisberg and Jim Pryor argue that the preceding story is problematic for standard Bayesian accounts of perceptual learning. Due to the ‘rigidity’ of Conditionalization, a negative probabilistic correlation between two propositions cannot be introduced by updating on one of them. Hence if my beliefs about my own color-sobriety start out independent of my beliefs about the color of the bike, then they must remain independent after I have my perceptual experience and update accordingly. Weisberg takes this to be a reason to reject Conditionalization. I argue that this conclusion is too pessimistic: Conditionalization is only part of the Bayesian story of perceptual learning, and the other part needn’t preserve independence. Hence Bayesian accounts of perceptual learning are perfectly consistent with potential underminers for perceptual beliefs.
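The rigidity claim in the abstract above can be made concrete with a small numerical sketch (my illustration, not the paper's; the toy world-space and numbers are invented): Jeffrey conditionalizing on the partition {green, not-green} holds fixed all probabilities conditional on each cell, so the prior independence between the bike's color and the color-drug hypothesis survives the update.

```python
# Toy model: worlds are (green, drug) pairs. The prior treats the bike's
# color and the color-drug hypothesis as independent.
p_green, p_drug = 0.5, 0.1
prior = {
    (g, d): (p_green if g else 1 - p_green) * (p_drug if d else 1 - p_drug)
    for g in (True, False) for d in (True, False)
}

def jeffrey_update(p, new_p_green):
    """Jeffrey conditionalization on {green, not-green}. Rigidity:
    probabilities conditional on each cell of the partition are held fixed."""
    old_p_green = sum(v for (g, _), v in p.items() if g)
    new = {}
    for (g, d), v in p.items():
        old_cell = old_p_green if g else 1 - old_p_green
        new_cell = new_p_green if g else 1 - new_p_green
        new[(g, d)] = (v / old_cell) * new_cell
    return new

post = jeffrey_update(prior, 0.9)  # seeing the bike raises P(green) to 0.9

p_drug_post = sum(v for (_, d), v in post.items() if d)
p_both_post = post[(True, True)]
# Independence survives the update: P(green & drug) = P(green) * P(drug).
print(p_drug_post, p_both_post)
```

As the abstract says, no update of this rigid form can by itself introduce the negative correlation between the color proposition and the drug proposition that undermining requires; on the view defended, that correlation must come from the other, non-Conditionalization part of the Bayesian story.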
Culture is a notoriously elusive concept. This fact has done nothing to hinder its popularity in contemporary analytic political philosophy among writers like John Rawls, Will Kymlicka, Michael Walzer, David Miller, Iris Marion Young, Joseph Raz, Avishai Margalit and Bhikhu Parekh, among many others. However, this should stop, both for the metaphysical reason that the concept of culture, like that of race, is itself either incoherent or lacking a referent in reality, and for several normative reasons. I focus on the following interconnected points: • The vagueness of the term allows a myriad of candidates to claim rights, typically to the detriment of increased equality and environmental goals. • Cultural capital cannot be regulated in the way that political capital must be regulated without undermining the cultures supposedly being protected. And the possession of cultural capital is almost never democratically regulated. In particular, granting cultures political status creates intergenerational conflict, rewarding the elders and creating incentives to be conservative and restrict cultural mobility of the younger generation. • The notion of a group owning “its” culture is conceptually suspect and corrupted by the foregoing points about unequal cultural capital. In defending a group’s right to preserve its culture we do not defend equally the rights of the individuals that make it up, and we ignore altogether the rights of those who may be unfairly denied recognition as “members” of the culture.
It has been a common assumption that words are substances that instantiate or have properties. In this paper, I question the assumption that our ontology of words requires positing substances by outlining a bundle theory of words, wherein words are bundles of various sorts of properties (such as semantic, phonetic, orthographic, and grammatical properties). I argue that this view can better account for certain phenomena than substance theories, is ontologically more parsimonious, and coheres with claims in linguistics.
The idea that two words can be instances of the same word is a central intuition in our conception of language. This fact underlies many of the claims that we make about how we communicate, and how we understand each other. Given this, irrespective of what we think words are, it is common to think that any putative ontology of words must be able to explain this feature of language. That is, we need to provide criteria of identity for word-types which allow us to individuate words such that it can be the case that two particular word-instances are instances of the same word-type (on the assumption that there are such types). One solution, recently further developed by Irmak (2018), holds that words are individuated by their history. In this paper, I argue that this view either fails to account for our intuitions about word identity, or is too vague to be a plausible answer to the problem of word individuation.
The natural name theory, recently discussed by Johnson (2018), is proposed as an explanation of pure quotation where the quoted term(s) refers to a linguistic object such as in the sentence ‘In the above, ‘bank’ is ambiguous’. After outlining the theory, I raise a problem for the natural name theory. I argue that positing a resemblance relation between the name and the linguistic object it names does not allow us to rule out cases where the natural name fails to resemble the linguistic object it names. I argue that to avoid this problem, we can combine the natural name theory with a type-realist metaphysics of language, and hold that the name is natural because the name is an instance of the kind that it names. I conclude by reflecting on the importance of the metaphysics of language for questions in the philosophy of language.
Primitives are both important and unavoidable, and which set of primitives we endorse will greatly shape our theories and how those theories provide solutions to the problems that we take to be important. After introducing the notion of a primitive posit, I discuss the different kinds of primitives that we might posit. Following Cowling (2013), I distinguish between ontological and ideological primitives, and, following Benovsky (2013), between functional and content views of primitives. I then propose that these two distinctions cut across each other, leading to four types of primitive posits. I then argue that theoretical virtues should be taken to be meta-theoretical ideological primitives. I close with some reflections on the global nature of comparing sets of primitives.
Daniel Dennett (1996) has disputed David Chalmers' (1995) assertion that there is a "hard problem of consciousness" worth solving in the philosophy of mind. In this paper I defend Chalmers against Dennett on this point: I argue that there is a hard problem of consciousness, that it is distinct in kind from the so-called easy problems, and that it is vital for the sake of honest and productive research in the cognitive sciences to be clear about the difference. But I have my own rebuke for Chalmers on the point of explanation. Chalmers (1995, 1996) proposes to "solve" the hard problem of consciousness by positing qualia as fundamental features of the universe, alongside such ontological basics as mass and space-time. But this is an inadequate solution: to posit, I will urge, is not to explain. To bolster this view, I borrow from an account of explanation by which it must provide "epistemic satisfaction" to be considered successful (Rowlands, 2001; Campbell, 2009), and show that Chalmers' proposal fails on this account. I conclude that research in the science of consciousness cannot move forward without greater conceptual clarity in the field.
This paper addresses the ontological status of the ontological categories as defended within E.J. Lowe’s four-category ontology (kinds, objects, properties/relations, and modes). I consider the arguments in Griffith (2015, “Do Ontological Categories Exist?”, Metaphysica 16(1): 25–35) against Lowe’s claim that ontological categories do not exist, and argue that Griffith’s objections to Lowe do not work once we fully take advantage of ontological resources available within Lowe’s four-category ontology. I then argue that the claim that ontological categories do not exist has no undesirable consequences for Lowe’s brand of realism.
Humeanism is “the thesis that the whole truth about a world like ours supervenes on the spatiotemporal distribution of local qualities” (Lewis, 1994, 473). Since the whole truth about our world contains truths about causation, causation must be located in the mosaic of local qualities that the Humean says constitute the whole truth about the world. The most natural ways to do this involve causation being in some sense extrinsic. To take the simplest possible Humean analysis, we might say that c causes e iff throughout the mosaic events of the same type as c are usually followed by events of type e. For short, the causal relation is the constant conjunction relation. Whether this obtains is determined by the mosaic, so this is a Humean theory, but it isn’t determined just by c and e themselves, so whether c causes e is extrinsic to the pair. Now this is obviously a bad theory of causation, but the fact that causation is extrinsic is retained even by good Humean theories of causation. John Hawthorne (2004) objects to this feature of Humeanism. I’m going to argue that his arguments don’t work, but first we need to clear up three preliminaries about causation and intrinsicness.
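The extrinsicness just described is easy to exhibit with a toy "mosaic" (my illustration, not from the paper; the event types and the 0.5 threshold are invented). On the simple constant-conjunction analysis, whether a given c–e pair is causal depends on the rest of the mosaic: the very same local pair counts as causal in one mosaic and non-causal in another.

```python
# Toy constant-conjunction test: c-type events "cause" e-type events iff,
# across the WHOLE mosaic, c-events are usually followed by e-events.
def causes(mosaic, c_type, e_type, threshold=0.5):
    followed = total = 0
    for t, event in enumerate(mosaic[:-1]):
        if event == c_type:
            total += 1
            followed += mosaic[t + 1] == e_type
    return total > 0 and followed / total > threshold

# Both mosaics contain the very same local pair: a C at position 0
# immediately followed by an E at position 1. Only the surroundings differ.
mosaic_1 = ["C", "E", "C", "E", "C", "E"]
mosaic_2 = ["C", "E", "C", "X", "C", "X"]

print(causes(mosaic_1, "C", "E"))  # constant conjunction holds (3/3)
print(causes(mosaic_2, "C", "E"))  # fails (1/3): same pair, different verdict
```

The verdict about the position-0 pair is fixed by the mosaic as a whole, not by the pair itself, which is just what it means for causation, on this analysis, to be extrinsic to the pair.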
What are words? What makes two token words tokens of the same word-type? Are words abstract entities, or are they (merely) collections of tokens? The ontology of words tries to provide answers to these and related questions. This article provides an overview of some of the most prominent views proposed in the literature, with a particular focus on the debate between type-realist, nominalist, and eliminativist ontologies of words.
Suppose a rational agent S has some evidence E that bears on p, and on that basis makes a judgment about p. For simplicity, we’ll normally assume that she judges that p, though we’re also interested in cases where the agent makes other judgments, such as that p is probable, or that p is well-supported by the evidence. We’ll also assume, again for simplicity, that the agent knows that E is the basis for her judgment. Finally, we’ll assume that the judgment is a rational one to make, though we won’t assume the agent knows this. Indeed, whether the agent can always know that she’s making a rational judgment when in fact she is will be of central importance in some of the debates that follow.
In a recent study, we found a negative association between psychopathy and violence against genetic relatives. We interpreted this result as a form of nepotism and argued that it failed to support the hypothesis that psychopathy is a mental disorder, suggesting instead that it supports the hypothesis that psychopathy is an evolved life history strategy. This interpretation and subsequent arguments have been challenged in a number of ways. Here, we identify several misunderstandings regarding the harmful dysfunction definition of mental disorder as it applies to psychopathy and regarding the meaning of nepotism. Furthermore, we examine the evidence provided by our critics that psychopathy is associated with other disorders, and we offer a comment on their alternative model of psychopathy. We conclude that there remains little evidence that psychopathy is the product of dysfunctional mechanisms.
Humans typically display hindsight bias. They are more confident that the evidence available beforehand made some outcome probable when they know the outcome occurred than when they don't. There is broad consensus that hindsight bias is irrational, but this consensus is wrong. Hindsight bias is generally rationally permissible and sometimes rationally required. The fact that a given outcome occurred provides both evidence about what the total evidence available ex ante was, and also evidence about what that evidence supports. Even if you in fact evaluate the ex ante evidence correctly, you should not be certain of this. Then, learning the outcome provides evidence that if you erred, you are more likely to have erred low rather than high in estimating the degree to which the ex ante evidence supported the hypothesis that that outcome would occur.
This article examines whether it is possible to uphold one form of deflationism towards metaphysics, ontological pluralism, whilst maintaining metaphysical realism. The focus therefore is on one prominent deflationist who fits the definition of an ontological pluralist, Eli Hirsch, and his self-ascription as a realist. The article argues that ontological pluralism is not amenable to the ascription of realism under some basic intuitions as to what a “realist” position is committed to. These basic intuitions include a commitment to more than a stuff-ontology, and a view that realism carries with it more than a rejection of idealism. This issue is more than merely terminological. The ascription of realism is an important classification in order to understand what sorts of entities can be the truthmakers within a given theory. “Realism” is thus an important term to understand the nature of the entities that a given theory accepts into its ontology.
Providing empirically supportable instances of ontological emergence is notoriously difficult. Typically, the literature has focused on two possible sources. The first is the mind and consciousness; the second is within physics, and more specifically certain quantum effects. In this paper, I wish to suggest that the literature has overlooked a further possible instance of emergence, taken from the special science of linguistics. In particular, I will focus on the property of truth-evaluability, taken to be a property of sentences as created by the language faculty within human minds (or brains). The claim will not be so strong as to suggest that the linguistic data and theories prove emergence. Rather, the dialectical aim here is to say that we have some good reasons (even if not conclusive reasons) to think that the property is emergent.
Michael Strevens’s book Depth is a great achievement. To say anything interesting, useful and true about explanation requires taking on fundamental issues in the metaphysics and epistemology of science. So this book not only tells us a lot about scientific explanation, it has a lot to say about causation, lawhood, probability and the relation between the physical and the special sciences. It should be read by anyone interested in any of those questions, which includes presumably the vast majority of readers of this journal. One of its many virtues is that it lets us see more clearly what questions about explanation, causation, lawhood and so on need answering, and frames those questions in perspicuous ways. I’m going to focus on one of these questions, what I’ll call the Goldilocks problem. As it turns out, I’m not going to agree with all the details of Strevens’s answer to this problem, though I suspect that something like his answer is right. At least, I hope something like his answer is right; if it isn’t, I’m not sure where else we can look.
Generous selections from these four seminal texts on the theory and practice of education have never before appeared together in a single volume. The Introductions that precede the texts provide brief biographical sketches of each author, situating him within his broader historical, cultural and intellectual context. The editors also provide a brief outline of key themes that emerge within the selection as a helpful guide to the reader. The final chapter engages the reflections of the classic authors with contemporary issues and challenges in the philosophy and practice of education.
Some philosophers need no introduction. Julius Kovesi is a philosopher who, regrettably, does need introducing. Kovesi’s career was as a moral philosopher and intellectual historian. This book is intended to reintroduce him, more than twenty years after his death and more than forty years after the publication of his only book, Moral Notions. This Introduction will sketch some of the key features of his life and philosophical thought.
Gordon Belot has recently developed a novel argument against Bayesianism. He shows that there is an interesting class of problems that, intuitively, no rational belief forming method is likely to get right. But a Bayesian agent’s credence, before the problem starts, that she will get the problem right has to be 1. This is an implausible kind of immodesty on the part of Bayesians. My aim is to show that while this is a good argument against traditional, precise Bayesians, the argument doesn’t neatly extend to imprecise Bayesians. As such, Belot’s argument is a reason to prefer imprecise Bayesianism to precise Bayesianism.
Keith DeRose has argued that the two main problems facing subject-sensitive invariantism come from the appropriateness of certain third-person denials of knowledge and the inappropriateness of ‘now you know it, now you don't’ claims. I argue that proponents of SSI can adequately address both problems. First, I argue that the debate between contextualism and SSI has failed to account for an important pragmatic feature of third-person denials of knowledge. Appealing to these pragmatic features, I show that straightforward third-person denials are inappropriate in the relevant cases. And while there are certain denials that are appropriate, they pose no problems for SSI. Next, I offer an explanation, compatible with SSI, of the oddity of ‘now you know it, now you don't’ claims. To conclude, I discuss the intuitiveness of purism, whose rejection is the source of many problems for SSI. I propose to explain away the intuitiveness of purism as a side-effect of the narrow focus of previous epistemological inquiries.
There are many controversial theses about intrinsicness and duplication. The first aim of this paper is to introduce a puzzle that shows that two of the uncontroversial sounding ones can’t both be true. The second aim is to suggest that the best way out of the puzzle requires sharpening some distinctions that are too frequently blurred, and adopting a fairly radical reconception of the ways things are.
I argue with my friends a lot. That is, I offer them reasons to believe all sorts of philosophical conclusions. Sadly, despite the quality of my arguments, and despite their apparent intelligence, they don’t always agree. They keep insisting on principles in the face of my wittier and wittier counterexamples, and they keep offering their own dull alleged counterexamples to my clever principles. What is a philosopher to do in these circumstances? (And I don’t mean get better friends.) One popular answer these days is that I should, to some extent, defer to my friends. If I look at a batch of reasons and conclude p, and my equally talented friend reaches an incompatible conclusion q, I should revise my opinion so I’m now undecided between p and q. I should, in the preferred lingo, assign equal weight to my view as to theirs. This is despite the fact that I’ve looked at their reasons for concluding q and found them wanting. If I hadn’t, I would have already concluded q. The mere fact that a friend (from now on I’ll leave off the qualifier ‘equally talented and informed’, since all my friends satisfy that) reaches a contrary opinion should be reason to move me. Such a position is defended by Richard Feldman (2006a, 2006b), David Christensen (2007) and Adam Elga (forthcoming). This equal weight view, hereafter EW, is itself a philosophical position. And while some of my friends believe it, some of my friends do not. (Nor, I should add for your benefit, do I.) This raises an odd little dilemma. If EW is correct, then the fact that my friends disagree about it means that I shouldn’t be particularly confident that it is true, since EW says that I shouldn’t be too confident about any position on which my friends disagree. But, as I’ll argue below, to consistently implement EW, I have to be maximally confident that it is true. So to accept EW, I have to inconsistently both be very confident that it is true and not very confident that it is true. This seems like a problem, and a reason not to accept EW.
Applying good inductive rules inside the scope of suppositions leads to implausible results. I argue it is a mistake to think that inductive rules of inference behave anything like 'inference rules' in natural deduction systems. And this implies that it isn't always true that good arguments can be run 'off-line' to gain a priori knowledge of conditional conclusions.
We defend Uniqueness, the claim that given a body of total evidence, there is a unique doxastic state that it is rational for one to be in. Epistemic rationality doesn't give you any leeway in forming your beliefs. To this end, we bring in two metaepistemological pictures about the roles played by rational evaluations. Rational evaluative terms serve to guide our practices of deference to the opinions of others, and also to help us formulate contingency plans about what to believe in various situations. We argue that Uniqueness vindicates these two roles for rational evaluations, while Permissivism clashes with them.
At 435c-d and 504b ff., Socrates indicates that there is a "longer and fuller way" that one must take in order to get "the best possible view" of the soul and its virtues. But Plato does not have him take this "longer way." Instead Socrates restricts himself to an indirect indication of its goals by his images of sun, line, and cave and to a programmatic outline of its first phase, the five mathematical studies. Doesn't this pointed restraint function as a provocation, moving us to want to begin the "longer way" and to make use of its conceptual resources to rethink Socrates' images? I begin by finding a double movement in the complex trajectory of the five studies: they both guide the soul in the "turn away from what is coming to be ... [to] what is" (518c) and, at the same time, lead the soul back, albeit in the medium of pure intelligibility, to the sensible world; for the pure figures and ratios that they disclose constitute the core structures of sensible things. I then draw on what Socrates says about geometry and harmonics to address three fundamental questions that he leaves open: the nature of the Good in its responsibility for truth and for the being of the forms; the relations of forms, mathematicals, and sensibles as these are disclosed by dialectic; and the bearing of the philosopher's discovery of the Good on his disposition towards his community and the task of ruling. I close by marking six sets of further questions that these reflections bequeath for dialogues to come.
Investigation of neural and cognitive processes underlying individual variation in moral preferences is underway, with notable similarities emerging between moral- and risk-based decision-making. Here we specifically assessed moral distributive justice preferences and non-moral financial gambling preferences in the same individuals, and report an association between these seemingly disparate forms of decision-making. Moreover, we find this association between distributive justice and risky decision-making exists primarily when the latter is assessed with the Iowa Gambling Task. These findings are consistent with neuroimaging studies of brain function during moral and risky decision-making. This research also constitutes the first replication of a novel experimental measure of distributive justice decision-making, for which individual variation in performance was found. Further examination of decision-making processes across different contexts may lead to an improved understanding of the factors affecting moral behaviour.
There is a lot that we don’t know. That means that there are a lot of possibilities that are, epistemically speaking, open. For instance, we don’t know whether it rained in Seattle yesterday. So, for us at least, there is an epistemic possibility where it rained in Seattle yesterday, and one where it did not. It’s tempting to give a very simple analysis of epistemic possibility: • A possibility is an epistemic possibility if we do not know that it does not obtain. But this is problematic for a few reasons. One issue, one that we’ll come back to, concerns the first two words. The analysis appears to quantify over possibilities. But what are they? As we said, that will become a large issue pretty soon, so let’s set it aside for now. A more immediate problem is that it isn’t clear what it is to have de re attitudes towards possibilities, such that we know a particular possibility does or doesn’t obtain. Let’s try rephrasing our analysis so that it avoids this complication.
I argue that an attractive theory about the metaphysics of belief—the pragmatic, interpretationist theory endorsed by Stalnaker, Lewis, and Dennett, among others—implies that agents have a novel form of voluntary control over their beliefs. According to the pragmatic picture, what it is to have a given belief is in part for that belief to be part of an optimal rationalization of your actions. Since you have voluntary control over your actions, and what actions you perform in part determines what beliefs you count as having, this theory entails that you have some voluntary control over your beliefs. However, the pragmatic picture doesn’t entail that you can believe something as a result of an intention to believe it. Nevertheless, I argue that the limited sort of voluntary control implied by the pragmatic picture may be of use in vindicating the deontological conception of epistemic justification.
In two excellent recent papers, Jacob Ross has argued that the standard arguments for the ‘thirder’ answer to the Sleeping Beauty puzzle lead to violations of countable additivity. The problem is that most arguments for that answer generalise in awkward ways when he looks at the whole class of what he calls Sleeping Beauty problems. In this note I develop a new argument for the thirder answer that doesn't generalise in this way.
Empirical research indicates that feelings of disgust actually affect our moral beliefs and moral motivations. The question is, should they? Daniel Kelly argues that they should not. More particularly, he argues for what we may call the irrelevancy thesis and the anti-moralization thesis. According to the irrelevancy thesis, feelings of disgust should be given no weight when judging the moral character of an action (or norm, practice, outcome, or ideal). According to the anti-moralization thesis, feelings of disgust should not be allowed a role in, or harnessed in the service of, moral motivation. In this paper, I will argue against both theses, staking out a moderate position according to which feelings of disgust can (but needn’t always) play a proper role in aid of moral belief formation and moral motivation.
This paper has three aims. First, I’ll argue that there’s no good reason to accept any kind of ‘easy knowledge’ objection to externalist foundationalism. It might be a little surprising that we can come to know that our perception is accurate by using our perception, but any attempt to argue this is impossible seems to rest on either false premises or fallacious reasoning. Second, there is something defective about using our perception to test whether our perception is working. What this reveals is that there are things we aim for in testing other than knowing that the device being tested is working. I’ll suggest that testing aims for sensitive knowledge that the device is working. Testing a device, such as our perceptual system, by using its own outputs may deliver knowledge, but it can’t deliver sensitive knowledge. So it’s a bad way to test the system. The big conclusion here is that sensitivity is an important epistemic virtue, although it is not necessary for knowledge. Third, I’ll argue that the idea that sensitivity is an epistemic virtue can provide a solution to a tricky puzzle about inductive evidence. This provides another reason for thinking that the conclusion of section two is correct: not all epistemic virtues are to do with knowledge.
At least since Aristotle’s famous 'sea-battle' passages in On Interpretation 9, some substantial minority of philosophers has been attracted to the doctrine of the open future--the doctrine that future contingent statements are not true. But, prima facie, such views seem inconsistent with the following intuition: if something has happened, then (looking back) it was the case that it would happen. How can it be that, looking forwards, it isn’t true that there will be a sea battle, while also being true that, looking backwards, it was the case that there would be a sea battle? This tension forms, in large part, what might be called the problem of future contingents. A dominant trend in temporal logic and semantic theorizing about future contingents seeks to validate both intuitions. Theorists in this tradition--including some interpretations of Aristotle, but paradigmatically, Thomason (1970), as well as more recent developments in Belnap et al. (2001) and MacFarlane (2003, 2014)--have argued that the apparent tension between the intuitions is in fact merely apparent. In short, such theorists seek to maintain both of the following two theses: (i) the open future: Future contingents are not true, and (ii) retro-closure: From the fact that something is true, it follows that it was the case that it would be true. It is well-known that reflection on the problem of future contingents has in many ways been inspired by importantly parallel issues regarding divine foreknowledge and indeterminism. In this paper, we take up this perspective, and ask what accepting both the open future and retro-closure predicts about omniscience. When we theorize about a perfect knower, we are theorizing about what an ideal agent ought to believe. Our contention is that there isn’t an acceptable view of ideally rational belief given the assumptions of the open future and retro-closure, and thus this casts doubt on the conjunction of those assumptions.
The principle of Conditional Excluded Middle has been a matter of longstanding controversy in both semantics and metaphysics. According to this principle, we are, inter alia, committed to claims like the following: If the coin had been flipped, it would have landed heads, or if the coin had been flipped, it would not have landed heads. In favour of the principle, theorists have appealed, primarily, to linguistic data such as that we tend to hear ¬(A > B) as equivalent to (A > ¬B). Williams (2010) provides one of the most compelling recent arguments along these lines by appealing to intuitive equivalences between certain quantified conditional statements. We argue that the strategy Williams employs can be parodied to generate an argument for the unwelcome principle of Should Excluded Middle: the principle that, for any A, it either should be that A or it should be that not A. Uncovering what goes wrong with this argument casts doubt on a key premise in Williams’ argument. The way we develop this point is by defending the thesis that, like "should", "would" is a so-called neg-raising predicate. Neg-raising is the linguistic phenomenon whereby “I don’t think that Trump is a good president” strongly tends to implicate “I think that Trump is not a good president,” despite the former not semantically entailing the latter. We show how a defender of a Lewis-style semantics for counterfactuals should implement the idea that the counterfactual is a “neg-raiser”.
We argue against the knowledge rule of assertion, and in favour of integrating the account of assertion more tightly with our best theories of evidence and action. We think that the knowledge rule has an incredible consequence when it comes to practical deliberation, that it can be right for a person to do something that she can't properly assert she can do. We develop some vignettes that show how this is possible, and how odd this consequence is. We then argue that these vignettes point towards alternate rules that tie assertion to sufficient evidence-responsiveness or to proper action. These rules have many of the virtues that are commonly claimed for the knowledge rule, but lack the knowledge rule's problematic consequences when it comes to assertions about what to do.
The book is divided into three parts. The first, containing three papers, focuses on the characterization of the central tenets of presentism (by Neil McKinnon) and eternalism (by Samuel Baron and Kristie Miller), and on the ‘sceptical stance’ (by Ulrich Meyer), a view to the effect that there is no substantial difference between presentism and eternalism. The second and main section of the book contains three pairs of papers that bring the main problems with presentism to the fore and outline its defence strategies. Each pair of papers in this section can be read as a discussion between presentists and eternalists, wherein each directly responds to the arguments and objections offered by the other. This is a discussion that is sometimes absent in the literature, or which is at best carried out in a fragmented way. The first two papers of the section deal with the problem of the compatibility of Special Relativity Theory (SRT) and presentism. SRT is often considered to be a theory that contradicts the main tenet of presentism, thereby rendering presentism at odds with one of our most solid scientific theories. Christian Wüthrich’s paper presents arguments for the incompatibility of the two theories (SRT and presentism) within a new framework that includes a discussion of further complications arising from the theory of Quantum Mechanics. Jonathan Lowe’s paper, by contrast, develops new general arguments against the incompatibility thesis and replies to Wüthrich’s paper. The second pair of papers focuses on the problem that presentists face in providing grounds for past-tensed truths. In the first (by Matthew Davidson), new arguments are provided to defend the idea that the presentist cannot adequately explain how what is now true about the past is grounded, since for the presentist the past is completely devoid of ontological ground.
The second paper (by Brian Kierland) takes up the challenge of developing a presentist explanation of past truths, beginning by outlining some existing views in the literature before advancing an original proposal.
In a recent paper in this journal, McCall and Lowe (2003) argue that an understanding of Special Relativity reveals that the A theorist’s notion of temporal passage is consistent with the B theory of time. They arrive at this conclusion by considering the twins’ paradox, where one of two twins (T) travels to Alpha Centauri and back and upon her return has aged 30 years, while her earth-bound twin (S) has aged 40 years. This paper argues that their account of temporal passage fails to reconcile the A-theoretic notion of temporal passage with the B theory of time, since the B theorist is at liberty to adopt it as an account of temporal passage as the B theorist already understands it.
Montague and Kaplan began a revolution in semantics, which promised to explain how a univocal expression could make distinct truth-conditional contributions in its various occurrences. The idea was to treat context as a parameter at which a sentence is semantically evaluated. But the revolution has stalled. One salient problem comes from recurring demonstratives: "He is tall and he is not tall". For the sentence to be true at a context, each occurrence of the demonstrative must make a different truth-conditional contribution. But this difference cannot be accounted for by standard parameter sensitivity. Semanticists, consoled by the thought that this ambiguity would ultimately be needed anyhow to explain anaphora, have been too content to posit massive ambiguities in demonstrative pronouns. This article aims to revive the parameter revolution by showing how to treat demonstrative pronouns as univocal while providing an account of anaphora that doesn't end up re-introducing the ambiguity.
In this paper we explore the idea that Pentecostalism is best supported by conjoining it to a postmodern, narrative epistemology in which everything is a text requiring interpretation. On this view, truth doesn’t consist in a set of uninterpreted facts that make the claims of Christianity true; rather, as James K. A. Smith says, truth emerges when there is a “fit” or proportionality between the Christian story and one’s affective and emotional life. We argue that Pentecostals should reject this account of truth, since it leads to either a self-refuting story-relativism or the equally problematic fallacy of story-ism: favoring one’s own story over others without legitimate reason. In either case, we contend, the gospel itself is placed at risk.
In “Against Arguments from Reference” (Mallon et al., 2009), Ron Mallon, Edouard Machery, Shaun Nichols, and Stephen Stich (hereafter, MMNS) argue that recent experiments concerning reference undermine various philosophical arguments that presuppose the correctness of the causal-historical theory of reference. We will argue three things in reply. First, the experiments in question—concerning Kripke’s Gödel/Schmidt example—don’t really speak to the dispute between descriptivism and the causal-historical theory; though the two theories are empirically testable, we need to look at quite different data than MMNS do to decide between them. Second, the Gödel/Schmidt example plays a different, and much smaller, role in Kripke’s argument for the causal-historical theory than MMNS assume. Finally, and relatedly, even if Kripke is wrong about the Gödel/Schmidt example—indeed, even if the causal-historical theory is not the correct theory of names for some human languages—that does not, contrary to MMNS’s claim, undermine uses of the causal-historical theory in philosophical research projects.
The question of what distinguishes moral problems from other problems is important to the study of the evolution and functioning of morality. Many researchers concerned with this topic have assumed, either implicitly or explicitly, that all moral problems are problems of cooperation. This assumption offers a response to the moral demarcation problem by identifying a necessary condition of moral problems. Characterizing moral problems as problems of cooperation is a popular response to this issue – especially among researchers empirically studying the beginnings and limits of moral psychology. However, demarcating the moral in this way severely restricts the domain of moral problems. There are plenty of moral problems that aren’t simply problems of cooperation. In this paper I argue that understanding moral problems as problems of cooperation is too restrictive and offer an alternative way of demarcating moral from non-moral problems. Characterizing what makes a problem moral in terms of cooperation excludes a variety of problems that are ordinarily understood and responded to as moral. The alternative characterization that I propose is based on the American Indian/Native American concept of harmony. Using the concept of cooperation to demarcate the moral removes moral agents from their surroundings or contexts by assuming moral agency applies only to humans or other similarly evolved lifeforms. In contrast, using the concept of harmony allows for moral consideration to be granted to non-humans as well (e.g., non-human animals, plant life, ecosystems, etc.).
Some time ago, Joel Katzav and Brian Ellis debated the compatibility of dispositional essentialism with the principle of least action. Surprisingly, very little has been said on the matter since, even by the most naturalistically inclined metaphysicians. Here, we revisit the Katzav–Ellis arguments of 2004–05. We outline the two problems for the dispositionalist identified by Katzav in his 2004 paper, and claim they are not as problematic for the dispositional essentialist as it first seems – but not for the reasons espoused by Ellis.
In “A Reliabilist Solution to the Problem of Promiscuous Bootstrapping”, Hilary Kornblith (2009) proposes a reliabilist solution to the bootstrapping problem. I’m going to argue that Kornblith’s proposal, far from solving the bootstrapping problem, in fact makes the problem much harder for the reliabilist to solve. Indeed, I’m going to argue that Kornblith’s considerations give us a way to develop a quick reductio of a certain kind of reliabilism. Let’s start with a crude statement of the problem. The bootstrapper, call them S, looks at a device D1 that happens to be reliable, though at this stage S doesn’t know this. We assume that S is a reliable reader of devices. S then draws the following conclusions.
This paper was written for a workshop on ethics and epistemology at Missouri. I use an example from unpublished work with Ishani Maitra to develop a new kind of argument for expressivism. (I don’t endorse the argument, but I think it is interesting.) Roughly, the argument is that knowledge is a norm governing assertions, but moral claims do not have to be known to be properly made, so to make a moral claim is not to make an assertion. Some suggestions are made for how a non-expressivist might avoid the argument.
Lewis Carroll’s 1895 paper “What the Tortoise Said to Achilles” showed that we need a distinction between rules of inference and premises. We cannot, on pain of regress, treat all rules simply as further premises in an argument. But Carroll’s paper doesn’t say very much about what rules there must be. Indeed, it is consistent with what Carroll says there to think that the only rule is →-elimination. You might think that modern Bayesians, who seem to think that the only rule of inference they need is conditionalisation, have taken just this lesson from Carroll. But obviously nothing in Carroll’s argument rules out there being other rules as well.
One of the benefits of the 2D framework we looked at last week was that it explained how we could understand a sentence without knowing which proposition it expressed. And we could do this even if we give an account of understanding which is closely tied to the possible worlds semantics we use to analyse propositions. Really this can be done very easily, without appeal to any high-flying Kripkean cases. In “Analytic Metaphysics” Jackson discusses a very simple case of it. I can understand an utterance of “I have a beard” without knowing which proposition it expresses. I know how the proposition is generated from context plus meaning: if X is the speaker, then the sentence expresses the proposition that X has a beard. And that is enough for understanding. But if I don’t know who said the sentence, so I don’t know who X is, I don’t know which proposition is expressed by that utterance.
A house is a structure that provides shelter for people. Studies have shown that in most parts of the world, urban rents are determined by various factors. These factors include location, level of facilities and services, neighborhood characteristics, space, and so on. Among these factors, the most influential determinant of rent in the Wa Municipality is the level of facilities and services provided for tenant use. The objectives of this research were to examine the cost of housing construction, to determine the role played by government in housing provision, to recommend policies for housing provision, and to determine the portion of household income spent on rent. The methodology of this research is based on an interplay of deskwork and fieldwork, and these took the form of data collection, presentation and analysis of findings. In the course of this study, both qualitative and quantitative primary and secondary data were collected. A summary of the findings from the research indicates that: the cost of building materials is the major contributory factor to the cost of construction, aside from land and labour costs; the existing rent control law as currently operated has little or no impact on rents charged in the Municipality; current rent levels in the Municipality are deemed to be satisfactory; and, besides the already documented rent determinants, population, occupation, and prospective duration of lease were also identified. One other major finding was that landlords do not take into account the room being let, but rather consider the number of people occupying the room when charging rent, and as such tenants who cannot afford to pay the full recoverable rent have to search for co-tenants they do not know.
The group recommends that high priority should be given to local building materials, which could reduce the cost of building and improve the supply chain for various building materials; that there should be a mechanism to ensure that the Rent Control Board, house owners and tenants are provided with a platform for consensus building, in order to ensure transparency in rent charges; and, finally, that the government should exempt locally sourced building components from value added taxes, and imported building goods from import duties.
Sometimes ignorance is a legitimate excuse for morally wrong behavior, and sometimes it isn’t. If someone has secretly replaced my sugar with arsenic, then I’m blameless for putting arsenic in your tea. But if I put arsenic in your tea because I keep arsenic and sugar jars on the same shelf and don’t label them, then I’m plausibly blameworthy for poisoning you. Why is my ignorance in the first case a legitimate excuse, but my ignorance in the second case isn’t? This essay explores the relationship between ignorance and blameworthiness.