We consider how an epistemic network might self-assemble from the ritualization of the individual decisions of simple heterogeneous agents. In such evolved social networks, inquirers may be significantly more successful than they could be investigating nature on their own. The evolved network may also dramatically lower the epistemic risk faced by even the most talented inquirers. We consider networks that self-assemble in the context of both perfect and imperfect communication and compare the behaviour of inquirers in each. This provides a step in bringing together two new and developing research programs, the theory of self-assembling games and the theory of network epistemology.
Introduction to 'Skyrmsfest: Papers in Honor of Brian Skyrms' issue of Philosophical Studies, January 2010. Remarks about Brian Skyrms and about the 10 papers in the issue.
Recent work by Brian Skyrms offers a very general way to think about how information flows and evolves in biological networks — from the way monkeys in a troop communicate, to the way cells in a body coordinate their actions. A central feature of his account is a way to formally measure the quantity of information contained in the signals in these networks. In this paper, we argue there is a tension between how Skyrms talks of signalling networks and his formal measure of information. Although Skyrms speaks both of information flowing through networks and of signals carrying information, we show that his formal measure only captures the latter. We then suggest that to capture the notion of flow in signalling networks, we need to treat them as causal networks. This provides the formal tools to define a measure that does capture flow, and we do so by drawing on recent work defining causal specificity. Finally, we suggest that this new measure is crucial if we wish to explain how evolution creates information. For signals to play a role in explaining their own origins and stability, they can’t just carry information about acts: they must be difference-makers for acts.
The problem of the man who met death in Damascus appeared in the infancy of the theory of rational choice known as causal decision theory. A straightforward, unadorned version of causal decision theory is presented here and applied, along with Brian Skyrms’ deliberation dynamics, to Death in Damascus and similar problems. Decision instability is a fascinating topic, but not a source of difficulty for causal decision theory. Andy Egan’s purported counterexample to causal decision theory, Murder Lesion, is considered; a simple response shows how Murder Lesion and similar examples fail to be counterexamples, and clarifies the use of the unadorned theory in problems of decision instability. I compare unadorned causal decision theory to previous treatments by Frank Arntzenius and by Jim Joyce, and recommend a well-founded heuristic that all three accounts can endorse. Whatever course deliberation takes, causal decision theory is consistently a good guide to rational action.
Naturalistic theories of representation seek to specify the conditions that must be met for an entity to represent another entity. Although these approaches have been relatively successful in certain areas, such as communication theory or genetics, many doubt that they can be employed to naturalize complex cognitive representations. In this essay I identify some of the difficulties for developing a teleosemantic theory of cognitive representations and provide a strategy for accommodating them: to look into models of signaling in evolutionary game theory. I show how these models can be used to formulate teleosemantics and expand it in new directions.
Intuitively, Gettier cases are instances of justified true beliefs that are not cases of knowledge. Should we therefore conclude that knowledge is not justified true belief? Only if we have reason to trust intuition here. But intuitions are unreliable in a wide range of cases. And it can be argued that the Gettier intuitions have a greater resemblance to unreliable intuitions than to reliable intuitions. What’s distinctive about the faulty intuitions, I argue, is that respecting them would mean abandoning a simple, systematic and largely successful theory in favour of a complicated, disjunctive and idiosyncratic theory. So maybe respecting the Gettier intuitions was the wrong reaction; we should instead have been explaining why we are all so easily misled by these kinds of cases.
I defend normative externalism from the objection that it cannot account for the wrongfulness of moral recklessness. The defence is fairly simple—there is no wrong of moral recklessness. There is an intuitive argument by analogy that there should be a wrong of moral recklessness, and the bulk of the paper consists of a response to this analogy. A central part of my response is that if people were motivated to avoid moral recklessness, they would have to have an unpleasant sort of motivation, what Michael Smith calls “moral fetishism”.
In his Principles of Philosophy, Descartes says, “Finally, it is so manifest that we possess a free will, capable of giving or withholding its assent, that this truth must be reckoned among the first and most common notions which are born with us.”
I consider the problem of how to derive what an agent believes from their credence function and utility function. I argue the best solution of this problem is pragmatic, i.e. it is sensitive to the kinds of choices actually facing the agent. I further argue that this explains why our notion of justified belief appears to be pragmatic, as is argued e.g. by Fantl and McGrath. The notion of epistemic justification is not really a pragmatic notion, but it is being applied to a pragmatically defined concept, i.e. belief.
Dogmatism is sometimes thought to be incompatible with Bayesian models of rational learning. I show that the best model for updating imprecise credences is compatible with dogmatism.
Intelligent activity requires the use of various intellectual skills. While these skills are connected to knowledge, they should not be identified with knowledge. There are realistic examples where the skills in question come apart from knowledge. That is, there are realistic cases of knowledge without skill, and of skill without knowledge. Whether a person is intelligent depends, in part, on whether they have these skills. Whether a particular action is intelligent depends, in part, on whether it was produced by an exercise of skill. These claims promote a picture of intelligence that is in tension with a strongly intellectualist picture, though they are not in tension with a number of prominent claims recently made by intellectualists.
Recently four different papers have suggested that the supervaluational solution to the Problem of the Many is flawed. Stephen Schiffer (1998, 2000a, 2000b) has argued that the theory cannot account for reports of speech involving vague singular terms. Vann McGee and Brian McLaughlin (2000) say that the theory cannot yet account for vague singular beliefs. Neil McKinnon (2002) has argued that we cannot provide a plausible theory of when precisifications are acceptable, which the supervaluational theory needs. And Roy Sorensen (2000) argues that supervaluationism is inconsistent with a directly referential theory of names. McGee and McLaughlin see the problem they raise as a cause for further research, but the other authors all take the problems they raise to provide sufficient reasons to jettison supervaluationism. I will argue that none of these problems provide such a reason, though the arguments are valuable critiques. In many cases, we must make some adjustments to the supervaluational theory to meet the posed challenges. The goal of this paper is to make those adjustments, and meet the challenges.
Conciliatory theories of disagreement face a revenge problem; they cannot be coherently believed by one who thinks they have peers who are not conciliationists. I argue that this is a deep problem for conciliationism.
Many writers have held that in his later work, David Lewis adopted a theory of predicate meaning such that the meaning of a predicate is the most natural property that is (mostly) consistent with the way the predicate is used. That orthodox interpretation is shared by both supporters and critics of Lewis's theory of meaning, but it has recently been strongly criticised by Wolfgang Schwarz. In this paper, I accept many of Schwarz's criticisms of the orthodox interpretation, and add some more. But I also argue that the orthodox interpretation has a grain of truth in it, and seeing that helps us appreciate the strength of Lewis's late theory of meaning.
I set out and defend a view on indicative conditionals that I call “indexical relativism”. The core of the view is that which proposition is expressed by an utterance of a conditional is a function of the speaker’s context and the assessor’s context. This implies a kind of relativism, namely that a single utterance may be correctly assessed as true by one assessor and false by another.
In previous work I’ve defended an interest-relative theory of belief. This paper continues the defence. It has four aims: (1) to offer a new kind of reason for being unsatisfied with the simple Lockean reduction of belief to credence; (2) to defend the legitimacy of appealing to credences in a theory of belief; (3) to illustrate the importance of theoretical, as well as practical, interests in an interest-relative account of belief; and (4) to revise my account to cover propositions that are practically and theoretically irrelevant to the agent.
We live in a world of crowds and corporations, artworks and artifacts, legislatures and languages, money and markets. These are all social objects — they are made, at least in part, by people and by communities. But what exactly are these things? How are they made, and what is the role of people in making them? In The Ant Trap, Brian Epstein rewrites our understanding of the nature of the social world and the foundations of the social sciences. Epstein explains and challenges the three prevailing traditions about how the social world is made. One tradition takes the social world to be built out of people, much as traffic is built out of cars. A second tradition also takes people to be the building blocks of the social world, but focuses on thoughts and attitudes we have toward one another. And a third tradition takes the social world to be a collective projection onto the physical world. Epstein shows that these share critical flaws. Most fundamentally, all three traditions overestimate the role of people in building the social world: they are overly anthropocentric. Epstein starts from scratch, bringing the resources of contemporary metaphysics to bear. In the place of traditional theories, he introduces a model based on a new distinction between the grounds and the anchors of social facts. Epstein illustrates the model with a study of the nature of law, and shows how to interpret the prevailing traditions about the social world. Then he turns to social groups, and to what it means for a group to take an action or have an intention. Contrary to the overwhelming consensus, these often depend on more than the actions and intentions of group members.
Three objections have recently been levelled at the analysis of intrinsicness offered by Rae Langton and David Lewis. While these objections do seem telling against the particular theory Langton and Lewis offer, they do not threaten the broader strategy Langton and Lewis adopt: defining intrinsicness in terms of combinatorial features of properties. I show how to amend their theory to overcome the objections without abandoning the strategy.
I defend interest-relative invariantism from a number of recent attacks. One common thread to my response is that interest-relative invariantism is a much weaker thesis than is often acknowledged, and a number of the attacks only challenge very specific, and I think implausible, versions of it. Another is that a number of the attacks fail to acknowledge how many things we have independent reason to believe knowledge is sensitive to. Whether there is a defeater for someone's knowledge can be sensitive to all manner of features of their environment, as the host of examples from the post-Gettier literature shows. Adding in interest-sensitive defeaters is a much less radical move than most critics claim it is.
Timothy Williamson has recently argued that few mental states are luminous, meaning that to be in that state is to be in a position to know that you are in the state. His argument rests on the plausible principle that beliefs only count as knowledge if they are safely true. That is, any belief that could easily have been false is not a piece of knowledge. I argue that the form of the safety rule Williamson uses is inappropriate, and the correct safety rule might not conflict with luminosity.
Suppose a rational agent S has some evidence E that bears on p, and on that basis makes a judgment about p. For simplicity, we’ll normally assume that she judges that p, though we’re also interested in cases where the agent makes other judgments, such as that p is probable, or that p is well-supported by the evidence. We’ll also assume, again for simplicity, that the agent knows that E is the basis for her judgment. Finally, we’ll assume that the judgment is a rational one to make, though we won’t assume the agent knows this. Indeed, whether the agent can always know that she’s making a rational judgment when in fact she is will be of central importance in some of the debates that follow.
In a recent article, Adam Elga outlines a strategy for “Defeating Dr Evil with Self-Locating Belief”. The strategy relies on an indifference principle that is not up to the task. In general, there are two things to dislike about indifference principles: adopting one normally means confusing risk with uncertainty, and they tend to lead to incoherent views in some ‘paradoxical’ situations. I argue that both kinds of objection can be levelled against Elga’s indifference principle. There are also some difficulties with the concept of evidence that Elga uses, and these create further difficulties for the principle.
Accuracy‐first epistemology is an approach to formal epistemology which takes accuracy to be a measure of epistemic utility and attempts to vindicate norms of epistemic rationality by showing how conformity with them is beneficial. If accuracy‐first epistemology can actually vindicate any epistemic norms, it must adopt a plausible account of epistemic value. Any such account must avoid the epistemic version of Derek Parfit's “repugnant conclusion.” I argue that the only plausible way of doing so is to say that accurate credences in certain propositions have no, or almost no, epistemic value. I prove that this is incompatible with standard accuracy‐first arguments for probabilism, and argue that there is no way for accuracy‐first epistemology to show that all credences of all agents should be coherent.
What the world needs now is another theory of vagueness. Not because the old theories are useless. Quite the contrary, the old theories provide many of the materials we need to construct the truest theory of vagueness ever seen. The theory shall be similar in motivation to supervaluationism, but more akin to many-valued theories in conceptualisation. What I take from the many-valued theories is the idea that some sentences can be truer than others. But I say very different things about the ordering over sentences this relation generates. I say it is not a linear ordering, so it cannot be represented by the real numbers. I also argue that since there is higher-order vagueness, any mapping between sentences and mathematical objects is bound to be inappropriate. This is no cause for regret; we can say all we want to say by using the comparative ‘truer than’ without mapping it onto some mathematical objects. From supervaluationism I take the idea that we can keep classical logic without keeping the familiar bivalent semantics for classical logic. But my preservation of classical logic is more comprehensive than is normally permitted by supervaluationism, for I preserve classical inference rules as well as classical sequents. And I do this without relying on the concept of acceptable precisifications as an unexplained explainer. The world does not need another guide to varieties of theories of vagueness, especially since Timothy Williamson (1994) and Rosanna Keefe (2000) have already provided quite good guides. I assume throughout familiarity with popular theories of vagueness.
Michael Strevens’s book Depth is a great achievement. To say anything interesting, useful and true about explanation requires taking on fundamental issues in the metaphysics and epistemology of science. So this book not only tells us a lot about scientific explanation, it has a lot to say about causation, lawhood, probability and the relation between the physical and the special sciences. It should be read by anyone interested in any of those questions, which includes presumably the vast majority of readers of this journal. One of its many virtues is that it lets us see more clearly what questions about explanation, causation, lawhood and so on need answering, and frames those questions in perspicuous ways. I’m going to focus on one of these questions, what I’ll call the Goldilocks problem. As it turns out, I’m not going to agree with all the details of Strevens’s answer to this problem, though I suspect that something like his answer is right. At least, I hope something like his answer is right; if it isn’t, I’m not sure where else we can look.
Gordon Belot has recently developed a novel argument against Bayesianism. He shows that there is an interesting class of problems that, intuitively, no rational belief forming method is likely to get right. But a Bayesian agent’s credence, before the problem starts, that she will get the problem right has to be 1. This is an implausible kind of immodesty on the part of Bayesians. My aim is to show that while this is a good argument against traditional, precise Bayesians, the argument doesn’t neatly extend to imprecise Bayesians. As such, Belot’s argument is a reason to prefer imprecise Bayesianism to precise Bayesianism.
The Sleeping Beauty puzzle provides a nice illustration of the approach to self-locating belief defended by Robert Stalnaker in Our Knowledge of the Internal World (Stalnaker, 2008), as well as a test of the utility of that method. The setup of the Sleeping Beauty puzzle is by now fairly familiar. On Sunday Sleeping Beauty is told the rules of the game, and a (known to be) fair coin is flipped. On Monday, Sleeping Beauty is woken, and then put back to sleep. If, and only if, the coin landed tails, she is woken again on Tuesday after having her memory of the Monday awakening erased. On Wednesday she is woken again and the game ends. There are a few questions we can ask about Beauty’s attitudes as the game progresses. We’d like to know what her credence that the coin landed heads should be (a) before she goes to sleep on Sunday; (b) when she wakes on Monday; (c) when she wakes on Tuesday; and (d) when she wakes on Wednesday. Standard treatments of the Sleeping Beauty puzzle ignore (d), run together (b) and (c) into one (somewhat ill-formed) question, and then divide theorists into ‘halfers’ or ‘thirders’ depending on how they answer it. Following Stalnaker, I’m going to focus on (b) here, though I’ll have a little to say about (c) and (d) as well. I’ll be following orthodoxy in taking 1/2 to be the clear answer to (a), and in taking the correct answers to (b) and (c) to be independent of how the coin lands, though I’ll briefly question that assumption at the end. An answer to these four questions should respect two different kinds of constraints. The answer for day n should make sense ‘statically’: it should be a sensible answer to the question of what Beauty should do given what information she then has. And the answer should make sense ‘dynamically’: it should be a sensible answer to the question of how Beauty should have updated her credences from some earlier day, given rational credences on the earlier day. As has been fairly clear since the discussion of the problem in Elga (2000), Sleeping Beauty is puzzling because static and dynamic considerations appear to push in different directions.
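As standard background, not part of the paper's own argument: the two familiar answers to question (b) can each be derived in a line. Halfer (dynamic reasoning): Beauty was certain she would be woken whichever way the coin landed, so she learns nothing relevant on waking and should keep her Sunday credence, giving P_Monday(Heads) = P_Sunday(Heads) = 1/2. Thirder (static reasoning): on waking there are three subjectively indistinguishable awakenings (Monday-Heads, Monday-Tails, Tuesday-Tails), exactly one of which follows Heads, so treating them as equally likely gives P_Monday(Heads) = 1/3. The halfer answer respects the dynamic constraint and the thirder answer the static one, which is the tension the abstract describes.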
Uncertainty plays an important role in The General Theory, particularly in the theory of interest rates. Keynes did not provide a theory of uncertainty, but he did make some enlightening remarks about the direction he thought such a theory should take. I argue that some modern innovations in the theory of probability allow us to build a theory which captures these Keynesian insights. If this is the right theory, however, uncertainty cannot carry its weight in Keynes’s arguments. This does not mean that the conclusions of these arguments are necessarily mistaken; in their best formulation they may succeed with merely an appeal to risk.
I argue that what evidence an agent has does not supervene on how she currently is. Agents do not always have to infer what the past was like from how things currently seem; sometimes the facts about the past are retained pieces of evidence that can be the start of reasoning. The main argument is a variant on Frank Arntzenius’s Shangri La example, an example that is often used to motivate the thought that evidence does supervene on current features.
Data about attitude reports provide some of the most interesting arguments for, and against, various theses of semantic relativism. This paper is a short survey of three such arguments. First, I’ll argue (against recent work by von Fintel and Gillies) that relativists can explain the behaviour of relativistic terms in factive attitude reports. Second, I’ll argue (against Glanzberg) that looking at attitude reports suggests that relativists have a more plausible story to tell than contextualists about the division of labour between semantics and meta-semantics. Finally, I’ll offer a new argument for invariantism (i.e. against both relativism and contextualism) about moral terms. The argument will turn on the observation that the behaviour of normative terms in factive and non-factive attitude reports is quite unlike the behaviour of any other plausibly context-sensitive term. Before that, I’ll start with some taxonomy, just so it’s clear what the intended conclusions below are supposed to be.
There are many controversial theses about intrinsicness and duplication. The first aim of this paper is to introduce a puzzle that shows that two of the uncontroversial sounding ones can’t both be true. The second aim is to suggest that the best way out of the puzzle requires sharpening some distinctions that are too frequently blurred, and adopting a fairly radical reconception of the ways things are.
Nick Bostrom argues that if we accept some plausible assumptions about how the future will unfold, we should believe we are probably not humans. The argument appeals crucially to an indifference principle whose precise content is a little unclear. I set out four possible interpretations of the principle, none of which can be used to support Bostrom’s argument. On the first two interpretations the principle is false, on the third it does not entail the conclusion, and on the fourth it only entails the conclusion given an auxiliary hypothesis that we have no reason to believe.
In “Now the French are invading England” (Analysis 62, 2002, pp. 34-41), Komarine Romdenh-Romluc offers a new theory of the relationship between recorded indexicals and their content. Romdenh-Romluc proposes that Kaplan’s basic idea, that reference is determined by applying a rule to a context, is correct, but we have to be careful about what the context is, since it is not always the context of utterance. A few well known examples illustrate this. The “here” and “now” in “I am not here now” on an answering machine do not refer to the time and place of the original utterance, but to the time the message is played back, and the place its attached telephone is located. Any occurrence of “today” in a newspaper or magazine refers not to the day the story in which it appears was written, nor to the day the newspaper or magazine was printed, but to the cover date of that publication. Still, it is plausible that for each (token of an) indexical there is a salient context, and that “today” refers to the day of its context, “here” to the place of its context, and so on. Romdenh-Romluc takes this to be true, and then makes a proposal about what the salient context is. It is “the context that Ac would identify on the basis of cues that she would reasonably take U to be exploiting.” (39) Ac is the relevant audience, “the individual who it is reasonable to take the speaker to be addressing”, and who is assumed to be linguistically competent and attentive. (So Ac might not be the person U intends to address. This will not matter for what follows.) The proposal seems to suggest that it is impossible to trick a reasonably attentive hearer about what the referent of a particular indexical is. Since such trickery does seem possible, Romdenh-Romluc’s theory needs (at least) supplementation. I present two examples of such tricks.
Recently, Timothy Williamson has argued that considerations about margins of error can generate a new class of cases where agents have justified true beliefs without knowledge. I think this is a great argument, and it has a number of interesting philosophical conclusions. In this note I’m going to go over the assumptions of Williamson’s argument. I’m going to argue that the assumptions which generate the justification without knowledge are true. I’m then going to go over some of the recent arguments in epistemology that are refuted by Williamson’s work. And I’m going to end with an admittedly inconclusive discussion of what we can know when using an imperfect measuring device.
I advocate Time-Slice Rationality, the thesis that the relationship between two time-slices of the same person is not importantly different, for purposes of rational evaluation, from the relationship between time-slices of distinct persons. The locus of rationality, so to speak, is the time-slice rather than the temporally extended agent. This claim is motivated by consideration of puzzle cases for personal identity over time and by a very moderate form of internalism about rationality. Time-Slice Rationality conflicts with two proposed principles of rationality, Conditionalization and Reflection. Conditionalization is a diachronic norm saying how your current degrees of belief should fit with your old ones, while Reflection is a norm enjoining you to defer to the degrees of belief that you expect to have in the future. But they are independently problematic and should be replaced by improved, time-slice-centric principles. Conditionalization should be replaced by a synchronic norm saying what degrees of belief you ought to have given your current evidence, and Reflection should be replaced by a norm which instructs you to defer to the degrees of belief of agents you take to be experts. These replacement principles do all the work that the old principles were supposed to do while avoiding their problems. In this way, Time-Slice Rationality puts the theory of rationality on firmer foundations and yields better norms than alternative, non-time-slice-centric approaches.
Applying good inductive rules inside the scope of suppositions leads to implausible results. I argue it is a mistake to think that inductive rules of inference behave anything like 'inference rules' in natural deduction systems. And this implies that it isn't always true that good arguments can be run 'off-line' to gain a priori knowledge of conditional conclusions.
I argue with my friends a lot. That is, I offer them reasons to believe all sorts of philosophical conclusions. Sadly, despite the quality of my arguments, and despite their apparent intelligence, they don’t always agree. They keep insisting on principles in the face of my wittier and wittier counterexamples, and they keep offering their own dull alleged counterexamples to my clever principles. What is a philosopher to do in these circumstances? (And I don’t mean get better friends.) One popular answer these days is that I should, to some extent, defer to my friends. If I look at a batch of reasons and conclude p, and my equally talented friend reaches an incompatible conclusion q, I should revise my opinion so I’m now undecided between p and q. I should, in the preferred lingo, assign equal weight to my view as to theirs. This is despite the fact that I’ve looked at their reasons for concluding q and found them wanting. If I hadn’t, I would have already concluded q. The mere fact that a friend (from now on I’ll leave off the qualifier ‘equally talented and informed’, since all my friends satisfy that) reaches a contrary opinion should be reason to move me. Such a position is defended by Richard Feldman (2006a, 2006b), David Christensen (2007) and Adam Elga (forthcoming). This equal weight view, hereafter EW, is itself a philosophical position. And while some of my friends believe it, some of my friends do not. (Nor, I should add for your benefit, do I.) This raises an odd little dilemma. If EW is correct, then the fact that my friends disagree about it means that I shouldn’t be particularly confident that it is true, since EW says that I shouldn’t be too confident about any position on which my friends disagree. But, as I’ll argue below, to consistently implement EW, I have to be maximally confident that it is true. So to accept EW, I have to inconsistently both be very confident that it is true and not very confident that it is true. This seems like a problem, and a reason to not accept EW.
This paper is about three of the most prominent debates in modern epistemology. The conclusion is that three prima facie appealing positions in these debates cannot be held simultaneously. The first debate is scepticism vs anti-scepticism. My conclusions apply to most kinds of debates between sceptics and their opponents, but I will focus on the inductive sceptic, who claims we cannot come to know what will happen in the future by induction. This is a fairly weak kind of scepticism, and I suspect many philosophers who are generally anti-sceptical are attracted by this kind of scepticism. Still, even this kind of scepticism is quite unintuitive. I’m pretty sure I know (1) on the basis of induction. (1) It will snow in Ithaca next winter. Although I am taking a very strong version of anti-scepticism to be intuitively true here, the points I make will generalise to most other versions of scepticism. (Focussing on the inductive sceptic avoids some potential complications that I will note as they arise.) The second debate is a version of rationalism vs empiricism. The kind of rationalist I have in mind accepts that some deeply contingent propositions can be known a priori, and the empiricist I have in mind denies this. Kripke showed that there are contingent propositions that can be known a priori. One example is “Water is the watery stuff of our acquaintance”. (‘Watery’ is David Chalmers’s nice term for the properties of water by which folk identify it.) All the examples Kripke gave are of propositions that are, to use Gareth Evans’s term, deeply necessary (Evans, 1979). It is a matter of controversy presently just how to analyse Evans’s concepts of deep necessity and contingency, but most of the controversies are over details that are not important right here. I’ll simply adopt Stephen Yablo’s recent suggestion: a proposition is deeply contingent if it could have turned out to be true, and could have turned out to be false (Yablo, 2002). Kripke did not provide examples of any deeply contingent propositions knowable a priori, though nothing he showed rules out their existence.
If we add as an extra premise that if the agent does know H, then it is possible for her to know E → H, we get the conclusion that the agent does not really know H. But even without that closure premise, or something like it, the conclusion seems quite dramatic. One possible response to the argument, floated by both Descartes and Hume, is to accept the conclusion and embrace scepticism. We cannot know anything that goes beyond our evidence, so we do not know very much at all. This is a remarkably sceptical conclusion, so we should resist it if at all possible. A more modern response, associated with externalists like John McDowell and Timothy Williamson, is to accept the conclusion but deny it is as sceptical as it first appears. The Humean argument, even if it works, only shows that our evidence and our knowledge are more closely linked than we might have thought. Perhaps that’s true because we have a lot of evidence, not because we have very little knowledge. There’s something right about this response, I think. We have more evidence than Descartes or Hume thought we had. But I think we still need the idea of ampliative knowledge. It stretches the concept of evidence to breaking point to suggest that all of our knowledge, including knowledge about the future, is part of our evidence. So the conclusion really is unacceptable. Or, at least, I think we should try to see what an epistemology that rejects the conclusion looks like.
John Burgess has recently argued that Timothy Williamson’s attempts to avoid the objection that his theory of vagueness is based on an untenable metaphysics of content are unsuccessful. Burgess’s arguments are important, and largely correct, but there is a mistake in the discussion of one of the key examples. In this note I provide some alternative examples and use them to repair the mistaken section of the argument.
Kaplan (1989) famously claimed that monsters--operators that shift the context--do not exist in English and "could not be added to it". Several recent theorists have pointed out a range of data that seem to refute Kaplan's claim, but others (most explicitly Stalnaker 2014) have offered a principled argument that monsters are impossible. This paper interprets and resolves the dispute. Contra appearances, this is no dry, technical matter: it cuts to the heart of a deep disagreement about the fundamental structure of a semantic theory. We argue that: (i) the interesting notion of a monster is not an operator that shifts some formal parameter, but rather an operator that shifts parameters that play a certain theoretical role; (ii) one cannot determine whether a given semantic theory allows monsters simply by looking at the formal semantics; (iii) theories which forbid shifting the formal "context" parameter are perfectly compatible with the existence of monsters (in the interesting sense). We explain and defend these claims by contrasting two kinds of semantic theory--Kaplan's (1989) and Lewis's (1980).
Pragmatic encroachment theories have a problem with evidence. On the one hand, the arguments that knowledge is interest-relative look like they will generalise to show that evidence too is interest-relative. On the other hand, our best story of how interests affect knowledge presupposes an interest-invariant notion of evidence. The aim of this paper is to sketch a theory of evidence that is interest-relative, but which allows that ‘best story’ to go through with minimal changes. The core idea is that the evidence someone has is just what evidence a radical interpreter says they have. And a radical interpreter is playing a kind of game with the person they are interpreting. The cases that pose problems for pragmatic encroachment theorists generate fascinating games between the interpreter and the interpretee. They are games with multiple equilibria. To resolve them we need to detour into the theory of equilibrium selection. I’ll argue that the theory we need is the theory of risk-dominant equilibria. That theory will tell us how the interpreter will play the game, which in turn will tell us what evidence the person has. The evidence will be interest-relative, because what the equilibrium of the game is will be interest-relative. But it will not undermine the story we tell about how interests usually affect knowledge.
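As background on the equilibrium-selection concept this abstract invokes (the game below is the standard textbook illustration, not an example from the paper): in a two-player Stag Hunt where mutual Stag pays each player 4, mutual Hare pays each 3, and a lone Stag hunter gets 0 against the Hare player's 3, the loss from unilaterally deviating from (Stag, Stag) is 4 - 3 = 1 per player, while the loss from unilaterally deviating from (Hare, Hare) is 3 - 0 = 3 per player. Since the product of deviation losses is larger for (Hare, Hare), with 3 × 3 = 9 versus 1 × 1 = 1, (Hare, Hare) is the risk-dominant equilibrium even though (Stag, Stag) is payoff dominant. Risk dominance thus selects the equilibrium that is safer against uncertainty about the other player's move, which is the kind of selection the abstract says the radical interpreter performs.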
“Love hurts”, as the saying goes, and a certain amount of pain and difficulty in intimate relationships is unavoidable. Sometimes it may even be beneficial, since adversity can lead to personal growth, self-discovery, and a range of other components of a life well-lived. But other times, love can be downright dangerous. It may bind a spouse to her domestic abuser, draw an unscrupulous adult toward sexual involvement with a child, put someone under the insidious spell of a cult leader, and even inspire jealousy-fueled homicide. How might these perilous devotions be diminished? The ancients thought that treatments such as phlebotomy, exercise, or bloodletting could “cure” an individual of love. But modern neuroscience and emerging developments in psychopharmacology open up a range of possible interventions that might actually work. These developments raise profound moral questions about the potential uses (and misuses) of such anti-love biotechnology. In this article, we describe a number of prospective love-diminishing interventions, and offer a preliminary ethical framework for dealing with them responsibly should they arise.
This paper presents a systematic approach for analyzing and explaining the nature of social groups. I argue against prominent views that attempt to unify all social groups or to divide them into simple typologies. Instead I argue that social groups are enormously diverse, but show how we can investigate their natures nonetheless. I analyze social groups from a bottom-up perspective, constructing profiles of the metaphysical features of groups of specific kinds. We can characterize any given kind of social group with four complementary profiles: its “construction” profile, its “extra essentials” profile, its “anchor” profile, and its “accident” profile. Together these provide a framework for understanding the nature of groups, help classify and categorize groups, and shed light on group agency.
There is a lot that we don’t know. That means that there are a lot of possibilities that are, epistemically speaking, open. For instance, we don’t know whether it rained in Seattle yesterday. So, for us at least, there is an epistemic possibility where it rained in Seattle yesterday, and one where it did not. It’s tempting to give a very simple analysis of epistemic possibility: • A possibility is an epistemic possibility if we do not know that it does not obtain. But this is problematic for a few reasons. One issue, one that we’ll come back to, concerns the first two words. The analysis appears to quantify over possibilities. But what are they? As we said, that will become a large issue pretty soon, so let’s set it aside for now. A more immediate problem is that it isn’t clear what it is to have de re attitudes towards possibilities, such that we know a particular possibility does or doesn’t obtain. Let’s try rephrasing our analysis so that it avoids this complication.
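In the notation of standard epistemic logic, offered here only as background (note that the analysis in the abstract quantifies over possibilities, not propositions): the simple analysis is the familiar duality that p is epistemically possible iff ¬K¬p, i.e. iff it is not known that p fails to obtain. The two problems the abstract then raises both concern the left-hand side of this biconditional: what the possibilities are, and how we can have de re attitudes toward them.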
Pharmaceuticals or other emerging technologies could be used to enhance (or diminish) feelings of lust, attraction, and attachment in adult romantic partnerships. While such interventions could conceivably be used to promote individual (and couple) well-being, their widespread development and/or adoption might lead to “medicalization” of human love and heartache—for some, a source of serious concern. In this essay, we argue that the “medicalization of love” need not necessarily be problematic, on balance, but could plausibly be expected to have either good or bad consequences depending upon how it unfolds. By anticipating some of the specific ways in which these technologies could yield unwanted outcomes, bioethicists and others can help direct the course of love’s “medicalization”—should it happen to occur—more toward the “good” side than the “bad”.
The traditional generality problem for process reliabilism concerns the difficulty in identifying each belief forming process with a particular kind of process. That identification is necessary since individual belief forming processes are typically of many kinds, and those kinds may vary in reliability. I raise a new kind of generality problem, one which turns on the difficulty of identifying beliefs with processes by which they were formed. This problem arises because individual beliefs may be the culmination of overlapping processes of distinct lengths, and these processes may differ in reliability. I illustrate the force of this problem with a discussion of recent work on the bootstrapping problem.