The 'Why ain'cha rich?' argument for one-boxing in Newcomb's problem allegedly vindicates evidential decision theory and undermines causal decision theory. But there is a good response to the argument on behalf of causal decision theory. I develop this response. Then I pose a new problem and use it to give a new 'Why ain'cha rich?' argument. Unlike the old argument, the new argument targets evidential decision theory. And unlike the old argument, the new argument is sound.
Newcomb’s problem is a decision puzzle whose difficulty and interest stem from the fact that the possible outcomes are probabilistically dependent on, yet causally independent of, the agent’s options. The problem is named for its inventor, the physicist William Newcomb, but first appeared in print in a 1969 paper by Robert Nozick [12]. Closely related to, though less well-known than, the Prisoners’ Dilemma, it has been the subject of intense debate in the philosophical literature. After three decades, the issues remain unresolved. Newcomb’s problem is of genuine importance because it poses a challenge to the theoretical adequacy of orthodox Bayesian decision theory. It has led both to the development of causal decision theory and to efforts aimed at defending the adequacy of the orthodox theory.
I consider a familiar argument for two-boxing in Newcomb's Problem and find it defective because it involves a type of divergence from standard Bayesian reasoning, which, though sometimes justified, conflicts with the stipulations of the Newcomb scenario. In an appendix, I also find fault with a different argument for two-boxing that has been presented by Graham Priest.
Nicholas Rescher claims that rational decision theory “may leave us in the lurch”, because there are two apparently acceptable ways of applying “the standard machinery of expected-value analysis” to his Dr. Psycho paradox which recommend contradictory actions. He detects a similar contradiction in Newcomb’s problem. We consider his claims from the point of view of both Bayesian decision theory and causal decision theory. In Dr. Psycho and in Newcomb’s Problem, Rescher has used premisses about probabilities which he assumes to be independent. From the former point of view, we show that the probability premisses are not independent but inconsistent, and their inconsistency is provable within probability theory alone. From the latter point of view, we show that their consistency can be saved, but then the contradictory recommendations evaporate. Consequently, whether one subscribes to evidential or causal decision theory, rational decision theory is not in any way vitiated by Rescher’s arguments.
The standard formulation of Newcomb's problem compares evidential and causal conceptions of expected utility, with those maximizing evidential expected utility tending to end up far richer. Thus, in a world in which agents face Newcomb problems, the evidential decision theorist might ask the causal decision theorist: “if you're so smart, why ain’cha rich?” Ultimately, however, the expected riches of evidential decision theorists in Newcomb problems do not vindicate their theory, because their success does not generalize. Consider a theory that allows the agents who employ it to end up rich in worlds containing Newcomb problems and continues to outperform in other cases. This type of theory, which I call a “success-first” decision theory, is motivated by the desire to draw a tighter connection between rationality and success, rather than to support any particular account of expected utility. The primary aim of this paper is to provide a comprehensive justification of success-first decision theories as accounts of rational decision. I locate this justification in an experimental approach to decision theory supported by the aims of methodological naturalism.
The dispute in philosophical decision theory between causalists and evidentialists remains unsettled. Many are attracted to the causal view’s endorsement of a species of dominance reasoning, and to the intuitive verdicts it gets on a range of cases with the structure of the infamous Newcomb’s Problem. But it also faces a rising wave of purported counterexamples and theoretical challenges. In this paper I will describe a novel decision theory which saves what is appealing about the causal view while avoiding its most worrying objections, and which promises to generalize to solve a set of related problems in other normative domains.
Causalists and Evidentialists can agree about the right course of action in an (apparent) Newcomb problem, if the causal facts are not as they initially seem. If declining $1,000 causes the Predictor to have placed $1m in the opaque box, CDT agrees with EDT that one-boxing is rational. This creates a difficulty for Causalists. We explain the problem with reference to Dummett's work on backward causation and Lewis's on chance and crystal balls. We show that the possibility that the causal facts might be properly judged to be non-standard in Newcomb problems leads to a dilemma for Causalism. One horn embraces a subjectivist understanding of causation, in a sense analogous to Lewis's own subjectivist conception of objective chance. In this case the analogy with chance reveals a terminological choice point, such that either (i) CDT is completely reconciled with EDT, or (ii) EDT takes precedence in the cases in which the two theories give different recommendations. The other horn of the dilemma rejects subjectivism, but now the analogy with chance suggests that it is simply mysterious why causation so construed should constrain rational action.
The best-known argument for Evidential Decision Theory (EDT) is the ‘Why ain’cha rich?’ challenge to rival Causal Decision Theory (CDT). The basis for this challenge is that in Newcomb-like situations, acts that conform to EDT may be known in advance to have a better return than acts that conform to CDT. Frank Arntzenius has recently proposed an ingenious counterargument, based on an example in which, he claims, it is predictable in advance that acts that conform to EDT will do less well than acts that conform to CDT. We raise two objections to Arntzenius’s example. We argue, first, that the example is subtly incoherent, in a way that undermines its effectiveness against EDT; and, second, that the example relies on calculating the average return over an inappropriate population of acts.
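The arithmetic behind the 'Why ain'cha rich?' challenge can be sketched numerically. The following Python snippet is purely illustrative and not drawn from any of the papers above; it assumes the usual $1,000,000 opaque-box and $1,000 transparent-box payoffs and a hypothetical 99% predictor accuracy:

```python
# Illustrative sketch: average returns in Newcomb's problem under
# assumed payoffs and an assumed predictor accuracy of 99%.
ACCURACY = 0.99                      # hypothetical predictor reliability
MILLION, THOUSAND = 1_000_000, 1_000

def average_return(one_boxes: bool) -> float:
    """Expected payout for an agent whom the predictor anticipates
    correctly with probability ACCURACY."""
    if one_boxes:
        # Predictor likely foresaw one-boxing and filled the opaque box.
        return ACCURACY * MILLION + (1 - ACCURACY) * 0
    # Predictor likely foresaw two-boxing and left the opaque box empty.
    return ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

edt_return = average_return(one_boxes=True)    # EDT recommends one-boxing
cdt_return = average_return(one_boxes=False)   # CDT recommends two-boxing
assert edt_return > cdt_return                 # hence "why ain'cha rich?"
```

On these assumed figures the one-boxers' average return far exceeds the two-boxers', which is the observation the challenge rests on; the debate above concerns what, if anything, this shows about rationality.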
I formulate a principle of preference, which I call the Guaranteed Principle. I argue that the preferences of rational agents satisfy the Guaranteed Principle, that the preferences of agents who embody causal decision theory do not, and hence that causal decision theory is false.
Consequentialists often assume rational monism: the thesis that options are always made rationally permissible by the maximization of the selfsame quantity. This essay argues that consequentialists should reject rational monism and instead accept rational pluralism: the thesis that, on different occasions, options are made rationally permissible by the maximization of different quantities. The essay then develops a systematic form of rational pluralism which, unlike its rivals, is capable of handling both the Newcomb problems that challenge evidential decision theory and the unstable problems that challenge causal decision theory.
There is a difference between the conditions in which one can felicitously assert a ‘must’-claim versus those in which one can use the corresponding non-modal claim. But it is difficult to pin down just what this difference amounts to. And it is even harder to account for this difference, since assertions of 'Must ϕ' and assertions of ϕ alone seem to have the same basic goal: namely, coming to agreement that [[ϕ]] is true. In this paper I take on this puzzle, known as Karttunen’s Problem. I begin by arguing that a ‘must’-claim is felicitous only if there is a shared argument for its prejacent. I then argue that this generalization, which I call Support, can explain the more familiar generalization that ‘must’-claims are felicitous only if the speaker’s evidence for them is in some sense indirect. Finally, I sketch a pragmatic derivation of Support.
A question that has been largely overlooked by philosophers of religion is how God would be able to effect a rational choice between two worlds of unsurpassable goodness. To answer this question, I draw a parallel with the paradigm cases of indifferent choice, including Buridan's ass, and argue that such cases can be satisfactorily resolved provided that the protagonists employ what Otto Neurath calls an ‘auxiliary motive’. I supply rational grounds for the employment of such a motive, and then argue against the views of Leibniz and Nicholas Rescher to show that this solution would also work for God.
Two plausible claims seem to be inconsistent with each other. One is the idea that if one reasonably believes that one ought to φ, then indeed, on pain of acting irrationally, one ought to φ. The other is the view that we are fallible with respect to our beliefs about what we ought to do. Ewing’s Problem is how to react to this apparent inconsistency. I reject two easy ways out. One is Ewing’s own solution to his problem, which is to introduce two different notions of ought. The other is the view that Ewing’s Problem rests on a simple confusion regarding the scope of the ought-operator. Then, I discuss two hard ways out, which I label objectivism and subjectivism, and for which G.E. Moore and Bishop Butler are introduced as historical witnesses. These are hard ways out because both of these views have strong counterintuitive consequences. After explaining why Ewing’s Problem is so difficult, I show that there is conceptual room in-between Moore and Butler, but I remain sceptical whether Ewing’s Problem is solvable within a realist framework of normative facts.
Andy Egan's Smoking Lesion and Psycho Button cases are supposed to be counterexamples to Causal Decision Theory. This paper argues that they are not: more precisely, it argues that if CDT makes the right call in Newcomb's problem then it makes the right call in Egan cases too.
Suppose that you have to take a test tomorrow but you do not want to study. Unfortunately you should study, since you care about passing and you expect to pass only if you study. Is there anything you can do to make it the case that you should not study? Is there any way for you to ‘rationalize’ slacking off? I suggest that such rationalization is impossible. Then I show that if evidential decision theory is true, rationalization is not only possible but sometimes advisable.
Lucy Allais seeks to provide a reading of the Transcendental Deduction of the Categories which is compatible with a nonconceptualist account of Kant’s theory of intuition. According to her interpretation, the aim of the Deduction is to show that a priori concept application is required for empirical concept application. I argue that once we distinguish the application of the categories from the instantiation of the categories, we see that Allais’s reconstruction of the Deduction cannot provide an answer to Hume’s problem about our entitlement to use a priori concepts when thinking about the objects of empirical intuition. If the Deduction is to provide a response to Hume, Allais’s interpretation must be rejected.
Povinelli’s Problem is a well-known methodological problem confronting those researching nonhuman primate cognition. In this paper I add a new wrinkle to this problem. The wrinkle concerns introspection, i.e., the ability to detect one’s own mental states. I argue that introspection either creates a new obstacle to solving Povinelli’s Problem, or creates a slightly different, but closely related, problem. I apply these arguments to Robert Lurz and Carla Krachun’s (Review of Philosophy and Psychology 2: 449–481, 2011) recent attempt at solving Povinelli’s Problem.
Andy Egan has presented a dilemma for decision theory. As is well known, Newcomb cases appear to undermine the case for evidential decision theory. However, Egan has come up with a new scenario which poses difficulties for causal decision theory. I offer a simple solution to this dilemma in terms of a modified EDT. I propose an epistemological test: take some feature which is relevant to your evaluation of the scenarios under consideration, evidentially correlated with the actions under consideration, albeit causally independent of them. Hold this feature fixed as a hypothesis. The test shows that, in Newcomb cases, EDT would mislead the agent. Where the test shows EDT to be misleading, I propose to use fictive conditional credences in the EDT-formula under the constraint that they are set to equal values. I then discuss Huw Price’s defence of EDT as an alternative to my diagnosis. I argue that my solution also applies if one accepts the main premisses of Price’s argument. I close with applying my solution to Nozick’s original Newcomb problem.
The generality problem is widely considered to be a devastating objection to reliabilist theories of justification. My goal in this paper is to argue that a version of the generality problem applies to all plausible theories of justification. Assume that any plausible theory must allow for the possibility of reflective justification—S's belief, B, is justified on the basis of S's knowledge that she arrived at B as a result of a highly (but not perfectly) reliable way of reasoning, R. The generality problem applies to all cases of reflective justification: Given that B is the product of a process-token that is an instance of indefinitely many belief-forming process-types (or BFPTs), why is the reliability of R, rather than the reliability of one of the indefinitely many other BFPTs, relevant to B's justificatory status? This form of the generality problem is restricted because it applies only to cases of reflective justification. But unless it is solved, the generality problem haunts all plausible theories of justification, not just reliabilist ones.
In a recent Philosophy of Science article Gerhard Schurz proposes meta-inductivistic prediction strategies as a new approach to Hume's problem. This comment examines the limitations of Schurz's approach. It can be proven that the meta-inductivist approach no longer works if the meta-inductivists have to face an infinite number of alternative predictors. With this limitation it remains doubtful whether the meta-inductivist can provide a full solution to the problem of induction.
This paper reports (in section 1 “Introduction”) some quotes from Nelson Goodman which clarify that, contrary to a common misunderstanding, Goodman always denied that “grue” requires temporal information and “green” does not require temporal information; and, more generally, that Goodman always denied that grue-like predicates require additional information compared to what green-like predicates require. One of the quotations is the following, taken from the first page of the Foreword to chapter 8 “Induction” of Goodman’s book “Problems and Projects”: “Nevertheless, we may by now confidently conclude that no general distinction between projectible and non-projectible predicates can be drawn on syntactic or even on semantic grounds. Attempts to distinguish projectible predicates as purely qualitative, or non-projectible ones as time-dependent, for example, have plainly failed”. Barker and Achinstein in their famous paper of 1960 tried to demonstrate that the grue-speaker (named Mr. Grue in their paper) needs temporal information to be able to determine whether an object is grue, but Goodman replied (in “Positionality and Pictures”, contained in his book “Problems and Projects”, chapter 8, section 6b) that they failed to prove that Mr. Grue needs temporal information to determine whether an object is grue. According to Goodman, since the predicates “blue” and “green” are interdefinable with the predicates “grue” and “bleen”, “if we can tell which objects are blue and which objects are green, we can tell which ones are grue and which ones are bleen” [pages 12-13 of “Reconceptions in Philosophy and Other Arts and Sciences”]. But this paper points out that another example of interdefinability is the one involving the predicate “gruet”, a predicate that applies to an object if the object either is green and examined before time t, or is non-green and not examined before time t.
The three predicates “green”, “gruet”, “examined before time t” are interdefinable: and even though the predicates “green” and “examined before time t” are interdefinable, being able to tell if an object is green does not imply being able to tell if an object is examined before time t. Interdefinability among three elements is also found, for example, among the logical connectives. Another example of interdefinability is the one involving a decidable predicate PD, which is interdefinable with an undecidable predicate PU: therefore even though we can tell whether an object is PD and whether an object is non-PD, we cannot tell whether an object is PU (since PU is an undecidable predicate) and whether an object is non-PU. Although the predicates PD and PU are interdefinable, the possibility to determine whether an object is PD does not imply the possibility to determine whether an object is PU (since PU is an undecidable predicate). Similarly, although the predicates “green” and “grue” are interdefinable, the possibility to determine whether an object is “green” even in the absence of temporal information does not imply the possibility to determine whether an object is “grue” even in the absence of temporal information. These and other examples about “grue” and “bleen” point out that even when two predicates are interdefinable, the possibility to apply a predicate P does not imply the possibility to apply a predicate interdefinable with P. And that the possibility to apply the predicate “green” without having temporal information does not imply the possibility to apply the predicate “grue” without having temporal information. Furthermore, knowing that an object is both green and grue implies temporal information: in fact, we know by definition that a grue object can only be: 1) either green (in case the object is examined before time t); 2) or blue (in case the object is not examined before time t).
Thus, knowing that an object is both grue and green, we know that we are faced with case 1, the case of a grue object that is green and examined before time t. Then the paper points out why the Goodman-Kripke paradox is a paradox about meaning that cannot have repercussions on induction. Finally the paper points out why Hume’s problem is a problem different from Goodman’s paradox and requires a specific treatment.
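The definitional structure at issue can be made concrete in code. This is my own minimal sketch, not the paper's: it encodes "grue" as "green iff examined before t, otherwise non-green" and shows that recovering "green" from "grue" (or vice versa) consumes exactly the temporal fact in question:

```python
# Hypothetical illustration of Goodman's interdefinable predicates.
def grue(green: bool, examined_before_t: bool) -> bool:
    # Grue: green if examined before t, otherwise non-green.
    return green if examined_before_t else not green

def green_from_grue(grue_val: bool, examined_before_t: bool) -> bool:
    # Inverting the definition recovers "green" from "grue" -- but only
    # given the temporal fact, which is exactly the information at issue.
    return grue_val if examined_before_t else not grue_val

# A green object examined before t is grue...
assert grue(green=True, examined_before_t=True) is True
# ...but the same colour, unexamined before t, is not grue.
assert grue(green=True, examined_before_t=False) is False
# Round trip: "green" is recoverable only given the temporal argument.
assert green_from_grue(grue(True, True), True) is True
```

The point the sketch illustrates is that both conversion functions take `examined_before_t` as an argument: interdefinability holds, yet applying either predicate to a new case requires the temporal information.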
If we had more powerful minds would we be puzzled by less - because we could make better theories - or by more - because we could ask more difficult questions? This paper focuses on clarifying the question, with an emphasis on comparisons between actual and possible species of thinker. A pre-publication version of the paper is available on my website at http://www.fernieroad.ca/a/PAPERS/papers.html .
On page 14 of "Reconceptions in Philosophy and Other Arts and Sciences" (section 4 of chapter 1) by Nelson Goodman and Catherine Z. Elgin, it is written: “Since ‘blue’ and ‘green’ are interdefinable with ‘grue’ and ‘bleen’, the question of which pair is basic and which pair derived is entirely a question of which pair we start with”. This paper points out that another example of interdefinability is the one involving the predicate “grueb”, a predicate that applies to an object if the object either is green and examined before time b, or is non-green and not examined before time b. The three predicates “green”, “grueb”, “examined before time b” are interdefinable. According to Goodman, since the predicates “blue” and “green” are interdefinable with the predicates “grue” and “bleen”, “if we can tell which objects are blue and which objects are green, we can tell which ones are grue and which ones are bleen” [pages 12-13 of “Reconceptions in Philosophy and Other Arts and Sciences”]. But, even though the predicates “green” and “examined before time b” are interdefinable, being able to tell if an object is green does not imply being able to tell if an object is examined before time b. Interdefinability among three elements is also found, for example, among the logical connectives. Another example of interdefinability is the one involving a decidable predicate PD, which is interdefinable with an undecidable predicate PU: therefore even though we can tell whether an object is PD and whether an object is non-PD, we cannot tell whether an object is PU (since PU is an undecidable predicate) and whether an object is non-PU. Although the predicates PD and PU are interdefinable, the possibility to determine whether an object is PD does not imply the possibility to determine whether an object is PU (since PU is an undecidable predicate).
Similarly, although the predicates “green” and “grue” are interdefinable, the possibility to determine whether an object is “green” even in the absence of temporal information does not imply the possibility to determine whether an object is “grue” even in the absence of temporal information. These and other examples about “grue” and “bleen” point out that even when two predicates are interdefinable, the possibility to apply a predicate P does not imply the possibility to apply a predicate interdefinable with P. And that the possibility to apply the predicate “green” without having temporal information does not imply the possibility to apply the predicate “grue” without having temporal information. According to Goodman, if it is possible to determine if an object is green without needing temporal information, then it is also possible to determine if an object is grue without needing temporal information. But knowing that an object is both green and grue implies temporal information: in fact, we know by definition that a grue object can only be: 1) either green (in case the object is examined before time t); 2) or blue (in case the object is not examined before time t). Thus, knowing that an object is both grue and green, we know that we are faced with case 1, the case of a grue object that is green and examined before time t. Then the paper points out why the Goodman-Kripke paradox is a paradox about meaning that cannot have repercussions on induction. Finally the paper points out why Hume’s problem is a problem different from Goodman’s paradox and requires a specific treatment.
Kant identifies what are in fact Free Riders as the most noxious species of polemicists. Kant thinks polemic reduces the stature and authority of reason to a method of squabbling that destabilizes social equilibrium and portends disintegration into the Hobbesian state of nature. In the first Critique, Kant proposes two textually related solutions to the Free Rider problem.
The most pressing worry for panpsychism is arguably the combination problem, the problem of intelligibly explaining how the experiences of microphysical entities combine to form the experiences of macrophysical entities such as ourselves. This chapter argues that the combination problem is similar in kind to other problems of mental combination that are problems for everyone: the problem of phenomenal unity, the problem of mental structure, and the problem of new quality spaces. The ubiquity of combination problems suggests the ignorance hypothesis, the hypothesis that we are ignorant of certain key facts about mental combination, which allows the panpsychist to avoid certain objections based on the combination problem.
This essay re-examines Kierkegaard's view of Socrates. I consider the problem that arises from Kierkegaard's appeal to Socrates as an exemplar for irony. The problem is that he also appears to think that, as an exemplar for irony, Socrates cannot be represented. And part of the problem is the paradox of self-reference that immediately arises from trying to represent x as unrepresentable. On the solution I propose, Kierkegaard does not hold that, as an exemplar for irony, Socrates is in no way representable. Rather, he holds that, as an exemplar for irony, Socrates cannot be represented in a purely disinterested way. I show how, in The Concept of Irony, Kierkegaard makes use of 'limiting cases' of representation in order to bring Socrates into view as one who defies purely disinterested representation. I also show how this approach to Socrates connects up with Kierkegaard's more general interest in the problem of ethical exemplarity, where the problem is how ethical exemplars can be given as such, that is, in such a way that purely disinterested contemplation is not the appropriate response to them.
Duncan Pritchard has, in the years following his (2005) defence of a safety-based account of knowledge in Epistemic Luck, abjured his (2005) view that knowledge can be analysed exclusively in terms of a modal safety condition. He has since (Pritchard in Synthese 158:277–297, 2007; J Philosophic Res 34:33–45, 2009a, 2010) opted for an account according to which two distinct conditions function with equal importance and weight within an analysis of knowledge: an anti-luck condition (safety) and an ability condition, the latter being a condition aimed at preserving what Pritchard now takes to be a fundamental insight about knowledge: that it arises from cognitive ability (Greco 2010; Sosa 2007, 2009). Pritchard calls his new view anti-luck virtue epistemology (ALVE). A key premise in Pritchard’s argument for ALVE is what I call the independence thesis; the thesis that satisfying neither the anti-luck condition nor the ability condition entails that the other is satisfied. Pritchard’s argument for the independence thesis relies crucially upon the case he makes for thinking that cognitive achievements are compatible with knowledge-undermining environmental luck—that is, the sort of luck widely thought to undermine knowledge in standard barn facade cases. In the first part of this paper, I outline the key steps in Pritchard’s argument for anti-luck virtue epistemology and highlight how it is that the compatibility of cognitive achievement and knowledge-undermining environmental luck is indispensable to the argument’s success. The second part of this paper aims to show that this compatibility premise crucial to Pritchard’s argument is incorrect.
Philosophers and cognitive scientists have worried that research on animal mind-reading faces a ‘logical problem’: the difficulty of experimentally determining whether animals represent mental states (e.g. seeing) or merely the observable evidence (e.g. line-of-gaze) for those mental states. The most impressive attempt to confront this problem has been mounted recently by Robert Lurz. However, Lurz' approach faces its own logical problem, revealing this challenge to be a special case of the more general problem of distal content. Moreover, participants in this debate do not agree on criteria for representation. As such, future debate should either abandon the representational idiom or confront underlying semantic disagreements.
According to Mathias Risse and Richard Zeckhauser, racial profiling can be justified in a society, such as the contemporary United States, where the legacy of slavery and segregation is found in lesser but, nonetheless, troubling forms of racial inequality. Racial profiling, Risse and Zeckhauser recognize, is often marked by police abuse and the harassment of racial minorities and by the disproportionate use of race in profiling. These, on their view, are unjustified. But, they contend, this does not mean that all forms of racial profiling are unjustified; nor, they claim, need one be indifferent to the harms of racism in order to justify racial profiling. In fact, one of the aims of their paper is to show that racial profiling, suitably understood, “is consistent with support for far-reaching measures to decrease racial inequities and inequality.” Hence, one of their most striking claims, in an original and provocative paper, is that one can endorse racial profiling without being in any way indifferent to the disadvantaged status of racial minorities. In an initial response to these claims, I argued that Risse and Zeckhauser tend to underestimate the harms of racial profiling. I suggested two main reasons why they did so. The first is that they tend to identify the more serious harms associated with profiling with background racism, and therefore to believe that these are not properly attributable to profiling itself. The second reason is that they ignore the ways in which background racism makes even relatively minor harms harder to bear and to justify than would otherwise be the case. Hence, I concluded, racial profiling cannot be a normal part of police practice in a society still struggling with racism, although under very special conditions and with special regulation and compensation in place, it might be justified as an extraordinary police measure. I want to stand by those claims.
However, Risse’s response to my arguments persuades me that I misinterpreted his earlier position in one significant respect. So I will start by explaining what interpretive mistake I believe that I made. I will then argue that despite Risse’s patient and careful response to my arguments, my initial concerns with his justification of profiling remain valid.
In this paper, I shall consider the challenge that Quine posed in 1947 to the advocates of quantified modal logic to provide an explanation, or interpretation, of modal notions that is intuitively clear, allows “quantifying in”, and does not presuppose mysterious intensional entities. The modal concepts that Quine and his contemporaries, e.g. Carnap and Ruth Barcan Marcus, were primarily concerned with in the 1940’s were the notions of (broadly) logical, or analytical, necessity and possibility, rather than the metaphysical modalities that have since become popular, largely due to the influence of Kripke. In the 1950’s modal logicians responded to Quine’s challenge by providing quantified modal logic with model-theoretic semantics of various types. In doing so they also, explicitly or implicitly, addressed Quine’s interpretation problem. Here I shall consider the approaches developed by Carnap in the late 1940’s, and by Kanger, Hintikka, Montague, and Kripke in the 1950’s, and discuss to what extent these approaches were successful in meeting Quine’s doubts about the intelligibility of quantified modal logic.
Speaks defends the view that propositions are properties: for example, the proposition that grass is green is the property being such that grass is green. We argue that there is no reason to prefer Speaks's theory to analogous but competing theories that identify propositions with, say, 2-adic relations. This style of argument has recently been deployed by many, including Moore and King, against the view that propositions are n-tuples, and by Caplan and Tillman against King's view that propositions are facts of a special sort. We offer our argument as an objection to the view that propositions are unsaturated relations.
This paper challenges Francis Hutcheson's and John Clarke of Hull's alleged demonstrations that William Wollaston's moral theory is inconsistent. It also presents a form of the inconsistency objection that fares better than theirs, namely, that of Thomas Bott (1688-1754). Ultimately, the paper shows that Wollaston's moral standard is not what some have thought it to be; that consequently, his philosophy withstands the best-known efforts to expose it as inconsistent; and further, that one of the least-known British moralists is more important than hitherto thought, in that he uncovers the inconsistency Clarke and Hutcheson try in vain to elicit.
I argue that Frege's so-called "concept 'horse' problem" is not one problem but many. When these different sub-problems are distinguished, some emerge as more tractable than others. I argue that, contrary to a widespread scholarly assumption originating with Peter Geach, there is scant evidence that Frege engaged with the general problem of the inexpressibility of logical category distinctions in writings available to Wittgenstein. In consequence, Geach is mistaken in his claim that in the Tractatus Wittgenstein simply accepts from Frege certain lessons about the inexpressibility of logical category distinctions and the say-show distinction. In truth, Wittgenstein drew his own morals about these matters, quite possibly as the result of reflecting on how the general problem of the inexpressibility of logical category distinctions arises in Frege's writings, but also, quite possibly, by discerning certain glimmerings of these doctrines in the writings of Russell.
According to Emma Borg, minimalism is (roughly) the view that natural language sentences have truth conditions, and that these truth conditions are fully determined by syntactic structure and lexical content. A principal motivation for her brand of minimalism is that it coheres well with the popular view that semantic competence is underpinned by the cognition of a minimal semantic theory. In this paper, I argue that the liar paradox presents a serious problem for this principal motivation. Two lines of response to the problem are discussed, and difficulties facing those responses are raised. I close by issuing a challenge: to construe the principal motivation for Borg's minimalism in such a way as to avoid the problem of paradox.
Andrew Cling presents a new version of the epistemic regress problem, and argues that intuitionist foundationalism, social contextualism, holistic coherentism, and infinitism fail to solve it. Cling’s discussion is quite instructive and deserving of careful consideration. But, I argue, it is not in all respects decisive: his dilemma argument against holistic coherentism fails.
• It would be a moral disgrace for God (if he existed) to allow the many evils in the world, in the same way it would be for a parent to allow a nursery to be infested with criminals who abused the children.
• There is a contradiction in asserting all three of the propositions: God is perfectly good; God is perfectly powerful; evil exists (since if God wanted to remove the evils and could, he would).
• The religious believer has no hope of getting away with excuses that evil is not as bad as it seems, or that it is all a result of free will, and so on.
Piper avoids mentioning the best solution so far put forward to the problem of evil. It is Leibniz’s theory that God does not create a better world because there isn’t one — that is, that (contrary to appearances) if one part of the world were improved, the ramifications would result in it being worse elsewhere, and worse overall. It is a “bump in the carpet” theory: push evil down here, and it pops up over there. Leibniz put it by saying this is the “Best of All Possible Worlds”. That phrase was a public relations disaster for his theory, suggesting as it does that everything is perfectly fine as it is. He does not mean that, but only that designing worlds is a lot harder than it looks, and determining the amount of evil in the best one is no easy matter. Though humour is hardly appropriate to the subject matter, the point of Leibniz’s idea is contained in the old joke, “An optimist is someone who thinks this is the best of all possible worlds, and a pessimist thinks...
According to Kant, the singular judgement ‘This rose is beautiful’ is, or may be, aesthetic, while the general judgement ‘Roses in general are beautiful’ is not. What, then, is the logical relation between the two judgements? I argue that there is none, and that one cannot allow there to be any if one agrees with Kant that the judgement ‘This rose is beautiful’ cannot be made on the basis of testimony. The appearance of a logical relation between the two judgements can, however, be explained in terms of what one does in making a judgement of taste. Finally, I describe an analogy between Kant's treatment of judgements of taste and J. L. Austin's treatment of explicit performative utterances, which I attribute to a deeper affinity between their respective projects.
I resolve the major challenge to an Expressivist theory of the meaning of normative discourse: the Frege–Geach Problem. Drawing on considerations from the semantics of directive language (e.g., imperatives), I argue that, although certain forms of Expressivism (like Gibbard’s) do run into at least one version of the Problem, it is reasonably clear that there is a version of Expressivism that does not.
A challenge to Kant’s less known duty of self-knowledge comes from his own firm view that it is impossible to know oneself. This paper resolves the problem by construing the duty of self-knowledge as involving the pursuit of knowledge of oneself as one appears in the empirical world. First, I argue that, although Kant places severe restrictions on the possibility of knowing oneself as one is, he admits the possibility of knowing oneself as one appears using methods from empirical anthropology. Second, I show that empirical knowledge of oneself is fairly reliable and is, in fact, considered morally significant from Kant’s moral anthropological perspective. Taking these points together, I conclude that Kant’s duty of self-knowledge exclusively entails the pursuit of empirical self-knowledge.
Heinrich Behmann (1891-1970) obtained his Habilitation under David Hilbert in Göttingen in 1921 with a thesis on the decision problem. In his thesis, he solved - independently of Löwenheim and Skolem's earlier work - the decision problem for monadic second-order logic in a framework that combined elements of the algebra of logic and the newer axiomatic approach to logic then being developed in Göttingen. In a talk given in 1921, he outlined this solution, but also presented important programmatic remarks on the significance of the decision problem and of decision procedures more generally. The text of this talk as well as a partial English translation are included.
David Braybrooke argues that meeting people’s needs ought to be the primary goal of social policy. But he then faces the problem of how to deal with the fact that our most pressing needs, needs to be kept alive with resource-draining medical technology, threaten to exhaust our resources for meeting all other needs. I consider several solutions to this problem, eventually suggesting that the need to be kept alive is no different in kind from needs to fulfill various projects, and that needs may have a structure similar to rights, with people’s legitimate needs serving as constraints on each other’s entitlements to resources. This affords a set of axioms constraining possible needs. Further, if, as Braybrooke thinks, needs are created by communities approving projects, so that the means to prosecute the projects then come to count as needs, then communities are obliged to approve only projects that are co-feasible given the world’s finite resources. The result is that it can be legitimate not to funnel resources towards endless life-prolongation projects.
Formula thinking is a kind of thinking strictly by rote in which the thinker never deviates from a set course. Craft thinking involves a rough approximation to a set course but allows for deviation. The arts involve craft thinking. Repairing a machine involves formula thinking. America has become almost completely dominated by formula thinking.
The main goal of this paper is to investigate what explanatory resources Robert Brandom’s distinction between acknowledged and consequential commitments affords in relation to the problem of logical omniscience. With this distinction, the importance of the doxastic perspective under consideration for the relationship between logic and norms of reasoning is emphasized, and it becomes possible to handle a number of problematic cases discussed in the literature without thereby incurring a commitment to revisionism about logic. One such case in particular is the preface paradox, which will receive an extensive treatment. As we shall see, the problem of logical omniscience arises not only within theories based on deductive logic, but also within the recent paradigm shift in psychology of reasoning. So dealing with this problem is important not only for philosophical purposes but also from a psychological perspective.