Sceptical theists (e.g., William Alston and Michael Bergmann) have claimed that considerations concerning human cognitive limitations are alone sufficient to undermine evidential arguments from evil. We argue that, if the considerations deployed by sceptical theists are sufficient to undermine evidential arguments from evil, then those considerations are also sufficient to undermine inferences that play a crucial role in ordinary moral reasoning. If cogent, our argument suffices to discredit sceptical theist responses to evidential arguments from evil.
A commonly held view is that a central aim of metaphysics is to give a fundamental account of reality which refers only to the fundamental entities. But a puzzle arises. It is at least a working hypothesis for those pursuing the aim that, first, there must be fundamental entities. But, second, it also seems possible that the world has no foundation, with each entity depending on others. These two claims are inconsistent with the widely held third claim that the fundamental just is the foundational. It is tempting to resolve the puzzle by rejecting the first or second claim, perhaps because it is obscure how the third claim might plausibly be challenged. But I develop a new analysis of fundamentality which challenges the third claim by allowing for an entity to be fundamental without being foundational. The analysis, roughly, is that an entity is fundamental just in case not all facts about it are grounded in facts about other entities. The possibility of fundamentality without foundations not only provides for a novel resolution to the puzzle, but has applications to some live debates: for example, it undermines Jonathan Schaffer's modal argument for priority monism.
The Oxford Handbook of Metaphysics offers the most authoritative and compelling guide to this diverse and fertile field of philosophy. Twenty-four of the world's most distinguished specialists provide brand-new essays about 'what there is': what kinds of things there are, and what relations hold among entities falling under various categories. They give the latest word on such topics as identity, modality, time, causation, persons and minds, freedom, and vagueness. The Handbook's unrivaled breadth and depth make it the definitive reference work for students and academics across the philosophical spectrum.
In this paper I argue that Michael Friedman's conception of the constitutive a priori faces two serious problems. These two problems show that the view collapses into a form of conventionalism.
The main question addressed in this paper is whether some false propositions can constitute evidence for the truth of other propositions. It is argued that there are good reasons to suspect that at least some false propositions can constitute evidence for the truth of certain other contingent propositions. The paper also introduces a novel condition concerning propositions that constitute evidence that explains a ubiquitous evidential practice, and it contains a defense of a particular condition concerning the possession of evidence. The core position adopted here is that false propositions that are approximately true reports of measurements can constitute evidence for the truth of other propositions. So, it will be argued that evidence is only quasi-factive in this very specific sense.
Paradoxes have played an important role both in philosophy and in mathematics, and paradox resolution is an important topic in both fields. Paradox resolution is deeply important because if such resolution cannot be achieved, we are threatened with the charge of debilitating irrationality. This is supposed to be the case for the following reason. Paradoxes consist of jointly contradictory sets of statements that are individually plausible or believable. These facts about paradoxes then give rise to a deeply troubling epistemic problem. Specifically, if one believes all of the constitutive propositions that make up a paradox, then one is apparently committed to belief in every proposition. This is the result of the principle of classical logic known as ex contradictione (sequitur) quodlibet, that anything and everything follows from a contradiction, and the plausible idea that belief is closed under logical or material implication (i.e. the epistemic closure principle). But, it is manifestly and profoundly irrational to believe every proposition, and so the presence of even one contradiction in one's doxa appears to result in what seems to be total irrationality. This problem is the problem of paradox-induced explosion. In this paper it will be argued that in many cases this problem can plausibly be avoided in a purely epistemic manner, without having either to resort to non-classical logics for belief (e.g. paraconsistent logics) or to the denial of the standard closure principle for beliefs. The manner in which this result can be achieved depends on drawing an important distinction between the propositional attitude of belief and the weaker attitude of acceptance such that paradox-constituting propositions are accepted but not believed.
Paradox-induced explosion is then avoided by noting that while belief may well be closed under material implication or even under logical implication, these sorts of weaker commitments are not subject to closure principles of those sorts. So, this possibility provides us with a less radical way to deal with the existence of paradoxes, and it preserves the idea that intelligent agents can actually entertain paradoxes.
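For illustration (this sketch is not part of the abstract itself), the standard classical derivation behind ex contradictione quodlibet, the principle that drives paradox-induced explosion, can be given in a few lines:

```latex
% A standard derivation of explosion: from p and not-p, any q follows.
\begin{align*}
&1.\ p \wedge \neg p && \text{the jointly believed paradox-constituting propositions} \\
&2.\ p               && \text{from 1, conjunction elimination} \\
&3.\ p \vee q        && \text{from 2, disjunction introduction ($q$ arbitrary)} \\
&4.\ \neg p          && \text{from 1, conjunction elimination} \\
&5.\ q               && \text{from 3 and 4, disjunctive syllogism}
\end{align*}
```

If belief is closed under such implications, believing the constituents of a paradox commits one to believing any arbitrary q; the acceptance/belief distinction blocks this by denying that acceptance is closed in this way.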
Following Nancy Cartwright and others, I suggest that most (if not all) theories incorporate, or depend on, one or more idealizing assumptions. I then argue that such theories ought to be regimented as counterfactuals, the antecedents of which are simplifying assumptions. If this account of the logical form of theories is granted, then a serious problem arises for Bayesians concerning the prior probabilities of theories that have counterfactual form. If no such probabilities can be assigned, then the posterior probabilities will be undefined, as the latter are defined in terms of the former. I argue here that the most plausible attempts to address the problem of probabilities of conditionals fail to help Bayesians, and, hence, that Bayesians are faced with a new problem. Insofar as these proposed solutions fail, I argue that Bayesians must give up Bayesianism or accept the counterintuitive view that no theories that incorporate any idealizations have ever really been confirmed to any extent whatsoever. Moreover, as it appears that the latter horn of this dilemma is highly implausible, we are left with the conclusion that Bayesianism should be rejected, at least as it stands.
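The dependence of posteriors on priors that generates the problem can be made vivid with Bayes' theorem itself (a standard formulation, not taken from the paper):

```latex
% Posterior confirmation of a theory T by evidence E:
P(T \mid E) = \frac{P(E \mid T)\, P(T)}{P(E)}
% If T has counterfactual form and no prior P(T) can be assigned,
% then the posterior P(T | E) is undefined as well.
```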
In a recent article, Peter Gärdenfors (1992) has suggested that the AGM (Alchourrón, Gärdenfors, and Makinson) theory of belief revision can be given an epistemic basis by interpreting the revision postulates of that theory in terms of a version of the coherence theory of justification. To accomplish this goal Gärdenfors suggests that the AGM revision postulates concerning the conservative nature of belief revision can be interpreted in terms of a concept of epistemic entrenchment and that there are good empirical reasons to adopt this view as opposed to some form of foundationalist account of the justification of our beliefs. In this paper I argue that Gärdenfors’ attempt to underwrite the AGM theory of belief revision by appealing to a form of coherentism is seriously inadequate for several reasons.
This paper challenges Williamson's "E = K" thesis on the basis of evidential practice. The main point is that most evidence is only approximately true and so cannot be known if knowledge is factive.
Defenders of doxastic voluntarism accept that we can voluntarily commit ourselves to propositions, including belief-contravening propositions. Thus, defenders of doxastic voluntarism allow that we can choose to believe propositions that are negatively implicated by our evidence. In this paper it is argued that the conjunction of epistemic deontology (ED) and doxastic voluntarism (DV) as it applies to ordinary cases of belief-contravening propositional commitments is incompatible with evidentialism. In this paper ED and DV will be assumed, and this negative result will be used to suggest that voluntary belief-contravening commitments are not themselves beliefs and that these sorts of commitments are not governed by evidentialism. So, the apparent incompatibility of the package of views noted above can be resolved without ceding evidentialism with respect to beliefs.
In this essay, I examine the use of the concept of privilege within the critical theoretical discourse on oppression and liberation. In order to fulfill the rhetorical aims of liberation, concepts for privilege must meet what I term the ‘boundary condition’, which demarcates the boundary between a privileged elite and the rest of society, and the ‘ignorance condition’, which establishes that the elite status and the advantages it confers are not publicly recognised or affirmed. I argue that the dominant use of the concept of privilege cannot fulfill these conditions. As a result, while I do not advocate for the complete abandonment of the rhetoric of privilege, I conclude that it obscures as much as it illuminates, and that the critical theoretical discourse on liberation and oppression should be suspicious of its use.
Following the standard practice in sociology, cultural anthropology and history, sociologists, historians of science and some philosophers of science define scientific communities as groups with shared beliefs, values and practices. In this paper it is argued that in real cases the beliefs of the members of such communities often vary significantly in important ways. This has rather dire implications for the convergence defense against the charge of the excessive subjectivity of subjective Bayesianism because that defense requires that communities of Bayesian inquirers share a significant set of modal beliefs. The important implication is then that given the actual variation in modal beliefs across individuals, either Bayesians cannot claim that actual theories have been objectively confirmed or they must accept that such theories have been confirmed relative only to epistemically insignificant communities.
In this paper we argue that dissociative identity disorder (DID) is best interpreted as a causal model of a (possible) post-traumatic psychological process, as a mechanical model of an abnormal psychological condition. From this perspective we examine and criticize the evidential status of DID, and we demonstrate that there is really no good reason to believe that anyone has ever suffered from DID so understood. This is so because the proponents of DID violate basic methodological principles of good causal modeling. "When every ounce of your concentration is fixed upon blasting a winged pig out of the sky, you do not question its species' ontological status." (James Morrow, City of Truth, 1990).
Two art exhibitions, “Training Humans” and “Making Faces,” and the accompanying essay “Excavating AI: The politics of images in machine learning training sets” by Kate Crawford and Trevor Paglen, are making substantial impact on discourse taking place in the social and mass media networks, and some scholarly circles. Critical scrutiny reveals, however, a self-contradictory stance regarding informed consent for the use of facial images, as well as serious flaws in their critique of ML training sets. Our analysis underlines the non-negotiability of informed consent when using human data in artistic and other contexts, and clarifies issues relating to the description of ML training sets.
This paper shows that any view of future contingent claims that treats such claims as having indeterminate truth values or as simply being false implies probabilistic irrationality. This is because such views of the future imply violations of reflection, special reflection and conditionalization.
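For reference, the probabilistic norms said to be violated can be stated as follows (a standard textbook formulation, not drawn from the paper itself):

```latex
% Conditionalization: upon learning evidence E with certainty,
P_{\mathrm{new}}(A) = P_{\mathrm{old}}(A \mid E)
% Reflection: current credence defers to anticipated future credence,
P_{t}\bigl(A \mid P_{t'}(A) = x\bigr) = x \quad \text{for } t' > t
```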
The development of possible worlds semantics (PWS) for modal claims has led to a more general application of that theory as a complete semantics for various formal and natural languages, and this view is widely held to be an adequate (philosophical) interpretation of the model theory for such languages. We argue here that this view generates a self-referential inconsistency that indicates either the falsity or the incompleteness of PWS.
In a recent revision (chapter 4 of Nowakowa and Nowak 2000) of an older article Leszek Nowak (1992) has attempted to rebut Niiniluoto’s 1990 critical suggestion that proponents of the Poznań idealizational approach to the sciences have committed a rather elementary logical error in the formal machinery that they advocate for use in the analysis of scientific methodology. In this paper I criticize Nowak’s responses to Niiniluoto’s suggestion, and, subsequently, work out some of the consequences of that criticism for understanding the role that idealization plays in scientific methodology.
In this paper it is argued that the conjunction of linguistic ersatzism, the ontologically deflationary view that possible worlds are maximal and consistent sets of sentences, and possible world semantics, the view that the meaning of a sentence is the set of possible worlds at which it is true, implies that no actual speaker can effectively use virtually any language to successfully communicate information. This result is based on complexity issues that relate to our finite computational ability to deal with large bodies of information and a strong, but well motivated, assumption about the cognitive accessibility of meanings of sentences ersatzers seem to be implicitly committed to. It follows that linguistic ersatzism, possible world semantics, or both must be rejected.
In this paper we respond to criticisms by Michael Bergmann and Michael Rea in their “In Defense of Sceptical Theism: A Reply to Almeida and Oppy,” Australasian Journal of Philosophy 83.
The testimonia concerning weight in early Greek atomism appear to contradict one another. Some reports assert that the atoms do have weight, while others outright deny weight as a property of the atoms. A common solution to this apparent contradiction divides the testimonia into two groups. The first group describes the atoms within a κόσμος, where they have weight; the second group describes the atoms outside of a κόσμος, where they are weightless. A key testimonium for proponents of this solution is Aëtius 1.3.18. It apparently denies weight as a property of the atoms, and supposedly describes the atoms when they are outside of a κόσμος. I argue against this interpretive solution by demonstrating, first, that Aëtius 1.3.18 does not deny that weight is a property of the atoms. Second, I argue that the report does not describe the atoms when they are outside of a κόσμος. Although these are largely negative conclusions, I contend that we are not left without a solution to the present interpretive difficulty. Once our testimonia concerning weight in early Greek atomism are examined thoroughly, it is clear that there is no conflict among them.
Nick Trakakis and Yujin Nagasawa criticise the argument in Almeida and Oppy. According to Trakakis and Nagasawa, we are mistaken in our claim that the sceptical theist response to evidential arguments from evil is unacceptable because it would undermine ordinary moral reasoning. In their view, there is no good reason to think that sceptical theism leads to an objectionable form of moral scepticism. We disagree. In this paper, we explain why we think that the argument of Nagasawa and Trakakis fails to overthrow our objection to sceptical theism.
In this paper I argue that Tyler Burge's non-reductive view of testimonial knowledge cannot adequately discriminate between fallacious ad verecundiam appeals to expert testimony and legitimate appeals to authority.
In this paper I argue that the best explanation of expertise about taste is that such alleged experts are simply more eloquent in describing the taste experiences that they have than are ordinary tasters.
In a series of influential articles, George Bealer argues for the autonomy of philosophical knowledge on the basis that philosophically known truths must be necessary truths. The main point of his argument is that the truths investigated by the sciences are contingent truths to be discovered a posteriori by observation, while the truths of philosophy are necessary truths to be discovered a priori by intuition. The project of assimilating philosophy to the sciences is supposed to be rendered illegitimate by the more or less sharp distinction in these characteristic methods and its modal basis. In this article Bealer's particular way of drawing the distinction between philosophy and science is challenged in a novel manner, and thereby philosophical naturalism is further defended.
In his 1993 article George Bealer offers three separate arguments that are directed against the internal coherence of empiricism, specifically against Quine’s version of empiricism. One of these arguments is the starting points argument (SPA) and it is supposed to show that Quinean empiricism is incoherent. We argue here that this argument is deeply flawed, and we demonstrate how a Quinean may successfully defend his views against Bealer’s SPA. Our defense of Quinean empiricism against the SPA depends on showing (1) that Bealer is, in an important sense, a foundationalist, and (2) that Quine is, in an important sense, a coherentist. Having established these two contentions we show that Bealer’s SPA begs the question against Quinean empiricists.
It is a commonplace belief that many beliefs, e.g. religious convictions, are a purely private matter, and this is meant in some way to serve as a defense against certain forms of criticism. In this paper it is argued that this thesis is false, and that belief is really often a public matter. This argument, the publicity of belief argument, depends on one of the most compelling and central theses of Peircean pragmatism. This crucial thesis is that bona fide belief cannot be separated from action. It is then also suggested that we should accept a form of W. K. Clifford's evidentialism. When these theses are jointly accepted in conjunction with the basic principle of ethics that it is prima facie wrong to act in a way that may subject others to serious but unnecessary and avoidable harm, it follows that many beliefs are morally wrong.
Some contemporary theologically inclined epistemologists, the reformed epistemologists, have attempted to show that belief in God is rational by appealing directly to a special kind of experience. To strengthen the appeal to this particular, and admittedly peculiar, type of experience these epistemologists venture to draw a parallel between such experiences and normal perceptual experiences in order to show that, by parity of reasoning, if beliefs formed on the basis of the latter are taken to be justified and rational to hold, then beliefs formed on the basis of the former should also be regarded as justified and rational to hold. Such appeals to religious experience have been discussed and/or made by Robert Pargetter, Alvin Plantinga and William Alston, and they claim that they provide sufficient warrant for religious beliefs, specifically for the belief that God exists. The main critical issue that will be raised here concerns the coherence of this notion of religious experience itself and whether such appeals to religious experience really provide justification for belief in the existence of God.
In this paper significant challenges are raised with respect to the view that explanation essentially involves unification. These objections are raised specifically with respect to the well-known versions of unificationism developed and defended by Michael Friedman and Philip Kitcher. The objections involve the explanatory regress argument and the concepts of reduction and scientific understanding. Essentially, the contention made here is that these versions of unificationism wrongly assume that reduction secures understanding.
The social world contains institutions (nations, clubs), groups (races, genders), objects (talismans, borders), and more. This paper explores a puzzle about the essences of social items. There is widespread consensus against social essences because of problematic presuppositions often made about them. But it is argued that essence can be freed from these presuppositions and their problems. Even so, a puzzle still arises. In a Platonic spirit, essences in general seem “detached” from the world. In an Aristotelian spirit, social essences in particular seem “embedded” in the world. The puzzle is that these inclinations are individually plausible but jointly incompatible. The paper has four aims: to clarify and refine the puzzle; to explore the puzzle’s implications for essence in general and for social essences in particular; to illustrate the fruitfulness of the general distinction between “detached” and “embedded”; and to develop this distinction to sketch a novel solution to the puzzle.
This paper explores the prospects of combining two views. The first view is metaphysical rationalism (the principle of sufficient reason): all things have an explanation. The second view is metaphysical essentialism: there are real essences. The exploration is motivated by a conflict between the views. Metaphysical essentialism posits facts about essences. Metaphysical rationalism demands explanations for all facts. But facts about essences appear to resist explanation. I consider two solutions to the conflict. Exemption solutions attempt to exempt facts about essences from the demand for explanation. Explanation solutions attempt to explain facts about essences. I argue that exemption solutions are less promising than explanation solutions. I then consider how explanation solutions might be developed. I suggest that a “generative” approach is most promising. I tentatively conclude that the prospects for combining metaphysical rationalism and metaphysical essentialism turn on the viability of a generative approach. This sets the agenda for defending the combination as well as the more general project of explaining essences.
This paper presents a case for the claim that the infamous miners paradox is not a paradox. This contention is based on some important observations about the nature of ignorance with respect to both disjunctions and conditional obligations and their modal features. The gist of the argument is that given the uncertainty about the location of the miners in the story and the nature of obligations, the apparent obligation to block either mine shaft is cancelled.
This paper introduces a model for evidence denial that explains this behavior as a manifestation of rationality, and it is based on the contention that social values (measurable as utilities) often underwrite these sorts of responses. Moreover, it is contended that the value associated with group membership in particular can override epistemic reason when the expected utility of a belief or belief system is great. However, it appears that it is still possible for such unreasonable believers to reverse this sort of dogmatism and to change their beliefs in a way that is epistemically rational. The conjecture made here is that we should expect this to happen only when the expected utility of the beliefs in question dips below a threshold where the utility value of continued dogmatism and the associated group membership is no longer sufficient to motivate defusing the counter-evidence that tells against such epistemically irrational beliefs.
Recently Timothy Williamson (2007) has argued that characterizations of the standard (i.e. intuition-based) philosophical practice of philosophical analysis are misguided because of the erroneous manner in which this practice has been understood. In doing so he implies that experimental critiques of the reliability of intuition are based on this misunderstanding of philosophical methodology and so have little or no bearing on actual philosophical practice or results. His main point is that the orthodox understanding of philosophical methodology is incorrect in that it treats philosophical thought experiments in such a way that they can be “filled in” in various ways that undermine their use as counter-examples, and that, once we properly understand that methodology in light of the possibility of such filling in, intuition plays no substantial role in philosophical practice. In this paper Williamson’s claim that philosophical thought experiment cases can legitimately be filled in this way will be challenged, and it will be shown that the experimental critique of intuition-based methods raises a serious issue.
This paper introduces a new argument against Richard Foley’s threshold view of belief. His view is based on the Lockean Thesis (LT) and the Rational Threshold Thesis (RTT). The argument introduced here shows that the views derived from the LT and the RTT violate the safety condition on knowledge in a way that threatens the LT and/or the RTT.
This paper has three interdependent aims. The first is to make Reichenbach’s views on induction and probabilities clearer, especially as they pertain to his pragmatic justification of induction. The second aim is to show how his view of pragmatic justification arises out of his commitment to extensional empiricism and moots the possibility of a non-pragmatic justification of induction. Finally, and most importantly, a formal decision-theoretic account of Reichenbach’s pragmatic justification is offered in terms of both the minimax principle and the dominance principle.
In this chapter we consider three philosophical perspectives (including those of Stalnaker and Lewis) on the question of whether and how the principle of conditional excluded middle should figure in the logic and semantics of counterfactuals. We articulate and defend a third view that is patterned after belief revision theories offered in other areas of logic and philosophy. Unlike Lewis’ view, the belief revision perspective does not reject conditional excluded middle, and unlike Stalnaker’s, it does not embrace supervaluationism. We adduce both theoretical and empirical considerations to argue that the belief revision perspective should be preferred to its alternatives. The empirical considerations are drawn from the results of four empirical studies (which we report below) of non-experts’ judgments about counterfactuals and conditional excluded middle.
Given the sheer vastness of the totality of contemporary human knowledge and our individual epistemic finitude it is commonplace for those of us who lack knowledge with respect to some proposition(s) to appeal to experts (those who do have knowledge with respect to that proposition(s)) as an epistemic resource. Of course, much ink has been spilled on this issue and so concern here will be very narrowly focused on testimony in the context of epistemological views that incorporate evidentialism and internalism, and which are either reductivist or non-reductivist in nature. Also, as the main question about testimony addressed here is whether or not testimony can provide any basic justification at all, attention will be narrowly focused on the simple case where one is presented with testimony that something is the case from only one source and on one occasion. It turns out that there are some seriously odd epistemic features of such appeals to expertise that arise both for those who intend to accept internalism, evidentialism and reductivism about justification by testimony and for those who intend to accept internalism, evidentialism and non-reductivism about justification by testimony.
Imre Lakatos' views on the philosophy of mathematics are important and they have often been underappreciated. The most obvious lacuna in this respect is the lack of detailed discussion and analysis of his 1976a paper and its implications for the methodology of mathematics, particularly its implications with respect to argumentation and the matter of how truths are established in mathematics. The most important themes that run through his work on the philosophy of mathematics and which culminate in the 1976a paper are (1) the (quasi-)empirical character of mathematics and (2) the rejection of axiomatic deductivism as the basis of mathematical knowledge. In this paper Lakatos' later views on the quasi-empirical nature of mathematical theories and methodology are examined and specific attention is paid to what this view implies about the nature of mathematical argumentation and its relation to the empirical sciences.
In this paper it is shown that Lewis' MWD (might/would duality) and imaging principles lead to wildly implausible probability assignments for would counterfactuals.
In this paper the strategy for the eliminative reduction of the alethic modalities suggested by John Venn is outlined and it is shown to anticipate certain related contemporary empiricistic and nominalistic projects. Venn attempted to reduce the alethic modalities to probabilities, and thus suggested a promising solution to the nagging issue of the inclusion of modal statements in empiricistic philosophical systems. However, despite the promise that this suggestion held for laying the ‘ghost of modality’ to rest, this general approach, tempered modal eliminativism, is shown to be inadequate for that task.
It is an under-appreciated fact that Quine's rejection of the analytic/synthetic distinction, when coupled with some other plausible and related views, implies that there are serious difficulties in demarcating empirical theories from pure mathematical theories within the Quinean framework. This is a serious problem because there seems to be a principled difference between the two disciplines that apparently cannot be captured in the orthodox Quinean framework. For the purpose of simplicity let us call this Quine's problem of demarcation. In this paper this problem will be articulated and it will be shown that the typical sorts of responses to this problem are all unworkable within the Quinean framework. It will then be shown that the lack of resources to solve this problem within the Quinean framework implies that Quine’s version of the indispensability argument cannot get off the ground, for it presupposes the possibility of making such a distinction.
Hans Reichenbach’s pragmatic treatment of the problem of induction in his later works on inductive inference was, and still is, of great interest. However, it has been dismissed as a pseudo-solution and it has been regarded as problematically obscure. This is, in large part, due to the difficulty in understanding exactly what Reichenbach’s solution is supposed to amount to, especially as it appears to offer no response to the inductive skeptic. For entirely different reasons, the significance of Bertrand Russell’s classic attempt to solve Hume’s problem is also both obscure and controversial. Russell accepted that Hume’s reasoning about induction was basically correct, but he argued that given the centrality of induction in our cognitive endeavors something must be wrong with Hume’s basic assumptions. What Russell effectively identified as Hume’s (and Reichenbach’s) failure was the commitment to a purely extensional empiricism. So, Russell’s solution to the problem of induction was to concede extensional empiricism and to accept that induction is grounded by accepting both a robust essentialism and a form of rationalism that allowed for a priori knowledge of universals. However, neither of those doctrines is without its critics. On the one hand, Reichenbach’s solution faces the charges of obscurity and of offering no response to the inductive skeptic. On the other hand, Russell’s solution looks to be objectionably ad hoc absent some non-controversial and independent argument that the universals that are necessary to ground the uniformity of nature actually exist and are knowable. This particular charge is especially likely to arise from those inclined towards purely extensional forms of empiricism. In this paper the significance of Reichenbach’s solution to the problem of induction will be made clearer via the comparison of these two historically important views about the problem of induction.
The modest but important contention that will be made here is that the comparison of Reichenbach’s and Russell’s solutions calls attention to the opposition between extensional and intensional metaphysical presuppositions in the context of attempts to solve the problem of induction. It will be shown that, in effect, what Reichenbach does is to establish an important epistemic limitation of extensional empiricism. So, it will be argued here that there is nothing really obscure about Reichenbach’s thoughts on induction at all. He was simply working out the limits of extensional empiricism with respect to inductive inference, in opposition to the sort of metaphysics favored by Russell and like-minded thinkers.
The main examples of pragmatic encroachment presented by Jason Stanley involve the idea that knowledge ascription occurs more readily in cases where stakes are low rather than high. This is the stakes hypothesis. In this paper an example is presented showing that in some cases knowledge ascription is more readily appropriate where stakes are high rather than low.
Some recent work by philosophers of mathematics has been aimed at showing that our knowledge of the existence of at least some mathematical objects and/or sets can be epistemically grounded by appealing to perceptual experience. The sensory capacity that they refer to in doing so is the ability to perceive numbers, mathematical properties and/or sets. The chief defense of this view as it applies to the perception of sets is found in Penelope Maddy’s Realism in Mathematics, but a number of other philosophers have made similar, if simpler, appeals of this sort. For example, Jaegwon Kim, John Bigelow, and John Bigelow and Robert Pargetter have all defended such views. The main critical issue that will be raised here concerns the coherence of the notions of set perception and mathematical perception, and whether appeals to such perceptual faculties can really provide any justification for or explanation of belief in the existence of sets, mathematical properties and/or numbers.
This paper contains an argument to the effect that possible worlds semantics renders semantic knowledge impossible, no matter what ontological interpretation is given to possible worlds. The essential contention is that semantic knowledge grounded in possible worlds semantics is unsafe, and this is shown by a parallel with the preface paradox.