It has been argued that if the rigidity condition is satisfied, a rational agent operating with uncertain evidence should update her subjective probabilities by Jeffrey conditionalization (JC), or else a series of bets resulting in a sure loss could be made against her. We show, however, that even if the rigidity condition is satisfied, it is not always safe to update probability distributions by JC, because there exist sequences of non-misleading uncertain observations where it may be foreseen that an agent who updates her subjective probabilities by JC will end up nearly certain that a false hypothesis is true. We analyze the features of JC that lead to this problem, specify the conditions in which it arises, and respond to potential objections.
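The Jeffrey-conditionalization rule at issue here can be sketched in a few lines. The following Python fragment is a toy illustration (the worlds, partition, and numbers are invented for this sketch, not drawn from the paper): each cell of the evidence partition is rescaled to its new probability, while probability ratios inside a cell are held fixed, which is the rigidity condition the abstract refers to.

```python
def jeffrey_conditionalize(prior, partition, new_probs):
    """Jeffrey conditionalization over a finite partition.

    Each cell E_i is reweighted to its new probability q_i while
    probabilities within the cell stay rigid:
        P_new(w) = q_i * P(w) / P(E_i)  for each world w in E_i.
    """
    posterior = {}
    for cell, q in zip(partition, new_probs):
        cell_mass = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = q * prior[w] / cell_mass
    return posterior

# Illustrative example: four worlds, partitioned by whether the hat is red.
prior = {"red&dry": 0.3, "red&wet": 0.2, "blue&dry": 0.4, "blue&wet": 0.1}
partition = [["red&dry", "red&wet"], ["blue&dry", "blue&wet"]]

# Uncertain perception shifts credence in "red" from 0.5 to 0.8.
post = jeffrey_conditionalize(prior, partition, [0.8, 0.2])
# Within the "red" cell the 0.3 : 0.2 ratio is preserved (rigidity).
```

Note that rigidity is exactly what the abstract exploits: the within-cell ratios never move, no matter how many uncertain observations arrive.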
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
Seeing a red hat can (i) increase my credence in the hat is red, and (ii) introduce a negative dependence between that proposition and potential undermining defeaters such as the light is red. The rigidity of Jeffrey Conditionalization makes this awkward, as rigidity preserves independence. The picture is less awkward given ‘Holistic Conditionalization’, or so it is claimed. I defend Jeffrey Conditionalization’s consistency with underminable perceptual learning and its superiority to Holistic Conditionalization, arguing that the latter is merely a special case of the former, is itself rigid, and is committed to implausible accounts of perceptual confirmation and of undermining defeat.
How should a group with different opinions (but the same values) make decisions? In a Bayesian setting, the natural question is how to aggregate credences: how to use a single credence function to naturally represent a collection of different credence functions. An extension of the standard Dutch-book arguments that apply to individual decision-makers recommends that group credences should be updated by conditionalization. This imposes a constraint on what aggregation rules can be like. Taking conditionalization as a basic constraint, we gather lessons from the established work on credence aggregation, and extend this work with two new impossibility results. We then explore contrasting features of two kinds of rules that satisfy the constraints we articulate: one kind uses fixed prior credences, and the other uses geometric averaging, as opposed to arithmetic averaging. We also prove a new characterisation result for geometric averaging. Finally we consider applications to neighboring philosophical issues, including the epistemology of disagreement.
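The contrast between arithmetic and geometric averaging mentioned above can be made concrete. Below is a minimal Python sketch (a toy example of my own, not the authors' formal framework): geometric pooling, unlike arithmetic pooling, commutes with conditionalization, which is why it sits well with conditionalization as a constraint on group credences.

```python
import math

def conditionalize(c, event):
    """Bayesian conditionalization of a credence list on an event
    (a set of world indices): zero out worlds outside the event, renormalize."""
    z = sum(c[i] for i in event)
    return [c[i] / z if i in event else 0.0 for i in range(len(c))]

def arithmetic_pool(credences, weights):
    """Weighted linear average of credence functions over the same worlds."""
    n = len(credences[0])
    return [sum(wt * c[i] for wt, c in zip(weights, credences)) for i in range(n)]

def geometric_pool(credences, weights):
    """Weighted geometric average, renormalized to sum to 1."""
    n = len(credences[0])
    raw = [math.prod(c[i] ** wt for wt, c in zip(weights, credences)) for i in range(n)]
    z = sum(raw)
    return [r / z for r in raw]

# Two agents over three worlds; shared evidence rules out world 2.
p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
E = {0, 1}
weights = [0.5, 0.5]

# Pool then update, versus update then pool:
a = conditionalize(geometric_pool([p, q], weights), E)
b = geometric_pool([conditionalize(p, E), conditionalize(q, E)], weights)
# a and b coincide: geometric pooling commutes with conditionalization.
# The analogous pair computed with arithmetic_pool generally differs.
```

The commutation holds because conditionalizing multiplies each surviving world's credence by a constant, and geometric averaging absorbs constant factors into its renormalization.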
A handful of well-known arguments (the 'diachronic Dutch book arguments') rely upon theorems establishing that, in certain circumstances, you are immune from sure monetary loss (you are not 'diachronically Dutch bookable') if and only if you adopt the strategy of conditionalizing (or Jeffrey conditionalizing) on whatever evidence you happen to receive. These theorems require non-trivial assumptions about which evidence you might acquire---in the case of conditionalization, the assumption is that, if you might learn that e, then it is not the case that you might learn something else that is consistent with e. These assumptions may not be relaxed. When they are, not only will non-(Jeffrey) conditionalizers be immune from diachronic Dutch bookability, but (Jeffrey) conditionalizers will themselves be diachronically Dutch bookable. I argue: 1) that there are epistemic situations in which these assumptions are violated; 2) that this reveals a conflict between the premise that susceptibility to sure monetary loss is irrational, on the one hand, and the view that rational belief revision is a function of your prior beliefs and the acquired evidence alone, on the other; and 3) that this inconsistency demonstrates that diachronic Dutch book arguments for (Jeffrey) conditionalization are invalid.
Weisberg introduces a phenomenon he terms perceptual undermining. He argues that it poses a problem for Jeffrey conditionalization, and Bayesian epistemology in general. This is Weisberg’s paradox. Weisberg argues that perceptual undermining also poses a problem for ranking theory and for Dempster-Shafer theory. In this note I argue that perceptual undermining does not pose a problem for any of these theories: for true conditionalizers Weisberg’s paradox is a false alarm.
Dutch Book arguments have been presented for static belief systems and for belief change by conditionalization. An argument is given here that a rule for belief change which under certain conditions violates probability kinematics will leave the agent open to a Dutch Book.
Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
Weisberg ([2009]) provides an argument that neither conditionalization nor Jeffrey conditionalization is capable of accommodating the holist’s claim that beliefs acquired directly from experience can suffer undercutting defeat. I diagnose this failure as stemming from the fact that neither conditionalization nor Jeffrey conditionalization gives any advice about how to rationally respond to theory-dependent evidence, and I propose a novel updating procedure that does tell us how to respond to evidence like this. This holistic updating rule yields conditionalization as a special case in which our evidence is entirely theory independent. 1 Introduction; 2 Conditionalization; 3 Holism and Conditionalization; 4 A Holistic Update; 5 HCondi and Dutch Books; 6 Commutativity and Learning about Background Theories; 6.1 Commutativity; 6.2 Learning about Background Theories; 7 In Summation.
Our senses provide us with information about the world, but what exactly do they tell us? I argue that in order to optimally respond to sensory stimulations, an agent’s doxastic space may have an extra, “imaginary” dimension of possibility; perceptual experiences confer certainty on propositions in this dimension. To some extent, the resulting picture vindicates the old-fashioned empiricist idea that all empirical knowledge is based on a solid foundation of sense-datum propositions, but it avoids most of the problems traditionally associated with that idea. The proposal might also explain why experiences appear to have a non-physical phenomenal character, even if the world is entirely physical.
This paper offers a probabilistic treatment of the conditions for argument cogency as endorsed in informal logic: acceptability, relevance, and sufficiency. Treating a natural language argument as a reason-claim-complex, our analysis identifies content features of defeasible argument on which the RSA conditions depend, namely: change in the commitment to the reason, the reason’s sensitivity and selectivity to the claim, one’s prior commitment to the claim, and the contextually determined thresholds of acceptability for reasons and for claims. Results contrast with, and may indeed serve to correct, the informal understanding and applications of the RSA criteria concerning their conceptual dependence, their function as update-thresholds, and their status as obligatory rather than permissive norms, but also show how these formal and informal normative approaches can in fact align.
Crupi et al. propose a generalization of Bayesian confirmation theory that they claim to adequately deal with confirmation by uncertain evidence. Consider a series of points of time t0, …, ti, …, tn such that the agent’s subjective probability for an atomic proposition E changes from Pr0 at t0 to … to Pri at ti to … to Prn at tn. It is understood that the agent’s subjective probabilities change for E and no logically stronger proposition, and that the agent updates her subjective probabilities by Jeffrey conditionalization. For this specific scenario the authors propose to take the difference between Pr0 and Pri as the degree to which E confirms H for the agent at time ti, C0,i. This proposal is claimed to be adequate, because…
Our main aim in this paper is to discuss and criticise the core thesis of a position that has become known as phenomenal conservatism. According to this thesis, its seeming to one that p provides enough justification for a belief in p to be prima facie justified (a thesis we label Standard Phenomenal Conservatism). This thesis captures the special kind of epistemic import that seemings are claimed to have. To get clearer on this thesis, we embed it, first, in a probabilistic framework in which updating on new evidence happens by Bayesian conditionalization, and, second, in a framework in which updating happens by Jeffrey conditionalization. We spell out problems for both views, and then generalize some of these to non-probabilistic frameworks. The main theme of our discussion is that the epistemic import of a seeming (or experience) should depend on its content in a plethora of ways that phenomenal conservatism is insensitive to.
Van Fraassen's Judy Benjamin problem asks how one ought to update one's credence in A upon receiving evidence of the sort "A may or may not obtain, but B is k times likelier than C", where {A, B, C} is a partition. Van Fraassen's solution, in the limiting case of increasing k, recommends a posterior converging to P(A | A ∪ B), where P is one's prior probability function. Grove and Halpern, and more recently Douven and Romeijn, have argued that one ought to leave credence in A unchanged, i.e. fixed at P(A). We argue that while the former approach is superior, it brings about a Reflection violation due in part to neglect of a "regression to the mean" phenomenon, whereby when C is eliminated by random evidence that leaves A and B alive, the ratio P(A):P(B) ought to drift in the direction of 1:1.
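A small worked example may help fix ideas; the uniform prior below is illustrative and not from the paper. In the limit of large k, the evidence effectively eliminates C, and the two rival recommendations for the posterior in A come apart:

```python
# Illustrative uniform prior over the partition {A, B, C}.
P = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}

# Evidence: "B is k times likelier than C". As k grows without bound,
# C's share goes to zero. The two candidate limiting posteriors for A:
van_fraassen_limit = P["A"] / (P["A"] + P["B"])  # P(A | A ∪ B) = 1/2
unchanged = P["A"]                               # stay fixed at the prior, 1/3
```

On the uniform prior the disagreement is stark: the van Fraassen limit raises credence in A from 1/3 to 1/2, while the Grove-Halpern / Douven-Romeijn line keeps it at 1/3.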
Greaves and Wallace argue that conditionalization maximizes expected accuracy. In this paper I show that their result only applies to a restricted range of cases. I then show that the update procedure that maximizes expected accuracy in general is one in which, upon learning P, we conditionalize, not on P, but on the proposition that we learned P. After proving this result, I provide further generalizations and show that much of the accuracy-first epistemology program is committed to KK-like iteration principles and to the existence of a class of propositions that rational agents will be certain of if and only if they are true.
The applicability of Bayesian conditionalization in setting one’s posterior probability for a proposition, α, is limited to cases where the value of a corresponding prior probability, P_PRI(α | ∧E), is available, where ∧E represents one’s complete body of evidence. In order to extend probability updating to cases where the prior probabilities needed for Bayesian conditionalization are unavailable, I introduce an inference schema, defeasible conditionalization, which allows one to update one’s personal probability in a proposition by conditioning on a proposition that represents a proper subset of one’s complete body of evidence. While defeasible conditionalization has wider applicability than standard Bayesian conditionalization (since it may be used when the value of a relevant prior probability, P_PRI(α | ∧E), is unavailable), there are circumstances under which some instances of defeasible conditionalization are unreasonable. To address this difficulty, I outline the conditions under which instances of defeasible conditionalization are defeated. To conclude the article, I suggest that the prescriptions of direct inference and statistical induction can be encoded within the proposed system of probability updating, by the selection of intuitively reasonable prior probabilities.
Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend on it. As I show, whether this is possible depends on the formulation of the norm under consideration.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints have been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
The paper discusses the notion of reasoning with comparative moral judgements (i.e. judgements of the form “act a is morally superior to act b”) from the point of view of several meta-ethical positions. Using a simple formal result, it is argued that only a version of moral cognitivism that is committed to the claim that moral beliefs come in degrees can give a normatively plausible account of such reasoning. Some implications of accepting such a version of moral cognitivism are discussed.
Colin Howson (1995) offers a counter-example to the rule of conditionalization. I will argue that the counter-example doesn't hit its target. The problem is that Howson mis-describes the total evidence the agent has. In particular, Howson overlooks how the restriction that the agent learn 'E and nothing else' interacts with the de se evidence 'I have learnt E'.
Are counterfactuals with true antecedents and consequents automatically true? That is, is Conjunction Conditionalization, the principle that if (X & Y), then (X > Y), valid? Stalnaker and Lewis think so, but many others disagree. We note here that the extant arguments for Conjunction Conditionalization are unpersuasive, before presenting a family of more compelling arguments. These arguments rely on some standard theorems of the logic of counterfactuals as well as a plausible and popular semantic claim about certain semifactuals. Denying Conjunction Conditionalization, then, requires rejecting other aspects of the standard logic of counterfactuals, or else our intuitive picture of semifactuals.
This paper shows that any view of future contingent claims that treats such claims as having indeterminate truth values or as simply being false implies probabilistic irrationality. This is because such views of the future imply violations of reflection, special reflection and conditionalization.
This discussion note examines a recent argument for the principle that any counterfactual with true components is itself true. That argument rests upon two widely accepted principles of counterfactual logic to which the paper presents counterexamples. The conclusion speculates briefly upon the wider lessons that philosophers should draw from these examples for the semantics of counterfactuals.
How do temporal and eternal beliefs interact? I argue that acquiring a temporal belief should have no effect on eternal beliefs for an important range of cases. Thus, I oppose the popular view that new norms of belief change must be introduced for cases where the only change is the passing of time. I defend this position from the purported counter-examples of the Prisoner and Sleeping Beauty. I distinguish two importantly different ways in which temporal beliefs can be acquired and draw some general conclusions about their impact on eternal beliefs.
The Epistemic Objection says that certain theories of time imply that it is impossible to know which time is absolutely present. Standard presentations of the Epistemic Objection are elliptical—and some of the most natural premises one might fill in to complete the argument end up leading to radical skepticism. But there is a way of filling in the details which avoids this problem, using epistemic safety. The new version has two interesting upshots. First, while Ross Cameron alleges that the Epistemic Objection applies to presentism as much as to theories like the growing block, the safety version does not overgeneralize this way. Second, the Epistemic Objection does generalize in a different, overlooked way. The safety objection is a serious problem for a widely held combination of views: “propositional temporalism” together with “metaphysical eternalism”.
Some contextually sensitive expressions are such that their context independent conventional meanings need to be in some way supplemented in context for the expressions to secure semantic values in those contexts. As we’ll see, it is not clear that there is a paradigm here, but ‘he’ used demonstratively is a clear example of such an expression. Call expressions of this sort supplementives in order to highlight the fact that their context independent meanings need to be supplemented in context for them to have semantic values relative to the context. Many philosophers and linguists think that there is a lot of contextual sensitivity in natural language that goes well beyond the pure indexicals and supplementives like ‘he’. Constructions/expressions that are good candidates for being contextually sensitive include: quantifiers, gradable adjectives including “predicates of personal taste”, modals, conditionals, possessives and relational expressions taking implicit arguments. It would appear that in none of these cases does the expression/construction in question have a context independent meaning that when placed in context suffices to secure a semantic value for the expression/construction in the context. In each case, some sort of supplementation is required to do this. Hence, all these expressions are supplementives in my sense. For a given supplementive, the question arises as to what the mechanism is for supplementing its conventional meanings in context so as to secure a semantic value for it in context. That is, what form does the supplementation take? The question also arises as to whether different supplementives require different kinds of supplementation. Let us call an account of what, in addition to its conventional meaning, secures a semantic value for a supplementive in context a metasemantics for that supplementive.
So we can put our two questions thus: what is the proper metasemantics for a given supplementive; and do all supplementives have the same metasemantics? In the present work, I sketch the metasemantics I formulated for demonstratives in earlier work. Next, I briefly consider a number of other supplementives that I think the metasemantics I propose plausibly applies to and explain why I think that. Finally, I consider the prospects for extending the account to all supplementives. In so doing, I take up arguments due to Michael Glanzberg to the effect that supplementives are governed by two different metasemantics and attempt to respond to them. (shrink)
We consider how an epistemic network might self-assemble from the ritualization of the individual decisions of simple heterogeneous agents. In such evolved social networks, inquirers may be significantly more successful than they could be investigating nature on their own. The evolved network may also dramatically lower the epistemic risk faced by even the most talented inquirers. We consider networks that self-assemble in the context of both perfect and imperfect communication and compare the behaviour of inquirers in each. This provides a step in bringing together two new and developing research programs, the theory of self-assembling games and the theory of network epistemology.
David Lewis holds that a single possible world can provide more than one way things could be. But what are possible worlds good for if they come apart from ways things could be? We can make sense of this if we go in for a metaphysical understanding of what the world is. The world does not include everything that is the case—only the genuine facts. Understood this way, Lewis's “cheap haecceitism” amounts to a kind of metaphysical anti-haecceitism: it says there aren't any genuine facts about individuals over and above their qualitative roles.
We prove a representation theorem for preference relations over countably infinite lotteries that satisfy a generalized form of the Independence axiom, without assuming Continuity. The representing space consists of lexicographically ordered transfinite sequences of bounded real numbers. This result is generalized to preference orders on abstract superconvex spaces.
I examine three ‘anti-object’ metaphysical views: nihilism, generalism, and anti-quantificationalism. After setting aside nihilism, I argue that generalists should be anti-quantificationalists. Along the way, I attempt to articulate what a ‘metaphysically perspicuous’ language might even be.
“There are no gaps in logical space,” David Lewis writes, giving voice to sentiment shared by many philosophers. But different natural ways of trying to make this sentiment precise turn out to conflict with one another. One is a *pattern* idea: “Any pattern of instantiation is metaphysically possible.” Another is a *cut and paste* idea: “For any objects in any worlds, there exists a world that contains any number of duplicates of all of those objects.” We use resources from model theory to show the inconsistency of certain packages of combinatorial principles and the consistency of others.
The Christian doctrine of the Trinity poses a serious philosophical problem. On the one hand, it seems to imply that there is exactly one divine being; on the other hand, it seems to imply that there are three. There is another well-known philosophical problem that presents us with a similar sort of tension: the problem of material constitution. We argue in this paper that a relatively neglected solution to the problem of material constitution can be developed into a novel solution to the problem of the Trinity.
According to the doctrine of divine simplicity, God is an absolutely simple being lacking any distinct metaphysical parts, properties, or constituents. Although this doctrine was once an essential part of traditional philosophical theology, it is now widely rejected as incoherent. In this paper, I develop an interpretation of the doctrine designed to resolve contemporary concerns about its coherence, as well as to show precisely what is required to make sense of divine simplicity.
There is a traditional theistic doctrine, known as the doctrine of divine simplicity, according to which God is an absolutely simple being, completely devoid of any metaphysical complexity. On the standard understanding of this doctrine—as epitomized in the work of philosophers such as Augustine, Anselm, and Aquinas—there are no distinctions to be drawn between God and his nature, goodness, power, or wisdom. On the contrary, God is identical with each of these things, along with anything else that can be predicated of him intrinsically.
Could space consist entirely of extended regions, without any regions shaped like points, lines, or surfaces? Peter Forrest and Frank Arntzenius have independently raised a paradox of size for space like this, drawing on a construction of Cantor’s. I present a new version of this argument and explore possible lines of response.
That believing truly as a matter of luck does not generally constitute knowing has become epistemic commonplace. Accounts of knowledge incorporating this anti-luck idea frequently rely on one or another of a safety or sensitivity condition. Sensitivity-based accounts of knowledge have a well-known problem with necessary truths, to wit, that any believed necessary truth trivially counts as knowledge on such accounts. In this paper, we argue that safety-based accounts similarly trivialize knowledge of necessary truths and that two ways of responding to this problem for safety, issuing from work by Williamson and Pritchard, are of dubious success.
Linguists often advert to what are sometimes called linguistic intuitions. These intuitions and the uses to which they are put give rise to a variety of philosophically interesting questions: What are linguistic intuitions – for example, what kind of attitude or mental state is involved? Why do they have evidential force and how might this force be underwritten by their causal etiology? What light might their causal etiology shed on questions of cognitive architecture – for example, as a case study of how consciously inaccessible subpersonal processes give rise to conscious states, or as a candidate example of cognitive penetrability? What methodological issues arise concerning how linguistic intuitions are gathered and interpreted – for example, might some subjects' intuitions be more reliable than others? And what bearing might all this have on philosophers' own appeals to intuitions? This paper surveys and critically discusses leading answers to these questions. In particular, we defend a ‘mentalist’ conception of linguistics and the role of linguistic intuitions therein.
The counterpart theorist has a problem: there is no obvious way to understand talk about actuality in terms of counterparts. Fara and Williamson have charged that this obstacle cannot be overcome. Here I defend the counterpart theorist by offering systematic interpretations of a quantified modal language that includes an actuality operator. Centrally, I disentangle the counterpart relation from a related notion, a ‘representation relation’. The relation of possible things to the actual things they represent is variable, and an adequate account of modal language must keep track of the way it is systematically shifted by modal operators. I apply my account to resolve several puzzles about counterparts and actuality. In technical appendices, I prove some important logical results about this ‘representational’ counterpart system and its relationship to other modal systems.
“Pragmatic encroachers” about knowledge generally advocate two ideas: (1) you can rationally act on what you know; (2) knowledge is harder to achieve when more is at stake. Charity Anderson and John Hawthorne have recently argued that these two ideas may not fit together so well. I extend their argument by working out what “high stakes” would have to mean for the two ideas to line up, using decision theory.
Famous results by David Lewis show that plausible-sounding constraints on the probabilities of conditionals or evaluative claims lead to unacceptable results, by standard probabilistic reasoning. Existing presentations of these results rely on stronger assumptions than they really need. When we strip these arguments down to a minimal core, we can see both how certain replies miss the mark, and also how to devise parallel arguments for other domains, including epistemic “might,” probability claims, claims about comparative value, and so on. A popular reply to Lewis's results is to claim that conditional claims, or claims about subjective value, lack truth conditions. For this strategy to have a chance of success, it needs to give up basic structural principles about how epistemic states can be updated—in a way that is strikingly parallel to the commitments of the project of dynamic semantics.
Suppose that all non-qualitative facts are grounded in qualitative facts. I argue that this view naturally comes with a picture in which trans-world identity is indeterminate. But this in turn leads to either pervasive indeterminacy in the non-qualitative, or else contingency in what facts about modality and possible worlds are determinate.
Some hold that the lesson of Russell’s paradox and its relatives is that mathematical reality does not form a ‘definite totality’ but rather is ‘indefinitely extensible’. There can always be more sets than there ever are. I argue that certain contact puzzles are analogous to Russell’s paradox this way: they similarly motivate a vision of physical reality as iteratively generated. In this picture, the divisions of the continuum into smaller parts are ‘potential’ rather than ‘actual’. Besides the intrinsic interest of this metaphysical picture, it has important consequences for the debate over absolute generality. It is often thought that ‘indefinite extensibility’ arguments at best make trouble for mathematical platonists; but the contact arguments show that nominalists face the same kind of difficulty, if they recognize even the metaphysical possibility of the picture I sketch.
Epistemic decision theory produces arguments with both normative and mathematical premises. I begin by arguing that philosophers should care about whether the mathematical premises (1) are true, (2) are strong, and (3) admit simple proofs. I then discuss a theorem that Briggs and Pettigrew (2020) use as a premise in a novel accuracy-dominance argument for conditionalization. I argue that the theorem and its proof can be improved in a number of ways. First, I present a counterexample that shows that one of the theorem’s claims is false. As a result of this, Briggs and Pettigrew’s argument for conditionalization is unsound. I go on to explore how a sound accuracy-dominance argument for conditionalization might be recovered. In the course of doing this, I prove two new theorems that correct and strengthen the result reported by Briggs and Pettigrew. I show how my results can be combined with various normative premises to produce sound arguments for conditionalization. I also show that my results can be used to support normative conclusions that are stronger than the one that Briggs and Pettigrew’s argument supports. Finally, I show that Briggs and Pettigrew’s proofs can be simplified considerably.
Few notions are more central to Aquinas’s thought than those of matter and form. Although he invokes these notions in a number of different contexts, and puts them to a number of different uses, he always assumes that in their primary or basic sense they are correlative both with each other and with the notion of a “hylomorphic compound”—that is, a compound of matter (hyle) and form (morphe). Thus, matter is an entity that can have form, form is an entity that can be had by matter, and a hylomorphic compound is an entity that exists when the potentiality of some matter to have form is actualized. What is more, Aquinas assumes that the matter of a hylomorphic compound explains certain of its general characteristics, whereas its form explains certain of its more specific characteristics. Thus, the matter of a bronze statue explains the fact that it is bronze, whereas its form explains the fact that it is a statue. Again, the matter of a human being explains the fact that it is a material object, whereas its form explains the specific type of material object it is (namely, human). My aim in this chapter is to provide a systematic introduction to Aquinas’s primary or basic notions of matter and form. To accomplish this aim, I focus on the two main theoretical contexts in which he deploys them—namely, his theory of change and his theory of individuation. In both contexts, as we shall see, Aquinas appeals to matter and form to account for relations of sameness and difference holding between distinct individuals.
Some philosophers respond to Leibniz’s “shift” argument against absolute space by appealing to antihaecceitism about possible worlds, using David Lewis’s counterpart theory. But separated from Lewis’s distinctive system, it is difficult to understand what this doctrine amounts to or how it bears on the Leibnizian argument. In fact, the best way of making sense of the relevant kind of antihaecceitism concedes the main point of the Leibnizian argument, pressing us to consider alternative spatiotemporal metaphysics.
Linguists, particularly in the generative tradition, commonly rely upon intuitions about sentences as a key source of evidence for their theories. While widespread, this methodology has also been controversial. In this paper, I develop a positive account of linguistic intuition, and defend its role in linguistic inquiry. Intuitions qualify as evidence as a form of linguistic behavior, which, since it is partially caused by linguistic competence (the object of investigation), can be used to study this competence. I defend this view by meeting two challenges. First, that intuitions are collected through methodologically unsound practices, and second, that intuition cannot distinguish between the contributions of competence and performance systems.