Conditional probability is often used to represent the probability of the conditional. However, triviality results suggest that the thesis that the probability of the conditional always equals conditional probability leads to untenable conclusions. In this paper, I offer an interpretation of this thesis in a possible worlds framework, arguing that the triviality results make assumptions at odds with the use of conditional probability. I argue that these assumptions come from a theory called the operator theory and that the rival restrictor theory can avoid these problematic assumptions. In doing so, I argue that recent extensions of the triviality arguments to restrictor conditionals fail, making assumptions which are only justified on the operator theory.
A pervasive and influential argument appeals to trivial truths to demonstrate that the aim of inquiry is not the acquisition of truth. But the argument fails, for it neglects to distinguish between the complexity of the sentence used to express a truth and the complexity of the truth expressed by a sentence.
Inquiry into the meaning of logical terms in natural language (‘and’, ‘or’, ‘not’, ‘if’) has generally proceeded along two dimensions. On the one hand, semantic theories aim to predict native speaker intuitions about the natural language sentences involving those logical terms. On the other hand, logical theories explore the formal properties of the translations of those terms into formal languages. Sometimes, these two lines of inquiry appear to be in tension: for instance, our best logical investigation into conditional connectives may show that there is no conditional operator that has all the properties native speaker intuitions suggest ‘if’ has. Indicative conditionals have famously been the source of one such tension, ever since the triviality proofs of both Lewis (1976) and Gibbard (1981) established conclusions which are in prima facie tension with ordinary judgments about natural language indicative conditionals. In a recent series of papers, Branden Fitelson has strengthened both triviality results (Fitelson 2013, 2015, 2016), revealing a common culprit: a logical schema known as IMPORT-EXPORT. Fitelson’s results focus the tension between the logical results and ordinary judgments, since IMPORT-EXPORT seems to be supported by intuitions about natural language. In this paper, we argue that the intuitions which have been taken to support IMPORT-EXPORT are really evidence for a closely related, but subtly different, principle. We show that the two principles are independent by showing how, given a standard assumption about the conditional operator in the formal language in which IMPORT-EXPORT is stated, many existing theories of indicative conditionals validate one, but not the other. Moreover, we argue that once we clearly distinguish these principles, we can use propositional anaphora to show that IMPORT-EXPORT is in fact not valid for natural language indicative conditionals (given this assumption about the formal conditional operator).
This gives us a principled and independently motivated way of rejecting a crucial premise in many triviality results, while still making sense of the speaker intuitions which appeared to motivate that premise. We suggest that this strategy has broad application and an important lesson: in theorizing about the logic of natural language, we must pay careful attention to the translation between the formal languages in which logical results are typically proved, and natural languages which are the subject matter of semantic theory.
I present two Triviality results for Kratzer's standard “restrictor” analysis of indicative conditionals. I both refine and undermine the common claim that problems of Triviality do not arise for Kratzer conditionals since they are not strictly conditionals at all.
This paper clarifies the relationship between the Triviality Results for the conditional and the Restrictor Theory of the conditional. On the understanding of Triviality proposed here, it is implausible—pace many proponents of the Restrictor Theory—that Triviality rests on a syntactic error. As argued here, Triviality arises from simply mistaking the feature a claim has when that claim is logically unacceptable for the feature a claim has when that claim is unsatisfiable. Triviality rests on a semantic confusion—one which some semantic theories, but not others, are prone to making. On the interpretation proposed here, Triviality Results thus play a theoretically constructive role in the project of natural language semantics.
Opponents of the computational theory of mind (CTM) have held that the theory is devoid of explanatory content, since whatever computational procedures are said to account for our cognitive attributes will also be realized by a host of other ‘deviant’ physical systems, such as buckets of water and possibly even stones. Such ‘triviality’ claims rely on a simple mapping account (SMA) of physical implementation. Hence defenders of CTM traditionally attempt to block the trivialization critique by advocating additional constraints on the implementation relation. However, instead of attempting to ‘save’ CTM by constraining the account of physical implementation, I argue that the general form of the triviality argument is invalid. I provide a counterexample scenario, and show that SMA is in fact consistent with empirically rich and theoretically plausible versions of CTM. This move requires rejection of the computational sufficiency thesis, which I argue is scientifically unjustified in any case. By shifting the ‘burden of explanatory force’ away from the concept of physical implementation, and instead placing it on salient aspects of the target phenomenon to be explained, it’s possible to retain a maximally liberal and unfettered view of physical implementation, and at the same time defuse the triviality arguments that have motivated defenders of CTM to impose various theory-laden constraints on SMA.
In recent years, a number of theorists have claimed that beliefs about probability are transparent. To believe probably p is simply to have a high credence that p. In this paper, I prove a variety of triviality results for theses like the above. I show that such claims are inconsistent with the thesis that probabilistic modal sentences have propositions or sets of worlds as their meaning. Then I consider the extent to which a dynamic semantics for probabilistic modals can capture theses connecting belief, certainty, credence, and probability. I show that although a dynamic semantics for probabilistic modals does allow one to validate such theses, it can only do so at a cost. I prove that such theses can only be valid if probabilistic modals do not satisfy the axioms of the probability calculus.
Suppose that you're certain that a certain sentence, e.g. "Frida is tall", lacks a determinate truth value. What cognitive attitude should you take towards it—reject it, suspend judgment, or what else? We show that, by adopting a seemingly plausible principle connecting credence in A and Determinately A, we can prove a very implausible answer to this question: i.e., that all indeterminate claims should be assigned credence zero. The result is strikingly similar to so-called triviality results in the literature on modals and conditionals.
The Identity of Indiscernibles is the principle that objects cannot differ only numerically. It is widely held that one interpretation of this principle is trivially true: the claim that objects that bear all of the same properties are identical. This triviality ostensibly arises from haecceities (properties like ‘is identical to a’). I argue that this is not the case; we do not trivialize the Identity of Indiscernibles with haecceities, because it is impossible to express the haecceities of indiscernible objects. I then argue that this inexpressibility generalizes to all of their trivializing properties. Whether the Identity of Indiscernibles is trivially true ultimately turns on whether we can quantify over properties that we cannot express.
I defend a new account of constitutive essence on which an entity’s constitutively essential properties are its most fundamental, non-trivial necessary properties. I argue that this account accommodates the Finean counterexamples to classic modalism about essence, provides an independently plausible account of constitutive essence and does not run into clear counterexamples. I conclude that this theory provides a promising way forward for attempts to produce an adequate non-primitivist, modalist account of essence. As both triviality and fundamentality in the account are understood in terms of grounding, the theory also potentially has important implications for the relation between essence and grounding.
The Equation (TE) states that the probability of A → B is the probability of B given A. Lewis (1976) has shown that the acceptance of TE implies that the probability of A → B is the probability of B, which is implausible: the probability of a conditional cannot plausibly be the same as the probability of its consequent, e.g., the probability that the match will light given that it is struck is not intuitively the same as the probability that it will light. Here I want to counter Lewis’ claim. My aim is to argue that: (1) TE expresses the coherence requirements implicit in the probability distributions of a modus ponens inference (MP); (2) the triviality result is not implausible because it is a result from these requirements; (3) these coherence requirements measure MP employability, so TE’s significance is tied to it; (4) MP employability doesn’t provide either the acceptability or the truth conditions of conditionals, since MP employability depends on previous independent reasons to accept the conditional and some acceptable conditionals are not MP friendly. Consequently, TE doesn’t have the logical significance that is usually attributed to it.
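Lewis's result mentioned above can be sketched in its standard textbook form. Writing $P_C(\cdot)$ for the result of conditioning $P$ on $C$, and assuming TE holds for every probability function in a class closed under conditioning (and that $P(A \wedge B)$ and $P(A \wedge \neg B)$ are both positive):

```latex
\begin{align*}
P(A \to B) &= P(A \to B \mid B)\,P(B) + P(A \to B \mid \neg B)\,P(\neg B)
  && \text{(total probability)}\\
P(A \to B \mid B) &= P_B(A \to B) = P_B(B \mid A) = P(B \mid A \wedge B) = 1
  && \text{(TE for } P_B\text{)}\\
P(A \to B \mid \neg B) &= P_{\neg B}(A \to B) = P_{\neg B}(B \mid A) = P(B \mid A \wedge \neg B) = 0
  && \text{(TE for } P_{\neg B}\text{)}\\
\therefore\quad P(A \to B) &= 1 \cdot P(B) + 0 \cdot P(\neg B) = P(B)
\end{align*}
```

Combined with TE itself, this gives $P(B \mid A) = P(A \to B) = P(B)$: the conditional probability collapses into the unconditional one, which is the triviality the abstract discusses.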
On an influential line of thinking tracing back to Ramsey, conditionals are closely linked to the attitude of supposition. When applied to counterfactuals, this view suggests a subjunctive version of the so-called Ramsey test: the probability of a counterfactual If A, would B ought to be equivalent to the probability of B, under the subjunctive supposition that A. I present a collapse result for any view that endorses the subjunctive version of the Ramsey test. Starting from plausible assumptions, the result shows that one’s rational credence in a would-counterfactual and in the corresponding might-counterfactual have to be identical.
I formulate a counterfactual version of the notorious ‘Ramsey Test’. Even in a weak form, this makes counterfactuals subject to the very argument that Lewis used to persuade the majority of the philosophical community that indicative conditionals were in hot water. I outline two reactions: to indicativize the debate on counterfactuals; or to counterfactualize the debate on indicatives.
The paper examines Derek Parfit’s claim that naturalism trivializes the agent’s practical argument and therefore abolishes the normativity of its conclusion. In the first section, I present Parfit’s charge in detail. After this I discuss three possible responses to the objection. I show that the first two responses either fail or are inconclusive. Trying to avoid Parfit’s charge by endorsing irreductionist naturalism is not a solution because this form of naturalism is metaphysically untenable. Non-descriptive naturalism, on the other hand, does not answer the pressing concern behind Parfit’s charge. I conclude that we had better turn to the third response: Peter Railton’s vindicatory reductionism. However, I also argue that naturalism can only avoid triviality in this way if it is able to respond to further challenges concerning the vindication of the reduction it proposes. Hence, though not a knockdown argument as it is intended to be, Parfit’s charge can still pose a threat to naturalist accounts of normativity.
I here present and defend what I call the Triviality Theory of Truth, to be understood in analogy with Matti Eklund’s Inconsistency Theory of Truth. A specific formulation of the theory is defended and compared with alternatives found in the literature. A number of objections against the proposed notion of meaning-constitutivity are discussed and held inconclusive. The main focus, however, is on the problem, discussed at length by Gupta and Belnap, that speakers do not accept epistemically neutral conclusions of Curry derivations. I first argue that the facts about speakers’ reactions to such Curry derivations do not constitute a problem for the Triviality Theory specifically. Rather, they follow from independent, uncontroversial facts. I then propose a solution which coheres with the theory as I understand it. Finally, I consider a normative reading of their objection and offer a response.
I argue that No-Ought-From-Is (in the sense that I believe it) is a relatively trivial affair. Of course, when people try to derive substantive or non-vacuous moral conclusions from non-moral premises, they are making a mistake. But No-Non-Vacuous-Ought-From-Is is meta-ethically inert. It tells us nothing about the nature of the moral concepts. It neither refutes naturalism nor supports non-cognitivism. And this is not very surprising since it is merely an instance of an updated version of the conservativeness of logic (in a logically valid inference you don’t get out what you haven’t put in): so long as the expressions F are non-logical, you cannot get non-vacuous F-conclusions from non-F premises. However, the triviality of No-Non-Vacuous-Ought-From-Is is important and its non-profundity profound. No-Ought-From-Is is widely supposed to tell us something significant about the nature of the moral concepts. If, in fact, it tells us nothing, this is a point well worth shouting from the housetops. This brings me to my dispute with Gerhard Schurz who has proved a related version of No-Ought-From-Is, No-Ought-Relevant-Ought-From-Is, a proof which relaxes my assumption that ‘ought’ should not be treated as a logical constant. But if ought is not a logical expression then it does not really matter much that No-Ought-From-Is would be salvageable even if it were. Furthermore, Schurz’s proof depends on special features of the moral concepts and this might afford the basis for an abductive argument to something like non-cognitivism. As an error theorist, and therefore a cognitivist, I object. Finally I take a dim view of deontic logic. Many of its leading principles are false, bordering on the nonsensical, and even the reasonably plausible ones are subject to devastating counter-examples.
This paper discusses and relates two puzzles for indicative conditionals: a puzzle about indeterminacy and a puzzle about triviality. Both puzzles arise because of Ramsey's Observation, which states that the probability of a conditional is equal to the conditional probability of its consequent given its antecedent. The puzzle of indeterminacy is the problem of reconciling this fact about conditionals with the fact that they seem to lack truth values at worlds where their antecedents are false. The puzzle of triviality is the problem of reconciling Ramsey's Observation with various triviality proofs which establish that Ramsey's Observation cannot hold in full generality. In the paper, I argue for a solution to the indeterminacy puzzle and then apply the resulting theory to the triviality puzzle. On the theory I defend, the truth conditions of indicative conditionals are highly context dependent and such that an indicative conditional may be indeterminate in truth value at each possible world throughout some region of logical space and yet still have a nonzero probability throughout that region.
Theories of content are at the centre of philosophical semantics. The most successful general theory of content takes contents to be sets of possible worlds. But such contents are very coarse-grained, for they cannot distinguish between logically equivalent contents. They draw intensional but not hyperintensional distinctions. This is often remedied by including impossible as well as possible worlds in the theory of content. Yet it is often claimed that impossible worlds are metaphysically obscure; and it is sometimes claimed that their use results in a trivial theory of content. In this paper, I set out the need for impossible worlds in a theory of content; I briefly sketch a metaphysical account of their nature; I argue that worlds in general must be very fine-grained entities; and, finally, I argue that the resulting conception of impossible worlds is not a trivial one.
This paper presents a range of new triviality proofs pertaining to naïve truth theory formulated in paraconsistent relevant logics. It is shown that excluded middle together with various permutation principles such as A → (B → C) ⊩ B → (A → C) trivialize naïve truth theory. The paper also provides some new triviality proofs which utilize the axioms ((A → B) ∧ (B → C)) → (A → C) and (A → ¬A) → ¬A, the fusion connective and the Ackermann constant. An overview of various ways to formulate Leibniz’s law in non-classical logics and two new triviality proofs for naïve set theory are also provided.
In this paper, we present a new semantic challenge to the moral error theory. Its first component calls upon moral error theorists to deliver a deontic semantics that is consistent with the error-theoretic denial of moral truths by returning the truth-value false to all moral deontic sentences. We call this the ‘consistency challenge’ to the moral error theory. Its second component demands that error theorists explain in which way moral deontic assertions can be seen to differ in meaning despite necessarily sharing the same intension. We call this the ‘triviality challenge’ to the moral error theory. Error theorists can either meet the consistency challenge or the triviality challenge, we argue, but are hard pressed to meet both.
A spectre haunts the semantics of natural language — the spectre of Triviality. Semanticists (in particular Rothschild 2013; Khoo and Mandelkern 2018a,b) have entered into a holy alliance to exorcise this spectre. None, I will argue, have yet succeeded.
Famous results by David Lewis show that plausible-sounding constraints on the probabilities of conditionals or evaluative claims lead to unacceptable results, by standard probabilistic reasoning. Existing presentations of these results rely on stronger assumptions than they really need. When we strip these arguments down to a minimal core, we can see both how certain replies miss the mark, and also how to devise parallel arguments for other domains, including epistemic “might,” probability claims, claims about comparative value, and so on. A popular reply to Lewis's results is to claim that conditional claims, or claims about subjective value, lack truth conditions. For this strategy to have a chance of success, it needs to give up basic structural principles about how epistemic states can be updated—in a way that is strikingly parallel to the commitments of the project of dynamic semantics.
Primarily a response to Paul Horwich's "Composition of Meanings", the paper attempts to refute his claim that compositionality—roughly, the idea that the meaning of a sentence is determined by the meanings of its parts and how they are combined—imposes no substantial constraints on semantic theory or on our conception of the meanings of words or sentences.
A viable theory of literary humanism must do justice to the idea that literature offers cognitive rewards to the careful reader. There are, however, powerful arguments to the effect that literature is at best only capable of offering idle visions of a world already well known. In this essay I argue that there is a form of cognitive awareness left unmentioned in the traditional vocabulary of knowledge acquisition, a form of awareness literature is particularly capable of offering. Thus even if it is the case that literature has nothing interesting to give us in the way of knowledge, the literary humanist can consistently maintain that literary experience is thoroughly cognitive.
Recent work in formal semantics suggests that the language system includes not only a structure building device, as standardly assumed, but also a natural deductive system which can determine when expressions have trivial truth-conditions (e.g., are logically true/false) and mark them as unacceptable. This hypothesis, called the ‘logicality of language’, accounts for many acceptability patterns, including systematic restrictions on the distribution of quantifiers. To deal with apparent counter-examples consisting of acceptable tautologies and contradictions, the logicality of language is often paired with an additional assumption according to which logical forms are radically underspecified: i.e., the language system can see functional terms but is ‘blind’ to open class terms to the extent that different tokens of the same term are treated as if independent. This conception of logical form has profound implications: it suggests an extreme version of the modularity of language, and can only be paired with non-classical—indeed quite exotic—kinds of deductive systems. The aim of this paper is to show that we can pair the logicality of language with a different and ultimately more traditional account of logical form. This framework accounts for the basic acceptability patterns which motivated the logicality of language, can explain why some tautologies and contradictions are acceptable, and makes better predictions in key cases. As a result, we can pursue versions of the logicality of language in frameworks compatible with the view that the language system is not radically modular vis-à-vis its open class terms and employs a deductive system that is basically classical.
I argue against the claim that it is trivial to state that Sidgwick used the method of wide reflective equilibrium. This claim is based on what could be called the Triviality Charge, which is pressed against the method of wide reflective equilibrium by Peter Singer. According to this charge, there is no alternative to using the method if it is interpreted as involving all relevant philosophical background arguments. The main argument against the Triviality Charge is that although the method of wide reflective equilibrium is compatible with coherentism (understood as a form of weak foundationalism) as well as moderate foundationalism, it is not compatible with strong foundationalism. Hence, the claim that a philosopher uses the method of wide reflective equilibrium is informative. In particular, this is true with regard to Sidgwick.
The subject of this paper is the notion of similarity between the actual and impossible worlds. Many believe that this notion is governed by two rules. According to the first rule, every non-trivial world is more similar to the actual world than the trivial world is. The second rule states that every possible world is more similar to the actual world than any impossible world is. The aim of this paper is to challenge both of these rules. We argue that acceptance of the first rule leads to the claim that the rule ex contradictione sequitur quodlibet is invalid in classical logic. The second rule does not recognize the fact that objects might be similar to one another due to various features.
Moral contextualism is the view that claims like ‘A ought to X’ are implicitly relative to some (contextually variable) standard. This leads to a problem: what are fundamental moral claims like ‘You ought to maximize happiness’ relative to? If this claim is relative to a utilitarian standard, then its truth conditions are trivial: ‘Relative to utilitarianism, you ought to maximize happiness’. But it certainly doesn’t seem trivial that you ought to maximize happiness (utilitarianism is a highly controversial position). Some people believe this problem is a reason to prefer a realist or error theoretic semantics of morals. I argue two things: first, that plausible versions of all these theories are afflicted by the problem equally, and second, that any solution available to the realist and error theorist is also available to the contextualist. So the problem of triviality does not favour noncontextualist views of moral language.
Sharon Street defines her constructivism about practical reasons as the view that whether something is a reason to do a certain thing for a given agent depends on that agent’s normative point of view. However, Street has also maintained that there is a judgment about practical reasons which is true relative to every possible normative point of view, namely constructivism itself. I show that the latter thesis is inconsistent with Street’s own constructivism about epistemic reasons and discuss some consequences of this incompatibility.
Many authors have turned their attention to the notion of constitution to determine whether the hypothesis of extended cognition (EC) is true. One common strategy is to make sense of constitution in terms of the new mechanists’ mutual manipulability account (MM). In this paper I will show that MM is insufficient. The Challenge of Trivial Extendedness arises due to the fact that mechanisms for cognitive behaviors are extended in a way that should not count as verifying EC. This challenge can be met by adding a necessary condition: cognitive constituents satisfy MM and they are what I call behavior unspecific.
This is a thesis in support of the conceptual yoking of analytic truth to a priori knowledge. My approach is a semantic one; the primary subject matter throughout the thesis is linguistic objects, such as propositions or sentences. I evaluate arguments, and also forward my own, about how such linguistic objects’ truth is determined, how their meaning is fixed and how we, respectively, know the conditions under which their truth and meaning obtain. The strategy is to make explicit what is distinctive about analytic truths. The objective is to show that truths, known a priori, are trivial in a highly circumscribed way. My arguments are premised on a language-relative account of analytic truth. The language-relative account which underwrites much of what I do has two central tenets: 1. Conventionalism about truth and, 2. Non-factualism about meaning. I argue that one decisive way of establishing conventionalism and non-factualism is to prioritise epistemological questions. Once it is established that some truths are not known empirically, an account of truth must follow which precludes factual truths being known non-empirically. The function of Part 1 is, chiefly, to render Carnap’s language-relative account of analytic truth. I do not offer arguments in support of Carnap at this stage, but throughout Parts 2 and 3, by looking at more current literature on a priori knowledge and analytic truth, it becomes quickly evident that I take Carnap to be correct, and why. In order to illustrate the extent to which Carnap’s account is conventionalist and non-factualist I pose his arguments against those of his predecessors, Kant and Frege. Part 1 is a lightly retrospective background to the concepts of ‘analytic’ and ‘a priori’. The strategy therein is more mercenary than exegetical: I select the parts from Kant and Frege most relevant to Carnap’s eventual reaction to them. Hereby I give the reasons why Carnap foregoes a factual and objective basis for logical truth.
The upshot of this is an account of analytic truth (i.e. logical truth, to him) which ensures its trivial nature. In opposition to accounts of a priori knowledge, which describe it as knowledge gained from rational apprehension, I argue that it is either knowledge from logical deduction or knowledge of stipulations. I therefore reject, in Part 2, three epistemologies for knowing linguistic conventions (e.g. implicit definitions): 1. intuition, 2. inferential a priori knowledge and, 3. a posteriori knowledge. At base, all three epistemologies are rejected because they are incompatible with conventionalism and non-factualism. I argue this point by signalling that such accounts of knowledge yield unsubstantiated second-order claims and/or they render the relevant linguistic conventions epistemically arrogant. For a convention to be arrogant it must be stipulated to be true. The stipulation is then considered arrogant when its meaning cannot be fixed, and its truth cannot be determined, without empirical ‘work’. Once a working explication of ‘a priori’ has been given, partially in Part 1 (as inferential) and then in Part 2 (as non-inferential), I look, in Part 3, at an apriorist account of analytic truth, which, I argue, renders analytic truth non-trivial. The particular subject matter here is the implicit definitions of logical terms. The opposition’s argument holds that logical truths are known a priori (this is part of their identification criteria) and that their meaning is factually based. From here it follows that analytic truth, being determined by factually based meaning, is also factual. I oppose these arguments by exposing their internal inconsistency: implicit definition is premised on the arbitrary stipulation of truth, which is inconsistent with saying that there are facts which determine the same truth.
In doing so, I endorse the standard irrealist position about implicit definition and analytic truth (along with the “early friends of implicit definition” such as Wittgenstein and Carnap). What is it that I am trying to get at by doing all of the abovementioned? Here is a very abstracted explanation. The unmitigated realism of the rationalists of old, e.g. Plato, Descartes and Kant, has stoically borne the brunt of the allegation of yielding ‘synthetic a priori’ claims. The anti-rationalist phase of this accusation I am most interested in is that forwarded by the semantically driven empiricism of the early 20th century. It is here that the charge of the ‘synthetic a priori’ really takes hold. Since then new methods and accusatory terms have been employed by, chiefly, non-realist positions. I plan to give these proper attention in due course. However, it seems to me that the reframing of the debate in these new terms has also created the illusion that current philosophical realism, whether naturalistic realism, realism in science, or realism in logic and mathematics, is somehow not guilty of the same epistemological and semantic charges levelled against Plato, Descartes and Kant. It is of interest to me that in current analytic philosophy in particular (given its rationale) realism in many areas seems to escape the accusation of yielding synthetic a priori claims. Yet yielding synthetic a priori claims is something which realism so easily falls prey to. Perhaps this is a function of the fact that the phrase, ‘synthetic a priori’, used as an allegation, is now outmoded. This thesis is nothing other than an indictment of metaphysics, or speculative philosophy (this being the crime), brought against a specific selection of realist arguments.
I, therefore, ask of my reader to see my explicit, and perhaps outmoded, charge of the ‘synthetic a priori’ levelled against respective theorists as an attempt to draw a direct comparison with the speculative metaphysics so many analytic philosophers now love to hate. I think the phrase ‘synthetic a priori’ still does a lot of work in this regard, precisely because so many current theorists wrongly think they are immune to this charge. Consequently, I shall say much about what is not permitted. Such is, I suppose, the nature of arguing ‘against’ something. I’ll argue that it is not permitted to be a factualist about logical principles and say that they are known a priori. I’ll argue that it is not permitted to say linguistic conventions are a posteriori, when there is a complete failure in locating such a posteriori conventions. Both such philosophical claims are candidates for the synthetic a priori, for unmitigated rationalism. But on the positive side, we now have these two assets: Firstly, I do not ask us to abandon any of the linguistic practices discussed; merely to adopt the correct attitude towards them. For instance, where we use the laws of logic, let us remember that there are no known/knowable facts about logic. These laws are therefore, to the best of our knowledge, conventions not dissimilar to the rules of a game. And, secondly, once we pass sentence on knowing, a priori, anything but trivial truths we shall have at our disposal the sharpest of philosophical tools. A tool which can only proffer a better brand of empiricism.
Recent work in formal semantics suggests that the language system includes not only a structure building device, as standardly assumed, but also a natural deductive system which can determine when expressions have trivial truth-conditions (e.g., are logically true/false) and mark them as unacceptable. This hypothesis, called the ‘logicality of language’, accounts for many acceptability patterns, including systematic restrictions on the distribution of quantifiers. To deal with apparent counter-examples consisting of acceptable tautologies and contradictions, the logicality of language is often paired with an additional assumption according to which logical forms are radically underspecified: i.e., the language system can see functional terms but is ‘blind’ to open class terms, to the extent that different tokens of the same term are treated as if independent. This conception of logical form has profound implications: it suggests an extreme version of the modularity of language, and can only be paired with non-classical—indeed quite exotic—kinds of deductive systems. The aim of this paper is to show that we can pair the logicality of language with a different and ultimately more traditional account of logical form. This framework accounts for the basic acceptability patterns which motivated the logicality of language, can explain why some tautologies and contradictions are acceptable, and makes better predictions in key cases. As a result, we can pursue versions of the logicality of language in frameworks compatible with the view that the language system is not radically modular vis-à-vis its open class terms and employs a deductive system that is basically classical.
This dissertation offers a proof of the logical possibility of testing empirical/factual theories that are inconsistent but non-trivial. In particular, I discuss whether or not such theories can satisfy Popper's principle of falsifiability. An inconsistent theory Ƭ closed under a classical consequence relation implies every statement of its language, because in classical logic inconsistency and triviality are coextensive. A theory Ƭ is consistent iff there is no α such that Ƭ ⊢ α ∧ ¬α; otherwise it is inconsistent. We say, instead, that Ƭ is non-trivial iff there is at least one α such that Ƭ ⊬ α; otherwise we say that it is trivial. This happens because classical logic satisfies the principle of explosion, according to which ex contradictione sequitur quodlibet (from a contradiction anything follows). Under these conditions inconsistent classical theories would be compatible with any well-formed formula, which makes them useless for science. There are, however, so-called paraconsistent logics in which the principle of explosion does not generally hold, and in which a theory can be (simply) inconsistent yet absolutely consistent (i.e., non-trivial). It is in this logical framework that we can prove that some inconsistent theories can be falsifiable.
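The contrast the abstract draws, between explosion in classical logic and its failure in a paraconsistent logic, can be checked mechanically. The sketch below uses Priest's three-valued Logic of Paradox (LP) as the paraconsistent example; LP is our illustrative choice, not necessarily the system used in the dissertation:

```python
from itertools import product

# Three-valued LP: 0 = false, 0.5 = both true and false, 1 = true.
# The designated values (those preserved by entailment) are 0.5 and 1.
def neg(a):
    return 1 - a

def conj(a, b):
    return min(a, b)

def explosion_holds(values, designated):
    """Does A ∧ ¬A ⊨ B? I.e., does every valuation that designates
    the premise also designate the conclusion?"""
    return all(b in designated
               for a, b in product(values, repeat=2)
               if conj(a, neg(a)) in designated)

# Classical logic: explosion holds vacuously, since A ∧ ¬A is never true.
print(explosion_holds({0, 1}, {1}))            # True
# LP: explosion fails; a = 0.5 designates A ∧ ¬A while b = 0 is undesignated.
print(explosion_holds({0, 0.5, 1}, {0.5, 1}))  # False
```

The second result is exactly the "inconsistent but non-trivial" possibility the dissertation exploits: an LP theory containing A ∧ ¬A does not thereby entail every formula.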
In a recent pair of publications, Richard Bradley has offered two novel no-go theorems involving the principle of Preservation for conditionals, which guarantees that one’s prior conditional beliefs will exhibit a certain degree of inertia in the face of a change in one’s non-conditional beliefs. We first note that Bradley’s original discussions of these results—in which he finds motivation for rejecting Preservation, first in a principle of Commutativity, then in a doxastic analogue of the rule of modus ponens—are problematic in a significant number of respects. We then turn to a recent U-turn on his part, in which he winds up rescinding his commitment to modus ponens on the grounds of a tension with the rule of Import-Export for conditionals. Here we offer an important positive contribution to the literature, settling the following crucial question that Bradley leaves unanswered: assuming that one gives up on full-blown modus ponens on the grounds of its incompatibility with Import-Export, what weakened version of the principle should one settle for instead? Our discussion of the issue turns out to unearth an interesting connection between epistemic undermining and the apparent failures of modus ponens in McGee’s famous counterexamples.
Christian moral philosophy is a distinctive kind of moral philosophy owing to the special role it assigns to God in Christ. Much contemporary 'Christian ethics' focuses on semantic, modal, conceptual and epistemological issues. This may be helpful, but it omits the distinctive focus of Christian moral philosophy: the human condition in a morally ordered universe and the redemptive work of Jesus Christ as a response to that predicament. Christian moral philosophers should seek to remedy that neglect.
This is a reply to Alex Grzankowski’s comment on my paper, ‘To Believe is to Believe True’. I argue that one may believe a proposition to be true without possessing the concept of truth. I note that to believe the proposition P to be true is not the same as to believe the proposition ‘P is true’. This avoids the regress highlighted by Grzankowski in which the concept of truth is employed an infinite number of times in a single belief.
The Unexpected Hanging Problem is also known as the Surprise Examination Problem. We here solve it by isolating logical reasoning from the rest of the human psyche. In a not-so-orthodox analysis, following our tradition (The Liar, Dichotomy, The Sorites and Russell’s Paradox), we discuss the problem from a perspective more distant than all the known perspectives. From an observational point much farther away than any used until now, the reader can finally see why the problem has been perpetuated as a problem, and can also see that the problem was never an actual problem: once more, we have an allurement. The allurement this time makes us start paying attention to all the complexity of the human psyche when studying problems that involve human feelings. The main finding could be said to be that we have to understand and study the human psyche more, in all its intricacies, also when dealing with problems that seem to belong exclusively to Mathematics or Logic.
In this paper, we use antecedent-final conditionals to formulate two problems for parsing-based theories of presupposition projection and triviality of the kind given in Schlenker 2009. We show that, when it comes to antecedent-final conditionals, parsing-based theories predict filtering of presuppositions where there is in fact projection, and triviality judgments for sentences which are in fact felicitous. More concretely, these theories predict that presuppositions triggered in the antecedent of antecedent-final conditionals will be filtered (i.e. will not project) if the negation of the consequent entails the presupposition. But this is wrong: ‘John isn’t in Paris, if he regrets being in France’ intuitively presupposes that John is in France, contrary to this prediction. Likewise, parsing-based approaches to triviality predict that material entailed by the negation of the consequent will be redundant in the antecedent of the conditional; but ‘John isn’t in Paris, if he’s in France and Mary is with him’ is intuitively felicitous, contrary to these predictions. Importantly, given that the trigger appears in sentence-final position, both incremental (left-to-right) and symmetric versions of such theories make the same predictions. These data constitute a challenge to the idea that presupposition projection and triviality should be computed on the basis of parsing. This issue is important because it relates to the more general question as to whether presupposition and triviality calculation should be thought of as a pragmatic post-compositional phenomenon or as part of compositional semantics (as in the more traditional dynamic approaches).
We discuss a solution which allows us to maintain the parsing-based pragmatic approach; it is based on an analysis of conditionals which incorporates a presupposition that their antecedent is compatible with the context, together with a modification to Schlenker’s (2009) algorithm for calculating local contexts so that it takes into account presupposed material. As we will discuss, this solution works within a framework broadly similar to Schlenker’s (2009), but it doesn’t extend in an obvious way to other parsing-based accounts (e.g. parsing-based trivalent approaches). We conclude that a parsing-based theory can be maintained, but only if we adopt a substantial change of perspective on the framework.
Triviality arguments against the computational theory of mind claim that computational implementation is trivial and thus does not serve as an adequate metaphysical basis for mental states. It is common to take computational implementation to consist in a mapping from physical states to abstract computational states. In this paper, I propose a novel constraint on the kinds of physical states that can implement computational states, which helps to specify what it is for two physical states to non-trivially implement the same computational state.
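The triviality worry the paper addresses can be made concrete with a toy mapping. The sketch below (the rock/AND-gate example and all names in it are our illustrative assumptions) shows how, on an unconstrained mapping account, any sequence of pairwise distinct physical states "implements" any computation of the same length, which is exactly the kind of mapping a constraint on implementing states is meant to rule out:

```python
# Putnam-style triviality construction: map each physical state to the
# computational state occupied at the same time step. Because the physical
# states are pairwise distinct, the mapping is a well-defined function under
# which physical transitions mirror computational transitions step for step.
def trivial_implementation(physical_trace, computational_run):
    if len(physical_trace) != len(computational_run):
        raise ValueError("traces must be the same length")
    if len(set(physical_trace)) != len(physical_trace):
        raise ValueError("physical states must be pairwise distinct")
    return dict(zip(physical_trace, computational_run))

# A rock warming in the sun passes through four distinct temperatures...
rock = ["20.0C", "20.1C", "20.2C", "20.3C"]
# ...so it can be mapped onto a four-state run of an AND-gate computation.
run = ["read_a", "read_b", "compute", "output"]
print(trivial_implementation(rock, run))
```

Nothing in the mapping itself distinguishes the rock from a genuine computer; any constraint blocking triviality must therefore restrict which physical states, or which mappings, count, which is the shape of the proposal in the paper.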
Many recent theories of epistemic discourse exploit an informational notion of consequence, i.e. a notion that defines entailment as preservation of support by an information state. This paper investigates how informational consequence fits with probabilistic reasoning. I raise two problems. First, all informational inferences that are not also classical inferences are, intuitively, probabilistically invalid. Second, all these inferences can be exploited, in a systematic way, to generate triviality results. The informational theorist is left with two options, both of them radical: they can either deny that epistemic modal claims have probability at all, or they can move to a nonstandard probability theory.
The aim of the paper is to critically assess the idea that reasons for action are provided by desires. I start from the claim that the most often employed meta-ethical background for the Model is ethical naturalism; I then argue against the Model through its naturalist background. For the latter purpose I make use of two objections that are both intended to refute naturalism per se. One is G. E. Moore’s Open Question Argument, the other is Derek Parfit’s Triviality Objection. I show that naturalists might be able to avoid both objections if they can vindicate the reduction proposed. This, however, leads to further conditions whose fulfillment is necessary for the success of the vindication. I deal with one such condition, which I borrow from Peter Railton and Mark Schroeder: the demand that naturalist reductions must be tolerably revisionist. In the remainder of the paper I argue that the most influential versions of the Model are intolerably revisionist. The first problem concerns the picture of reasons that many recent formulations of the Model advocate. By using an objection from Michael Bedke, I show that on this interpretation obvious reasons won’t be accounted for by the Model. The second problem concerns the idealization that is also often part of the Model. Invoking an argument of Connie Rosati’s, I show that the best form of idealization, the ideal advisor account, is inadequate. Hence, though not the knock-down arguments they were intended to be, the OQA and TO do pose a serious threat to the Model.
The chapter is devoted to the probability and acceptability of indicative conditionals. Focusing on three influential theses, the Equation, Adams’ thesis, and the qualitative version of Adams’ thesis, Sikorski argues that none of them is well supported by the available empirical evidence. In the most controversial case of the Equation, the results of many studies which support it are, at least to some degree, undermined by some recent experimental findings. Sikorski discusses the Ramsey Test, and Lewis’s triviality proof, with special attention dedicated to the popular ways of blocking it. Sikorski concludes that the role of the three theses in future studies of conditionals should be re-thought, and he presents alternative proposals.
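Lewis's triviality proof, mentioned above, turns on a short probabilistic calculation that is easy to reproduce. The sketch below (the particular joint distribution is an illustrative assumption of ours) exhibits the key step: if the Equation P(A→C) = P(C|A) survived conditioning on C and on ¬C, the law of total probability would force P(C|A) = P(C), which fails whenever A and C are correlated:

```python
from fractions import Fraction as F

# Joint distribution over four worlds for atoms A and C; the numbers are
# illustrative, chosen so that A and C are correlated.
P = {('A', 'C'): F(3, 10), ('A', '~C'): F(1, 10),
     ('~A', 'C'): F(2, 10), ('~A', '~C'): F(4, 10)}

def prob(pred):
    return sum(p for w, p in P.items() if pred(w))

def cond(pred, given):
    return prob(lambda w: pred(w) and given(w)) / prob(given)

A = lambda w: w[0] == 'A'
C = lambda w: w[1] == 'C'

# Lewis's step: if P(A→C) = P(C|A) held even after conditioning on C and
# on ~C, then by total probability
#   P(A→C) = P(A→C | C)·P(C) + P(A→C | ~C)·P(~C) = 1·P(C) + 0·P(~C) = P(C),
# so the Equation would force P(C|A) = P(C). Here the two come apart:
print(cond(C, A))   # 3/4
print(prob(C))      # 1/2
```

Since P(C|A) ≠ P(C) for this (perfectly ordinary) distribution, no single proposition A→C can satisfy the Equation across all such probability functions, which is the triviality the chapter discusses.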
Motivated by H. Curry’s well-known objection and by a proposal of L. Henkin, this article introduces the positive tableaux, a form of tableau calculus without refutation based upon the idea of implicational triviality. The completeness of the method is proven, which establishes a new decision procedure for the (classical) positive propositional logic. We also introduce the concept of paratriviality in order to contribute to the question of paradoxes and limitations imposed by the behavior of classical implication.
What is it to know more? By what metric should the quantity of one's knowledge be measured? I start by examining and arguing against a very natural approach to the measure of knowledge, one on which how much is a matter of how many. I then turn to the quasi-spatial notion of counterfactual distance and show how a model that appeals to distance avoids the problems that plague appeals to cardinality. But such a model faces fatal problems of its own. Reflection on what the distance model gets right and where it goes wrong motivates a third approach, which appeals not to cardinality, nor to counterfactual distance, but to similarity. I close the paper by advocating this model and briefly discussing some of its significance for epistemic normativity. In particular, I argue that the 'trivial truths' objection to the view that truth is the goal of inquiry rests on an unstated, but false, assumption about the measure of knowledge, and suggest that a similarity model preserves truth as the aim of belief in an intuitively satisfying way.
This work studies some problems connected to the role of negation in logic, treating the positive fragments of propositional calculus in order to deal with two main questions: the proof of completeness theorems in systems lacking negation, and the puzzle raised by positive paradoxes like the well-known argument of Haskell Curry. We study the constructive completeness method proposed by Leon Henkin for classical fragments endowed with implication, and advance some reasons explaining what makes it difficult to extend this constructive method to non-classical fragments equipped with weaker implications (which avoid Curry's objection). This is the case, for example, of Jan Łukasiewicz's n-valued logics and Wilhelm Ackermann's logic of restricted implication. Besides such problems, both Henkin's method and the triviality phenomenon enable us to propose a new positive tableau proof system which uses only positive meta-linguistic resources, and to motivate a new discussion concerning the role of negation in logic, proposing the concept of paratriviality. In this way, some relations between positive reasoning and infinity, the possibility of obtaining a first-order positive logic, as well as the philosophical connection between truth and meaning, are discussed from a conceptual point of view.
My aim in this paper is to critically assess the idea that reasons for action are provided by desires (the Desire-based Reasons Model, or the Model). I start from the claim that the most often employed meta-ethical background for the Model is ethical naturalism; I then consider attempts to argue against the Model through its naturalism. I make use of two objections that are both intended to refute naturalism per se. One is the indirect version of G. E. Moore’s Open Question Argument (OQA), the other is Derek Parfit’s more recent Triviality Objection (TO). I show that naturalists might be able to avoid both objections in case the reduction they propose is tolerable. This, however, means that in order to see if the objections work, we must analyze the particular reductions proposed. Hence, though not the knock-down arguments they were intended to be, the indirect OQA and TO may pose a threat to the Model.
Bradley offers a quick and convincing argument that no Boolean semantic theory for conditionals can validate a very natural principle concerning the relationship between credences and conditionals. We argue that Bradley’s principle, Preservation, is, in fact, invalid; its appeal arises from the validity of a nearby, but distinct, principle, which we call Local Preservation, and which Boolean semantic theories can non-trivially validate.
On Hume’s account of motivation, beliefs and desires are very different kinds of propositional attitudes. Beliefs are cognitive attitudes, desires emotive ones. An agent’s belief in a proposition captures the weight he or she assigns to this proposition in his or her cognitive representation of the world. An agent’s desire for a proposition captures the degree to which he or she prefers its truth, motivating him or her to act accordingly. Although beliefs and desires are sometimes entangled, they play very different roles in rational agency. In two classic papers (Lewis 1988, 1996), David Lewis discusses several challenges to this Humean picture, but ultimately rejects them. We think that his discussion of a central anti-Humean alternative – the desire-as-belief thesis – is in need of refinement. On this thesis, the desire for proposition p is given by the belief that p is desirable. Lewis claims that ‘[e]xcept in trivial cases, [this thesis] collapses into contradiction’ (Lewis 1996, p. 308). The problem, he argues, is that the thesis is inconsistent with the purportedly plausible requirement that one’s desire for a proposition should not change upon learning that the proposition is true; call this the invariance requirement. In this paper, we revisit Lewis’s argument. We show that, if one carefully distinguishes between non-evaluative and evaluative propositions, the desire-as-belief thesis can be rendered consistent with the invariance requirement. Lewis’s conclusion holds only under certain conditions: the desire-as-belief thesis conflicts with the invariance requirement if and only if there are certain correlations between non-evaluative and evaluative propositions. But when there are such correlations, we suggest, the invariance requirement loses its plausibility. Thus Lewis’s argument against the desire-as-belief thesis appears to be valid only in cases in which it is unsound.
We investigate a lattice of conditional logics described by a Kripke-type semantics suggested by Chellas and Segerberg – Chellas–Segerberg (CS) semantics – plus 30 further principles. We (i) present a non-trivial frame-based completeness result, (ii) give a translation procedure which yields corresponding trivial frame conditions for arbitrary formula schemata, and (iii) provide non-trivial frame conditions in CS semantics which correspond to the 30 principles.