During the realist revival in the early years of this century, philosophers of various persuasions were concerned to investigate the ontology of truth. That is, whether or not they viewed truth as a correspondence, they were interested in the extent to which one needed to assume the existence of entities serving some role in accounting for the truth of sentences. Certain of these entities, such as the Sätze an sich of Bolzano, the Gedanken of Frege, or the propositions of Russell and Moore, were conceived as the bearers of the properties of truth and falsehood. Some thinkers, however, such as Russell, Wittgenstein in the Tractatus, and Husserl in the Logische Untersuchungen, argued that instead of, or in addition to, truth-bearers, one must assume the existence of certain entities in virtue of which sentences and/or propositions are true. Various names were used for these entities, notably 'fact', 'Sachverhalt', and 'state of affairs'. In order not to prejudge the suitability of these words we shall initially employ a more neutral terminology, calling any entities which are candidates for this role truth-makers.
We present a theory of truth in fiction that improves on Lewis's [1978] ‘Analysis 2’ in two ways. First, we expand Lewis's possible worlds apparatus by adding non-normal or impossible worlds. Second, we model truth in fiction as belief revision via ideas from dynamic epistemic logic. We explain the major objections raised against Lewis's original view and show that our theory overcomes them.
A popular account of epistemic justification holds that justification, in essence, aims at truth. An influential objection against this account points out that it is committed to holding that only true beliefs could be justified, which most epistemologists regard as sufficient reason to reject the account. In this paper I defend the view that epistemic justification aims at truth, not by denying that it is committed to epistemic justification being factive, but by showing that, when we focus on the relevant sense of ‘justification’, it isn’t in fact possible for a belief to be at once justified and false. To this end, I consider and reject three popular intuitions speaking in favor of the possibility of justified false beliefs, and show that a factive account of epistemic justification is less detrimental to our normal belief forming practices than often supposed.
A raft of new philosophical problems concerning truth has recently been discovered by several theorists, following [Fine, 2010]. Most previous commentators on these problems have taken them to shed light on the theory of ground. In this paper, I argue that they also shed light on the theory of truth. In particular, I argue that the notion of ground can be deployed to clearly articulate one strand of deflationary thinking about truth, according to which truth is “metaphysically lightweight.” I offer a ground-theoretic explication of the (entirely bearable) lightness of truth, and show how it yields a novel solution to the problems concerning how truth is grounded. So, the theory of truth and the theory of ground interact fruitfully: we can apply the notion of ground to offer a clear explication of the deflationist claim that truth is “metaphysically lightweight” that both captures the motivations for that claim and solves the problems.
Replacing Truth. Kevin Scharp - 2007 - Inquiry: An Interdisciplinary Journal of Philosophy 50 (6): 606–621.
Of the dozens of purported solutions to the liar paradox published in the past fifty years, the vast majority are "traditional" in the sense that they reject one of the premises or inference rules that are used to derive the paradoxical conclusion. Over the years, however, several philosophers have developed an alternative to the traditional approaches; according to them, our very competence with the concept of truth leads us to accept that the reasoning used to derive the paradox is sound. That is, our conceptual competence leads us into inconsistency. I call this alternative the inconsistency approach to the liar. Although this approach has many positive features, I argue that several of the well-developed versions of it that have appeared recently are unacceptable. In particular, they do not recognize that if truth is an inconsistent concept, then we should replace it with new concepts that do the work of truth without giving rise to paradoxes. I outline an inconsistency approach to the liar paradox that satisfies this condition.
A realist theory of truth for a class of sentences holds that there are entities in virtue of which these sentences are true or false. We call such entities ‘truthmakers’ and contend that those for a wide range of sentences about the real world are moments (dependent particulars). Since moments are unfamiliar, we provide a definition and a brief philosophical history, anchoring them in our ontology by showing that they are objects of perception. The core of our theory is the account of truthmaking for atomic sentences, in which we expose a pervasive ‘dogma of logical form’, which says that atomic sentences cannot have more than one truthmaker. In contrast to this, we uphold the mutual independence of logical and ontological complexity, and outline formal principles of truthmaking taking account of both kinds of complexity.
It is argued that if we take grounding to be univocal, then there is a serious tension between truth-grounding and one commonly assumed structural principle for grounding, namely transitivity. The primary claim of the article is that truth-grounding cannot be transitive. Accordingly, it is either the case that grounding is not transitive or that truth-grounding is not grounding, or both.
Introductory students regularly endorse naïve skepticism—unsupported or uncritical doubt about the existence and universality of truth—for a variety of reasons. Though some of the reasons for students’ skepticism can be traced back to the student—for example, a desire to avoid engaging with controversial material or a desire to avoid offense—naïve skepticism is also the result of how introductory courses are taught, deemphasizing truth to promote students’ abilities to develop basic disciplinary skills. While this strategy has a number of pedagogical benefits, it prevents students in early stages of intellectual development from understanding truth as a threshold concept. Using philosophy as a case study, I argue that we can make progress against naïve skepticism by clearly discussing how metadisciplinary aims differ at the disciplinary and course levels in a way that is meaningful, reinforced, and accessible.
Functionalism about truth, or alethic functionalism, is one of our most promising approaches to the study of truth. In this chapter, I chart a course for functionalist inquiry that centrally involves the empirical study of ordinary thought about truth. In doing so, I review some existing empirical data on the ways in which we think about truth and offer suggestions for future work on this issue. I also argue that some of our data lend support to two kinds of pluralism regarding ordinary thought about truth. These pluralist views, as I show, can be straightforwardly integrated into the broader functionalist framework. The main result of this integration is that some unexplored metaphysical views about truth become visible. To close the chapter, I briefly respond to one of the most serious objections to functionalism, due to Cory Wright.
Inflationists have argued that truth is a causal-explanatory property on the grounds that true belief facilitates practical success: we must postulate truth to explain the practical success of certain actions performed by rational agents. Deflationists, however, have a seductive response. Rather than deny that true belief facilitates practical success, the deflationist maintains that the sole role for truth here is as a device for generalisation. In particular, each individual instance of practical success can be explained only by reference to a relevant instance of a T-schema; the role of truth is just to generalise over these individualised explanations. I present a critical problem for this strategy. Analogues of the deflationist’s individualised explanations can be produced by way of explanation of coincidental instances of practical success where the agent merely has the right false beliefs. By deflationary lights, there is no substantive explanatory difference between such coincidental and non-coincidental instances of practical success. But the non-/coincidental distinction just is an explanatory distinction. The deflationist’s individualised explanations of non-coincidental instances of practical success must therefore be inadequate. However, I argue that the deflationist’s prospects for establishing an explanatory contrast between these cases by supplementing her individualised explanations are, at best, bleak. The inflationist, by contrast, is entitled to the obvious further explanatory premise needed to make sense of the distinction. As such, pending some future deflationary rejoinder, the deflationary construal of the principle that true belief facilitates practical success must be rejected; and with it the deflationary conception of truth.
Many in philosophy understand truth in terms of precise semantic values, true propositions. Following Braun and Sider, I say that in this sense almost nothing we say is, literally, true. I take the stand that this account of truth nonetheless constitutes a vitally useful idealization in understanding many features of the structure of language. The Fregean problem discussed by Braun and Sider concerns issues about application of language to the world. In understanding these issues I propose an alternative modeling tool summarized in the idea that inaccuracy of statements can be accommodated by their imprecision. This yields a pragmatist account of truth, but one not subject to the usual counterexamples. The account can also be viewed as an elaborated error theory. The paper addresses some prima facie objections and concludes with implications for how we address certain problems in philosophy.
Functionalists about truth employ Ramsification to produce an implicit definition of the theoretical term _true_, but doing so requires determining that the theory introducing that term is itself true. A variety of putative dissolutions to this problem of epistemic circularity are shown to be unsatisfactory. One solution is offered on functionalists' behalf, though it has the upshot that they must tread on their anti-pluralist commitments.
I argue that there is no metaphysically substantive property of truth. Although many take this thesis to be central to deflationism about truth, it is sometimes left unclear what a metaphysically substantive property of truth is supposed to be. I offer a precise account by relying on the distinction between the property and concept of truth. Metaphysical substantivism is the view that the property of truth is a sparse property, regardless of how one understands the nature of sparse properties. I then offer two new arguments against metaphysical substantivism that employ ideas involving recombination and truthmaking. First, I argue that there are no theoretically compelling reasons to posit the existence of a metaphysically substantive property of truth. Secondly, I argue that if we do posit the existence of such a property, then we end up with a view that is either contradictory or unmotivated. What we’re left with is a metaphysically deflationary account of the property of truth that fully respects the metaphysical ambitions of truthmaker theory, and that is consistent with both the view that truth is a deflated, explanatorily impotent concept and the view that truth is an explanatorily powerful concept.
This paper argues that truth predicates in natural language and their variants, predicates of correctness, satisfaction and validity, do not apply to propositions (not even with 'that'-clauses), but rather to a range of attitudinal and modal objects. As such natural language reflects a notion of truth that is primarily a normative notion of correctness constitutive of representational objects. The paper moreover argues that 'true' is part of a larger class of satisfaction predicates whose semantic differences are best accounted for in terms of a truthmaker theory along the lines of Fine's recent truthmaker semantics.
This paper advances our understanding of the norms of assertion in two ways. First, I evaluate recent studies claiming to discredit an important earlier finding which supports the hypothesis that assertion has a factive norm. In particular, I evaluate whether it was due to stimuli mentioning that a speaker’s evidence was fallible. Second, I evaluate the hypothesis that assertion has a truth-insensitive standard of justification. In particular, I evaluate the claim that switching an assertion from true to false, while holding all else objectively constant, is irrelevant to attributions of justification. Two pre-registered experiments provide decisive evidence against each claim. In the first experiment, switching from mentioning to not mentioning fallibility made no difference to assertability attributions, thereby disproving the criticism concerning fallibility. By contrast, switching an assertion from true to false decreased the rate of assertability attribution from over 90% to less than 20%, thereby replicating and vindicating the original finding supporting a factive norm. In the second experiment, switching an assertion from true to false decreased the rate of justification attribution from over 80% to 10%, thereby undermining the hypothesis that assertion’s standard of justification is truth-insensitive. The second experiment also demonstrates that perspective-taking influences attributions of justification, and it provides initial evidence that the standard of justification for assertion is stricter than the standard for belief.
Answers to the questions of what justifies conscientious objection in medicine in general and which specific objections should be respected have proven to be elusive. In this paper, I develop a new framework for conscientious objection in medicine that is based on the idea that conscience can express true moral claims. I draw on one of the historical roots, found in Adam Smith’s impartial spectator account, of the idea that an agent’s conscience can determine the correct moral norms, even if the agent’s society has endorsed different norms. In particular, I argue that when a medical professional is reasoning from the standpoint of an impartial spectator, his or her claims of conscience are true, or at least approximate moral truth to the greatest degree possible for creatures like us, and should thus be respected. In addition to providing a justification for conscientious objection in medicine by appealing to the potential truth of the objection, the account advances the debate regarding the integrity and toleration justifications for conscientious objection, since the standard of the impartial spectator specifies the boundaries of legitimate appeals to moral integrity and toleration. The impartial spectator also provides a standpoint of shared deliberation and public reasons, from which a conscientious objector can make their case in terms that other people who adopt this standpoint can and should accept, thus offering a standard fitting to liberal democracies.
The standard view in social science and philosophy is that lying does not require the liar’s assertion to be false, only that the liar believes it to be false. We conducted three experiments to test whether lying requires falsity. Overall, the results suggest that it does. We discuss some implications for social scientists working on social judgments, research on lie detection, and public moral discourse.
Gettier presented the now famous Gettier problem as a challenge to epistemology. The methods Gettier used to construct his challenge, however, utilized certain principles of formal logic that are actually inappropriate for the natural language discourse of the Gettier cases. In that challenge to epistemology, Gettier also makes truth claims that would be considered controversial in analytic philosophy of language. The Gettier challenge has escaped scrutiny in these other relevant academic disciplines, however, because of its façade as an epistemological analysis. This article examines Gettier's methods with the analytical tools of logic and analytic philosophy of language.
We might suppose it is not only instrumentally valuable for beliefs to be true, but that it is intrinsically valuable – truth makes a non-derivative, positive contribution to a belief's overall value. Some intrinsic goods are better than others, though, and this article considers the question of how good truth is, compared to other intrinsic goods. I argue that truth is the worst of all intrinsic goods; every other intrinsic good is better than it. I also suggest the best explanation for truth's inferiority is that it is not really an intrinsic good at all. It is intrinsically neutral.
There is a fundamental disagreement about which norm regulates assertion. Proponents of factive accounts argue that only true propositions are assertable, whereas proponents of non-factive accounts insist that at least some false propositions are. Puzzlingly, both views are supported by equally plausible (but apparently incompatible) linguistic data. This paper delineates an alternative solution: to understand truth as the aim of assertion, and pair this view with a non-factive rule. The resulting account is able to explain all the relevant linguistic data, and finds independent support from general considerations about the differences between rules and aims.
Suppose that scientific realists believe that a successful theory is approximately true, and that constructive empiricists believe that it is empirically adequate. Whose belief is more likely to be false? The problem of underdetermination does not yield an answer to this question one way or the other, but the pessimistic induction does. The pessimistic induction, if correct, indicates that successful theories, both past and current, are empirically inadequate. It is arguable, however, that they are approximately true. Therefore, scientific realists overall take less epistemic risk than constructive empiricists.
The notion of more truth, or of more truth and less falsehood, is central to epistemology. Yet, I argue, we have no idea what this consists in, as the most natural or obvious thing to say—that more truth is a matter of a greater number of truths, and less falsehood is a matter of a lesser number of falsehoods—is ultimately implausible. The issue is important not merely because the notion of more truth and less falsehood is central to epistemology, but because an implicit, false picture of what this consists in underpins and gives shape to much contemporary epistemology.
Conceptual engineering is to be explained by appeal to the externalist distinction between concepts and conceptions. If concepts are determined by non-conceptual relations to objective properties rather than by associated conceptions (whether individual or communal), then topic preservation through semantic change will be possible. The requisite level of objectivity is guaranteed by the possibility of collective error and does not depend on a stronger level of objectivity, such as mind-independence or independence from linguistic or social practice more generally. This means that the requisite level of objectivity is exhibited not only by natural kinds, but also by a wide range of philosophical kinds, social kinds and artefactual kinds. The alternative externalist accounts of conceptual engineering offered by Herman Cappelen and Derek Ball fall back into a kind of descriptivism which is antithetical to externalism and fails to recognise this basic level of objectivity.
In a paper in this journal, I defend the view that truth is the fundamental norm for assertion and, in doing so, reject the view that knowledge is the fundamental norm for assertion. In a recent response, Littlejohn raises a number of objections against my arguments. In this reply, I argue that Littlejohn’s objections are unsuccessful.
Attention to the conversational role of alethic terms seems to dominate, and even sometimes exhaust, many contemporary analyses of the nature of truth. Yet, because truth plays a role in judgment and assertion regardless of whether alethic terms are expressly used, such analyses cannot be comprehensive or fully adequate. A more general analysis of the nature of truth is therefore required – one which continues to explain the significance of truth independently of the role alethic terms play in discourse. We undertake such an analysis in this paper; in particular, we start with certain elements from Kant and Frege, and develop a construct of truth as a normative modality of cognitive acts (e.g., thought, judgment, assertion). Using the various biconditional T-schemas to sanction the general passage from assertions to (equivalent) assertions of truth, we then suggest that an illocutionary analysis of truth can contribute to its locutionary analysis as well, including the analysis of diverse constructions involving alethic terms that have been largely overlooked in the philosophical literature. Finally, we briefly indicate the importance of distinguishing between alethic and epistemic modalities.
This paper takes a closer look at the actual semantic behavior of apparent truth predicates in English and re-evaluates the way they could motivate particular philosophical views regarding the formal status of 'truth predicates' and their semantics. The paper distinguishes two types of 'truth predicates' and proposes semantic analyses that better reflect the linguistic facts. These analyses match particular independently motivated philosophical views.
In Truth and Objectivity, Crispin Wright argues that because truth is a distinctively normative property, it cannot be as metaphysically insubstantive as deflationists claim. This argument has been taken, together with the scope problem, as one of the main motivations for alethic pluralism. We offer a reconstruction of Wright’s Inflationary Argument (henceforth IA) aimed at highlighting the steps required to establish its inflationary conclusion. We argue that if a certain metaphysical and epistemological view of a given subject matter is accepted, a local counterexample to IA can be constructed. We focus on the domain of basic taste and we develop two variants of a subjectivist and relativist metaphysics and epistemology that seems palatable in that domain. Although we undertake no commitment to this being the right metaphysical cum epistemological package for basic taste, we contend that if the metaphysics and the epistemology of basic taste are understood along these lines, they call for a truth property whose nature is not distinctively normative—contra what IA predicts. This result shows that the success of IA requires certain substantial metaphysical and epistemological principles and that, consequently, a proper assessment of IA cannot avoid taking a stance on the metaphysics and the epistemology of the domain where it is claimed to be successful. Although we conjecture that IA might succeed in other domains, in this paper we don’t take a stand on this issue. We conclude by briefly discussing the significance of this result for the debate on alethic pluralism.
Coherentists on epistemic justification claim that all justification is inferential, and that beliefs, when justified, get their justification together (not in isolation) as members of a coherent belief system. Some recent work in formal epistemology shows that “individual credibility” is needed for “witness agreement” to increase the probability of truth and generate a high probability of truth. It can seem that, from this result in formal epistemology, it follows that coherentist justification is not truth-conducive, that it is not the case that, under the requisite conditions, coherentist justification increases the probability of truth and generates a high probability of truth. I argue that this does not follow.
In order to make sense of Scotus’s claim that rationality is perfected only by the will, a Scotistic doctrine of truth is developed in a speculative way. It is claimed that synthetic a priori truths are truths of the will, which are existential truths. This insight holds profound theological implications and is used on the one hand to criticize Kant's conception of existence, and on the other hand, to offer another explanation of the sense according to which the existence of things is grasped.
The problem that motivates me arises from a constellation of factors pulling in different, sometimes opposing directions. Simplifying, they are: (1) The complexity of the world; (2) Humans’ ambitious project of theoretical knowledge of the world; (3) The severe limitations of humans’ cognitive capacities; (4) The considerable intricacy of humans’ cognitive capacities. Given these circumstances, the question arises whether a serious notion of truth is applicable to human theories of the world. In particular, I am interested in the questions: (a) Is a substantive standard of truth for human theories of the world possible? (b) What kind of standard would that be?
Assertion is fundamental to our lives as social and cognitive beings. Philosophers have recently built an impressive case that the norm of assertion is factive. That is, you should make an assertion only if it is true. Thus far the case for a factive norm of assertion has been based on observational data. This paper adds experimental evidence in favor of a factive norm from six studies. In these studies, an assertion’s truth value dramatically affects whether people think it should be made. Whereas nearly everyone agreed that a true assertion supported by good evidence should be made, most people judged that a false assertion supported by good evidence should not be made. The studies also suggest that people are consciously aware of criteria that guide their evaluation of assertions. Evidence is also presented that some intuitive support for a non-factive norm of assertion comes from a surprising tendency people have to misdescribe cases of blameless rule-breaking as cases where no rule is broken.
17th-century Iberian and Italian scholastics had a concept of a truthmaker [verificativum] similar to that found in contemporary metaphysical debates. I argue that the 17th-century notion of a truthmaker can be illuminated by a prevalent 17th-century theory of truth according to which the truth of a proposition is the mereological sum of that proposition and its intentional object. I explain this theory of truth and then spell out the account of truthmaking it entails.
Kuhn's alleged taxonomic interpretation of incommensurability is grounded on an ill-defined notion of untranslatability and is hence radically incomplete. To supplement it, I reconstruct Kuhn's taxonomic interpretation on the basis of a logical-semantic theory of taxonomy, a semantic theory of truth-value, and a truth-value conditional theory of cross-language communication. According to the reconstruction, two scientific languages are incommensurable when core sentences of one language, which have truth values when considered within its own context, lack truth values when considered within the context of the other due to the unmatchable taxonomic structures underlying them. So constructed, Kuhn's mature interpretation of incommensurability does not depend upon the notion of truth-preserving translatability, but rather depends on the notion of truth-value-status-preserving cross-language communication. The reconstruction makes Kuhn's notion of incommensurability a well-grounded, tenable and integrated notion. Keywords: Incommensurability; Thomas Kuhn; Taxonomic structures; Lexicons; Truth-value; Untranslatability; Cross-language communication.
In this paper, we defend Davidson's program in truth-theoretical semantics against recent criticisms by Scott Soames. We argue that Soames has misunderstood Davidson's project, that in consequence his criticisms miss the mark, that appeal to meanings as entities in the alternative approach that Soames favors does no work, and that the approach is no advance over truth-theoretic semantics.
The main goal in this paper is to outline and defend a form of Relativism, under which truth is absolute but assertibility is not. I dub such a view Norm-Relativism in contrast to the more familiar forms of Truth-Relativism. The key feature of this view is that just what norm of assertion, belief, and action is in play in some context is itself relative to a perspective. In slogan form: there is no fixed, single norm for assertion, belief, and action. Upshot: 'knows' is neither context-sensitive nor perspectival.
The main question addressed in this paper is whether some false sentences can constitute evidence for the truth of other propositions. In this paper it is argued that there are good reasons to suspect that at least some false propositions can constitute evidence for the truth of certain other contingent propositions. The paper also introduces a novel condition concerning propositions that constitute evidence that explains a ubiquitous evidential practice and it contains a defense of a particular condition concerning the possession of evidence. The core position adopted here then is that false propositions that are approximately true reports of measurements can constitute evidence for the truth of other propositions. So, it will be argued that evidence is only quasi-factive in this very specific sense.
In the early 20th century, scepticism was common among philosophers about the very meaningfulness of the notion of truth – and of the related notions of denotation, definition etc. (i.e., what Tarski called semantical concepts). Awareness was growing of the various logical paradoxes and anomalies arising from these concepts. In addition, more philosophical reasons were being given for this aversion. The atmosphere changed dramatically with Alfred Tarski’s path-breaking contribution. What Tarski did was to show that, assuming that the syntax of the object language is specified exactly enough, and that the metatheory has a certain amount of set-theoretic power, one can explicitly define truth in the object language. And what can be explicitly defined can be eliminated. It follows that the defined concept cannot give rise to any inconsistencies (that is, paradoxes). This gave new respectability to the concept of truth and related notions. Nevertheless, philosophers’ judgements on the nature and philosophical relevance of Tarski’s work have varied. It is my aim here to review and evaluate some threads in this debate.
In this study we investigate the influence of reason-relation readings of indicative conditionals and ‘and’/‘but’/‘therefore’ sentences on various cognitive assessments. According to the Frege-Grice tradition, a dissociation is expected. Specifically, differences in the reason-relation reading of these sentences should affect participants’ evaluations of their acceptability but not of their truth value. In two experiments we tested this assumption by introducing a relevance manipulation into the truth-table task as well as in other tasks assessing the participants’ acceptability and probability evaluations. Across the two experiments a strong dissociation was found. The reason-relation reading of all four sentences strongly affected their probability and acceptability evaluations, but hardly affected their respective truth evaluations. Implications of this result for recent work on indicative conditionals are discussed.
Hartry Field has suggested that we should adopt at least a methodological deflationism: [W]e should assume full-fledged deflationism as a working hypothesis. That way, if full-fledged deflationism should turn out to be inadequate, we will at least have a clearer sense than we now have of just where it is that inflationist assumptions ... are needed. I argue here that we do not need to be methodological deflationists. More precisely, I argue that we have no need for a disquotational truth-predicate; that the word ‘true’, in ordinary language, is not a disquotational truth-predicate; and that it is not at all clear that it is even possible to introduce a disquotational truth-predicate into ordinary language. If so, then we have no clear sense how it is even possible to be a methodological deflationist. My goal here is not to convince a committed deflationist to abandon his or her position. My goal, rather, is to argue, contrary to what many seem to think, that reflection on the apparently trivial character of T-sentences should not incline us to deflationism.
The paper uses the tools of mereotopology (the theory of parts, wholes and boundaries) to work out the implications of certain analogies between the ‘ecological psychology’ of J. J. Gibson and the phenomenology of Edmund Husserl. It presents an ontological theory of spatial boundaries and of spatially extended entities. By reference to examples from the geographical sphere it is shown that both boundaries and extended entities fall into two broad categories: those which exist independently of our cognitive acts (for example, the planet Earth, its exterior surface); and those which exist only in virtue of such acts (for example, the equator, the North Sea). The visual field, too, can be conceived as an example of an extended entity that is dependent in the sense at issue. The paper suggests extending this analogy by postulating entities which would stand to true judgments as the visual field stands to acts of visual perception. The judgment field is defined more precisely as that complex extended entity which comprehends all entities which are relevant to the truth of a given (true) judgment. The work of cognitive linguists such as Talmy and Langacker, when properly interpreted, can be shown to yield a detailed account of the structures of the judgment fields corresponding to sentences of different sorts. A new sort of correspondence-theoretic definition of truth for sentences of natural language can then be formulated on this basis.
This chapter investigates the conflict between thought and speech that is inherent in lying. This is the conflict of saying what you think is false. The chapter shows how stubbornly saying what you think is false resists analysis. In traditional analyses of lying, saying what you think is false is analyzed in terms of saying something and believing that it is false. But standard cases of unconscious or divided belief challenge these analyses. Classic puzzles about belief from Gottlob Frege and Saul Kripke show that suggested amendments involving assent instead of belief do not fare better. I argue that attempts to save these analyses by appeal to guises or Fregean modes of presentation will also run into trouble. I then consider alternative approaches to untruthfulness that focus on (a) expectations for one’s act of saying/asserting and (b) the intentions involved in one’s act of saying/asserting. Here I introduce two new kinds of case, which I call “truth serum” and “liar serum” cases. Consideration of these cases reveals structural problems with intention- and expectation-based approaches as well. Taken together, the string of cases presented suggests that saying what you think is false, or being untruthful, is no less difficult and interesting a subject for analysis than lying itself. Tackling the question of what it is to say what you think is false illuminates ways in which the study of lying is intertwined with fundamental issues in the nature of intentional action.
Alfred Tarski seems to endorse a partial conception of truth, the T-schema, which he believes might be clarified by the application of empirical methods, specifically citing the experimental results of Arne Næss (1938a). The aim of this paper is to argue that Næss’ empirical work confirmed Tarski’s semantic conception of truth, among others. In the first part, I lay out the case for believing that Tarski’s T-schema, while not the formal and generalizable Convention-T, provides a partial account of truth that may be buttressed by an examination of the ordinary person’s views of truth. Then, I address a concern raised by Tarski’s contemporaries who saw Næss’ results as refuting Tarski’s semantic conception. Following that, I summarize Næss’ results. Finally, I will contend with a few objections that suggest a strict interpretation of Næss’ results might recommend an overturning of Tarski’s theory.
Philosophers’ judgements on the philosophical value of Tarski’s contributions to the theory of truth have varied. For example, Karl Popper, Rudolf Carnap, and Donald Davidson have, in their different ways, celebrated Tarski’s achievements and have been enthusiastic about their philosophical relevance. Hilary Putnam, on the other hand, pronounces that “[a]s a philosophical account of truth, Tarski’s theory fails as badly as it is possible for an account to fail.” Putnam has several alleged reasons for his dissatisfaction,1 but one of them, the one I call the modal objection (cf. Raatikainen 2003), has been particularly influential. In fact, very similar objections have been presented over and over again in the literature. Already in 1954, Arthur Pap had criticized Tarski’s account with a similar argument (Pap 1954). Moreover, both Scott Soames (1984) and John Etchemendy (1988) use, with an explicit reference to Putnam, similar modal arguments in relation to Tarski. Richard Heck (1997), too, shows some sympathy for such considerations. Simon Blackburn (1984, Ch. 8) has put forward a related argument against Tarski. Recently, Marian David has criticized Tarski’s truth definition with an analogous argument as well (David 2004, pp. 389–390).2 This line of argument is thus apparently one of the most influential critiques of Tarski. It is certainly worthy of serious attention. Nevertheless, I shall argue that, given closer scrutiny, it does not present such an acute problem for the Tarskian approach to truth as many philosophers think. But I also believe that it is important to understand clearly why this is so. Moreover, I think that a careful consideration of the issue illuminates certain important but somewhat neglected aspects of the Tarskian approach.
Could it be right to convict and punish defendants using only statistical evidence? In this paper, I argue that it is not and explain why it would be wrong. This is difficult to do because there is a powerful argument for thinking that we should convict and punish defendants using statistical evidence. It looks as if the relevant cases are cases of decision under risk and it seems we know what we should do in such cases (i.e., maximize expected value). Given some standard assumptions about the values at stake, the case for convicting and punishing using statistical evidence seems solid. In trying to show where this argument goes wrong, I shall argue (against Lockeans, reliabilists, and others) that beliefs supported only by statistical evidence are epistemically defective and (against Enoch, Fisher, and Spectre) that these epistemic considerations should matter to the law. To solve the puzzle about the role of statistical evidence in the law, we need to revise some commonly held assumptions about epistemic value and defend the relevance of epistemology to this practical question.
Many philosophers claim that interesting forms of epistemic evaluation are insensitive to truth in a very specific way. Suppose that two possible agents believe the same proposition based on the same evidence. Either both are justified or neither is; either both have good evidence for holding the belief or neither does. This does not change if, on this particular occasion, it turns out that only one of the two agents has a true belief. Epitomizing this line of thought are thought experiments about radically deceived “brains in vats.” It is widely and uncritically assumed that such a brain is just as justified as its normally embodied human “twin.” This “parity” intuition is the heart of truth-insensitive theories of core epistemological properties such as justification and rationality. Rejecting the parity intuition is considered radical and revisionist. In this paper, I show that exactly the opposite is true. The parity intuition is idiosyncratic and widely rejected. A brain in a vat is not justified and has worse evidence than its normally embodied counterpart. On nearly every ordinary way of evaluating beliefs, a false belief is significantly inferior to a true belief. Of all the evaluations studied here, only blamelessness is truth-insensitive.
This is a reply to de Sousa's 'Emotional Truth', in which he argues that emotions can be objective, as propositional truths are. I say that it is better to distinguish between truth and accuracy, and agree with de Sousa to the extent of arguing that emotions can be more or less accurate, that is, based on the facts as they are.
There is a long-standing disagreement among Branching-Time theorists. Even though they all believe that the branching representation accurately grasps the idea that the future, contrary to the past, is open, they argue whether this representation is compatible with the claim that one among many possible futures is distinguished—the single future that will come to be. This disagreement is paralleled in an argument about the bivalence of future contingents. The single, privileged future is often called the Thin Red Line. I reconstruct the history of the arguments for and against this idea. Then, I propose my own version of the Thin Red Line theory which is immune to the major objections found in the literature. I argue that the semantic disagreement is grounded in distinct metaphysical presuppositions. My solution is expressed in a conceptual framework proposed by John MacFarlane, who distinguishes semantics from postsemantics. I extend his distinction and introduce a new notion of presemantics to elucidate my idea.
Assertoric sentences are sentences which admit of truth or falsity. Non-assertoric sentences, imperatives and interrogatives, have long been a source of difficulty for the view that a theory of truth for a natural language can serve as the core of a theory of meaning. The trouble for truth-theoretic semantics posed by non-assertoric sentences is that, prima facie, it does not make sense to say that imperatives, such as 'Cut your hair', or interrogatives, such as 'What time is it?', are true or false. Thus, the vehicle for giving the meaning of a sentence by using an interpretive truth theory, the T-sentence, is apparently unavailable for non-assertoric sentences. This paper shows how to incorporate non-assertoric sentences into a theory of meaning that gives central place to an interpretive truth theory for the language, without, however, reducing the non-assertorics to assertorics, or treating their utterances as semantically equivalent to one or more utterances of assertoric sentences. Four proposals for how to incorporate non-assertoric sentences into a broadly truth-theoretic semantics are reviewed. The proposals fall into two classes: those that attempt to explain the meaning of non-assertoric sentences solely by appeal to truth conditions, and those that attempt to explain the meaning of non-assertoric sentences by appeal to compliance conditions, which can be treated as one variety of fulfillment conditions for sentences, of which truth conditions are another variety. The paper argues that none of the extant approaches is successful, but develops a version of the generalized fulfillment approach which avoids the difficulties of previous approaches and still exhibits a truth theory as the central component of a compositional meaning theory for all sentences of natural language.
Conservativeness has been proposed as an important requirement for deflationary truth theories. This in turn gave rise to the so-called ‘conservativeness argument’ against deflationism: a theory of truth which is conservative over its base theory S cannot be adequate, because it cannot prove that all theorems of S are true. In this paper we show that the problems confronting the deflationist are in fact more basic: even the observation that logic is true is beyond his reach. This seems to conflict with the deflationary characterization of the role of the truth predicate in proving generalizations. However, in the final section we propose a way out for the deflationist — a solution that permits him to accept a strong theory, having important truth-theoretical generalizations as its theorems.