How does logic relate to rational belief? Is logic normative for belief, as some say? What, if anything, do facts about logical consequence tell us about norms of doxastic rationality? In this paper, we consider a range of putative logic-rationality bridge principles. These purport to relate facts about logical consequence to norms that govern the rationality of our beliefs and credences. To investigate these principles, we deploy a novel approach, namely, epistemic utility theory. That is, we assume that doxastic attitudes have different epistemic value depending on how accurately they represent the world. We then use the principles of decision theory to determine which of the putative logic-rationality bridge principles we can derive from considerations of epistemic utility.
(1) This paper is about how to build an account of the normativity of logic around the claim that logic is constitutive of thinking. I take the claim that logic is constitutive of thinking to mean that representational activity must tend to conform to logic to count as thinking. (2) I develop a natural line of thought about how to turn the constitutive position into an account of logical normativity by drawing on constitutivism in metaethics. (3) I argue that, while this line of thought provides some insights, it is importantly incomplete, as it is unable to explain why we should think. I consider two attempts at rescuing the line of thought. The first, unsuccessful response is that it is self-defeating to ask why we ought to think. The second response is that we need to think. But this response secures normativity only if thinking has some connection to human flourishing. (4) I argue that thinking is necessary for human flourishing. Logic is normative because it is constitutive of this good. (5) I show that the resulting account deals nicely with problems that vex other accounts of logical normativity.
The idea that logic is in some sense normative for thought and reasoning is a familiar one. Some of the most prominent figures in the history of philosophy including Kant and Frege have been among its defenders. The most natural way of spelling out this idea is to formulate wide-scope deductive requirements on belief which rule out certain states as irrational. But what can account for the truth of such deductive requirements of rationality? By far the most prominent responses draw in one way or another on the idea that belief aims at the truth. In this paper, I consider two ways of making this line of thought more precise and I argue that they both fail. In particular, I examine a recent attempt by Epistemic Utility Theory to give a veritist account of deductive coherence requirements. I argue that despite its proponents’ best efforts, Epistemic Utility Theory cannot vindicate such requirements.
According to what was the standard view (Poincaré; Wang, etc.), although Frege endorses, and Kant denies, the claim that arithmetic is reducible to logic, there is not a substantive disagreement between them because their conceptions of logic are too different. In his “Frege, Kant, and the logic in logicism,” John MacFarlane aims to establish that Frege and Kant do share enough of a conception of logic for this to be a substantive, adjudicable dispute. MacFarlane maintains that for both Frege and Kant, the fundamental defining characteristic of logic is “that it provides norms for thought as such” (MacFarlane, 2002, p. 57). I defend the standard view. I show that MacFarlane's argument rests on conflating the way that pure general logic is normative as a canon and as a propaedeutic, and that once these are distinguished the argument is blocked.
Many theories of rational belief give a special place to logic. They say that an ideally rational agent would never be uncertain about logical facts. In short: they say that ideal rationality requires "logical omniscience." Here I argue against the view that ideal rationality requires logical omniscience on the grounds that the requirement of logical omniscience can come into conflict with the requirement to proportion one’s beliefs to the evidence. I proceed in two steps. First, I rehearse an influential line of argument from the "higher-order evidence" debate, which purports to show that it would be dogmatic, even for a cognitively infallible agent, to refuse to revise her beliefs about logical matters in response to evidence indicating that those beliefs are irrational. Second, I defend this "anti-dogmatism" argument against two responses put forth by Declan Smithies and David Christensen. Against Smithies’ response, I argue that it leads to irrational self-ascriptions of epistemic luck, and that it obscures the distinction between propositional and doxastic justification. Against Christensen’s response, I argue that it clashes with one of two attractive deontic principles, and that it is extensionally inadequate. Taken together, these criticisms will suggest that the connection between logic and rationality cannot be what it is standardly taken to be—ideal rationality does not require logical omniscience.
There is an exegetical quandary when it comes to interpreting Locke's relation to logic. On the one hand, over the last few decades a substantial amount of literature has been dedicated to explaining Locke's crucial role in the development of a new logic in the seventeenth and eighteenth centuries. John Yolton names this new logic the "logic of ideas," while James Buickerood calls it "facultative logic." Either way, Locke's Essay is supposedly its "most outspoken specimen" or "culmination." Call this reading the 'New Logic interpretation.' On the other hand, from the typical standpoint of a philosopher accustomed to the modern conception of logic, whatever Locke—indeed, whatever most of the...
Logical pluralism is the view that there is more than one correct logic. This very general characterization gives rise to a whole family of positions. I argue that not all of them are stable. The main argument in the paper is inspired by considerations known as the “collapse problem”, and it aims at the most popular form of logical pluralism advocated by JC Beall and Greg Restall. I argue that there is a more general argument available that challenges all variants of logical pluralism that meet the following three conditions: that there are at least two correct logical systems characterized in terms of different consequence relations, that there is some sort of rivalry among the correct logics, and that logical consequence is normative. The hypothesis I argue for amounts to a conditional claim: If a position satisfies all these conditions, then that position is unstable in the sense that it collapses into competing positions.
Logical pluralism is the view that there is more than one correct logic. Most logical pluralists think that logic is normative in the sense that you make a mistake if you accept the premisses of a valid argument but reject its conclusion. Some authors have argued that this combination is self-undermining: Suppose that L1 and L2 are correct logics that coincide except for the argument from Γ to φ, which is valid in L1 but invalid in L2. If you accept all sentences in Γ, then, by normativity, you make a mistake if you reject φ. In order to avoid mistakes, you should accept φ or suspend judgment about φ. Both options are problematic for pluralism. Can pluralists avoid this worry by rejecting the normativity of logic? I argue that they cannot. All else being equal, the argument goes through even if logic is not normative.
David Lewis presented Convention as an alternative to the conventionalism characteristic of early-twentieth-century analytic philosophy. Rudolf Carnap is well known for suggesting the arbitrariness of any particular linguistic convention for engaging in scientific inquiry. Analytic truths are self-consistent, and are not checked against empirical facts to ascertain their veracity. In keeping with the logical positivists before him, Lewis concludes that linguistic communication is conventional. However, despite his firm allegiance to conventions underlying not just languages but also social customs, he pioneered the view that convening need not require any active agreement to participate. Lewis proposed that conventions arise from “an exchange of manifestations of a propensity to conform to a regularity.” In reasserting the conventional quality of languages and other practices resting on mutual expectations, Lewis comfortably works within the analytic tradition. Yet he also deviates from his predecessors because his conventionalist approach is comprehensively grounded in instrumentalism. Lewis adopts an extension of David Hume's desire-belief psychology articulated in rational choice theory. He develops his philosophy of convention relying on the highly formal mid-twentieth-century expected utility and game theories. This attempt to account for language and social customs wholly in terms of instrumental rationality has the implication of reducing normativity to preference satisfaction. Lewis’ approach continues in the trend of undermining normative political philosophy because institutions and practices arise spontaneously, without the deliberate involvement of agents. Perhaps Lewis’ Convention is best seen as a resurgent form of analytic philosophy, characterized by “a style of argument, hostility to [ambitious] metaphysics, focus on language, and the dominance of logic and formalization” that solves the dilemma of “combining the analytic inheritance…with normative concerns” by reducing normativity to individuals’ preference fulfillment consistent with the axioms of rational choice.
In her 2007 paper “Argument Has No Function,” Jean Goodwin takes exception to what she calls the “explicit function claims”, arguing that not only are function-based accounts of argumentation insufficiently motivated, but they fail to ground claims to normativity. In this paper I stake out the beginnings of a functionalist answer to Goodwin.
The purpose of the essay is to explore some points pertaining to Peirce’s conception of reality, with a special emphasis on the themes developed in his later writings (such as normativity, common sense, and the logic of signs). The resulting proposal advances a preliminary reading of some key issues (arising in connection with Peirce’s discussions of reality and truth), configured with a view to the socially sustainable, coordinated practices of inquiry that are intrinsically embedded in the biological and cultural dynamics of the evolving sense of reasonableness in human practical and cognitive enterprises.
The paper argues that applications of the principle that “ought” implies “can” (OIC) depend on normative considerations even if the link between “ought” and “can” is logical in nature. Thus, we should reject a common, “factualist” conception of OIC and endorse weak “normativism”. Even if we use OIC as the rule “‘cannot’ therefore ‘not ought’”, applying OIC is not a mere matter of facts and logic, as factualists claim, but often draws on “proto-ideals” of moral agency.
Gila Sher approaches knowledge from the perspective of the basic human epistemic situation—the situation of limited yet resourceful beings, living in a complex world and aspiring to know it in its full complexity. What principles should guide them? Two fundamental principles of knowledge are epistemic friction and freedom. Knowledge must be substantially constrained by the world (friction), but without active participation of the knower in accessing the world (freedom) theoretical knowledge is impossible. This requires a grounding of all knowledge, empirical and abstract, in both mind and world, but the fall of traditional foundationalism has led many to doubt the viability of this ‘classical’ project. Sher challenges this skepticism, charting a new foundational methodology, foundational holism, that differs from others in being holistic, world-oriented, and universal (i.e., applicable to all fields of knowledge). Using this methodology, Epistemic Friction develops an integrated theory of knowledge, truth, and logic. This includes (i) a dynamic model of knowledge, incorporating some of Quine’s revolutionary ideas while rejecting his narrow empiricism, (ii) a substantivist, non-traditional correspondence theory of truth, and (iii) an outline of a joint grounding of logic in mind and world. The model of knowledge subjects all disciplines to demanding norms of both veridicality and conceptualization. The correspondence theory is robust and universal yet not simplistic or naive, admitting diverse forms of correspondence. Logic’s grounding in the world brings it in line with other disciplines while preserving, and explaining, its strong formality, necessity, generality, and normativity.
I argue for the Wittgensteinian thesis that mathematical statements are expressions of norms, rather than descriptions of the world. An expression of a norm is a statement like a promise or a New Year's resolution, which says that someone is committed or entitled to a certain line of action. An expression of a norm is not a mere description of a regularity of human behavior, nor is it merely a descriptive statement which happens to entail a norm. The view can be thought of as a sort of logicism for the logical expressivist---a person who believes that the purpose of logical language is to make explicit commitments and entitlements that are implicit in ordinary practice. The thesis that mathematical statements are expressions of norms is a kind of logicism, not because it says that mathematics can be reduced to logic, but because it says that mathematical statements play the same role as logical statements. I contrast my position with two sets of views: an empiricist view, which says that mathematical knowledge is acquired and justified through experience, and a cluster of nativist and apriorist views, which say that mathematical knowledge is either hardwired into the human brain, or justified a priori, or both. To develop the empiricist view, I look at the work of Kitcher and Mill, arguing that although their ideas can withstand the criticisms brought against empiricism by Frege and others, they cannot reply to a version of the critique brought by Wittgenstein in the Remarks on the Foundations of Mathematics. To develop the nativist and apriorist views, I look at the work of contemporary developmental psychologists, like Gelman and Gallistel and Karen Wynn, as well as the work of philosophers who advocate the existence of a mathematical intuition, such as Kant, Husserl, and Parsons. After clarifying the definitions of "innate" and "a priori," I argue that the mechanisms proposed by the nativists cannot bring knowledge, and the existence of the mechanisms proposed by the apriorists is not supported by the arguments they give.
Philosophers are divided on whether the proof- or truth-theoretic approach to logic is more fruitful. The paper demonstrates the considerable explanatory power of a truth-based approach to logic by showing that and how it can provide (i) an explanatory characterization—both semantic and proof-theoretical—of logical inference, (ii) an explanatory criterion for logical constants and operators, (iii) an explanatory account of logic’s role (function) in knowledge, as well as explanations of (iv) the characteristic features of logic—formality, strong modal force, generality, topic neutrality, basicness, and (quasi-)apriority, (v) the veridicality of logic and its applicability to science, (vi) the normativity of logic, (vii) error, revision, and expansion in/of logic, and (viii) the relation between logic and mathematics. The high explanatory power of the truth-theoretic approach does not rule out an equal or even higher explanatory power of the proof-theoretic approach. But to the extent that the truth-theoretic approach is shown to be highly explanatory, it sets a standard for other approaches to logic, including the proof-theoretic approach.
This is a survey of recent debates concerning the normativity of belief. We explain what the thesis that belief is normative involves, consider arguments for and against that thesis, and explore its bearing on debates in metaethics.
The aim of this paper is to provide a dynamic interpretation of Kant’s logical hylomorphism. Firstly, various types of logical hylomorphism will be illustrated. Secondly, I propose to reevaluate Kant’s constitutivity thesis about logic. Finally, I focus on the design of logical norms as specific kinds of artefacts.
It is generally accepted that there are two kinds of normative concepts: evaluative concepts, such as good, and deontic concepts, such as ought. The question that is raised by this distinction is how it is possible to claim that evaluative concepts are normative. Given that deontic concepts appear to be at the heart of normativity, the bigger the gap between evaluative and deontic concepts, the less it appears plausible to say that evaluative concepts are normative. After having presented the main differences between evaluative and deontic concepts, and shown that there is more than a superficial difference between the two kinds, the paper turns to the question of the normativity of evaluative concepts. It will become clear that, even if these concepts have different functions, there are a great many ties between evaluative concepts, on the one hand, and the concepts of ought and of reason, on the other.
This article is an introduction to the recent debate about whether rationality is normative – that is, very roughly, about whether we should have attitudes which fit together in a coherent way. I begin by explaining an initial problem – the “detaching problem” – that arises on the assumption that we should have coherent attitudes. I then explain the prominent “wide-scope” solution to this problem, and some of the central objections to it. I end by considering the options that arise if we reject the wide-scope solution.
The so-called ‘central problem’ of internalism has been formulated like this: one cannot concurrently maintain the following three philosophical positions without inconsistency: internalism about practical reason, moral rationalism, and moral absolutism. Since internalism about practical reason is the most controversial of these, the suggestion is that it is the one that is best abandoned. In this paper, I point towards a response to this problem by sketching a deontic logic of internal reasons that deflates moral normativity to the normativity of instrumental rationality, and provides support for the assertion that one can hold fast simultaneously to internalism and at least many of the intuitive commitments of liberal moral thinking. Crucial to the proposal is an account of the enkratic principle – I ought to attempt to realise what I ultimately desire – as the source of obligations we owe to ourselves. I attempt to show how from this, in conjunction with some plausible assumptions, obligations to others might be derived.
There has been much debate over whether to accept the claim that meaning is normative. One obstacle to making progress in that debate is that it is not always clear what the claim amounts to. In this paper, I try to resolve a dispute between those who advance the claim concerning how it should be understood. More specifically, I critically examine two competing conceptions of the normativity of meaning, rejecting one and defending the other. Though the paper aims to settle a dispute among proponents of the claim that meaning is normative, it should be of interest to those who challenge it. After all, before one takes aim, one’s target needs to be in clear view.
Rationality appears to have some intimate relation with normativity: exactly what relation is in dispute. John Broome devotes a chapter of his recent book to rebutting the view that rationality has 'true' normativity, which he equates with the kind of normativity that I call directivity. In particular, he offers a number of arguments against derivative accounts of the normativity of rationality. In this paper I defend my instrumentalist account from those arguments. In so doing I bring into view the grounds of a live positive defence of rationality’s servanthood to directivity.
Foundational theories of mental content seek to identify the conditions under which a mental representation expresses, in the mind of a particular thinker, a particular content. Normativists endorse the following general sort of foundational theory of mental content: A mental representation r expresses concept C for agent S just in case S ought to use r in conformity with some particular pattern of use associated with C. In response to Normativist theories of content, Kathrin Glüer-Pagin and Åsa Wikforss propose a dilemma, alleging that Normativism either entails a vicious regress or falls prey to a charge of idleness. In this paper, I respond to this argument. I argue that Normativism can avoid the commitments that generate the regress and does not propose the sort of explanation required to charge that its explanation has been shown to be problematically idle. The regress-generating commitment to be avoided is, roughly, that tokened, contentful mental states are the product of rule-following. The explanatory task Normativists should disavow is that of explaining how it is that beliefs and other contentful mental states are produced. I argue that Normativism, properly understood as a theory of content, does not provide this kind of psychological explanation, and therefore does not entail that such explanations are to be given in terms of rule-following. If this is correct, Normativism is not the proper target of the dilemma offered by Glüer-Pagin and Wikforss. Understanding why one might construe Normativism in the way Glüer-Pagin and Wikforss must, and how, properly understood, it avoids their dilemma, can help us to appreciate the attractiveness of a genuinely normative theory of content and the importance of paying careful attention to the sort of normativity involved in norm-based theories of content.
For Kant, the form of a subject's experience of an object provides the normative basis for an aesthetic judgement about it. In other words, if the subject's experience of an object has certain structural properties, then Kant thinks she can legitimately judge that the object is beautiful - and that it is beautiful for everyone. My goal in this paper is to provide a new account of how this 'subjective universalism' is supposed to work. In doing so, I appeal to Kant's notions of an aesthetic idea and an aesthetic attribute, and the connection that Kant makes between an object's expression of rational ideas and the normativity of aesthetic judgements about it.
Logic arguably plays a role in the normativity of reasoning. In particular, there are plausible norms of belief/disbelief whose antecedents are constituted by claims about what follows from what. But is logic also relevant to the normativity of agnostic attitudes? The question here is whether logical entailment also puts constraints on what kinds of things one can suspend judgment about. In this paper I address that question and I give a positive answer to it. In particular, I advance two logical norms of agnosticism, where the first one allows us to assess situations in which the subject is agnostic about the conclusion of a valid argument and the second one allows us to assess situations in which the subject is agnostic about one of the premises of a valid argument.
Constructivists hold that truths about practical reasons are to be explained in terms of truths about the correct exercise of practical reason (rather than vice versa). But what is the normative status of the correctness-defining standards of practical reason? The problem is that constructivism appears to presuppose the truth of two theses that seem hard to reconcile. First, for constructivism to be remotely plausible, the relevant standards must be genuinely (and not merely formally or minimally) normative. Second, to avoid circularity, the relevant standards must be non-reason-involving, i.e. prior to and independent of practical reasons. From the standpoint of the contemporary philosophy of normativity, this is a surprising combination to say the least. What could these genuinely normative but non-reason-involving standards possibly be? The standard constructivist response is to insist that the relevant standards possess a special kind of necessity inasmuch as we only count as occupying the “deliberative standpoint” or as a “deliberative agent” insofar as we comply with or accept the relevant standards. I offer a different response. My response holds that the special normative status of the relevant standards consists in their exhibiting a distinctive kind of practical necessity that derives from the fact that they determine what I have called elsewhere truths about “the thing to do” – namely, truths about correct answers to the question of what to do. Understanding the norms of practical reason in these terms vindicates the idea that standards of practical reason are genuinely normative since truths about the thing to do plausibly possess the hallmarks of genuine normativity. And it vindicates the idea that the standards are not reason-involving since truths about the thing to do are plausibly prior to and independent of truths about practical reasons.
Brandom's "inferentialism"—his theory that contentfulness consists in being governed by inferential norms—proves dubiously compatible with his own deflationary approach to intentional objectivity. This is because a deflationist argument, adapted from the case of truth to that of correct inference, undermines the criterion of adequacy Brandom employs in motivating inferentialism. Once that constraint is abandoned, moreover, the very constitutive-explanatory availability of Brandom's inferential norms becomes suspect. Yet Brandom intertwines inferentialism with a separate explanatory project, one that in explaining the pragmatic significance (...) of meaning-attributions does yield a convincing construal of the claim that the concept of meaning is normative. (shrink)
Neo-Humean instrumentalist theories of reasons for acting have been presented with a dilemma: either they are normatively trivial and, hence, inadequate as a normative theory or they covertly commit themselves to a noninstrumentalist normative principle. The claimed result is that no purely instrumentalist theory of reasons for acting can be normatively adequate. This dilemma dissolves when we understand what question neo-Humean instrumentalists are addressing. The dilemma presupposes that neo-Humeans are attempting to address the question of how to act, 'simpliciter'. Instead, they are evaluating actions from the agent's normative perspective.
This paper tries to do three things. First, it tries to make it plausible that correct rules of reasoning do not always preserve justification: in other words, if you begin with a justified attitude, and reason correctly from that premise, it can nevertheless happen that you arrive at an unjustified attitude. Attempts to show that such cases in fact involve following an incorrect rule of reasoning cannot be vindicated. Second, it also argues that correct rules of reasoning do not even correspond to permissions of “structural rationality”: it is not always structurally permissible to base an attitude on other attitudes from which it follows by correct reasoning. Third, from these observations it tries to build a somewhat positive account of the correctness of rules of reasoning as a more sui generis notion irreducible to either justification or structural rationality. This account vindicates an important unity of theoretical and practical reasoning as well as a qualified version of the thesis that deductive logic supplies correct rules of reasoning.
Thinking about misleading higher-order evidence naturally leads to a puzzle about epistemic rationality: If one’s total evidence can be radically misleading regarding itself, then two widely-accepted requirements of rationality come into conflict, suggesting that there are rational dilemmas. This paper focuses on an often misunderstood and underexplored response to this (and similar) puzzles, the so-called conflicting-ideals view. Drawing on work from defeasible logic, I propose understanding this view as a move away from the default metaepistemological position, according to which rationality requirements are strict and governed by a strong, but never explicitly stated, logic, toward the more unconventional view, according to which requirements are defeasible and governed by a comparatively weak logic. When understood this way, the response is not committed to dilemmas.
Though legal positivism remains popular, HLA Hart’s version has fallen somewhat by the wayside. This is because, according to many, the central task of a theory of law is to explain the so-called ‘normativity of law’. Hart’s theory, it is thought, is not up to the task. Some have suggested modifying the theory accordingly. This paper argues that both Hart’s theory and the normativity of law have been misunderstood. First, a popular modification of Hart’s theory is considered and rejected. It stems from a misunderstanding of Hart and his project. Second, a new understanding of the mysterious but often-mentioned ‘normativity of law’ is presented. Once we have dispelled some misunderstandings of Hart’s view and clarified the sense in which law is supposed to be normative, we see that Hart’s view, unmodified, is well suited to the task of explaining law’s normativity.
Technical artifacts have the capacity to fulfill their function in virtue of their physicochemical make-up. An explanation that purports to explicate this relation between artifact function and structure can be called a technological explanation. It might be argued, and Peter Kroes has in fact done so, that there is something peculiar about technological explanations in that they are intrinsically normative in some sense. Since the notion of artifact function is a normative one (if an artifact has a proper function, it ought to behave in specific ways), an explanation of an artifact’s function must inherit this normativity. In this paper I will resist this conclusion by outlining and defending a ‘buck-passing account’ of the normativity of technological explanations. I will first argue that it is important to distinguish properly between (1) a theory of function ascriptions and (2) an explanation of how a function is realized. The task of the former is to spell out the conditions under which one is justified in ascribing a function to an artifact; the latter should show how the physicochemical make-up of an artifact enables it to fulfill its function. Second, I wish to maintain that a good theory of function ascriptions should account for the normativity of these ascriptions. Provided such a function theory can be formulated — as I think it can — a technological explanation may pass the normativity buck to it. Third, to flesh out these abstract claims, I show how a particular function theory — to wit, the ICE theory by Pieter Vermaas and Wybo Houkes — can be dovetailed smoothly with my own thoughts on technological explanation.
In this essay, we draw on John Haugeland’s work in order to argue that Burge is wrong to think that exercises of perceptual constancy mechanisms suffice for perceptual representation. Although Haugeland did not live to read or respond to Burge’s Origins of Objectivity, we think that his work contains resources that can be developed into a critique of the very foundation of Burge’s approach. Specifically, we identify two related problems for Burge. First, if (what Burge calls) mere sensory responses are not representational, then neither are exercises of constancy mechanisms, since the differences between them do not suffice to imply that one is representational and the other is not. Second, taken by themselves, exercises of constancy mechanisms are only derivatively representational, so merely understanding how they work is not sufficient for understanding what is required for something, in itself, to be representational (and thereby provide a full solution to the problem of perceptual representation).
Allan Gibbard argues that the term ‘meaning’ expresses a normative concept, primarily on the basis of arguments that parallel Moore's famous Open Question Argument. In this paper I argue that Gibbard's evidence for normativity rests on idiosyncrasies of the Open Question Argument, and that when we use related thought experiments designed to bring out unusual semantic intuitions associated with normative terms we fail to find such evidence. These thought experiments, moreover, strongly suggest there are basic requirements for a theory of meaning incompatible with Gibbard's ultimate goal of providing an expressivist account of meaning-related concepts. I conclude by considering a possible way in which meaning could be normative, consistent with the intuitions about disagreement; but this form of normativism about meaning appears incompatible with Gibbard's expressivism.
What justifies one interlocutor in challenging the conversational expectations of the other? Paul Grice approaches conversation as one instance of joint action that, like all such action, is governed by the Cooperative Principle. He thinks the expectations of the interlocutors must align, although he acknowledges that expectations can and do shift in the course of a conversation through a process he finds strange. Martin Heidegger analyzes discourse as governed by the normativity of care for self and for another. It is the structure of care that warrants disrupting the presumed cooperative horizon of a conversation in order to occasion some new insight. The chapter expands Heidegger’s ontological conception of care to make sense of the exigencies of conversation. Conversation requires taking cognizance of (1) the human good, (2) the specifics of the conversational context, and (3) one’s responsibilities for the other. This threefold understanding can provide directives for subverting the interlocutor’s expectation for the purposes of a given conversation.
Davidson has been instrumental in dampening the prospect of reductively explaining the mind. The core of his arguments turns upon his insistence that contentful mental states, the bread and butter of folk psychology, have a “normative element.” In spite of its pivotal role, as well as its intrinsic interest, the concept is very poorly developed and understood. This paper attempts to discern four different strands of the normativity of intentionality and to spark a long overdue systematic examination of a fascinating and significant thesis.
This thesis engages with three topics and the relationships between them: (i) the phenomenon of disagreement (paradigmatically, where one person makes a claim and another denies it); (ii) the normative character of disagreements (the issue of whether, and in what sense, one of the parties is “at fault” for believing something that’s untrue); (iii) the issue of which theory of what truth is can best accommodate the norms relating belief and truth. People disagree about all sorts of things: about whether the climate is changing, the death penalty is wrong, sushi is delicious, or Louis C.K. is funny. However, even focusing on disagreements in the evaluative domain (e.g., taste, moral and comedic), where people have the intuition that there is ‘no fact of the matter’ about who is right, there are significant differences that require explanation. For instance, disagreement about taste is generally perceived as shallow. People agree to disagree and live comfortably with that fact. By contrast, moral disagreement is perceived as deep and sometimes hard to tolerate. Comedic disagreement is similar to taste. However, it may involve an element of ‘intellectual snobbery’ that is absent in taste disagreement. The immediate questions are whether these contrasts allow of precise characterization and what is responsible for them. I argue that, once a case is made for the truth-aptness of judgments in these areas, the contrast can be explained in terms of the variable normative function of truth – as exerting a lightweight normative constraint in the domain of taste and a stricter constraint in the moral domain. In particular I claim that while truth in the moral domain exerts a sui generis deontic control, this normative feature of truth is silent in both the taste and the comedic domains. This leads me to investigate how to conceive of truth in the light of normative variability. I argue that an amended version of deflationism – minimally inflated deflationism – can account for the normative variability of truth.
In The Ethical Project, Kitcher has three main aims: to provide a naturalistic explanation of the rise of morality and of its subsequent development, to supply an account of moral progress that explains progressive developments that have occurred so far and shows how further progress is possible, and to propose a further progressive development—the emergence of a cosmopolitan morality—and make the case that it is a natural extension of the ethical project. I argue that Kitcher does not succeed in achieving any of these aims and that he cannot do so given the meager resources of his explanatory model. The chief difficulty is that Kitcher equivocates in his characterization of the original function of ethics. Although he begins by characterizing it as remedying altruism failures in order to avoid their social costs, he sometimes characterizes it instead as remedying altruism failures simpliciter. Kitcher does not explain how a practice whose original function was to remedy altruism failures in order to avoid their social costs developed into one whose function is to remedy altruism failures simpliciter. Further, it appears that he cannot do so without significantly enriching his explanatory model to include a more robust account of how humans came to have the capacity to reflect on and revise norms.
Sharon Street argues that realism about epistemic normativity is false. Realists believe there are truths about epistemic reasons that hold independently of the agent’s attitudes. Street argues by dilemma. Either the realist accepts a certain account of the nature of belief, or she does not. If she does, then she cannot consistently accept realism. If she does not, then she has no scientifically credible explanation of the fact that our epistemic behaviours or beliefs about epistemic reasons align with independent normative truths. I argue that neither horn is very sharp for realists about epistemic normativity.
This critical notice explores the distinction between the justifying and requiring forces of reasons, which Joshua Gert introduced and defended in his book Brute Rationality.
This paper investigates whether different philosophers’ claims about “normativity” are about the same subject or (as recently argued by Derek Parfit) theorists who appear to disagree are really using the term with different meanings, in order to cast disambiguating light on the debates over at least the nature, existence, extension, and analyzability of normativity. While I suggest the term may be multiply ambiguous, I also find reasons for optimism about a common subject-matter for metanormative theory. This is supported partly by sketching a special kind of hybrid view of normative judgment, perspectivism, that occupies a position between cognitivism and noncognitivism, naturalism and nonnaturalism, objectivism and subjectivism, making it more plausible that radically different metanormative theories could be about the same thing. I explore three main fissures: between (i) the “normativity” of language/thought versus that of facts and properties, (ii) abstract versus substantive senses, and (iii) formal versus robust senses.
It is widely maintained that doxastic norms that govern how people should believe can be explained by the truism that belief is governed by the correctness norm: believing p is correct if and only if p. This approach fails because it confuses two kinds of correctness norm: (1) it is correct for S to believe p if and only if p; and (2) believing p is correct qua belief if and only if p. Only (2) can be said to be a truism about belief, but it cannot ground doxastic norms.
Many Kantian scholars have debated what normative guidance the formula of the law of nature (FLN) provides. There are three ways of understanding the role of FLN in Kant’s ethics. The first line of interpretation claims that FLN and the formula of universal law (FUL) are logically equivalent. The second line claims that there are only subjective differences, meaning that FLN is easier to apply than the abstract method of FUL. The third line of interpretation claims that there are objective differences between FLN and FUL in the sense that each formula has an irreducible role in Kant’s ethics. In this article I will show that the first and second lines of interpretation cannot fully explain Kant’s account of FLN, and I will propose a new interpretation which pertains to the third type. I will explore the schematism model to understand the role of FLN and argue that it is an intermediary principle that fills in a practical gap between the moral law and action. In the end, I will consider a possible objection to this understanding, which claims that the schematism model is not applicable to practical judgment since nothing is given in experience.
Anandi Hattiangadi packs a lot of argument into this lucid, well-informed and lively examination of the meaning scepticism which Kripke ascribes to Wittgenstein. Her verdict on the success of the sceptical considerations is mixed. She concludes that they are sufficient to rule out all accounts of meaning and mental content proposed so far. But she believes that they fail to constitute, as Kripke supposed they did, a fully general argument against the possibility of meaning or content. Even though we are not now in a position to specify facts in which meaning consists, the view that there are such facts, and more specifically that they satisfy the intuitive conception of meaning which she labels ‘semantic realism’, remains a live option. Moreover, given...
This paper is devoted to defending philosophical studies of mind, especially traditional ones. In my view, human mentality is a dialogue with myself, which has a social aspect that is never explained nor predicted by scientific studies. We first derive this picture from Descartes’ classical arguments (§§2-3), and then develop it in the context of Kantian ethics (§4). Some readers may think this combination arbitrary. However, these two philosophers agree on mind/body dualism (§5), and further, the fact that the dialogue is often made in an ethical situation leads us to Kantian ethics. We shall draw this developed picture within the format of modern practical syllogisms (§§5-13). Finally, we shall refer to Nick Zangwill’s normative essentialism for the completion of our whole picture (§§7-8).
In one of the earlier influential papers in the field of experimental philosophy, titled “Normativity and Epistemic Intuitions” and published in 2001, Jonathan M. Weinberg, Shaun Nichols and Stephen Stich reported that respondents answered Gettier-type questions differently depending on their ethnic background as well as socioeconomic status. There is currently a debate going on about the significance of the results of Weinberg et al. (2001) and its implications for philosophical methodology in general and epistemology in particular. Despite the debates, however, to our knowledge, there has not been a replication attempt of the experiments of the original paper. We collected data from four different sources (two on-line and two in-person) to replicate the experiments. Despite several different data sets and in various cases larger sample sizes and hence greater power to detect differences, we failed to detect significant differences between the above-mentioned ethnic and socioeconomic groups. Our results suggest that epistemic intuitions are more robust across ethnic and socioeconomic groups than Weinberg et al. (2001) indicates. Given our data, we believe that the notion of differences in epistemic intuitions among different ethnic and socioeconomic groups that follows from Weinberg et al. (2001) needs to be corrected.
This is a book about normativity -- where the central normative terms are words like 'ought' and 'should' and their equivalents in other languages. It has three parts: The first part is about the semantics of normative discourse: what it means to talk about what ought to be the case. The second part is about the metaphysics of normative properties and relations: what is the nature of those properties and relations whose pattern of instantiation makes propositions about what ought to be the case true. The third part is about the epistemology of normative beliefs: how we could ever know, or even have rational or justified belief in, propositions about what ought to be the case.