As the ongoing literature on the paradoxes of the Lottery and the Preface reminds us, the nature of the relation between probability and rational acceptability remains far from settled. This article provides a novel perspective on the matter by exploiting a recently noted structural parallel with the problem of judgment aggregation. After offering a number of general desiderata on the relation between finite probability models and sets of accepted sentences in a Boolean sentential language, it is noted that a number of these constraints will be satisfied if and only if acceptable sentences are true under all valuations in a distinguished non-empty set W. Drawing inspiration from distance-based aggregation procedures, various scoring-rule-based membership conditions for W are discussed and a possible point of contact with ranking theory is considered. The paper closes with various suggestions for further research.
Jim Joyce argues for two amendments to probabilism. The first is the doctrine that credences are rational, or not, in virtue of their accuracy or “closeness to the truth” (1998). The second is a shift from a numerically precise model of belief to an imprecise model represented by a set of probability functions (2010). We argue that the two amendments cannot be satisfied simultaneously. To do so, we employ a (slightly generalized) impossibility theorem of Seidenfeld, Schervish, and Kadane (2012), who show that there is no strictly proper scoring rule for imprecise probabilities. The question then is what should give way. Joyce, who is well aware of this no-go result, thinks that a quantifiability constraint on epistemic accuracy should be relaxed to accommodate imprecision. We argue instead that another Joycean assumption, called strict immodesty, should be rejected, and we prove a representation theorem that characterizes all “mildly” immodest measures of inaccuracy.
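A note for orientation, in notation of our own rather than the paper's: an inaccuracy measure I(c, w) scores a credence function c against a possible world w, and it is strictly proper just in case every probabilistically coherent credence function expects itself to do strictly best, that is,

\[ \sum_{w} p(w)\, I(p, w) \;<\; \sum_{w} p(w)\, I(q, w) \quad \text{for every probability function } q \neq p . \]

The Brier score, which sets I(c, w) equal to the sum over propositions X of (c(X) - v_w(X))^2, where v_w(X) is the truth value of X at w, is the stock example of a strictly proper measure; the impossibility result cited above says that nothing with an analogous property is available once credal states are modelled by sets of probability functions.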
A number of authors have recently put forward arguments pro or contra various rules for scoring probability estimates. In doing so, they have skipped over a potentially important consideration in making such assessments, to wit, that the hypotheses whose probabilities are estimated can approximate the truth to different degrees. Once this is recognized, it becomes apparent that the question of how to assess probability estimates depends heavily on context.
In attempting to form rational personal probabilities by direct inference, it is usually assumed that one should prefer frequency information concerning more specific reference classes. While the preceding assumption is intuitively plausible, little energy has been expended in explaining why it should be accepted. In the present article, I address this omission by showing that, among the principled policies that may be used in setting one’s personal probabilities, the policy of making direct inferences with a preference for frequency information for more specific reference classes yields personal probabilities whose accuracy is optimal, according to all proper scoring rules, in situations where all of the relevant frequency information is point-valued. Assuming that frequency information for narrower reference classes is preferred when the relevant frequency statements are point-valued, a dilemma arises when choosing whether to make a direct inference based upon relatively precise-valued frequency information for a broad reference class, R, or upon relatively imprecise-valued frequency information for a more specific reference class, R*. I address such cases by showing that it is often possible to make a precise-valued frequency judgment regarding R* based on precise-valued frequency information for R, using standard principles of direct inference. Having made such a frequency judgment, the dilemma of choosing between the two is removed, and one may proceed by using the precise-valued frequency estimate for the more specific reference class as a premise for direct inference.
This article proposes a new interpretation of mutual information (MI). We examine three extant interpretations of MI by reduction in doubt, by reduction in uncertainty, and by divergence. We argue that the first two are inconsistent with the epistemic value of information (EVI) assumed in many applications of MI: the greater is the amount of information we acquire, the better is our epistemic position, other things being equal. The third interpretation is consistent with EVI, but it is faced with the problem of measure sensitivity and fails to justify the use of MI in giving definitive answers to questions of information. We propose a fourth interpretation of MI by reduction in expected inaccuracy, where inaccuracy is measured by a strictly proper monotonic scoring rule. It is shown that the answers to questions of information given by MI are definitive whenever this interpretation is appropriate, and that it is appropriate in a wide range of applications with epistemic implications.
1 Introduction; 2 Formal Analyses of the Three Interpretations; 2.1 Reduction in doubt; 2.2 Reduction in uncertainty; 2.3 Divergence; 3 Inconsistency with Epistemic Value of Information; 4 Problem of Measure Sensitivity; 5 Reduction in Expected Inaccuracy; 6 Resolution of the Problem of Measure Sensitivity; 6.1 Alternative measures of inaccuracy; 6.2 Resolution by strict propriety; 6.3 Range of applications; 7 Global Scoring Rules; 8 Conclusion.
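For concreteness (the symbols here are ours, not the paper's): the mutual information between variables X and Y is standardly defined as

\[ I(X;Y) \;=\; \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} \;=\; H(X) - H(X \mid Y) . \]

Under the logarithmic scoring rule, which charges inaccuracy -log p(x) when x is the true cell, the prior expected inaccuracy about X is the entropy H(X) and the expected inaccuracy after learning Y is the conditional entropy H(X | Y); their difference, the expected reduction in inaccuracy, is exactly I(X;Y). An identity of this kind is presumably what the fourth interpretation trades on.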
We use a theorem from M. J. Schervish to explore the relationship between accuracy and practical success. If an agent is pragmatically rational, she will quantify the expected loss of her credence with a strictly proper scoring rule. Which scoring rule is right for her will depend on the sorts of decisions she expects to face. We relate this pragmatic conception of inaccuracy to the purely epistemic one popular among epistemic utility theorists.
Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such “all-accuracy” or “purely alethic” approaches can accommodate and justify evidential Bayesian norms. In response, proponents of purely alethic approaches, such as Pettigrew (2013b) and Joyce (2016), have argued that scoring rule arguments provide us with compatible and purely alethic justifications for the traditional Bayesian norms, including evidential norms. In this paper I raise several challenges to this claim. First, I argue that many of the justifications these scoring rule arguments provide are not compatible. Second, I raise worries for the claim that these scoring rule arguments provide purely alethic justifications. Third, I turn to assess the more general question of whether purely alethic justifications for evidential norms are even possible, and argue that, without making some contentious assumptions, they are not. Fourth, I raise some further worries for the possibility of providing purely alethic justifications for content-sensitive evidential norms, like the Principal Principle.
Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions interpreted as expressing high conditional probabilities. In the present article, we investigate four prominent LP systems (namely, systems O, P, Z, and QC) by means of computer simulations. The results reported here extend our previous work in this area, and evaluate the four systems in terms of the expected utility of the dispositions to act that derive from the conclusions that the systems license. In addition to conforming to the dominant paradigm for assessing the rationality of actions and decisions, our present evaluation complements our previous work, since our previous evaluation may have been too severe in its assessment of inferences to false and uninformative conclusions. In the end, our new results provide additional support for the conclusion that (of the four systems considered) inference by system Z offers the best balance of error avoidance and inferential power. Our new results also suggest that improved performance could be achieved by a modest strengthening of system Z.
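To make the intended reading explicit (a standard gloss, stated in our notation rather than quoted from the article): a conditional assertion “if A then B” is interpreted as saying that the conditional probability of B given A is high,

\[ \Pr(B \mid A) \;\ge\; 1 - \varepsilon \]

for a suitably small ε; the four systems differ in which further conclusions they license from a given set of such premises, and the simulations reported here assess those conclusions by the expected utility of the dispositions to act that they generate.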
Accuracy arguments for the core tenets of Bayesian epistemology differ mainly in the conditions they place on the legitimate ways of measuring the inaccuracy of our credences. The best existing arguments rely on three conditions: Continuity, Additivity, and Strict Propriety. In this paper, I show how to strengthen the arguments based on these conditions by showing that the central mathematical theorem on which each depends goes through without assuming Additivity.
Michael Rescorla (2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever you take yourself to learn something with certainty, it's true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla's new argument by giving a very general Dutch Book argument that applies to many cases of updating beyond those covered by Conditionalization, and then showing how Rescorla's version follows as a special case of that. Second, I want to show how to generalise Briggs and Pettigrew's Accuracy Dominance argument to avoid the assumption that Rescorla has identified (Briggs & Pettigrew 2018).
Many philosophers think that games like chess, languages like English, and speech acts like assertion are constituted by rules. Lots of others disagree. To argue over this productively, it would first be useful to know what it would be for these things to be rule-constituted. Searle famously claimed in Speech Acts that rules constitute things in the sense that they make possible the performance of actions related to those things (Searle 1969). On this view, rules constitute games, languages, and speech acts in the sense that they make possible playing them, speaking them and performing them. This raises the question of what it is to perform rule-constituted actions (e.g. play, speak, assert) and the question of what makes constitutive rules distinctive such that only they make possible the performance of new actions (e.g. playing). In this paper I will criticize Searle's answers to these questions. However, my main aim is to develop a better view, explain how it works in the case of each of games, language, and assertion and illustrate its appeal by showing how it enables rule-based views of these things to respond to various objections.
In the theory of meaning, it is common to contrast truth-conditional theories of meaning with theories which identify the meaning of an expression with its use. One rather exact version of the somewhat vague use-theoretic picture is the view that the standard rules of inference determine the meanings of logical constants. Often this idea also functions as a paradigm for more general use-theoretic approaches to meaning. In particular, the idea plays a key role in the anti-realist program of Dummett and his followers. In the theory of truth, a key distinction is now made between substantial theories and minimalist or deflationist views. According to the former, truth is a genuine substantial property of the truth-bearers, whereas according to the latter, truth does not have any deeper essence, but all that can be said about truth is contained in T-sentences (sentences having the form: ‘P’ is true if and only if P). There is no necessary analytic connection between the above theories of meaning and truth, but they nevertheless have some connections. Realists often favour some kind of truth-conditional theory of meaning and a substantial theory of truth (in particular, the correspondence theory). Minimalists and deflationists on truth characteristically advocate the use theory of meaning (e.g. Horwich). Semantical anti-realism (e.g. Dummett, Prawitz) forms an interesting middle case: its starting point is the use theory of meaning, but it usually accepts a substantial view on truth, namely that truth is to be equated with verifiability or warranted assertability. When truth is so understood, it is also possible to accept the idea that meaning is closely related to truth-conditions, and hence the conflict between use theories and truth-conditional theories in a sense disappears in this view.
Foundational theories of mental content seek to identify the conditions under which a mental representation expresses, in the mind of a particular thinker, a particular content. Normativists endorse the following general sort of foundational theory of mental content: A mental representation r expresses concept C for agent S just in case S ought to use r in conformity with some particular pattern of use associated with C. In response to Normativist theories of content, Kathrin Glüer-Pagin and Åsa Wikforss propose a dilemma, alleging that Normativism either entails a vicious regress or falls prey to a charge of idleness. In this paper, I respond to this argument. I argue that Normativism can avoid the commitments that generate the regress and does not propose the sort of explanation required to charge that its explanation has been shown to be problematically idle. The regress-generating commitment to be avoided is, roughly, that tokened, contentful mental states are the product of rule-following. The explanatory task Normativists should disavow is that of explaining how it is that beliefs and other contentful mental states are produced. I argue that Normativism, properly understood as a theory of content, does not provide this kind of psychological explanation, and therefore does not entail that such explanations are to be given in terms of rule-following. If this is correct, Normativism is not the proper target of the dilemma offered by Glüer-Pagin and Wikforss. Understanding why one might construe Normativism in the way Glüer-Pagin and Wikforss must, and how, properly understood, it avoids their dilemma, can help us to appreciate the attractiveness of a genuinely normative theory of content and the importance of paying careful attention to the sort of normativity involved in norm-based theories of content.
I introduce an account of when a rule normatively sustains a practice. My basic proposal is that a rule normatively sustains a practice when the value achieved by following the rule explains why agents continue following that rule, thus establishing and sustaining a pattern of activity. I apply this model to practices of belief management and identify a substantive normative connection between knowledge and belief. More specifically, I propose one special way that knowledge might set the normative standard for belief: knowing is essentially the unique way of normatively sustaining cognition and, thereby, inquiry. In this respect, my proposal can be seen as one way of elaborating a “knowledge-first” normative theory.
This paper discusses the prospects of a dispositional solution to the Kripke–Wittgenstein rule-following puzzle. Recent attempts to employ dispositional approaches to this puzzle have appealed to the ideas of finks and antidotes—interfering dispositions and conditions—to explain why the rule-following disposition is not always manifested. We argue that this approach fails: agents cannot be supposed to have straightforward dispositions to follow a rule which are in some fashion masked by other, contrary dispositions of the agent, because in all cases, at least some of the interfering dispositions are both relatively permanent and intrinsic to the agent. The presence of these intrinsic and relatively permanent states renders the ascription of a rule-following disposition to the agent false.
How can people function appropriately and respond normatively in social contexts even if they are not aware of rules governing these contexts? John Searle has rightly criticized a popular way out of this problem, namely simply asserting that they follow these rules unconsciously. His alternative explanation is based on his notion of a preintentional, nonrepresentational background. In this paper I criticize this explanation and the underlying account of the background and suggest an alternative explanation of the normativity of elementary social practices and of the background itself. I propose to think of the background as being intentional, but nonconceptual, and of the basic normativity or proto-normativity as being instituted through common sensory-motor-emotional schemata established in the joint interactions of groups. The paper concludes with some reflections on what role this level of collective intentionality and the notion of the background can play in a layered account of the social mind and the ontology of the social world.
Recently two distinct forms of rule-utilitarianism have been introduced that differ on how to measure the consequences of rules. Brad Hooker advocates fixed-rate rule-utilitarianism, while Michael Ridge advocates variable-rate rule-utilitarianism. I argue that both of these are inferior to a new proposal, optimum-rate rule-utilitarianism. According to optimum-rate rule-utilitarianism, an ideal code is the code whose optimum acceptance level is no lower than that of any alternative code. I then argue that all three forms of rule-utilitarianism fall prey to two fatal problems that leave us without any viable form of rule-utilitarianism.
Shuford, Albert and Massengill proved, a half century ago, that the logarithmic scoring rule is the only proper measure of inaccuracy determined by a differentiable function of the probability assigned to the actual cell of a scored partition. In spite of this, the log rule has gained less traction in applied disciplines and among formal epistemologists than one might expect. In this paper we show that the differentiability criterion in the Shuford et al. result is unnecessary and use the resulting simplified characterization of the logarithmic rule to give novel arguments in favor of it.
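To fix ideas (our notation, and only a sketch of the rule under discussion): the logarithmic score of a probability assignment p = (p_1, ..., p_n) over the cells of a partition, when cell i turns out to be the actual one, is

\[ S(p, i) \;=\; -\log p_i , \]

so the penalty depends only on the probability assigned to the actual cell. The rule is proper in the sense that the expected penalty, the sum over i of p_i S(q, i), is minimized by reporting one's actual probabilities, q = p; the characterization described above says that, among differentiable measures of inaccuracy that depend only on the actual cell's probability in this way, the logarithmic rule is the only one with this property.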
The widely discussed "discursive dilemma" shows that majority voting in a group of individuals on logically connected propositions may produce irrational collective judgments. We generalize majority voting by considering quota rules, which accept each proposition if and only if the number of individuals accepting it exceeds a given threshold, where different thresholds may be used for different propositions. After characterizing quota rules, we prove necessary and sufficient conditions on the required thresholds for various collective rationality requirements. We also consider sequential quota rules, which ensure collective rationality by adjudicating propositions sequentially and letting earlier judgments constrain later ones. Sequential rules may be path-dependent and strategically manipulable. We characterize path-independence and prove its essential equivalence to strategy-proofness. Our results shed light on the rationality of simple-, super-, and sub-majoritarian decision-making.
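In symbols (ours, as an illustration of the definition just given): with n individuals and a threshold t_p fixed for each proposition p on the agenda, the quota rule accepts p at a profile of individual judgment sets (J_1, ..., J_n) just in case the number of individuals accepting p exceeds the threshold,

\[ \bigl|\{\, i : p \in J_i \,\}\bigr| \;>\; t_p . \]

Propositional majority voting is the special case in which every threshold requires more than half of the n individuals; raising or lowering the thresholds for particular propositions yields the super- and sub-majoritarian rules mentioned above, and the paper's results concern which assignments of thresholds guarantee the various collective rationality requirements.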
What if your peers tell you that you should disregard your perceptions? Worse, what if your peers tell you to disregard the testimony of your peers? How should we respond if we get evidence that seems to undermine our epistemic rules? Several philosophers have argued that some epistemic rules are indefeasible. I will argue that all epistemic rules are defeasible. The result is a kind of epistemic particularism, according to which there are no simple rules connecting descriptive and normative facts. I will argue that this type of particularism is more plausible in epistemology than in ethics. The result is an unwieldy and possibly infinitely long epistemic rule — an Uber-rule. I will argue that the Uber-rule applies to all agents, but is still defeasible — one may get misleading evidence against it and rationally lower one’s credence in it.
An examination of the role played by general rules in Hume's positive (nonskeptical) epistemology. General rules for Hume are roughly just general beliefs. The difference between justified and unjustified belief is a matter of the influence of good versus bad general rules, the good general rules being the "extensive" and "constant" ones.
In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high-level to be put into practice without further guidance, or they focus on very specific notions of fairness or transparency that don’t consider multiple stakeholders or the broader social context. In this paper, we present an auditing framework to guide the ethical assessment of an algorithm. The audit model we propose starts by identifying the goals of the audit, and describing the context of the algorithm. The audit instrument itself comprises three elements: a list of possible interests of stakeholders affected by the algorithm, an assessment of metrics that describe key ethically salient features of the algorithm, and a relevancy matrix that connects the assessed metrics to stakeholder interests. The proposed audit instrument yields an ethical evaluation of an algorithm that could be used by regulators and others interested in doing due diligence.
In ‘Measuring the Consequences of Rules’, Holly Smith presents two problems involving the indeterminacy of compliance, which she takes to be fatal for all forms of rule-utilitarianism. In this reply, I attempt to dispel both problems.
There is a fundamental disagreement about which norm regulates assertion. Proponents of factive accounts argue that only true propositions are assertable, whereas proponents of non-factive accounts insist that at least some false propositions are. Puzzlingly, both views are supported by equally plausible (but apparently incompatible) linguistic data. This paper delineates an alternative solution: to understand truth as the aim of assertion, and pair this view with a non-factive rule. The resulting account is able to explain all the relevant linguistic data, and finds independent support from general considerations about the differences between rules and aims.
This paper examines theories of first person authority proposed by Dorit Bar-On (2004), Crispin Wright (1989a) and Sydney Shoemaker (1988). What all three accounts have in common is that they attempt to explain first person authority by reference to the way our language works. Bar-On claims that in our language self-ascriptions of mental states are regarded as expressive of those states; Wright says that in our language such self-ascriptions are treated as true by default; and Shoemaker suggests that they might arise from our capacity to avoid Moore-paradoxical utterances. I argue that Bar-On’s expressivism and Wright’s constitutive theory suffer from a similar problem: They fail to explain how it is possible for us to instantiate the language structures that supposedly bring about first person authority. Shoemaker’s account does not suffer from this problem. But it is unclear whether the capacity to avoid Moore-paradoxical utterances really yields self-knowledge. Also, it might be that self-knowledge explains why we have this capacity rather than vice versa.
This paper offers an appraisal of Philip Pettit's approach to the problem of how a merely finite set of examples can serve to represent a determinate rule, given that indefinitely many rules can be extrapolated from any such set. I argue that Pettit's so-called ethnocentric theory of rule-following fails to deliver the solution to this problem he sets out to provide. More constructively, I consider what further provisions are needed in order to advance Pettit's general approach to the problem. I conclude that what is needed is an account that, whilst it affirms the view that agents' responses are constitutively involved in the exemplification of rules, does not allow such responses the pride of place they have in Pettit's theory.
This paper aims at showing that the generative-semantic framework is not essential to the proposal in H. J. Verkuyl, On the Compositional Nature of the Aspects (Reidel: Dordrecht, 1972). Compositionality can be shown to be neutral as to the then-current difference between the generative-semantic and the interpretive-semantic branches of transformational grammar.
A certain type of inference rule in modal logics, generalizing Gabbay's Irreflexivity rule, is introduced, and some general completeness results about modal logics axiomatized with such rules are proved.
The “Game of the Rule” is easy enough: I give you the beginning of a sequence of numbers (say) and you have to figure out how the sequence continues, to uncover the rule by means of which the sequence is generated. The game depends on two obvious constraints, namely (1) that the initial segment uniquely identify the sequence, and (2) that the sequence be non-random. As it turns out, neither constraint can fully be met, among other reasons because the relevant notion of randomness is either vacuous or undecidable. This may not be a problem when we play for fun. It is, however, a serious problem when it comes to playing the game for real, i.e., when the player to issue the initial segment is not one of us but the world out there, the sequence consisting not of numbers (say) but of the events that make up our history. Moreover, when we play for fun we know exactly what initial segment to focus on, but when we play for real we don’t even know that. This is the core difficulty in the philosophy of the inductive sciences.
In our thought, we employ rules of inference and belief-forming methods more generally. For instance, we (plausibly) employ deductive rules such as Modus Ponens, ampliative rules such as Inference to the Best Explanation, and perceptual methods that tell us to believe what perceptually appears to be the case. What explains our entitlement to employ these rules and methods? This chapter considers the motivations for broadly internalist answers to this question. It considers three such motivations—one based on simple cases, one based on a general conception of epistemic responsibility, and one based on skeptical scenarios. The chapter argues that none of these motivations is successful. The first two motivations lead to forms of internalism—Extreme Method Internalism and Defense Internalism—that are too strong to be tenable. The third motivation motivates Mental Internalism (Mentalism), which does not fit with plausible accounts of entitlement.
This paper has a two-fold aim. First, it reinforces a version of the "syntactic argument" given in Aizawa (1994). This argument shows that connectionist networks do not provide a means of implementing representations without rules. Horgan and Tienson have responded to the syntactic argument in their book and in another paper (Horgan & Tienson, 1993), but their responses do not meet the challenge posed by my formulation of the syntactic argument. My second aim is to describe a kind of cognitive architecture. This architecture might be called a computational architecture, but it is neither a rules and representations architecture nor the representations without rules architecture that Horgan and Tienson wish to endorse.
We present a general framework for representing belief-revision rules and use it to characterize Bayes's rule as a classical example and Jeffrey's rule as a non-classical one. In Jeffrey's rule, the input to a belief revision is not simply the information that some event has occurred, as in Bayes's rule, but a new assignment of probabilities to some events. Despite their differences, Bayes's and Jeffrey's rules can be characterized in terms of the same axioms: "responsiveness", which requires that revised beliefs incorporate what has been learnt, and "conservativeness", which requires that beliefs on which the learnt input is "silent" do not change. To illustrate the use of non-Bayesian belief revision in economic theory, we sketch a simple decision-theoretic application.
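As a quick reference (standard statements of the two rules, not quoted from the paper): where P is the prior, E is the event learnt with certainty, and q_1, ..., q_k are the new probabilities imposed on a partition E_1, ..., E_k, Bayes's rule and Jeffrey's rule prescribe, respectively,

\[ P_{\mathrm{new}}(A) \;=\; P(A \mid E) \;=\; \frac{P(A \wedge E)}{P(E)} \qquad \text{and} \qquad P_{\mathrm{new}}(A) \;=\; \sum_{j=1}^{k} P(A \mid E_j)\, q_j . \]

Bayes's rule is recovered as the special case of Jeffrey's rule in which the learning experience drives one of the q_j to 1.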
In his paper “Fairness, Political Obligation, and the Justificatory Gap” (published in the Journal of Moral Philosophy), Jiafeng Zhu argues that the principle of fair play cannot require submission to the rules of a cooperative scheme, and that when such submission is required, the requirement is grounded in consent. I propose a better argument for the claim that fair play requires submission to the rules than the one Zhu considers. I also argue that Zhu’s attribution of consent to people commonly thought to be bound to follow the rules by a duty of fair play is implausible.
Towards the end of my paper, I briefly consider why Descartes stopped work on the _Rules_. My main concern is to accurately characterize the project represented in the _Rules_, especially in its relation to early-modern logic.
According to the received view the later Wittgenstein subscribed to the thesis that speaking a language requires being guided by rules (thesis RG). In this paper we question the received view. On its most intuitive reading, we argue, (RG) is very much at odds with central tenets of the later Wittgenstein. Giving up on this reading, however, threatens to deprive the notion of rule-following of any real substance. Consequently, the rule-following considerations cannot charitably be read as a deep and subtle defense of (RG) against the threat of paradox, as proponents of the received view are wont to do. Instead, we argue, the rule-following considerations provide Wittgenstein's deep and subtle reasons for rejecting the very idea that speaking a language involves rule-guidance. Although Wittgenstein subscribed to (RG) during his middle period writings, his later remarks on rules, far from being a clarification and elaboration of his earlier views, are directed against the claim of the middle period that speaking a language is an essentially rule-guided activity.
Lewis Carroll’s 1895 paper “Achilles and the Tortoise” showed that we need a distinction between rules of inference and premises. We cannot, on pain of regress, treat all rules simply as further premises in an argument. But Carroll’s paper doesn’t say very much about what rules there must be. Indeed, it is consistent with what Carroll says there to think that the only rule is →-elimination. You might think that modern Bayesians, who seem to think that the only rule of inference they need is conditionalisation, have taken just this lesson from Carroll. But obviously nothing in Carroll’s argument rules out there being other rules as well.
This self-contained one-page paper produces one valid two-premise premise-conclusion argument that is a counterexample to all three traditional rules of distribution. These three rules were previously thought to be generally applicable criteria for invalidity of premise-conclusion arguments. No longer can a three-term argument be dismissed as invalid simply on the ground that its middle is undistributed, for example. The following question seems never to have been raised: how does having an undistributed middle show that an argument's conclusion does not follow from its premises? This result does nothing to vitiate the theories of distribution developed over the period beginning in medieval times. What it does vitiate is many if not all attempts to use distribution in tests of invalidity outside of the standard two-premise categorical arguments—where the rules were verified on a case-by-case basis without further theoretical grounding. In addition it shows that there was no theoretical basis for many if not all claims that the rules of distribution have fundamental status. These results are further support for approaching historical texts using mathematical archeology.
I criticize Yamada's account of rule-following. Yamada's conditions are not necessary. And he misses the deepest level of the rule-following considerations: how meaning rules come about.
Mental content normativists hold that the mind’s conceptual contents are essentially normative. Many hold the view because they think that facts of the form “subject S possesses concept c” imply that S is enjoined by rules concerning the application of c in theoretical judgments. Some opponents independently raise an intuitive objection: even if there are such rules, S’s possession of the concept is not the source of the enjoinment. Hence, these rules do not support mental content normativism. Call this the “Source Objection.” This paper refutes the Source Objection, outlining a key strand of the relationship between judgments and their contents in the process. Theoretical judgment and mental conceptual content are equally the source of enjoinment; norms for judging with contents do not derive from one at the expense of the other.
We are justified in employing the rule of inference Modus Ponens (or one much like it) as basic in our reasoning. By contrast, we are not justified in employing a rule of inference that permits inferring to some difficult mathematical theorem from the relevant axioms in a single step. Such an inferential step is intuitively “too large” to count as justified. What accounts for this difference? In this paper, I canvass several possible explanations. I argue that the most promising approach is to appeal to features like usefulness or indispensability to important or required cognitive projects. On the resulting view, whether an inferential step counts as large or small depends on the importance of the relevant rule of inference in our thought.
An experimental paradigm that purports to test young children’s understanding of social norms is examined. The paradigm models norms on Searle’s notion of a constitutive rule. The experiments and the reasons provided for their design are discussed. It is argued that the experiments do not provide direct evidence about the development of social norms and that the concepts of a social norm and constitutive rule are distinct. The experimental data are re-interpreted, and suggestions for how to deal with the present criticism are presented that do not require abandoning the paradigm as such. Then the conception of normativity that underlies the experimental paradigm is rejected and an alternative view is put forward. It is argued that normativity emerges from interaction and engagement, and that learning to comply with social norms involves understanding the distinction between their content, enforcement, and acceptance. As opposed to rule-based accounts that picture the development of an understanding of social norms as one-directional and based in enforcement, the present view emphasizes that normativity is situated, reciprocal, and interactive.