Many philosophers think that games like chess, languages like English, and speech acts like assertion are constituted by rules. Lots of others disagree. To argue over this productively, it would first be useful to know what it would be for these things to be rule-constituted. Searle famously claimed in Speech Acts that rules constitute things in the sense that they make possible the performance of actions related to those things (Searle 1969). On this view, rules constitute games, languages, and speech acts in the sense that they make possible playing them, speaking them, and performing them. This raises the question of what it is to perform rule-constituted actions (e.g. play, speak, assert) and the question of what makes constitutive rules distinctive such that only they make possible the performance of new actions (e.g. playing). In this paper I will criticize Searle’s answers to these questions. However, my main aim is to develop a better view, explain how it works in the case of each of games, language, and assertion, and illustrate its appeal by showing how it enables rule-based views of these things to respond to various objections.
In the middle of the 20th century, it was a common Wittgenstein-inspired idea in philosophy that for a linguistic expression to have a meaning is for it to be governed by a rule of use. In other words, it was widely believed that meanings are to be identified with use-conditions. However, as things stand, this idea is widely taken to be vague and mysterious, inconsistent with “truth-conditional semantics”, and subject to the Frege-Geach problem. In this paper I reinvigorate the ideas that meaningfulness is a matter of being governed by rules of use and that meanings are best thought of in terms of use-conditions. I will do this by sketching the Rule-Governance view of the nature of linguistic meaningfulness, showing that the view isn’t by itself subject to the two problems, and explaining why the idea has had a lasting appeal to philosophers from Strawson to Kaplan and why we should find it continually attractive.
There is a large and growing literature on communal interpretive resources: the concepts, theories, narratives, and so on that a community draws on in interpreting its members and their world. Several recent contributions to this literature have concerned dominant and resistant interpretive resources and how they affect concrete lived interactions. In this article, I note that “using” interpretive resources—applying them to parts of the world in conversation with others—is “a rule‐governed activity”; and I propose that in oppressive systems, these rules are influenced by the rules of oppression. Section I clarifies some rules governing the use of resources. Section II draws on work by Gaile Pohlhaus, Jr. and others to suggest that according to the present rules of our oppressive system, it is permissible for dominantly situated speakers to dismiss interpretive resources developed in marginalized communities. Section III appeals to Charles Mills's work on White ignorance to propose, further, that our system's rules make it impermissible and deserving of punishment to use resistant resources. The conclusion enumerates several further points about such rules governing the use of interpretive resources, their social effects, and some philosophical literatures.
This paper discusses the prospects of a dispositional solution to the Kripke–Wittgenstein rule-following puzzle. Recent attempts to employ dispositional approaches to this puzzle have appealed to the ideas of finks and antidotes—interfering dispositions and conditions—to explain why the rule-following disposition is not always manifested. We argue that this approach fails: agents cannot be supposed to have straightforward dispositions to follow a rule which are in some fashion masked by other, contrary dispositions of the agent, because in all cases, at least some of the interfering dispositions are both relatively permanent and intrinsic to the agent. The presence of these intrinsic and relatively permanent states renders the ascription of a rule-following disposition to the agent false.
Foundational theories of mental content seek to identify the conditions under which a mental representation expresses, in the mind of a particular thinker, a particular content. Normativists endorse the following general sort of foundational theory of mental content: A mental representation r expresses concept C for agent S just in case S ought to use r in conformity with some particular pattern of use associated with C. In response to Normativist theories of content, Kathrin Glüer-Pagin and Åsa Wikforss propose a dilemma, alleging that Normativism either entails a vicious regress or falls prey to a charge of idleness. In this paper, I respond to this argument. I argue that Normativism can avoid the commitments that generate the regress and does not propose the sort of explanation required to charge that its explanation has been shown to be problematically idle. The regress-generating commitment to be avoided is, roughly, that tokened, contentful mental states are the product of rule-following. The explanatory task Normativists should disavow is that of explaining how it is that beliefs and other contentful mental states are produced. I argue that Normativism, properly understood as a theory of content, does not provide this kind of psychological explanation, and therefore does not entail that such explanations are to be given in terms of rule-following. If this is correct, Normativism is not the proper target of the dilemma offered by Glüer-Pagin and Wikforss. Understanding why one might construe Normativism in the way Glüer-Pagin and Wikforss must, and how, properly understood, it avoids their dilemma, can help us to appreciate the attractiveness of a genuinely normative theory of content and the importance of paying careful attention to the sort of normativity involved in norm-based theories of content.
I introduce an account of when a rule normatively sustains a practice. My basic proposal is that a rule normatively sustains a practice when the value achieved by following the rule explains why agents continue following that rule, thus establishing and sustaining a pattern of activity. I apply this model to practices of belief management and identify a substantive normative connection between knowledge and belief. More specifically, I propose one special way that knowledge might set the normative standard for belief: knowing is essentially the unique way of normatively sustaining cognition and, thereby, inquiry. In this respect, my proposal can be seen as one way of elaborating a “knowledge-first” normative theory.
This paper describes and defends a novel and distinctively egalitarian conception of the rule of law. Official behavior is to be governed by preexisting, public rules that do not draw irrelevant distinctions between the subjects of law. If these demands are satisfied, a state achieves vertical equality between officials and ordinary people and horizontal legal equality among ordinary people.
Most plausible moral theories must address problems of partial acceptance or partial compliance. The aim of this paper is to examine some proposed ways of dealing with partial acceptance problems as well as to introduce a new Rule Utilitarian suggestion. Here I survey three forms of Rule Utilitarianism, each of which represents a distinct approach to solving partial acceptance issues. I examine Fixed Rate, Variable Rate, and Optimum Rate Rule Utilitarianism, and argue that a new approach, Maximizing Expectation Rate Rule Utilitarianism, better solves partial acceptance problems.
Proponents of the rule of law argue about whether that ideal should be conceived formalistically or in terms of substantive values. Formalistically, the rule of law is associated with principles like generality, clarity, prospectivity, consistency, etc. Substantively, it is associated with market values, with constitutional rights, and with freedom and human dignity. In this paper, I argue for a third layer of complexity: the procedural aspect of the rule of law; the aspects of rule-of-law requirements that have to do with "natural justice" or "procedural due process." These, I believe, have been neglected in the jurisprudential literature devoted specifically to the idea of the rule of law, and they deserve much greater emphasis. Moreover, procedural values go beyond elementary principles like the guarantee of an unbiased tribunal or the opportunity to present and confront evidence. They include the right to argue in a court about what the law is and what its bearing should be on one's situation. The provision that law makes for argument is necessarily unsettling, and so this emphasis on the procedural aspect highlights the point that predictability should not be regarded as the be-all and end-all of the rule of law.
Much has been said about Confucius’ negative formulation of the Golden Rule. Most discussions center on explaining why this formulation, while negative, does not differ at all in intention from the positive formulation. It is my view that such attempts may have the effect of blurring the essential point behind the specifically negative formulation, a point which I hope to elucidate in this essay. It is my first contention that such a negative formulation is consonant with other basic implicit Confucian attitudes such as modesty and the belief in the inherent goodness of human nature. My second contention is that this negative formulation has the intent and/or effect of promoting growth and, more importantly, preventing moral harm. My broader thesis is that the negative version of the Golden Rule does differ significantly from the positive version and that the difference that exists might well have been intended by Confucius to highlight the nature of his most basic moral principle.
Mental content normativists hold that the mind’s conceptual contents are essentially normative. Many hold the view because they think that facts of the form “subject S possesses concept c” imply that S is enjoined by rules concerning the application of c in theoretical judgments. Some opponents independently raise an intuitive objection: even if there are such rules, S’s possession of the concept is not the source of the enjoinment. Hence, these rules do not support mental content normativism. Call this the “Source Objection.” This paper refutes the Source Objection, outlining a key strand of the relationship between judgments and their contents in the process. Theoretical judgment and mental conceptual content are equally the source of enjoinment; norms for judging with contents do not derive from one at the expense of the other.
When faced with a rule that they take to be true, and a recalcitrant example, people are apt to say: “The exception proves the rule”. When pressed on what they mean by this, though, things are often less than clear. A common response is to dredge up some once-heard etymology: ‘proves’ here, it is often said, means ‘tests’. But this response—its frequent appearance even in some reference works notwithstanding—makes no sense of the way in which the expression is used. To insist that the exception proves the rule is to insist that whilst this is an exception, the rule still stands; and furthermore, that, rather than undermining the rule, the exception serves to confirm it. This second claim may seem paradoxical, but it should not, once it is realized that what does the confirming is not the exception itself, but rather the fact that we judge it to be an exception; and that what is confirmed is not the rule itself, but rather the fact that we judge it to be a rule. To treat something as an exception is not to treat it as a counterexample that refutes the existence of the rule. Rather it is to treat it as special, and so to concede the rule from which it is excepted. The point comes clearly in the original (probably 17th-century) Latin form: Exceptio probat (figit) regulam in casibus non exceptis. Exception (i.e. the act of excepting) proves (establishes) the rule in the cases not excepted. Clearly this form of reasoning cannot apply when the rule that we are considering has the form of a simple universal generalization. Here there can be no exceptions, only counterexamples. So what we need, and what will be developed…
This paper employs some outcomes (for the most part due to David Lewis) of the contemporary debate on the metaphysics of dispositions to evaluate those dispositional analyses of meaning that make use of the concept of a disposition in ideal conditions. The first section of the paper explains why one may find appealing the notion of an ideal-condition dispositional analysis of meaning and argues that Saul Kripke’s well-known argument against such analyses is wanting. The second section focuses on Lewis’ work in the metaphysics of dispositions in order to call attention to some intuitions about the nature of dispositions that we all seem to share. In particular, I stress the role of what I call the "Actuality Constraint". The third section of the paper maintains that the Actuality Constraint can be used to show that the dispositions with which ideal-condition dispositional analyses identify my meaning addition by "+" do not exist (in so doing, I develop a suggestion put forward by Paul Boghossian). This immediately implies that ideal-condition dispositional analyses of meaning cannot work. The last section discusses a possible objection to my argument. The point of the objection is that the argument depends on an illicit assumption. I show (1) that, in fact, the assumption in question is far from illicit and (2) that even without this assumption it is possible to argue that the dispositions with which ideal-condition dispositional analyses identify my meaning addition by "+" do not exist.
Over the last decade, philosophers of science have extensively criticized the epistemic superiority of randomized controlled trials for testing safety and effectiveness of new drugs, defending instead various forms of evidential pluralism. We argue that scientific methods in regulatory decision-making cannot be assessed in epistemic terms only: there are costs involved. Drawing on the legal distinction between rules and standards, we show that drug regulation based on evidential pluralism has much higher costs than our current RCT-based system. We analyze these costs and advocate for evaluating any scheme for drug regulatory tests in terms of concrete empirical benchmarks, like the error rates of regulatory decisions.
We discuss a well-known puzzle about the lexicalization of logical operators in natural language, in particular connectives and quantifiers. Of the many logically possible functions of the relevant type, only a few appear in the lexicon of natural languages: the connectives in English, for example, are only 'and', 'or', and perhaps 'nor' (expressing negated disjunction). The logically possible 'nand' (negated conjunction) is not expressed by a lexical entry of English, or of any natural language. The explanation we propose is based on the “dynamic” behaviour of connectives and quantifiers: we define update potentials for logical operators, under the assumption that the logical structure of a sentence p defines what type of update p contributes to context, together with the speech act performed (assertion or denial). We conjecture that the adequacy of update potentials determines the limits of lexicalizability for logical operators in natural language.
Rule-Consequentialism faces “the problem of partial acceptance”: How should the ideal code be selected given the possibility that its rules may not be universally accepted? A new contender, “Calculated Rates” Rule-Consequentialism claims to solve this problem. However, I argue that Calculated Rates merely relocates the partial acceptance question. Nevertheless, there is a significant lesson from this failure of Calculated Rates. Rule-Consequentialism’s problem of partial acceptance is more helpfully understood as an instance of the broader problem of selecting the ideal code given various assumptions—assumptions about who will accept and comply with the rules, but also about how the rules will be taught and enforced, and how similar the future will be. Previous rich discussions about partial acceptance provide a taxonomy and groundwork for formulating the best version of Rule-Consequentialism.
In the theory of meaning, it is common to contrast truth-conditional theories of meaning with theories which identify the meaning of an expression with its use. One rather exact version of the somewhat vague use-theoretic picture is the view that the standard rules of inference determine the meanings of logical constants. Often this idea also functions as a paradigm for more general use-theoretic approaches to meaning. In particular, the idea plays a key role in the anti-realist program of Dummett and his followers. In the theory of truth, a key distinction is now made between substantial theories and minimalist or deflationist views. According to the former, truth is a genuine substantial property of the truth-bearers, whereas according to the latter, truth does not have any deeper essence, but all that can be said about truth is contained in T-sentences (sentences having the form: ‘P’ is true if and only if P). There is no necessary analytic connection between the above theories of meaning and truth, but they nevertheless have some connections. Realists often favour some kind of truth-conditional theory of meaning and a substantial theory of truth (in particular, the correspondence theory). Minimalists and deflationists on truth characteristically advocate the use theory of meaning (e.g. Horwich). Semantical anti-realism (e.g. Dummett, Prawitz) forms an interesting middle case: its starting point is the use theory of meaning, but it usually accepts a substantial view on truth, namely that truth is to be equated with verifiability or warranted assertability. When truth is so understood, it is also possible to accept the idea that meaning is closely related to truth-conditions, and hence the conflict between use theories and truth-conditional theories in a sense disappears in this view.
One side of this paper is devoted to showing that the Golden Rule, understood as standing for universal love, is centrally characteristic of Confucianism properly understood, rather than graded, familial love. In this respect Confucianism and Christianity are similar. The other side of this paper is devoted to arguing contra 18 centuries of commentators that the negative sentential formulation of the Golden Rule as found in Confucius cannot be converted to an affirmative sentential formulation (as is found in Christianity) without a change in its meaning. In this respect Confucianism and Christianity are different.
What is the American rule of law? Is it a paradigm case of the strong constitutionalism concept of the rule of law or has it fallen short of its rule of law ambitions?

This open access book traces the promise and paradox of the American rule of law in three interwoven ways.

It focuses on explicating the ideals of the American rule of law by asking: how do we interpret its history and the goals of its constitutional framers to see the rule of law ambitions its foundational institutions express?

It considers those constitutional institutions as inextricable from the problem of race in the United States and the tensions between the rule of law as a protector of property rights and the rule of law as a restrictor on arbitrary power and a guarantor of legal equality. In that context, it explores the distinctive role of Black liberation movements in developing the American rule of law.

Finally, it considers the extent to which the American rule of law is compromised at its frontiers, and the extent that those compromises undermine legal protections Americans enjoy in the interior. It asks how America reflects the legal contradictions of capitalism and empire outside its borders, and the impact of those contradictions on its external goals.
According to Amie Thomasson's Modal Normativism (MN), knowledge of metaphysical modality is to be explained in terms of a speaker’s mastery of semantic rules, as opposed to one’s epistemic grasp of independent modal facts. In this chapter, I outline (MN)'s account of modal knowledge (§1) and argue that more than semantic mastery is needed for knowledge of metaphysical modality. Specifically (§2), in reasoning aimed at gaining such knowledge, a competent speaker needs to further deploy essentialist principles and information. In response, normativists might contend that a competent speaker will only need to appeal to specific independence counterfactuals, on analogy with quasi-realism about morality. These conditionals fix the meaning of our terms at the actual world, independently of the particular context in which a statement is evaluated. However, I show that this strategy causes several problems for the account (§3). While those problems might perhaps be avoided by endorsing a certain picture of modal metaphysics (Modal Monism), such a picture involves notorious issues that normativists will have to address (§4). It is thus doubtful that (MN) can explain knowledge of metaphysical modality. Still, it may explain some modal knowledge without committing to Modal Monism. As I show (§5), semantic mastery may suffice for gaining knowledge of logical-conceptual modality or analyticity.
In her seminal article ‘Modern Moral Philosophy’, Elizabeth Anscombe argued that we need a new ethics, one that uses virtue terms to generate absolute prohibitions against certain act-types. Leading contemporary virtue ethicists have not taken up Anscombe's challenge in justifying absolute prohibitions and have generally downplayed the role of rule-following in their normative theories. That they have not done so is primarily because contemporary virtue ethicists have focused on what is sufficient for characterizing the deliberation and action of the fully virtuous person, and rule-following is inadequate for this task. In this article, I take up Anscombe's challenge by showing that rule-following is necessary for virtuous agency, and that virtue ethics can justify absolute prohibitions. First, I offer a possibility proof by showing how virtue ethics can generate absolute prohibitions in three ways: by considering actions that directly manifest vice or that cannot be performed virtuously; actions that are prohibited by one's institutional roles and practical identities; and actions that are prohibited by the prescriptions of the wise. I then seek to show why virtue ethicists should incorporate rule-following and absolute prohibitions into their theories. I emphasize the central role that rules have in the development of virtue, then motivate the stronger view that fully virtuous agents follow moral rules by considering the importance of hope, uncertainty about consequences, and taking responsibility for what eventuates. Finally, I provide an account of what Anscombe called a ‘corrupt mind’, explaining how our understanding of virtue is corrupted if we think that virtue may require us to do vicious actions.
Recently two distinct forms of rule-utilitarianism have been introduced that differ on how to measure the consequences of rules. Brad Hooker advocates fixed-rate rule-utilitarianism, while Michael Ridge advocates variable-rate rule-utilitarianism. I argue that both of these are inferior to a new proposal, optimum-rate rule-utilitarianism. According to optimum-rate rule-utilitarianism, an ideal code is the code whose optimum acceptance level is no lower than that of any alternative code. I then argue that all three forms of rule-utilitarianism fall prey to two fatal problems that leave us without any viable form of rule-utilitarianism.
In this essay I point out parallels between Kant’s theory of aesthetics and Wittgenstein’s discussion of rule following. Although Wittgenstein did not write an aesthetics and Kant did not discuss Wittgensteinian rule-following problems, and although Kant and Wittgenstein begin at very different starting points and use different methods, they end up dealing with similar issues, namely issues about rules, particularity, exemplarity, objectivity, practice, and as-if statements.
In this article, it is argued that existing democracies might establish popular rule even if Joseph Schumpeter’s notoriously unflattering picture of ordinary citizens is accurate. Some degree of popular rule is in principle compatible with apathetic, ignorant and suggestible citizens, contrary to what Schumpeter and others have maintained. The people may have control over policy, and their control may constitute popular rule, even if citizens lack definite policy opinions and even if their opinions result in part from elites’ efforts to manipulate these opinions. Thus, even a purely descriptive, ‘realist’ account of democracy of the kind that Schumpeter professed to offer may need to concede that there is no democracy without some degree of popular rule.
How can people function appropriately and respond normatively in social contexts even if they are not aware of rules governing these contexts? John Searle has rightly criticized a popular way out of this problem by simply asserting that they follow them unconsciously. His alternative explanation is based on his notion of a preintentional, nonrepresentational background. In this paper I criticize this explanation and the underlying account of the background and suggest an alternative explanation of the normativity of elementary social practices and of the background itself. I propose to think of the background as being intentional, but nonconceptual, and of the basic normativity or proto-normativity as being instituted through common sensory-motor-emotional schemata established in the joint interactions of groups. The paper concludes with some reflections on what role this level of collective intentionality and the notion of the background can play in a layered account of the social mind and the ontology of the social world.
This paper argues that the problematic of rule following in Wittgenstein's Philosophical Investigations and Heidegger's analysis of anxiety in Being and Time have analogous structures. Working through these analogies helps our interpretation of both of these authors. Contrasting sceptical and anti-sceptical readings of Wittgenstein helps us to resolve an interpretive puzzle about what an authentic response to anxiety looks like for Heidegger. And considering the importance of anxiety to Heidegger's conception of authenticity allows us to locate in Wittgenstein's later philosophy a covert appeal to something resembling Heideggerian authenticity.
The dead donor rule justifies current practice in organ procurement for transplantation and states that organ donors must be dead prior to donation. The majority of organ donors are diagnosed as having suffered brain death and hence are declared dead by neurological criteria. However, a significant amount of unrest in both the philosophical and the medical literature has surfaced since this practice began forty years ago. I argue that, first, declaring death by neurological criteria is both unreliable and unjustified but further, the ethical principles which themselves justify the dead donor rule are better served by abandoning that rule and instead allowing individuals who have suffered severe and irreversible brain damage to become organ donors, even though they are not yet dead and even though the removal of their organs would be the proximal cause of death.
There is a fundamental disagreement about which norm regulates assertion. Proponents of factive accounts argue that only true propositions are assertable, whereas proponents of non-factive accounts insist that at least some false propositions are. Puzzlingly, both views are supported by equally plausible (but apparently incompatible) linguistic data. This paper delineates an alternative solution: to understand truth as the aim of assertion, and pair this view with a non-factive rule. The resulting account is able to explain all the relevant linguistic data, and finds independent support from general considerations about the differences between rules and aims.
The going-on problem (GOP) is the central concern of Wittgenstein's later philosophy. It informs not only his epistemology and philosophy of mind, but also his views on mathematics, universals, and religion. In section I, I frame this issue as a matter of accounting for intentionality. Here I follow Saul Kripke's lead. My departure therefrom follows: first, a criticism of Wittgenstein's “straight” conventionalism and, secondly, a defense of a solution Kripke rejects. I proceed under the assumption, borne out in the end, that statements of rule-following have truth-conditions and are not, as Kripke seems willing to concede, merely "assertible" in circumstances of a specified sort. Ultimately, my goal is to demonstrate that intending can be understood in terms of an individual's dispositions rather than those of the community to which she belongs.
In this paper, the relationship between the e-value of a complex hypothesis, H, and those of its constituent elementary hypotheses, Hj, j = 1, …, k, is analyzed, in the independent setup. The e-value of a hypothesis H, ev, is a Bayesian epistemic, credibility or truth value defined under the Full Bayesian Significance Testing (FBST) mathematical apparatus. The questions addressed concern the important issue of how the truth value of H, and the truth function of the corresponding FBST structure M, relate to the truth values of its elementary constituents, Hj, and to the truth functions of their corresponding FBST structures Mj, respectively.
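For orientation, here is a minimal sketch of the e-value in the standard Pereira–Stern formulation; the notation is illustrative, and the paper's own definitions govern.

```latex
% Sketch of the Pereira--Stern e-value (illustrative notation):
% posterior density p_n over parameter space \Theta, reference density r,
% hypothesis set \Theta_H \subset \Theta.
\[
s(\theta) = \frac{p_n(\theta)}{r(\theta)}, \qquad
s^{*} = \sup_{\theta \in \Theta_H} s(\theta), \qquad
T = \{\theta \in \Theta : s(\theta) > s^{*}\},
\]
\[
\overline{\mathrm{ev}}(H) = \int_{T} p_n(\theta)\, d\theta, \qquad
\mathrm{ev}(H) = 1 - \overline{\mathrm{ev}}(H).
\]
```

Intuitively, ev(H) is high when no point outside H is much more "surprising" (posterior-dense relative to the reference) than the best point inside H.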
What if your peers tell you that you should disregard your perceptions? Worse, what if your peers tell you to disregard the testimony of your peers? How should we respond if we get evidence that seems to undermine our epistemic rules? Several philosophers have argued that some epistemic rules are indefeasible. I will argue that all epistemic rules are defeasible. The result is a kind of epistemic particularism, according to which there are no simple rules connecting descriptive and normative facts. I will argue that this type of particularism is more plausible in epistemology than in ethics. The result is an unwieldy and possibly infinitely long epistemic rule — an Uber-rule. I will argue that the Uber-rule applies to all agents, but is still defeasible — one may get misleading evidence against it and rationally lower one’s credence in it.
The widely discussed "discursive dilemma" shows that majority voting in a group of individuals on logically connected propositions may produce irrational collective judgments. We generalize majority voting by considering quota rules, which accept each proposition if and only if the number of individuals accepting it exceeds a given threshold, where different thresholds may be used for different propositions. After characterizing quota rules, we prove necessary and sufficient conditions on the required thresholds for various collective rationality requirements. We also consider sequential quota rules, which ensure collective rationality by adjudicating propositions sequentially and letting earlier judgments constrain later ones. Sequential rules may be path-dependent and strategically manipulable. We characterize path-independence and prove its essential equivalence to strategy-proofness. Our results shed light on the rationality of simple-, super-, and sub-majoritarian decision-making.
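As a toy illustration of the mechanism (a hypothetical Python sketch, not the authors' formal framework; propositions are strings and thresholds are given as minimal acceptance counts), a quota rule accepts a proposition exactly when enough individuals accept it. With majority thresholds, the example profile also reproduces the discursive dilemma:

```python
from typing import Dict, List, Set

def quota_rule(profile: List[Set[str]], quotas: Dict[str, int]) -> Set[str]:
    """Accept each proposition iff at least quotas[p] individuals accept it."""
    accepted: Set[str] = set()
    for prop, quota in quotas.items():
        count = sum(1 for judgment in profile if prop in judgment)
        if count >= quota:
            accepted.add(prop)
    return accepted

# Three individuals judging p, q, and the conjunction p&q.
# Individual 1 accepts all three; 2 accepts only p; 3 accepts only q.
profile = [{"p", "q", "p&q"}, {"p"}, {"q"}]
majority = {"p": 2, "q": 2, "p&q": 2}   # strict majority of n = 3

# Prints {'p', 'q'}: p and q are accepted, p&q is not, so the collective
# judgment set is logically inconsistent — the discursive dilemma.
print(quota_rule(profile, majority))
```

Raising the quota for p&q cannot by itself restore consistency; this is why the paper derives conditions on the thresholds under which collective rationality is guaranteed.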
In Tractatus 5.132 Wittgenstein argues that inferential justification depends solely on the understanding of the premises and conclusion, and is not mediated by any further act. On this basis he argues that Frege’s and Russell’s rules of inference are “senseless” and “superfluous”. This line of argument is puzzling, since it is unclear that there could be any viable account of inference according to which no such mediation takes place. I show that Wittgenstein’s rejection of rules of inference can be motivated by taking account of his holistic construal of the relation between inference and understanding.
Predicativists hold that proper names have predicate-type semantic values. They face an obvious challenge: in many languages names normally occur as, what appear to be, grammatical arguments. The standard version of predicativism answers this challenge by positing an unpronounced determiner in bare occurrences. I argue that this is a mistake. Predicativists should draw a distinction between two kinds of semantic type—underived semantic type and derived semantic type. The predicativist thesis concerns the underived semantic type of proper names and underdetermines a view about the semantic type of bare occurrences. I’ll argue that predicativists should hold that bare names are derived individual-denoting expressions. I end by considering what this result means for the relationship between predicativism and other metalinguistic theories of names.
1. Do models formulated in programming languages use explicit rules where connectionist models do not?
2. Are rules as found in programming languages hard, precise, and exceptionless, where connectionist rules are not?
3. Do connectionist models use rules operating on distributed representations where models formulated in programming languages do not?
4. Do connectionist models fail to use structure-sensitive rules of the sort found in "classical" computer architectures?
In this chapter we argue that the answer to each of these questions is negative.
This paper presents a new reconstruction of Wittgenstein’s famous (and controversial) rule-following arguments. Our reconstruction offers two novel features. The first is a shift of the central focus of the discussion, from general semantics and the philosophy of mind to the philosophy of mathematics and the rejection of the notion of a function. The second is positive: we argue that Wittgenstein offers us a new, alternative notion of a rule (to replace the rejected functions), a notion reminiscent of Category Theory’s notion of a morphism.
Undeniably, one of the greatest ways for a Muslim to draw closer to Allah is recitation of the Holy Quran in accordance with the method conveyed from the Messenger of Allah, Mohammed, with respect to the articulation points of the letters and the intrinsic and transient characteristics of the letters. There is therefore a persistent need to teach all Muslims the science of Tajweed Al-Quran. An Intelligent Tutoring System (ITS) is computer software that supplies direct and tailored instruction or feedback to students without the intervention of a human teacher. The main aim of an ITS is to smooth the learning process using the wide-ranging facilities of the computer. The proposed system is implemented using the ITSB authoring tool. In this thesis, the researcher presents an intelligent tutoring system for teaching recitation of Al-Quran ("Tajweed") with the Rewaya of Hafs from ‘Aasem by the way of Shatebiyyah. Combining the science of Tajweed Al-Quran with the science of artificial intelligence is a novel feature of the thesis. The researcher arranged the material into chapters, lessons, and examples, then added all of these to the proposed system. He also added questions, correct answers, and a difficulty level for each lesson. He prepared an exam for each chapter and a final exam to test the learner's knowledge of the whole material. The system was evaluated by teachers and students of the science of recitation, and the outcome of the assessment was encouraging and promising.
At the core of republican thought, on Philip Pettit’s account, lies the conception of freedom as non-domination, as opposed to freedom as noninterference in the liberal sense. I revisit the distinction between liberal and republican freedom and argue that republican freedom incorporates a particular rule-of-law requirement, whereas liberal freedom does not. Liberals may also endorse such a requirement, but not as part of their conception of freedom itself. I offer a formal analysis of this rule-of-law requirement and compare liberal and republican freedom on its basis. While I agree with Pettit that republican freedom has broader implications than liberal freedom, I conclude that we face a trade-off between two dimensions of freedom (scope and robustness) and that it is harder for republicans to solve that trade-off than it is for liberals. Key Words: freedom • republicanism • liberalism • noninterference • non-domination • rule of law • robustness • liberal paradox.
This is a book about duties to help others. When do you have to sacrifice life and limb, time and money, to prevent harm to others? When must you save more people rather than fewer? These questions arise in emergencies involving nearby strangers who are drowning or trapped in burning buildings. But they also arise in our everyday lives, in which we have constant opportunities to give time or money to help distant strangers in need of food, shelter, or medical care. With the resources available to you, you can provide more help or less. This book argues that it is often wrong to provide less help rather than more, even when the personal sacrifice involved makes it permissible not to help at all. It shows that helping distant strangers by donating or volunteering is morally more like rescuing nearby strangers than most of us realize. The ubiquity of opportunities to help others threatens to make morality extremely demanding, and the book argues that it is only thanks to adequate permissions grounded in considerations of cost and autonomy that we may pursue our own plans and projects. It concludes that many of us are required to provide no less help over our lives than we would have done if we were effective altruists.
In ‘Measuring the Consequences of Rules’, Holly Smith presents two problems involving the indeterminacy of compliance, which she takes to be fatal for all forms of rule-utilitarianism. In this reply, I attempt to dispel both problems.
It is sometimes said that permitting, say, voluntary euthanasia would erode the motivations and inhibitions supporting other, legitimate prohibitions on killing, to the point where widespread disregard for the moral law would result. This paper discusses the relevance of such "slippery slope" arguments for the rule-utilitarian who claims that we can assess moral rules by asking whether their acceptance would maximize utility. First, it is argued that any normative theory of this type cannot recognize slope arguments as legitimate considerations in this assessment. Second, it is suggested that a theory based on the very different notion of choosing a moral code can permit slope arguments to weigh as relevant considerations.
The use of indoor wireless networking via Wi-Fi is increasing, and with it the range of Wi-Fi-enabled devices: smart mobiles, games consoles, security systems, tablet PCs, and smart TVs. The demand for Wi-Fi connections has thus increased rapidly. Rule-based systems are an essential method for putting human expertise to work in many challenging fields. In this paper, a rule-based system was designed and developed for diagnosing wireless connection problems and reaching a precise decision about the cause of each problem. The SL5 Object expert system language was used to develop the rule-based system. An evaluation of the rule-based system was carried out to test its accuracy, and the results were promising.
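The paper's knowledge base is written in SL5 Object; as a rough illustration of the rule-based approach only (hypothetical rules and symptom names, invented for this sketch, not the paper's actual knowledge base), a simple diagnostic step might look like this in Python:

```python
# Hypothetical sketch of a rule-based Wi-Fi diagnosis step. The rules and
# symptom names below are invented for illustration; the paper's actual
# knowledge base is implemented in the SL5 Object expert system language.
from typing import List, Set, Tuple

RULES: List[Tuple[Set[str], str]] = [
    # (required symptoms, diagnosis)
    ({"no_ip_address", "adapter_enabled"},
     "DHCP failure: renew the lease or check the router's DHCP server"),
    ({"weak_signal", "far_from_router"},
     "Out of range: move closer to the router or add an access point"),
    ({"wrong_password"},
     "Authentication failure: re-enter the network key"),
]

def diagnose(symptoms: Set[str]) -> str:
    """Fire the first rule whose conditions are all present in the symptoms."""
    for conditions, diagnosis in RULES:
        if conditions <= symptoms:   # every condition is satisfied
            return diagnosis
    return "No rule matched: escalate to a human expert"

print(diagnose({"no_ip_address", "adapter_enabled", "weak_signal"}))
```

A production expert system would add certainty factors, follow-up questions, and conflict resolution among rules, but the condition-action structure is the same.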
This paper considers a formalisation of classical logic using general introduction rules and general elimination rules. It proposes a definition of ‘maximal formula’, ‘segment’ and ‘maximal segment’ suitable to the system, and gives reduction procedures for them. It is then shown that deductions in the system convert into normal form, i.e. deductions that contain neither maximal formulas nor maximal segments, and that deductions in normal form satisfy the subformula property. Tarski’s Rule is treated as a general introduction rule for implication. The general introduction rule for negation has a similar form. Maximal formulas with implication or negation as main operator require reduction procedures of a more intricate kind not present in normalisation for intuitionist logic.
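For readers new to the format, a standard example of a general elimination rule from the literature following von Plato (illustrative, not necessarily the paper's exact formulation) is the rule for conjunction: from A ∧ B and a derivation of an arbitrary C from the assumptions A and B, conclude C, discharging those assumptions:

```latex
% General elimination rule for conjunction (illustrative; the assumptions
% A and B are discharged at the inference, marked by the label 1).
\[
\frac{A \land B
      \qquad
      \begin{matrix} [A]^{1} \quad [B]^{1} \\ \vdots \\ C \end{matrix}}
     {C}
\; {\scriptstyle \land E, 1}
\]
```

The ordinary projection rules (from A ∧ B infer A, or infer B) are recovered as the special cases where C is A or B itself.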
Normativists about belief hold that belief formation is essentially rule- or norm-guided. On this view, certain norms are constitutive of or essential to belief in such a way that no mental state not guided by those norms counts as a belief, properly construed. In recent influential work, Kathrin Glüer and Åsa Wikforss develop novel arguments against normativism. According to their regress of motivations argument, not all belief formation can be rule- or norm-guided, on pain of a vicious infinite regress. I argue that the regress of motivations argument is unsuccessful: an appeal to the notion of blind rule-following, drawn from a plausible interpretation of Ludwig Wittgenstein’s remarks on rule-following, stops the regress of motivations in its tracks.
AI is revolutionizing everyone’s life, and it is crucial that it does so in the right way. AI’s profound and far-reaching potential for transformation concerns the engineering of systems that have some degree of autonomous agency. This is epochal and requires establishing a new, ethical balance between human and artificial autonomy.
An examination of the role played by general rules in Hume's positive (nonskeptical) epistemology. General rules for Hume are roughly just general beliefs. The difference between justified and unjustified belief is a matter of the influence of good versus bad general rules, the good general rules being the "extensive" and "constant" ones.
A number of philosophers from Hobbes to Mill to Parfit have held some combination of the following views about the Golden Rule: (a) It is the cornerstone of morality across many if not all cultures. (b) It affirms the value of moral impartiality, and potentially the core idea of utilitarianism. (c) It is immune from evolutionary debunking, that is, there is no good naturalistic explanation for widespread acceptance of the Golden Rule, ergo the best explanation for its appearance in different traditions is that people have perceived the same non-natural moral truth. De Lazari-Radek and Singer employ all three of these claims in an argument meant to vindicate Sidgwick's ‘principle of universal benevolence’. I argue that the Golden Rule is the cornerstone of morality only in Christianity, it does not advocate moral impartiality, and there is a naturalistic explanation for why versions of the Golden Rule appear in different traditions.