The available resources for global health assistance are far outstripped by need. In the face of such scarcity, many people endorse a principle according to which highest priority should be given to the worst off. However, in order for this prioritarian principle to be useful for allocation decisions, policy-makers need to know what it means to be badly off. In this article, we outline a conception of disadvantage suitable for identifying the worst off for the purpose of making health resource allocation decisions. According to our total advantage view: the worst off are those who have the greatest total lifetime disadvantage; advantage foregone due to premature death should be treated in the same way as other ways of being disadvantaged at a time; how badly off someone is depends on the actual outcomes that will befall her without intervention, not her prospects at a time; and all significant forms of disadvantage count for determining who is worst off, not just disadvantage relating to health. We conclude by noting two important implications of the total advantage view: first, that those who die young are among the globally worst off, and second, that the epidemiological shift in the global burden of disease from communicable to non-communicable diseases should not lead to a corresponding shift in global health spending priorities.
Buddhist philosophy asserts that human suffering is caused by ignorance regarding the true nature of reality. According to this, perceptions and thoughts are largely fabrications of our own minds, based on conditioned tendencies which often involve problematic fears, aversions, compulsions, etc. In Buddhist psychology, these tendencies reside in a portion of mind known as Store consciousness. Here, I suggest a correspondence between this Buddhist Store consciousness and the neuroscientific idea of stored synaptic weights. These weights are strong synaptic connections built in through experience. Buddhist philosophy claims that humans can find relief from suffering through a process in which the Store consciousness is transformed. Here, I argue that this Buddhist 'transformation at the base' corresponds to a loosening of the learned synaptic connections. I will argue that Buddhist meditation practices create conditions in the brain which are optimal for diminishing the strength of our conditioned perceptual and behavioural tendencies.
Objective reasons are given by the facts. Subjective reasons are given by one’s perspective on the facts. Subjective reasons, not objective reasons, determine what it is rational to do. In this paper, I argue against a prominent account of subjective reasons. The problem with that account, I suggest, is that it makes what one has subjective reason to do, and hence what it is rational to do, turn on matters outside or independent of one’s perspective. After explaining and establishing this point, I offer a novel account of subjective reasons which avoids the problem.
A normative reason for a person to φ is a consideration which favours φing. A motivating reason is a reason for which, or on the basis of which, a person φs. This paper explores a connection between normative and motivating reasons. More specifically, it explores the idea that there are second-order normative reasons to φ for, or on the basis of, certain first-order normative reasons. In this paper, I challenge the view that there are second-order reasons so understood. I then show that prominent views in contemporary epistemology are committed to the existence of second-order reasons, specifically, views about the epistemic norms governing practical reasoning and about the role of higher-order evidence. If there are no second-order reasons, those views are mistaken.
Privacy and surveillance scholars increasingly worry that data collectors can use the information they gather about our behaviors, preferences, interests, incomes, and so on to manipulate us. Yet what it means, exactly, to manipulate someone, and how we might systematically distinguish cases of manipulation from other forms of influence—such as persuasion and coercion—has not been thoroughly enough explored in light of the unprecedented capacities that information technologies and digital media enable. In this paper, we develop a definition of manipulation that addresses these enhanced capacities, investigate how information technologies facilitate manipulative practices, and describe the harms—to individuals and to social institutions—that flow from such practices.

We use the term “online manipulation” to highlight the particular class of manipulative practices enabled by a broad range of information technologies. We argue that at its core, manipulation is hidden influence—the covert subversion of another person’s decision-making power. We argue that information technology, for a number of reasons, makes engaging in manipulative practices significantly easier, and it makes the effects of such practices potentially more deeply debilitating. And we argue that by subverting another person’s decision-making power, manipulation undermines his or her autonomy. Given that respect for individual autonomy is a bedrock principle of liberal democracy, the threat of online manipulation is a cause for grave concern.
Physicalism, the thesis that everything is physical, is one of the most controversial problems in philosophy. Its adherents argue that there is no more important doctrine in philosophy, whilst its opponents claim that its role is greatly exaggerated. In this superb introduction to the problem Daniel Stoljar focuses on three fundamental questions: the interpretation, truth and philosophical significance of physicalism. In answering these questions he covers the following key topics: (i) a brief history of physicalism and its definitions, (ii) what a physical property is and how physicalism meets challenges from empirical sciences, (iii) ‘Hempel’s dilemma’ and the relationship between physicalism and physics, (iv) physicalism and key debates in metaphysics and philosophy of mind, such as supervenience, identity and conceivability, and (v) physicalism and causality.

Additional features include chapter summaries, annotated further reading and a glossary of technical terms, making Physicalism ideal for those coming to the problem for the first time.
What is a normative reason for acting? In this paper, I introduce and defend a novel answer to this question. The starting-point is the view that reasons are right-makers. By exploring difficulties facing it, I arrive at an alternative, according to which reasons are evidence of respects in which it is right to perform an act, for example, that it keeps a promise. This is similar to the proposal that reasons for a person to act are evidence that she ought to do so; however, as I explain, it differs from that proposal in two significant ways. As a result, I argue, the evidence-based account of reasons I advance shares the advantages of its predecessor while avoiding many of the difficulties facing it.
What does the aesthetic ask of us? What claims do the aesthetic features of the objects and events in our environment make on us? My answer in this paper is: that depends. Aesthetic reasons can only justify feelings – they cannot demand them. A corollary of this is that there are no aesthetic obligations to feel, only permissions. However, I argue, aesthetic reasons can demand actions – they do not merely justify them. A corollary of this is that there are aesthetic obligations to act, not only permissions. So, I conclude, the aesthetic asks little of us as patients and much of us as agents.
Since 2016, when the Facebook/Cambridge Analytica scandal began to emerge, public concern has grown around the threat of “online manipulation”. While these worries are familiar to privacy researchers, this paper aims to make them more salient to policymakers — first, by defining “online manipulation”, thus enabling identification of manipulative practices; and second, by drawing attention to the specific harms online manipulation threatens. We argue that online manipulation is the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting their decision-making vulnerabilities. Engaging in such practices can harm individuals by diminishing their economic interests, but its deeper, more insidious harm is its challenge to individual autonomy. We explore this autonomy harm, emphasising its implications for both individuals and society, and we briefly outline some strategies for combating online manipulation and strengthening autonomy in an increasingly digital world.
There has been much debate over whether to accept the claim that meaning is normative. One obstacle to making progress in that debate is that it is not always clear what the claim amounts to. In this paper, I try to resolve a dispute between those who advance the claim concerning how it should be understood. More specifically, I critically examine two competing conceptions of the normativity of meaning, rejecting one and defending the other. Though the paper aims to settle a dispute among proponents of the claim that meaning is normative, it should be of interest to those who challenge it. After all, before one takes aim, one’s target needs to be in clear view.
Call the view that it is possible to acquire aesthetic knowledge via testimony, optimism, and its denial, pessimism. In this paper, I offer a novel argument for pessimism. It works by turning attention away from the basis of the relevant belief, namely, testimony, and toward what that belief in turn provides a basis for, namely, other attitudes. In short, I argue that an aesthetic belief acquired via testimony cannot provide a rational basis for further attitudes, such as admiration, and that the best explanation for this is that the relevant belief is not itself rational. If a belief is not rational, it is not knowledge. So, optimism is false. After addressing a number of objections to the argument, I consider briefly its bearing on the debate concerning thick evaluative concepts. While the aim is to argue that pessimism holds, not to explain why it holds, I provide an indication in closing of what that explanation might be.
Deontologists believe in two key exceptions to the duty to promote the good: restrictions forbid us from harming others, and prerogatives permit us not to harm ourselves. How are restrictions and prerogatives related? A promising answer is that they share a source in rights. I argue that prerogatives cannot be grounded in familiar kinds of rights, only in something much stranger: waivable rights against oneself.
Subjects appear to take only evidential considerations to provide reason or justification for believing. That is to say that subjects do not take practical considerations—the kind of considerations which might speak in favour of or justify an action or decision—to speak in favour of or justify believing. This is puzzling; after all, practical considerations often seem far more important than matters of truth and falsity. In this paper, I suggest that one cannot explain this, as many have tried, merely by appeal to the idea that belief aims only at the truth. I appeal instead to the idea that the aim of belief is to provide only practical reasons which might form the basis on which to act and to make decisions, an aim which is in turn dictated by the aim of action. This, I argue, explains why subjects take only evidential considerations to favour or justify believing. Surprisingly, then, it turns out that it is practical reason itself which demands that there be no practical reasons for belief.
The exclusion argument is widely thought to put considerable pressure on dualism if not to refute it outright. We argue to the contrary that, whether or not their position is ultimately true, dualists have a plausible response. The response focuses on the notion of ‘distinctness’ as it occurs in the argument: if 'distinctness' is understood one way, the exclusion principle on which the argument is founded can be denied by the dualist; if it is understood another way, the argument is not persuasive.
This paper explores the role of generics in social cognition. First, we explore the nature and effects of the most common form of generics about social kinds. Second, we discuss the nature and effects of a less common but equally important form of generics about social kinds. Finally, we consider the implications of this discussion for how we ought to use language about the social world.
An influential proposal is that knowledge involves safe belief. A belief is safe, in the relevant sense, just in case it is true in nearby metaphysically possible worlds. In this paper, I introduce a distinct but complementary notion of safety, understood in terms of epistemically possible worlds. The main aim, in doing so, is to add to the epistemologist’s tool-kit. To demonstrate the usefulness of the tool, I use it to advance and assess substantive proposals concerning knowledge and justification.
The distinction between objective and subjective reasons plays an important role in both folk normative thought and many research programs in metaethics. But the relation between objective and subjective reasons is unclear. This paper explores problems related to the unity of objective and subjective reasons for actions and attitudes and then offers a novel objectivist account of subjective reasons.
Translation from German to English by Daniel Fidel Ferrer.

What Does it Mean to Orient Oneself in Thinking?

German title: "Was heißt: sich im Denken orientieren?"

Published: October 1786, Königsberg in Prussia. By Immanuel Kant (born 1724, died 1804). Translated into English by Daniel Fidel Ferrer (March 17, 2014), the day of Holi in India.

From 1774 to about 1800, there were three intense philosophical and theological controversies underway in Germany, namely: the Fragments Controversy, the Pantheism Controversy, and the Atheism Controversy. Kant’s essay translated here is his response to the Pantheism Controversy. During this period (1770–1800), there was also the Sturm und Drang (Storm and Stress) movement, with thinkers like Johann Hamann, Johann Herder, Friedrich Schiller, and Johann Goethe, who were against the cultural movement of the Enlightenment (Aufklärung). Kant was on the side of the Enlightenment (see his An Answer to the Question: What is Enlightenment?, 1784).
The U.S. election in November 2016 raised and amplified doubts about first-past-the-post (“plurality rule”) electoral systems. Arguments against plurality rule and for alternatives like preferential voting tend to be consequentialist: it is argued that systems like preferential voting produce different, better outcomes. After briefly noting why the consequentialist case against plurality rule is more complex and contentious than it first appears, I offer an expressive alternative: plurality rule produces actual or apparent dilemmas for voters in ways that are morally objectionable, and avoidable under preferential voting systems. This expressive case against plurality rule is both simpler and more ecumenical than its consequentialist counterpart, and it provides strong reasons to prefer alternatives to plurality rule. Moreover, it suggests a distinct way of evaluating different alternatives like preferential voting.
We argue that generic generalizations about racial groups are pernicious in what they communicate (both to members of that racial group and to members of other racial groups), and may be central to the construction of social categories like racial groups. We then consider how we should change and challenge uses of generic generalizations about racial groups.
It is commonly said that some standards, such as morality, are ‘normatively authoritative’ in a way that other standards, such as etiquette, are not; standards like etiquette are said to be ‘not really normative’. Skeptics deny the very possibility of normative authority, and take claims like ‘etiquette is not really normative’ to be either empty or confused. I offer a different route to defeat skeptics about authority: instead of focusing on what makes standards like morality special, we should focus on what makes standards like etiquette ‘not really normative’. I defend a fictionalist theory on which etiquette is ‘not really normative’ in roughly the same way that Sherlock is ‘not really a detective’, and show that fictionalism about some normative standards helps us explain the possibility of normative authority.
Mark Schroeder has recently proposed a new analysis of knowledge. I examine that analysis and show that it fails. More specifically, I show that it faces a problem all too familiar from the post-Gettier literature, namely, that it delivers the wrong verdict in fake barn cases.
The last two decades have seen a surge of support for normative quietism: most notably, from Dworkin, Nagel, Parfit and Scanlon. Detractors like Enoch and McPherson object that quietism is incompatible with realism about normativity. The resulting debate has stagnated somewhat. In this paper I explore and defend a more promising way of developing that objection: I’ll argue that if normative quietism is true, we can create reasons out of thin air, so normative realists must reject normative quietism.
I argue that existing objectivist accounts of subjective reasons face systematic problems with cases involving probability and possibility. I then offer a diagnosis of why objectivists face these problems, and recommend that objectivists seek to provide indirect analyses of subjective reasons.
This paper reexamines Kierkegaard's work with respect to the question whether truth is one or many. I argue that his famous distinction between objective and subjective truth is grounded in a unitary conception of truth as such: truth as self-coincidence. By explaining his use in this context of the term ‘redoubling’ [Fordoblelse], I show how Kierkegaard can intelligibly maintain that truth is neither one nor many, neither a simple unity nor a complex multiplicity. I further show how these points shed much-needed light on the relationship between objective and subjective truth, conceived not as different kinds or species of truth but as different ways in which truth manifests itself as a standard of success across different contexts of inquiry.
Curry's paradox for "if... then..." concerns the paradoxical features of sentences of the form "If this very sentence is true, then 2+2=5". Standard inference principles lead us to the conclusion that such conditionals have true consequents: so, for example, 2+2=5 after all. There has been a lot of technical work done on formal options for blocking Curry paradoxes while only compromising a little on the various central principles of logic and meaning that are under threat.

Once we have a sense of the technical options, though, a philosophical choice remains. When dealing with puzzles in the logic of conditionals, a natural place to turn is independently motivated semantic theories of the behaviour of "if... then...". This paper argues that the closest-worlds approach outlined in Nolan 1997 offers a philosophically satisfying reason to deny conditional proof and so block the paradoxical Curry reasoning, and can give the verdict that standard Curry conditionals are false, along with related "contraction conditionals".
The expressivist advances a view about how we explain the meaning of a fragment of language, such as claims about what we morally ought to do. Critics evaluate expressivism on those terms. This is a serious mistake. We don’t just use that fragment of language in isolation. We make claims about what we morally, legally, rationally, and prudentially ought to do. To account for this linguistic phenomenon, the expressivist owes us an account not just of each fragment of language, but of how they weave together into a broader tapestry. I argue that expressivists face a dilemma in doing so: either they fail to explain the univocality of terms like 'ought', or they fail to explain when normative statements are and aren't inconsistent.
We defend Uniqueness, the claim that, given a body of total evidence, there is a unique doxastic state that it is rational for one to be in. Epistemic rationality doesn't give you any leeway in forming your beliefs. To this end, we bring in two metaepistemological pictures about the roles played by rational evaluations. Rational evaluative terms serve to guide our practices of deference to the opinions of others, and also to help us formulate contingency plans about what to believe in various situations. We argue that Uniqueness vindicates these two roles for rational evaluations, while Permissivism clashes with them.
This collection of essays explores the metaphysical thesis that the living world is not made up of substantial particles or things, as has often been assumed, but is rather constituted by processes. The biological domain is organised as an interdependent hierarchy of processes, which are stabilised and actively maintained at different timescales. Even entities that intuitively appear to be paradigms of things, such as organisms, are actually better understood as processes. Unlike previous attempts to articulate processual views of biology, which have tended to use Alfred North Whitehead’s panpsychist metaphysics as a foundation, this book takes a naturalistic approach to metaphysics. It submits that the main motivations for replacing an ontology of substances with one of processes are to be found in the empirical findings of science. Biology provides compelling reasons for thinking that the living realm is fundamentally dynamic, and that the existence of things is always conditional on the existence of processes. The phenomenon of life cries out for theories that prioritise processes over things, and it suggests that the central explanandum of biology is not change but rather stability, or more precisely, stability attained through constant change. This edited volume brings together philosophers of science and metaphysicians interested in exploring the consequences of a processual philosophy of biology. The contributors draw on an extremely wide range of biological case studies, and employ a process perspective to cast new light on a number of traditional philosophical problems, such as identity, persistence, and individuality.
The dilemma of free will is that if actions are caused deterministically, then they are not free, and if they are not caused deterministically then they are not free either, because then they happen by chance and are not up to the agent. I propose a conception of free will that solves this dilemma. It can be called agent causation, but it differs from the views that Chisholm and others have defended under that name.
There are at least two threads in our thought and talk about rationality, both practical and theoretical. In one sense, to be rational is to respond correctly to the reasons one has. Call this substantive rationality. In another sense, to be rational is to be coherent, or to have the right structural relations hold between one’s mental states, independently of whether those attitudes are justified. Call this structural rationality. According to the standard view, structural rationality is associated with a distinctive set of requirements that mandate or prohibit certain combinations of attitudes, and it’s in virtue of violating these requirements that incoherent agents are irrational. I think the standard view is mistaken. The goal of this paper is to explain why, and to motivate an alternative account: rather than corresponding to a set of law-like requirements, structural rationality should be seen as corresponding to a distinctive kind of pro tanto rational pressure—i.e. something that comes in degrees, having both magnitude and direction. Something similar is standardly assumed to be true of substantive rationality. On the resulting picture, each dimension of rational evaluation is associated with a distinct kind of rational pressure—substantive rationality with (what I call) justificatory pressure and structural rationality with attitudinal pressure. The former is generated by one’s reasons while the latter is generated by one’s attitudes. Requirements turn out to be at best a footnote in the theory of rationality.
This essay offers an account of Kierkegaard’s view of the limits of thought and of what makes this view distinctive. With primary reference to Philosophical Fragments, and its putative representation of Christianity as unthinkable, I situate Kierkegaard’s engagement with the problem of the limits of thought, especially with respect to the views of Kant and Hegel. I argue that Kierkegaard builds in this regard on Hegel’s critique of Kant but that, against Hegel, he develops a radical distinction between two types of thinking and inquiry: the ‘aesthetic-intellectual’ and the ‘ethico-religious’. I clarify this distinction and show how it guides Kierkegaard’s conception of a form of philosophical practice that involves drawing limits to the proper sphere of disinterested contemplation. With reference to two rival interpretations of Kierkegaard’s approach to the limits of thought—which I call ‘bullet-biting’ and ‘relativizing’—I further show how my ‘disambiguating’ account can better explain how, and why, his work courts a form of self-referential incoherence, in which it appears that certain limits of thought are at once affirmed and violated.
This paper draws on the notion of the ‘project,’ as developed in the existential philosophy of Heidegger and Sartre, to articulate an understanding of the existential structure of engagement with virtual worlds. By this philosophical understanding, the individual’s orientation towards a project structures a mechanism of self-determination, meaning that the project is understood essentially as the project to make oneself into a certain kind of being. Drawing on existing research from an existential-philosophical perspective on subjectivity in digital game environments, the notion of a ‘virtual subjectivity’ is proposed to refer to the subjective sense of being-in-the-virtual-world. The paper proposes an understanding of virtual subjectivity as standing in a nested relation to the individual’s subjectivity in the actual world, and argues that it is this relation that allows virtual world experience to gain significance in the light of the individual’s projectual existence. The arguments advanced in this paper pave the way for a comprehensive understanding of the transformative, self-transformative, and therapeutic possibilities and advantages afforded by virtual worlds.
For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, artificial intelligence and machine learning (AI/ML) makes it possible for those options and their framings—the choice architectures—to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies we pay them little notice. They are, as philosophers of technology put it, transparent to us—effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.
A principle endorsed by many theories of objective chance, and practically forced on us by the standard interpretation of the Kolmogorov semantics for chance, is the principle that when a proposition P has a chance, any proposition Q that is necessarily equivalent to P will have the same chance as P. Call this principle SUB (for the substitution of necessary equivalents into chance ascriptions). I will present some problems for a theory of chance, and will argue that the best way to resolve these problems is to reject SUB, and similar principles, e.g. for the chances of outcomes or the chances of events. Objective chance, it turns out, carves things more finely than necessary equivalence does.
Sarah McGrath argues that moral perception has an advantage over its rivals in its ability to explain ordinary moral knowledge. I disagree. After clarifying what the moral perceptualist is and is not committed to, I argue that rival views are both more numerous and more plausible than McGrath suggests: specifically, I argue that inferentialism can be defended against McGrath’s objections; if her arguments against inferentialism succeed, we should accept a different rival that she neglects, intuitionism; and, reductive epistemologists can appeal to non-naturalist commitments to avoid McGrath’s counterexamples.
The concept of mechanism in biology has three distinct meanings. It may refer to a philosophical thesis about the nature of life and biology (‘mechanicism’), to the internal workings of a machine-like structure (‘machine mechanism’), or to the causal explanation of a particular phenomenon (‘causal mechanism’). In this paper I trace the conceptual evolution of ‘mechanism’ in the history of biology, and I examine how the three meanings of this term have come to be featured in the philosophy of biology, situating the new ‘mechanismic program’ in this context. I argue that the leading advocates of the mechanismic program (i.e., Craver, Darden, Bechtel, etc.) inadvertently conflate the different senses of ‘mechanism’. Specifically, they all inappropriately endow causal mechanisms with the ontic status of machine mechanisms, and this invariably results in problematic accounts of the role played by mechanism-talk in scientific practice. I suggest that for effective analyses of the concept of mechanism, causal mechanisms need to be distinguished from machine mechanisms, and the new mechanismic program in the philosophy of biology needs to be demarcated from the traditional concerns of mechanistic biology.
Well-being measurements are frequently used to support conclusions about a range of philosophically important issues. This is a problem, because we know too little about the intervals of the relevant scales. I argue that it is plausible that well-being measurements are non-linear, and that common beliefs that they are linear are not truth-tracking, so we are not justified in believing that well-being scales are linear. I then argue that this undermines common appeals to both hypothetical and actual well-being measurements; I first focus on the philosophical literature on prioritarianism and then discuss Kahneman’s Peak-End Rule as a systematic bias. Finally, I discuss general implications for research on well-being, and suggest a better way of representing scales.
Supererogatory acts—good deeds “beyond the call of duty”—are a part of moral common sense, but conceptually puzzling. I propose a unified solution to three of the most infamous puzzles: the classic Paradox of Supererogation (if it’s so good, why isn’t it just obligatory?), Horton’s All or Nothing Problem, and Kamm’s Intransitivity Paradox. I conclude that supererogation makes sense if, and only if, the grounds of rightness are multi-dimensional and comparative.
It is well known that classical, aka ‘sharp’, Bayesian decision theory, which models belief states as single probability functions, faces a number of serious difficulties with respect to its handling of agnosticism. These difficulties have led to the increasing popularity of so-called ‘imprecise’ models of decision-making, which represent belief states as sets of probability functions. In a recent paper, however, Adam Elga has argued in favour of a putative normative principle of sequential choice that he claims to be borne out by the sharp model but not by any promising incarnation of its imprecise counterpart. After first pointing out that Elga has fallen short of establishing that his principle is indeed uniquely borne out by the sharp model, I cast aspersions on its plausibility. I show that a slight weakening of the principle is satisfied by at least one, but interestingly not all, varieties of the imprecise model and point out that Elga has failed to motivate his stronger commitment.
In a paper in this journal, I defend the view that truth is the fundamental norm for assertion and, in doing so, reject the view that knowledge is the fundamental norm for assertion. In a recent response, Littlejohn raises a number of objections against my arguments. In this reply, I argue that Littlejohn’s objections are unsuccessful.
I argue that semantics is the study of the proprietary database of a centrally inaccessible and informationally encapsulated input–output system. This system’s role is to encode and decode partial and defeasible evidence of what speakers are saying. Since information about nonlinguistic context is therefore outside the purview of semantic processing, a sentence’s semantic value is not its content but a partial and defeasible constraint on what it can be used to say. I show how to translate this thesis into a detailed compositional-semantic theory based on the influential framework of Heim and Kratzer. This approach situates semantics within an independently motivated account of human cognitive architecture and reveals the semantics–pragmatics interface to be grounded in the underlying interface between modular and central systems.
First-order evidence is evidence which bears on whether a proposition is true. Higher-order evidence is evidence which bears on whether a person is able to assess her evidence for or against a proposition. A widespread view is that higher-order evidence makes a difference to whether it is rational for a person to believe a proposition. In this paper, I consider in what way higher-order evidence might do this. More specifically, I consider whether and how higher-order evidence plays a role in determining what it is rational to believe distinct from that which first-order evidence plays. To do this, I turn to the theory of reasons, and try to situate higher-order evidence within it. The only place I find for it there, distinct from that which first-order evidence already occupies, is as a practical reason, that is, as a reason for desire or action. One might take this to show either that the theory of reasons is inadequate as it stands or that higher-order evidence makes no distinctive difference to what it is rational to believe. I tentatively endorse the second option.
This paper offers an appraisal of Philip Pettit's approach to the problem of how a merely finite set of examples can serve to represent a determinate rule, given that indefinitely many rules can be extrapolated from any such set. I argue that Pettit's so-called ethnocentric theory of rule-following fails to deliver the solution to this problem he sets out to provide. More constructively, I consider what further provisions are needed in order to advance Pettit's general approach to the problem. I conclude that what is needed is an account that, whilst it affirms the view that agents' responses are constitutively involved in the exemplification of rules, does not allow such responses the pride of place they have in Pettit's theory.
In the first part of this essay (Sections I and II), I argue that Kierkegaard's work helps us to articulate and defend two basic requirements on searching for knowledge of one's own judgements: first, that searching for knowledge whether one judges that P requires trying to make a judgement whether P; and second that, in an important range of cases, searching for knowledge of one's own judgements requires attending to how one's acts of judging are performed. In the second part of the essay (Sections III and IV), I consider two prima facie problems regarding this conception of searching for knowledge of one's own judgements. The first problem concerns how in general one can coherently try to meet both these requirements at once; the second, how in particular one can try to attend to one's own acts of judging. I show how Kierkegaard's work is alive to both these problems, and helps us to see how they can be resolved.