I argue in this paper that Locke and contemporary Lockeans underestimate the problems involved in their frequent, implicit assumption that when we apply the proviso we use the latest scientific knowledge of natural resources, technology, and the economy’s operations. Problematic for these theories is that much of the pertinent knowledge used is obtained through particular persons’ labor. If the knowledge obtained through individuals’ labor must be made available to everyone and if particular persons’ new knowledge affects the proviso’s proper application, then some end up without freedom to pursue their own ends and some find their freedom subject to others’ arbitrary will.
Habermas’ ‘ethics of citizenship’ raises a number of relevant concerns about the dangers of a secularistic exclusion of religious contributions to public deliberation, on the one hand, and the dangers of religious conflict and sectarianism in politics, on the other. Agreeing largely with these concerns, the paper identifies four problems with Habermas’ approach, and attempts to overcome them: the full exclusion of religious reasons from parliamentary debate; the full inclusion of religious reasons in the informal public sphere; the philosophical distinction between secular and religious reasons; and the sociological distinction between ‘Western’ and ‘non-Western’ religions. The result is a revised version of the ethics of citizenship, which I call moderate inclusivism. Most notably, moderate inclusivism implies a replacement of Habermas’ ‘institutional translation proviso’ with a more flexible ‘conversational translation proviso’.
The Proviso Problem is the discrepancy between the predictions of nearly every major theory of semantic presupposition about what is semantically presupposed by conditionals, disjunctions, and conjunctions, versus observations about what speakers of certain sentences are felt to be presupposing. I argue that the Proviso Problem is a more serious problem than has been widely recognized. After briefly describing the problem and two standard responses to it, I give a number of examples which, I argue, show that those responses are inadequate. I conclude by briefly exploring alternate approaches to presupposition that avoid this problem.
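To make the predicted-versus-felt discrepancy concrete, the sketch below gives a standard illustration of the kind usually discussed under this label; the particular sentence and the notation are my own gloss, not examples drawn from this abstract.

```latex
% For a conditional whose consequent presupposes p, satisfaction-based theories
% predict only the conditional presupposition (antecedent -> p), yet hearers
% typically take the speaker to presuppose p outright.
\begin{align*}
&\text{Uttered: ``If John is a scuba diver, he will bring his wetsuit.''}\\
&\text{Predicted presupposition:}\quad \mathit{diver}(j) \rightarrow \mathit{hasWetsuit}(j)\\
&\text{Felt presupposition:}\quad \mathit{hasWetsuit}(j)
\end{align*}
```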
Two recent and influential papers, van Rooij 2007 and Lassiter 2012, propose solutions to the proviso problem that make central use of related notions of independence—qualitative in the first case, probabilistic in the second. We argue here that, if these solutions are to work, they must incorporate an implicit assumption about presupposition accommodation, namely that accommodation does not interfere with existing qualitative or probabilistic independencies. We show, however, that this assumption is implausible, as updating beliefs with conditional information does not in general preserve independencies. We conclude that the approach taken by van Rooij and Lassiter does not succeed in resolving the proviso problem.
I propose a new theory of semantic presupposition, which I call dissatisfaction theory. I first briefly review a cluster of problems – known collectively as the proviso problem – for most extant theories of presupposition, arguing that the main pragmatic response to them faces a serious challenge. I avoid these problems by adopting two changes in perspective on presupposition. First, I propose a theory of projection according to which presuppositions project unless they are locally entailed. Second, I reject the standard assumption that presuppositions are contents which must be entailed by the input context; instead, I propose that presuppositions are contents which are marked as backgrounded. I show that, together, these commitments allow us to avoid the proviso problem altogether, and generally make plausible predictions about presupposition projection out of connectives and attitude predicates. I close by sketching a two-dimensional implementation of my theory which allows us to make further welcome predictions about attitude predicates and quantifiers.
"Procedural Justice" offers a theory of procedural fairness for civil dispute resolution. The core idea behind the theory is the procedural legitimacy thesis: participation rights are essential for the legitimacy of adjudicatory procedures. The theory yields two principles of procedural justice: the accuracy principle and the participation principle. The two principles require a system of procedure to aim at accuracy and to afford reasonable rights of participation qualified by a practicability constraint. The Article begins in Part I, Introduction, with two (...) observations. First, the function of procedure is to particularize general substantive norms so that they can guide action. Second, the hard problem of procedural justice corresponds to the following question: How can we regard ourselves as obligated by legitimate authority to comply with a judgment that we believe (or even know) to be in error with respect to the substantive merits? The theory of procedural justice is developed in several stages, beginning with some preliminary questions and problems. The first question - what is procedure? - is the most difficult and requires an extensive answer: Part II, Substance and Procedure, defines the subject of the inquiry by offering a new theory of the distinction between substance and procedure that acknowledges the entanglement of the action-guiding roles of substantive and procedural rules while preserving the distinction between two ideal types of rules. The key to the development of this account of the nature of procedure is a thought experiment, in which we imagine a world with the maximum possible acoustic separation between substance and procedure. Part III, The Foundations of Procedural Justice, lays out the premises of general jurisprudence that ground the theory and answers a series of objections to the notion that the search for a theory of procedural justice is a worthwhile enterprise. Sections II and III set the stage for the more difficult work of constructing a theory of procedural legitimacy. Part IV, Views of Procedural Justice, investigates the theories of procedural fairness found explicitly or implicitly in case law and commentary. After a preliminary inquiry that distinguishes procedural justice from other forms of justice, Part IV focuses on three models or theories. The first, the accuracy model, assumes that the aim of civil dispute resolution is correct application of the law to the facts. The second, the balancing model, assumes that the aim of civil procedure is to strike a fair balance between the costs and benefits of adjudication. The third, the participation model, assumes that the very idea of a correct outcome must be understood as a function of process that guarantees fair and equal participation. Part IV demonstrates that none of these models provides the basis for a fully adequate theory of procedural justice. In Part V, The Value of Participation, the lessons learned from analysis and critique of the three models are then applied to the question whether a right of participation can be justified for reasons that are not reducible to either its effect on the accuracy or its effect on the cost of adjudication. The most important result of Part V is the Participatory Legitimacy Thesis: it is (usually) a condition for the fairness of a procedure that those who are to be finally bound shall have a reasonable opportunity to participate in the proceedings. 
The central normative thrust of Procedural Justice is developed in Part VI, Principles of Procedural Justice. The first principle, the Participation Principle, stipulates a minimum (and minimal) right of participation, in the form of notice and an opportunity to be heard, that must be satisfied (if feasible) in order for a procedure to be considered fair. The second principle, the Accuracy Principle, specifies the achievement of legally correct outcomes as the criterion for measuring procedural fairness, subject to four provisos, each of which sets out circumstances under which a departure from the goal of accuracy is justified by procedural fairness itself. In Part VII, The Problem of Aggregation, the Participation Principle and the Accuracy Principle are applied to the central problem of contemporary civil procedure - the aggregation of claims in mass litigation. Part VIII offers some concluding observations about the point and significance of Procedural Justice.
It has been argued that the fundamental laws of physics do not face a ‘problem of provisos’ equivalent to that found in other scientific disciplines (Earman, Roberts and Smith 2002) and there is only the appearance of exceptions to physical laws if they are confused with differential equations of evolution type (Smith 2002). In this paper I argue that even if this is true, fundamental laws in physics still pose a major challenge to standard Humean approaches to lawhood, as they are not in any obvious sense about regularities in behaviour. A Humean approach to physical laws with exceptions is possible, however, if we adopt a view of laws that takes them to be the algorithms in the algorithmic compressions of empirical data. When this is supplemented with a distinction between lossy and lossless compression, we can explain exceptions in terms of compression artefacts present in the application of the lossy laws.
The paper discusses the possibility that the benefits of pharmacogenomics will not be distributed equally and will create orphan populations. I argue that since these inequalities are not substantially different from those produced by ‘traditional’ drugs and are not generated with the intention to discriminate, their production need not be unethical. Still, the final result goes against deep-seated moral feelings and intuitions, as well as broadly accepted principles of just distribution of health outcomes and healthcare. I thus propose two provisos that would prevent the most offensive outcomes and moderate the scope of the produced inequalities. The first proviso rejects pharmacogenomics innovations that worsen existing group inequalities and aggravate the disadvantage of communities with a history of discrimination. The second proviso requires that there is a strategy in place to even out as much as possible the distribution of benefits in the future and that a system of compensations is in place for pharmacogenomic orphans. Given that only one moral problem generated by pharmacogenomics has been tackled, the list of provisos might be expanded when other issues are considered.
In this chapter, I argue that John Rawls’ later work presents one of the most fruitful liberal frameworks from which to approach global cultural diversity. In his Law of Peoples (1999), the normative architecture Rawls provides is much more open to an intercultural/religious dialogue with various non-Western communities, such as the First Nations, than are other liberal approaches. Surprisingly, this has gone unnoticed in the literature on multiculturalism. At the same time, Rawls’ framework is not problem free. Here, I am concerned with Rawls’ conception of overlapping consensus as political, rather than comprehensive; or the idea that dialogue and discussion concerning issues of justice must necessarily, as a matter of principle, exclude philosophical or religious reasons. I argue that this constraint will only add to the unfair exclusion of legitimate concerns. I demonstrate this in Rawls' discussion of so-called non-public spiritual perspectives of land, non-human animals, and the environment – views strikingly similar to those held by many Indigenous and Native Americans. Rawls’ framework unjustifiably and unjustly excludes such views from participation in the public and political realm. In the context of a globally diverse world, and in light of a history of Western colonialism, oppression, and domination of Indigenous peoples, I would argue that justice and fairness require that such others at least be able to articulate their concerns in the public and political domain, according to their own self-understandings and as they see fit, even if we do not agree with these. I argue that Rawls’ proviso is insufficient as a response.
State of nature theories have a long history and play a lively role in contemporary work. Theories of this kind share certain nontrivial commitments. Among these are commitments to inclusion of a Lockean proviso among the principles of justice and to an assumption of invariance of political principles across changes of circumstances. In this article I want to look at those two commitments and bring to light what I believe are some important difficulties they engender. For nonpattern state of nature theories, the justness of a society is marked by the conformance of the society to procedural principles. Distributions of resources and the like have no particular import for questions of justice. Whatever may later result, so long as it came about in accordance with the rules determined by the principles of justice, is itself just. The Lockean proviso is one of the principles of justice governing property and other rights of nonpattern theories of justice. The proviso hangs as a "shadow" over the results of the operation of the other (usual) principles of justice. It is intended to remedy a complaint which arises when the positions of those no longer at liberty to use some resource are worsened (1) by no longer being able to use freely what they previously were free to use and (2) in such a way that they fall below a "baseline." Following Locke, a traditional formulation of the proviso is to allow acquisition just so long as there is "enough and as good" left over for others. Section I concerns the relation of the Lockean proviso to pattern and nonpattern principles of justice, demonstrating that a Lockean proviso turns a nonpattern into a pattern theory of justice. Section II is about the relation of the Lockean proviso to the ideas revealed by an examination of a state of nature, suggesting reasons to reject ideal theories of justice.
A private property account is central to a liberal theory of justice. Much of the appeal of the Lockean theory stems from its account of the so-called ‘enough-and-as-good’ proviso, a principle which aims to specify each employable person's fair share of the earth's material resources. I argue that to date Lockeans have failed to show how the proviso can be applied without thereby undermining a guiding intuition in Lockean theory. This guiding intuition is that by interacting in accordance with the proviso persons interact as free and equal, or as reciprocally subject to the ‘laws of nature’ rather than as subject to one another's arbitrary will. Because Locke's own and contemporary Lockean conceptions of the proviso subject some persons to some other persons' arbitrary will, the proviso so conceived cannot function as it should, namely as a principle that restricts interacting persons' actions reciprocally and thereby enables Lockean freedom under law.
Does intellectual property satisfy the requirements of the Lockean proviso, that the appropriator leave “enough and as good” or that he at least not “deprive others”? If an author's appropriation of a work he has just created is analogous to a drinker “taking a good draught” in the flow of an inexhaustible river, or to someone magically “causing springs of water to flow in the desert,” how could it not satisfy the Lockean proviso?
This paper explores the implications of libertarianism for welfare policy. There are two central arguments. First, the paper argues that if one adopts a libertarian framework, it makes most sense to be a Lockean right-libertarian. Second, the paper argues that this form of libertarianism leads to the endorsement of a fairly extensive set of redistributive welfare programs. Specifically, the paper argues that Lockean right-libertarians are committed to endorsing welfare programs under which the receipt of benefits is conditional on meeting a work requirement, and also endorsing some form of publicly funded jobs of last resort for potential welfare recipients.
Matt Zwolinski argues that libertarians “should see the Basic Income Guarantee (BIG)—a guarantee that all members will receive income regardless of why they need it—as an essential part of an ideally just libertarian system.” He regards the satisfaction of a Lockean proviso—a stipulation that individuals may not be rendered relevantly worse off by the uses and appropriations of private property—as a necessary condition for a private property system’s being just. BIG is to be justified precisely because it prevents proviso violations. We deem Zwolinski’s argument a “Direct Proviso-Based Argument” for BIG. We argue that because this sort of argument for the BIG is in tension with other principles libertarians within the Lockean tradition hold dear, specifically prohibitions on seizing legitimately held property and forcing individuals to labor, the Direct Proviso-Based Argument fails.
One of the reasons why most of us feel puzzled about the problem of abortion is that we want, and do not want, to allow to the unborn child the rights that belong to adults and children. When we think of a baby about to be born it seems absurd to think that the next few minutes or even hours could make so radical a difference to its status; yet as we go back in the life of the fetus we are more and more reluctant to say that this is a human being and must be treated as such. No doubt this is the deepest source of our dilemma, but it is not the only one. For we are also confused about the general question of what we may and may not do where the interests of human beings conflict. We have strong intuitions about certain cases; saying, for instance, that it is all right to raise the level of education in our country, though statistics allow us to predict that a rise in the suicide rate will follow, while it is not all right to kill the feeble-minded to aid cancer research. It is not easy, however, to see the principles involved, and one way of throwing light on the abortion issue will be by setting up parallels involving adults or children once born. So we will be able to isolate the “equal rights” issue and should be able to make some advance...
Recent debate in the literature on political obligation about the principle of fairness rests on a mistake. Despite the widespread assumption to the contrary, a person can have a duty of fairness to share in the burdens of sustaining some cooperative scheme even though that scheme does not represent a net benefit to her. Recognizing this mistake allows for a resolution of the stalemate between those who argue that the mere receipt of some public good from a scheme can generate a duty of fairness and those who argue that only some voluntary action of consent or acceptance of the good can generate such a duty. I defend a version of the principle of fairness that holds that it is the person’s reliance on a scheme for the provision of some product or service that generates duties of fairness to share in the burdens of sustaining the scheme. And, on this version, the principle of fairness is politically significant: regardless of whether the citizen has a duty to obey the law, she will still have important political duties of fairness generated by her reliance on the various public goods provided by those society-wide cooperative schemes sustained by the sacrifices of her fellow citizens.
I resolve the major challenge to an Expressivist theory of the meaning of normative discourse: the Frege–Geach Problem. Drawing on considerations from the semantics of directive language (e.g., imperatives), I argue that, although certain forms of Expressivism (like Gibbard’s) do run into at least one version of the Problem, it is reasonably clear that there is a version of Expressivism that does not.
Precisely because insistence on orders of rank seems offensive and alien in today's society, it is worth confronting them anew with Nietzsche, who described them as his problem. He directs them deliberately against equality, fearing that its claim to universality makes individuality, being different, and thus all greatness impossible. To criticize the moral value of equality is not to abandon basic democratic principles or achievements. Clarified relations of rank reduce complexity, simplify communication, make behavior predictable, and thus ease orientation. In this way, precisely within the modern forms of democracy, a more open handling of orders of rank could contribute to strengthening them. Nietzsche scholarship has so far largely avoided engaging with the concept. Alberts closes this research gap. He pursues personal, philological, and philosophical clues to Nietzsche's thinking and examines the perspectives that are thereby opened onto the most varied areas of life, such as nature, religion, morality, science, and inter-individuality.
The problem of imaginative resistance holds interest for aestheticians, literary theorists, ethicists, philosophers of mind, and epistemologists. We present a somewhat opinionated overview of the philosophical discussion to date. We begin by introducing the phenomenon of imaginative resistance. We then review existing responses to the problem, giving special attention to recent research directions. Finally, we consider the philosophical significance that imaginative resistance has—or, at least, is alleged to have—for issues in moral psychology, theories of cognitive architecture, and modal epistemology.
Whether or not quantum physics can account for molecular structure is a matter of considerable controversy. Three of the problems raised in this regard are the problems of molecular structure. We argue that these problems are just special cases of the measurement problem of quantum mechanics: insofar as the measurement problem is solved, the problems of molecular structure are resolved as well. In addition, we explore one consequence of our argument: that claims about the reduction or emergence of molecular structure cannot be settled independently of the choice of a particular resolution to the measurement problem. Specifically, we consider how three standard putative solutions to the measurement problem inform our understanding of a molecule in isolation, as well as of chemistry’s relation to quantum physics.
Looking at the recent spate of claims about “fake news” which appear to be a new feature of political discourse, I argue that fake news presents an interesting problem in epistemology. The phenomenon of fake news trades upon tolerating a certain indifference towards truth, which is sometimes expressed insincerely by political actors. This indifference and insincerity, I argue, have been allowed to flourish due to the way in which we have set the terms of the “public” epistemology that maintains what is considered “rational” public discourse. I argue one potential salve to the problem of fake news is to challenge this public epistemology by injecting a certain ethical consideration back into the discourse.
Many morally significant outcomes can be brought about only if several individuals contribute to them. However, individual contributions to collective outcomes often fail to have morally significant effects on their own. Some have concluded from this that it is permissible to do nothing. What I call ‘the problem of insignificant hands’ is the challenge of determining whether and when people are obligated to contribute. For this to be the case, I argue, the prospect of helping to bring about the outcome has to be good enough. Furthermore, the individual must be in a position to increase the probability of its being brought about to an appropriate extent. Finally, I argue that when too few are willing to contribute, people may have a duty to increase their number. Thus, someone can be obligated to contribute or to get others to contribute. This prospect account is consistent with Kantianism, contractualism and rule consequentialism but inconsistent with act consequentialism.
The main goal of this paper is to investigate what explanatory resources Robert Brandom’s distinction between acknowledged and consequential commitments affords in relation to the problem of logical omniscience. With this distinction the importance of the doxastic perspective under consideration for the relationship between logic and norms of reasoning is emphasized, and it becomes possible to handle a number of problematic cases discussed in the literature without thereby incurring a commitment to revisionism about logic. One such case in particular is the preface paradox, which will receive an extensive treatment. As we shall see, the problem of logical omniscience not only arises within theories based on deductive logic but also within the recent paradigm shift in the psychology of reasoning. So dealing with this problem is important not only for philosophical purposes but also from a psychological perspective.
In the course of daily life we solve problems often enough that there is a special term to characterize the activity and the right to expect a scientific theory to explain its dynamics. The classical view in psychology is that to solve a problem a subject must frame it by creating an internal representation of the problem’s structure, usually called a problem space. This space is an internally generable representation that is mathematically identical to a graph structure with nodes and links. The nodes can be annotated with useful information, and the whole representation can be distributed over internal and external structures such as symbolic notations on paper or diagrams. If the representation is distributed across internal and external structures the subject must be able to keep track of activity in the distributed structure. Problem solving proceeds as the subject works from an initial state in mentally supported space, actively constructing possible solution paths, evaluating them and heuristically choosing the best. Control of this exploratory process is not well understood, as it is not always systematic, but various heuristic search algorithms have been proposed and some experimental support has been provided for them.
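Because the abstract describes a problem space as a graph of annotated nodes and links explored by heuristic search, a minimal sketch may help fix ideas. The toy graph, the heuristic values, and the function name below are illustrative assumptions of mine, not material from the paper.

```python
import heapq

def best_first_search(graph, heuristic, start, goal):
    """Greedy best-first search over a problem space encoded as a graph.

    graph: dict mapping each node to its successor nodes (the 'links').
    heuristic: dict mapping each node to an estimated distance to the goal.
    Returns a solution path from start to goal, or None if no path exists.
    """
    frontier = [(heuristic[start], start, [start])]  # (estimate, node, path so far)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for successor in graph.get(node, []):
            if successor not in visited:
                heapq.heappush(frontier, (heuristic[successor], successor, path + [successor]))
    return None

# Toy problem space: nodes annotated with heuristic estimates of distance to goal 'G'.
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['C'], 'C': ['G']}
heuristic = {'S': 3, 'A': 1, 'B': 2, 'C': 1, 'G': 0}
print(best_first_search(graph, heuristic, 'S', 'G'))  # ['S', 'A', 'G']
```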
In this paper, I hope to solve a problem that’s as old as the hills: the problem of contingency for religious belief. Paradigmatic examples of this argument begin with a counterfactual premise: had we been born at a different time or in a different place, we easily could have held different beliefs on religious topics. Ultimately, and perhaps by additional steps, we’re meant to reach the skeptical conclusion that very many of our religious beliefs do not amount to knowledge. I survey some historical examples of this argument, and I try to fill the gap between the counterfactual premise and the skeptical conclusion as forcefully as possible. I consider the following possibilities: there are no additional steps in the argument; or there are and they concern the alleged safety condition on knowledge, or the alleged non-accidentality condition on knowledge, or the unclarity produced by disagreement. On every possibility, the argument from the counterfactual premise to the conclusion of widespread skepticism is invalid. It seems, then, that there is no serious problem of contingency for religious belief.
In this paper, I argue that, just as the problem of unconceived alternatives provides a basis for a New Induction on the History of Science to the effect that a realist view of science is unwarranted, the problem of unconceived objections provides a basis for a New Induction on the History of Philosophy to the effect that a realist view of philosophy is unwarranted. I raise this problem not only for skepticism’s sake but also for the sake of making a point about philosophical argumentation, namely, that anticipating objections to one’s claim is not the same as supporting one’s claim. In other words, defending p from objections does not amount to support or evidence for p. This, in turn, presents dialectical and pragma-dialectical approaches to argumentation with the following question: does proper argumentation require that arguers anticipate and respond to unconceived objections?
In this paper, I argue that there is a kind of evil, namely, the unequal distribution of natural endowments, or natural inequality, which presents theists with a new evidential problem of evil. The problem of natural inequality is a new evidential problem of evil not only because, to the best of my knowledge, it has not yet been discussed in the literature, but also because available theodicies, such as the free will defense and the soul-making defense, are not adequate responses in the face of this particular evil, or so I argue.
Abstract artifacts such as musical works and fictional entities are human creations; they are intentional products of our actions and activities. One line of argument against abstract artifacts is that abstract objects are not the kind of objects that can be created. This is so, it is argued, because abstract objects are causally inert. Since creation requires being caused to exist, abstract objects cannot be created. One common way to refute this argument is to reject the causal inefficacy of abstracta. I argue that creationists should rather reject the principle that creation requires causation. Creation, in my view, is a non-causal relation that can be explained using an appropriate notion of ontological dependence. The existence and the creation of abstract artifacts depend on certain individuals with appropriate intentions, along with events of a certain kind that include but are not limited to creations of certain concrete objects.
We are pleased to publish this WSIA edition of Trudy Govier’s seminal volume, Problems in Argument Analysis and Evaluation. Originally published in 1987 by Foris Publications, this was a pioneering work that played a major role in establishing argumentation theory as a discipline. Today, it is as relevant to the field as when it first appeared, with discussions of questions and issues that remain central to the study of argument. It has defined the main approaches to many of those issues and guided the ways in which we might respond to them. From this foundation, it sets the stage for further investigations and emerging research. This is a second edition of the book that is corrected and updated by the author, with new prefaces to each chapter.
A substantial proportion of human embryos spontaneously abort soon after conception, and ethicists have argued this is problematic for the pro-life view that a human embryo has the same moral status as an adult from conception. Firstly, if human embryos are our moral equals, this entails spontaneous abortion is one of humanity’s most important problems, and it is claimed this is absurd, and a reductio of the moral status claim. Secondly, it is claimed that pro-life advocates do not act as if spontaneous abortion is important, implying they are failing to fulfill their moral obligations. We report that the primary cause of spontaneous abortion is chromosomal defects, which are currently unpreventable, and show that as the other major cause of prenatal death is induced abortion, pro-life advocates can legitimately continue efforts to oppose it. We also defend the relevance of the killing and letting die distinction, which provides further justification for pro-life priorities.
Various philosophers have long since been attracted to the doctrine that future contingent propositions systematically fail to be true—what is sometimes called the doctrine of the open future. However, open futurists have always struggled to articulate how their view interacts with standard principles of classical logic—most notably, with the Law of Excluded Middle. For consider the following two claims: Trump will be impeached tomorrow; Trump will not be impeached tomorrow. According to the kind of open futurist at issue, both of these claims may well fail to be true. According to many, however, the disjunction of these claims can be represented as p ∨ ~p—that is, as an instance of LEM. In this essay, however, I wish to defend the view that the disjunction of these claims cannot be represented as an instance of p ∨ ~p. And this is for the following reason: the latter claim is not, in fact, the strict negation of the former. More particularly, there is an important semantic distinction between the strict negation of the first claim [‘~Will p’] and the latter claim [‘Will ~p’]. However, the viability of this approach has been denied by Thomason, and more recently by MacFarlane and Cariani and Santorio, the latter of whom call the denial of the given semantic distinction “scopelessness”. According to these authors, that is, will is “scopeless” with respect to negation; whereas there is perhaps a syntactic distinction between ‘~Will p’ and ‘Will ~p’, there is no corresponding semantic distinction. And if this is so, the approach in question fails. In this paper, then, I criticize the claim that will is “scopeless” with respect to negation. I argue that will is a so-called “neg-raising” predicate—and that, in this light, we can see that the requisite scope distinctions aren’t missing, but are simply being masked. The result: an under-appreciated solution to the problem of future contingents that sees ‘Will p’ and ‘Will ~p’ as contraries, not contradictories.
The problem of closure for the traditional unstructured possible worlds model of attitudinal content is that it treats belief and other cognitive states as closed under entailment, despite apparent counterexamples showing that this is not a necessary property of such states. One solution to this problem, which has been proposed recently by several authors (Schaffer 2005; Yalcin 2018; Hoek forthcoming), is to restrict closure in an unstructured setting by treating propositional attitudes as question-sensitive. Here I argue that this line of response is unsatisfying as it stands because the problem of closure is more general than is typically discussed. A version of the problem recurs for attitudes like wondering, entertaining, considering, and so on, which are directed at questions rather than propositions. For such questioning attitudes, the appeal to question-sensitivity is much less convincing as a solution to the problem of closure.
Fitelson (1999) demonstrates that the validity of various arguments within Bayesian confirmation theory depends on which confirmation measure is adopted. The present paper adds to the results set out in Fitelson (1999), expanding on them in two principal respects. First, it considers more confirmation measures. Second, it shows that there are important arguments within Bayesian confirmation theory and that there is no confirmation measure that renders them all valid. Finally, the paper reviews the ramifications that this "strengthened problem of measure sensitivity" has for Bayesian confirmation theory and discusses whether it points at pluralism about notions of confirmation.
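As a rough illustration of what measure sensitivity amounts to, the snippet below shows two standard confirmation measures (the difference and log-ratio measures) disagreeing about which of two hypotheses the same evidence confirms more strongly; the numbers are toy values of my own, not Fitelson's.

```python
from math import log

def difference_measure(prior, posterior):
    """Difference measure: d(H, E) = P(H|E) - P(H)."""
    return posterior - prior

def log_ratio_measure(prior, posterior):
    """Log-ratio measure: r(H, E) = log[ P(H|E) / P(H) ]."""
    return log(posterior / prior)

# Toy priors and posteriors for two hypotheses given the same evidence E.
h1 = {"prior": 0.50, "posterior": 0.90}
h2 = {"prior": 0.01, "posterior": 0.05}

for name, measure in [("d", difference_measure), ("r", log_ratio_measure)]:
    c1 = measure(h1["prior"], h1["posterior"])
    c2 = measure(h2["prior"], h2["posterior"])
    print(f"{name}: H1 = {c1:.3f}, H2 = {c2:.3f}, ranks H1 higher: {c1 > c2}")
# d ranks E as confirming H1 more strongly; r ranks H2 higher: an ordinal disagreement.
```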
I develop two problems, which I call the problem of divine location and the problem of divine age, to challenge the theist belief that God created the universe. The problem of divine location holds that it is not clear where God existed before he created the universe. The problem of divine age holds that it is not clear how old God was when he created the universe. I explore several theist responses to these two problems, and argue that all of them are problematic under the existing conceptions of space and time in physics. The philosophical magnitudes of these two problems are equal to that of the problem of evil.
Generic generalisations like ‘Opioids are highly addictive’ are very useful in scientific communication, but they can often be interpreted in many different ways. Although this is not a problem when all interpretations provide the same answer to the question under discussion, a problem arises when a generic generalisation is used to answer a question other than that originally intended. In such cases, some interpretations of the generalisation might answer the question in a way that the original speaker would not endorse. Rather than excising generic generalisations from scientific communication, I recommend that scientific communicators carefully consider the kinds of questions their words might be taken to answer and try to avoid phrasing that might be taken to provide unintended answers.
To speak of being religiously lucky certainly sounds odd. But then, so does “My faith holds value in God’s plan, while yours does not.” This book argues that these two concerns — with the concept of religious luck and with asymmetric or sharply differential ascriptions of religious value — are inextricably connected. It argues that religious luck attributions can profitably be studied from a number of directions, not just theological, but also social scientific and philosophical. There is a strong tendency among adherents of different faith traditions to invoke asymmetric explanations of the religious value or salvific status of the home religion vis-à-vis all others. Attributions of good/bad religious luck and exclusivist dismissal of the significance of religious disagreement are the central phenomena that the book studies. Part I lays out a taxonomy of kinds of religious luck, a taxonomy that draws upon but extends work on moral and epistemic luck. It asks: What is going on when persons, theologies, or purported revelations ascribe various kinds of religiously-relevant traits to insiders and outsiders of a faith tradition in sharply asymmetric fashion? “I am saved but you are lost”; “My religion is holy but yours is idolatrous”; “My faith tradition is true, and valued by God, but yours is false and valueless.” Part II further develops the theory introduced in Part I, pushing forward both the descriptive/explanatory and normative sides of what the author terms his inductive risk account. Firstly, the concept of inductive risk is shown to contribute to the needed field of comparative fundamentalism by suggesting new psychological markers of fundamentalist orientation. The second side of what is termed an inductive risk account is concerned with the epistemology of religious belief, but more especially with an account of the limits of reasonable religious disagreement. Problems of inductively risky modes of belief-formation problematize claims to religion-specific knowledge. But the inductive risk account does not aim to set religion apart, or to challenge the reasonableness of religious belief tout court. Rather the burden of the argument is to challenge the reasonableness of attitudes of religious exclusivism, and to demotivate the “polemical apologetics” that exclusivists practice and hope to normalize.
The philosophy of information (PI) is a new area of research with its own field of investigation and methodology. This article, based on the Herbert A. Simon Lecture of Computing and Philosophy I gave at Carnegie Mellon University in 2001, analyses the eighteen principal open problems in PI. Section 1 introduces the analysis by outlining Herbert Simon's approach to PI. Section 2 discusses some methodological considerations about what counts as a good philosophical problem. The discussion centers on Hilbert's famous analysis of the central problems in mathematics. The rest of the article is devoted to the eighteen problems. These are organized into five sections: problems in the analysis of the concept of information, in semantics, in the study of intelligence, in the relation between information and nature, and in the investigation of values.
The article begins by describing two longstanding problems associated with direct inference. One problem concerns the role of uninformative frequency statements in inferring probabilities by direct inference. A second problem concerns the role of frequency statements with gerrymandered reference classes. I show that past approaches to the problem associated with uninformative frequency statements yield the wrong conclusions in some cases. I propose a modification of Kyburg’s approach to the problem that yields the right conclusions. Past theories of direct inference have postponed treatment of the problem associated with gerrymandered reference classes by appealing to an unexplicated notion of projectability. I address the lacuna in past theories by introducing criteria for being a relevant statistic. The prescription that only relevant statistics play a role in direct inference corresponds to the sort of projectability constraints envisioned by past theories.
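A minimal schematic of direct inference may be helpful here; the frequency value and the predicates below are illustrative assumptions, not taken from the article.

```latex
% Direct inference: from a frequency statement and a reference-class membership
% claim, assign a definite probability to a singular proposition.
\begin{align*}
&\mathrm{freq}(G \mid F) = 0.9, \qquad Fa\\
&\therefore\quad \Pr(Ga) = 0.9
\end{align*}
% The two problems described above arise when a competing frequency statement is
% uninformative (freq(G | F') is unknown for another class F' containing a), or
% when the competing reference class is gerrymandered to force a different value.
```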
In a series of papers, Donald Davidson (1984, 1986, 1991) developed a powerful argument against the claim that linguistic conventions provide any explanatory purchase on an account of linguistic meaning and communication. This argument, as I shall develop it, turns on cases of what I call lexical innovation: cases in which a speaker uses a sentence containing a novel expression-meaning pair, but nevertheless successfully communicates her intended meaning to her audience. I will argue that cases of lexical innovation motivate a dynamic conception of linguistic conventions according to which background linguistic conventions may be rapidly expanded to incorporate new word meanings or shifted to revise the meanings of words already in circulation. I argue that this dynamic account of conventions both resolves the problem raised by cases of lexical innovation and does so in a way that is preferable to the accounts of those who—like Davidson—deny important explanatory roles for linguistic conventions.
Consider the following sentences: ‘In every race, the colt won’; ‘In every race, John won’. John Hawthorne and David Manley say that the difference between these two sentences raises a problem for Predicativism about names. According to the currently more standard version of Predicativism, a bare singular name in argument position, like ‘John’ in the second sentence, is embedded in a definite description with an unpronounced definite article. The problem is supposed to be that the first sentence permits a covarying reading that allows for different races to have been won by different colts, while the second does not permit a covarying reading—it can be true only if there is a single John that won every race. But, the objection runs, if the name ‘John’ is really embedded in a definite description with an unpronounced definite article, then the two sentences are structurally parallel and should not differ with respect to covariation. Appealing to Jason Stanley's ‘Nominal Restriction’, I show that the difference between the two sentences above not only does not raise a problem for Predicativism but also is actually predicted by it.
When people make sense of situations, illustrations, instructions and problems they do more than just think with their heads. They gesture, talk, point, annotate, make notes and so on. What extra do they get from interacting with their environment in this way? To study this fundamental problem, I looked at how people project structure onto geometric drawings, visual proofs, and games like tic tac toe. Two experiments were run to learn more about projection. Projection is a special capacity, similar to perception, but less tied to what is in the environment. Projection, unlike pure imagery, requires external structure to anchor it, but it adds ‘mental’ structure to the external scene much like an augmented reality system adds structure to an outside scene. A person projects when they look at a chessboard and can see where a knight may be moved. Because of the cognitive costs of sustaining and extending projection, humans make some of their projections real. They create structure externally. They move the piece, they talk, point, notate, represent. Much of our interactivity during sense making and problem solving involves a cycle of projecting then creating structure.
In its original form, Nozick’s experience machine serves as a potent counterexample to a simplistic form of hedonism. The pleasurable life offered by the experience machine, it seems safe to say, lacks the requisite depth that many of us find necessary to lead a genuinely worthwhile life. Among other things, the experience machine offers no opportunities to establish meaningful relationships, or to engage in long-term artistic, intellectual, or political projects that survive one’s death. This intuitive objection finds some support in recent research regarding the psychological effects of phenomena such as video games or social media use. After a brief discussion of these problems, I will consider a variation of the experience machine in which many of these deficits are remedied. In particular, I’ll explore the consequences of creating a virtual world populated with strongly intelligent AIs with whom users could interact, and that could be engineered to survive the user’s death. The presence of these agents would allow for the cultivation of morally significant relationships, and the world’s long-term persistence would help ground possibilities for a meaningful, purposeful life in a way that Nozick’s original experience machine could not. While the creation of such a world is obviously beyond the scope of current technology, it represents a natural extension of the existing virtual worlds provided by current video games, and it provides a plausible “ideal case” toward which future virtual worlds will move. While this improved experience machine would seem to represent progress over Nozick’s original, I will argue that it raises a number of new problems stemming from the fact that the world was created to provide a maximally satisfying and meaningful life for the intended user. This, in turn, raises problems analogous in some ways to the problem(s) of evil faced by theists. In particular, I will suggest that it is precisely those features that would make a world most attractive to potential users—the fact that the AIs are genuinely moral agents whose well-being the user can significantly impact—that render its creation morally problematic, since they require that the AIs inhabiting the world be subject to unnecessary suffering. I will survey the main lines of response to the traditional problem of evil, and will argue that they are irrelevant to this modified case. I will close by considering what constraints on the future creation of virtual worlds, if any, might serve to allay the concerns identified in the previous discussion. I will argue that, insofar as the creation of such worlds would allow us to meet morally valuable purposes that could not be easily met otherwise, we would be unwise to prohibit it altogether. However, if our processes of creation are to be justified, they must take account of the interests of the moral agents that would come to exist as the result of our world creation.
The “Problem of the Rock” (PoR) is a famous objection to Higher-Order (HO) theories of consciousness. According to PoR, the HO theorists’ claim that a mental state is conscious iff there is a higher-order mental state about it implies that a rock is also conscious iff there is a higher-order mental state about it. In this paper I show that this argument confuses two grammatically distinct attributions of consciousness, and that if the consequent equivocation fallacy is avoided, PoR is either a straw man argument or has an unproblematic conclusion.
The study determined the problem-solving performance and skills of prospective elementary teachers (PETs) in the Northern Philippines. Specifically, it defined the PETs’ level of problem-solving performance in number sense, measurement, geometry, algebra, and probability; significant predictors of their problem-solving performance in terms of sex, socio-economic status, parents’ educational attainment, high school graduated from and subject preference; and their problem-solving skills. The PETs’ problem-solving performance was determined by a problem set consisting of word problems on number sense, measurement, geometry, algebra, and probability. A mixed-method research design was employed. Senior PETs, most of whom preferred to teach subjects other than mathematics, were purposively sampled. PETs who preferred math performed satisfactorily, while prospective teachers who opted for other subjects performed unsatisfactorily. The PETs’ unsatisfactory performance indicates the need for remediation to strengthen the mathematical content knowledge and enrich the problem-solving abilities of these prospective elementary teachers. Moreover, results showed that subject preference strongly affected and predicted the problem-solving success of the PETs. PETs who preferred to teach mathematics performed significantly better than their counterparts; hence, mathematics as a field of specialization in the Bachelor of Elementary Education program may be considered by teacher education institutions. Further, most PETs displayed a lack of problem-solving skills; thus, a Problem-Solving course is recommended for them.
This article defends the Doomsday Argument, the Halfer Position in Sleeping Beauty, the Fine-Tuning Argument, and the applicability of Bayesian confirmation theory to the Everett interpretation of quantum mechanics. It will argue that all four problems have the same structure, and it gives a unified treatment that uses simple models of the cases and no controversial assumptions about confirmation or self-locating evidence. The article will argue that the troublesome feature of all these cases is not self-location but selection effects.
This paper argues that higher-order doubt generates an epistemic dilemma. One has a higher-order doubt with regards to P insofar as one justifiably withholds belief as to what attitude towards P is justified. That is, one justifiably withholds belief as to whether one is justified in believing, disbelieving, or withholding belief in P. Using the resources provided by Richard Feldman’s recent discussion of how to respect one’s evidence, I argue that if one has a higher-order doubt with regards to P, then one is not justified in having any attitude towards P. Otherwise put: No attitude towards the doubted proposition respects one’s higher-order doubt. I argue that the most promising response to this problem is to hold that when one has a higher-order doubt about P, the best one can do to respect such a doubt is to simply have no attitude towards P. Higher-order doubt is thus much more rationally corrosive than non-higher-order doubt, as it undermines the possibility of justifiably having any attitude towards the doubted proposition.
Agnieszka Jaworska and Julie Tannenbaum recently developed the ingenious and novel person‐rearing account of moral status, which preserves the commonsense judgment that humans have a higher moral status than nonhuman animals. It aims to vindicate speciesist judgments while avoiding the problems typically associated with speciesist views. We argue, however, that there is good reason to reject person‐rearing views. Person‐rearing views have to be coupled with an account of flourishing, which will (according to Jaworska and Tannenbaum) be either a species norm or an intrinsic potential account of flourishing. As we show, however, person‐rearing accounts generate extremely implausible consequences when combined with the accounts of flourishing Jaworska and Tannenbaum need for the purposes of their view.
I argue that recent attempts to deflect Access Problems for realism about a priori domains such as mathematics, logic, morality, and modality using arguments from evolution result in two kinds of explanatory overkill: the Access Problem is eliminated for contentious domains, and realist belief becomes viciously immune to arguments from dispensability, and to non-rebutting counter-arguments more generally.
Recent years have seen a surge in philosophical work on the rationality of grief. Much of this research is premised on the idea that people tend to grieve much less than would be appropriate or, as it is often called, fitting. My goal in this paper is diagnostic, that is, to articulate two never properly distinguished, and indeed often conflated, arguments in favour of the purported discrepancy between experienced and fitting grief: a metaphysical and a psychological argument. According to the former, grief is rationalized entirely by facts about the past. And because the past is unchangeable, grief can be said to remain forever fitting. According to the latter argument, humans’ emotional resilience causes grief to diminish at a faster rate than would be fitting. Which of these problems we end up facing depends on relatively subtle variations in the characterization of the losses that render grief appropriate.
The problem of satisfaction conditions arises from the apparent difficulties of explaining the nature of the mental states involved in our emotional responses to tragic fictions. Greg Currie has recently proposed to solve the problem by arguing for the recognition of a class of imaginative counterparts of desires - what he and others call i-desires. In this paper I will articulate and rebut Currie's argument in favour of i-desires and I will put forward a new solution in terms of genuine desires. To this aim I will show that the same sort of puzzling phenomenon involved in our responses to tragic fictions arises also in a non-fictional case, and I will offer a solution to the problem of satisfaction conditions that dispenses with i-desires. The key to the explanation is in the notion of condition-dependent desires triggered by fictions.