While naturalism is used in positive senses by the tradition of analytical philosophy, with Ludwig Wittgenstein its best exemplar, and by the tradition of phenomenology, with Maurice Merleau-Ponty its best exemplar, it also has an extremely negative sense on both of these fronts. Hence, both Merleau-Ponty and Wittgenstein in their basic thrusts adamantly reject reductionistic naturalism. Although Merleau-Ponty’s phenomenology rejects the naturalism Husserl rejects, he early on found a place for the “truth of naturalism.” In a parallel way, Wittgenstein accepts a certain positive sense of naturalism, while rejecting Quine’s kind of naturalism. It is the aim of this paper to investigate the common ground in the views of Wittgenstein and Merleau-Ponty regarding the naturalism that they each espouse and that which they each reject.
If the world itself is metaphysically indeterminate in a specified respect, what follows? In this paper, we develop a theory of metaphysical indeterminacy answering this question.
Decisions are made under uncertainty when there are distinct outcomes of a given action, and one is uncertain to which the act will lead. Decisions are made under indeterminacy when there are distinct outcomes of a given action, and it is indeterminate to which the act will lead. This paper develops a theory of (synchronic and diachronic) decision-making under indeterminacy that portrays the rational response to such situations as inconstant. Rational agents have to capriciously and randomly choose how to resolve the indeterminacy relevant to a given choice-situation, but such capricious choices once made constrain how they will choose in the future. The account is illustrated by the case of self-interested action in situations where it is indeterminate whether you yourself will survive to benefit or suffer the consequences. The conclusion emphasizes some distinctive anti-hedging predictions of the account.
Inscrutability arguments threaten to reduce interpretationist metasemantic theories to absurdity. Can we find some way to block the arguments? A highly influential proposal in this regard is David Lewis’ ‘eligibility’ response: some theories are better than others, not because they fit the data better, but because they are framed in terms of more natural properties. The purposes of this paper are to outline the nature of the eligibility proposal, making the case that it is not ad hoc, but instead flows naturally from three independently motivated elements; and to show that severe limitations afflict the proposal. In conclusion, I pick out the element of the eligibility response that is responsible for the limitations: future work in this area should therefore concentrate on amending this aspect of the overall theory.
Might it be that the world itself, independently of what we know about it or how we represent it, is metaphysically indeterminate? This article tackles in turn a series of questions: In what sorts of cases might we posit metaphysical indeterminacy? What is it for a given case of indefiniteness to be 'metaphysical'? How does the phenomenon relate to 'ontic vagueness', the existence of 'vague objects', 'de re indeterminacy' and the like? How might the logic work? Are there reasons for postulating this distinctive sort of indefiniteness? Conversely, are there reasons for denying that there is indefiniteness of this sort?
Integrating the concept of place meanings into protected area management has been difficult. Across a diverse body of social science literature, challenges in the conceptualization and application of place meanings continue to exist. However, focusing on relationships in the context of participatory planning and management allows protected area managers to bring place meanings into professional judgment and practice. This paper builds on work that has outlined objectives and recommendations for bringing place meanings, relationships, and lived experiences to the forefront of land-use planning and management. It proposes the next steps in accounting for people’s relationships with protected areas and their relationships with protected area managers. Our goals are to 1) conceptualize this relationship framework; 2) present a structure for application of the framework; and 3) demonstrate the application in a specific protected area context, using an example from Alaska. We identify three key target areas of information and knowledge that managers will need to sustain quality relationship outcomes at protected areas. These targets are recording stories or narratives, monitoring public trust in management, and identifying and prioritizing threats to relationships. The structure needed to apply this relationship-focused approach requires documenting and following individual relationships with protected areas in multiple ways. The goal of this application is not to predict relationships, but instead to gain a deeper understanding of how and why relationships develop and change over time. By documenting narratives of individuals, managers can understand how relationships evolve over time and the role they play in individuals’ lives. By understanding public trust, the shared values and goals of individuals and managers can be observed.
By identifying and prioritizing threats, managers can pursue efforts that steward relationships while allowing for the protection of experiences and meanings. The collection and interpretation of these three information targets can then be integrated and implemented within planning and management strategies to achieve outcomes that are beneficial for resource protection, visitor experiences, and stakeholder engagement. By investing in this approach, agencies will gain greater understanding and usable knowledge towards the achievement of quality relationships. It represents an investment in both place relationships and public relations. By integrating such an approach into planning and management, protected area managers can represent the greatest diversity of individual place meanings and connections. Keywords: relationships, place meanings, trust, narratives, planning, protected areas.
Worlds where things divide forever ("gunk" worlds) are apparently conceivable. The conceivability of such scenarios has been used as an argument against "nihilist" or "near-nihilist" answers to the special composition question. I argue that the mereological nihilist has the resources to explain away the illusion that gunk is possible.
Lewis (1973) gave a short argument against conditional excluded middle, based on his treatment of ‘might’ counterfactuals. Bennett (2003), with much of the recent literature, gives an alternative take on ‘might’ counterfactuals. But Bennett claims the might-argument against CEM still goes through. This turns on a specific claim I call Bennett’s Hypothesis. I argue that independently of issues to do with the proper analysis of might-counterfactuals, Bennett’s Hypothesis is inconsistent with CEM. But Bennett’s Hypothesis is independently objectionable, so we should resolve this tension by dropping the Hypothesis, not by dropping CEM.
Many accounts of structural rationality give a special role to logic. This paper reviews the problem case of clear-eyed logical uncertainty. An account of rational norms on belief that does not give a special role to logic is developed: doxastic probabilism.
We can use radically different reference-schemes to generate the same truth-conditions for the sentences of a language. In this paper, we do three things. (1) Distinguish two arguments that deploy this observation to derive different conclusions. The first argues that reference is radically indeterminate: there is no fact of the matter what ordinary terms refer to. This threat is taken seriously and most contemporary metasemantic theories come with resources intended to rebut it. The second argues for radical parochialism about reference: it’s a reflection of our parochial interests, rather than the nature of the subject matter, that our theorizing about language appeals to reference rather than another relation that generates the same truth-conditions. Rebuttals of the first argument cut no ice against the second, because radical parochialism is compatible with reference being determinate. (2) Argue that radical parochialism, like radical indeterminacy, would be shocking if true. (3) Argue that the case for radical parochialism turns on the explanatory purposes of “reference”-talk: on relatively “thin” conceptions, the argument goes through, and radical parochialism is (shockingly!) true; on richer conceptions, the argument can be blocked. We conclude that non-revisionists must endorse, and justify, a relatively rich conception of the explanatory purposes of “reference”-talk.
Joyce (1998) gives an argument for probabilism: the doctrine that rational credences should conform to the axioms of probability. In doing so, he provides a distinctive take on how the normative force of probabilism relates to the injunction to believe what is true. But Joyce presupposes that the truth values of the propositions over which credences are defined are classical. I generalize the core of Joyce’s argument to remove this presupposition. On the same assumptions as Joyce uses, the credences of a rational agent should always be weighted averages of truth value assignments. In the special case where the truth values are classical, the weighted averages of truth value assignments are exactly the probability functions. In the more general case, probabilistic axioms formulated in terms of classical logic are violated, but we will show that generalized versions of the axioms formulated in terms of non-classical logics are satisfied.
Jeff Paris proves a generalized Dutch book theorem. If a belief state is not a generalized probability then one faces ‘sure loss’ books of bets. In Williams I showed that Joyce’s accuracy-domination theorem applies to the same set of generalized probabilities. What is the relationship between these two results? This note shows that both results are easy corollaries of the core result that Paris appeals to in proving his Dutch book theorem. We see that every point of accuracy-domination defines a Dutch book, but we only have a partial converse.
Revisionary theories of logic or truth require revisionary theories of mind. This essay outlines nonclassically based theories of rational belief, desire, and decision making, singling out the supervaluational family for special attention. To see these nonclassical theories of mind in action, this essay examines a debate between David Lewis and Derek Parfit over what matters in survival. Lewis argued that indeterminacy in personal identity allows caring about psychological connectedness and caring about personal identity to amount to the same thing. The essay argues that Lewis's treatment of two of Parfit's puzzle cases—degreed survival and fission—presupposes different nonclassical treatments of belief and desire.
I outline and motivate a way of implementing a closest world theory of indicatives, appealing to Stalnaker's framework of open conversational possibilities. Stalnakerian conversational dynamics helps us resolve two outstanding puzzles for such a theory of indicative conditionals. The first puzzle -- concerning so-called 'reverse Sobel sequences' -- can be resolved by conversation dynamics in a theory-neutral way: the explanation works as much for Lewisian counterfactuals as for the account of indicatives developed here. Resolving the second puzzle, by contrast, relies on the interplay between the particular theory of indicative conditionals developed here and Stalnakerian dynamics. The upshot is an attractive resolution of the so-called "Gibbard phenomenon" for indicative conditionals.
Some argue that theories of universals should incorporate structural universals, in order to allow for the metaphysical possibility of worlds of 'infinite descending complexity' ('onion worlds'). I argue that the possibility of such worlds does not establish the need for structural universals. So long as we admit the metaphysical possibility of emergent universals, there is an attractive alternative description of such cases.
There are advantages to thrift over honest toil. If we can make do without numbers we avoid challenging questions over the metaphysics and epistemology of such entities; and we have a good idea, I think, of what a nominalistic metaphysics should look like. But minimizing ontology brings its own problems; for it seems to lead to error theory—saying that large swathes of common-sense and best science are false. Should recherche philosophical arguments really convince us to give all this up? Such Moorean considerations are explicitly part of the motivation for the recent resurgence of structured metaphysics, which allows a minimal (perhaps nominalistic) fundamental ontology, while avoiding error theory by adopting a permissive stance towards ontology that can be argued to be grounded in the fundamental. This paper evaluates the Moorean arguments, identifying key epistemological assumptions. On the assumption that Moorean arguments can be used to rule out error theory, I examine deflationary ‘representationalist’ rivals to the structured-metaphysics reaction. Quinean paraphrase and fictionalist claims about syntax and semantics are considered and criticized. In the final section, a ‘direct’ deflationary strategy is outlined and the theoretical obligations that it faces are articulated. The position advocated may have us talking a lot like a friend of structured metaphysics—but with a very different conception of what we’re up to.
Supervaluationism is often described as the most popular semantic treatment of indeterminacy. There’s little consensus, however, about how to fill out the bare-bones idea to include a characterization of logical consequence. The paper explores one methodology for choosing between the logics: pick a logic that norms belief as classical consequence is standardly thought to do. The main focus of the paper considers a variant of standard supervaluationism, on which we can characterize degrees of determinacy. It applies the methodology above to focus on degree logic. This is developed first in a basic, single-premise case; and then extended to the multipremise case, and to allow degrees of consequence. The metatheoretic properties of degree logic are set out. On the positive side, the logic is supraclassical: all classically valid sequents are degree-logic valid. Strikingly, metarules such as cut and conjunction introduction fail.
I formulate a counterfactual version of the notorious ‘Ramsey Test’. Even in a weak form, this makes counterfactuals subject to the very argument that Lewis used to persuade the majority of the philosophical community that indicative conditionals were in hot water. I outline two reactions: to indicativize the debate on counterfactuals; or to counterfactualize the debate on indicatives.
In some sense, survival seems to be an intrinsic matter. Whether or not you survive some event seems to depend on what goes on with you yourself —what happens in the environment shouldn’t make a difference. Likewise, being a person at a time seems intrinsic. The principle that survival seems intrinsic is one factor which makes personal fission puzzles so awkward. Fission scenarios present cases where if survival is an intrinsic matter, it appears that an individual could survive twice over. But it’s well known that standard notions of “intrinsicality” won’t do to articulate the sense in which survival is intrinsic, since ‘personhood’ appears to be a maximal property. We formulate a sense in which survival and personhood (and perhaps other maximal properties) may be almost intrinsic—a sense that would suffice, for example, to ground fission arguments. It turns out that this notion of almost-intrinsicality allows us to formulate a new version of the problem of the many.
This essay explores the thesis that, for vague predicates, uncertainty over whether a borderline instance x is red/large/tall/good is to be understood as practical uncertainty over whether to treat x as red/large/tall/good. Expressivist and quasi-realist treatments of vague predicates due to John MacFarlane and Daniel Elstein provide the stalking-horse. It examines the notion of treating/counting a thing as F, and links a central question about our attitudes to vague predications to normative evaluation of plans to treat a thing as F. The essay examines how the account applies to normatively defective or contested terms. The final section raises a puzzle about the mechanics of MacFarlane’s detailed implementation for the case of gradable adjectives.
How are permutation arguments for the inscrutability of reference to be formulated in the context of a Davidsonian truth-theoretic semantics? Davidson takes these arguments to establish that there are no grounds for favouring a reference scheme that assigns London to “Londres”, rather than one that assigns Sydney to that name. We shall see, however, that it is far from clear whether permutation arguments work when set out in the context of the kind of truth-theoretic semantics which Davidson favours. The principle required to make the argument work allows us to resurrect Foster problems against the Davidsonian position. The Foster problems and the permutation inscrutability problems stand or fall together: they are one puzzle, not two.
Byrne & Hájek (1997) argue that Lewis’s (1988; 1996) objections to identifying desire with belief do not go through if our notion of desire is ‘causalized’ (characterized by causal, rather than evidential, decision theory). I argue that versions of the argument go through on certain assumptions about the formulation of decision theory. There is one version of causal decision theory where the original arguments cannot be formulated—the ‘imaging’ formulation that Joyce (1999) advocates. But I argue this formulation is independently objectionable. If we want to maintain the desire-as-belief thesis, there’s no shortcut through causalization.
*This is a project I hope to come back to one day. It stalled, a bit, on the absence of a positive theory of update I could be satisfied with.* When should we believe an indicative conditional, and how much confidence in it should we have? Here’s one proposal: one supposes the antecedent to be actual, and sees under that supposition what credence attaches to the consequent. Thus we suppose that Oswald did not shoot Kennedy; and note that under this assumption, Kennedy was assassinated by someone other than Oswald. Thus we are highly confident in the indicative: if Oswald did not kill Kennedy, someone else did.
*Note that this project is now being developed in joint work with Rich Woodward.* Some things are left open by a work of fiction. What colour were the hero’s eyes? How many hairs are on her head? Did the hero get shot in the final scene, or did the jailor complete his journey to redemption and shoot into the air? Are the ghosts that appear real, or a delusion? Where fictions are open or incomplete in this way, we can ask what attitudes it’s appropriate (or permissible) to take to the propositions in question, in engaging with the fiction. In Mimesis as Make-Believe (henceforth, MMB), Walton argues that just as truth norms belief, truth-in-fiction norms imagination. Granting that what is true-in-the-fiction should be imagined, and what is false-in-the-fiction is not to be imagined, there remains the question of what to say within the Waltonian framework about things that are neither true- nor false-in-the-fiction---the loci of incompleteness.
*These notes were folded into the published paper "Probability and nonclassical logic".* Revising semantics and logic has consequences for the theory of mind. Standard formal treatments of rational belief and desire make classical assumptions. If we are to challenge the presuppositions, we must indicate what kind of theory is going to take their place. Consider probability theory interpreted as an account of ideal partial belief. But if some propositions are neither true nor false, or are half true, or whatever, then it’s far from clear that our degrees of belief in such a proposition and its negation should sum to 1, as classical probability theory requires. There are extant proposals in the literature for generalizing (categorical) probability theory to a non-classical setting, and we will use these below. But subjective probabilities themselves stand in functional relations to other mental states, and we need to trace the knock-on consequences of revisionism for this interrelationship (arguably, degrees of belief only count as kinds of belief in virtue of standing in these functional relationships).
In debates over the regulation of communication related to dual-use research, the risks that such communication creates must be weighed against the value of scientific autonomy. The censorship of such communication seems justifiable in certain cases, given the potentially catastrophic applications of some dual-use research. This conclusion, however, gives rise to another kind of danger: that regulators will use overly simplistic cost-benefit analysis to rationalize excessive regulation of scientific research. In response to this, we show how institutional design principles and normative frameworks from free speech theory can be used to help extend the argument for regulating dangerous dual-use research beyond overly simplistic cost-benefit reasoning, but without reverting to an implausibly absolutist view of scientific autonomy.
Throughout his career, Derek Parfit made the bold suggestion, at various times under the heading of the "Normativity Objection," that anyone in possession of normative concepts is in a position to know, on the basis of their competence with such concepts alone, that reductive realism in ethics is not even possible. Despite the prominent role that the Normativity Objection plays in Parfit's non-reductive account of the nature of normativity, when the objection hasn't been ignored, it's been criticized and even derided. We argue that the exclusively negative attention that the objection has received has been a mistake. On our reading, Parfit's Normativity Objection poses a serious threat to reductivism, as it exposes the uneasy relationship between our a priori knowledge of a range of distinctly normative truths and the typical package of semantic commitments that reductivists have embraced since the Kripkean revolution.
In his famous 1982 paper, Allen Newell [22, 23] introduced the notion of the knowledge level to indicate a level of analysis, and prediction, of the rational behavior of a cognitive artificial agent. This analysis concerns the investigation of the availability of the agent's knowledge, in order to pursue its own goals, and is based on the so-called Rationality Principle (an assumption according to which "an agent will use the knowledge it has of its environment to achieve its goals" [22, p. 17]). In Newell's own words: "To treat a system at the knowledge level is to treat it as having some knowledge, some goals, and believing it will do whatever is within its power to attain its goals, in so far as its knowledge indicates" [22, p. 13]. In the last decades, the importance of the knowledge level has been historically and systematically downsized by the research area in cognitive architectures (CAs), whose interests have been mainly focused on the analysis and the development of the mechanisms and processes governing human and (artificial) cognition. The knowledge level in CAs, however, represents a crucial level of analysis for the development of such artificial general systems and therefore deserves greater research attention [17]. In the following, we will discuss areas of broad agreement and outline the main problematic aspects that should be faced within a Common Model of Cognition [12]. Such aspects, departing from an analysis at the knowledge level, also clearly impact both lower (e.g. representational) and higher (e.g. social) levels.
There’s a long but relatively neglected tradition of attempting to explain why many researchers working on the nature of phenomenal consciousness think that it’s hard to explain. David Chalmers argues that this “meta-problem of consciousness” merits more attention than it has received. He also argues against several existing explanations of why we find consciousness hard to explain. Like Chalmers, we agree that the meta-problem is worthy of more attention. Contra Chalmers, however, we argue that there’s an existing explanation that is more promising than his objections suggest. We argue that researchers find phenomenal consciousness hard to explain because phenomenal concepts are complex demonstratives that encode the impossibility of explaining consciousness as one of their application conditions.
In December 2013, the Nonhuman Rights Project (NhRP) filed a petition for a common law writ of habeas corpus in the New York State Supreme Court on behalf of Tommy, a chimpanzee living alone in a cage in a shed in rural New York (Barlow, 2017). Under animal welfare laws, Tommy’s owners, the Laverys, were doing nothing illegal by keeping him in those conditions. Nonetheless, the NhRP argued that given the cognitive, social, and emotional capacities of chimpanzees, Tommy’s confinement constituted a profound wrong that demanded remedy by the courts. Soon thereafter, the NhRP filed habeas corpus petitions on behalf of Kiko, another chimpanzee housed alone in Niagara Falls, and Hercules and Leo, two chimpanzees held in research facilities at Stony Brook University. Thus began the legal struggle to move these chimpanzees from captivity to a sanctuary, an effort that has led the NhRP to argue in multiple courts before multiple judges. The central point of contention has been whether Tommy, Kiko, Hercules, and Leo have legal rights. To date, no judge has been willing to issue a writ of habeas corpus on their behalf. Such a ruling would mean that these chimpanzees have rights that confinement might violate. Instead, the judges have argued that chimpanzees cannot be bearers of legal rights because they are not, and cannot be, persons. In this book we argue that chimpanzees are persons because they are autonomous.
This article derives from a paper presented at the Philosophy of Religion and Mysticism Conference hosted by the Russian Academy of Sciences in Moscow, May 22-24, 2014. That paper introduced theories and methods drawn from the 'cognitive science of religion' and suggested future avenues of research connecting CSR and scholarship on mysticism. Towards these same ends, the present article proceeds in three parts. Part I outlines the origins, aims, and basic tenets of CSR research. Part II discusses one specific causal perspective that informs a wide range of CSR research, Sperber's 'epidemiological' approach to cultural expression, and connects this perspective to the example of creator deities. Part III discusses some possible future directions for CSR research concerning mysticism and mystical experience. Finally, a coda addresses two common misunderstandings concerning the 'reductionist' nature of CSR research.
Rawls offers three arguments for the priority of liberty in Theory, two of which share a common error: the belief that once we have shown the instrumental value of the basic liberties for some essential purpose (e.g., securing self-respect), we have automatically shown the reason for their lexical priority. The third argument, however, does not share this error and can be reconstructed along Kantian lines: beginning with the Kantian conception of autonomy endorsed by Rawls in section 40 of Theory, we can explain our highest-order interest in rationality, justify the lexical priority of all basic liberties, and reinterpret Rawls’ threshold condition for the application of the priority of liberty. Perhaps unsurprisingly, this Kantian reconstruction will not work within the radically different framework of Political Liberalism.
My paper addresses a topic--the implications of Rawls's justice as fairness for affirmative action--that has received remarkably little attention from Rawls's major interpreters. The only extended treatments of it that are in print are over a quarter-century old, and they bear scarcely any relationship to Rawls's own nonideal theorizing. Following Christine Korsgaard's lead, I work through the implications of Rawls's nonideal theory and show what it entails for affirmative action: viz. that under nonideal conditions, aggressive forms of formal equality of opportunity (e.g., sensitivity training, outreach efforts, external monitoring and enforcement) and compensating support (e.g., special fellowship programs, childcare facilities, mentoring, co-op opportunities, etc.) can be justified, but that "hard" and even "soft" quotas are difficult to defend under any conditions. I conclude the paper by exploring the implications of these surprising results for contemporary liberalism more broadly and for constitutional law and public policy.
Many scholars, including G. A. Cohen, Daniel Attas, and George Brenkert, have denied that a Kantian defense of self-ownership is possible. Kant's ostensible hostility to self-ownership can be resolved, however, upon reexamination of the Groundwork and the Metaphysics of Morals. Moreover, two novel Kantian defenses of self-ownership (narrowly construed) can be devised. The first shows that maxims of exploitation and paternalism that violate self-ownership cannot be universalized, as this leads to contradictions in conception. The second shows that physical coercion against rational agents involves a profound status wrong--namely, their treatment as children or animals--and that this system of differential status and treatment (including self-ownership rights for rational agents) can be morally justified by our capacity for autonomy.
In disagreements about trivial matters, it often seems appropriate for disputing parties to adopt a ‘middle ground’ view about the disputed matter. But in disputes about more substantial controversies (e.g. in ethics, religion, or politics) this sort of doxastic conduct can seem viciously acquiescent. How should we distinguish between the two kinds of cases, and thereby account for our divergent intuitions about how we ought to respond to them? One possibility is to say that ceding ground in a trivial dispute is appropriate because the disputing parties are usually epistemic peers within the relevant domain, whereas in a more substantial disagreement the disputing parties rarely, if ever, qualify as epistemic peers, and so ‘sticking to one’s guns’ is usually the appropriate doxastic response. My aim in this paper is to explain why this way of drawing the desired distinction is ultimately problematic, even if it seems promising at first blush.
In his book “Galileo’s Error”, Philip Goff lays out what he calls “foundations for a new science of consciousness”, which are decidedly anti-physicalist (panpsychist), motivated by a critique of Galileo’s distinction between knowable objective and unknowable subjective properties and by Arthur Eddington’s argument for the limitation of purely structural (physical) knowledge. Here we outline an alternative theory, premised on the Interface Theory of Perception, which also subscribes to a “post-Galilean” research programme. However, interface theorists disagree along several lines. 1. They note that Galileo’s distinction should be replaced by a truly non-dual account, referring to a difference of degree only. 2. They highly appreciate the role of mathematics, in particular when it comes to actually engaging scientifically with consciousness. Some notable features of the interface theory are its skepticism towards our epistemic capacities and its rejection of the existence of a public, mind-independent reality. In addition, some interface theorists further employ a thin concept of “conscious agency” to ground their theory. The interface theory leaves open many of the problems of consciousness science (e.g. what is a “self”?) as questions for further (scientific, mathematical) research.
The republican tradition has long been ambivalent about markets and commercial society more generally: from the contrasting positions of Rousseau and Smith in the eighteenth century to recent neorepublican debates about capitalism, republicans have staked out diverse positions on fundamental issues of political economy. Rather than offering a systematic historical survey of these discussions, this chapter will instead focus on the leading neo-republican theory—that of Philip Pettit—and consider its implications for market society. As I will argue, Pettit’s theory is even friendlier to markets than most have believed: far from condemning commercial society, his theory recognizes that competitive markets and their institutional preconditions are an alternative means to limit arbitrary power across the domestic, economic, and even political spheres. While most republican theorists have focused on political means to limit such power—including both constitutional means (e.g., separation of powers, judicial review, the rule of law, federalism) and participatory ones (democratic elections and oversight)—I will examine here an economic model of republicanism that can complement, substitute for, and at times displace the standard political model. Whether we look at spousal markets, labor markets, or residential markets within federal systems, state policies that heighten competition among their participants and resource exit from abusive relationships within them can advance freedom as non-domination as effectively or even more effectively than social-democratic approaches that have recently gained enthusiasts among republicans. These conclusions suggest that democracy, be it social or political, is just one means among others for restraining arbitrary power and is consequently less central to (certain versions of) republicanism than we may have expected.
So long as they counteract domination, economic inroads into notionally democratic territory are no more worrisome than constitutional ones.
Laws of nature seem to take two forms. Fundamental physics discovers laws that hold without exception, ‘strict laws’, as they are sometimes called; even if some laws of fundamental physics are irreducibly probabilistic, the probabilistic relation is thought not to waver. In the nonfundamental, or special, sciences, matters differ. Laws of such sciences as psychology and economics hold only ceteris paribus – that is, when other things are equal. Sometimes events accord with these ceteris paribus laws (c.p. laws, hereafter), but sometimes the laws are not manifest, as if they have somehow been placed in abeyance: the regular relation indicative of natural law can fail in circumstances where an analogous outcome would effectively refute the assertion of strict law. Many authors have questioned the supposed distinction between strict laws and c.p. laws. The brief against it comprises various considerations: from the complaint that c.p. clauses are void of meaning to the claim that, although understood well enough, they should appear in all law-statements. These two concerns, among others, are addressed in due course, but first, I venture a positive proposal. I contend that there is an important contrast between strict laws and c.p. laws, one that rests on an independent distinction between combinatorial and noncombinatorial nomic principles. Instantiations of certain properties, e.g., mass and charge, nomically produce individual forces, or more generally, causal influences, in accordance with noncombinatorial…
This volume of twelve specially commissioned essays about species draws on the perspectives of prominent researchers from anthropology, botany, developmental psychology, the philosophy of biology and science, protozoology, and zoology. The concept of species has played a focal role in both evolutionary biology and the philosophy of biology, and the last decade has seen something of a publication boom on the topic (e.g., Otte and Endler 1989; Ereshefsky 1992b; Paterson 1994; Lambert and Spence 1995; Claridge, Dawah, and Wilson 1997; Wheeler and Meier 1999; Howard and Berlocher 1998).
In the region where some cat sits, there are many very cat-like items that are proper parts of the cat (or otherwise mereologically overlap the cat), but which we are inclined to think are not themselves cats, e.g. all of Tibbles minus the tail. The question is, how can something be so cat-like without itself being a cat? Some have tried to answer this “Problem of the Many” (a problem that arises for many different kinds of things we regularly encounter, including desks, persons, rocks, and clouds) by relying on a mereological maximality principle, according to which something cannot be a member of a kind K if it is a large proper part of, or otherwise greatly mereologically overlaps, a K. It has been shown, however, that a maximality constraint of this type, i.e. one that restricts mereological overlap, is open to strong objections. Inspired by the insights of, especially, Sutton and Madden, I develop a type of functional-maximality principle that avoids these objections (and has other merits), and thereby provides a better answer to the Problem of the Many.
In response to Stephen Davis’s criticism of our previous essay, we revisit and defend our arguments that the Resurrection hypothesis is logically incompatible with the Standard Model of particle physics—and thus is maximally implausible—and that it cannot explain the sensory experiences of the Risen Jesus attributed to various witnesses in the New Testament—and thus has low explanatory power. We also review Davis’s reply, noting that he evades our arguments, misstates their conclusions, and distracts the reader with irrelevancies regarding, e.g., what natural laws are, what a miracle is, and how “naturalism” and “supernaturalism” differ as worldviews. Contrary to what Davis claims (even in his abstract), we do not argue that “if the Standard Model of particle physics is true, then the resurrection of Jesus did not occur and physical things can only causally interact with other physical things.” Davis distorts our claims and criticizes straw men of his own creation.
In recent work Mary Kate McGowan presents an account of oppressive speech inspired by David Lewis's analysis of conversational kinematics. Speech can effect identity-based oppression, she argues, by altering 'the conversational score', which is to say, roughly, that it can introduce presuppositions and expectations into a conversation, and thus determine what sort of subsequent conversational 'moves' are apt, correct, felicitous, etc., in a manner that oppresses members of a certain group (e.g. because the presuppositions and expectations derogate or demean members of that group). In keeping with the Lewisian picture, McGowan stresses the asymmetric pliability of conversational scores. She argues that it is easier to introduce (for example) sexist presuppositions and expectations into a conversation than it is to remove them. Responding to a sexist remark, she thus suggests, is like trying to "unring a bell". I begin by situating McGowan's work in the wider literature on speech and social hierarchy, and explaining how her account of oppressive speech improves upon the work of others in its explication of the relationship between individuals' verbal conduct and structurally oppressive social arrangements. I then propose an explanation and supportive elaboration of McGowan's claims about the asymmetric pliability of conversations involving identity-oppressive speech. Rather than regarding such asymmetry as a sui generis phenomenon, I show how we can understand it as a consequence of a more general asymmetry between making things salient and un-salient in speech, and I show how this asymmetry also operates in various cases that interested Lewis.
In chapter viii of book ii of An Essay Concerning Human Understanding, John Locke provides various putative lists of primary qualities. Insofar as they have considered the variation across Locke's lists at all, commentators have usually been content simply either to consider a self-consciously abbreviated list (e.g., "Size, Shape, etc.") or a composite list as the list of Lockean primary qualities, truncating such a composite list only by omitting supposedly co-referential terms. Doing the latter with minimal judgment about what terms are co-referential gives us the following list of eleven qualities (in the order in which they appear in this chapter of the Essay): solidity, extension, figure, mobility, motion or rest, number, bulk, texture, motion, size, and situation. Perhaps surprisingly, given the attention to the primary/secondary distinction since Locke, Locke's primary qualities themselves have received little more than passing mention in the bulk of the subsequent literature. In particular, no discussion both offers an interpretation of Locke's conception of primary qualities and makes sense of Locke's various lists as lists of primary qualities. A central motivation for this paper is the idea that these two tasks are not independent.