This is, to the best of my knowledge, the first published attempt at a rigorous logical formalization of a passage in Leibniz's Monadology. The method we followed was suggested by Johannes Czermak.
This is an English translation of Waldenfels' German essay: Equality and inequality are basic elements of law, justice and politics. Equality integrates each of us into a common sphere by distributing rights, duties and chances among us. Equality turns into mere indifference insofar as we become over-integrated into social orders. When differences fade away, experience loses its relief and individuals lose their face. Our critical reflections start from the inevitable paradox of making equal what is not equal. In various ways they refer to Nietzsche’s concept of order, to Marx’s analysis of money, to Lévinas’s ethics of the Other, and to novelists like Dostoevsky and Musil. Our critique turns against two extremes: on the one hand, against any sort of normalism fixed on functioning orders; on the other hand, against any sort of anomalism dreaming of mere events and permanent ruptures. Responsive phenomenology shows how we are confronted with extraordinary events. These deviate from the ordinary and transgress its borders without leaving the normality of our everyday world behind. The process of equalizing moves between the ordinary and the extraordinary. What makes the difference and resists mere indifference are creative responses, which have to be invented again and again.
A "concept" in the sense favoured by Wittgenstein is a paradigm for a transition between parts of a notational system. A concept-determining sentence such as "There is no reddish green" registers the absence of such a transition. This suggests a plausible account of what is perceived in an experiment that was first designed by Crane and Piantanida, who claim to have induced perceptions of reddish green. I shall propose a redescription of the relevant phenomena, invoking only ordinary colour concepts. This (...) redescription is not ruled out by anything the experimenters say. It accounts for certain peculiarities in both their descriptions and their subjects', and suggests that instead of discovering forbidden colours the experimenters introduced a new use of "-ish". Still, there is a point in speaking of "reddish green" in their context, which can be motivated by invoking what Wittgenstein calls a "physiognomy". (shrink)
It is often argued that higher-level special-science properties cannot be causally efficacious, since the lower-level physical properties on which they supervene are doing all the causal work. This claim is usually derived from an exclusion principle stating that if a higher-level property F supervenes on a physical property F* that is causally sufficient for a property G, then F cannot cause G. We employ an account of causation as difference-making to show that the truth or falsity of this principle is a contingent matter and derive necessary and sufficient conditions under which a version of it holds. We argue that one important instance of the principle, far from undermining non-reductive physicalism, actually supports the causal autonomy of certain higher-level properties.
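As a hedged illustration of the difference-making idea at work here, consider the following toy Python sketch (my own construction, not the authors' formal framework), in which a multiply realizable higher-level property F counts as a difference-maker for an effect G even though its particular physical realizer does not:

```python
# Toy model of difference-making causation. F is a higher-level property
# with two possible realizers r1 and r2; G occurs whenever F is realized.
# We compare the actual world with the "closest" worlds in which F, or
# instead its actual realizer r1, is absent. (Illustrative only.)

def F(world):          # higher-level property: some realizer is present
    return world["r1"] or world["r2"]

def G(world):          # effect: occurs whenever F is realized
    return F(world)

actual = {"r1": True, "r2": False}

# Closest world where the higher-level property F is absent: no realizer at all.
no_F = {"r1": False, "r2": False}

# Closest world where the lower-level realizer r1 is absent: plausibly,
# F is still realized differently (by r2).
no_r1 = {"r1": False, "r2": True}

print(G(actual))  # True  -> G occurs in the actual world
print(G(no_F))    # False -> without F, no G: F makes a difference to G
print(G(no_r1))   # True  -> without r1, G still occurs: r1 does not
```

On this way of spelling things out, the higher-level property, not its realizer, is the difference-maker, which is the sense in which such an account can support the causal autonomy of higher-level properties.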
This paper addresses the problem of statelessness, a problem which remains despite treaties and judicial decisions elaborating distinct rules to protect stateless persons. I explain why this has been so. Drawing from the work of Bernhard Waldenfels, I argue that international and domestic courts have presupposed a territorial sense of space, a territorial knowledge and the founding date for the territorial structure of a state-centric international legal community. I then focus upon the idea that an impartial third party can resolve a dispute involving stateless persons by deferring to a universal rule. I call this third party the ‘rule of law third’. Such a rule, I argue, rests on a presupposed knowledge about stateless persons. The Third takes for granted the territorial boundary of a legal structure, a boundary which excludes the recognition of outsiders to that boundary.
The existence of group agents is relatively widely accepted. Examples are corporations, courts, NGOs, and even entire states. But should we also accept that there is such a thing as group consciousness? I give an overview of some of the key issues in this debate and sketch a tentative argument for the view that group agents lack phenomenal consciousness. In developing my argument, I draw on integrated information theory, a much-discussed theory of consciousness. I conclude by pointing out an implication of my argument for the normative status of group agents.
This paper generalises the classical Condorcet jury theorem from majority voting over two options to plurality voting over multiple options. The paper further discusses the debate between epistemic and procedural democracy and situates its formal results in that debate. The paper finally compares a number of different social choice procedures for many-option choices in terms of their epistemic merits. An appendix explores the implications of some of the present mathematical results for the question of how probable majority cycles (as in Condorcet's paradox) are in large electorates.
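The flavour of the plurality generalization can be checked with a quick Monte Carlo sketch (illustrative parameters only; the paper's result is analytical and does not rest on simulation): each of n voters picks the correct one of k options with probability p > 1/k, and otherwise votes uniformly at random among the wrong options. The probability that the plurality winner is correct should then rise with n.

```python
# Monte Carlo sketch of a plurality jury theorem. Option 0 is "correct";
# each voter identifies it with probability p, else votes randomly among
# the k - 1 wrong options. We estimate P(plurality winner is correct).
import random
from collections import Counter

def plurality_correct(n, k, p, trials=20_000):
    hits = 0
    for _ in range(trials):
        votes = [0 if random.random() < p else random.randrange(1, k)
                 for _ in range(n)]
        winner, _ = Counter(votes).most_common(1)[0]
        hits += (winner == 0)
    return hits / trials

for n in (11, 51, 201):
    print(n, round(plurality_correct(n, k=4, p=0.35), 3))
# With p = 0.35 > 1/4, the estimated correctness climbs toward 1 as n grows.
```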
Defenders of deontological constraints in normative ethics face a challenge: how should an agent decide what to do when she is uncertain whether some course of action would violate a constraint? The most common response to this challenge has been to defend a threshold principle on which it is subjectively permissible to act iff the agent's credence that her action would be constraint-violating is below some threshold t. But the threshold approach seems arbitrary and unmotivated: what could possibly determine where the threshold should be set, and why should there be any precise threshold at all? Threshold views also seem to violate ought agglomeration, since a pair of actions each of which is below the threshold for acceptable moral risk can, in combination, exceed that threshold. In this paper, I argue that stochastic dominance reasoning can vindicate and lend rigor to the threshold approach: given characteristically deontological assumptions about the moral value of acts, it turns out that morally safe options will stochastically dominate morally risky alternatives when and only when the likelihood that the risky option violates a moral constraint is greater than some precisely definable threshold (in the simplest case, .5). I also show how, in combination with the observation that deontological moral evaluation is relativized to particular choice situations, this approach can overcome the agglomeration problem. This allows the deontologist to give a precise and well-motivated response to the problem of uncertainty.
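A hedged reconstruction of the simplest (.5) case may help (my toy setup, not the paper's formal construction): suppose a two-level moral value scale (0 = no violation, -1 = violation), and suppose exactly one of two available acts violates a constraint, with the agent's credence that act X is the violator equal to q, so the alternative Y violates with credence 1 - q. First-order stochastic dominance then favours Y over X exactly when q > .5:

```python
# First-order stochastic dominance on a two-level moral value scale.
# A lottery maps a value to its probability.

def dominates(lottery_a, lottery_b, values=(-1, 0)):
    """True iff lottery_a first-order stochastically dominates lottery_b:
    at every threshold it gives at least as much probability to doing at
    least that well, and strictly more at some threshold."""
    strict = False
    for t in values:
        pa = sum(p for v, p in lottery_a.items() if v >= t)
        pb = sum(p for v, p in lottery_b.items() if v >= t)
        if pa < pb:
            return False
        if pa > pb:
            strict = True
    return strict

for q in (0.4, 0.5, 0.6):
    X = {-1: q, 0: 1 - q}       # X violates with credence q
    Y = {-1: 1 - q, 0: q}       # Y violates with credence 1 - q
    print(q, dominates(Y, X))
# False, False, True -- Y dominates X only once q exceeds .5
```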
Johann Georg Hamann (1730-1788), the Königsberg philosopher, philologist and theologian, was called by Goethe "the most learned head of his time". Goethe claimed that it was precisely from him that he had "learned the most", and for that reason collected and read his writings. The same can be said of Ernst Jünger's (1895-1998) relation to the "Magus of the North". He heard of him by chance in 1924 from the Leipzig philosophy lecturer Hugo Fischer and was fascinated by Hamann from then on. He often invoked him at pivotal points of his work, for instance in the form of a motto from Hamann in both versions of his first book, Das abenteuerliche Herz (1929, 1938). It was also thanks to Hamann that he came to the conviction that "distinctness is the right division between light and shadow", and that the most important phenomena are not causally ordered appearances that man can master, but appearances connected with one another beneath the surface. They can be described, by analogy, in a similar way. Here Jünger placed himself in one line with Hamann and Goethe. Towards the end of his published diaries (14 December 1995) he called his acquaintance with the Magus of the North "unavoidable" and counted him among the "awakeners" who formed his character.
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
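A numerical sketch of the background-uncertainty mechanism (with a Laplace noise distribution and payoff numbers chosen purely for illustration): in a vacuum, the gamble B below has higher expected value than the status quo A but does not stochastically dominate it; adding the same heavy-tailed background uncertainty N to both options makes B + N dominate A + N, in that its CDF sits weakly below A + N's everywhere on the grid.

```python
# Background uncertainty turning an expectation-maximizing gamble into a
# stochastically dominant one. A = 0 for sure; B = +10 w.p. .6, -5 w.p. .4;
# N ~ Laplace(0, 10) is added to both. (Illustrative numbers only.)
import math

def laplace_cdf(x, b=10.0):
    return 0.5 * math.exp(x / b) if x < 0 else 1 - 0.5 * math.exp(-x / b)

def cdf_A(x):                        # A + N, with A = 0
    return laplace_cdf(x)

def cdf_B(x):                        # B + N: mixture over B's two payoffs
    return 0.6 * laplace_cdf(x - 10) + 0.4 * laplace_cdf(x + 5)

grid = [x * 0.5 for x in range(-200, 201)]
worst = max(cdf_B(x) - cdf_A(x) for x in grid)
print(worst)   # negative: B + N's CDF never exceeds A + N's on the grid
```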
The interdisciplinary debate about the nature of expertise often conflates having expertise with either the individual possession of competences or a certain role ascription. In contrast, this paper attempts to demonstrate how different dimensions of expertise ascription are inextricably interwoven. As a result, a balanced account of expertise is proposed that more precisely determines the relationship between the expert's dispositions, their manifestations and the expert's function. This finally results in an advanced understanding of expertise that counts someone as an expert only if she is undefeatedly disposed to fulfill a contextually salient service function adequately at the moment of assessment.
How should you decide what to do when you're uncertain about basic normative principles (e.g., Kantianism vs. utilitarianism)? A natural suggestion is to follow some "second-order" norm: e.g., "comply with the first-order norm you regard as most probable" or "maximize expected choiceworthiness". But what if you're uncertain about second-order norms too -- must you then invoke some third-order norm? If so, it seems that any norm-guided response to normative uncertainty is doomed to a vicious regress. In this paper, I aim to rescue second-order norms from this threat of regress. I first elaborate and defend the suggestion some philosophers have entertained that the regress problem forces us to accept normative externalism, the view that at least one norm is incumbent on agents regardless of their beliefs or evidence concerning that norm. But, I then argue, we need not accept externalism about first-order (e.g., moral) norms, thus closing off any question of what an agent should do in light of her normative beliefs. Rather, it is more plausible to ascribe external force to a single, second-order rational norm: the enkratic principle, correctly formulated. This modest form of externalism, I argue, is both intrinsically well-motivated and sufficient to head off the threat of regress.
I argue that free will and determinism are compatible, even when we take free will to require the ability to do otherwise and even when we interpret that ability modally, as the possibility of doing otherwise, and not just conditionally or dispositionally. My argument draws on a distinction between physical and agential possibility. Although in a deterministic world only one future sequence of events is physically possible for each state of the world, the more coarsely defined state of an agent and his or her environment can be consistent with more than one such sequence, and thus different actions can be “agentially possible”. The agential perspective is supported by our best theories of human behaviour, and so we should take it at face value when we refer to what an agent can and cannot do. On the picture I defend, free will is not a physical phenomenon, but a higher-level one on a par with other higher-level phenomena such as agency and intentionality.
I argue that Traditional Christian Theism is inconsistent with Truthmaker Maximalism, the thesis that all truths have truthmakers. Though this original formulation requires extensive revision, the gist of the argument is as follows. Suppose for reductio that Traditional Christian Theism and the sort of Truthmaker Theory that embraces Truthmaker Maximalism are both true. By Traditional Christian Theism, there is a world in which God, and only God, exists. There are no animals in such a world. Thus, it is true in such a world that there are no zebras. That there are no zebras must have a truthmaker, given Truthmaker Maximalism. God is the only existing object in such a world, and so God must be the truthmaker for this truth, given that it has a truthmaker. But truthmakers necessitate the truths they make true. So, for any world, at any time at which God exists, God makes it true that there are no zebras. According to Traditional Christian Theism, God exists in our world. In our world, then, it is true that there are no zebras. But there are zebras. Contradiction! Thus, the conjunction of Traditional Christian Theism with Truthmaker Necessitation and Truthmaker Maximalism is inconsistent.
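The skeleton of the reductio can be laid out schematically as follows (my numbering and notation, with Z for "there are zebras"; the paper's official formulation is, as noted, more heavily revised):

\[
\begin{array}{lll}
1. & \exists w:\ \text{in } w \text{, God and only God exists} & \text{[Traditional Christian Theism]}\\
2. & \text{In } w,\ \neg Z \text{ is true} & \text{[from 1: no animals in } w\text{]}\\
3. & \text{In } w,\ \neg Z \text{ has a truthmaker} & \text{[Truthmaker Maximalism]}\\
4. & \text{In } w,\ \text{God is the truthmaker of } \neg Z & \text{[from 1, 3]}\\
5. & \Box(\text{God exists} \rightarrow \neg Z) & \text{[Truthmaker Necessitation, from 4]}\\
6. & \text{God exists in the actual world} & \text{[Traditional Christian Theism]}\\
7. & \neg Z \text{ is actually true} & \text{[from 5, 6]}\\
8. & Z \text{ is actually true} & \text{[observation]}\\
9. & \bot & \text{[from 7, 8]}
\end{array}
\]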
Suppose that the members of a group each hold a rational set of judgments on some interconnected questions, and imagine that the group itself has to form a collective, rational set of judgments on those questions. How should it go about dealing with this task? We argue that the question raised is subject to a difficulty that has recently been noticed in discussion of the doctrinal paradox in jurisprudence. And we show that there is a general impossibility theorem that this difficulty illustrates. Our paper describes this impossibility result and provides an exploration of its significance. The result naturally invites comparison with Kenneth Arrow's famous theorem (Arrow, 1963 and 1984; Sen, 1970), and we elaborate that comparison in a companion paper (List and Pettit, 2002). The paper is in four sections. The first section documents the need for various groups to aggregate their members' judgments; the second presents the discursive paradox; the third gives an informal statement of the more general impossibility result; the formal proof is presented in an appendix. The fourth section, finally, discusses some escape routes from the impossibility.
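The discursive paradox referred to above is easy to exhibit concretely. In the following sketch, three individually consistent judges vote on two premises and a conclusion read as their conjunction, and propositionwise majority voting delivers a collectively inconsistent set:

```python
# The doctrinal paradox / discursive dilemma in miniature. Each judge holds
# a consistent judgment set over p, q, and r, where r <-> (p and q).
judges = [
    {"p": True,  "q": True,  "r": True},    # consistent: r = p and q
    {"p": True,  "q": False, "r": False},   # consistent
    {"p": False, "q": True,  "r": False},   # consistent
]

majority = {prop: sum(j[prop] for j in judges) > len(judges) / 2
            for prop in ("p", "q", "r")}
print(majority)                                            # p: True, q: True, r: False
print(majority["r"] == (majority["p"] and majority["q"]))  # False: inconsistent!
```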
What turns the continuous flow of experience into perceptually distinct objects? Can our verbal descriptions unambiguously capture what it is like to see, hear, or feel? How might we reason about the testimony that perception alone discloses? Christian Coseru proposes a rigorous and highly original way to answer these questions by developing a framework for understanding perception as a mode of apprehension that is intentionally constituted, pragmatically oriented, and causally effective. By engaging with recent discussions in phenomenology and analytic philosophy of mind, but also by drawing on the work of Husserl and Merleau-Ponty, Coseru offers a sustained argument that Buddhist philosophers, in particular those who follow the tradition of inquiry initiated by Dignāga and Dharmakīrti, have much to offer when it comes to explaining why epistemological disputes about the evidential role of perceptual experience cannot satisfactorily be resolved without taking into account the structure of our cognitive awareness.

Perceiving Reality examines the function of perception and its relation to attention, language, and discursive thought, and provides new ways of conceptualizing the Buddhist defense of the reflexivity thesis of consciousness, namely, that each cognitive event is to be understood as involving a pre-reflective implicit awareness of its own occurrence. Coseru advances an innovative approach to Buddhist philosophy of mind in the form of phenomenological naturalism, and moves beyond comparative approaches to philosophy by emphasizing the continuity of concerns between Buddhist and Western philosophical accounts of the nature of perceptual content and the character of perceptual consciousness.
We offer a new argument for the claim that there can be non-degenerate objective chance (“true randomness”) in a deterministic world. Using a formal model of the relationship between different levels of description of a system, we show how objective chance at a higher level can coexist with its absence at a lower level. Unlike previous arguments for the level-specificity of chance, our argument shows, in a precise sense, that higher-level chance does not collapse into epistemic probability, despite higher-level properties supervening on lower-level ones. We show that the distinction between objective chance and epistemic probability can be drawn, and operationalized, at every level of description. There is, therefore, not a single distinction between objective and epistemic probability, but a family of such distinctions.
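A toy illustration of how non-degenerate higher-level chance can coexist with deterministic lower-level dynamics (my own construction, not the authors' formal model): a deterministic map on six microstates induces a genuinely stochastic transition structure on two coarse-grained macrostates, relative to a uniform measure over each macrostate's microstates.

```python
# Deterministic micro-dynamics, chancy macro-dynamics under coarse-graining.
from collections import Counter

micro_step = {0: 3, 1: 2, 2: 0, 3: 1, 4: 5, 5: 4}   # a deterministic map

def macro(s):                                        # coarse-graining map
    return "low" if s < 3 else "high"

for m in ("low", "high"):
    cell = [s for s in micro_step if macro(s) == m]
    outcomes = Counter(macro(micro_step[s]) for s in cell)
    print(m, "->", {k: round(v / len(cell), 2) for k, v in outcomes.items()})
# low  -> {'high': 0.33, 'low': 0.67}   non-degenerate higher-level chances,
# high -> {'low': 0.33, 'high': 0.67}   despite fully deterministic micro-steps
```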
How should an agent decide what to do when she is uncertain not just about morally relevant empirical matters, like the consequences of some course of action, but about the basic principles of morality itself? This question has only recently been taken up in a systematic way by philosophers. Advocates of moral hedging claim that an agent should weigh the reasons put forward by each moral theory in which she has positive credence, considering both the likelihood that that theory is true and the strength of the reasons it posits. The view that it is sometimes rational to hedge for one's moral uncertainties, however, has recently come under attack both from those who believe that an agent should always be guided by the dictates of the single moral theory she deems most probable and from those who believe that an agent's moral beliefs are simply irrelevant to what she ought to do. Among the many objections to hedging that have been pressed in the recent literature is the worry that there is no non-arbitrary way of making the intertheoretic comparisons of moral value necessary to aggregate the value assignments of rival moral theories into a single ranking of an agent's options.

This dissertation has two principal objectives: First, I argue that, contra these recent objections, an agent's moral beliefs and uncertainties are relevant to what she rationally ought to do, and more particularly, that agents are at least sometimes rationally required to hedge for their moral uncertainties. My principal argument for these claims appeals to the enkratic conception of rationality, according to which the requirements of practical rationality derive from an agent's beliefs about the objective, desire-independent value or choiceworthiness of her options. Second, I outline a new general theory of rational choice under moral uncertainty. Central to this theory is the idea of content-based aggregation, that the principles according to which an agent should compare and aggregate rival moral theories are grounded in the content of those theories themselves, including not only their value assignments but also the metaethical and other non-surface-level propositions that underlie, justify, or explain those value assignments.
Political theorists have offered many accounts of collective decision-making under pluralism. I discuss a key dimension on which such accounts differ: the importance assigned not only to the choices made but also to the reasons underlying those choices. On that dimension, different accounts lie in between two extremes. The ‘minimal liberal account’ holds that collective decisions should be made only on practical actions or policies and that underlying reasons should be kept private. The ‘comprehensive deliberative account’ stresses the importance of giving reasons for collective decisions, where such reasons should also be collectively decided. I compare these two accounts on the basis of a formal model developed in the growing literature on the ‘discursive dilemma’ and ‘judgment aggregation’ and address several questions: What is the trade-off between the (minimal liberal) demand for reaching agreement on outcomes and the (comprehensive deliberative) demand for reason-giving? How large should the ‘sphere of public reason’ be? When do the decision procedures suggested by the two accounts agree, and when do they not? How good are these procedures at truth-tracking on factual matters? What strategic incentives do they generate for decision-makers? My discussion identifies what is at stake in the choice between minimal liberal and comprehensive deliberative accounts of collective decision-making, and sheds light not only on these two ideal-typical accounts themselves, but also on many characteristics that intermediate accounts share with them.
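One of the questions above, how good the two kinds of procedure are at truth-tracking, can be illustrated with a small simulation (illustrative parameters, not the paper's exact model): a "premise-based" procedure takes majority votes on premises p and q and derives the conclusion c = p-and-q, while a "conclusion-based" procedure takes a majority vote on c directly.

```python
# Truth-tracking of premise-based vs conclusion-based procedures on the
# agenda p, q, c with c <-> (p and q). Voters judge each premise correctly
# with independent probability 0.6; both premises are in fact true.
import random

def trial(n=51, competence=0.6, true_p=True, true_q=True):
    def judge(truth):
        return truth if random.random() < competence else not truth
    votes = [(judge(true_p), judge(true_q)) for _ in range(n)]
    maj_p = sum(p for p, _ in votes) > n / 2
    maj_q = sum(q for _, q in votes) > n / 2
    premise_based = maj_p and maj_q              # derive c from premise majorities
    conclusion_based = sum(p and q for p, q in votes) > n / 2  # vote on c directly
    return premise_based, conclusion_based

true_c = True
results = [trial() for _ in range(10_000)]
print("premise-based   :", sum(pb == true_c for pb, _ in results) / 10_000)
print("conclusion-based:", sum(cb == true_c for _, cb in results) / 10_000)
# Premise-based tracks the true conjunction far better here: each individual
# judges the conjunction correctly only with probability .36, so the direct
# majority on c is usually wrong.
```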
Political science is divided between methodological individualists, who seek to explain political phenomena by reference to individuals and their interactions, and holists (or nonreductionists), who consider some higher-level social entities or properties such as states, institutions, or cultures ontologically or causally significant. We propose a reconciliation between these two perspectives, building on related work in philosophy. After laying out a taxonomy of different variants of each view, we observe that (i) although political phenomena result from underlying individual attitudes and behavior, individual-level descriptions do not always capture all explanatorily salient properties, and (ii) nonreductionistic explanations are mandated when social regularities are robust to changes in their individual-level realization. We characterize the dividing line between phenomena requiring nonreductionistic explanation and phenomena permitting individualistic explanation and give examples from the study of ethnic conflicts, social-network theory, and international-relations theory.
The "doctrinal paradox" or "discursive dilemma" shows that propositionwise majority voting over the judgments held by multiple individuals on some interconnected propositions can lead to inconsistent collective judgments on these propositions. List and Pettit (2002) have proved that this paradox illustrates a more general impossibility theorem showing that there exists no aggregation procedure that generally produces consistent collective judgments and satisfies certain minimal conditions. Although the paradox and the theorem concern the aggregation of judgments rather than preferences, they invite comparison with two established results on the aggregation of preferences: the Condorcet paradox and Arrow's impossibility theorem. We may ask whether the new impossibility theorem is a special case of Arrow's theorem, or whether there are interesting disanalogies between the two results. In this paper, we compare the two theorems and show that they are not straightforward corollaries of each other. We further suggest that, while the framework of preference aggregation can be mapped into the framework of judgment aggregation, there exists no obvious reverse mapping. Finally, we address one particular minimal condition that is used in both theorems – an independence condition – and suggest that this condition points towards a unifying property underlying both impossibility results.
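The mapping from preference aggregation into judgment aggregation can be made concrete: encode the pairwise ranking propositions as agenda items and apply propositionwise majority voting to the classic Condorcet profile. The majority judgments then form a set inconsistent with transitivity, i.e. the Condorcet paradox re-expressed as a doctrinal paradox:

```python
# The Condorcet cycle as inconsistent majority judgments over ranking
# propositions "x>y", "y>z", "x>z", with transitivity as the constraint.
voters = [
    {"x>y": True,  "y>z": True,  "x>z": True},    # voter with x > y > z
    {"x>y": False, "y>z": True,  "x>z": False},   # voter with y > z > x
    {"x>y": True,  "y>z": False, "x>z": False},   # voter with z > x > y
]
majority = {prop: sum(v[prop] for v in voters) > len(voters) / 2
            for prop in ("x>y", "y>z", "x>z")}
print(majority)   # x>y: True, y>z: True, x>z: False -- a majority cycle
# Transitivity demands (x>y and y>z) -> x>z, which the majority set violates:
print(not (majority["x>y"] and majority["y>z"]) or majority["x>z"])  # False
```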
In this paper I introduce a practical explication of the notion of expertise. First, I motivate this attempt by taking a look at recent debates, which display great disagreement about whether and how to define expertise in the first place. After that, I introduce the methodology of practical explication in the spirit of Edward Craig's Knowledge and the State of Nature, along with some conditions of adequacy taken from ordinary and scientific language. This eventually culminates in the respective explication of expertise, according to which the term essentially refers to a certain kind of service relation. This is why expertise should be considered a predominantly social kind. The article ends with a discussion of advantages and of prima facie plausible objections to my account of expertise.
Much recent philosophical work on social freedom focuses on whether freedom should be understood as non-interference, in the liberal tradition associated with Isaiah Berlin, or as non-domination, in the republican tradition revived by Philip Pettit and Quentin Skinner. We defend a conception of freedom that lies between these two alternatives: freedom as independence. Like republican freedom, it demands the robust absence of relevant constraints on action. Unlike republican freedom, and like liberal freedom, it is not moralized. We show that freedom as independence retains the virtues of its liberal and republican counterparts while shedding their vices. Our aim is to put this conception of freedom more firmly on the map and to offer a novel perspective on the logical space in which different conceptions of freedom are located.
This is an edited transcript of a conversation to be included in the collection "Conversations on Rational Choice". The conversation was conducted in Munich on 7 and 9 February 2016.
Scientists and philosophers frequently speak about levels of description, levels of explanation, and ontological levels. In this paper, I propose a unified framework for modelling levels. I give a general definition of a system of levels and show that it can accommodate descriptive, explanatory, and ontological notions of levels. I further illustrate the usefulness of this framework by applying it to some salient philosophical questions: (1) Is there a linear hierarchy of levels, with a fundamental level at the bottom? And what does the answer to this question imply for physicalism, the thesis that everything supervenes on the physical? (2) Are there emergent properties? (3) Are higher-level descriptions reducible to lower-level ones? (4) Can the relationship between normative and non-normative domains be viewed as one involving levels? Although I use the terminology of “levels”, the proposed framework can also represent “scales”, “domains”, or “subject matters”, where these are not linearly but only partially ordered by relations of supervenience or inclusion.
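A minimal sketch of the partial-order aspect of such a framework (a toy example only; it does not reproduce the paper's general definition): levels ordered by a supervenience relation need not form a single linear hierarchy, although they may still have a lowest element.

```python
# Levels as a set partially ordered by (the reflexive-transitive closure of)
# an immediate-supervenience relation. The example order is only partial:
# the "mental" and "computational" levels are incomparable here.
from itertools import product

levels = {"physical", "chemical", "biological", "mental", "computational"}
supervenes = {                        # immediate supervenience edges
    ("chemical", "physical"),
    ("biological", "chemical"),
    ("mental", "biological"),
    ("computational", "physical"),
}

def leq(a, b):                        # a supervenes (directly or not) on b
    if a == b:
        return True
    return any(x == a and leq(y, b) for x, y in supervenes)

comparable = {(a, b) for a, b in product(levels, levels)
              if leq(a, b) or leq(b, a)}
print(("mental", "computational") in comparable)   # False: not a linear order
bottoms = [l for l in levels if all(leq(m, l) for m in levels)]
print(bottoms)   # ['physical']: this toy system has a lowest level, but a
                 # system of levels need not
```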
The traditional Lewis–Stalnaker semantics treats all counterfactuals with an impossible antecedent as trivially or vacuously true. Many have regarded this as a serious defect of the semantics. For intuitively, it seems, counterfactuals with impossible antecedents—counterpossibles—can be non-trivially true and non-trivially false. Whereas the counterpossible "If Hobbes had squared the circle, then the mathematical community at the time would have been surprised" seems true, "If Hobbes had squared the circle, then sick children in the mountains of Afghanistan at the time would have been thrilled" seems false. Many have proposed to extend the Lewis–Stalnaker semantics with impossible worlds to make room for a non-trivial or non-vacuous treatment of counterpossibles. Roughly, on the extended Lewis–Stalnaker semantics, we evaluate a counterfactual of the form "If A had been true, then C would have been true" by going to the closest world—whether possible or impossible—in which A is true and checking whether C is also true in that world. If the answer is "yes", the counterfactual is true; otherwise it is false. Since there are impossible worlds in which the mathematically impossible happens, there are impossible worlds in which Hobbes manages to square the circle. And intuitively, in the closest such impossible worlds, sick children in the mountains of Afghanistan are not thrilled—they remain sick and unmoved by the mathematical developments in Europe. If so, the counterpossible "If Hobbes had squared the circle, then sick children in the mountains of Afghanistan at the time would have been thrilled" comes out false, as desired. In this paper, I will critically investigate the extended Lewis–Stalnaker semantics for counterpossibles. I will argue that the standard version of the extended semantics, in which impossible worlds correspond to maximal, logically inconsistent entities, fails to give the correct semantic verdicts for many counterpossibles. In light of the negative arguments, I will then outline a new version of the extended Lewis–Stalnaker semantics that can avoid these problems.
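A toy computational rendering of the extended semantics just described (purely illustrative; it settles nothing about the paper's objections): worlds, possible or impossible, are modelled as sets of sentences, and a counterfactual is evaluated at the closest antecedent-world, with closeness crudely measured by overlap with the actual world.

```python
# Closest-world evaluation of counterpossibles over a tiny world space.
actual = frozenset({"children_sick", "maths_normal"})

worlds = [
    # an impossible world where Hobbes squares the circle but little else changes:
    frozenset({"hobbes_squares_circle", "mathematicians_surprised",
               "children_sick"}),
    # a more remote impossible world where the sick children are thrilled:
    frozenset({"hobbes_squares_circle", "mathematicians_surprised",
               "children_thrilled"}),
]

def closest(antecedent, candidates):
    holding = [w for w in candidates if antecedent in w]
    return max(holding, key=lambda w: len(w & actual))  # most overlap wins

def counterfactual(antecedent, consequent):
    return consequent in closest(antecedent, worlds)

print(counterfactual("hobbes_squares_circle", "mathematicians_surprised"))  # True
print(counterfactual("hobbes_squares_circle", "children_thrilled"))         # False
```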
This paper explores Thomas Aquinas’ and Richard Swinburne’s doctrines of simplicity in the context of their philosophical theologies. Both say that God is simple. However, Swinburne takes simplicity to be a property of the theistic hypothesis, while for Aquinas simplicity is a property of God himself. For Swinburne, simpler theories are ceteris paribus more likely to be true; for Aquinas, simplicity and truth are properties of God which, in a certain way, coincide – because God is metaphysically simple. Notwithstanding their different approaches, some unreckoned parallels between their thoughts are brought to light.
This paper provides an introductory review of the theory of judgment aggregation. It introduces the paradoxes of majority voting that originally motivated the field, explains several key results on the impossibility of propositionwise judgment aggregation, presents a pedagogical proof of one of those results, discusses escape routes from the impossibility, and relates judgment aggregation to some other salient aggregation problems, such as preference aggregation, abstract aggregation and probability aggregation. The review is illustrative rather than exhaustive and is intended to give readers new to the field of judgment aggregation a sense of this rapidly growing research area.
Some moral theorists argue that innocent beneficiaries of wrongdoing may have special remedial duties to address the hardships suffered by the victims of the wrongdoing. These arguments generally aim to simply motivate the idea that being a beneficiary can provide an independent ground for charging agents with remedial duties to the victims of wrongdoing. Consequently, they have neglected contexts in which it is implausible to charge beneficiaries with remedial duties to the victims of wrongdoing, thereby failing to explore the limits of the benefiting relation in detail. Our aim in this article is to identify a criterion to distinguish contexts in which innocent beneficiaries plausibly bear remedial duties to the victims of wrongdoing from those in which they do not. We argue that innocent beneficiaries incur special duties to the victims of wrongdoing if and only if receiving and retaining the benefits sustains wrongful harm. We develop this criterion by identifying and explicating two general modes of sustaining wrongful harm. We also show that our criterion offers a general explanation for why some innocent beneficiaries incur a special duty to the victims of wrongdoing while others do not. By sustaining wrongful harm, beneficiaries-with-duties contribute to wrongful harm, and we ordinarily have relatively stringent moral requirements against contributing to wrongful harm. On our account, innocently benefiting from wrongdoing per se does not generate duties to the victims of wrongdoing. Rather, beneficiaries acquire such duties because their receipt and retention of the benefits of wrongdoing contribute to the persistence of the wrongful harm suffered by the victim. We conclude by showing that our proposed criterion also illuminates why there can be reasonable disagreement about whether beneficiaries have a duty to victims in some social contexts.
In this essay, we explore an issue of moral uncertainty: what we are permitted to do when we are unsure about which moral principles are correct. We develop a novel approach to this issue that incorporates important insights from previous work on moral uncertainty, while avoiding some of the difficulties that beset existing alternative approaches. Our approach is based on evaluating and choosing between option sets rather than particular conduct options. We show how our approach is particularly well-suited to address this issue of moral uncertainty with respect to agents that have credence in moral theories that are not fully consequentialist.
In this paper, I introduce the emerging theory of judgment aggregation as a framework for studying institutional design in social epistemology. When a group or collective organization is given an epistemic task, its performance may depend on its ‘aggregation procedure’, i.e. its mechanism for aggregating the group members’ individual beliefs or judgments into corresponding collective beliefs or judgments endorsed by the group as a whole. I argue that a group’s aggregation procedure plays an important role in determining whether the group can meet two challenges: the ‘rationality challenge’ and the ‘knowledge challenge’. The rationality challenge arises when a group is required to endorse consistent beliefs or judgments; the knowledge challenge arises when the group’s beliefs or judgments are required to track certain truths. My discussion seeks to identify those properties of an aggregation procedure that affect a group’s success at meeting each of the two challenges.
This paper offers a comparison of three different kinds of collective attitudes: aggregate, common, and corporate attitudes. They differ not only in their relationship to individual attitudes—e.g., whether they are “reducible” to individual attitudes—but also in the roles they play in relation to the collectives to which they are ascribed. The failure to distinguish them can lead to confusion, in informal talk as well as in the social sciences. So, the paper’s message is an appeal for disambiguation.
Our aim in this essay is to critically examine Iris Young’s arguments in her important posthumously published book against what she calls the liability model for attributing responsibility, as well as the arguments that she marshals in support of what she calls the social connection model of political responsibility. We contend that her arguments against the liability model of conceiving responsibility are not convincing, and that her alternative to it is vulnerable to damaging objections.
In normative political theory, it is widely accepted that democracy cannot be reduced to voting alone, but that it requires deliberation. In formal social choice theory, by contrast, the study of democracy has focused primarily on the aggregation of individual opinions into collective decisions, typically through voting. While the literature on deliberation has an optimistic flavour, the literature on social choice is more mixed. It is centred around several paradoxes and impossibility results identifying conflicts between different intuitively plausible desiderata. In recent years, there has been a growing dialogue between the two literatures. This paper discusses the connections between them. Important insights are that (i) deliberation can complement aggregation and open up an escape route from some of its negative results; and (ii) the formal models of social choice theory can shed light on some aspects of deliberation, such as the nature of deliberation-induced opinion change.
We offer an original argument for the existence of universal fictions—that is, fictions within which every possible proposition is true. Specifically, we detail a trio of such fictions, along with an easy-to-follow recipe for generating more. After exploring several consequences and dismissing some objections, we conclude that fiction, unlike reality, is unlimited when it comes to truth.
Despite the prevalence of human rights discourse, the very idea or concept of a human right remains obscure. In particular, it is unclear what is supposed to be special or distinctive about human rights. In this paper, we consider two recent attempts to answer this challenge, James Griffin’s “personhood account” and Charles Beitz’s “practice-based account”, and argue that neither is entirely satisfactory. We then conclude with a suggestion for what a more adequate account might look like – what we call the “structural pluralist account” of human rights.
Can there be a global demos? The current debate about this topic is divided between two opposing camps: the “pessimist” or “impossibilist” camp, which holds that the emergence of a global demos is either conceptually or empirically impossible, and the “optimist” or “possibilist” camp, which holds that the emergence of a global demos is conceptually as well as empirically possible and an embryonic version of it already exists. However, the two camps agree neither on a common working definition of a global demos, nor on the relevant empirical facts, so it is difficult to reconcile their conflicting outlooks. We seek to move the debate beyond this stalemate. We argue that existing conceptions of a demos are ill-suited for capturing what kind of a global demos is needed to facilitate good global governance, and we propose a new conception of a demos that is better suited for this purpose. We suggest that some of the most prominent conceptions of a demos have focused too much on who the members of a demos are and too little on what functional characteristics the demos must have in order to perform its role in facilitating governance within the relevant domain. Our new proposal shifts the emphasis from the first, “compositional” question to the second, “performative” one, and provides a more “agency-based” account of a global demos. The key criterion that a collection of individuals must meet in order to qualify as a demos is that it is not merely demarcated by an appropriate membership criterion, but that it can be organized, democratically, so as to function as a state-like agent. Compared to the existing, predominantly “compositional” approaches to thinking about the demos, this agency-based approach puts us into a much better position to assess the empirical prospects for the emergence of a global demos that can facilitate good global governance.
The Humean best systems account (BSA) identifies laws of nature with the regularities in a system of truths that, as a whole, best conforms to scientific standards for theory-choice. A principled problem for the BSA is that it returns the wrong verdicts about laws in cases where multiple systems, containing different regularities, satisfy these standards equally well. This problem affects every version of the BSA, because it arises regardless of which standards for theory-choice Humeans adopt. In this paper, we propose a Humean response to the problem. We invoke pragmatic aspects of Humean laws to show that the BSA, despite violating some of our intuitive judgements, can capture everything that is relevant for scientific practice.
Possible-worlds accounts of mental or linguistic content are often criticized for being too coarse-grained. To make room for more fine-grained distinctions among contents, several authors have recently proposed extending the space of possible worlds by "impossible worlds". We argue that this strategy comes with serious costs: we would effectively have to abandon most of the features that make the possible-worlds framework attractive. More generally, we argue that while there are intuitive and theoretical considerations against overly coarse-grained notions of content, the same kinds of considerations also prohibit an overly fine-grained individuation of content. An adequate notion of content, it seems, should have intermediate granularity. However, it is hard to construct a notion of content that meets these demands. Any notion of content, we suggest, must be either implausibly coarse-grained or implausibly fine-grained (or both).
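Both horns of this granularity dilemma can be displayed in a small toy model (my own construction): with contents as sets of possible worlds, distinct necessary truths collapse into one content; once impossible worlds are added, even trivially equivalent sentences come apart.

```python
# Contents as sets of worlds, with and without impossible worlds.
possible   = ["w1", "w2"]
all_worlds = ["w1", "w2", "i1"]    # i1: a logically anarchic impossible world

truths_at = {
    "w1": {"2+2=4", "FLT", "snow&cold", "cold&snow"},
    "w2": {"2+2=4", "FLT"},
    "i1": {"2+2=4", "snow&cold"},  # at i1, FLT fails and conjunct order "matters"
}

def content(sentence, worlds):
    return {w for w in worlds if sentence in truths_at[w]}

# Too coarse: distinct necessary truths get identical possible-worlds content.
print(content("2+2=4", possible) == content("FLT", possible))                # True
# Arguably too fine: trivially equivalent sentences now differ in content.
print(content("snow&cold", all_worlds) == content("cold&snow", all_worlds))  # False
```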
What justifies holding the person that we are today morally responsible for something we did a year ago? And why are we justified in showing prudential concern for the future welfare of the person we will be a year from now? These questions cannot be systematically pursued without addressing the problem of personal identity. This essay considers whether Buddhist Reductionism, a philosophical project grounded on the idea that persons reduce to a set of bodily, sensory, perceptual, dispositional, and conscious elements, provides support for Parfit’s psychological criterion for personal identity. It examines the role that self-consciousness plays in mediating both self-concern and concern for others, and offers an argument for how reductionism about substantive or enduring selves may be reconciled with the seemingly irreducible character of self-consciousness.
Some moral theorists argue that being an innocent beneficiary of significant harms inflicted by others may be sufficient to ground special duties to address the hardships suffered by the victims, at least when it is impossible to extract compensation from those who perpetrated the harm. This idea has been applied to climate change in the form of the beneficiary-pays principle. Other philosophers, however, are quite sceptical about beneficiary pays. Our aim in this article is to examine their critiques. We conclude that, while they have made important points, the principle remains worthy of further development and exploration. Our purpose in engaging with these critiques is constructive — we aim to formulate beneficiary pays in ways that would give it a plausible role in allocating the cost of addressing human-induced climate change, while acknowledging that some understandings of the principle would make it unsuitable for this purpose.
Majority cycling and related social choice paradoxes are often thought to threaten the meaningfulness of democracy. But deliberation can prevent majority cycles – not by inducing unanimity, which is unrealistic, but by bringing preferences closer to single-peakedness. We present the first empirical test of this hypothesis, using data from Deliberative Polls. Comparing preferences before and after deliberation, we find increases in proximity to single-peakedness. The increases are greater for lower versus higher salience issues and for individuals who seem to have deliberated more versus less effectively. They are not merely a byproduct of increased substantive agreement. Our results both refine and support the idea that deliberation, by increasing proximity to single-peakedness, provides an escape from the problem of majority cycling.
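The key structural notion here, single-peakedness, and a simplified proximity measure can be sketched as follows (a hedged simplification of the paper's actual measure, not its exact definition): a ranking is single-peaked on a left-right axis if preference rises to a peak and falls thereafter, and proximity can be taken as the largest share of voters single-peaked relative to one common axis.

```python
# Single-peakedness check and a toy proximity-to-single-peakedness measure.
from itertools import permutations

def single_peaked(ranking, axis):
    """ranking: options from best to worst; axis: left-to-right order."""
    pos = {x: axis.index(x) for x in ranking}
    left = right = pos[ranking[0]]        # interval around the peak so far
    for option in ranking[1:]:            # each option must extend the interval
        p = pos[option]
        if p == left - 1:
            left = p
        elif p == right + 1:
            right = p
        else:
            return False
    return True

def proximity(profile, options):
    """Largest fraction of rankings single-peaked w.r.t. some common axis."""
    return max(sum(single_peaked(r, axis) for r in profile) / len(profile)
               for axis in permutations(options))

profile = [("L", "C", "R"), ("C", "L", "R"), ("R", "C", "L"),
           ("L", "R", "C")]               # the last ranking breaks the L-C-R axis
print(proximity(profile, ("L", "C", "R")))   # 0.75
```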
We offer a critical assessment of the “exclusion argument” against free will, which may be summarized by the slogan: “My brain made me do it, therefore I couldn't have been free”. While the exclusion argument has received much attention in debates about mental causation (“could my mental states ever cause my actions?”), it is seldom discussed in relation to free will. However, the argument informally underlies many neuroscientific discussions of free will, especially the claim that advances in neuroscience seriously challenge our belief in free will. We introduce two distinct versions of the argument, discuss several unsuccessful responses to it, and then present our preferred response. This involves showing that a key premise – the “exclusion principle” – is false under what we take to be the most natural account of causation in the context of agency: the difference-making account. We finally revisit the debate about neuroscience and free will.
Humean reductionism about laws of nature is the view that the laws reduce to the total distribution of non-modal or categorical properties in spacetime. A worry about Humean reductionism is that it cannot motivate the characteristic modal resilience of laws under counterfactual suppositions and that it thus generates wrong verdicts about certain nested counterfactuals. In this paper, we defend Humean reductionism by motivating an account of the modal resilience of Humean laws that gets nested counterfactuals right.
In this paper, we present a new semantic framework designed to capture a distinctly cognitive or epistemic notion of meaning akin to Fregean senses. Traditional Carnapian intensions are too coarse-grained for this purpose: they fail to draw semantic distinctions between sentences that, from a Fregean perspective, differ in meaning. This has led some philosophers to introduce more fine-grained hyperintensions that allow us to draw semantic distinctions among co-intensional sentences. But the hyperintensional strategy has a flip side: it risks drawing semantic distinctions between sentences that, from a Fregean perspective, do not differ in meaning. This is what we call the ‘new problem’ of hyperintensionality, to distinguish it from the ‘old problem’ that faced the intensional theory. We show that our semantic framework offers a joint solution to both these problems by virtue of satisfying a version of Frege’s so-called ‘equipollence principle’ for sense individuation. Frege’s principle, we argue, not only captures the semantic intuitions that give rise to the old and the new problem of hyperintensionality, but also points the way to an independently motivated solution to both problems.
In the context of EPR-Bohm type experiments and spin detections confined to spacelike hypersurfaces, a local, deterministic and realistic model within a Friedmann-Robertson-Walker spacetime with constant spatial curvature (S^3) is presented that describes simultaneous measurements of the spins of two fermions emerging in a singlet state from the decay of a spinless boson. Exact agreement with the probabilistic predictions of quantum theory is achieved in the model without data rejection, remote contextuality, superdeterminism or backward causation. A singularity-free Clifford-algebraic representation of S^3 with vanishing spatial curvature and non-vanishing torsion is then employed to transform the model into a more elegant form. Several event-by-event numerical simulations of the model are presented, which confirm our analytical results with an accuracy of 4 parts in 10^4. Possible implications of our results for practical applications such as quantum security protocols and quantum computing are briefly discussed.
At the core of republican thought, on Philip Pettit’s account, lies the conception of freedom as non-domination, as opposed to freedom as non-interference in the liberal sense. I revisit the distinction between liberal and republican freedom and argue that republican freedom incorporates a particular rule-of-law requirement, whereas liberal freedom does not. Liberals may also endorse such a requirement, but not as part of their conception of freedom itself. I offer a formal analysis of this rule-of-law requirement and compare liberal and republican freedom on its basis. While I agree with Pettit that republican freedom has broader implications than liberal freedom, I conclude that we face a trade-off between two dimensions of freedom (scope and robustness) and that it is harder for republicans to solve that trade-off than it is for liberals.
Our ordinary causal concept seems to fit poorly with how our best physics describes the world. We think of causation as a time-asymmetric dependence relation between relatively local events. Yet fundamental physics describes the world in terms of dynamical laws that are, possible small exceptions aside, time-symmetric and that relate global time slices. My goal in this paper is to show why we are successful at using local, time-asymmetric models in causal explanations despite this apparent mismatch with fundamental physics. In particular, I will argue that there is an important connection between time asymmetry and locality, namely: understanding the locality of our causal models is the key to understanding why the physical time asymmetries in our universe give rise to time asymmetry in causal explanation. My theory thus provides a unified account of why causation is local and time-asymmetric and thereby enables a reply to Russell’s famous attack on causation.
This essay is about Wittgenstein: first his views on ethics, and second his conception of language games. Third, it combines the two and shows how problems arise from that combination. Wittgenstein rejects theories of ethics and emphasises the variety of language games. Such language games are marked by what I call “inner relativity”. Wittgenstein himself was not a relativist, but it seems to me that his views easily lead to what I call “outer relativism”. In matters of ethics this is particularly problematic.
Many political theorists defend the view that egalitarian justice should extend from the domestic to the global arena. Despite its intuitive appeal, this ‘global egalitarianism’ has come under attack from different quarters. In this article, we focus on one particular set of challenges to this view: those advanced by domestic egalitarians. We consider seven types of challenges, each pointing to a specific disanalogy between the domestic and global arenas which is said to justify the restriction of egalitarian justice to the former, and argue that none of them – whether individually or jointly – offers a conclusive refutation of global egalitarianism.