The vision of legendary criminologist Cesare Lombroso to use scientific theories of individual causes of crime as a basis for screening and prevention programmes targeting individuals at risk for future criminal behaviour has resurfaced, following advances in genetics, neuroscience and psychiatric epidemiology. This article analyses this idea and maps its ethical implications from a public health ethical standpoint. Twenty-seven variants of the new Lombrosian vision of forensic screening and prevention are distinguished, and some scientific and technical limitations are noted. Some lures, biases and structural factors that make the application of the Lombrosian idea likely in spite of weak evidence are pointed out and noted as a specific type of ethical aspect. Many classic and complex ethical challenges for health screening programmes are shown to apply to the identified variants and the choice between them, albeit with peculiar and often provoking variations. These variations are shown to actualize an underlying theoretical conundrum in need of further study, pertaining to the relationship between public health ethics and the ethics and values of criminal law policy.
This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.
It is often argued that higher-level special-science properties cannot be causally efficacious since the lower-level physical properties on which they supervene are doing all the causal work. This claim is usually derived from an exclusion principle stating that if a higher-level property F supervenes on a physical property F* that is causally sufficient for a property G, then F cannot cause G. We employ an account of causation as difference-making to show that the truth or falsity of this principle is a contingent matter and derive necessary and sufficient conditions under which a version of it holds. We argue that one important instance of the principle, far from undermining non-reductive physicalism, actually supports the causal autonomy of certain higher-level properties.
I argue that free will and determinism are compatible, even when we take free will to require the ability to do otherwise and even when we interpret that ability modally, as the possibility of doing otherwise, and not just conditionally or dispositionally. My argument draws on a distinction between physical and agential possibility. Although in a deterministic world only one future sequence of events is physically possible for each state of the world, the more coarsely defined state of an agent and his or her environment can be consistent with more than one such sequence, and thus different actions can be “agentially possible”. The agential perspective is supported by our best theories of human behaviour, and so we should take it at face value when we refer to what an agent can and cannot do. On the picture I defend, free will is not a physical phenomenon, but a higher-level one on a par with other higher-level phenomena such as agency and intentionality.
Suppose that the members of a group each hold a rational set of judgments on some interconnected questions, and imagine that the group itself has to form a collective, rational set of judgments on those questions. How should it go about dealing with this task? We argue that the question raised is subject to a difficulty that has recently been noticed in discussion of the doctrinal paradox in jurisprudence. And we show that there is a general impossibility theorem that that difficulty illustrates. Our paper describes this impossibility result and provides an exploration of its significance. The result naturally invites comparison with Kenneth Arrow's famous theorem (Arrow, 1963 and 1984; Sen, 1970) and we elaborate that comparison in a companion paper (List and Pettit, 2002). The paper is in four sections. The first section documents the need for various groups to aggregate their members' judgments; the second presents the discursive paradox; the third gives an informal statement of the more general impossibility result; the formal proof is presented in an appendix. The fourth section, finally, discusses some escape routes from that impossibility.
What turns the continuous flow of experience into perceptually distinct objects? Can our verbal descriptions unambiguously capture what it is like to see, hear, or feel? How might we reason about the testimony that perception alone discloses? Christian Coseru proposes a rigorous and highly original way to answer these questions by developing a framework for understanding perception as a mode of apprehension that is intentionally constituted, pragmatically oriented, and causally effective. By engaging with recent discussions in phenomenology and analytic philosophy of mind, but also by drawing on the work of Husserl and Merleau-Ponty, Coseru offers a sustained argument that Buddhist philosophers, in particular those who follow the tradition of inquiry initiated by Dignāga and Dharmakīrti, have much to offer when it comes to explaining why epistemological disputes about the evidential role of perceptual experience cannot satisfactorily be resolved without taking into account the structure of our cognitive awareness. Perceiving Reality examines the function of perception and its relation to attention, language, and discursive thought, and provides new ways of conceptualizing the Buddhist defense of the reflexivity thesis of consciousness, namely, that each cognitive event is to be understood as involving a pre-reflective implicit awareness of its own occurrence. Coseru advances an innovative approach to Buddhist philosophy of mind in the form of phenomenological naturalism, and moves beyond comparative approaches to philosophy by emphasizing the continuity of concerns between Buddhist and Western philosophical accounts of the nature of perceptual content and the character of perceptual consciousness.
We offer a new argument for the claim that there can be non-degenerate objective chance (“true randomness”) in a deterministic world. Using a formal model of the relationship between different levels of description of a system, we show how objective chance at a higher level can coexist with its absence at a lower level. Unlike previous arguments for the level-specificity of chance, our argument shows, in a precise sense, that higher-level chance does not collapse into epistemic probability, despite higher-level properties supervening on lower-level ones. We show that the distinction between objective chance and epistemic probability can be drawn, and operationalized, at every level of description. There is, therefore, not a single distinction between objective and epistemic probability, but a family of such distinctions.
The existence of group agents is relatively widely accepted. Examples are corporations, courts, NGOs, and even entire states. But should we also accept that there is such a thing as group consciousness? I give an overview of some of the key issues in this debate and sketch a tentative argument for the view that group agents lack phenomenal consciousness. In developing my argument, I draw on integrated information theory, a much-discussed theory of consciousness. I conclude by pointing out an implication of my argument for the normative status of group agents.
Political theorists have offered many accounts of collective decision-making under pluralism. I discuss a key dimension on which such accounts differ: the importance assigned not only to the choices made but also to the reasons underlying those choices. On that dimension, different accounts lie in between two extremes. The ‘minimal liberal account’ holds that collective decisions should be made only on practical actions or policies and that underlying reasons should be kept private. The ‘comprehensive deliberative account’ stresses the importance of giving reasons for collective decisions, where such reasons should also be collectively decided. I compare these two accounts on the basis of a formal model developed in the growing literature on the ‘discursive dilemma’ and ‘judgment aggregation’ and address several questions: What is the trade-off between the (minimal liberal) demand for reaching agreement on outcomes and the (comprehensive deliberative) demand for reason-giving? How large should the ‘sphere of public reason’ be? When do the decision procedures suggested by the two accounts agree, and when do they diverge? How good are these procedures at truth-tracking on factual matters? What strategic incentives do they generate for decision-makers? My discussion identifies what is at stake in the choice between minimal liberal and comprehensive deliberative accounts of collective decision-making, and sheds light not only on these two ideal-typical accounts themselves, but also on many characteristics that intermediate accounts share with them.
Political science is divided between methodological individualists, who seek to explain political phenomena by reference to individuals and their interactions, and holists (or nonreductionists), who consider some higher-level social entities or properties such as states, institutions, or cultures ontologically or causally significant. We propose a reconciliation between these two perspectives, building on related work in philosophy. After laying out a taxonomy of different variants of each view, we observe that (i) although political phenomena result from underlying individual attitudes and behavior, individual-level descriptions do not always capture all explanatorily salient properties, and (ii) nonreductionistic explanations are mandated when social regularities are robust to changes in their individual-level realization. We characterize the dividing line between phenomena requiring nonreductionistic explanation and phenomena permitting individualistic explanation and give examples from the study of ethnic conflicts, social-network theory, and international-relations theory.
The “doctrinal paradox” or “discursive dilemma” shows that propositionwise majority voting over the judgments held by multiple individuals on some interconnected propositions can lead to inconsistent collective judgments on these propositions. List and Pettit (2002) have proved that this paradox illustrates a more general impossibility theorem showing that there exists no aggregation procedure that generally produces consistent collective judgments and satisfies certain minimal conditions. Although the paradox and the theorem concern the aggregation of judgments rather than preferences, they invite comparison with two established results on the aggregation of preferences: the Condorcet paradox and Arrow's impossibility theorem. We may ask whether the new impossibility theorem is a special case of Arrow's theorem, or whether there are interesting disanalogies between the two results. In this paper, we compare the two theorems, and show that they are not straightforward corollaries of each other. We further suggest that, while the framework of preference aggregation can be mapped into the framework of judgment aggregation, there exists no obvious reverse mapping. Finally, we address one particular minimal condition that is used in both theorems – an independence condition – and suggest that this condition points towards a unifying property underlying both impossibility results.
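To make the doctrinal paradox concrete, here is a minimal sketch of the standard three-member example (an illustration of the general phenomenon described above, not material taken from the paper): each individual holds consistent judgments on p, q and their conjunction, yet propositionwise majority voting yields a collectively inconsistent judgment set.

```python
# Illustrative three-member court judging p, q, and their conjunction.
# Each individual judgment set is consistent, but propositionwise majority
# voting produces an inconsistent collective judgment set (the discursive dilemma).
judges = [
    {"p": True,  "q": True,  "p_and_q": True},   # member 1
    {"p": True,  "q": False, "p_and_q": False},  # member 2
    {"p": False, "q": True,  "p_and_q": False},  # member 3
]

def majority(proposition):
    """Collective judgment on a proposition by simple majority vote."""
    yes_votes = sum(j[proposition] for j in judges)
    return yes_votes > len(judges) / 2

collective = {prop: majority(prop) for prop in ("p", "q", "p_and_q")}
print(collective)  # {'p': True, 'q': True, 'p_and_q': False}

# The collective accepts p and q but rejects their conjunction: inconsistent.
print("consistent?", collective["p_and_q"] == (collective["p"] and collective["q"]))  # False
```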
In this paper I introduce a practical explication of the notion of expertise. I first motivate this attempt by looking at recent debates, which display great disagreement about whether and how to define expertise in the first place. I then introduce the methodology of practical explication in the spirit of Edward Craig's Knowledge and the State of Nature, along with some conditions of adequacy taken from ordinary and scientific language. This culminates in the proposed explication of expertise, according to which the term essentially refers to a certain kind of service-relation. This is why expertise should be considered a predominantly social kind. The article ends with a discussion of advantages of, and prima facie plausible objections to, my account of expertise.
In the growing literature on decision-making under moral uncertainty, a number of skeptics have argued that there is an insuperable barrier to rational "hedging" for the risk of moral error, namely the apparent incomparability of moral reasons given by rival theories like Kantianism and utilitarianism. Various general theories of intertheoretic value comparison have been proposed to meet this objection, but each suffers from apparently fatal flaws. In this paper, I propose a more modest approach that aims to identify classes of moral theories that share common principles strong enough to establish bases for intertheoretic comparison. I show that, contra the claims of skeptics, there are often rationally perspicuous grounds for precise, quantitative value comparisons within such classes. In light of this fact, I argue, the existence of some apparent incomparabilities between widely divergent moral theories cannot serve as a general argument against hedging for one's moral uncertainties.
Much recent philosophical work on social freedom focuses on whether freedom should be understood as non-interference, in the liberal tradition associated with Isaiah Berlin, or as non-domination, in the republican tradition revived by Philip Pettit and Quentin Skinner. We defend a conception of freedom that lies between these two alternatives: freedom as independence. Like republican freedom, it demands the robust absence of relevant constraints on action. Unlike republican freedom, and like liberal freedom, it is not moralized. We show that freedom as independence retains the virtues of its liberal and republican counterparts while shedding their vices. Our aim is to put this conception of freedom more firmly on the map and to offer a novel perspective on the logical space in which different conceptions of freedom are located.
This is an edited transcript of a conversation to be included in the collection "Conversations on Rational Choice". The conversation was conducted in Munich on 7 and 9 February 2016.
Defenders of deontological constraints in normative ethics face a challenge: how should an agent decide what to do when she is uncertain whether some course of action would violate a constraint? The most common response to this challenge has been to defend a threshold principle on which it is subjectively permissible to act iff the agent's credence that her action would be constraint-violating is below some threshold t. But the threshold approach seems arbitrary and unmotivated: what would possibly determine where the threshold should be set, and why should there be any precise threshold at all? Threshold views also seem to violate ought agglomeration, since a pair of actions each of which is below the threshold for acceptable moral risk can, in combination, exceed that threshold. In this paper, I argue that stochastic dominance reasoning can vindicate and lend rigor to the threshold approach: given characteristically deontological assumptions about the moral value of acts, it turns out that morally safe options will stochastically dominate morally risky alternatives when and only when the likelihood that the risky option violates a moral constraint is greater than some precisely definable threshold (in the simplest case, .5). I also show how, in combination with the observation that deontological moral evaluation is relativized to particular choice situations, this approach can overcome the agglomeration problem. This allows the deontologist to give a precise and well-motivated response to the problem of uncertainty.
Scientists and philosophers frequently speak about levels of description, levels of explanation, and ontological levels. In this paper, I propose a unified framework for modelling levels. I give a general definition of a system of levels and show that it can accommodate descriptive, explanatory, and ontological notions of levels. I further illustrate the usefulness of this framework by applying it to some salient philosophical questions: (1) Is there a linear hierarchy of levels, with a fundamental level at the bottom? And what does the answer to this question imply for physicalism, the thesis that everything supervenes on the physical? (2) Are there emergent properties? (3) Are higher-level descriptions reducible to lower-level ones? (4) Can the relationship between normative and non-normative domains be viewed as one involving levels? Although I use the terminology of “levels”, the proposed framework can also represent “scales”, “domains”, or “subject matters”, where these are not linearly but only partially ordered by relations of supervenience or inclusion.
This paper provides an introductory review of the theory of judgment aggregation. It introduces the paradoxes of majority voting that originally motivated the field, explains several key results on the impossibility of propositionwise judgment aggregation, presents a pedagogical proof of one of those results, discusses escape routes from the impossibility and relates judgment aggregation to some other salient aggregation problems, such as preference aggregation, abstract aggregation and probability aggregation. The review is illustrative rather than exhaustive and is intended to give readers new to the field of judgment aggregation a sense of this rapidly growing research area.
Some moral theorists argue that innocent beneficiaries of wrongdoing may have special remedial duties to address the hardships suffered by the victims of the wrongdoing. These arguments generally aim to simply motivate the idea that being a beneficiary can provide an independent ground for charging agents with remedial duties to the victims of wrongdoing. Consequently, they have neglected contexts in which it is implausible to charge beneficiaries with remedial duties to the victims of wrongdoing, thereby failing to explore the limits of the benefiting relation in detail. Our aim in this article is to identify a criterion to distinguish contexts in which innocent beneficiaries plausibly bear remedial duties to the victims of wrongdoing from those in which they do not. We argue that innocent beneficiaries incur special duties to the victims of wrongdoing if and only if receiving and retaining the benefits sustains wrongful harm. We develop this criterion by identifying and explicating two general modes of sustaining wrongful harm. We also show that our criterion offers a general explanation for why some innocent beneficiaries incur a special duty to the victims of wrongdoing while others do not. By sustaining wrongful harm, beneficiaries-with-duties contribute to wrongful harm, and we ordinarily have relatively stringent moral requirements against contributing to wrongful harm. On our account, innocently benefiting from wrongdoing per se does not generate duties to the victims of wrongdoing. Rather, beneficiaries acquire such duties because their receipt and retention of the benefits of wrongdoing contribute to the persistence of the wrongful harm suffered by the victim. We conclude by showing that our proposed criterion also illuminates why there can be reasonable disagreement about whether beneficiaries have a duty to victims in some social contexts.
In this essay, we explore an issue of moral uncertainty: what we are permitted to do when we are unsure about which moral principles are correct. We develop a novel approach to this issue that incorporates important insights from previous work on moral uncertainty, while avoiding some of the difficulties that beset existing alternative approaches. Our approach is based on evaluating and choosing between option sets rather than particular conduct options. We show how our approach is particularly well-suited to address this issue of moral uncertainty with respect to agents that have credence in moral theories that are not fully consequentialist.
In this paper, I introduce the emerging theory of judgment aggregation as a framework for studying institutional design in social epistemology. When a group or collective organization is given an epistemic task, its performance may depend on its ‘aggregation procedure’, i.e. its mechanism for aggregating the group members’ individual beliefs or judgments into corresponding collective beliefs or judgments endorsed by the group as a whole. I argue that a group’s aggregation procedure plays an important role in determining whether the group can meet two challenges: the ‘rationality challenge’ and the ‘knowledge challenge’. The rationality challenge arises when a group is required to endorse consistent beliefs or judgments; the knowledge challenge arises when the group’s beliefs or judgments are required to track certain truths. My discussion seeks to identify those properties of an aggregation procedure that affect a group’s success at meeting each of the two challenges.
How should you decide what to do when you're uncertain about basic normative principles (e.g., Kantianism vs. utilitarianism)? A natural suggestion is to follow some "second-order" norm: e.g., "comply with the first-order norm you regard as most probable" or "maximize expected choiceworthiness". But what if you're uncertain about second-order norms too -- must you then invoke some third-order norm? If so, it seems that any norm-guided response to normative uncertainty is doomed to a vicious regress. In this paper, I aim to rescue second-order norms from this threat of regress. I first elaborate and defend the suggestion some philosophers have entertained that the regress problem forces us to accept normative externalism, the view that at least one norm is incumbent on agents regardless of their beliefs or evidence concerning that norm. But, I then argue, we need not accept externalism about first-order (e.g., moral) norms, thus closing off any question of what an agent should do in light of her normative beliefs. Rather, it is more plausible to ascribe external force to a single, second-order rational norm: the enkratic principle, correctly formulated. This modest form of externalism, I argue, is both intrinsically well-motivated and sufficient to head off the threat of regress.
This paper offers a comparison of three different kinds of collective attitudes: aggregate, common, and corporate attitudes. They differ not only in their relationship to individual attitudes—e.g., whether they are “reducible” to individual attitudes—but also in the roles they play in relation to the collectives to which they are ascribed. The failure to distinguish them can lead to confusion, in informal talk as well as in the social sciences. So, the paper’s message is an appeal for disambiguation.
In normative political theory, it is widely accepted that democracy cannot be reduced to voting alone, but that it requires deliberation. In formal social choice theory, by contrast, the study of democracy has focused primarily on the aggregation of individual opinions into collective decisions, typically through voting. While the literature on deliberation has an optimistic flavour, the literature on social choice is more mixed. It is centred around several paradoxes and impossibility results identifying conflicts between different intuitively plausible desiderata. In recent years, there has been a growing dialogue between the two literatures. This paper discusses the connections between them. Important insights are that (i) deliberation can complement aggregation and open up an escape route from some of its negative results; and (ii) the formal models of social choice theory can shed light on some aspects of deliberation, such as the nature of deliberation-induced opinion change.
The Humean best systems account (BSA) identifies laws of nature with the regularities in a system of truths that, as a whole, best conforms to scientific standards for theory-choice. A principled problem for the BSA is that it returns the wrong verdicts about laws in cases where multiple systems, containing different regularities, satisfy these standards equally well. This problem affects every version of the BSA because it arises regardless of which standards for theory-choice Humeans adopt. In this paper, we propose a Humean response to the problem. We invoke pragmatic aspects of Humean laws to show that the BSA, despite violating some of our intuitive judgements, can capture everything that is relevant for scientific practice.
Despite the prevalence of human rights discourse, the very idea or concept of a human right remains obscure. In particular, it is unclear what is supposed to be special or distinctive about human rights. In this paper, we consider two recent attempts to answer this challenge, James Griffin’s “personhood account” and Charles Beitz’s “practice-based account”, and argue that neither is entirely satisfactory. We then conclude with a suggestion for what a more adequate account might look like – what we call the “structural pluralist account” of human rights.
Our aim in this essay is to critically examine Iris Young’s arguments in her important posthumously published book against what she calls the liability model for attributing responsibility, as well as the arguments that she marshals in support of what she calls the social connection model of political responsibility. We contend that her arguments against the liability model of conceiving responsibility are not convincing, and that her alternative to it is vulnerable to damaging objections.
Can there be a global demos? The current debate about this topic is divided between two opposing camps: the “pessimist” or “impossibilist” camp, which holds that the emergence of a global demos is either conceptually or empirically impossible, and the “optimist” or “possibilist” camp, which holds that the emergence of a global demos is conceptually as well as empirically possible and an embryonic version of it already exists. However, the two camps agree neither on a common working definition of a global demos, nor on the relevant empirical facts, so it is difficult to reconcile their conflicting outlooks. We seek to move the debate beyond this stalemate. We argue that existing conceptions of a demos are ill-suited for capturing what kind of a global demos is needed to facilitate good global governance, and we propose a new conception of a demos that is better suited for this purpose. We suggest that some of the most prominent conceptions of a demos have focused too much on who the members of a demos are and too little on what functional characteristics the demos must have in order to perform its role in facilitating governance within the relevant domain. Our new proposal shifts the emphasis from the first, “compositional” question to the second, “performative” one, and provides a more “agency-based” account of a global demos. The key criterion that a collection of individuals must meet in order to qualify as a demos is that it is not merely demarcated by an appropriate membership criterion, but that it can be organized, democratically, so as to function as a state-like agent. Compared to the existing, predominantly “compositional” approaches to thinking about the demos, this agency-based approach puts us into a much better position to assess the empirical prospects for the emergence of a global demos that can facilitate good global governance.
Some moral theorists argue that being an innocent beneficiary of significant harms inflicted by others may be sufficient to ground special duties to address the hardships suffered by the victims, at least when it is impossible to extract compensation from those who perpetrated the harm. This idea has been applied to climate change in the form of the beneficiary-pays principle. Other philosophers, however, are quite sceptical about beneficiary pays. Our aim in this article is to examine their critiques. We conclude that, while they have made important points, the principle remains worthy of further development and exploration. Our purpose in engaging with these critiques is constructive — we aim to formulate beneficiary pays in ways that would give it a plausible role in allocating the cost of addressing human-induced climate change, while acknowledging that some understandings of the principle would make it unsuitable for this purpose.
Majority cycling and related social choice paradoxes are often thought to threaten the meaningfulness of democracy. But deliberation can prevent majority cycles – not by inducing unanimity, which is unrealistic, but by bringing preferences closer to single-peakedness. We present the first empirical test of this hypothesis, using data from Deliberative Polls. Comparing preferences before and after deliberation, we find increases in proximity to single-peakedness. The increases are greater for lower versus higher salience issues and for individuals who seem to have deliberated more versus less effectively. They are not merely a byproduct of increased substantive agreement. Our results both refine and support the idea that deliberation, by increasing proximity to single-peakedness, provides an escape from the problem of majority cycling.
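For readers unfamiliar with the social-choice background, the sketch below illustrates the contrast the paper relies on, using a standard textbook example rather than the Deliberative Poll data: a cyclic preference profile produces a majority cycle, while a profile that is single-peaked on the ordering A < B < C yields a Condorcet winner and no cycle.

```python
from itertools import combinations

def pairwise_majorities(profile):
    """Return the majority winner for each pair of alternatives (None on a tie)."""
    alternatives = profile[0]
    results = {}
    for x, y in combinations(alternatives, 2):
        x_wins = sum(ranking.index(x) < ranking.index(y) for ranking in profile)
        y_wins = len(profile) - x_wins
        results[(x, y)] = x if x_wins > y_wins else (y if y_wins > x_wins else None)
    return results

# Cyclic profile: majorities prefer A to B, B to C, and C to A (a Condorcet cycle).
cyclic = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]
print(pairwise_majorities(cyclic))

# Single-peaked profile on the ordering A < B < C: no cycle arises,
# and B is the Condorcet winner (it beats both A and C in pairwise votes).
single_peaked = [["A", "B", "C"], ["B", "A", "C"], ["C", "B", "A"]]
print(pairwise_majorities(single_peaked))
```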
We offer a critical assessment of the “exclusion argument” against free will, which may be summarized by the slogan: “My brain made me do it, therefore I couldn't have been free”. While the exclusion argument has received much attention in debates about mental causation (“could my mental states ever cause my actions?”), it is seldom discussed in relation to free will. However, the argument informally underlies many neuroscientific discussions of free will, especially the claim that advances in neuroscience seriously challenge our belief in free will. We introduce two distinct versions of the argument, discuss several unsuccessful responses to it, and then present our preferred response. This involves showing that a key premise – the “exclusion principle” – is false under what we take to be the most natural account of causation in the context of agency: the difference-making account. We finally revisit the debate about neuroscience and free will.
In the context of EPR-Bohm type experiments and spin detections confined to spacelike hypersurfaces, a local, deterministic and realistic model within a Friedmann-Robertson-Walker spacetime with a constant spatial curvature (S^3) is presented that describes simultaneous measurements of the spins of two fermions emerging in a singlet state from the decay of a spinless boson. Exact agreement with the probabilistic predictions of quantum theory is achieved in the model without data rejection, remote contextuality, superdeterminism or backward causation. A singularity-free Clifford-algebraic representation of S^3 with vanishing spatial curvature and non-vanishing torsion is then employed to transform the model into a more elegant form. Several event-by-event numerical simulations of the model are presented, which confirm our analytical results to an accuracy of 4 parts in 10^4. Possible implications of our results for practical applications such as quantum security protocols and quantum computing are briefly discussed.
At the core of republican thought, on Philip Pettit’s account, lies the conception of freedom as non-domination, as opposed to freedom as noninterference in the liberal sense. I revisit the distinction between liberal and republican freedom and argue that republican freedom incorporates a particular rule-of-law requirement, whereas liberal freedom does not. Liberals may also endorse such a requirement, but not as part of their conception of freedom itself. I offer a formal analysis of this rule-of-law requirement and compare liberal and republican freedom on its basis. While I agree with Pettit that republican freedom has broader implications than liberal freedom, I conclude that we face a trade-off between two dimensions of freedom (scope and robustness) and that it is harder for republicans to solve that trade-off than it is for liberals. Key words: freedom; republicanism; liberalism; noninterference; non-domination; rule of law; robustness; liberal paradox.
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
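Since the argument turns on the notion of stochastic dominance, here is a generic sketch of the underlying test (an illustration of the standard definition of first-order stochastic dominance, not the paper's own model; the lotteries and payoff numbers are hypothetical): option A dominates option B when, for every threshold, A is at least as likely as B to deliver at least that much choiceworthiness, and strictly more likely for some threshold.

```python
def stochastically_dominates(lottery_a, lottery_b):
    """First-order stochastic dominance between finite lotteries.

    Each lottery maps a choiceworthiness value to its probability.
    A dominates B iff P(A >= t) >= P(B >= t) for every threshold t,
    with strict inequality for at least one threshold.
    """
    thresholds = set(lottery_a) | set(lottery_b)

    def tail(lottery, t):
        return sum(p for value, p in lottery.items() if value >= t)

    weakly_better = all(tail(lottery_a, t) >= tail(lottery_b, t) for t in thresholds)
    strictly_better = any(tail(lottery_a, t) > tail(lottery_b, t) for t in thresholds)
    return weakly_better and strictly_better

# Hypothetical example: A shifts probability mass toward higher values than B,
# so A stochastically dominates B (but not vice versa).
option_a = {0: 0.2, 10: 0.5, 100: 0.3}
option_b = {0: 0.4, 10: 0.5, 100: 0.1}
print(stochastically_dominates(option_a, option_b))  # True
print(stochastically_dominates(option_b, option_a))  # False
```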
Our ordinary causal concept seems to fit poorly with how our best physics describes the world. We think of causation as a time-asymmetric dependence relation between relatively local events. Yet fundamental physics describes the world in terms of dynamical laws that are, possible small exceptions aside, time symmetric and that relate global time slices. My goal in this paper is to show why we are successful at using local, time-asymmetric models in causal explanations despite this apparent mismatch with fundamental physics. In particular, I will argue that there is an important connection between time asymmetry and locality, namely: understanding the locality of our causal models is the key to understanding why the physical time asymmetries in our universe give rise to time asymmetry in causal explanation. My theory thus provides a unified account of why causation is local and time asymmetric and thereby enables a reply to Russell’s famous attack on causation.
Many political theorists defend the view that egalitarian justice should extend from the domestic to the global arena. Despite its intuitive appeal, this ‘global egalitarianism’ has come under attack from different quarters. In this article, we focus on one particular set of challenges to this view: those advanced by domestic egalitarians. We consider seven types of challenges, each pointing to a specific disanalogy between domestic and global arenas which is said to justify the restriction of egalitarian justice to the former, and argue that none of them, whether taken individually or jointly, offers a conclusive refutation of global egalitarianism.
Can we design a perfect democratic decision procedure? Condorcet famously observed that majority rule, our paradigmatic democratic procedure, has some desirable properties, but sometimes produces inconsistent outcomes. Revisiting Condorcet’s insights in light of recent work on the aggregation of judgments, I show that there is a conflict between three initially plausible requirements of democracy: “robustness to pluralism”, “basic majoritarianism”, and “collective rationality”. For all but the simplest collective decision problems, no decision procedure meets these three requirements at once; at most two can be met together. This “democratic trilemma” raises the question of which requirement to give up. Since different answers correspond to different views about what matters most in a democracy, the trilemma suggests a map of the “logical space” in which different conceptions of democracy are located. It also sharpens our thinking about other impossibility problems of social choice and how to avoid them, by capturing a core structure many of these problems have in common. More broadly, it raises the idea of “cartography of logical space” in relation to contested political concepts.
Pettit (2006) argues that deferring to majority testimony is not generally rational: it may lead to inconsistent beliefs. He suggests that “another ... approach will do better”: deferring to supermajority testimony. But this approach may also lead to inconsistencies. In this paper, I describe conditions under which deference to supermajority testimony ensures consistency, and conditions under which it does not. I also introduce the concept of “consistency of degree k”, which is weaker than full consistency by ruling out only “blatant” inconsistencies in an agent’s beliefs while permitting less blatant ones, and show that, while supermajoritarian deference often fails to ensure full consistency, it is a route to consistency in this weaker sense.
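To illustrate how supermajority deference can still produce inconsistency (a constructed example of the kind of case the paper analyses, not taken from it): three individuals each hold consistent beliefs about p, "if p then q" and "not q", yet each of these propositions is endorsed by a two-thirds supermajority, and the three together are jointly inconsistent. Only a threshold strictly above two-thirds would block this particular inconsistency.

```python
# Three individuals judging p, (p -> q), and (not q). Each individual belief set
# is consistent, but every proposition clears a 2/3 supermajority threshold,
# and the collective set {p, p -> q, not q} is jointly inconsistent.
profile = [
    {"p": True,  "p_implies_q": True,  "not_q": False},  # accepts p, p -> q, q
    {"p": True,  "p_implies_q": False, "not_q": True},   # accepts p, not(p -> q), not q
    {"p": False, "p_implies_q": True,  "not_q": True},   # accepts not p, p -> q, not q
]

threshold = 2 / 3  # supermajority threshold for collective acceptance

def accepted(proposition):
    support = sum(individual[proposition] for individual in profile) / len(profile)
    return support >= threshold

collective = [prop for prop in ("p", "p_implies_q", "not_q") if accepted(prop)]
print(collective)  # ['p', 'p_implies_q', 'not_q'] -- a jointly inconsistent set
```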
While a large social-choice-theoretic literature discusses the aggregation of individual judgments into collective ones, there is much less formal work on the transformation of judgments in group communication. I develop a model of judgment transformation and prove a baseline impossibility theorem: Any judgment transformation function satisfying some initially plausible conditions is the identity function, under which no opinion change occurs. I identify escape routes from this impossibility and argue that the kind of group communication envisaged by deliberative democrats must be "holistic": It must focus on webs of connected propositions, not on one proposition at a time, which echoes the Duhem-Quine "holism thesis" on scientific theory testing. My approach provides a map of the logical space in which different possible group communication processes are located.
Bell inequalities are usually derived by assuming locality and realism, and therefore violations of the Bell-CHSH inequality are usually taken to imply violations of either locality or realism, or both. But, after reviewing an oversight by Bell, in the Corollary below we derive the Bell-CHSH inequality by assuming only that Bob can measure along vectors b and b' simultaneously while Alice measures along either a or a', and likewise Alice can measure along vectors a and a' simultaneously while Bob measures along either b or b', without assuming locality. The violations of the Bell-CHSH inequality therefore only mean impossibility of measuring along b and b' simultaneously.
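For reference, the inequality at issue, in its standard form for Alice's settings a, a' and Bob's settings b, b', is stated below (this is the textbook statement of the Bell-CHSH bound and of the quantum-mechanical maximum, given here for orientation; it is not the paper's Corollary or its derivation from simultaneous measurability).

```latex
% Standard CHSH combination of correlation functions (one common sign convention)
S \;=\; E(a,b) + E(a,b') + E(a',b) - E(a',b'),
\qquad |S| \le 2 \quad \text{(Bell-CHSH bound)},
\qquad |S| \le 2\sqrt{2} \quad \text{(quantum Tsirelson bound)}.
```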
I argue that Traditional Christian Theism is inconsistent with Truthmaker Maximalism, the thesis that all truths have truthmakers. Though this original formulation requires extensive revision, the gist of the argument is as follows. Suppose for reductio Traditional Christian Theism and the sort of Truthmaker Theory that embraces Truthmaker Maximalism are both true. By Traditional Christian Theism, there is a world in which God, and only God, exists. There are no animals in such a world. Thus, it is true in such a world that there are no zebras. That there are no zebras must have a truthmaker, given Truthmaker Maximalism. God is the only existing object in such a world, and so God must be the truthmaker for this truth, given that it has a truthmaker. But truthmakers necessitate the truths they make true. So, for any world, at any time at which God exists, God makes that there are no zebras true. According to Traditional Christian Theism, God exists in our world. In our world, then, it is true: there are no zebras. But there are zebras. Contradiction! Thus, the conjunction of Traditional Christian Theism with Truthmaker Necessitation and Truthmaker Maximalism is inconsistent.
On the orthodox view in economics, interpersonal comparisons of utility are not empirically meaningful, and "hence" impossible. To reassess this view, this paper draws on the parallels between the problem of interpersonal comparisons of utility and the problem of translation of linguistic meaning, as explored by Quine. I discuss several cases of what the empirical evidence for interpersonal comparisons of utility might be and show that, even on the strongest of these, interpersonal comparisons are empirically underdetermined and, if we also deny any appropriate truth of the matter, indeterminate. However, the underdetermination can be broken non-arbitrarily (though not purely empirically) if (i) we assign normative significance to certain states of affairs or (ii) we posit a fixed connection between certain empirically observable proxies and utility. I conclude that, even if interpersonal comparisons are not empirically meaningful, they are not in principle impossible.
This article examines the methodology of a core branch of contemporary political theory or philosophy: “analytic” political theory. After distinguishing political theory from related fields, such as political science, moral philosophy, and legal theory, the article discusses the analysis of political concepts. It then turns to the notions of principles and theories, as distinct from concepts, and reviews the methods of assessing such principles and theories, for the purpose of justifying or criticizing them. Finally, it looks at a recent debate on how abstract and idealized political theory should be, and assesses the significance of disagreement in political theory. The discussion is carried out from an angle inspired by the philosophy of science.
Climate change and other harmful large-scale processes challenge our understandings of individual responsibility. People throughout the world suffer harms—severe shortfalls in health, civic status, or standard of living relative to the vital needs of human beings—as a result of physical processes to which many people appear to contribute. Climate change, polluted air and water, and the erosion of grasslands, for example, occur because a great many people emit carbon and pollutants, build excessively, enable their flocks to overgraze, or otherwise stress the environment. If a much smaller number of people engaged in these types of conduct, the harms in question would not occur, or would be substantially lessened. However, the conduct of any particular person (and, in the case of climate change, of even quite large numbers of people) could make no apparent difference to their occurrence. My carbon emissions (and quite possibly the carbon emissions of much larger groups of people dispersed throughout the world) may not make a difference to what happens to anyone. When the conduct of some agent does not make any apparent difference to the occurrence of harm, but this conduct is of a type that brings about harm because many people engage in it, we can call this agent an overdeterminer of that harm, and their conduct overdetermining conduct. In this essay we explore the moral status of overdetermining harm.
Traditionally, moral philosophers have distinguished between doing and allowing harm, and have normally proceeded as if this bipartite distinction can exhaustively characterize all cases of human conduct involving harm. By contrast, cognitive scientists and psychologists studying causal judgment have investigated the concept ‘enable’ as distinct from the concept ‘cause’ and other causal terms. Empirical work on ‘enable’ and its employment has generally not focused on cases where human agents enable harm. In this paper, we present new empirical evidence to support the claim that some important cases in the moral philosophical literature are best viewed as instances of enabling harm rather than doing or allowing harm. We also present evidence that enabling harm is regarded as normatively distinct from doing and allowing harm when it comes to assigning compensatory responsibility. Moral philosophers should be exploring the tripartite distinction between doing harm, allowing harm, and enabling harm, rather than simply the traditional bipartite distinction. Cognitive scientists and psychologists studying moral judgment, who have so far largely adopted the bipartite distinction in this area of research, should likewise investigate the tripartite distinction.
In this essay we argue that an agent’s failure to assist someone in need at one time can change the cost she can be morally required to take on to assist that same person at a later time. In particular, we show that the cost the agent can subsequently be required to take on to help the person in need can increase quite significantly, and can be enforced through the proportionate use of force. We explore the implications of this argument for the duties of the affluent to address global poverty.
Many Christian theodicists believe that God's creating us with the capacity to love Him and each other justifies, in large part, God's permitting evil. For example, after reminding us that, according to Christian doctrine, the supreme good for human beings is to enter into a reciprocal love relationship with God, Vincent Brummer recently wrote: In creating human persons in order to love them, God necessarily assumes vulnerability in relation to them. In fact, in this relation, he becomes even more vulnerable than we do, since he cannot count on the steadfastness of our love the way we can count on his steadfastness.... If God did not grant us the ability to sin and cause affliction to him and to one another, we would not have the kind of free and autonomous existence necessary to enter into a relation of love with God and with one another.... Far from contradicting the value which the free will defence places upon the freedom and responsibility of human persons, the idea of a loving God necessarily entails it. In this way we can see that the free will defence is based on the love of God rather than on the supposed intrinsic value of human freedom and responsibility.1 And Peter van Inwagen recently put the same point this way.
It is natural to think of causes as difference-makers. What exact difference causes make, however, is an open question. In this paper, I argue that the right way of understanding difference-making is in terms of causal processes: causes make a difference to a causal process that leads to the effect. I will show that this way of understanding difference-making nicely captures the distinction between causing an outcome and helping determine how the outcome happens and, thus, explains why causation is not transitive. Moreover, the theory handles tricky cases that are problematic for competing accounts of difference-making.
This paper explores Thomas Aquinas’ and Richard Swinburne’s doctrines of simplicity in the context of their philosophical theologies. Both say that God is simple. However, Swinburne takes simplicity as a property of the theistic hypothesis, while for Aquinas simplicity is a property of God himself. For Swinburne, simpler theories are ceteris paribus more likely to be true; for Aquinas, simplicity and truth are properties of God which, in a certain way, coincide – because God is metaphysically simple. Notwithstanding their different approaches, some unreckoned parallels between their thoughts are brought to light.