In this article, we propose the Fair Priority Model for COVID-19 vaccine distribution and emphasize three fundamental values we believe should be considered when distributing a COVID-19 vaccine among countries: benefiting people and limiting harm, prioritizing the disadvantaged, and equal moral concern for all individuals. The Fair Priority Model addresses these values by focusing on mitigating three types of harms caused by COVID-19: death and permanent organ damage; indirect health consequences, such as health care system strain and stress; and economic destruction. It proposes proceeding in three phases: the first addresses premature death, the second long-term health issues and economic harms, and the third aims to contain viral transmission fully and restore pre-pandemic activity.

To those who may deem an ethical framework irrelevant because of the belief that many countries will pursue "vaccine nationalism," we argue such a framework still has broad relevance. Reasonable national partiality would permit countries to focus on vaccine distribution within their borders only until the rate of transmission falls below 1, at which point there would not be sufficient vaccine-preventable harm to justify retaining a vaccine. When a government reaches the limit of national partiality, it should release vaccines to other countries.

We also argue against two other recent proposals. Distributing a vaccine in proportion to a country's population mistakenly assumes that equality requires treating differently situated countries identically. Prioritizing countries according to the number of front-line health care workers, the proportion of the population over 65, and the number of people with comorbidities within each country may exacerbate disadvantage and end up giving the vaccine in large part to wealthy nations.
Contrary to the widely accepted consensus, Christopher Heath Wellman argues that there are no pre-institutional judicial procedural rights. Thus commonly affirmed rights, like the right to a fair trial, cannot be assumed in the literature on punishment and legal philosophy as they usually are. Wellman canvasses and rejects a variety of grounds proposed for such rights. I answer his skepticism by proposing two novel grounds for procedural rights. First, a general right against unreasonable risk of punishment grounds rights to an institutionalized system of punishment. Second, to complement and extend the first ground, I more controversially propose a right to provision of the robust good of security. People have a right to others' taking appropriate care to avoid wrongfully harming them: when I take an action that is reasonably expected to threaten the protected interests of others, I must take appropriate care to avoid setting back those interests. Inflicting punishment on someone--intentionally harming them in response to a violation--is prima facie wrongful, so I can only count as taking appropriate care in punishing when I follow familiar procedures that reliably and redundantly test whether they are liable to such punishment, i.e. whether they have forfeited their right against punishment through a culpable act.
A large portion of normative philosophical thought on immigration seeks to address the question “What policies for admitting and excluding foreigners may states justly adopt?” This question places normative philosophical discussions of immigration within the boundaries of political philosophy, whose concern is the moral assessment of social institutions. Several recent contributions to normative philosophical thought on immigration propose to answer this question, but adopt methods of reasoning about possible answers that might be taken to suggest that normative philosophical inquiry about immigration belongs to the field of ethics, whose concern is the moral assessment of individual action and character. This paper focuses particularly on recent work by Christopher Heath Wellman and Kieran Oberman, both of whom attempt to derive conclusions about the justice of aspects of states’ immigrant admissions policies from answers to the question “Is it morally permissible for person P to migrate internationally?” I argue in this paper that such individualist ethical approaches to normative philosophical reasoning about states’ immigration policies obscure factors whose consideration is indispensable for assessing the justice of those policies, producing misguided policy recommendations. These factors include the global structural causes of international migration, and the role wealthy receiving countries of the global North play in shaping these causes – factors that are better appreciated by political philosophy than by ethics, given the respective objects of concern of each.
Do famous athletes have special obligations to act virtuously? A number of philosophers have investigated this question by examining whether famous athletes are subject to special role model obligations (Wellman 2003; Feezel 2005; Spurgin 2012). In this paper we will take a different approach and give a positive response to this question by arguing for the position that sport and gaming celebrities are ‘ambassadors of the game’: moral agents whose vocations as rule-followers have unique implications for their non-lusory lives. According to this idea, the actions of a game’s players and other stakeholders—especially the actions of its stars—directly affect the value of the game itself, a fact which generates additional moral reasons to behave in a virtuous manner. We will begin by explaining the three main positions one may take with respect to the question: moral exceptionalism, moral generalism, and moral exemplarism. We will argue that no convincing case for moral exemplarism has thus far been made, which gives us reason to look for new ways to defend this position. We then provide our own ‘ambassadors of the game’ account and argue that it gives us good reason to think that sport and game celebrities are subject to special obligations to act virtuously.
This article asks whether states have a right to close their borders because of their right to self-determination, as proposed recently by Christopher Wellman, Michael Walzer, and others. It asks the fundamental question whether self-determination can, in even its most unrestricted form, support the exclusion of immigrants. I argue that the answer is no. To show this, I construct three different ways in which one might use the idea of self-determination to justify immigration restrictions and show that each of these arguments fails. My conclusion is that the nature and value of self-determination have to do with the conditions of genuine self-government, not membership of political society. Consequently, the demand for open borders is fully consistent with respect for self-determination.
Christopher Wellman is the strongest proponent of the natural-duty theory of political obligations and argues that his version of the theory can satisfy the key requirement of particularity; namely, justifying to members of a state the system of political obligations they share in. Critics argue that natural-duty theories like Wellman's actually require well-ordered states and/or their members to dedicate resources to providing the goods associated with political order to needy outsiders. The implication is that natural-duty approaches weaken the particularity requirement and cannot justify to citizens the systems of political obligation they share in. I argue that the critics’ diagnosis of natural-duty approaches is correct, whereas the proposed implication is false. I maintain that 1) only natural-duty approaches can justify political obligations, and that 2) weakening the particularity requirement contributes to the theory's ability to justify a range-limited system of political obligations among compatriots.
This volume brings together a range of influential essays by distinguished philosophers and political theorists on the issue of global justice. Global justice concerns the search for ethical norms that should govern interactions between people, states, corporations and other agents acting in the global arena, as well as the design of social institutions that link them together. The volume includes articles that engage with major theoretical questions such as the applicability of the ideals of social and economic equality to the global sphere, the degree of justified partiality to compatriots, and the nature and extent of the responsibilities of the affluent to address global poverty and other hardships abroad. It also features articles that bring the theoretical insights of global justice thinkers to bear on matters of practical concern to contemporary societies, such as policies associated with immigration, international trade, and climate change.

Contents: Introduction; Part I Standards of Global Justice: (i) Assistance-Based Responsibilities to the Global Poor: Famine, affluence and morality, Peter Singer; We don't owe them a thing! A tough-minded but soft-hearted view of aid to the faraway needy, Jan Narveson; Does distance matter morally to the duty to rescue? Frances Myrna Kamm. (ii) Contribution-Based Responsibilities to the Global Poor: 'Assisting' the global poor, Thomas Pogge; Should we stop thinking about poverty in terms of helping the poor?, Alan Patten; Poverty and the moral significance of contribution, Gerhard Øverland. (iii) Cosmopolitanism, Global Egalitarianism, and its Critics: The one and the many faces of cosmopolitanism, Catherine Lu; Cosmopolitan justice and equalizing opportunities, Simon Caney; The problem of global justice, Thomas Nagel; Against global egalitarianism, David Miller; Egalitarian challenges to global egalitarianism: a critique, Christian Barry and Laura Valentini.
Part II Pressing Global Socioeconomic Issues: (i) Governing the Flow of People: Immigration and freedom of association, Christopher Wellman; Democratic theory and border coercion: no right to unilaterally control your own borders, Arash Abizadeh; Justice in migration: a closed borders utopia?, Lea Ypi. (ii) Climate Change: Global environment and international inequality, Henry Shue; Valuing policies in response to climate change: some ethical issues, John Broome; Saved by disaster? Abrupt climate change, political inertia, and the possibility of an intergenerational arms race, Stephen M. Gardiner; Polycentric systems for coping with collective action and global environmental change, Elinor Ostrom. (iii) International Trade: Responsibility and global labor justice, Iris Marion Young; Property rights and the resource curse, Leif Wenar; Fairness in trade I: obligations arising from trading and the pauper-labor argument, Mathias Risse; Name index.

See: www.ashgate.com/default.aspx?page=637&calctitle=1&pageSubject=483&sort=pubdate&forthcoming=1&title_id=9958&edition_id=13385.
What experimental game theorists may have demonstrated is not that people are systematically irrational but that human rationality is heavily scaffolded. Remove the scaffolding, and we do not do very well. People are able to get on because they “offload” an enormous amount of practical reasoning onto their environment. As a result, when they are put in novel or unfamiliar environments, they perform very poorly, even on apparently simple tasks.

This observation is supported by recent empirically informed shifts in philosophy of mind toward a view of cognition as (to cite the current slogan) “embodied, embedded, enactive, extended.” Andy Clark and others have made a very plausible case for the idea that a proper assessment of human cognitive competence must include environmental components. To limit our attention to what lies within the skin-skull boundary is, in effect, to miss the big story on human rationality. Insofar as we are rational, it is often because of our ingenuity at developing “work-arounds” to the glitches in the fast-and-frugal heuristic problem-solving capabilities that natural selection has equipped us with. And these work-arounds often involve a detour through the environment (so-called offloading of cognitive burdens).

When it comes to practical rationality, things are no different. Yet in many discussions of “the will,” there is still a tendency to put too much emphasis on what goes on inside the agent’s head. Our objective in this chapter is to articulate this conception of “the extended will” more clearly, using the strategies that people employ to overcome procrastination for the central set of examples.
Procrastination, in our view, constitutes a particular type of self-control problem, one that is particularly amenable to philosophical reflection, not only because of the high volume of psychological research on the subject but also because of the large quantity of “self-help” literature in circulation, a literature that provides an invaluable perspective on the everyday strategies that people use in order to defeat (or, better yet, circumvent) this type of self-defeating behavior pattern. In general, what we find is that the internalist bias that permeates discussions of the will gives rise to a set of practical recommendations that overemphasize changing the way one thinks about a task, while ignoring the much richer set of strategies that are available in the realm of environmental scaffolding. In the concluding section, we highlight some of the policy implications of this, particularly regarding social trends involving the dismantling of support structures.
Society, based on contract and voluntary exchange, is evolving, but remains only partly developed. Goods and services that meet the needs of individuals, such as food, clothing, and shelter, are amply produced and distributed through the market process. However, those that meet common or community needs, while distributed through the market, are produced politically through taxation and violence. These goods attach not to individuals but to a place; to enjoy them, individuals must go to the place where they are. Land owners, all unknowingly, distribute such services contractually as they rent or sell sites. Rent or price is the market value of such services, net after disservices, as they affect each site. By distributing occupancies to those who can pay the highest price, land owners’ interests align with those of society. Without this, tenure would be precarious—by force or favor of politicians. The 18th-century separation of land from state, so little studied by historians, permitted the development of modern property in land. This change is perhaps “the greatest single step in the evolution of Society the world has ever seen.” When land owners realize that they market community services, they will organize to produce and administer them as well, and society will be made whole.
Much of the philosophical literature on causation has focused on the concept of actual causation, sometimes called token causation. In particular, it is this notion of actual causation that many philosophical theories of causation have attempted to capture. In this paper, we address the question: what purpose does this concept serve? As we shall see in the next section, one does not need this concept for purposes of prediction or rational deliberation. What then could the purpose be? We will argue that one can gain an important clue here by looking at the ways in which causal judgments are shaped by people's understanding of norms.
I argue that the best interpretation of the general theory of relativity requires a causal entity, and a causal structure that is not reducible to light cone structure. I suggest that this causal interpretation of GTR helps defeat a key premise in one of the most popular arguments for causal reductionism, viz., the argument from physics.
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s defense of Indifference Principles is unsuccessful. Second, it contends that White’s arguments against permissive views do not succeed.
Blaming (construed broadly to include both blaming-attitudes and blaming-actions) is a puzzling phenomenon. Even when we grant that someone is blameworthy, we can still sensibly wonder whether we ought to blame him. We sometimes choose to forgive and show mercy, even when it is not asked for. We are naturally led to wonder why we shouldn’t always do this. Wouldn’t it be better to wholly reject the punitive practices of blame, especially in light of their often undesirable effects, and embrace an ethic of unrelenting forgiveness and mercy? In this paper I seek to address these questions by offering an account of blame that provides a rationale for thinking that to wholly forswear blaming blameworthy agents would be deeply mistaken. This is because, as I will argue, blaming is a way of valuing: it is “a mode of valuation.” I will argue that among the minimal standards of respect generated by valuable objects, notably persons, is the requirement to redress disvalue with blame. It is not just that blame is something additional we are required to do in properly valuing, but rather blame is part of what it is to properly value. Blaming, given the existence of blameworthy agents, is a mode of valuation required by the standards of minimal respect. To forswear blame would be to fail to value what we ought to value.
I discuss what Aquinas’ doctrine of divine simplicity is, and what he takes to be its implications. I also discuss the extent to which Aquinas succeeds in motivating and defending those implications.
Although they are continually compositionally reconstituted and reconfigured, organisms nonetheless persist as ontologically unified beings over time – but in virtue of what? A common answer is: in virtue of their continued possession of the capacity for morphological invariance which persists through, and in spite of, their mereological alteration. While we acknowledge that organisms’ capacity for the “stability of form” – homeostasis – is an important aspect of their diachronic unity, we argue that this capacity is derived from, and grounded in, a more primitive one – namely, the homeodynamic capacity for the “specified variation of form”. In introducing a novel type of causal power – a ‘structural power’ – we claim that it is the persistence of their dynamic potential to produce a specified series of structurally adaptive morphologies which grounds organisms’ privileged status as metaphysically “one over many” over time.
This chapter surveys hybrid theories of well-being. It also discusses some criticisms, and suggests some new directions that philosophical discussion of hybrid theories might take.
This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’ treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they’re not as bad as those of Elga’s account, and no worse than those of Lewis’ account.
When people want to identify the causes of an event, assign credit or blame, or learn from their mistakes, they often reflect on how things could have gone differently. In this kind of reasoning, one considers a counterfactual world in which some events are different from their real-world counterparts and considers what else would have changed. Researchers have recently proposed several probabilistic models that aim to capture how people do (or should) reason about counterfactuals. We present a new model and show that it accounts better for human inferences than several alternative models. Our model builds on the work of Pearl (2000), and extends his approach in a way that accommodates backtracking inferences and that acknowledges the difference between counterfactual interventions and counterfactual observations. We present six new experiments and analyze data from four experiments carried out by Rips (2010), and the results suggest that the new model provides an accurate account of both mean human judgments and the judgments of individuals.
Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such “all-accuracy” or “purely alethic” approaches can accommodate and justify evidential Bayesian norms. In response, proponents of purely alethic approaches, such as Pettigrew (2013b) and Joyce (2016), have argued that scoring rule arguments provide us with compatible and purely alethic justifications for the traditional Bayesian norms, including evidential norms. In this paper I raise several challenges to this claim. First, I argue that many of the justifications these scoring rule arguments provide are not compatible. Second, I raise worries for the claim that these scoring rule arguments provide purely alethic justifications. Third, I turn to assess the more general question of whether purely alethic justifications for evidential norms are even possible, and argue that, without making some contentious assumptions, they are not. Fourth, I raise some further worries for the possibility of providing purely alethic justifications for content-sensitive evidential norms, like the Principal Principle.
Deference principles are principles that describe when, and to what extent, it’s rational to defer to others. Recently, some authors have used such principles to argue for Evidential Uniqueness, the claim that for every batch of evidence, there’s a unique doxastic state that it’s permissible for subjects with that total evidence to have. This paper has two aims. The first aim is to assess these deference-based arguments for Evidential Uniqueness. I’ll show that these arguments only work given a particular kind of deference principle, and I’ll argue that there are reasons to reject these kinds of principles. The second aim of this paper is to spell out what a plausible generalized deference principle looks like. I’ll start by offering a principled rationale for taking deference to constrain rational belief. Then I’ll flesh out the kind of deference principle suggested by this rationale. Finally, I’ll show that this principle is both more plausible and more general than the principles used in the deference-based arguments for Evidential Uniqueness.
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
Interactions between an intelligent software agent and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction, as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
Jeff McMahan has long shown himself to be a vigorous and incisive critic of speciesism, and in his essay “Our Fellow Creatures” he has been particularly critical of speciesist arguments that draw inspiration from Wittgenstein. In this essay I consider his arguments against speciesism generally and the species-norm account of deprivation in particular. I argue that McMahan's ethical framework is more nuanced and more open to the incorporation of speciesist intuitions regarding deprivation than he himself suggests. Specifically, I argue that, given his willingness to include a comparative dimension in his “Intrinsic Potential Account”, he ought to recognize species as a legitimate comparison class. I also argue that a sensible speciesism can be pluralist and flexible enough to accommodate many of McMahan's arguments in defense of “moral individualist” intuitions. In this way, I hope to make the case for at least a partial reconciliation between McMahan and the “Wittgensteinian speciesists”, e.g. Cora Diamond, Stephen Mulhall, and Raimond Gaita.
This paper examines two mistakes regarding David Lewis’ Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
In Modal Logic as Metaphysics, Timothy Williamson claims that the possibilism-actualism (P-A) distinction is badly muddled. In its place, he introduces a necessitism-contingentism (N-C) distinction that he claims is free of the confusions that purportedly plague the P-A distinction. In this paper I argue first that the P-A distinction, properly understood, is historically well-grounded and entirely coherent. I then look at the two arguments Williamson levels at the P-A distinction and find them wanting and show, moreover, that, when the N-C distinction is broadened (as per Williamson himself) so as to enable necessitists to fend off contingentist objections, the P-A distinction can be faithfully reconstructed in terms of the N-C distinction. However, Williamson’s critique does point to a genuine shortcoming in the common formulation of the P-A distinction. I propose a new definition of the distinction in terms of essential properties that avoids this shortcoming.
In any field, we might expect different features relevant to its understanding and development to receive attention at different times, depending on the stage of that field’s growth and the interests that occupy theorists and even the history of the theorists themselves. In the relatively young life of argumentation theory, at least as it has formed a body of issues with identified research questions, attention has almost naturally been focused on the central concern of the field—arguments. Focus is also given to the nature of arguers and the position of the evaluator, who is often seen as possessing a “God’s-eye view” (Hamblin 1970). Less attention, however, has been paid in the philosophical literature to the ...
According to commonsense psychology, one is conscious of everything that one pays attention to, but one does not pay attention to all the things that one is conscious of. Recent lines of research purport to show that commonsense is mistaken on both of these points: Mack and Rock (1998) tell us that attention is necessary for consciousness, while Kentridge and Heywood (2001) claim that consciousness is not necessary for attention. If these lines of research were successful they would have important implications regarding the prospects of using attention research to inform us about consciousness. The present essay shows that these lines of research are not successful, and that the commonsense picture of the relationship between attention and consciousness can be.
In recent work, Callender and Cohen (2009) and Hoefer (2007) have proposed variants of the account of chance proposed by Lewis (1994). One of the ways in which these accounts diverge from Lewis’s is that they allow special sciences and the macroscopic realm to have chances that are autonomous from those of physics and the microscopic realm. A worry for these proposals is that autonomous chances may place incompatible constraints on rational belief. I examine this worry, and attempt to determine (i) what kinds of conflicts would be problematic, and (ii) whether these proposals lead to problematic conflicts. After working through a pair of cases, I conclude that these proposals do give rise to problematic conflicts.
Our reception of Hegel’s theory of action faces a fundamental difficulty: on the one hand, that theory is quite clearly embedded in a social theory of modern life, but on the other hand most of the features of the society that gave that embedding its specific content have become almost inscrutably strange to us (e.g., the estates and the monarchy). Thus we find ourselves in the awkward position of stressing the theory’s sociality even as we scramble backwards to distance ourselves from the particular social institutions that gave conceptualized form to such sociality in Hegel’s own opinion. My attempt in this article is to make our position less awkward by giving us at least one social-ontological leg to stand on. Specifically, I want to defend a principled and conceptual pluralism as forming the heart of Hegel’s theory of action. If this view can be made out, then we will have a social-ontological structure that might be filled out in different ways in Hegel’s time and our own while simultaneously giving real teeth to the notion that Hegel’s theory of action is essentially social.
Should economics study the psychological basis of agents’ choice behaviour? I show how this question is multifaceted and profoundly ambiguous. There is no sharp distinction between ‘mentalist’ answ...
There is a long‐standing project in psychology the goal of which is to explain our ability to perceive speech. The project is motivated by evidence that seems to indicate that the cognitive processing to which speech sounds are subjected is somehow different from the normal processing employed in hearing. The Motor Theory of speech perception was proposed in the 1960s as an attempt to explain this specialness. The first part of this essay is concerned with the Motor Theory's explanandum. It shows that it is rather hard to give a precise account of what the Motor Theory is a theory of. The second part of the essay identifies problems with the theory's explanans: There are difficulties in finding a plausible account of what the content of the Motor Theory is supposed to be.
The Realist who investigates questions of ontology by appeal to the quantificational structure of language assumes that the semantics for the privileged language of ontology is externalist. I argue that such a language cannot be (some variant of) a natural language, as some Realists propose. The flexibility exhibited by natural language expressions noted by Chomsky and others cannot obviously be characterized by the rigid models available to the externalist. If natural languages are hostile to externalist treatments, then the meanings of natural language expressions serve as poor guides for ontological investigation, insofar as their meanings will fail to determine the referents of their constituents. This undermines the Realist’s use of natural languages to settle disputes in metaphysics.
Rather than approaching the question of the constructive or therapeutic character of Hegel’s Logic through a global consideration of its argument and its relation to the rest of Hegel’s system, I want to come at the question by considering a specific thread that runs through the argument of the Logic, namely the question of the proper understanding of power or control. What I want to try to show is that there is a close connection between therapeutic and constructive elements in Hegel’s treatment of power. To do so I will make use of two deep criticisms of Hegel’s treatment from Michael Theunissen. First comes Theunissen’s claim that in Hegel’s logical scheme, reality is necessarily dominated by the concept rather than truly reciprocally related to it. Then I will consider Theunissen’s structurally analogous claim that for Hegel, the power of the concept is the management of the suppression of the other. Both of these claims are essentially claims about the way in which elements of the logic of reflection are modified and yet continue to play a role in the logic of the concept.
The ‘traditional’ interpretation of the Receptacle in Plato’s Timaeus maintains that its parts act as substrata to ordinary particulars such as dogs and tables: particulars are form-matter compounds to which Forms supply properties and the Receptacle supplies a substratum, as well as a space in which these compounds come to be. I argue, against this view, that parts of the Receptacle cannot act as substrata for those particulars. I also argue, making use of contemporary discussions of supersubstantivalism, against a substratum interpretation that separates substratum and space in the Timaeus.
The advent of contemporary evolutionary theory ushered in the eventual decline of Aristotelian Essentialism (Æ) – for it is widely assumed that essence does not, and cannot, have any proper place in the age of evolution. This paper argues that this assumption is a mistake: if Æ can be suitably evolved, it need not face extinction. I claim that if that theory’s fundamental ontology consists of dispositional properties, and if its characteristic metaphysical machinery is interpreted within the framework of contemporary evolutionary developmental biology, an evolved essentialism is available. The reformulated theory of Æ offered in this paper not only fails to fall prey to the typical collection of criticisms, but is also independently both theoretically and empirically plausible. The paper contends that, properly understood, essence belongs in the age of evolution.
Though the realm of biology has long been under the philosophical rule of the mechanistic magisterium, recent years have seen a surprisingly steady rise in the usurping prowess of process ontology. According to its proponents, theoretical advances in the contemporary science of evo-devo have afforded that ontology a particularly powerful claim to the throne: in that increasingly empirically confirmed discipline, emergently autonomous, higher-order entities are the reigning explanantia. If we are to accept the election of evo-devo as our best conceptualisation of the biological realm with metaphysical rigour, must we depose our mechanistic ontology for failing to properly “carve at the joints” of organisms? In this paper, I challenge the legitimacy of that claim: not only can the theoretical benefits offered by a process ontology be had without it, they cannot be sufficiently grounded without the metaphysical underpinning of the very mechanisms which processes purport to replace. The biological realm, I argue, remains one best understood as under the governance of mechanistic principles.
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
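For readers unfamiliar with the rule, the standard textbook statement of Conditionalization (and of Jeffrey Conditionalization) runs as follows; these are the generic schematic forms, not necessarily the precise formulations the paper adjudicates between:

```latex
% Strict Conditionalization: upon learning evidence E (with P_old(E) > 0),
% the new credence in any proposition A is the old conditional credence:
P_{\mathrm{new}}(A) \;=\; P_{\mathrm{old}}(A \mid E)
  \;=\; \frac{P_{\mathrm{old}}(A \wedge E)}{P_{\mathrm{old}}(E)}.

% Jeffrey Conditionalization generalizes this to uncertain evidence
% distributed over a partition {E_i}:
P_{\mathrm{new}}(A) \;=\; \sum_i P_{\mathrm{old}}(A \mid E_i)\, P_{\mathrm{new}}(E_i).
```

When the new credences over the partition are concentrated on a single cell, the Jeffrey rule reduces to strict Conditionalization, which is the usual sense in which the former is said to generalize the latter; whether that claim survives a fully specified formulation is one of the questions the paper takes up.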
Selection against embryos that are predisposed to develop disabilities is one of the less controversial uses of embryo selection technologies (ESTs). Many bio-conservatives argue that while the use of ESTs to select for non-disease-related traits, such as height and eye-colour, should be banned, their use to avoid disease and disability should be permitted. Nevertheless, there remains significant opposition, particularly from the disability rights movement, to the use of ESTs to select against disability. In this article we examine whether and why the state could be justified in restricting the use of ESTs to select against disability. We first outline the challenge posed by proponents of ‘liberal eugenics’. Liberal eugenicists challenge those who defend restrictions on the use of ESTs to show why the use of these technologies would create a harm of the type and magnitude required to justify coercive measures. We argue that this challenge could be met by adverting to the risk of harms to future persons that would result from a loss of certain forms of cognitive diversity. We suggest that this risk establishes a pro tanto case for restricting selection against some disabilities, including dyslexia and Asperger's syndrome.
Giubilini and Minerva argue that the permissibility of abortion entails the permissibility of infanticide. Proponents of what we refer to as the Birth Strategy claim that there is a morally significant difference brought about at birth that accounts for our strong intuition that killing newborns is morally impermissible. We argue that this strategy does not account for the moral intuition that late-term, non-therapeutic abortions are morally impermissible. Advocates of the Birth Strategy must either judge non-therapeutic abortions as impermissible in the later stages of pregnancy or conclude that they are permissible on the basis of premises that are far less intuitively plausible than the opposite conclusion and its supporting premises.
This paper explores the level of obligation called for by Milton Friedman’s classic essay “The Social Responsibility of Business is to Increase Profits.” Several scholars have argued that Friedman asserts that businesses have no or minimal social duties beyond compliance with the law. This paper argues that this reading of Friedman does not give adequate weight to some claims that he makes and to their logical extensions. Throughout his article, Friedman emphasizes the values of freedom, respect for law, and duty. The principle that a business professional should not infringe upon the liberty of other members of society can be used by business ethicists to ground a vigorous line of ethical analysis. Any practice that has a negative externality requiring another party to take a significant loss without consent or compensation can be seen as unethical. With Friedman’s framework, we can see how ethics can be seen as arising from the nature of business practice itself. Business involves an ethics in which we consider, work with, and respect strangers who are outside of traditional in-groups.
In this essay, I argue that a proper understanding of the historicity of love requires an appreciation of the irreplaceability of the beloved. I do this through a consideration of ideas that were first put forward by Robert Kraut in “Love De Re” (1986). I also evaluate Amelie Rorty's criticisms of Kraut's thesis in “The Historicity of Psychological Attitudes: Love is Not Love Which Alters Not When It Alteration Finds” (1986). I argue that Rorty fundamentally misunderstands Kraut's Kripkean analogy, and I go on to criticize her claim that concern over the proper object of love is best understood as a concern over constancy. This leads me to an elaboration of the distinct senses in which love can be seen as historical. I end with a further defense of the irreplaceability of the beloved and a discussion of the relevance of recent debates over the importance of personal identity for an adequate account of the historical dimension of love.
Although contemporary metaphysics has recently undergone a neo-Aristotelian revival wherein dispositions, or capacities, are now commonplace in empirically grounded ontologies, being routinely utilised in theories of causality and modality, a central Aristotelian concept has yet to be given serious attention – the doctrine of hylomorphism. The reason for this is clear: while the Aristotelian ontological distinction between actuality and potentiality has proven to be a fruitful conceptual framework with which to model the operation of the natural world, the distinction between form and matter has yet to similarly earn its keep. In this chapter, I offer a first step toward showing that the hylomorphic framework is up to that task. To do so, I return to the birthplace of that doctrine - the biological realm. Utilising recent advances in developmental biology, I argue that the hylomorphic framework is an empirically adequate and conceptually rich explanatory schema with which to model the nature of organisms.
A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
I argue that the theory of chance proposed by David Lewis has three problems: (i) it is time asymmetric in a manner incompatible with some of the chance theories of physics, (ii) it is incompatible with statistical mechanical chances, and (iii) the content of Lewis's Principal Principle depends on how admissibility is cashed out, but there is no agreement as to what admissible evidence should be. I propose two modifications of Lewis's theory which resolve these difficulties. I conclude by tentatively proposing a third modification of Lewis's theory, one which explains many of the common features shared by the chance theories of physics.
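For reference, Lewis's Principal Principle in its usual schematic form — problem (iii) above concerns the admissibility clause in this schema:

```latex
% Principal Principle: C is a rational initial credence function,
% <ch_t(A) = x> is the proposition that the chance of A at time t is x,
% and E is any evidence admissible at t. Then:
C\big(A \mid \langle ch_t(A) = x \rangle \wedge E\big) \;=\; x.
```

The schema is only as contentful as the notion of admissibility it invokes: with admissibility unconstrained the principle is trivially violable, and with it too tightly constrained the principle says little, which is why the lack of agreement on admissible evidence matters.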
The iterative conception of set is typically considered to provide the intuitive underpinnings for ZFCU (ZFC + Urelements). It is an easy theorem of ZFCU that all sets have a definite cardinality. But the iterative conception seems to be entirely consistent with the existence of “wide” sets, sets (of, in particular, urelements) that are larger than any cardinal. This paper diagnoses the source of the apparent disconnect here and proposes modifications of the Replacement and Powerset axioms so as to allow for the existence of wide sets. Drawing upon Cantor’s notion of the absolute infinite, the paper argues that the modifications are warranted and preserve a robust iterative conception of set. The resulting theory is proved consistent relative to ZFC + “there exists an inaccessible cardinal”.
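As background, the standard (unmodified) ZFC forms of the two axioms the paper proposes to modify are, schematically — the paper's own modified versions are not reproduced here:

```latex
% Replacement schema: if the formula phi defines a class function,
% the image of any set under it is a set:
\forall a\, \exists! b\, \varphi(a,b)
  \;\rightarrow\;
  \forall x\, \exists y\, \forall b\,\big(b \in y \leftrightarrow \exists a \in x\; \varphi(a,b)\big).

% Powerset: every set has a set of all its subsets:
\forall x\, \exists y\, \forall z\, (z \subseteq x \rightarrow z \in y).
```

It is these two axioms that jointly rule out "wide" sets in ZFCU: Replacement forces any set to be no larger than some cardinal, so accommodating sets of urelements larger than any cardinal requires weakening them in some principled way.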
We present a theory of truth in fiction that improves on Lewis's [1978] ‘Analysis 2’ in two ways. First, we expand Lewis's possible worlds apparatus by adding non-normal or impossible worlds. Second, we model truth in fiction as belief revision via ideas from dynamic epistemic logic. We explain the major objections raised against Lewis's original view and show that our theory overcomes them.