Two controversies exist regarding the appropriate characterization of hierarchical and adaptive evolution in natural populations. In biology, there is the Wright-Fisher controversy, begun by Sewall Wright and Ronald Aylmer Fisher, over the relative roles of random genetic drift, natural selection, population structure, and interdemic selection in adaptive evolution. There is also the Units of Selection debate, spanning both the biological and the philosophical literature and including the impassioned group-selection debate. Why do these two discourses exist separately, and interact relatively little? We postulate that the reason for this schism can be found in the differing focus of each controversy, a deep difference itself determined by the distinct general styles of scientific research guiding each discourse. That is, the Wright-Fisher debate focuses on adaptive process and tends to be instructed by the mathematical modeling style, while the Units of Selection controversy focuses on adaptive product and is typically guided by the function style. The differences between the two discourses can be usefully tracked by examining their interpretations of two contested strategies for theorizing hierarchical selection: horizontal and vertical averaging.
The thesis of this paper is that utopianism is a theoretical necessity—we couldn’t, for example, engage in normative political philosophy without it—and, further, that in consciously embracing utopianism we will consequently experience an enrichment of our political lives. Thus, the title of my paper has a double meaning: first, it highlights the fact that utopianism always plays a normative role in political philosophy, as its concern is inevitably the promotion of a certain vision of the good life; second, it suggests that there normatively ‘ought to be’ a recognized and respectable role for utopianism within political philosophy. The first meaning, I believe, is self-explanatory. Regarding the second, it expresses my hope to—in short—take what is old and, through a modest process of rehabilitation, make it new again.
It is my goal in this paper to offer a strategy for translating universal statements about utopia into particular statements. This is accomplished by drawing out their implicit, temporally embedded, points of reference. Universal statements of the kind I find troublesome are those of the form ‘Utopia is x’, where ‘x’ can be anything from ‘the receding horizon’ to ‘the nation of the virtuous’. To such statements, I want to put the questions: ‘Which utopias?’; ‘In what sense?’; and ‘When was that, is that, or will that be, the case for utopias?’ Through an exploration of these lines of questioning, I arrive at three archetypes of utopian theorizing which serve to provide the answers: namely, utopian historicism, utopian presentism, and utopian futurism. The employment of these archetypes temporally grounds statements about utopia in the past, present, or future, and thus forces discussion of discrete particulars instead of abstract universals with no meaningful referents.
Do famous athletes have special obligations to act virtuously? A number of philosophers have investigated this question by examining whether famous athletes are subject to special role model obligations (Wellman 2003; Feezel 2005; Spurgin 2012). In this paper we will take a different approach and give a positive response to this question by arguing for the position that sport and gaming celebrities are ‘ambassadors of the game’: moral agents whose vocations as rule-followers have unique implications for their non-lusory lives. According to this idea, the actions of a game’s players and other stakeholders—especially the actions of its stars—directly affect the value of the game itself, a fact which generates additional moral reasons to behave in a virtuous manner. We will begin by explaining the three main positions one may take with respect to the question: moral exceptionalism, moral generalism, and moral exemplarism. We will argue that no convincing case for moral exemplarism has thus far been made, which gives us reason to look for new ways to defend this position. We then provide our own ‘ambassadors of the game’ account and argue that it gives us good reason to think that sport and game celebrities are subject to special obligations to act virtuously.
The central argument of this article is that the standard conception of character given in virtue theory, as exemplified in the work of Rosalind Hursthouse, is seriously flawed. This is partly because looking behind a moral action for a ‘character’ is suspiciously akin to looking behind an object for an ‘essence’, and is susceptible to the same interpretive errors as an epistemic strategy. Further, a character—once inducted and projected upon a moral agent—is supposed to be a more or less permanent property of that individual: a schema which leaves little room for the real possibility of personal transformation. I argue here that what is often referred to in the virtue literature as ‘character’ can be productively re-described as the aggregate of all moral actions performed by any one moral agent: nothing more, and nothing less. My hope is that this interpretive strategy will result in broader and more coherent readings of moral actions, and thus also clarify the moral confusion resulting from the current lack of the same.
Mysticism and the sciences have traditionally been theoretical enemies, and the closer that philosophy allies itself with the sciences, the greater the philosophical tendency has been to attack mysticism as a possible avenue towards the acquisition of knowledge and/or understanding. Science and modern philosophy generally aim for epistemic disclosure of their contents, and, conversely, mysticism either aims at the restriction of esoteric knowledge, or claims such knowledge to be non-transferable. Thus the mystic is typically seen by analytic philosophers as a variety of 'private language' speaker, although the plausibility of such a position is seemingly foreclosed by Wittgenstein's work in the Philosophical Investigations. Yorke re-examines Wittgenstein's conclusion on the matter of private language, and argues that so-called 'ineffable' mystical experiences, far from being a 'beetle in a box', can play a viable role in our public language-games, via renewed efforts at articulation.
My aim in this paper is to demonstrate that actual egalitarian social practices are unsustainable in most circumstances, thus defusing Cohen’s conundrum by providing an ‘out’ for our rich egalitarian. I will also try to provide a balm for the troubles produced by continuing inequality, by showing how embracing a common conception of utopia can assist a society in its efforts towards establishing egalitarian practices. Doing so will first require an explanation of how giving, like any social practice, can be thought of in terms of being externally suggested, internally willed, or some combination of the two.
One of the most historically recent and damaging blows to the reputation of utopianism came from its association with the totalitarian regimes of Hitler’s Third Reich and Mussolini’s Fascist party in World War II and the prewar era. Being an apologist for utopianism, it seemed to some, was tantamount to being an apologist for Nazism and all of its concomitant horrors. The fantasy principle of utopia was viewed as irretrievably bound up with the irrationalism of modern dictatorship. While these conclusions are somewhat understandable given the broad strokes that definitions of utopia are typically painted with, I will show in this paper that the link between the mythos of fascism and the constructs of utopianism results from an unfortunate conflation at the theoretical level. The irrationalism of any mass ethos and the rationalism of the thoughtful utopian planner are, indeed, completely at odds with each other. I arrive at this conclusion via an analysis of the concepts of myth and narrative, and the relationships these have with the concept of utopia.
This paper will compare the concept of nature as it appears in the philosophies of the American pragmatist John Dewey and the Chinese text known as the Zhuangzi, with an aim towards mapping out a heuristic program which might be used to correct various interpretive difficulties in reading each figure. I shall argue that Dewey and Zhuangzi both held more complex and comprehensive philosophies of nature than either is typically credited with. Such a view of nature turns on the notion of continuity, particularly that between an experiencing organism [Dewey’s “live creature”] and the conditioning environment [Zhuangzi’s “crooked tree”]. Where Dewey’s and Zhuangzi’s ideas about nature converge, one finds similarities in prescriptions made for human action, and in the few places where they differ, one finds mutually complementary insights.
Background: Screen time among adults represents a continuing and growing problem in relation to health behaviors and health outcomes. However, no instrument currently exists in the literature that quantifies the use of modern screen-based devices. The primary purpose of this study was to develop and assess the reliability of a new screen time questionnaire, an instrument designed to quantify use of multiple popular screen-based devices among the US population.

Methods: An 18-item screen-time questionnaire was created to quantify use of commonly used screen devices (e.g. television, smartphone, tablet) across different time points during the week (e.g. weekday, weeknight, weekend). Test-retest reliability was assessed through intra-class correlation coefficients (ICCs) and the standard error of measurement (SEM). The questionnaire was delivered online using Qualtrics and administered through Amazon Mechanical Turk (MTurk).

Results: Eighty MTurk workers completed full study participation and were included in the final analyses. All items in the screen time questionnaire showed fair to excellent relative reliability (ICCs = 0.50–0.90; all p < 0.001), except for the item inquiring about use of a smartphone during an average weekend day (ICC = 0.16, p = 0.069). The SEM values were large for all screen types across the different periods under study.

Conclusions: Results from this study suggest this self-administered questionnaire may be used to successfully classify individuals into different categories of screen time use (e.g. high vs. low); however, objective measures are likely needed to increase the precision of screen time assessment.
In Modal Logic as Metaphysics, Timothy Williamson claims that the possibilism-actualism (P-A) distinction is badly muddled. In its place, he introduces a necessitism-contingentism (N-C) distinction that he claims is free of the confusions that purportedly plague the P-A distinction. In this paper I argue first that the P-A distinction, properly understood, is historically well-grounded and entirely coherent. I then look at the two arguments Williamson levels at the P-A distinction and find them wanting. I show, moreover, that when the N-C distinction is broadened (as per Williamson himself) so as to enable necessitists to fend off contingentist objections, the P-A distinction can be faithfully reconstructed in terms of the N-C distinction. However, Williamson’s critique does point to a genuine shortcoming in the common formulation of the P-A distinction. I propose a new definition of the distinction in terms of essential properties that avoids this shortcoming.
Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic counterfactual definition of AI as an umbrella term for a range of techniques that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. Automation of this nature could offer great opportunities for the improvement of healthcare services and ultimately patients’ health by significantly improving human clinical capabilities in diagnosis, drug discovery, epidemiology, personalised medicine, and operational efficiency. However, if these AI solutions are to be embedded in clinical practice, then at least three issues need to be considered: the technical possibilities and limitations; the ethical, regulatory and legal framework; and the governance framework. In this article, we report on the results of a systematic analysis designed to provide a clear overview of the second of these elements: the ethical, regulatory and legal framework. We find that ethical issues arise at six levels of abstraction (individual, interpersonal, group, institutional, sectoral, and societal) and can be categorised as epistemic, normative, or overarching. We conclude by stressing how important it is that the ethical challenges raised by implementing AI in healthcare settings are tackled proactively rather than reactively, and map the key considerations for policymakers to each of the ethical concerns highlighted.
Background: The purpose of this study was to examine whether extended use of a variety of screen-based devices, in addition to television, was associated with poor dietary habits and other health-related characteristics and behaviors among US adults. The recent phenomenon of binge-watching was also explored.

Methods: A survey to assess screen time across multiple devices, dietary habits, sleep duration and quality, perceived stress, self-rated health, physical activity, and body mass index was administered to a sample of US adults using the Qualtrics platform and distributed via Amazon Mechanical Turk (MTurk). Participants were adults 18 years of age and older, English speakers, current US residents, and owners of a television and at least one other device with a screen. Three screen time categories (heavy, moderate, and light) were created for total screen time, and separately for screen time by type of screen, based on distribution tertiles. Kruskal-Wallis tests were conducted to examine differences in dietary habits and health-related characteristics between screen time categories.

Results: Aggregate screen time across all devices totaled 17.5 h per day for heavy users. Heavy users reported the least healthful dietary patterns and the poorest health-related characteristics – including self-rated health – compared to moderate and light users. Moreover, unique dietary habits emerged when dietary patterns were examined by type of screen separately, such that heavy users of TV and smartphone displayed the least healthful dietary patterns compared to heavy users of TV-connected devices, laptop, and tablet. Binge-watching was also significantly associated with less healthy dietary patterns, including frequency of fast-food consumption and eating family meals in front of a television, as well as with perceived stress.

Conclusions: The present study found that poorer dietary choices, as well as other negative health-related impacts, occurred more often as viewing time across a variety of different screen-based devices increased in a sample of US adults. Future research is needed to better understand which factors among different screen-based devices might affect health behaviors and, in turn, health-related outcomes. Research is also required to better understand how binge-watching behavior impacts health-related behaviors and characteristics.
A collection of papers presented at the First International Summer Institute in Cognitive Science, University at Buffalo, July 1994, including the following papers: ** Topological Foundations of Cognitive Science, Barry Smith ** The Bounds of Axiomatisation, Graham White ** Rethinking Boundaries, Wojciech Zelaniec ** Sheaf Mereology and Space Cognition, Jean Petitot ** A Mereotopological Definition of 'Point', Carola Eschenbach ** Discreteness, Finiteness, and the Structure of Topological Spaces, Christopher Habel ** Mass Reference and the Geometry of Solids, Almerindo E. Ojeda ** Defining a 'Doughnut' Made Difficult, N.M. Gotts ** A Theory of Spatial Regions with Indeterminate Boundaries, A.G. Cohn and N.M. Gotts ** Mereotopological Construction of Time from Events, Fabio Pianesi and Achille C. Varzi ** Computational Mereology: A Study of Part-of Relations for Multi-media Indexing, Wlodek Zadrozny and Michelle Kim.
I defend Christopher Peacocke's and Robert Hopkins's experienced resemblance accounts of depiction against criticisms put forward by Gavin McIntosh in a recent article in this journal. I argue that, while there may be reasons for rejecting Peacocke's and Hopkins's accounts, McIntosh fails to provide any.
In The Realm of Reason (2004), Christopher Peacocke develops a “generalized rationalism” concerning, among other things, what it is for someone to be “entitled”, or justified, in forming a given belief. In the course of his discussion, Peacocke offers two inference-to-the-best-explanation arguments that aim, respectively, to undermine scepticism and to establish a justification for our belief in the reliability of sense perception. If sound, these ambitious arguments would answer some of the oldest and most vexing epistemological problems. In this paper I evaluate these arguments, concluding that they are inconclusive at best. Despite offering some interestingly original arguments, Peacocke gives us no reason to think that scepticism is false, or that perception is generally reliable.
Review of *New Essays on the A Priori*, an excellent collection edited by Paul Boghossian and Christopher Peacocke. Contributors include: Tyler Burge; Quassim Cassam; Philip Kitcher; Penelope Maddy; Hartry Field; Paul Horwich; Peter Railton; Stephen Yablo; Bob Hale; Crispin Wright; Frank Jackson; Stewart Shapiro; Michael Friedman; Martin Davies; Bill Brewer; and Thomas Nagel.
Creativity pervades human life. It is the mark of individuality, the vehicle of self-expression, and the engine of progress in every human endeavor. It also raises a wealth of neglected and yet evocative philosophical questions: What is the role of consciousness in the creative process? How does the audience for a work of art influence its creation? How can creativity emerge through childhood pretending? Do great works of literature give us insight into human nature? Can a computer program really be creative? How do we define creativity in the first place? Is it a virtue? What is the difference between creativity in science and art? Can creativity be taught?

The new essays that comprise The Philosophy of Creativity take up these and other key questions and, in doing so, illustrate the value of interdisciplinary exchange. Written by leading philosophers and psychologists involved in studying creativity, the essays integrate philosophical insights with empirical research.

CONTENTS

I. Introduction
Introducing The Philosophy of Creativity (Elliot Samuel Paul and Scott Barry Kaufman)

II. The Concept of Creativity
1. An Experiential Account of Creativity (Bence Nanay)

III. Aesthetics & Philosophy of Art
2. Creativity and Insight (Gregory Currie)
3. The Creative Audience: Some Ways in which Readers, Viewers and/or Listeners Use their Imaginations to Engage Fictional Artworks (Noël Carroll)
4. The Products of Musical Creativity (Christopher Peacocke)

IV. Ethics & Value Theory
5. Performing Oneself (Owen Flanagan)
6. Creativity as a Virtue of Character (Matthew Kieran)

V. Philosophy of Mind & Cognitive Science
7. Creativity and Not So Dumb Luck (Simon Blackburn)
8. The Role of Imagination in Creativity (Dustin Stokes)
9. Creativity, Consciousness, and Free Will: Evidence from Psychology Experiments (Roy F. Baumeister, Brandon J. Schmeichel, and C. Nathan DeWall)
10. The Origins of Creativity (Elizabeth Picciuto and Peter Carruthers)
11. Creativity and Artificial Intelligence: A Contradiction in Terms? (Margaret Boden)

VI. Philosophy of Science
12. Hierarchies of Creative Domains: Disciplinary Constraints on Blind-Variation and Selective-Retention (Dean Keith Simonton)

VII. Philosophy of Education (& Education of Philosophy)
13. Educating for Creativity (Berys Gaut)
14. Philosophical Heuristics (Alan Hájek)
Much of the philosophical literature on causation has focused on the concept of actual causation, sometimes called token causation. In particular, it is this notion of actual causation that many philosophical theories of causation have attempted to capture. In this paper, we address the question: what purpose does this concept serve? As we shall see in the next section, one does not need this concept for purposes of prediction or rational deliberation. What, then, could the purpose be? We will argue that one can gain an important clue here by looking at the ways in which causal judgments are shaped by people’s understanding of norms.
I argue that the best interpretation of the general theory of relativity has need of a causal entity, and causal structure that is not reducible to light cone structure. I suggest that this causal interpretation of GTR helps defeat a key premise in one of the most popular arguments for causal reductionism, viz., the argument from physics.
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s defense of Indifference Principles is unsuccessful. Second, it contends that White’s arguments against permissive views do not succeed.
Blaming (construed broadly to include both blaming-attitudes and blaming-actions) is a puzzling phenomenon. Even when we grant that someone is blameworthy, we can still sensibly wonder whether we ought to blame him. We sometimes choose to forgive and show mercy, even when it is not asked for. We are naturally led to wonder why we shouldn’t always do this. Wouldn’t it be better to wholly reject the punitive practices of blame, especially in light of their often undesirable effects, and embrace an ethic of unrelenting forgiveness and mercy? In this paper I seek to address these questions by offering an account of blame that provides a rationale for thinking that to wholly forswear blaming blameworthy agents would be deeply mistaken. This is because, as I will argue, blaming is a way of valuing: it is “a mode of valuation.” I will argue that among the minimal standards of respect generated by valuable objects, notably persons, is the requirement to redress disvalue with blame. It is not just that blame is something additional we are required to do in properly valuing; rather, blame is part of what it is to properly value. Blaming, given the existence of blameworthy agents, is a mode of valuation required by the standards of minimal respect. To forswear blame would be to fail to value what we ought to value.
I discuss what Aquinas’ doctrine of divine simplicity is, and what he takes to be its implications. I also discuss the extent to which Aquinas succeeds in motivating and defending those implications.
Although they are continually compositionally reconstituted and reconfigured, organisms nonetheless persist as ontologically unified beings over time – but in virtue of what? A common answer is: in virtue of their continued possession of the capacity for morphological invariance, which persists through, and in spite of, their mereological alteration. While we acknowledge that organisms’ capacity for the “stability of form” – homeostasis – is an important aspect of their diachronic unity, we argue that this capacity is derived from, and grounded in, a more primitive one – namely, the homeodynamic capacity for the “specified variation of form”. In introducing a novel type of causal power – a “structural power” – we claim that it is the persistence of their dynamic potential to produce a specified series of structurally adaptive morphologies which grounds organisms’ privileged status as metaphysically “one over many” over time.
This chapter surveys hybrid theories of well-being. It also discusses some criticisms, and suggests some new directions that philosophical discussion of hybrid theories might take.
This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’ treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they’re not as bad as those of Elga’s account, and no worse than those of Lewis’ account.
Chapter 1: "Reason for Hope" by Michael J. Murray
Chapter 2: "Theistic Arguments" by William C. Davis
Chapter 3: "A Scientific Argument for the Existence of God: The Fine-Tuning Design Argument" by Robin Collins
Chapter 4: "God, Evil and Suffering" by Daniel Howard-Snyder
Chapter 5: "Arguments for Atheism" by John O'Leary-Hawthorne
Chapter 6: "Faith and Reason" by Caleb Miller
Chapter 7: "Religious Pluralism" by Timothy O'Connor
Chapter 8: "Eastern Religions" by Robin Collins
Chapter 9: "Divine Providence and Human Freedom" by Scott A. Davison
Chapter 10: "The Incarnation and the Trinity" by Thomas D. Senor
Chapter 11: "The Resurrection of the Body and the Life Everlasting" by Trenton Merricks
Chapter 12: "Heaven and Hell" by Michael J. Murray
Chapter 13: "Religion and Science" by W. Christopher Stewart
Chapter 14: "Miracles and Christian Theism" by J. A. Cover
Chapter 15: "Christianity and Ethics" by Frances Howard-Snyder
Chapter 16: "The Authority of Scripture" by Douglas Blount
Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such “all-accuracy” or “purely alethic” approaches can accommodate and justify evidential Bayesian norms. In response, proponents of purely alethic approaches, such as Pettigrew (2013b) and Joyce (2016), have argued that scoring rule arguments provide us with compatible and purely alethic justifications for the traditional Bayesian norms, including evidential norms. In this paper I raise several challenges to this claim. First, I argue that many of the justifications these scoring rule arguments provide are not compatible. Second, I raise worries for the claim that these scoring rule arguments provide purely alethic justifications. Third, I turn to assess the more general question of whether purely alethic justifications for evidential norms are even possible, and argue that, without making some contentious assumptions, they are not. Fourth, I raise some further worries for the possibility of providing purely alethic justifications for content-sensitive evidential norms, like the Principal Principle.
Deference principles are principles that describe when, and to what extent, it’s rational to defer to others. Recently, some authors have used such principles to argue for Evidential Uniqueness, the claim that for every batch of evidence, there’s a unique doxastic state that it’s permissible for subjects with that total evidence to have. This paper has two aims. The first aim is to assess these deference-based arguments for Evidential Uniqueness. I’ll show that these arguments only work given a particular kind of deference principle, and I’ll argue that there are reasons to reject these kinds of principles. The second aim of this paper is to spell out what a plausible generalized deference principle looks like. I’ll start by offering a principled rationale for taking deference to constrain rational belief. Then I’ll flesh out the kind of deference principle suggested by this rationale. Finally, I’ll show that this principle is both more plausible and more general than the principles used in the deference-based arguments for Evidential Uniqueness.
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction, as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
Jeff McMahan has long shown himself to be a vigorous and incisive critic of speciesism, and in his essay “Our Fellow Creatures” he has been particularly critical of speciesist arguments that draw inspiration from Wittgenstein. In this essay I consider his arguments against speciesism generally and the species-norm account of deprivation in particular. I argue that McMahan's ethical framework is more nuanced and more open to the incorporation of speciesist intuitions regarding deprivation than he himself suggests. Specifically, I argue that, given his willingness to include a comparative dimension in his “Intrinsic Potential Account”, he ought to recognize species as a legitimate comparison class. I also argue that a sensible speciesism can be pluralist and flexible enough to accommodate many of McMahan's arguments in defense of “moral individualist” intuitions. In this way, I hope to make the case for at least a partial reconciliation between McMahan and the “Wittgensteinian speciesists”, e.g. Cora Diamond, Stephen Mulhall, and Raimond Gaita.
This paper examines two mistakes regarding David Lewis’ Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
In any field, we might expect different features relevant to its understanding and development to receive attention at different times, depending on the stage of that field’s growth and the interests that occupy theorists and even the history of the theorists themselves. In the relatively young life of argumentation theory, at least as it has formed a body of issues with identified research questions, attention has almost naturally been focused on the central concern of the field—arguments. Focus is also given to the nature of arguers and the position of the evaluator, who is often seen as possessing a “God’s-eye view” (Hamblin 1970). Less attention, however, has been paid in the philosophical literature to the …
According to commonsense psychology, one is conscious of everything that one pays attention to, but one does not pay attention to all the things that one is conscious of. Recent lines of research purport to show that commonsense is mistaken on both of these points: Mack and Rock (1998) tell us that attention is necessary for consciousness, while Kentridge and Heywood (2001) claim that consciousness is not necessary for attention. If these lines of research were successful they would have important implications regarding the prospects of using attention research to inform us about consciousness. The present essay shows that these lines of research are not successful, and that the commonsense picture of the relationship between attention and consciousness can be …
In recent work, Callender and Cohen (2009) and Hoefer (2007) have proposed variants of the account of chance proposed by Lewis (1994). One of the ways in which these accounts diverge from Lewis’s is that they allow special sciences and the macroscopic realm to have chances that are autonomous from those of physics and the microscopic realm. A worry for these proposals is that autonomous chances may place incompatible constraints on rational belief. I examine this worry, and attempt to determine (i) what kinds of conflicts would be problematic, and (ii) whether these proposals lead to problematic conflicts. After working through a pair of cases, I conclude that these proposals do give rise to problematic conflicts.
Our reception of Hegel’s theory of action faces a fundamental difficulty: on the one hand, that theory is quite clearly embedded in a social theory of modern life, but on the other hand most of the features of the society that gave that embedding its specific content have become almost inscrutably strange to us (e.g., the estates and the monarchy). Thus we find ourselves in the awkward position of stressing the theory’s sociality even as we scramble backwards to distance ourselves from the particular social institutions that gave conceptualized form to such sociality in Hegel’s own opinion. My attempt in this article is to make our position less awkward by giving us at least one social-ontological leg to stand on. Specifically, I want to defend a principled and conceptual pluralism as forming the heart of Hegel’s theory of action. If this view can be made out, then we will have a social-ontological structure that might be filled out in different ways in Hegel’s time and our own while simultaneously giving real teeth to the notion that Hegel’s theory of action is essentially social.
Should economics study the psychological basis of agents’ choice behaviour? I show how this question is multifaceted and profoundly ambiguous. There is no sharp distinction between ‘mentalist’ answ...
There is a long‐standing project in psychology the goal of which is to explain our ability to perceive speech. The project is motivated by evidence that seems to indicate that the cognitive processing to which speech sounds are subjected is somehow different from the normal processing employed in hearing. The Motor Theory of speech perception was proposed in the 1960s as an attempt to explain this specialness. The first part of this essay is concerned with the Motor Theory's explanandum. It shows that it is rather hard to give a precise account of what the Motor Theory is a theory of. The second part of the essay identifies problems with the theory's explanans: There are difficulties in finding a plausible account of what the content of the Motor Theory is supposed to be.
The Realist that investigates questions of ontology by appeal to the quantificational structure of language assumes that the semantics for the privileged language of ontology is externalist. I argue that such a language cannot be (some variant of) a natural language, as some Realists propose. The flexibility exhibited by natural language expressions noted by Chomsky and others cannot obviously be characterized by the rigid models available to the externalist. If natural languages are hostile to externalist treatments, then the meanings of natural language expressions serve as poor guides for ontological investigation, insofar as their meanings will fail to determine the referents of their constituents. This undermines the Realist’s use of natural languages to settle disputes in metaphysics.
Rather than approaching the question of the constructive or therapeutic character of Hegel’s Logic through a global consideration of its argument and its relation to the rest of Hegel’s system, I want to come at the question by considering a specific thread that runs through the argument of the Logic, namely the question of the proper understanding of power or control. What I want to try to show is that there is a close connection between therapeutic and constructive elements in Hegel’s treatment of power. To do so I will make use of two deep criticisms of Hegel’s treatment from Michael Theunissen. First comes Theunissen’s claim that in Hegel’s logical scheme, reality is necessarily dominated by the concept rather than truly reciprocally related to it. Then I will consider Theunissen’s structurally analogous claim that for Hegel, the power of the concept is the management of the suppression of the other. Both of these claims are essentially claims about the way in which elements of the logic of reflection are modified and yet continue to play a role in the logic of the concept.
The ‘traditional’ interpretation of the Receptacle in Plato’s Timaeus maintains that its parts act as substrata to ordinary particulars such as dogs and tables: particulars are form-matter compounds to which Forms supply properties and the Receptacle supplies a substratum, as well as a space in which these compounds come to be. I argue, against this view, that parts of the Receptacle cannot act as substrata for those particulars. I also argue, making use of contemporary discussions of supersubstantivalism, against a substratum interpretation that separates substratum and space in the Timaeus.
The advent of contemporary evolutionary theory ushered in the eventual decline of Aristotelian Essentialism (Æ) – for it is widely assumed that essence does not, and cannot, have any proper place in the age of evolution. This paper argues that this assumption is a mistake: if Æ can be suitably evolved, it need not face extinction. In it, I claim that if that theory’s fundamental ontology consists of dispositional properties, and if its characteristic metaphysical machinery is interpreted within the framework of contemporary evolutionary developmental biology, an evolved essentialism is available. The reformulated theory of Æ offered in this paper not only fails to fall prey to the typical collection of criticisms, but is also independently both theoretically and empirically plausible. The paper contends that, properly understood, essence belongs in the age of evolution.
Though the realm of biology has long been under the philosophical rule of the mechanistic magisterium, recent years have seen a surprisingly steady rise in the usurping prowess of process ontology. According to its proponents, theoretical advances in the contemporary science of evo-devo have afforded that ontology a particularly powerful claim to the throne: in that increasingly empirically confirmed discipline, emergently autonomous, higher-order entities are the reigning explanantia. If we are to accept the election of evo-devo as our best conceptualisation of the biological realm with metaphysical rigour, must we depose our mechanistic ontology for failing to properly “carve at the joints” of organisms? In this paper, I challenge the legitimacy of that claim: not only can the theoretical benefits offered by a process ontology be had without it, they cannot be sufficiently grounded without the metaphysical underpinning of the very mechanisms which processes purport to replace. The biological realm, I argue, remains one best understood as under the governance of mechanistic principles.
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.