In any field, we might expect different features relevant to its understanding and development to receive attention at different times, depending on the stage of that field’s growth, the interests that occupy theorists, and even the history of the theorists themselves. In the relatively young life of argumentation theory, at least as it has formed a body of issues with identified research questions, attention has almost naturally been focused on the central concern of the field—arguments. Focus is also given to the nature of arguers and the position of the evaluator, who is often seen as possessing a “God’s-eye view” (Hamblin 1970). Less attention, however, has been paid in the philosophical literature to the …
Plato’s Socrates holds that we always have reason to be just, since being just is essential for living a happy and successful life. In Book II of Plato’s Republic, Socrates’ main interlocutor, Glaucon, raises a vivid and powerful challenge to this claim. He presents the case of Gyges, a Lydian shepherd who possesses a ring that gives him the power of invisibility. Glaucon’s contention is that Gyges does not have reason to be just in this circumstance, since being just will not promote his happiness. Thus, the argument poses the following challenge: what reason do we have to be just, particularly in circumstances where we can get away with injustice? In this essay, we describe Glaucon’s challenge, highlight its similarity to challenges offered by other skeptical figures in the history of philosophy, namely, Hobbes’ Foole and Hume’s sensible knave, and present three broad lines of response.
Functional diversity holds the promise of understanding ecosystems in ways unattainable by taxonomic diversity studies. Underlying this promise is the intuition that investigating the diversity of what organisms actually do—i.e. their functional traits—within ecosystems will generate more reliable insights into the ways these ecosystems behave, compared to considering only species diversity. But this promise also rests on several conceptual and methodological—i.e. epistemic—assumptions that cut across various theories and domains of ecology. These assumptions should be clearly addressed, notably for the sake of an effective comparison and integration across domains, and for assessing whether or not to use functional diversity approaches for developing ecological management strategies. The objective of this contribution is to identify and critically analyze the most salient of these assumptions. To this aim, we provide an “epistemic roadmap” that pinpoints these assumptions along a set of historical, conceptual, empirical, theoretical, and normative dimensions.
The article presents Lord Acton’s notion of liberalism and citizenship. Liberalism, as ordinarily understood, treats the individual as the founding stone of civil society and the measure of political order – man and woman and their rights are supreme. In the past, this allowed liberalism to delegitimize the society of estates and absolutism, yet it raised the insoluble dilemma of how to reconnect the self‑sufficient individual with society and the state. Furthermore, social engineering employed in the service of equality and individual rights made liberalism an abstract doctrine, hostile to any tradition and illiberal in its nature. Unlike this doctrinaire liberalism, Actonian liberalism is organic, thriving on national tradition and having only one arbitrary element – a higher law that is the yardstick of good and evil. Organic liberalism knows not the contradiction between the individual and the polity. It balances the rights of the individual with respect for the national tradition, and stresses individual and communal liberty as the ultimate aim of politics. Further, man and woman participate in a multitude of intermediate organizations in which they can truly exercise their citizenship. Organic liberalism, as Acton claims, is a constant element of Western Civilization, even if it achieved its maturity only in the seventeenth and eighteenth centuries. It then became a characteristic feature of Anglo‑American liberalism.
Throughout the biological and biomedical sciences, prescriptive ‘minimum information’ (MI) checklists specifying the key information to include when reporting experimental results are beginning to find favor with experimentalists, analysts, publishers and funders alike. Such checklists aim to ensure that methods, data, analyses and results are described to a level sufficient to support unambiguous interpretation, sophisticated search, reanalysis, and experimental corroboration and reuse of data sets, facilitating the extraction of maximum value from them. However, such checklists are usually developed independently by groups working within particular biologically- or technologically-delineated domains. Consequently, an overview of the full range of checklists can be difficult to establish without intensive searching, and even tracking the evolution of a single checklist may be a non-trivial exercise. Checklists are also inevitably partially redundant with one another, and where they overlap is far from straightforward to determine. Furthermore, conflicts in scope and arbitrary decisions on wording and sub-structuring make integration difficult and inhibit their use in combination. Overall, these issues present significant difficulties for the users of checklists, especially those in areas such as systems biology, who routinely combine information from multiple biological domains and technology platforms. To address all of the above, we present MIBBI (Minimum Information for Biological and Biomedical Investigations): a web-based communal resource for such checklists, designed to act as a ‘one-stop shop’ for those exploring the range of extant checklist projects, to foster their collaborative, integrative development, and ultimately to promote the gradual integration of checklists.
Bio-ontologies are essential tools for accessing and analyzing the rapidly growing pool of plant genomic and phenomic data. Ontologies provide structured vocabularies to support consistent aggregation of data and a semantic framework for automated analyses and reasoning. They are a key component of the Semantic Web. This paper provides background on what bio-ontologies are, why they are relevant to botany, and the principles of ontology development. It includes an overview of ontologies and related resources that are relevant to plant science, with a detailed description of the Plant Ontology (PO). We discuss the challenges of building an ontology that covers all green plants (Viridiplantae). Key results: Ontologies can advance plant science in four key areas: (1) comparative genetics, genomics, phenomics, and development; (2) taxonomy and systematics; (3) semantic applications; and (4) education. Conclusions: Bio-ontologies offer a flexible framework for comparative plant biology, based on common botanical understanding. As genomic and phenomic data become available for more species, we anticipate that the annotation of data with ontology terms will become less centralized, while at the same time, the need for cross-species queries will become more common, causing more researchers in plant science to turn to ontologies.
This essay begins with a consideration of one way in which animals and persons may be valued as “irreplaceable.” Drawing on both Plato and Pascal, I consider reasons for skepticism regarding the legitimacy of this sort of attachment. While I do not offer a complete defense against such skepticism, I do show that worries here may be overblown due to the conflation of distinct metaphysical and normative concerns. I then go on to clarify what sort of value is at issue in cases of irreplaceable attachment. I characterize “unique value” as the kind of value attributed to a thing when we take that thing to be (theoretically, not just practically) irreplaceable. I then consider the relationship between this sort of value and intrinsic value. After considering the positions of Gowans, Moore, Korsgaard, Frankfurt, and others, I conclude that unique value is best understood not as a variety of intrinsic value but rather as one kind of final value that is grounded in the extrinsic properties of the object.
One of the reasons why there is no Hegelian school in contemporary ethics in the way that there are Kantian, Humean and Aristotelian schools is that Hegelians have been unable to clearly articulate the Hegelian alternative to those schools’ moral psychologies, i.e., to present a Hegelian model of the motivation to, perception of, and responsibility for moral action. Here it is argued that, in its most basic terms, Hegel's model can be understood as follows: the agent acts in a responsible and thus paradigmatic sense when she identifies as reasons those motivations which are grounded in her talents and which support actions that are likely to develop those talents in ways suggested by her interests.
Major Research Paper Abstract

A Part of This World: Deleuze and the Logic of Creation.

Is there a particular danger in following Deleuze’s philosophy to its end result? According to Peter Hallward and Alain Badiou, Deleuze’s philosophy has some rather severe conclusions. Deleuze has been known as a vitalist thinker of life and affirmation. Hallward and Badiou seek to challenge this accepted view of Deleuze, arguing that these norms of Deleuzian scholarship should be questioned and that Deleuze ultimately calls for the evacuation of political action in order to remain firm in the realm of pure contemplation. I intend to investigate and defend Deleuze’s philosophy against critics like Badiou and Hallward, and to show that Deleuze’s philosophy is not only creative and vital but also highly revolutionary and ‘a part of this world.’ I will look at several works in Deleuze’s corpus, as well as at Deleuzian scholars who defend Deleuze’s position.

Hallward sees Deleuze as a theophantic thinker of the One: as for Spinoza, an individual mode must align itself with the intellectual love of God, so that creativity and expressivity may be mediated through it. Thus, according to Hallward, the major theme of Deleuze’s philosophy is creativity; a subject or creature must tap into this vital spark of creation, which is also a form of creatural confinement. Hallward states that this creative act can only occur in the realm of the virtual, by lines of flight leading ‘out of this world’. The subject is then re-introduced to an extra-worldly existence of contemplation and remains further removed from decisions and lived experience. Deleuze, according to Hallward, falls prey to a cosmological pantheism.

Badiou has similar concerns. Deleuze’s philosophy is too systematic and abstract. The entirety of Deleuze’s work is surrounded by a metaphysics of the One, and its repercussions essentially lead to an overt asceticism. Badiou notes that Deleuze wants us all to surrender thought to a renewed concept of the One. Through this surrender, the multiple is lost and incorporated into the realm of simulacra. Everything in this Deleuzian world is ‘always-already’ contained in the infinite and inhuman totality of the One. According to Badiou, this entire process is articulated in the power of inorganic life that operates through all of us. Like Hallward, Badiou sees Deleuze demolishing the subject, who is stuck between machinery and exteriority. Subjects are forced to transcend and go beyond their limits, slowly collapsing into an infinite virtuality. Badiou believes this is a powerful metaphor for a philosophy of death. Thus the conditions of Deleuzian thought are contingent upon asceticism, making a Deleuzian world a sort of ‘crowned anarchy’. Badiou sees Deleuze’s ascetic mission as intimately linked with a philosophy of death, and like Hallward he urges us to pay careful attention to the outcome of such an aristocratic philosophy. Death, according to Badiou, symbolizes Deleuzian thought, not only making it dangerous but also rendering it an ineffective position. Badiou also points out that Deleuze’s conceptual sources are not only limited but also repeated time and time again through a monotonous selection of concepts. Is this a fair critique and representation of Deleuzian thought?

Eugene Holland states that both Hallward and Badiou have misrepresented Deleuze. Deleuze does invoke the creation of a new earth, but one in which we can fully believe. The only world Deleuze wants to get out of is the world of habits, conformity, and power: the forces that block creative being. According to Holland, Hallward presents us with a Deleuze who inhibits engagement with the world, whereas Deleuze’s creative enterprise insists on forming concepts that can change and transform our world.

So where does the problem of misrepresentation begin? It begins with both Badiou and Hallward giving an erroneous account of the actual/virtual distinction in Deleuze’s philosophy. According to Protevi, Hallward posits a dualism between the actual and the virtual, denying the role of the intensive. Hallward sees only the relationship between the intensive and the virtual, ignoring the fact that the intensive has its own ontological register that mediates between the virtual and the actual. Protevi notes, however, that if one could not accept the intensive as its own ontological register and had to place it with one or the other, one would have to accept an interrelationship between the actual and the intensive. Hallward instead places it in the realm of the virtual, which leads to his major claim that Deleuze’s philosophy leads us out of the world. Protevi states that intensive processes happen in our world: they are a part of this world. Hallward completely empties all creativity from the actual, thus depending on the virtual and its slippery slope. Both Hallward and Badiou have missed the point altogether. We live in an intensive/actual world, and the main point about Deleuze’s politics has to do with experimentation, social interaction, and the transformation and intervention of the concept. As Daniel W. Smith states, unlike Badiou, Deleuze is not searching for an axiomatic approach to the world, one that is prone to reductionism, but rather for problematic, inventive and creative methods to transform a society.
In this paper, I argue that it is open to semicompatibilists to maintain that no ability to do otherwise is required for moral responsibility. This is significant for two reasons. First, it undermines Christopher Evan Franklin’s recent claim that everyone thinks that an ability to do otherwise is necessary for free will and moral responsibility. Second, it reveals an important difference between John Martin Fischer’s semicompatibilism and Kadri Vihvelin’s version of classical compatibilism, which shows that the dispute between them is not merely a verbal dispute. Along the way, I give special attention to the notion of general abilities, and, though I defend the distinctiveness of Fischer’s semicompatibilism against the verbal dispute charge, I also use the discussion of the nature of general abilities to argue for the falsity of a certain claim that Fischer and coauthor Mark Ravizza have made about their account.
A collection of essays, mostly original, on the actual and possible positions on free will available to Buddhist philosophers, by Christopher Gowans, Rick Repetti, Jay Garfield, Owen Flanagan, Charles Goodman, Galen Strawson, Susan Blackmore, Martin T. Adam, Christian Coseru, Marie Friquegnon, Mark Siderits, Ben Abelson, B. Alan Wallace, Peter Harvey, Emily McRae, and Karin Meyers, and a Foreword by Daniel Cozort.
Chapter 1: "Reason for Hope" by Michael J. Murray; Chapter 2: "Theistic Arguments" by William C. Davis; Chapter 3: "A Scientific Argument for the Existence of God: The Fine-Tuning Design Argument" by Robin Collins; Chapter 4: "God, Evil and Suffering" by Daniel Howard-Snyder; Chapter 5: "Arguments for Atheism" by John O'Leary-Hawthorne; Chapter 6: "Faith and Reason" by Caleb Miller; Chapter 7: "Religious Pluralism" by Timothy O'Connor; Chapter 8: "Eastern Religions" by Robin Collins; Chapter 9: "Divine Providence and Human Freedom" by Scott A. Davison; Chapter 10: "The Incarnation and the Trinity" by Thomas D. Senor; Chapter 11: "The Resurrection of the Body and the Life Everlasting" by Trenton Merricks; Chapter 12: "Heaven and Hell" by Michael J. Murray; Chapter 13: "Religion and Science" by W. Christopher Stewart; Chapter 14: "Miracles and Christian Theism" by J. A. Cover; Chapter 15: "Christianity and Ethics" by Frances Howard-Snyder; Chapter 16: "The Authority of Scripture" by Douglas Blount.
This book’s goal is to give an intellectual context for the following manuscript.

Includes bibliographical references and an index. Pages 1-123. 1). Philosophy. 2). Metaphysics. 3). Philosophy, German. 4). Philosophy, German -- 18th century. 5). Philosophy, German and Greek Influences Metaphysics. I. Hegel, Georg Wilhelm Friedrich -- 1770-1831 -- Das älteste Systemprogramm des deutschen Idealismus. II. Rosenzweig, Franz, -- 1886-1929. III. Schelling, Friedrich Wilhelm Joseph von, -- 1775-1854. IV. Hölderlin, Friedrich, -- 1770-1843. V. Ferrer, Daniel Fidel, 1952-. [Translation from German into English of Das älteste Systemprogramm des deutschen Idealismus.]

Note: the manuscript is in the handwriting of G.W.F. Hegel, but the actual authorship is disputed. No date is given. Franz Rosenzweig made up the title as it is known today. He published the text in 1917. At that time, F. Rosenzweig thought F.W.J. Schelling was the author. No one has read this book for errors; as always, any errors, mistakes or oversights are mine alone. Given a couple more years, I could improve this book. This is a philosophical translation and not a philological translation. Martin Luther, who made the famous early translation of the Bible into German, wrote in a letter: “If anyone does not like my translation, they can ignore it…” (September 15, 1530). There are no ‘correct’ translations; some are just better than others.

The Oldest Systematic Program of German Idealism. The German title is: Das Älteste Systemprogramm Des Deutschen Idealismus. This title was made up by Franz Rosenzweig in 1917, when he first published the manuscript, which he had found in the Royal Library in Berlin in 1913. The manuscript’s suggested date of around 1796 is based on handwriting research; the manuscript itself is not dated. The Prussian State Library acquired, at a March 1913 auction of the house Liepmannssohn in Berlin, a single sheet written on the front and back in Hegel’s cursive handwriting. The manuscript was lost during WWII, but Dieter Henrich found it again in 1979 in the “Biblioteka Jagiellonska” in Krakow (Poland), where it is today. Address: Jagiellonian Library, Jagiellonian University, al. Mickiewicza 22, 30-059 Cracow, Poland. Later research suggests that the manuscript had come from the estate of Hegel’s student Friedrich Christoph Förster (1791-1868). He was one of the editors of Hegel’s posthumous works and most likely had access to a number of Hegel’s manuscripts, this text being one of them. Hegel traveled around Bohemia with Marie and Friedrich Christoph Förster around 1820-21 (see Klaus Vieweg).

Philosophical mystery: who is the author or authors of this text?

Take a plunge into the deep and cold waters. Maybe a quagmire or quandary, but decidedly interesting. This project is to contextualize an old handwritten manuscript that is about 225 years old. The actual author is a mystery. I offer my own assessment; you can make your own. The mystery has continued to unfold since 1917, and there is plenty to read. Otherwise, think about the authorship, read more of the German philosophers and authors of this period, and enjoy the depth of their thinking and philosophizing. On one hand, there is the sheer fun of the puzzle of the authorship question; on the other, there are the alluring thoughts that lead to the nascent stage of German Idealism and our intellectual heritage.

There is no end to the accolades for this group of philosophers – a heritage that we still hear in our attempts to move forward into our future.

Do your own astute exegesis (ἐξήγησις), as all paths are still open. Let your thought take to the wings of what is called thinking with this text. Critical encounters (Auseinandersetzung, or Gegenüberstellung) with at least Friedrich Hölderlin (1770-1843), Friedrich Wilhelm Joseph Schelling (1775-1854), and Georg Wilhelm Friedrich Hegel (1770-1831) start here! German Idealism. We are not going to study this situation endlessly; instead we make some broad strokes and provide a general context. You are allowed to read between the lines too. Goal: to understand the overall affinity and differences between the intellectuals of this period in German history, and to come to grips with this demanding text within its large scholarly context of the last 100 years. There are no final answers.
Although they are continually compositionally reconstituted and reconfigured, organisms nonetheless persist as ontologically unified beings over time – but in virtue of what? A common answer is: in virtue of their continued possession of the capacity for morphological invariance, which persists through, and in spite of, their mereological alteration. While we acknowledge that organisms’ capacity for the “stability of form” – homeostasis – is an important aspect of their diachronic unity, we argue that this capacity is derived from, and grounded in, a more primitive one – namely, the homeodynamic capacity for the “specified variation of form”. In introducing a novel type of causal power – a ‘structural power’ – we claim that it is the persistence of their dynamic potential to produce a specified series of structurally adaptive morphologies which grounds organisms’ privileged status as metaphysically “one over many” over time.
Although contemporary metaphysics has recently undergone a neo-Aristotelian revival wherein dispositions, or capacities, are now commonplace in empirically grounded ontologies, being routinely utilised in theories of causality and modality, a central Aristotelian concept has yet to be given serious attention – the doctrine of hylomorphism. The reason for this is clear: while the Aristotelian ontological distinction between actuality and potentiality has proven to be a fruitful conceptual framework with which to model the operation of the natural world, the distinction between form and matter has yet to similarly earn its keep. In this chapter, I offer a first step toward showing that the hylomorphic framework is up to that task. To do so, I return to the birthplace of that doctrine – the biological realm. Utilising recent advances in developmental biology, I argue that the hylomorphic framework is an empirically adequate and conceptually rich explanatory schema with which to model the nature of organisms.
This book is a translation of W.V. Quine's Kant Lectures, given as a series at Stanford University in 1980. It provides a short and useful summary of Quine's philosophy. There are four lectures altogether: I. Prolegomena: Mind and its Place in Nature; II. Endolegomena: From Ostension to Quantification; III. Endolegomena loipa: The forked animal; and IV. Epilegomena: What's It all About? The Kant Lectures have been published to date only in Italian and German translation. The present book is filled out with the translator's critical Introduction, "The esoteric Quine?", a bibliography based on Quine's sources, and an Index for the volume.
Mysticism and the sciences have traditionally been theoretical enemies, and the closer that philosophy allies itself with the sciences, the greater the philosophical tendency has been to attack mysticism as a possible avenue towards the acquisition of knowledge and/or understanding. Science and modern philosophy generally aim for epistemic disclosure of their contents, and, conversely, mysticism either aims at the restriction of esoteric knowledge, or claims such knowledge to be non-transferable. Thus the mystic is typically seen by analytic philosophers as a variety of 'private language' speaker, although the plausibility of such a position is seemingly foreclosed by Wittgenstein's work in the Philosophical Investigations. Yorke re-examines Wittgenstein's conclusion on the matter of private language, and argues that so-called 'ineffable' mystical experiences, far from being a 'beetle in a box', can play a viable role in our public language-games, via renewed efforts at articulation.
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s defense of Indifference Principles is unsuccessful. Second, it contends that White’s arguments against permissive views do not succeed.
Christoph Andreas Leonhard Creuzer (1768-1844), who would devote his life to an ecclesiastical career and to charitable works, published in 1793 – while still young and enthusiastic about philosophy – a work that caused a certain stir, the Skeptical Considerations on the Freedom of the Will, on which Fichte and Schelling also took polemical positions. While accepting the principles of the critical philosophy, Creuzer maintains that the idea of freedom as autonomy of the will, as Kant defined it, leads to nothing less than Spinozism, that is, to the negation of the concepts of imputation, merit, and guilt. Hiding behind a skepticism of convenience, Creuzer shows how this Spinozistic conclusion, which Kant tried in vain to avoid, is the unavoidable outcome of his theoretical philosophy as much as of his practical philosophy, even though the latter aimed in the first instance to safeguard moral responsibility.
Ectogestation involves the gestation of a fetus in an ex utero environment. The possibility of this technology raises a significant question for the abortion debate: Does a woman’s right to end her pregnancy entail that she has a right to the death of the fetus when ectogestation is possible? Some have argued that it does not (Mathison & Davis). Others claim that, while a woman alone does not possess an individual right to the death of the fetus, the genetic parents have a collective right to its death (Räsänen). In this paper, I argue that the possibility of ectogestation will radically transform the problem of abortion. The argument that I defend purports to show that, even if it is not a person, there is no right to the death of a fetus that could be safely removed from a human womb and gestated in an artificial womb, because there are competent people who are willing to care for and raise the fetus as it grows into a person. Thus, given the possibility of ectogestation, the moral status of the fetus plays no substantial role in determining whether there is a right to its death.
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
This is an encyclopedia entry on consequentializing. It explains what consequentializing is, what makes it possible, why someone might be motivated to consequentialize, and how to consequentialize a non-consequentialist theory.
This essay constitutes an attempt to probe the very idea of a saying/showing distinction of the kind that Wittgenstein advances in the Tractatus—to say what such a distinction consists in, to say what philosophical work it has to do, and to say how we might be justified in drawing such a distinction. Towards the end of the essay the discussion is related to Wittgenstein’s later work. It is argued that we can profitably see this work in such a way that a saying/showing distinction arises there too. In particular, in the final sub-section of the essay, it is suggested that we can see in Wittgenstein’s later work an inducement to say what we are shown.
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
Much of the philosophical literature on causation has focused on the concept of actual causation, sometimes called token causation. In particular, it is this notion of actual causation that many philosophical theories of causation have attempted to capture. In this paper, we address the question: what purpose does this concept serve? As we shall see in the next section, one does not need this concept for purposes of prediction or rational deliberation. What then could the purpose be? We will argue that one can gain an important clue here by looking at the ways in which causal judgments are shaped by people’s understanding of norms.
This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’ treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they’re not as bad as those of Elga’s account, and no worse than those of Lewis’ account.
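For context, and as our own gloss rather than a quotation from the paper: in the standard setup Beauty is woken once if a fair coin lands heads and twice if it lands tails, and the two best-known accounts assign different credences to heads upon waking:

\[ Cr_{\text{Elga}}(\mathrm{Heads} \mid \text{awake}) = \tfrac{1}{3} \;(\text{the “thirder” answer}), \qquad Cr_{\text{Lewis}}(\mathrm{Heads} \mid \text{awake}) = \tfrac{1}{2} \;(\text{the “halfer” answer}). \]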
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction, as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
This chapter surveys hybrid theories of well-being. It also discusses some criticisms, and suggests some new directions that philosophical discussion of hybrid theories might take.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
Deference principles are principles that describe when, and to what extent, it’s rational to defer to others. Recently, some authors have used such principles to argue for Evidential Uniqueness, the claim that for every batch of evidence, there’s a unique doxastic state that it’s permissible for subjects with that total evidence to have. This paper has two aims. The first aim is to assess these deference-based arguments for Evidential Uniqueness. I’ll show that these arguments only work given a particular kind of deference principle, and I’ll argue that there are reasons to reject these kinds of principles. The second aim of this paper is to spell out what a plausible generalized deference principle looks like. I’ll start by offering a principled rationale for taking deference to constrain rational belief. Then I’ll flesh out the kind of deference principle suggested by this rationale. Finally, I’ll show that this principle is both more plausible and more general than the principles used in the deference-based arguments for Evidential Uniqueness.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
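As a point of reference, and in our own notation rather than the paper’s, the two kinds of rule can be written roughly as follows; the exact formulation of the ur-prior rule varies across the authors surveyed:

\[ Cr_{\text{new}}(H) = Cr_{\text{old}}(H \mid E) \quad \text{(Conditionalization, with } E \text{ the evidence just learned)}, \]
\[ Cr_{t}(H) = Cr_{\text{ur}}(H \mid E_{t}) \quad \text{(an ur-prior rule, with } Cr_{\text{ur}} \text{ an ur-prior and } E_{t} \text{ the subject’s total evidence at } t\text{)}. \]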
The paper argues that an account of understanding should take the form of a Carnapian explication and acknowledge that understanding comes in degrees. An explication of objectual understanding is defended, which helps to make sense of the cognitive achievements and goals of science. The explication combines a necessary condition with three evaluative dimensions: An epistemic agent understands a subject matter by means of a theory only if the agent commits herself sufficiently to the theory of the subject matter, and to the degree that the agent grasps the theory, the theory answers to the facts and the agent’s commitment to the theory is justified. The threshold for outright attributions of understanding is determined contextually. The explication has descriptive as well as normative facets and allows for the possibility of understanding by means of non-explanatory theories.
This paper examines two mistakes regarding David Lewis’ Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
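For readers unfamiliar with it, the Principal Principle is standardly glossed along the following lines (our gloss, not the paper’s own formulation), where \(Cr\) is a reasonable initial credence function, \(\langle ch_{t}(A) = x \rangle\) is the proposition that the chance of \(A\) at time \(t\) is \(x\), and \(E\) is admissible at \(t\):

\[ Cr\big(A \mid \langle ch_{t}(A) = x \rangle \wedge E\big) = x. \]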
A central tension shaping metaethical inquiry is that normativity appears to be subjective yet real, where it’s difficult to reconcile these aspects. On the one hand, normativity pertains to our actions and attitudes. On the other, normativity appears to be real in a way that precludes it from being a mere figment of those actions and attitudes. In this paper, I argue that normativity is indeed both subjective and real. I do so by way of treating it as a special sort of artifact, where artifacts are mind-dependent yet nevertheless can carve at the joints of reality. In particular, I argue that the properties of being a reason and being valuable for are grounded in attitudes yet are still absolutely structural.
Should economics study the psychological basis of agents' choice behaviour? I show how this question is multifaceted and profoundly ambiguous. There is no sharp distinction between "mentalist" answers to this question and rival "behavioural" answers. What's more, clarifying this point raises problems for mentalists of the "functionalist" variety (Dietrich and List, 2016). Firstly, functionalist hypotheses collapse into hypotheses about input–output dispositions, I show, unless one places some unwelcome restrictions on what counts as a cognitive variable. Secondly, functionalist hypotheses make some risky commitments about the plasticity of agents' choice dispositions.
We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts that are used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key ethical principles for AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.
Though the realm of biology has long been under the philosophical rule of the mechanistic magisterium, recent years have seen a surprisingly steady rise in the usurping prowess of process ontology. According to its proponents, theoretical advances in the contemporary science of evo-devo have afforded that ontology a particularly powerful claim to the throne: in that increasingly empirically confirmed discipline, emergently autonomous, higher-order entities are the reigning explanantia. If we are to accept the election of evo-devo as our best conceptualisation of the biological realm with metaphysical rigour, must we depose our mechanistic ontology for failing to properly “carve at the joints” of organisms? In this paper, I challenge the legitimacy of that claim: not only can the theoretical benefits offered by a process ontology be had without it, they cannot be sufficiently grounded without the metaphysical underpinning of the very mechanisms which processes purport to replace. The biological realm, I argue, remains one best understood as under the governance of mechanistic principles.
A community, for ecologists, is a unit for discussing collections of organisms. It refers to collections of populations, which consist (by definition) of individuals of a single species. This is straightforward. But communities are unusual kinds of objects, if they are objects at all. They are collections consisting of other diverse, scattered, partly-autonomous, dynamic entities (that is, animals, plants, and other organisms). They often lack obvious boundaries or stable memberships, as their constituent populations not only change but also move in and out of areas, and in and out of relationships with other populations. Familiar objects have identifiable boundaries, for example, and if communities do not, maybe they are not objects. Maybe they do not exist at all. The question this possibility suggests, of what criteria there might be for identifying communities, and for determining whether such communities exist at all, has long been discussed by ecologists. This essay addresses this question as it has recently been taken up by philosophers of science, by examining answers to it which appeared a century ago and which have framed the continuing discussion.
What is philosophy of science? Numerous manuals, anthologies or essays provide carefully reconstructed vantage points on the discipline that have been gained through expert and piecemeal historical analyses. In this paper, we address the question from a complementary perspective: we target the content of one major journal of the field—Philosophy of Science—and apply unsupervised text-mining methods to its complete corpus, from its start in 1934 until 2015. By running topic-modeling algorithms over the full-text corpus, we identified 126 key research topics that span 82 years. We also tracked their evolution and fluctuating significance over time in the journal articles. Our results concur with and document known and lesser-known episodes of the philosophy of science, including the rise and fall of logic and language-related topics, the relative stability of a metaphysical and ontological questioning (space and time, causation, natural kinds, realism), the significance of epistemological issues about the nature of scientific knowledge, as well as the rise of a recent philosophy of biology and other trends. These analyses exemplify how computational text-mining methods can be used to provide an empirical, large-scale and data-driven perspective on the history of philosophy of science that is complementary to other current historical approaches.
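To give a concrete sense of the kind of unsupervised topic modeling described above, here is a minimal, purely illustrative sketch using latent Dirichlet allocation (LDA) from scikit-learn. It is not the authors’ actual pipeline; the toy corpus, parameter choices, and topic count are stand-ins (the paper reports 126 topics over the full journal corpus).

```python
# Illustrative LDA sketch only; NOT the authors' actual pipeline.
# Corpus, parameters, and topic count are toy stand-ins.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "confirmation evidence probability induction bayesian",
    "space time relativity causation physics",
    "species selection fitness gene evolution biology",
    "meaning reference language logic truth",
]

# Bag-of-words representation of the corpus.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(documents)

# Fit an LDA model; a toy corpus only supports a handful of topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words per inferred topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```

The same pattern scales to a full-text corpus: build a document–term matrix, fit the topic model, and then inspect per-topic word distributions and per-document topic weights over time.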
When people want to identify the causes of an event, assign credit or blame, or learn from their mistakes, they often reflect on how things could have gone differently. In this kind of reasoning, one considers a counterfactual world in which some events are different from their real-world counterparts and considers what else would have changed. Researchers have recently proposed several probabilistic models that aim to capture how people do (or should) reason about counterfactuals. We present a new model and show that it accounts better for human inferences than several alternative models. Our model builds on the work of Pearl (2000), and extends his approach in a way that accommodates backtracking inferences and that acknowledges the difference between counterfactual interventions and counterfactual observations. We present six new experiments and analyze data from four experiments carried out by Rips (2010), and the results suggest that the new model provides an accurate account of both mean human judgments and the judgments of individuals.
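The intervention/observation contrast that the model builds on (following Pearl) can be illustrated with a toy deterministic structural-equation example. The sketch below is ours and purely hypothetical; it is not the probabilistic model developed in the paper, and the variables are invented for illustration.

```python
# Hypothetical toy example contrasting a counterfactual intervention with a
# backtracking counterfactual observation. Not the paper's model.

def model(rain, sprinkler=None):
    """Structural equations: the sprinkler runs exactly when it does not rain;
    the grass is wet if it rains or the sprinkler runs."""
    if sprinkler is None:
        sprinkler = not rain          # sprinkler's structural equation
    wet = rain or sprinkler           # wet-grass structural equation
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

# Actual world: no rain, so the sprinkler ran and the grass is wet.
actual = model(rain=False)

# Counterfactual INTERVENTION ("had the sprinkler been switched off"): sever the
# sprinkler's equation and hold the rest of the world fixed -> the grass is dry.
intervened = model(rain=False, sprinkler=False)

# Backtracking counterfactual OBSERVATION ("had the sprinkler not been running
# of its own accord"): reason back through the sprinkler's equation, which would
# have required rain -> the grass is still wet, but for a different reason.
backtracked = model(rain=True)

print("actual:     ", actual)
print("intervened: ", intervened)
print("backtracked:", backtracked)
```

Run as written, the intervened world has dry grass while the backtracked world stays wet, which is the kind of divergence between interventionist and backtracking counterfactuals that the model is designed to handle probabilistically.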
The Swiss psychologist Jean Piaget contends that children below the age of 12 see no necessity for the logical law of non-contradiction. I argue this view is problematic. First of all, Piaget's dialogues with children which are considered supportive of this position are not clearly so. Secondly, Piaget underestimates the necessary nature of following the logical law of non-contradiction in everyday discourse. The mere possibility of saying something significant and informative at all presupposes that the law of non-contradiction is enforced.
Common mental health disorders are rising globally, creating a strain on public healthcare systems. This has led to a renewed interest in the role that digital technologies may have for improving mental health outcomes. One result of this interest is the development and use of artificial intelligence for assessing, diagnosing, and treating mental health issues, which we refer to as ‘digital psychiatry’. This article focuses on the increasing use of digital psychiatry outside of clinical settings, in the following sectors: education, employment, financial services, social media, and the digital well-being industry. We analyse the ethical risks of deploying digital psychiatry in these sectors, emphasising key problems and opportunities for public health, and offer recommendations for protecting and promoting public health and well-being in information societies.
Does our life have value for us after we die? Despite the importance of such a question, many would find it absurd, even incoherent. Once we are dead, the thought goes, we are no longer around to have any wellbeing at all. However, in this paper I argue that this common thought is mistaken. In order to make sense of some of our most central normative thoughts and practices, we must hold that a person can have wellbeing after they die. I provide two arguments for this claim on the basis of postmortem harms and benefits as well as the lasting significance of death. I suggest two ways of underwriting posthumous wellbeing.
Recognizing that truth is socially constructed or that knowledge and power are related is hardly a novelty in the social sciences. In the twenty-first century, however, there appears to be a renewed concern regarding people’s relationship with the truth and the propensity for certain actors to undermine it. Organizations are highly implicated in this, given their central roles in knowledge management and production and their attempts to learn, although the entanglement of these epistemological issues with business ethics has not been engaged as explicitly as it might be. Drawing on work from a virtue epistemology perspective, this paper outlines the idea of a set of epistemic vices permeating organizations, along with examples of unethical epistemic conduct by organizational actors. While existing organizational research has examined various epistemic virtues that make people and organizations effective and responsible epistemic agents, much less is known about the epistemic vices that make them ineffective and irresponsible ones. Accordingly, this paper introduces vice epistemology, a nascent but growing subfield of virtue epistemology which, to the best of our knowledge, has yet to be explicitly developed in terms of business ethics. The paper concludes by outlining a business ethics research agenda on epistemic vice, with implications for responding to epistemic vices and their illegitimacy in practice.
The advent of contemporary evolutionary theory ushered in the eventual decline of Aristotelian Essentialism (Æ) – for it is widely assumed that essence does not, and cannot, have any proper place in the age of evolution. This paper argues that this assumption is a mistake: if Æ can be suitably evolved, it need not face extinction. In it, I claim that if that theory’s fundamental ontology consists of dispositional properties, and if its characteristic metaphysical machinery is interpreted within the framework of contemporary evolutionary developmental biology, an evolved essentialism is available. The reformulated theory of Æ offered in this paper not only fails to fall prey to the typical collection of criticisms, but is also independently both theoretically and empirically plausible. The paper contends that, properly understood, essence belongs in the age of evolution.
This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they believe are the most important open questions and ethical issues for the multi-disciplinary study of digital well-being. We also introduce and discuss several themes that we believe will be fundamental to the ongoing study of digital well-being: digital gratitude, automated interventions, and sustainable co-well-being.
According to commonsense psychology, one is conscious of everything that one pays attention to, but one does not pay attention to all the things that one is conscious of. Recent lines of research purport to show that commonsense is mistaken on both of these points: Mack and Rock (1998) tell us that attention is necessary for consciousness, while Kentridge and Heywood (2001) claim that consciousness is not necessary for attention. If these lines of research were successful they would have important implications regarding the prospects of using attention research to inform us about consciousness. The present essay shows that these lines of research are not successful, and that the commonsense picture of the relationship between attention and consciousness can be retained.