Selection against embryos that are predisposed to develop disabilities is one of the less controversial uses of embryo selection technologies. Many bio-conservatives argue that while the use of ESTs to select for non-disease-related traits, such as height and eye-colour, should be banned, their use to avoid disease and disability should be permitted. Nevertheless, there remains significant opposition, particularly from the disability rights movement, to the use of ESTs to select against disability. In this article we examine whether and why the state could be justified in restricting the use of ESTs to select against disability. We first outline the challenge posed by proponents of ‘liberal eugenics’. Liberal eugenicists challenge those who defend restrictions on the use of ESTs to show why the use of these technologies would create a harm of the type and magnitude required to justify coercive measures. We argue that this challenge could be met by adverting to the risk of harms to future persons that would result from a loss of certain forms of cognitive diversity. We suggest that this risk establishes a pro tanto case for restricting selection against some disabilities, including dyslexia and Asperger's syndrome.
Do famous athletes have special obligations to act virtuously? A number of philosophers have investigated this question by examining whether famous athletes are subject to special role model obligations (Wellman 2003; Feezel 2005; Spurgin 2012). In this paper we will take a different approach and give a positive response to this question by arguing for the position that sport and gaming celebrities are ‘ambassadors of the game’: moral agents whose vocations as rule-followers have unique implications for their non-lusory lives. According to this idea, the actions of a game’s players and other stakeholders—especially the actions of its stars—directly affect the value of the game itself, a fact which generates additional moral reasons to behave in a virtuous manner. We will begin by explaining the three main positions one may take with respect to the question: moral exceptionalism, moral generalism, and moral exemplarism. We will argue that no convincing case for moral exemplarism has thus far been made, which gives us reason to look for new ways to defend this position. We then provide our own ‘ambassadors of the game’ account and argue that it gives us good reason to think that sport and game celebrities are subject to special obligations to act virtuously.
Reproductive genetic technologies allow parents to decide whether their future children will have or lack certain genetic predispositions. A popular model that has been proposed for regulating access to RGTs is the ‘genetic supermarket’. In the genetic supermarket, parents are free to make decisions about which genes to select for their children with little state interference. One possible consequence of the genetic supermarket is that collective action problems will arise: if rational individuals use the genetic supermarket in isolation from one another, this may have a negative effect on society as a whole, including future generations. In this article we argue that RGTs targeting height, innate immunity, and certain cognitive traits could lead to collective action problems. We then discuss whether this risk could in principle justify state intervention in the genetic supermarket. We argue that there is a plausible prima facie case for the view that such state intervention would be justified and respond to a number of arguments that might be adduced against that view.
It is fair to say that Georg Wilhelm Friedrich Hegel's philosophy of mathematics and his interpretation of the calculus in particular have not been popular topics of conversation since the early part of the twentieth century. Changes in mathematics in the late nineteenth century, the new set-theoretical approach to understanding its foundations, and the rise of a sympathetic philosophical logic have all conspired to give prior philosophies of mathematics (including Hegel's) the untimely appearance of naïveté. The common view was expressed by Bertrand Russell: “The great [mathematicians] of the seventeenth and eighteenth centuries were so much impressed by the results of their new methods that they did not trouble to examine their foundations. Although their arguments were fallacious, a special Providence saw to it that their conclusions were more or less true. Hegel fastened upon the obscurities in the foundations of mathematics, turned them into dialectical contradictions, and resolved them by nonsensical syntheses. . . . The resulting puzzles [of mathematics] were all cleared up during the nineteenth century, not by heroic philosophical doctrines such as that of Kant or that of Hegel, but by patient attention to detail” (1956, 368–69).
Mysticism and the sciences have traditionally been theoretical enemies, and the closer that philosophy allies itself with the sciences, the greater the philosophical tendency has been to attack mysticism as a possible avenue towards the acquisition of knowledge and/or understanding. Science and modern philosophy generally aim for epistemic disclosure of their contents, and, conversely, mysticism either aims at the restriction of esoteric knowledge, or claims such knowledge to be non-transferable. Thus the mystic is typically seen by analytic philosophers as a variety of 'private language' speaker, although the plausibility of such a position is seemingly foreclosed by Wittgenstein's work in the Philosophical Investigations. Yorke re-examines Wittgenstein's conclusion on the matter of private language, and argues that so-called 'ineffable' mystical experiences, far from being a 'beetle in a box', can play a viable role in our public language-games, via renewed efforts at articulation.
Much of the philosophical literature on causation has focused on the concept of actual causation, sometimes called token causation. In particular, it is this notion of actual causation that many philosophical theories of causation have attempted to capture. In this paper, we address the question: what purpose does this concept serve? As we shall see in the next section, one does not need this concept for purposes of prediction or rational deliberation. What then could the purpose be? We will argue that one can gain an important clue here by looking at the ways in which causal judgments are shaped by people's understanding of norms.
Summary: Edward Lanphier and colleagues contend that human germline editing is an unethical technology because it could have unpredictable effects on future generations. In our view, such misgivings do not justify their proposed moratorium.
Ectogestation involves the gestation of a fetus in an ex utero environment. The possibility of this technology raises a significant question for the abortion debate: Does a woman’s right to end her pregnancy entail that she has a right to the death of the fetus when ectogestation is possible? Some have argued that it does not (Mathison & Davis). Others claim that, while a woman alone does not possess an individual right to the death of the fetus, the genetic parents have a collective right to its death (Räsänen). In this paper, I argue that the possibility of ectogestation will radically transform the problem of abortion. The argument that I defend purports to show that, even if the fetus is not a person, there is no right to the death of a fetus that could be safely removed from a human womb and gestated in an artificial womb, because there are competent people who are willing to care for and raise the fetus as it grows into a person. Thus, given the possibility of ectogestation, the moral status of the fetus plays no substantial role in determining whether there is a right to its death.
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s defense of Indifference Principles is unsuccessful. Second, it contends that White’s arguments against permissive views do not succeed.
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
Interactions between an intelligent software agent and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction, as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
A central tension shaping metaethical inquiry is that normativity appears to be subjective yet real, where it’s difficult to reconcile these aspects. On the one hand, normativity pertains to our actions and attitudes. On the other, normativity appears to be real in a way that precludes it from being a mere figment of those actions and attitudes. In this paper, I argue that normativity is indeed both subjective and real. I do so by way of treating it as a special sort of artifact, where artifacts are mind-dependent yet nevertheless can carve at the joints of reality. In particular, I argue that the properties of being a reason and being valuable for are grounded in attitudes yet are still absolutely structural.
According to commonsense psychology, one is conscious of everything that one pays attention to, but one does not pay attention to all the things that one is conscious of. Recent lines of research purport to show that commonsense is mistaken on both of these points: Mack and Rock (1998) tell us that attention is necessary for consciousness, while Kentridge and Heywood (2001) claim that consciousness is not necessary for attention. If these lines of research were successful they would have important implications regarding the prospects of using attention research to inform us about consciousness. The present essay shows that these lines of research are not successful, and that the commonsense picture of the relationship between attention and consciousness can be retained.
This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’ treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they’re not as bad as those of Elga’s account, and no worse than those of Lewis’ account.
The paper argues that an account of understanding should take the form of a Carnapian explication and acknowledge that understanding comes in degrees. An explication of objectual understanding is defended, which helps to make sense of the cognitive achievements and goals of science. The explication combines a necessary condition with three evaluative dimensions: An epistemic agent understands a subject matter by means of a theory only if the agent commits herself sufficiently to the theory of the subject matter, and to the degree that the agent grasps the theory, the theory answers to the facts and the agent’s commitment to the theory is justified. The threshold for outright attributions of understanding is determined contextually. The explication has descriptive as well as normative facets and allows for the possibility of understanding by means of non-explanatory theories.
This chapter surveys hybrid theories of well-being. It also discusses some criticisms, and suggests some new directions that philosophical discussion of hybrid theories might take.
Should economics study the psychological basis of agents' choice behaviour? I show how this question is multifaceted and profoundly ambiguous. There is no sharp distinction between "mentalist'' answers to this question and rival "behavioural'' answers. What's more, clarifying this point raises problems for mentalists of the "functionalist'' variety (Dietrich and List, 2016). Firstly, I show that functionalist hypotheses collapse into hypotheses about input–output dispositions unless one places some unwelcome restrictions on what counts as a cognitive variable. Secondly, functionalist hypotheses make some risky commitments about the plasticity of agents' choice dispositions.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts that are used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key ethical principles for AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.
Does our life have value for us after we die? Despite the importance of such a question, many would find it absurd, even incoherent. Once we are dead, the thought goes, we are no longer around to have any wellbeing at all. However, in this paper I argue that this common thought is mistaken. In order to make sense of some of our most central normative thoughts and practices, we must hold that a person can have wellbeing after they die. I provide two arguments for this claim on the basis of postmortem harms and benefits as well as the lasting significance of death. I suggest two ways of underwriting posthumous wellbeing.
Deference principles are principles that describe when, and to what extent, it’s rational to defer to others. Recently, some authors have used such principles to argue for Evidential Uniqueness, the claim that for every batch of evidence, there’s a unique doxastic state that it’s permissible for subjects with that total evidence to have. This paper has two aims. The first aim is to assess these deference-based arguments for Evidential Uniqueness. I’ll show that these arguments only work given a particular kind of deference principle, and I’ll argue that there are reasons to reject these kinds of principles. The second aim of this paper is to spell out what a plausible generalized deference principle looks like. I’ll start by offering a principled rationale for taking deference to constrain rational belief. Then I’ll flesh out the kind of deference principle suggested by this rationale. Finally, I’ll show that this principle is both more plausible and more general than the principles used in the deference-based arguments for Evidential Uniqueness.
Recognizing that truth is socially constructed or that knowledge and power are related is hardly a novelty in the social sciences. In the twenty-first century, however, there appears to be a renewed concern regarding people’s relationship with the truth and the propensity for certain actors to undermine it. Organizations are highly implicated in this, given their central roles in knowledge management and production and their attempts to learn, although the entanglement of these epistemological issues with business ethics has not been engaged as explicitly as it might be. Drawing on work from a virtue epistemology perspective, this paper outlines the idea of a set of epistemic vices permeating organizations, along with examples of unethical epistemic conduct by organizational actors. While existing organizational research has examined various epistemic virtues that make people and organizations effective and responsible epistemic agents, much less is known about the epistemic vices that make them ineffective and irresponsible ones. Accordingly, this paper introduces vice epistemology, a nascent but growing subfield of virtue epistemology which, to the best of our knowledge, has yet to be explicitly developed in terms of business ethics. The paper concludes by outlining a business ethics research agenda on epistemic vice, with implications for responding to epistemic vices and their illegitimacy in practice.
The advent of contemporary evolutionary theory ushered in the eventual decline of Aristotelian Essentialism (Æ) – for it is widely assumed that essence does not, and cannot have any proper place in the age of evolution. This paper argues that this assumption is a mistake: if Æ can be suitably evolved, it need not face extinction. In it, I claim that if that theory’s fundamental ontology consists of dispositional properties, and if its characteristic metaphysical machinery is interpreted within the framework of contemporary evolutionary developmental biology, an evolved essentialism is available. The reformulated theory of Æ offered in this paper not only fails to fall prey to the typical collection of criticisms, but is also independently both theoretically and empirically plausible. The paper contends that, properly understood, essence belongs in the age of evolution.
Although they are continually compositionally reconstituted and reconfigured, organisms nonetheless persist as ontologically unified beings over time – but in virtue of what? A common answer is: in virtue of their continued possession of the capacity for morphological invariance which persists through, and in spite of, their mereological alteration. While we acknowledge that organisms’ capacity for the “stability of form” – homeostasis – is an important aspect of their diachronic unity, we argue that this capacity is derived from, and grounded in, a more primitive one – namely, the homeodynamic capacity for the “specified variation of form”. In introducing a novel type of causal power – a ‘structural power’ – we claim that it is the persistence of their dynamic potential to produce a specified series of structurally adaptive morphologies which grounds organisms’ privileged status as metaphysically “one over many” over time.
Common mental health disorders are rising globally, creating a strain on public healthcare systems. This has led to a renewed interest in the role that digital technologies may have for improving mental health outcomes. One result of this interest is the development and use of artificial intelligence for assessing, diagnosing, and treating mental health issues, which we refer to as ‘digital psychiatry’. This article focuses on the increasing use of digital psychiatry outside of clinical settings, in the following sectors: education, employment, financial services, social media, and the digital well-being industry. We analyse the ethical risks of deploying digital psychiatry in these sectors, emphasising key problems and opportunities for public health, and offer recommendations for protecting and promoting public health and well-being in information societies.
Though the realm of biology has long been under the philosophical rule of the mechanistic magisterium, recent years have seen a surprisingly steady rise in the usurping prowess of process ontology. According to its proponents, theoretical advances in the contemporary science of evo-devo have afforded that ontology a particularly powerful claim to the throne: in that increasingly empirically confirmed discipline, emergently autonomous, higher-order entities are the reigning explanantia. If we are to accept the election of evo-devo as our best conceptualisation of the biological realm with metaphysical rigour, must we depose our mechanistic ontology for failing to properly “carve at the joints” of organisms? In this paper, I challenge the legitimacy of that claim: not only can the theoretical benefits offered by a process ontology be had without it, they cannot be sufficiently grounded without the metaphysical underpinning of the very mechanisms which processes purport to replace. The biological realm, I argue, remains one best understood as under the governance of mechanistic principles.
Nearly all defences of the agent-causal theory of free will portray the theory as a distinctively libertarian one — a theory that only libertarians have reason to accept. According to what I call ‘the standard argument for the agent-causal theory of free will’, the reason to embrace agent-causal libertarianism is that libertarians can solve the problem of enhanced control only if they furnish agents with the agent-causal power. In this way it is assumed that there is only reason to accept the agent-causal theory if there is reason to accept libertarianism. I aim to refute this claim. I will argue that the reasons we have for endorsing the agent-causal theory of free will are nonpartisan. The real reason for going agent-causal has nothing to do with determinism or indeterminism, but rather with avoiding reductionism about agency and the self. As we will see, if there is reason for libertarians to accept the agent-causal theory, there is just as much reason for compatibilists to accept it. It is in this sense that I contend that if anyone should be an agent-causalist, then everyone should be an agent-causalist.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
A community, for ecologists, is a unit for discussing collections of organisms. It refers to collections of populations, which consist (by definition) of individuals of a single species. This is straightforward. But communities are unusual kinds of objects, if they are objects at all. They are collections consisting of other diverse, scattered, partly-autonomous, dynamic entities (that is, animals, plants, and other organisms). They often lack obvious boundaries or stable memberships, as their constituent populations not only change but also move in and out of areas, and in and out of relationships with other populations. Familiar objects have identifiable boundaries, for example, and if communities do not, maybe they are not objects. Maybe they do not exist at all. The question this possibility suggests, of what criteria there might be for identifying communities, and for determining whether such communities exist at all, has long been discussed by ecologists. This essay addresses this question as it has recently been taken up by philosophers of science, by examining answers to it which appeared a century ago and which have framed the continuing discussion.
This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they believe are the most important open questions and ethical issues for the multi-disciplinary study of digital well-being. We also introduce and discuss several themes that we believe will be fundamental to the ongoing study of digital well-being: digital gratitude, automated interventions, and sustainable co-well-being.
When people want to identify the causes of an event, assign credit or blame, or learn from their mistakes, they often reflect on how things could have gone differently. In this kind of reasoning, one considers a counterfactual world in which some events are different from their real-world counterparts and considers what else would have changed. Researchers have recently proposed several probabilistic models that aim to capture how people do (or should) reason about counterfactuals. We present a new model and show that it accounts better for human inferences than several alternative models. Our model builds on the work of Pearl (2000), and extends his approach in a way that accommodates backtracking inferences and that acknowledges the difference between counterfactual interventions and counterfactual observations. We present six new experiments and analyze data from four experiments carried out by Rips (2010), and the results suggest that the new model provides an accurate account of both mean human judgments and the judgments of individuals.
This paper examines two mistakes regarding David Lewis’ Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
I examine some recent claims put forward by L. A. Paul, Barry Dainton and Simon Prosser, to the effect that perceptual experiences of movement and change involve an (apparent) experience of ‘passage’, in the sense at issue in debates about the metaphysics of time. Paul, Dainton and Prosser all argue that this supposed feature of perceptual experience – call it a phenomenology of passage – is illusory, thereby defending the view that there is no such thing as passage, conceived of as a feature of mind-independent reality. I suggest that in fact there is no such phenomenology of passage in the first place. There is, however, a specific structural aspect of the phenomenology of perceptual experiences of movement and change that can explain how one might mistakenly come to the belief that such experiences do involve a phenomenology of passage.
In this essay, I argue that a proper understanding of the historicity of love requires an appreciation of the irreplaceability of the beloved. I do this through a consideration of ideas that were first put forward by Robert Kraut in “Love De Re” (1986). I also evaluate Amelie Rorty's criticisms of Kraut's thesis in “The Historicity of Psychological Attitudes: Love is Not Love Which Alters Not When It Alteration Finds” (1986). I argue that Rorty fundamentally misunderstands Kraut's Kripkean analogy, and I go on to criticize her claim that concern over the proper object of love is best understood as a concern over constancy. This leads me to an elaboration of the distinct senses in which love can be seen as historical. I end with a further defense of the irreplaceability of the beloved and a discussion of the relevance of recent debates over the importance of personal identity for an adequate account of the historical dimension of love.
Some critics of invasion biology have argued that the invasion of ecosystems by nonindigenous species can create more valuable ecosystems. They consider invaded communities more valuable because they potentially produce more ecosystem services. To establish that the introduction of nonindigenous species creates more valuable ecosystems, they contend that value is provided by ecosystem services. These services are derived from ecosystem productivity, the production and cycling of resources. Ecosystem productivity is a result of biodiversity, which is understood as local species richness. Invasive species increase local species richness and, therefore, increase the conservation value of local ecosystems. These views are disseminating to the public via a series of popular science books. Conservationists must respond to these views, and I outline a method for rejecting such arguments against controlling invasive species. Ecological systems are valuable for more than local productivity, and biodiversity is not accurately described by a local species count.
Many questions about wellbeing involve metaphysical dependence. Does wellbeing depend on minds? Is wellbeing determined by distinct sorts of things? Is it determined differently for different subjects? However, we should distinguish two axes of dependence. First, there are the grounds that generate value. Second, there are the connections between the grounds and value which make it so that those grounds generate that value. Given these distinct axes of dependence, there are distinct dimensions to questions about the dependence of wellbeing. In this paper, I offer a view of wellbeing that gives different answers with respect to these different dimensions. The view is subjectivist about connections but objectivist about grounds, pluralist about grounds but monist about connections, and invariabilist about connections but variabilist about grounds. Thus, the view offers a simple account that captures the complexity of wellbeing.
What is philosophy of science? Numerous manuals, anthologies or essays provide carefully reconstructed vantage points on the discipline that have been gained through expert and piecemeal historical analyses. In this paper, we address the question from a complementary perspective: we target the content of one major journal of the field—Philosophy of Science—and apply unsupervised text-mining methods to its complete corpus, from its start in 1934 until 2015. By running topic-modeling algorithms over the full-text corpus, we identified 126 key research topics that span 82 years. We also tracked their evolution and fluctuating significance over time in the journal's articles. Our results concur with and document known and lesser-known episodes of the philosophy of science, including the rise and fall of logic- and language-related topics, the relative stability of metaphysical and ontological questions (space and time, causation, natural kinds, realism), the significance of epistemological issues about the nature of scientific knowledge, as well as the rise of a recent philosophy of biology and other trends. These analyses exemplify how computational text-mining methods can be used to provide an empirical, large-scale, and data-driven perspective on the history of philosophy of science that is complementary to other current historical approaches.
This paper explores the level of obligation called for by Milton Friedman’s classic essay “The Social Responsibility of Business is to Increase Profits.” Several scholars have argued that Friedman asserts that businesses have no or minimal social duties beyond compliance with the law. This paper argues that this reading of Friedman does not give adequate weight to some claims that he makes and to their logical extensions. Throughout his article, Friedman emphasizes the values of freedom, respect for law, and duty. The principle that a business professional should not infringe upon the liberty of other members of society can be used by business ethicists to ground a vigorous line of ethical analysis. Any practice that has a negative externality requiring another party to take a significant loss without consent or compensation can be seen as unethical. On Friedman’s framework, ethics can be seen as arising from the nature of business practice itself. Business involves an ethics in which we consider, work with, and respect strangers who are outside of traditional in-groups.
Ecological communities are seldom, if ever, biological individuals. They lack causal boundaries, as the populations that constitute communities are not congruent and rarely have persistent functional roles regulating the communities’ higher-level properties. Instead, we should represent ecological communities indexically, by identifying ecological communities via the network of weak causal interactions between populations that unfurl from a starting set of populations. This precisification of ecological communities helps identify how community properties remain invariant, and why they have robust characteristics. This respects the diversity and aggregational nature of these complex systems while still vindicating them as units worthy of investigation.
What would the Merleau-Ponty of Phenomenology of Perception have thought of the use of his phenomenology in the cognitive sciences? This question raises the issue of Merleau-Ponty’s conception of the relationship between the sciences and philosophy, and of what he took the philosophical significance of his phenomenology to be. In this article I suggest an answer to this question through a discussion of certain claims made in connection to the “post-cognitivist” approach to cognitive science by Hubert Dreyfus, Shaun Gallagher and Francisco Varela, Evan Thompson and Eleanor Rosch. I suggest that these claims are indicative of an appropriation of Merleau-Ponty’s thought that he would have welcomed as innovative science. Despite this, I argue that he would have viewed this use of his work as potentially occluding the full philosophical significance that he believed his phenomenological investigations to contain.
I distinguish several doctrines that economic methodologists have found attractive, all of which have a positivist flavour. One of these is the doctrine that preference assignments in economics are just shorthand descriptions of agents' choice behaviour. Although most of these doctrines are problematic, the latter doctrine about preference assignments is a respectable one, I argue. It doesn't entail any of the problematic doctrines, and indeed it is warranted independently of them.
This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they believe are the most important open questions and ethical issues for the multi-disciplinary study of digital well-being. We also introduce and discuss several themes that we believe will be fundamental to the ongoing study of digital well-being: digital gratitude, automated interventions, and sustainable co-well-being.
Although contemporary metaphysics has recently undergone a neo-Aristotelian revival wherein dispositions, or capacities, are now commonplace in empirically grounded ontologies, being routinely utilised in theories of causality and modality, a central Aristotelian concept has yet to be given serious attention: the doctrine of hylomorphism. The reason for this is clear: while the Aristotelian ontological distinction between actuality and potentiality has proven to be a fruitful conceptual framework with which to model the operation of the natural world, the distinction between form and matter has yet to similarly earn its keep. In this chapter, I offer a first step toward showing that the hylomorphic framework is up to that task. To do so, I return to the birthplace of that doctrine: the biological realm. Utilising recent advances in developmental biology, I argue that the hylomorphic framework is an empirically adequate and conceptually rich explanatory schema with which to model the nature of organisms.
The specific aim of argumentation is not simply to make the addressee believe something (that would be mere rhetoric), but rather to guide the addressee in recognizing the acceptability (in particular, the truth) of the thesis and thereby to lead him to justified belief, to knowledge. Arguments guide recognition by judging, in their premises, sufficient acceptability conditions of the thesis to be fulfilled, thereby implicitly inviting the addressee to check these conditions. Arguments are valid if they can in principle guide recognition; that is, if the stated acceptability conditions are sufficient, if they are actually fulfilled (i.e. the premises are true), and if there is someone who has recognized the acceptability of the premises but not that of the thesis. A valid argument is adequate for rationally convincing a particular addressee if, among other things, that addressee has recognized the acceptability of the premises but not that of the thesis. The acceptability conditions judged to be fulfilled in valid arguments are concretizations of general epistemic principles for the specific thesis, e.g. the deductive epistemic principle 'A proposition is true if it is logically implied by true propositions,' or the genetic epistemic principle 'A proposition is true if it has been correctly verified.' A concretization of the deductive principle for a thesis p would be, for example: 'p is true (1) if q and r are true and (2) if q and r together logically imply p.' If both conditions are fulfilled, then 'q; r; therefore p' could be a valid deductive argument. The various types of argumentation differ according to which epistemic principle they are based on.
Argumentatively guided recognition of the acceptability of the thesis works as follows: the addressee uses the general epistemic principle that he (at least implicitly) knows as a checklist, on which, once the premises have been presented, he ticks off which acceptability condition of the epistemic principle is fulfilled by the truth of each premise. On the basis of this functional account, the "Praktische Argumentationstheorie" develops (for the first time) precise validity criteria for argumentation in general and for several specific types of argumentation, grounds them epistemologically, and applies them to complex examples of argumentation from philosophy, science, technology, and culture. The analysis of the epistemological foundations (the underlying epistemic principles) of interpretive and practical argumentation in particular is, moreover, of considerable significance well beyond argumentation theory: for the theory of interpretation, for action theory, and for practical philosophy.
Kevin Elliott and others separate two common arguments for the legitimacy of societal values in scientific reasoning as the gap and the error arguments. This article poses two questions: How are these two arguments related, and what can we learn from their interrelation? I contend that we can better understand the error argument as nested within the gap because the error is a limited case of the gap with narrower features. Furthermore, this nestedness provides philosophers with conceptual tools for analyzing more robustly how values pervade science.