In this article I examine two mathematical definitions of observational equivalence, one proposed by Charlotte Werndl and based on manifest isomorphism, and the other based on Ornstein and Weiss’s ε-congruence. I argue, for two related reasons, that neither can function as a purely mathematical definition of observational equivalence. First, each definition admits of counterexamples; second, overcoming these counterexamples will introduce non-mathematical premises about the systems in question. Accordingly, the prospects for a broadly applicable and purely mathematical definition of observational equivalence are unpromising. Despite this critique, I suggest that Werndl’s proposals are valuable because they clarify the distinction between provable and unprovable elements in arguments for observational equivalence.
When people want to identify the causes of an event, assign credit or blame, or learn from their mistakes, they often reflect on how things could have gone differently. In this kind of reasoning, one considers a counterfactual world in which some events are different from their real-world counterparts and considers what else would have changed. Researchers have recently proposed several probabilistic models that aim to capture how people do (or should) reason about counterfactuals. We present a new model and show that it accounts better for human inferences than several alternative models. Our model builds on the work of Pearl (2000), and extends his approach in a way that accommodates backtracking inferences and that acknowledges the difference between counterfactual interventions and counterfactual observations. We present six new experiments and analyze data from four experiments carried out by Rips (2010), and the results suggest that the new model provides an accurate account of both mean human judgments and the judgments of individuals.
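Pearl’s three-step recipe for evaluating counterfactuals (abduction, action, prediction), which the model above extends, can be illustrated with a minimal Python sketch. Everything below is an illustrative assumption rather than the authors’ model: the toy rain/sprinkler structure, the noise probabilities, and the function names are all hypothetical; the sketch only shows how a counterfactual intervention differs from mere observation.

```python
import random

# A minimal structural causal model: rain and the sprinkler jointly
# determine whether the grass is wet. Exogenous noise is sampled once
# per "world", so the actual world and its counterfactual share the
# same background conditions (the abduction step of Pearl's recipe).

def sample_noise():
    return {"u_rain": random.random(),
            "u_sprinkler": random.random(),
            "u_slip": random.random()}

def run_model(noise, intervene_sprinkler=None):
    rain = noise["u_rain"] < 0.3
    # Intervening severs the sprinkler's dependence on rain (the action
    # step); merely observing the sprinkler would instead mean
    # conditioning on its value while leaving this mechanism intact.
    if intervene_sprinkler is None:
        sprinkler = (not rain) and noise["u_sprinkler"] < 0.5
    else:
        sprinkler = intervene_sprinkler
    wet = rain or sprinkler or noise["u_slip"] < 0.05
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

# Counterfactual query: among worlds where the grass was actually wet,
# would it still have been wet had the sprinkler been forced off?
worlds = [sample_noise() for _ in range(10_000)]
relevant = [n for n in worlds if run_model(n)["wet"]]
cf_wet = sum(run_model(n, intervene_sprinkler=False)["wet"] for n in relevant)
print(f"P(wet_cf | wet_actual, do(sprinkler=off)) ~ {cf_wet / len(relevant):.2f}")
```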
Ectogestation involves the gestation of a fetus in an ex utero environment. The possibility of this technology raises a significant question for the abortion debate: Does a woman’s right to end her pregnancy entail that she has a right to the death of the fetus when ectogestation is possible? Some have argued that it does not (Mathison & Davis). Others claim that, while a woman alone does not possess an individual right to the death of the fetus, the genetic parents have a collective right to its death (Räsänen). In this paper, I argue that the possibility of ectogestation will radically transform the problem of abortion. The argument that I defend purports to show that, even if the fetus is not a person, there is no right to the death of a fetus that could be safely removed from a human womb and gestated in an artificial womb, because there are competent people who are willing to care for and raise the fetus as it grows into a person. Thus, given the possibility of ectogestation, the moral status of the fetus plays no substantial role in determining whether there is a right to its death.
We discuss the social-epistemic aspects of Catherine Elgin’s theory of reflective equilibrium and understanding and argue that it yields an argument for the view that a crucial social-epistemic function of epistemic authorities is to foster understanding in their communities. We explore the competences that enable epistemic authorities to fulfil this role and argue that among them is an epistemic virtue we call “epistemic empathy”.
It would be unkind but not inaccurate to say that most experimental philosophy is just psychology with worse methods and better theories. In Experimental Ethics: Towards an Empirical Moral Philosophy, Christoph Luetge, Hannes Rusch, and Matthias Uhl set out to make this comparison less invidious and more flattering. Their book has 16 chapters, organized into five sections and bookended by the editors’ own introduction and prospectus. Contributors hail from four countries (Germany, USA, Spain, and the United Kingdom) and five disciplines (philosophy, psychology, cognitive science, economics, and sociology). While the chapters are of mixed quality and originality, there are several fine contributions to the field. These especially include Stephan Wolf and Alexander Lenger’s sophisticated attempt to operationalize the Rawlsian notion of a veil of ignorance, Nina Strohminger et al.’s survey of the methods available to experimental ethicists for studying implicit morality, Fernando Aguiar et al.’s exploration of the possibility of operationalizing reflective equilibrium in the lab, and Nikil Mukerji’s careful defusing of three debunking arguments about the reliability of philosophical intuitions.
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s defense of Indifference Principles is unsuccessful. Second, it contends that White’s arguments against permissive views do not succeed.
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction, as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
Professor Christopher Stead was Ely Professor of Divinity from 1971 until his retirement in 1980 and one of the great contributors to the Oxford Patristic Conferences for many years. In this paper I reflect on his work in Patristics, and I attempt to understand how his interests diverged from the other major contributors in the same period, and how they were formed by his philosophical milieu and the spirit of the age. As a case study to illustrate and diagnose his approach, I shall focus on a debate between Stead and Rowan Williams about the significance of the word idios in Arius's theology (in the course of which I also make some suggestions of my own about the issue).
This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’ treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they’re not as bad as those of Elga’s account, and no worse than those of Lewis’ account.
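For readers unfamiliar with the case, the two rival verdicts discussed above can be stated in standard form; the calculation below is the textbook reasoning attributed to Elga and Lewis in the literature, not the third account this paper defends.

```latex
% Elga (thirder): on waking, the three centered possibilities are judged
% equally probable, so
P(H,\mathrm{Mon}) = P(T,\mathrm{Mon}) = P(T,\mathrm{Tue}) = \tfrac{1}{3}
\quad\Rightarrow\quad P(H) = \tfrac{1}{3}.
% Lewis (halfer): waking is no evidence, so the prior is retained,
P(H) = \tfrac{1}{2},
% which famously commits him to P(H \mid \text{it is Monday}) = \tfrac{2}{3}.
```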
Much of the philosophical literature on causation has focused on the concept of actual causation, sometimes called token causation. In particular, it is this notion of actual causation that many philosophical theories of causation have attempted to capture. In this paper, we address the question: what purpose does this concept serve? As we shall see in the next section, one does not need this concept for purposes of prediction or rational deliberation. What then could the purpose be? We will argue that one can gain an important clue here by looking at the ways in which causal judgments are shaped by people’s understanding of norms.
Political epistemology is the intersection of political philosophy and epistemology. This paper develops a political 'hinge' epistemology. Political hinge epistemology draws on the idea that all belief systems have fundamental presuppositions which play a role in the determination of reasons for belief and other attitudes. It uses this core idea to understand and tackle political epistemological challenges, like political disagreement, polarization, political testimony, political belief, ideology, and biases, among other possibilities. I respond to two challenges facing the development of a political hinge epistemology. The first concerns the nature and demarcation of political hinges, while the second concerns rational deliberation over political hinges. I then use political hinge epistemology to analyze ideology, dealing with the challenge of how an agent's ideology 'masks' or distorts their understanding of social reality, along with the challenge of how ideology critique can change the beliefs of agents who adhere to dominant ideologies, if agents only have their own or the competing ideology to rely on (see Haslanger 2017). I then explore how political hinge epistemology might be extended to further our understanding of political belief polarization.
The aim of the paper is to develop general criteria of argumentative validity and adequacy for probabilistic arguments on the basis of the epistemological approach to argumentation. In this approach, as in most other approaches to argumentation, probabilistic arguments have been neglected somewhat. Nonetheless, criteria for several special types of probabilistic arguments have been developed, in particular by Richard Feldman and Christoph Lumer. In the first part (sects. 2-5) the epistemological basis of probabilistic arguments is discussed. With regard to the philosophical interpretation of probabilities a new subjectivist, epistemic interpretation is proposed, which identifies probabilities with tendencies of evidence (sect. 2). After drawing the conclusions of this interpretation with respect to the syntactic features of the probability concept, e.g. one variable referring to the data base (sect. 3), the justification of basic probabilities (priors) by judgements of relative frequency (sect. 4) and the justification of derivative probabilities by means of the probability calculus are explained (sect. 5). The core of the paper is the definition of '(argumentatively) valid derivative probabilistic arguments', which provides exact conditions for epistemically good probabilistic arguments, together with conditions for the adequate use of such arguments for the aim of rationally convincing an addressee (sect. 6). Finally, some measures for improving the applicability of probabilistic reasoning are proposed (sect. 7).
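As a gloss on the syntactic point in sect. 3, the notation below is illustrative only (the paper's own symbolism may differ): a probability ascription carries an explicit variable for the data base, and derivative probabilities are obtained from basic ones via the probability calculus.

```latex
% A basic probability relativized to a data base d:
P(h \mid d)
% A derivative probability justified by the calculus, e.g. via the law
% of total probability:
P(h \mid d) = P(h \mid e, d)\,P(e \mid d) + P(h \mid \neg e, d)\,P(\neg e \mid d)
```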
A central tension shaping metaethical inquiry is that normativity appears to be subjective yet real, where it’s difficult to reconcile these aspects. On the one hand, normativity pertains to our actions and attitudes. On the other, normativity appears to be real in a way that precludes it from being a mere figment of those actions and attitudes. In this paper, I argue that normativity is indeed both subjective and real. I do so by way of treating it as a special sort of artifact, where artifacts are mind-dependent yet nevertheless can carve at the joints of reality. In particular, I argue that the properties of being a reason and being valuable for are grounded in attitudes yet are still absolutely structural.
This paper argues that we should replace the common classification of theories of welfare into the categories of hedonism, desire theories, and objective list theories. The tripartite classification is objectionable because it is unduly narrow and it is confusing: it excludes theories of welfare that are worthy of discussion, and it obscures important distinctions. In its place, the paper proposes two independent classifications corresponding to a distinction emphasised by Roger Crisp: a four-category classification of enumerative theories (about which items constitute welfare), and a four-category classification of explanatory theories (about why these items constitute welfare).
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
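The contrast at issue can be stated schematically; the paper distinguishes several variants of the ur-prior rule, so the second formula below is only the generic template, with the symbols chosen here for illustration.

```latex
% Standard Conditionalization: update the prior on the newly learned E,
Cr_{new}(\cdot) = Cr_{old}(\cdot \mid E)
% Generic ur-prior rule: tie current credence to a fixed ur-prior u
% and one's total evidence E_{tot},
Cr_{now}(\cdot) = u(\cdot \mid E_{tot})
```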
I am going to argue for a robust realism about magnitudes, as irreducible elements in our ontology. This realistic attitude, I will argue, gives a better metaphysics than the alternatives. It suggests some new options in the philosophy of science. It also provides the materials for a better account of the mind’s relation to the world, in particular its perceptual relations.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
Recognizing that truth is socially constructed or that knowledge and power are related is hardly a novelty in the social sciences. In the twenty-first century, however, there appears to be a renewed concern regarding people’s relationship with the truth and the propensity for certain actors to undermine it. Organizations are highly implicated in this, given their central roles in knowledge management and production and their attempts to learn, although the entanglement of these epistemological issues with business ethics has not been engaged as explicitly as it might be. Drawing on work from a virtue epistemology perspective, this paper outlines the idea of a set of epistemic vices permeating organizations, along with examples of unethical epistemic conduct by organizational actors. While existing organizational research has examined various epistemic virtues that make people and organizations effective and responsible epistemic agents, much less is known about the epistemic vices that make them ineffective and irresponsible ones. Accordingly, this paper introduces vice epistemology, a nascent but growing subfield of virtue epistemology which, to the best of our knowledge, has yet to be explicitly developed in terms of business ethics. The paper concludes by outlining a business ethics research agenda on epistemic vice, with implications for responding to epistemic vices and their illegitimacy in practice.
This paper explores what a Rule Consequentialist of Brad Hooker's sort can and should say about normative reasons for action. I claim that they can provide a theory of reasons, but that doing so requires distinguishing different roles of rules in the ideal code. Some rules in the ideal code specify reasons, while others perform different functions. The paper also discusses a choice that Rule Consequentialists face about how exactly to specify reasons. It ends by comparing the theory of reasons offered by Rule Consequentialism with the theory offered by Act Consequentialism, noting that Rule Consequentialism seems better able to explain moral constraints.
It may be true that we are epistemically in the dark about various things. Does this fact ground the truth of fallibilism? No. Still, even the most zealous skeptic will probably grant that it is not clear that one can be incognizant of their own occurrent phenomenal conscious mental goings-on. Even so, this does not entail infallibilism. Philosophers who argue that occurrent conscious experiences play an important epistemic role in the justification of introspective knowledge assume that there are occurrent beliefs. But this assumption is false. This paper argues that there are no occurrent beliefs. And it considers the epistemic consequences this result has for views that attempt to show that at least some phenomenal beliefs are infallible.
Contemporary recognition theory has developed powerful tools for understanding a variety of social problems through the lens of misrecognition. It has, however, paid somewhat less attention to how to conceive of appropriate responses to misrecognition, usually making the tacit assumption that the proper societal response is adequate or proper affirmative recognition. In this paper I argue that, although affirmative recognition is one potential response to misrecognition, it is not the only such response. In particular, I would like to make the case for derecognition in some cases: derecognition through the systematic deinstitutionalization or uncoupling of various reinforcing components of social institutions, components whose tight combination in one social institution has led to the misrecognition in the first place. I make the case through the example of recent United States debates over marriage, especially but not only with respect to gay marriage. I argue that the proper response to the misrecognition of sexual minorities embodied in exclusively heterosexual marriage codes is not affirmative recognition of lesbian and gay marriages, but rather the systematic derecognition of legal marriage as currently understood. I also argue that the systematic misrecognition of women that occurs under the contemporary institution of marriage would likewise best be addressed through legal uncoupling of heterogeneous social components embodied in the contemporary social institution of marriage.
A community, for ecologists, is a unit for discussing collections of organisms. It refers to collections of populations, which consist (by definition) of individuals of a single species. This is straightforward. But communities are unusual kinds of objects, if they are objects at all. They are collections consisting of other diverse, scattered, partly-autonomous, dynamic entities (that is, animals, plants, and other organisms). They often lack obvious boundaries or stable memberships, as their constituent populations not only change but also move in and out of areas, and in and out of relationships with other populations. Familiar objects have identifiable boundaries, for example, and if communities do not, maybe they are not objects. Maybe they do not exist at all. The question this possibility suggests, of what criteria there might be for identifying communities, and for determining whether such communities exist at all, has long been discussed by ecologists. This essay addresses this question as it has recently been taken up by philosophers of science, by examining answers to it which appeared a century ago and which have framed the continuing discussion.
This paper examines two mistakes regarding David Lewis’ Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
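For reference, Lewis's Principal Principle in its standard schematic form (the abstract presupposes it rather than stating it):

```latex
% C is a reasonable initial credence function, ch_t(A) = x the proposition
% that the time-t chance of A is x, and E any proposition admissible at t:
C\big(A \mid \langle ch_t(A) = x \rangle \wedge E\big) = x
```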
What is philosophy of science? Numerous manuals, anthologies or essays provide carefully reconstructed vantage points on the discipline that have been gained through expert and piecemeal historical analyses. In this paper, we address the question from a complementary perspective: we target the content of one major journal of the field—Philosophy of Science—and apply unsupervised text-mining methods to its complete corpus, from its start in 1934 until 2015. By running topic-modeling algorithms over the full-text corpus, we identified 126 key research topics that span across 82 years. We also tracked their evolution and fluctuating significance over time in the journal articles. Our results concur with and document known and lesser-known episodes of the philosophy of science, including the rise and fall of logic and language-related topics, the relative stability of a metaphysical and ontological questioning (space and time, causation, natural kinds, realism), the significance of epistemological issues about the nature of scientific knowledge as well as the rise of a recent philosophy of biology and other trends. These analyses exemplify how computational text-mining methods can be used to provide an empirical large-scale and data-driven perspective on the history of philosophy of science that is complementary to other current historical approaches.
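A minimal sketch of the kind of topic-modeling pipeline described above, using scikit-learn's LDA. The corpus loading, the preprocessing choices, and the exact algorithm the authors used are assumptions for illustration; only the target of 126 topics comes from the abstract, and load_corpus is a hypothetical helper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = load_corpus()  # hypothetical helper returning a list of article full texts

# Build a document-term matrix, dropping very rare and very common words.
vectorizer = CountVectorizer(stop_words="english", min_df=5, max_df=0.5)
doc_term = vectorizer.fit_transform(documents)

# Fit LDA with the number of topics reported in the abstract.
lda = LatentDirichletAllocation(n_components=126, random_state=0)
doc_topic = lda.fit_transform(doc_term)  # rows: articles, cols: topic weights

# Top words characterizing each inferred topic (first five topics shown):
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_[:5]):
    top = vocab[weights.argsort()[-8:][::-1]]
    print(f"topic {k}: {' '.join(top)}")

# Tracking a topic's significance over time would then aggregate the
# doc_topic rows by each article's publication year.
```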
Deference principles are principles that describe when, and to what extent, it’s rational to defer to others. Recently, some authors have used such principles to argue for Evidential Uniqueness, the claim that for every batch of evidence, there’s a unique doxastic state that it’s permissible for subjects with that total evidence to have. This paper has two aims. The first aim is to assess these deference-based arguments for Evidential Uniqueness. I’ll show that these arguments only work given a particular kind of deference principle, and I’ll argue that there are reasons to reject these kinds of principles. The second aim of this paper is to spell out what a plausible generalized deference principle looks like. I’ll start by offering a principled rationale for taking deference to constrain rational belief. Then I’ll flesh out the kind of deference principle suggested by this rationale. Finally, I’ll show that this principle is both more plausible and more general than the principles used in the deference-based arguments for Evidential Uniqueness.
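The familiar template behind such principles can be written in one line; the paper's own generalized principle is developed in the text, so the schema below is just the standard starting point, with g an arbitrary function (an expert, one's future self, the chances) that one defers to.

```latex
% Defer to g whenever one learns the value it assigns:
Cr(A \mid g(A) = x) = x
```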
Blaming (construed broadly to include both blaming-attitudes and blaming-actions) is a puzzling phenomenon. Even when we grant that someone is blameworthy, we can still sensibly wonder whether we ought to blame him. We sometimes choose to forgive and show mercy, even when it is not asked for. We are naturally led to wonder why we shouldn’t always do this. Wouldn’t it be better to wholly reject the punitive practices of blame, especially in light of their often undesirable effects, and embrace an ethic of unrelenting forgiveness and mercy? In this paper I seek to address these questions by offering an account of blame that provides a rationale for thinking that to wholly forswear blaming blameworthy agents would be deeply mistaken. This is because, as I will argue, blaming is a way of valuing; it is “a mode of valuation.” I will argue that among the minimal standards of respect generated by valuable objects, notably persons, is the requirement to redress disvalue with blame. It is not just that blame is something additional we are required to do in properly valuing; rather, blame is part of what it is to properly value. Blaming, given the existence of blameworthy agents, is a mode of valuation required by the standards of minimal respect. To forswear blame would be to fail to value what we ought to value.
We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts that are used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key ethical principles for AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.
Though the realm of biology has long been under the philosophical rule of the mechanistic magisterium, recent years have seen a surprisingly steady rise in the usurping prowess of process ontology. According to its proponents, theoretical advances in the contemporary science of evo-devo have afforded that ontology a particularly powerful claim to the throne: in that increasingly empirically confirmed discipline, emergently autonomous, higher-order entities are the reigning explanantia. If we are to accept the election of evo-devo as our best conceptualisation of the biological realm with metaphysical rigour, must we depose our mechanistic ontology for failing to properly “carve at the joints” of organisms? In this paper, I challenge the legitimacy of that claim: not only can the theoretical benefits offered by a process ontology be had without it, they cannot be sufficiently grounded without the metaphysical underpinning of the very mechanisms which processes purport to replace. The biological realm, I argue, remains one best understood as under the governance of mechanistic principles.
This chapter surveys hybrid theories of well-being. It also discusses some criticisms, and suggests some new directions that philosophical discussion of hybrid theories might take.
The advent of contemporary evolutionary theory ushered in the eventual decline of Aristotelian Essentialism (Æ) – for it is widely assumed that essence does not, and cannot, have any proper place in the age of evolution. This paper argues that this assumption is a mistake: if Æ can be suitably evolved, it need not face extinction. In it, I claim that if that theory’s fundamental ontology consists of dispositional properties, and if its characteristic metaphysical machinery is interpreted within the framework of contemporary evolutionary developmental biology, an evolved essentialism is available. The reformulated theory of Æ offered in this paper not only fails to fall prey to the typical collection of criticisms, but is also independently both theoretically and empirically plausible. The paper contends that, properly understood, essence belongs in the age of evolution.
This paper explores the level of obligation called for by Milton Friedman’s classic essay “The Social Responsibility of Business is to Increase Profits.” Several scholars have argued that Friedman asserts that businesses have no or minimal social duties beyond compliance with the law. This paper argues that this reading of Friedman does not give adequate weight to some claims that he makes and to their logical extensions. Throughout his article, Friedman emphasizes the values of freedom, respect for law, and duty. The principle that a business professional should not infringe upon the liberty of other members of society can be used by business ethicists to ground a vigorous line of ethical analysis. Any practice which has a negative externality that requires another party to take a significant loss without consent or compensation can be seen as unethical. With Friedman’s framework, we can see how ethics arises from the nature of business practice itself. Business involves an ethics in which we consider, work with, and respect strangers who are outside of traditional in-groups.
Common mental health disorders are rising globally, creating a strain on public healthcare systems. This has led to a renewed interest in the role that digital technologies may have for improving mental health outcomes. One result of this interest is the development and use of artificial intelligence for assessing, diagnosing, and treating mental health issues, which we refer to as ‘digital psychiatry’. This article focuses on the increasing use of digital psychiatry outside of clinical settings, in the following sectors: education, employment, financial services, social media, and the digital well-being industry. We analyse the ethical risks of deploying digital psychiatry in these sectors, emphasising key problems and opportunities for public health, and offer recommendations for protecting and promoting public health and well-being in information societies.
This article investigates the semantics of sentences that express numerical averages, focusing initially on cases such as 'The average American has 2.3 children'. Such sentences have been used both by linguists and philosophers to argue for a disjuncture between semantics and ontology. For example, Noam Chomsky and Norbert Hornstein have used them to provide evidence against the hypothesis that natural language semantics includes a reference relation holding between words and objects in the world, whereas metaphysicians such as Joseph Melia and Stephen Yablo have used them to provide evidence that apparent singular reference need not be taken as ontologically committing. We develop a fully general and independently justified compositional semantics in which such constructions are assigned truth conditions that are not ontologically problematic, and show that our analysis is superior to all extant rivals. Our analysis provides evidence that a good semantics yields a sensible ontology. It also reveals that natural language contains genuine singular terms that refer to numbers.
Although they are continually compositionally reconstituted and reconfigured, organisms nonetheless persist as ontologically unified beings over time – but in virtue of what? A common answer is: in virtue of their continued possession of the capacity for morphological invariance which persists through, and in spite of, their mereological alteration. While we acknowledge that organisms’ capacity for the “stability of form” – homeostasis – is an important aspect of their diachronic unity, we argue that this capacity is derived from, and grounded in, a more primitive one – namely, the homeodynamic capacity for the “specified variation of form”. In introducing a novel type of causal power – a “structural power” – we claim that it is the persistence of their dynamic potential to produce a specified series of structurally adaptive morphologies which grounds organisms’ privileged status as metaphysically “one over many” over time.
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
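The textbook formulations that the paper argues are underspecified run as follows (standard statements, not the paper's two refined versions):

```latex
% Conditionalization, on learning E with certainty:
Cr_{new}(A) = Cr_{old}(A \mid E)
% Jeffrey Conditionalization, when experience shifts credence over a
% partition {E_i} without delivering certainty:
Cr_{new}(A) = \sum_i Cr_{old}(A \mid E_i)\, Cr_{new}(E_i)
```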
The question of whether cognition can influence perception has a long history in neuroscience and philosophy. Here, we outline a novel approach to this issue, arguing that it should be viewed within the framework of top-down information-processing. This approach leads to a reversal of the standard explanatory order of the cognitive penetration debate: we suggest studying top-down processing at various levels without preconceptions of perception or cognition. Once a clear picture has emerged about which processes influence those at lower levels, we can re-address the extent to which they should be considered perceptual or cognitive. Using top-down processing within the visual system as a model for higher-level influences, we argue that the current evidence indicates clear constraints on top-down influences at all stages of information processing; it does not, however, support the notion of a boundary between specific types of information-processing as proposed by the cognitive impenetrability hypothesis.
Nearly all defences of the agent-causal theory of free will portray the theory as a distinctively libertarian one — a theory that only libertarians have reason to accept. According to what I call ‘the standard argument for the agent-causal theory of free will’, the reason to embrace agent-causal libertarianism is that libertarians can solve the problem of enhanced control only if they furnish agents with the agent-causal power. In this way it is assumed that there is only reason to accept the agent-causal theory if there is reason to accept libertarianism. I aim to refute this claim. I will argue that the reasons we have for endorsing the agent-causal theory of free will are nonpartisan. The real reason for going agent-causal has nothing to do with determinism or indeterminism, but rather with avoiding reductionism about agency and the self. As we will see, if there is reason for libertarians to accept the agent-causal theory, there is just as much reason for compatibilists to accept it. It is in this sense that I contend that if anyone should be an agent-causalist, then everyone should be an agent-causalist.
This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they believe are the most important open questions and ethical issues for the multi-disciplinary study of digital well-being. We also introduce and discuss several themes that we believe will be fundamental to the ongoing study of digital well-being: digital gratitude, automated interventions, and sustainable co-well-being.
In Reasons and Persons, Parfit (1984) posed a challenge: provide a satisfying normative account that solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox. In response, some have suggested that we look toward person-affecting views of morality for a solution. But the person-affecting views that have been offered so far have been unable to satisfy Parfit's four requirements, and these views have been subject to a number of independent complaints. This paper describes a person-affecting account which meets Parfit's challenge. The account satisfies Parfit's four requirements, and avoids many of the criticisms that have been raised against person-affecting views.
The paper argues that an account of understanding should take the form of a Carnapian explication and acknowledge that understanding comes in degrees. An explication of objectual understanding is defended, which helps to make sense of the cognitive achievements and goals of science. The explication combines a necessary condition with three evaluative dimensions: An epistemic agent understands a subject matter by means of a theory only if the agent commits herself sufficiently to the theory of the subject matter, and to the degree that the agent grasps the theory, the theory answers to the facts and the agent’s commitment to the theory is justified. The threshold for outright attributions of understanding is determined contextually. The explication has descriptive as well as normative facets and allows for the possibility of understanding by means of non-explanatory theories.
Ecological communities are seldom, if ever, biological individuals. They lack causal boundaries as the populations that constitute communities are not congruent and rarely have persistent functional roles regulating the communities’ higher-level properties. Instead, we should represent ecological communities indexically, by identifying ecological communities via the network of weak causal interactions between populations that unfurl from a starting set of populations. This precisification of ecological communities helps identify how community properties remain invariant, and why they have robust characteristics. This respects the diversity and aggregational nature of these complex systems while still vindicating them as units worthy of investigation.
The specific aim of argumentation is not simply to make the addressee believe something – that would be mere rhetoric – but to guide the addressee in recognizing the acceptability (in particular the truth) of the thesis and thereby lead him to justified belief, to knowledge. Argumentations guide this recognition by judging, in their arguments, sufficient acceptability conditions of the thesis to be fulfilled, thereby implicitly inviting the addressee to check these conditions. Argumentations are valid if they can in principle guide recognition; that is, if the stated acceptability conditions are sufficient, if they are actually fulfilled (the arguments thus true), and if there is someone who has recognized the acceptability of the arguments but not yet that of the thesis. A valid argumentation is adequate for rationally convincing a particular addressee if, among other things, that addressee has recognized the acceptability of the arguments but not that of the thesis. The acceptability conditions judged to be fulfilled in valid argumentations are concretizations of general epistemological principles for the specific thesis, e.g. of the deductive principle 'A proposition is true if it is logically implied by true propositions' or of the genetic principle 'A proposition is true if it has been correctly verified'. A concretization of the deductive principle for a thesis p would be, for example: 'p is true 1. if q and r are true, and 2. if q and r together logically imply p.' If both conditions are fulfilled, 'q; r; therefore p' could be a valid deductive argumentation. The various types of argumentation differ according to the epistemological principle on which they are based. Argumentatively guided recognition of the acceptability of the thesis works as follows: the addressee uses the general epistemological principle he (at least implicitly) knows as a checklist, on which, once the arguments have been presented, he ticks off which acceptability condition of the principle is fulfilled by each true argument. On the basis of this account of their function, the 'Praktische Argumentationstheorie' develops (for the first time) precise validity criteria for argumentation in general and for several special types of argumentation, grounds them epistemologically, and applies them to complex examples of argumentation from philosophy, science, technology, and culture. The analysis of the epistemological foundations – the underlying epistemological principles – especially of interpretive and practical argumentations is, moreover, of considerable importance far beyond argumentation theory: for the theory of interpretation, the theory of action, and practical philosophy.
The main focus of this paper is the question as to what it is for an individual to think of her environment in terms of a concept of causation, or causal concepts, in contrast to some more primitive ways in which an individual might pick out or register what are in fact causal phenomena. I show how versions of this question arise in the context of two strands of work on causation, represented by Elizabeth Anscombe and Christopher Hitchcock, respectively. I then describe a central type of reasoning that, I suggest, a subject has to be able to engage in, if we are to credit her with causal concepts. I also point out that this type of reasoning turns on the idea of a physical connection between cause and effect, as articulated in recent singularist approaches to causation.
In his paper, Jakob Hohwy outlines a theory of the brain as an organ for prediction-error minimization (PEM), which he claims has the potential to profoundly alter our understanding of mind and cognition. One manner in which our understanding of the mind is altered, according to PEM, stems from the neurocentric conception of the mind that falls out of the framework, which portrays the mind as “inferentially-secluded” from its environment. This in turn leads Hohwy to reject certain theses of embodied cognition. Focusing on this aspect of Hohwy’s argument, we first outline the key components of the PEM framework, such as the “evidentiary boundary,” before looking at why this leads Hohwy to reject certain theses of embodied cognition. We will argue that although Hohwy may be correct to reject specific theses of embodied cognition, others are in fact implied by the PEM framework and may contribute to its development. We present the metaphor of the “body as a laboratory” in order to highlight wha...