The National Center for Biomedical Ontology is a consortium that comprises leading informaticians, biologists, clinicians, and ontologists, funded by the National Institutes of Health (NIH) Roadmap, to develop innovative technology and methods that allow scientists to record, manage, and disseminate biomedical information and knowledge in machine-processable form. The goals of the Center are (1) to help unify the divergent and isolated efforts in ontology development by promoting high-quality open-source, standards-based tools to create, manage, and use ontologies, (2) to create new software tools so that scientists can use ontologies to annotate and analyze biomedical data, (3) to provide a national resource for the ongoing evaluation, integration, and evolution of biomedical ontologies and associated tools and theories in the context of driving biomedical projects (DBPs), and (4) to disseminate the tools and resources of the Center and to identify, evaluate, and communicate best practices of ontology development to the biomedical community. Through the research activities within the Center, collaborations with the DBPs, and interactions with the biomedical community, our goal is to help scientists to work more effectively in the e-science paradigm, enhancing experiment design, experiment execution, data analysis, information synthesis, hypothesis generation and testing, and understanding of human disease.
The National Center for Biomedical Ontology is now in its seventh year. The goals of this National Center for Biomedical Computing are to: create and maintain a repository of biomedical ontologies and terminologies; build tools and web services to enable the use of ontologies and terminologies in clinical and translational research; educate its trainees and the scientific community broadly about biomedical ontology and ontology-based technology and best practices; and collaborate with a variety of groups who develop and use ontologies and terminologies in biomedicine. The centerpiece of the National Center for Biomedical Ontology is a web-based resource known as BioPortal. BioPortal makes available for research in computationally useful forms more than 270 of the world's biomedical ontologies and terminologies, and supports a wide range of web services that enable investigators to use the ontologies to annotate and retrieve data, to generate value sets and special-purpose lexicons, and to perform advanced analytics on a wide range of biomedical data.
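To make the web-services claim concrete, here is a minimal sketch of querying BioPortal's REST API from Python to annotate a snippet of free text with ontology terms. The base URL, the annotator endpoint, the apikey parameter, and the response fields follow BioPortal's published REST conventions, but treat them as assumptions to verify against the current documentation; the API key below is a placeholder.

import requests

# Minimal illustrative sketch (not from the abstract itself): annotate free
# text with ontology terms via BioPortal's REST Annotator service. Endpoint,
# parameter names, and response fields are assumptions based on BioPortal's
# documented conventions; the API key is a placeholder.
API_KEY = "YOUR_BIOPORTAL_API_KEY"
BASE_URL = "https://data.bioontology.org"

def annotate(text: str) -> list:
    """Return raw annotation records for `text`."""
    resp = requests.get(
        f"{BASE_URL}/annotator",
        params={"text": text, "apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for record in annotate("melanoma of the skin"):
        # Each record links a span of the input text to an ontology class.
        print(record["annotatedClass"]["@id"])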
Desire and Motivation in Indian Philosophy. By Christopher G. Framarin. Routledge Hindu Studies Series. London: Routledge, 2009. Pp. xv + 196. $170; $44.95.
When people want to identify the causes of an event, assign credit or blame, or learn from their mistakes, they often reflect on how things could have gone differently. In this kind of reasoning, one considers a counterfactual world in which some events are different from their real-world counterparts and considers what else would have changed. Researchers have recently proposed several probabilistic models that aim to capture how people do (or should) reason about counterfactuals. We present a new model and show that it accounts better for human inferences than several alternative models. Our model builds on the work of Pearl (2000), and extends his approach in a way that accommodates backtracking inferences and that acknowledges the difference between counterfactual interventions and counterfactual observations. We present six new experiments and analyze data from four experiments carried out by Rips (2010), and the results suggest that the new model provides an accurate account of both mean human judgments and the judgments of individuals.
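To illustrate the distinction the abstract draws, the following toy simulation (our own sketch, not the authors' model) contrasts a counterfactual intervention with a counterfactual observation in a minimal structural causal model with a common cause: intervening on A leaves the common cause U, and hence C, untouched, while observing A licenses backtracking to U and so revises C.

import random

# Toy structural causal model: U -> A and U -> C (a common cause).
# This is an illustrative sketch, not the model proposed in the paper.
random.seed(0)

def sample_world():
    u = random.random() < 0.5   # exogenous common cause
    return u, u, u              # A and C both deterministically equal U

interv_c, obs_c = [], []
for _ in range(100_000):
    u, a, c = sample_world()
    # Counterfactual intervention do(A = False): A is severed from U,
    # so U and C keep their actual values.
    interv_c.append(c)
    # Counterfactual observation of A = False: backtrack to infer U = False
    # (here, via rejection sampling on worlds where A is False).
    if not a:
        obs_c.append(c)

print(f"P(C | do(A=False))  ~ {sum(interv_c) / len(interv_c):.2f}")   # ~0.50
print(f"P(C | obs(A=False)) ~ {sum(obs_c) / len(obs_c):.2f}")         # ~0.00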
This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’ treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they’re not as bad as those of Elga’s account, and no worse than those of Lewis’ account.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
This paper examines two mistakes regarding David Lewis’ Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
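For reference, the rule under discussion, in its standard textbook formulation (the paper's point is precisely that this formulation leaves details open), is:

\[
Cr_{t'}(H) \;=\; Cr_t(H \mid E) \;=\; \frac{Cr_t(H \wedge E)}{Cr_t(E)},
\]

where \(E\) is the total evidence the subject receives between \(t\) and \(t'\). The underspecification the paper targets concerns, e.g., exactly when \(E\) counts as received, and whether updates proceed sequentially through each piece of evidence or over whole intervals.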
In Reasons and Persons, Parfit (1984) posed a challenge: provide a satisfying normative account that solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox. In response, some have suggested that we look toward person-affecting views of morality for a solution. But the person-affecting views that have been offered so far have been unable to satisfy Parfit's four requirements, and these views have been subject to a number of independent complaints. This paper describes a person-affecting account which meets Parfit's challenge. The account satisfies Parfit's four requirements, and avoids many of the criticisms that have been raised against person-affecting views.
In “Bayesianism, Infinite Decisions, and Binding”, Arntzenius et al. (Mind 113:251–283, 2004) present cases in which agents who cannot bind themselves are driven by standard decision theory to choose sequences of actions with disastrous consequences. They defend standard decision theory by arguing that if a decision rule leads agents to disaster only when they cannot bind themselves, this should not be taken to be a mark against the decision rule. I show that this claim has surprising implications for a number of other debates in decision theory. I then assess the plausibility of this claim, and suggest that it should be rejected.
A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
I argue that the theory of chance proposed by David Lewis has three problems: (i) it is time asymmetric in a manner incompatible with some of the chance theories of physics, (ii) it is incompatible with statistical mechanical chances, and (iii) the content of Lewis's Principal Principle depends on how admissibility is cashed out, but there is no agreement as to what admissible evidence should be. I propose two modifications of Lewis's theory which resolve these difficulties. I conclude by tentatively proposing a third modification of Lewis's theory, one which explains many of the common features shared by the chance theories of physics.
Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such “all-accuracy” or “purely alethic” approaches can accommodate and justify evidential Bayesian norms. In response, proponents of purely alethic approaches, such as Pettigrew (2013b) and Joyce (2016), have argued that scoring rule arguments provide us with compatible and purely alethic justifications for the traditional Bayesian norms, including evidential norms. In this paper I raise several challenges to this claim. First, I argue that many of the justifications these scoring rule arguments provide are not compatible. Second, I raise worries for the claim that these scoring rule arguments provide purely alethic justifications. Third, I turn to assess the more general question of whether purely alethic justifications for evidential norms are even possible, and argue that, without making some contentious assumptions, they are not. Fourth, I raise some further worries for the possibility of providing purely alethic justifications for content-sensitive evidential norms, like the Principal Principle.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The first of these articles provides a brief sketch of statistical mechanics, and discusses the indifference approach to statistical mechanical probabilities.
Standard decision theory has trouble handling cases involving acts without finite expected values. This paper has two aims. First, building on earlier work by Colyvan (2008), Easwaran (2014), and Lauwers and Vallentyne (2016), it develops a proposal for dealing with such cases, Difference Minimizing Theory. Difference Minimizing Theory provides satisfactory verdicts in a broader range of cases than its predecessors. And it vindicates two highly plausible principles of standard decision theory, Stochastic Equivalence and Stochastic Dominance. The second aim is to assess some recent arguments against Stochastic Equivalence and Stochastic Dominance. If successful, these arguments refute Difference Minimizing Theory. This paper contends that these arguments are not successful.
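As a gloss on the two principles named here (standard textbook statements, not necessarily the paper's exact formulations): Stochastic Equivalence says that gambles inducing the same probability distribution over outcomes are equally choiceworthy, and first-order Stochastic Dominance can be stated as:

\[
\text{If } \Pr\nolimits_X(u \ge v) \;\ge\; \Pr\nolimits_Y(u \ge v) \text{ for every utility level } v, \text{ then } X \succeq Y.
\]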
Theories that use expected utility maximization to evaluate acts have difficulty handling cases with infinitely many utility contributions. In this paper I present and motivate a way of modifying such theories to deal with these cases, employing what I call “Direct Difference Taking”. This proposal has a number of desirable features: it’s natural and well-motivated, it satisfies natural dominance intuitions, and it yields plausible prescriptions in a wide range of cases. I then compare my account to the most plausible alternative, a proposal offered by Arntzenius (2014: 31–58). I argue that while Arntzenius’s proposal has many attractive features, it runs into a number of problems which Direct Difference Taking avoids.
In recent work, Callender and Cohen (2009) and Hoefer (2007) have proposed variants of the account of chance proposed by Lewis (1994). One of the ways in which these accounts diverge from Lewis’s is that they allow special sciences and the macroscopic realm to have chances that are autonomous from those of physics and the microscopic realm. A worry for these proposals is that autonomous chances may place incompatible constraints on rational belief. I examine this worry, and attempt to determine (i) what kinds of conflicts would be problematic, and (ii) whether these proposals lead to problematic conflicts. After working through a pair of cases, I conclude that these proposals do give rise to problematic conflicts.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The second of these articles discusses the regularity approach to statistical mechanical probabilities, and describes some areas where further research is needed.
Several variants of Lewis's Best System Account of Lawhood have been proposed that avoid its commitment to perfectly natural properties. There has been little discussion of the relative merits of these proposals, and little discussion of how one might extend this strategy to provide natural property-free variants of Lewis's other accounts, such as his accounts of duplication, intrinsicality, causation, counterfactuals, and reference. We undertake these projects in this paper. We begin by providing a framework for classifying and assessing the variants of the Best System Account. We then evaluate these proposals, and identify the most promising candidates. We go on to develop a proposal for systematically modifying Lewis's other accounts so that they, too, avoid commitment to perfectly natural properties. We conclude by briefly considering a different route one might take to developing natural property-free versions of Lewis's other accounts, drawing on recent work by Williams.
Evidential Uniqueness is the thesis that, for any batch of evidence, there’s a unique doxastic state that a subject with that evidence should have. Among the most common objections to views that violate Evidential Uniqueness are arbitrariness objections – objections to the effect that views that don’t satisfy Evidential Uniqueness lead to unacceptable arbitrariness. The goal of this paper is to examine a variety of arbitrariness objections that have appeared in the literature, and to assess the extent to which these objections bolster the case for Evidential Uniqueness. After examining a number of different arbitrariness objections, I’ll conclude that, by and large, these objections do little to bolster the case for Evidential Uniqueness.
One popular approach to statistical mechanics understands statistical mechanical probabilities as measures of rational indifference. Naive formulations of this "indifference approach" face reversibility worries: while they yield the right prescriptions regarding future events, they yield the wrong prescriptions regarding past events. This paper begins by showing how the indifference approach can overcome the standard reversibility worries by appealing to the Past Hypothesis. But, the paper argues, positing a Past Hypothesis doesn't free the indifference approach from all reversibility worries. For while appealing to the Past Hypothesis allows it to escape one kind of reversibility worry, it makes it susceptible to another: the Meta-Reversibility Objection. And there is no easy way for the indifference approach to escape the Meta-Reversibility Objection. As a result, reversibility worries pose a steep challenge to the viability of the indifference approach.
This is a review of Toby Handfield's book, "A Philosophical Guide to Chance", that discusses Handfield's Debunking Argument against realist accounts of chance.
In this article, we propose the Fair Priority Model for COVID-19 vaccine distribution, and emphasize three fundamental values we believe should be considered when distributing a COVID-19 vaccine among countries: benefiting people and limiting harm, prioritizing the disadvantaged, and equal moral concern for all individuals. The Fair Priority Model addresses these values by focusing on mitigating three types of harms caused by COVID-19: death and permanent organ damage; indirect health consequences, such as health care system strain and stress; and economic destruction. It proposes proceeding in three phases: the first addresses premature death, the second long-term health issues and economic harms, and the third aims to contain viral transmission fully and restore pre-pandemic activity.

To those who may deem an ethical framework irrelevant because of the belief that many countries will pursue "vaccine nationalism," we argue such a framework still has broad relevance. Reasonable national partiality would permit countries to focus on vaccine distribution within their borders up until the rate of transmission is below 1, at which point there would not be sufficient vaccine-preventable harm to justify retaining a vaccine. When a government reaches the limit of national partiality, it should release vaccines for other countries.

We also argue against two other recent proposals. Distributing a vaccine proportional to a country's population mistakenly assumes that equality requires treating differently situated countries identically. Prioritizing countries according to the number of front-line health care workers, the proportion of the population over 65, and the number of people with comorbidities within each country may exacerbate disadvantage and end up giving the vaccine in large part to wealthy nations.
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
All parties involved in researching, developing, manufacturing, and distributing COVID-19 vaccines need guidance on their ethical obligations. We focus on pharmaceutical companies' obligations because their capacities to research, develop, manufacture, and distribute vaccines make them uniquely placed for stemming the pandemic. We argue that an ethical approach to COVID-19 vaccine production and distribution should satisfy four uncontroversial principles: optimising vaccine production, including development, testing, and manufacturing; fair distribution; sustainability; and accountability. All parties' obligations should be coordinated and mutually consistent. For instance, companies should not be obligated to provide host countries with additional booster shots at the expense of fulfilling bilateral contracts with countries in which there are surges. Finally, any satisfactory approach should include mechanisms for assurance that all parties are honouring their obligations. This assurance enables countries, pharmaceutical companies, global organisations, and others to verify compliance with the chosen approach and protect ethically compliant stakeholders from being unfairly exploited by unethical behaviour of others.
Throughout the biological and biomedical sciences there is a growing need for prescriptive 'minimum information' (MI) checklists specifying the key information to include when reporting experimental results. Such checklists are beginning to find favor with experimentalists, analysts, publishers and funders alike. They aim to ensure that methods, data, analyses and results are described to a level sufficient to support unambiguous interpretation, sophisticated search, reanalysis, and experimental corroboration and reuse of data sets, facilitating the extraction of maximum value from them. However, such MI checklists are usually developed independently, by groups working within particular biologically- or technologically-delineated domains. Consequently, an overview of the full range of checklists can be difficult to establish without intensive searching, and even tracking the evolution of a single checklist may be a non-trivial exercise. Checklists are also inevitably partially redundant when measured one against another, and where they overlap is far from straightforward. Furthermore, conflicts in scope and arbitrary decisions on wording and sub-structuring make integration difficult and inhibit their use in combination. Overall, these issues present significant difficulties for the users of checklists, especially those in areas such as systems biology, who routinely combine information from multiple biological domains and technology platforms. To address all of the above, we present MIBBI (Minimum Information for Biological and Biomedical Investigations): a web-based communal resource for such checklists, designed to act as a 'one-stop shop' for those exploring the range of extant checklist projects, and to foster collaborative, integrative development and ultimately promote gradual integration of checklists.
We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts that are used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key ethical principles for AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.
The aim of the paper is to develop general criteria of argumentative validity and adequacy for probabilistic arguments on the basis of the epistemological approach to argumentation. In this approach, as in most other approaches to argumentation, probabilistic arguments have been neglected somewhat. Nonetheless, criteria for several special types of probabilistic arguments have been developed, in particular by Richard Feldman and Christoph Lumer. In the first part (sects. 2-5) the epistemological basis of probabilistic arguments is discussed. With regard to the philosophical interpretation of probabilities a new subjectivist, epistemic interpretation is proposed, which identifies probabilities with tendencies of evidence (sect. 2). After drawing the conclusions of this interpretation with respect to the syntactic features of the probability concept, e.g. one variable referring to the data base (sect. 3), the justification of basic probabilities (priors) by judgements of relative frequency (sect. 4) and the justification of derivative probabilities by means of the probability calculus are explained (sect. 5). The core of the paper is the definition of '(argumentatively) valid derivative probabilistic arguments', which provides exact conditions for epistemically good probabilistic arguments, together with conditions for the adequate use of such arguments for the aim of rationally convincing an addressee (sect. 6). Finally, some measures for improving the applicability of probabilistic reasoning are proposed (sect. 7).
This paper argues that we should reject G. E. Moore’s anti-skeptical argument as it is presented in “Proof of an External World.” However, the reason I offer is different from traditional objections. A proper understanding of Moore’s “proof” requires paying attention to an important distinction between two forms of skepticism. I call these Ontological Skepticism and Epistemic Skepticism. The former is skepticism about the ontological status of fundamental reality, while the latter is skepticism about our empirical knowledge. Philosophers often assume that Moore’s response to “external world skepticism” deals exclusively with the former, not the latter. But this is a mistake. I shall argue that Moore’s anti-skeptical argument targets an ontological form of skepticism. Thus, the conclusion is an ontological claim about fundamental reality, while the premises are epistemic claims. If this is correct, then the conclusion outstrips the scope of its premises and proves too much.
The article develops and justifies, on the basis of the epistemological argumentation theory, two central pieces of the theory of evaluative argumentation interpretation: 1. criteria for recognizing argument types and 2. rules for adding reasons to create ideal arguments. Ad 1: The criteria for identifying argument types are a selection of essential elements from the definitions of the respective argument types. Ad 2: After presenting the general principles for adding reasons (benevolence, authenticity, immanence, optimization), heuristics are proposed for finding missing reasons; for deductive arguments, e.g., semantic tableaux are suggested.
Many researchers consider cancer to have molecular causes, namely mutated genes that result in abnormal cell proliferation (e.g. Weinberg 1998). For others, the causes of cancer are to be found not at the molecular level but at the tissue level, where carcinogenesis consists of disrupted tissue organization with downward causation effects on cells and cellular components (e.g. Sonnenschein and Soto 2008). In this contribution, I ponder how to make sense of such downward causation claims. Adopting a manipulationist account of causation (Woodward 2003), I propose a formal definition of downward causation and discuss further requirements (in light of Baumgartner 2009). I then show that such an account cannot be mobilized in support of non-reductive physicalism (contrary to Raatikainen 2010). However, I also argue that such downward causation claims might point at particularly interesting dynamic properties of causal relationships that might prove salient in characterizing causal relationships (following Woodward 2010).
Our reception of Hegel’s theory of action faces a fundamental difficulty: on the one hand, that theory is quite clearly embedded in a social theory of modern life, but on the other hand most of the features of the society that gave that embedding its specific content have become almost inscrutably strange to us (e.g., the estates and the monarchy). Thus we find ourselves in the awkward position of stressing the theory’s sociality even as we scramble backwards to distance ourselves from the particular social institutions that gave conceptualized form to such sociality in Hegel’s own opinion. My attempt in this article is to make our position less awkward by giving us at least one social-ontological leg to stand on. Specifically, I want to defend a principled and conceptual pluralism as forming the heart of Hegel’s theory of action. If this view can be made out, then we will have a social-ontological structure that might be filled out in different ways in Hegel’s time and our own while simultaneously giving real teeth to the notion that Hegel’s theory of action is essentially social.
Jeff McMahan has long shown himself to be a vigorous and incisive critic of speciesism, and in his essay “Our Fellow Creatures” he has been particularly critical of speciesist arguments that draw inspiration from Wittgenstein. In this essay I consider his arguments against speciesism generally and the species-norm account of deprivation in particular. I argue that McMahan's ethical framework is more nuanced and more open to the incorporation of speciesist intuitions regarding deprivation than he himself suggests. Specifically, I argue that, given his willingness to include a comparative dimension in his “Intrinsic Potential Account”, he ought to recognize species as a legitimate comparison class. I also argue that a sensible speciesism can be pluralist and flexible enough to accommodate many of McMahan's arguments in defense of “moral individualist” intuitions. In this way, I hope to make the case for at least a partial reconciliation between McMahan and the “Wittgensteinian speciesists”, e.g. Cora Diamond, Stephen Mulhall, and Raimond Gaita.
One of the reasons why there is no Hegelian school in contemporary ethics in the way that there are Kantian, Humean and Aristotelian schools is that Hegelians have been unable to clearly articulate the Hegelian alternative to those schools’ moral psychologies, i.e., to present a Hegelian model of the motivation to, perception of, and responsibility for moral action. Here it is argued that in its most basic terms Hegel's model can be understood as follows: the agent acts in a responsible and thus paradigmatic sense when she identifies as reasons those motivations which are grounded in her talents and support actions that are likely to develop those talents in ways suggested by her interests.
In this book some options concerning the greenhouse effect are assessed from a welfarist point of view: business as usual, stabilization of greenhouse gas emissions, and reduction by 25% and by 60%. Up to today only economic analyses of such options are available, which monetize welfare losses. Because this is found to be wanting from a moral point of view, the present study welfarizes (among others) monetary losses on the basis of a hedonistic utilitarianism and other, justice-incorporating, welfare ethics. For these welfarist evaluations, information about the social consequences of the four options is collected from the literature and eventually corrected; then the consequences for individual well-being are assessed based on psychological research about well-being dependent on the social situation of the individual; finally the aggregation formulas of the respective welfare ethics are applied to these data. Assessments by other types of ethics, e.g. Kantian ethics, are included. The strongest abatement option is found to be optimal with great unanimity.

In addition, a cost-welfare analysis of greenhouse gas abatement is undertaken, revealing efficient cost-welfare ratios for these measures and the most efficient ratio for the strongest option.

A final, more theoretical part discusses the moral obligations following from such evaluations. The notion of 'moral obligation' is explained in a way that, apart from moral goodness of the required act, reinforcement by formal or informal sanctions is another necessary condition for moral obligations. This leads to a conception of a historical morality according to which the demands of morality rise in the long run. Applying this conception to the greenhouse effect implies that presently we have the moral duty to raise the standards of greenhouse gas abatement as much as is politically feasible.
Spinoza rarely refers to art. However, there are extensive resources for a Spinozist aesthetics in his discussion of health in the Ethics and of social affects in his political works. There have recently been a few essays linking Spinoza and art, but this essay additionally fuses Spinoza’s politics to an affective aesthetics. Spinoza’s statements that art makes us healthier (Ethics 4p54Sch; Emendation section 17) form the foundation of an aesthetics. In Spinoza’s definition, “health” is caused by external objects that maintain our power to act in a variety of ways. Humans need such objects because our complex bodies constantly lose or consume many parts necessary to our overall functioning. Notably, Spinoza defines humans’ bodies through this complexity (2p13Sch), so health as maintenance of complexity is a distinctly human endeavor. Further, while art is not the only healthy activity, I argue that art is a particularly potent cure, which explains Spinoza’s otherwise opaque comment that music can cure melancholy (by which he meant a near-total inability to act, akin to death). Rather than only causing frivolous pleasures, art may be as essential to human flourishing as are other human beings in general; other people are “most useful” because of the variety of actions they make possible (4p35Cor & Sch1). Art’s production of a dizzying variety of affects is likewise most useful for health.

Having established how art in general affects the individual, I then explain the role of artists in shaping social groups. Artists use vivid and highly charged affective techniques, as do political sovereigns and religious prophets (TTP chapters 1-2 & 15-16). However, sovereigns and prophets are concerned exclusively with “morality,” defined by Spinoza as the use of affects (primarily based on fear and hope) to produce “obedience” in the generic multitude or people at large. An artist, however, rarely causes affects in the whole nation, affecting instead only a smaller niche or “sub-genre” of people. The affects produced in this group are also not identical to those used by sovereigns, since artists do not primarily deploy sad affects of hope and fear but instead use a wide variety of joyful affects. Further, in Spinoza’s analysis of ceremonies (TTP chapter 5), we see how small groups exposed to repeated ceremonies or social practices eventually develop new strengths which they lacked before. Repeated exposure to shared aesthetic “ceremonies” (e.g., live music performances) of the same sub-genre will over time create the capacity for new powers in the sub-genre of people, which distinguishes them from the masses. Spinoza says sovereigns forge a “second nature” for the generic people through affects; we can then affirm that smaller groups exposed to a distinct sub-genre of art can acquire a new “third nature” which will contain unique powers extending beyond healthy maintenance of their existing bodies. That is, art in general is necessary to flourish and remain whole (maintaining health), but it can also occasionally expand what one is to unforeseen heights (through specific artistic sub-genres).
'Desire', 'preference', 'utility', and '(utility-aggregating) moral desirability' are terms that build on each other in this order. The article follows this definitional structure and presents these terms and their justifications. The aim is to present welfare-ethical criteria of the common good that define 'moral desirability' as an aggregation, e.g. addition, of individual utility: utilitarianism, utility egalitarianism, leximin, prioritarianism.
My aim in this paper is to demonstrate that actual egalitarian social practices are unsustainable in most circumstances, thus defusing Cohen’s conundrum by providing an ‘out’ for our rich egalitarian. I will also try to provide a balm for the troubles produced by continuing inequality, by showing how embracing a common conception of utopia can assist a society in its efforts towards establishing egalitarian practices. Doing so will first require an explanation of how giving, like any social practice, can be thought of in terms of being externally suggested, internally willed, or some combination of the two.
In this paper, first the term 'prioritarianism' is defined, with some mathematical precision, on the basis of intuitive conceptions of prioritarianism, especially the idea that "benefiting people matters more the worse off these people are". (The prioritarian weighting function is monotonically increasing and concave, while its first derivative is smoothly decreasing and convex but positive throughout.) Furthermore, (moderate welfare) egalitarianism is characterized. In particular a new symmetry condition is defended, i.e. that egalitarianism evaluates upper and lower deviations from the social middle symmetrically and equally negatively (as do e.g. variance and Gini). Finally, it is shown that this feature distinguishes egalitarianism also extensionally from prioritarianism.
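As one concrete instance satisfying the stated conditions (our illustration, not the paper's own example), take \(w(u) = \sqrt{u}\) on \(u \ge 0\): it is increasing and concave, and its derivative \(w'(u) = 1/(2\sqrt{u})\) is positive, smoothly decreasing, and convex. Prioritarian moral desirability then aggregates weighted utilities, while a symmetric egalitarian measure of the kind the symmetry condition demands penalizes upper and lower deviations from the social middle equally:

\[
M_{\text{prio}} \;=\; \sum_i w(u_i) \quad (w' > 0,\; w'' < 0), \qquad
M_{\text{egal}} \;=\; \bar{u} \;-\; \lambda \cdot \frac{1}{n}\sum_i (u_i - \bar{u})^2 \quad (\lambda > 0).
\]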
American History X (hereafter AHX) has been accused by numerous critics of a morally dangerous cinematic seduction: using stylish cinematography, editing, and sound, the film manipulates the viewer by glamorizing an immoral and hate-filled neo-Nazi protagonist. In addition, there’s the disturbing fact that the film seems to accomplish this manipulation through methods commonly grouped under the category of “fascist aesthetics.” More specifically, AHX promotes its neo-Nazi hero through the use of several filmic techniques made famous by Nazi propagandist Leni Riefenstahl. Now most critics admit that, in the end, the film claims to denounce racism and attempts to show us the conversion of the protagonist to the path of righteousness, but they complain that nonetheless the film (perhaps unintentionally) ends up implicitly promoting the immoral worldview it rather superficially professes to reject in its final act. This charge of hypocrisy is connected to another worry: the moral conversion in the film is said to fall flat because the intellectual resources on display to support the character’s racism are not counterbalanced by equally explicit (but superior) arguments for the anti-racist position ultimately embraced by the character. In other words, just as the devil is said to get all the good lines in Milton’s Paradise Lost, in AHX the racists get all the arguments. This has been taken to be a morally problematic flaw of the film. Critics lament that Derek’s conversion seems to result not from relevant logical inferences and valid rational argumentation but from overly simplistic and arguably egoistic insights (e.g., “has anything you've done made your life better?”), combined, perhaps, with a hackneyed cliché (in prison, one of his best friends is a black person!). In this paper I’ll attempt to rebut these charges and defend the film as a powerful, and powerfully moral, work of art. I’ll be suggesting that the seductive techniques employed allow for many viewers a degree of sympathy towards the protagonist that is crucial, both for making that character’s more horrific actions especially unsettling, and also for making his eventual conversion plausible and ultimately compelling. I’ll also argue that the manner in which his conversion is presented is in fact subtler than many critics have allowed: Derek’s transformation is not artificial or implausible but is depicted as resulting from a cumulative series of emotionally powerful life events and personal engagements. It is certainly true that it is not represented in the way some would seemingly have preferred, i.e. as straightforwardly resulting from a process of gradual intellectual improvement in Derek’s reasoning on questions of race and politics. However, I’ll argue that the decidedly emotional basis of his moral evolution is both refreshingly realistic and no hindrance to accepting his conversion as rational. Finally, properly understanding the legitimacy of the emotional foundations of much moral thought will also allow us to appreciate the ways in which our initial worries about this film’s (not insignificant) ability to persuade viewers through the engagement of emotions need not, in itself, be seen as a barrier to endorsing the film as a morally praiseworthy work.
During the current financial crisis, the need for an alternative to a laissez-faire ethics of capitalism (the Milton Friedman view) becomes clear. I argue that we need an order ethics which employs economics as a key theoretical resource and which focuses on institutions for implementing moral norms.

I will point to some aspects of order ethics which highlight the importance of rules, e.g. global rules for the financial markets. In this regard, order ethics (“Ordnungsethik”) is the complement of the German conception of “Ordnungspolitik”, which also stresses the importance of a regulatory framework. This framework is needed not to tame the market, but to make it more profitable in the long run.

The conception of order ethics relies heavily on contractarianism, especially on James Buchanan’s work. Unlike many other conceptions of ethics, it does not start with an aim to achieve, but rather with an account of what the social world – in which ethical norms have to be implemented – is like. Our social world is different from the pre-modern one. Pre-modern societies played zero-sum games in which people could only gain significantly at the expense of others. And the types of ethics that we are still used to today have been developed within these pre-modern societies.

Modern societies, by contrast, can be characterised – by economists and other social theorists alike – as societies with continuous growth. This growth has only been made possible by the modern competitive market economy which enables everyone to pursue his own interests within a carefully devised institutional system. In this system, positive-sum games are played, which makes it in principle possible to improve the position of every individual at the same time. Most kinds of ethics, however, resulting from the conditions of pre-modern societies, ignore the possibility of win-win situations and instead require us to be moderate, to share, to sacrifice, as this would have been functional in zero-sum games. These conceptions distinguish – in more or less strict ways – between self-interest and altruistic motivation. Self-interest, more often than not, is ultimately seen as something evil. Such an ethics cannot be functional in modern societies. Ethical concepts lag behind. Within zero-sum games, it was necessary to call for temperance, for moderate profits, or for a condemnation of lending money at interest. Within positive-sum games, however, the morally desired result of a social process cannot be brought about by changes in motivation, by switching from ‘egoistic’ to ‘altruistic’ motivation.

The second theoretical element introduced by order ethics is the distinction between actions and rules, which was already mentioned. Traditional ethics concerns actions: it calls directly for changes in behaviour. This is a consequence of pre-modern conditions as reconstructed before: people in the pre-modern world were only able to control their actions, not so much however the conditions of their actions. In particular, rules like laws, constitutions, social structures, the market order, and also ethical norms have remained stable for centuries. In modern societies, this situation has changed entirely. The rules governing our actions have increasingly come under our control. In this situation, ethics has to focus on rules. Morality must be incorporated in incentive-compatible rules. Direct calls for changes in behaviour without changes in the rules lead only to an erosion of compliance with moral norms. Individuals that continue to behave ‘morally’ will be singled out, because the incentives have not been changed. Moral norms which are to be justified cannot require people to abstain from pursuing their own advantage. People abstain from taking ‘immoral’ advantages only if adherence to ethical norms yields greater benefits over the planned sequence of actions than defection in the single case. Thus ‘abstaining’ is not abstaining in the long run; it is rather an investment in expectations of long-term benefits. By adhering to ethical norms, I become a reliable partner for interactions. The norms do indeed constrain my actions, but they simultaneously expand my options in interactions. And people consent to rules only if these rules hold greater advantages for them, at least in the long run.

In general, ethics cannot require people to abandon their individual calculation of advantages. However, it may suggest improving one’s calculation, by calculating in the long run rather than in the short run, and by taking into account the interests of our fellows, as we depend on their acceptance for reaching an optimal level of well-being, especially in a globalized world full of interdependence.

The problem of implementation can now be placed at the beginning of a conception of order ethics, justified with reference to the conditions of modern societies I have sketched. Under the conditions of pre-modern societies, an ethics of temperance had evolved that posed simultaneously the problems of implementation and justification. The implementation of well-justified norms or standards could then be regarded as unproblematic, because the social structures allowed for a direct face-to-face enforcement of norms. Pre-modern societies not only favored an ethics of temperance, they also had the instrument of face-to-face sanctions within their smaller and non-anonymous communities. This instrument is no longer functional in modern anonymous societies, and so we have to face up to the problem of implementation right at the start of our ethical conception. Simultaneously, an order ethics relies on the implementation of sanctions for enforcing incentive-compatible rules. In modern societies, rules and institutions, to a large extent, must fulfil the tasks that were, in pre-modern times, fulfilled by moral norms, which in turn were sanctioned by face-to-face sanctions. Norm implementation in modern societies thus works by setting adequate incentives in order to prevent the erosion of moral norms, which would happen if ‘moral’ actors were systematically threatened with exploitation by other, less ‘moral’ actors.

This conception of order ethics is then elaborated further in the area of business ethics.
Over the last eighty years, studies in play have carved out a small, but increasingly significant, niche within the social sciences and a rich repository has been built which underscores the importance of play to social, cultural, and psychological development. The general point running through these works is a philosophical recognition that play should not be separated from the trappings of everyday life, but instead should be seen as one of the more primordial aspects of human existence. Gadamer is one philosopher frequently associated with interest in play. In his magnum opus, Truth and Method (1960), Gadamer insisted the significance of play to human understanding is not merely recreational, but rather it discloses the full context of any given situation by promoting a freedom of possibilities within the horizon of one’s own life-world (i.e. the world directly and immediately experienced). As such, his philosophical analysis of play was essential to his overall project of philosophical hermeneutics as it can explain how meaning is not derived from something essential within a text, but rather considered from a full range of possibilities. There are good reasons to expand on that understanding of play within philosophical studies and we suggest one way to do so is to compare Gadamer’s treatment of play with similar ideas from thinkers often associated with other philosophical schools. Although there are other candidates (e.g. Wittgenstein’s language games) for such an analysis, we limit our comparison here to the notion of transaction, as treated by the American pragmatist John Dewey in his volume Knowing and the Known (co-authored with Arthur Bentley in 1949). Because Dewey tied his conception of transaction to an overarching philosophy of inquiry, we believe comparing it to Gadamer’s use of play can highlight the deep philosophical import of this concept to the understanding of philosophical inquiry.
The download contains the penultimate version (still under the provisional title "Molina über Vorsehung und Freiheit"). This extensive introduction to the volume "Luis de Molina: Göttlicher Plan und menschliche Freiheit", edited and translated by C. Jäger, H. Kraml, and G. Leibold (Hamburg: Meiner, 2018), reconstructs, over 165 pages, Molina's famous theory of free will and the question of its compatibility with divine foreknowledge and divine providence. It traces the essential stages of the debate over theological determinism as it developed from Augustine and Boethius through Anselm, Thomas Aquinas, Scotus, Ockham, and others up to Molina, and from there into the most recent, above all analytic, philosophy of religion.
José Ángel Gascón’s essay "Where are dissent and reasons in epistemic justification?" is an exposition of a version of a social functionalist epistemology. I agree with Gascón's emphasis on reasons and on taking into account dissent as important parts of epistemology. But I think that these concerns do not require a social functionalist epistemology; on the contrary, Gascón's social functionalist epistemology throws the baby out with the bathwater. It does so by excluding also a traditional, at its core individualistic epistemology, which defines central concepts like 'justified' and 'knowledge' still in individualistic terms, as the result of a mental cognizing process, but is open to social extensions, e.g. concerning cooperation in the acquisition of knowledge or the transfer of knowledge via argumentation. Such a socially open epistemology with an individualistic core – or "open individualistic epistemology" for short – is also the basis of the epistemological argumentation theory. In the following I want to explain and defend this open individualistic epistemology together with the epistemological argumentation theory (sect. 2) and explain on this basis some problems of Gascón’s theory (sect. 3).
‘Greek Ethics’, an undergraduate class taught by the British moral philosopher N. J. H. Dent, introduced this reviewer to the ethical philosophy of ancient Greece. The class had a modest purview—a sequence of Socrates, Plato, and Aristotle—but it proved no less effective, in retrospect, than more synoptic classes for having taken this apparently limited and (for its students and academic level) appropriate focus. This excellent Companion will now serve any such class extremely well, allowing students a broader exposure than that traditional sequence, without sacrificing the class’s circumscribed focus. The eighteen chapters encompass some of what went before, and surprisingly much of what came after, those three central philosophers—including, for instance, a discussion of Plotinus and his successors, as well as a discussion of Horace. The book will therefore be useful in many different types of class on ethical philosophy in the ancient world. This Companion will be useful not only to students, but also to at least three further groups: specialists in ancient Greek philosophy (since some contributors advance significant new positions, e.g. R. Kamtekar on Plato’s ethical psychology and D. Charles on Aristotle’s ‘ergon argument’ as already implicitly invoking ‘to kalon’); scholars working in academic subjects adjacent to ancient Greek philosophy; and contemporary moral philosophers.
A collection of papers presented at the First International Summer Institute in Cognitive Science, University at Buffalo, July 1994, including the following papers: ** Topological Foundations of Cognitive Science, Barry Smith ** The Bounds of Axiomatisation, Graham White ** Rethinking Boundaries, Wojciech Zelaniec ** Sheaf Mereology and Space Cognition, Jean Petitot ** A Mereotopological Definition of 'Point', Carola Eschenbach ** Discreteness, Finiteness, and the Structure of Topological Spaces, Christopher Habel ** Mass Reference and the Geometry of Solids, Almerindo E. Ojeda ** Defining a 'Doughnut' Made Difficult, N.M. Gotts ** A Theory of Spatial Regions with Indeterminate Boundaries, A.G. Cohn and N.M. Gotts ** Mereotopological Construction of Time from Events, Fabio Pianesi and Achille C. Varzi ** Computational Mereology: A Study of Part-of Relations for Multi-media Indexing, Wlodek Zadrozny and Michelle Kim.
This paper argues for extreme rational permissivism—the view that agents with identical evidence can rationally believe contradictory hypotheses—and a mild version of steadfastness. Agents can rationally come to different conclusions on the basis of the same evidence because their way of weighing the theoretic virtues may differ substantially. Nevertheless, in the face of disagreement, agents face considerable pressure to reduce their confidence. Indeed, I argue that agents often ought to reduce their confidence in the higher-order propositions that they know or rationally believe disputed content. I argue, however, that when the subject matter is difficult, there is more flexibility for agents to simultaneously believe that p while withholding belief about whether such belief is rational or known. This allows for modest steadfastness on hard questions in, e.g., philosophy and religion.