Identification of non-coding RNAs (ncRNAs) has been significantly enhanced by rapid advances in sequencing technologies. Semantic annotation of ncRNA data, however, lags behind their identification, and there is a great need to effectively integrate discoveries from the relevant communities. To this end, the Non-Coding RNA Ontology (NCRO) is being developed to provide a precisely defined ncRNA controlled vocabulary, which can fill a specific and highly needed niche in the unification of ncRNA biology.
In recent years, sequencing technologies have enabled the identification of a wide range of non-coding RNAs (ncRNAs). Unfortunately, annotation and integration of ncRNA data have lagged behind their identification. Given the large quantity of information being obtained in this area, there emerges an urgent need to integrate what is being discovered by a broad range of relevant communities. To this end, the Non-Coding RNA Ontology (NCRO) is being developed to provide a systematically structured and precisely defined controlled vocabulary for the domain of ncRNAs, thereby facilitating the discovery, curation, analysis, exchange of, and reasoning over data about structures of ncRNAs, their molecular and cellular functions, and their impacts upon phenotypes. The goal of NCRO is to serve as a common resource for annotations of diverse research in a way that will significantly enhance integrative and comparative analysis of the myriad resources currently housed in disparate sources. It is our belief that the NCRO ontology can perform an important role in the comprehensive unification of ncRNA biology and, indeed, fill a critical gap in both the Open Biological and Biomedical Ontologies (OBO) Library and the National Center for Biomedical Ontology (NCBO) BioPortal. Our initial focus is on the ontological representation of small regulatory ncRNAs, which we see as the first step in providing a resource for the annotation of data about all forms of ncRNAs. The NCRO ontology is free and open to all users.
Merging-of-opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of merely transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
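The update rule at issue can be made concrete. The sketch below is an illustrative toy, not the paper's own model: the atom names, partition, and numbers are our choices. Jeffrey conditioning shifts each cell of a partition to a new probability dictated by experience, while leaving the probabilities within each cell proportionally intact.

```python
def jeffrey_update(joint, partition, new_marginals):
    """Jeffrey conditioning: move each partition cell E_i to its new
    probability q_i, preserving proportions within cells:
    P_new(w) = P(w | E_i) * q_i for every atom w in E_i."""
    updated = {}
    for cell, q in zip(partition, new_marginals):
        p_cell = sum(joint[w] for w in cell)
        for w in cell:
            updated[w] = joint[w] * q / p_cell
    return updated

prior = {"w1": 0.1, "w2": 0.4, "w3": 0.2, "w4": 0.3}
partition = [{"w1", "w2"}, {"w3", "w4"}]
post = jeffrey_update(prior, partition, [0.8, 0.2])

print(round(post["w1"] + post["w2"], 6))  # the new cell marginal: 0.8
print(round(sum(post.values()), 6))       # still a probability: 1.0
```

When a cell's new marginal is 1, the rule reduces to ordinary Bayesian conditioning on that cell.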
Recent impossibility theorems for fair risk assessment extend to the domain of epistemic justice. We translate the relevant model, demonstrating that the problems of fair risk assessment and just credibility assessment are structurally the same. We motivate the fairness criteria involved in the theorems as also being appropriate in the setting of testimonial justice. Any account of testimonial justice that implies the fairness/justice criteria must be abandoned, on pain of triviality.
Our aim here is to present a result that connects some approaches to justifying countable additivity. This result allows us to better understand the force of a recent argument for countable additivity due to Easwaran. We have two main points. First, Easwaran’s argument in favour of countable additivity should have little persuasive force for those permissive probabilists who have already made their peace with violations of conglomerability. As our result shows, Easwaran’s main premiss – the comparative principle – is strictly stronger than conglomerability. Second, with the connections between the comparative principle and other probabilistic concepts clearly in view, we point out that opponents of countable additivity can still make a case that countable additivity is an arbitrary stopping point between finite and full additivity.
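Conglomerability itself is easy to state: relative to a partition, an unconditional probability should lie within the range spanned by the conditional probabilities across the cells. For countably additive measures this follows from the law of total probability; the famous violations involve merely finitely additive measures on infinite partitions, which cannot be simulated directly. A small numerical check of the countably additive case, with a distribution, event, and partition of our own choosing:

```python
import random

random.seed(0)
atoms = list(range(12))
weights = [random.random() for _ in atoms]
P = {w: x / sum(weights) for w, x in zip(atoms, weights)}  # a finite, countably additive distribution

A = {w for w in atoms if w % 3 == 0}               # an arbitrary event
cells = [set(range(i, i + 4)) for i in (0, 4, 8)]  # a 3-cell partition of the atoms

def cond(event, cell):
    """Conditional probability P(event | cell)."""
    return sum(P[w] for w in event & cell) / sum(P[w] for w in cell)

conds = [cond(A, E) for E in cells]
pA = sum(P[w] for w in A)

# Conglomerability: the unconditional probability sits inside the
# interval spanned by the conditional probabilities.
print(min(conds) <= pA <= max(conds))  # True
```

Since pA is a convex combination of the conditionals (weighted by the cell probabilities), the check is guaranteed to pass for any countably additive P.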
Identification of non-coding RNAs (ncRNAs) has improved significantly over the past decade. Semantic annotation of ncRNA data, however, faces critical challenges due to the lack of a comprehensive ontology to serve as common data elements and data exchange standards in the field. We developed the Non-Coding RNA Ontology (NCRO) to address this situation. By providing a formally defined ncRNA controlled vocabulary, the NCRO aims to fill a specific and highly needed niche in the semantic annotation of large amounts of ncRNA biological and clinical data.
A prominent pillar of Bayesian philosophy is that, relative to just a few constraints, priors “wash out” in the limit. Bayesians often appeal to such asymptotic results as a defense against charges of excessive subjectivity. But, as Seidenfeld and coauthors observe, what happens in the short run is often of greater interest than what happens in the limit. They use this point as one motivation for investigating the counterintuitive short run phenomenon of dilation since, it is alleged, “dilation contrasts with the asymptotic merging of posterior probabilities reported by Savage (1954) and by Blackwell and Dubins (1962)” (Herron et al., 1994). A partition dilates an event if, relative to every cell of the partition, uncertainty concerning that event increases. The measure of uncertainty relevant for dilation, however, is not the same measure that is relevant in the context of results concerning whether priors wash out or “opinions merge.” Here, we explicitly investigate the short run behavior of the metric relevant to merging of opinions. As with dilation, it is possible for uncertainty (as gauged by this metric) to increase relative to every cell of a partition. We call this phenomenon distention. It turns out that dilation and distention are orthogonal phenomena.
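Dilation can be exhibited with the textbook two-event example (our parameterization, not the paper's): a fair coin toss H and an event E whose dependence on H is completely unknown. Every measure in the credal set assigns H probability exactly 1/2, yet conditional on E, and likewise conditional on ~E, the probability of H ranges over the whole unit interval.

```python
# Credal set for a fair coin H and an event E of unknown relevance:
# atoms (H,E), (H,~E), (~H,E), (~H,~E); each member of the set is indexed
# by t in [0, 0.5] and has P(H) = P(E) = 1/2 built in.
def joint(t):
    return {("H", "E"): t, ("H", "~E"): 0.5 - t,
            ("~H", "E"): 0.5 - t, ("~H", "~E"): t}

ts = [i / 200 for i in range(101)]  # sweep t over [0, 0.5]

p_H = {round(sum(p for (h, _), p in joint(t).items() if h == "H"), 10) for t in ts}
p_H_given_E = [joint(t)[("H", "E")] / 0.5 for t in ts]      # P(E) = 0.5 throughout
p_H_given_notE = [joint(t)[("H", "~E")] / 0.5 for t in ts]

print(sorted(p_H))                               # [0.5]  -- a sharp prior on H
print(min(p_H_given_E), max(p_H_given_E))        # 0.0 1.0 -- dilated by E
print(min(p_H_given_notE), max(p_H_given_notE))  # 0.0 1.0 -- and by ~E
```

Uncertainty about H, measured by the width of the probability interval, strictly increases on every cell of the partition {E, ~E}: the signature of dilation.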
Traditionally, species have been treated as classes. In fact they may be considered individuals. The logical term “individual” has been confused with a biological synonym for “organism.” If species are individuals, then: 1) their names are proper, 2) there cannot be instances of them, 3) they do not have defining properties, 4) their constituent organisms are parts, not members. “Species” may be defined as the most extensive units in the natural economy such that reproductive competition occurs among their parts. Species are to evolutionary theory as firms are to economic theory: this analogy resolves many issues, such as the problems of “reality” and the ontological status of nomenclatorial types.
For two ideally rational agents, does learning a finite amount of shared evidence necessitate agreement? No. But does it at least guard against belief polarization, the case in which their opinions get further apart? No. OK, but are rational agents guaranteed to avoid polarization if they have access to an infinite, increasing stream of shared evidence? No.
Joshua Greene has argued that the empirical findings of cognitive science have implications for ethics. In particular, he has argued (1) that people’s deontological judgments in response to trolley problems are strongly influenced by at least one morally irrelevant factor, personal force, and are therefore at least somewhat unreliable, and (2) that we ought to trust our consequentialist judgments more than our deontological judgments when making decisions about unfamiliar moral problems. While many cognitive scientists have rejected Greene’s dual-process theory of moral judgment on empirical grounds, philosophers have mostly taken issue with his normative assertions. For the most part, these two discussions have occurred separately. The current analysis aims to remedy this situation by philosophically analyzing the implications of moral dilemma research using the CNI model of moral decision-making – a formalized, mathematical model that decomposes three distinct aspects of moral-dilemma judgments. In particular, we show how research guided by the CNI model reveals significant conceptual, empirical, and theoretical problems with Greene’s dual-process theory, thereby questioning the foundations of his normative conclusions.
We might think that thought experiments are at their most powerful or most interesting when they produce new knowledge. This would be a mistake; thought experiments that seek understanding are just as powerful and interesting, and perhaps even more so. A growing number of epistemologists are emphasizing the importance of understanding for epistemology, arguing that it should supplant knowledge as the central notion. In this chapter, I bring the literature on understanding in epistemology to bear on explicating the different ways that thought experiments increase three important kinds of understanding: explanatory, objectual and practical.
Rivka Weinberg advances an error theory of ultimate meaning with three parts: (1) a conceptual analysis, (2) the claim that the extension of the concept is empty, and (3) a proposed fitting response, namely being very, very sad. Weinberg’s conceptual analysis of ultimate meaning involves two features that jointly make it metaphysically impossible, namely (i) the separateness of activities and valued ends, and (ii) the bounded nature of human lives. Both are open to serious challenges. We offer an internalist alternative to (i) and a relational alternative to (ii). We then draw out implications for (2) and conclude with reasons to be cheerful about the prospects of a meaningful life.
Sometimes we learn through the use of imagination. The epistemology of imagination asks how this is possible. One barrier to progress on this question has been a lack of agreement on how to characterize imagination; for example, is imagination a mental state, ability, character trait, or cognitive process? This paper argues that we should characterize imagination as a cognitive ability, exercises of which are cognitive processes. Following dual process theories of cognition developed in cognitive science, the set of imaginative processes is then divided into two kinds: one that is unconscious, uncontrolled, and effortless, and another that is conscious, controlled, and effortful. This paper outlines the different epistemological strengths and weaknesses of the two kinds of imaginative process, and argues that a dual process model of imagination helpfully resolves or clarifies issues in the epistemology of imagination and the closely related epistemology of thought experiments.
We provide counterexamples to some purported characterizations of dilation due to Pedersen and Wheeler (2014: 1305–1342; ISIPTA ’15: Proceedings of the 9th International Symposium on Imprecise Probability: Theories and Applications, 2015).
What role does the imagination play in scientific progress? After examining several studies in cognitive science, I argue that one thing the imagination does is help to increase scientific understanding, which is itself indispensable for scientific progress. Then, I sketch a transcendental justification of the role of imagination in this process.
John D. Norton is responsible for a number of influential views in contemporary philosophy of science. This paper will discuss two of them. The material theory of induction claims that inductive arguments are ultimately justified by their material features, not their formal features. Thus, while a deductive argument can be valid irrespective of the content of the propositions that make up the argument, an inductive argument about, say, apples, will be justified (or not) depending on facts about apples. The argument view of thought experiments claims that thought experiments are arguments, and that they function epistemically however arguments do. These two views have generated a great deal of discussion, although there hasn’t been much written about their combination. I argue that despite some interesting harmonies, there is a serious tension between them. I consider several options for easing this tension, before suggesting a set of changes to the argument view that I take to be consistent with Norton’s fundamental philosophical commitments, and which retain what seems intuitively correct about the argument view. These changes require that we move away from a unitary epistemology of thought experiments and towards a more pluralist position.
This essay has two aims. The first is to correct an increasingly popular way of misunderstanding Belot's Orgulity Argument. The Orgulity Argument charges that Bayesianism is defective as a normative epistemology. For concreteness, our argument focuses on Cisewski et al.'s recent rejoinder to Belot. The conditions that underwrite their version of the argument are too strong and Belot does not endorse them on our reading. A more compelling version of the Orgulity Argument than Cisewski et al. present is available, however---a point that we make by drawing an analogy with de Finetti's argument against mandating countable additivity. Having presented the best version of the Orgulity Argument, our second aim is to develop a reply to it. We extend Elga's idea of appealing to finitely additive probability to show that the challenge posed by the Orgulity Argument can be met.
In his article “Beyond Point-and-Shoot Morality,” Joshua Greene argues that the empirical findings of cognitive neuroscience have implications for ethics. Specifically, he contends that we ought to trust our manual, conscious reasoning system more than our automatic, emotional system when confronting unfamiliar problems; and because cognitive neuroscience has shown that consequentialist judgments are generated by the manual system and deontological judgments are generated by the automatic system, we ought to trust the former more than the latter when facing unfamiliar moral problems. In the present article, I analyze one of the premises of Greene’s argument. In particular, I ask what exactly an unfamiliar problem is and whether moral problems can be classified as unfamiliar. After exploring several different possible interpretations of familiarity and unfamiliarity, I conclude that the concepts are too problematic to be philosophically compelling, and thus should be abandoned.
Computational systems biologists create and manipulate computational models of biological systems, but they do not always have straightforward epistemic access to the content and behavioural profile of such models because of their length, coding idiosyncrasies, and formal complexity. This creates difficulties both for modellers in their research groups and for their bioscience collaborators who rely on these models. In this paper we introduce a new kind of visualization that was developed to address just this sort of epistemic opacity. The visualization is unusual in that it depicts the dynamics and structure of a computer model instead of that model’s target system, and because it is generated algorithmically. Using considerations from epistemology and aesthetics, we explore how this new kind of visualization increases scientific understanding of the content and function of computer models in systems biology to reduce epistemic opacity.
Imagination is necessary for scientific practice, yet there are no in vivo sociological studies on the ways that imagination is taught, thought of, or evaluated by scientists. This article begins to remedy this by presenting the results of a qualitative study performed on two systems biology laboratories. I found that the more advanced a participant was in their scientific career, the more they valued imagination. Further, positive attitudes toward imagination were primarily due to the perceived role of imagination in problem-solving. But not all problem-solving episodes involved clear appeals to imagination, only maximally specific problems did. This pattern is explained by the presence of an implicit norm governing imagination use in the two labs: only use imagination on maximally specific problems, and only when all other available methods have failed. This norm was confirmed by the participants, and I argue that it has epistemological reasons in its favour. I also found that its strength varies inversely with career stage, such that more advanced scientists do (and should) occasionally bring their imaginations to bear on more general problems. A story about scientific pedagogy explains the trend away from (and back to) imagination over the course of a scientific career. Finally, some positive recommendations are given for a more imagination-friendly scientific pedagogy.
Scientists imagine for epistemic reasons, and these imaginings can be better or worse. But what does it mean for an imagining to be epistemically better or worse? There are at least three metaepistemological frameworks that present different answers to this question: epistemological consequentialism, deontic epistemology, and virtue epistemology. This paper presents empirical evidence that scientists adopt each of these different epistemic frameworks with respect to imagination, but argues that the way they do this is best explained if scientists are fundamentally epistemic consequentialists about imagination.
Philosophical conceptual analysis is an experimental method. Focusing on this helps to defend it against the skepticism of experimental philosophers who follow Weinberg, Nichols & Stich. To explore the experimental aspect of philosophical conceptual analysis, I consider a simpler instance of the same activity: everyday linguistic interpretation. I argue that this, too, is experimental in nature. And in both conceptual analysis and linguistic interpretation, the intuitions considered problematic by experimental philosophers are necessary but epistemically irrelevant. They are like variables introduced into mathematical proofs which drop out before the solution. Or better, they are like the hypotheses that drive science, which do not themselves need to be true. In other words, it does not matter whether or not intuitions are accurate as descriptions of the natural kinds that undergird philosophical concepts; the aims of conceptual analysis can still be met.
Fodor argued that learning a concept by hypothesis testing would involve an impossible circularity. I show that Fodor's argument implicitly relies on the assumption that actually φ-ing entails an ability to φ. But this assumption is false in cases of φ-ing by luck, and just such luck is involved in testing hypotheses with the kinds of generative random sampling methods that many cognitive scientists take our minds to use. Concepts thus can be learned by hypothesis testing without circularity, and it is plausible that this is how humans in fact acquire at least some of their concepts.
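The core of the reply can be sketched computationally. In this toy (the hypothesis generator and target concept are our own illustrative choices, not drawn from the paper), a learner tests hypotheses produced by a generative random sampler; it can hit on the correct hypothesis by luck, without antecedently possessing an ability to formulate that hypothesis.

```python
import random

random.seed(1)

# Labeled examples of an unknown target concept over 0..19 ("divisible by 3").
examples = [(n, n % 3 == 0) for n in range(20)]

def sample_hypothesis():
    """Generative random sampler: propose 'divisible by k' for a random k.
    The learner need not already grasp the target; a lucky draw suffices."""
    k = random.randint(2, 9)
    return k, (lambda n, k=k: n % k == 0)

def learn(examples, max_draws=1000):
    """Draw hypotheses at random; return the first one consistent with all data."""
    for _ in range(max_draws):
        k, h = sample_hypothesis()
        if all(h(n) == label for n, label in examples):
            return k
    return None

print(learn(examples))  # 3: the target concept, found by sampling luck
```

Nothing in the learner's code contains the concept "divisible by 3" as a prior ability; it emerges as the lucky survivor of random generation plus testing, which is the structure Fodor's circularity argument is said to overlook.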
Meursault, the protagonist of Camus' The Stranger, is unable to grieve, a fact that ultimately leads to his condemnation and execution. Given the emotional distresses involved in grief, should we envy Meursault or pity him? I defend the latter conclusion. As St. Augustine seemed to dimly recognize, the pains of grief are integral to the process of bereavement, a process that both motivates and provides a distinctive opportunity to attain the good of self-knowledge.
Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system runs a risk of poisoning people by using a novel type of fertilizer. Manipulating the computational (or quasi-cognitive) abilities of the AI system in a between-subjects design, we tested people’s willingness to ascribe knowledge of a substantial risk of harm (i.e., recklessness) and blame to the AI system. Furthermore, we investigated whether the ascription of recklessness and blame to the AI system would influence the perceived blameworthiness of the system’s user (or owner). In an experiment with 347 participants, we found (i) that people are willing to ascribe blame to AI systems in contexts of recklessness, (ii) that blame ascriptions depend strongly on the willingness to attribute recklessness and (iii) that the latter, in turn, depends on the perceived “cognitive” capacities of the system. Furthermore, our results suggest (iv) that the higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.
One major aim of the book is to articulate a view of the mechanics of infallible divine foreknowledge that avoids commitment to causal determinism, explains how infallible foreknowledge is compatible with human freedom, and explains how God’s divine providence is compatible with human freedom and indeterministic events. The modest epistemic goal is to articulate a view that enjoys a not very low epistemic status. But even with such modest goals, I think the view cannot credibly be said to offer or . In fact, at critical moments when and are in question, we find very little detailed discussion. There is another epistemological goal in the book. It is to show that we are not in an epistemic position to know that causal determinism provides the basis for explaining how God knows the future and so we are not in a position to know that God’s infallible foreknowledge is incompatible with human freedom. But if infallible foreknow ..
Originally published in 1991, The Laboratory of the Mind: Thought Experiments in the Natural Sciences is the first monograph to identify and address some of the many interesting questions that pertain to thought experiments. While the putative aim of the book is to explore the nature of thought experimental evidence, it has another important purpose which concerns the crucial role thought experiments play in Brown’s Platonic master argument. In that argument, Brown argues against naturalism and empiricism (Brown 2012), for mathematical Platonism (Brown 2008), and from the Platonist-friendly, abstract universals posited by the Dretske-Tooley-Armstrong (DTA) account of the laws of nature to a more general, physical Platonism. The Laboratory of the Mind is where he takes this final step.
I present data that suggest the universal entailments of counterfactual donkey sentences aren’t as universal as some have claimed. I argue that this favors the strategy of attributing these entailments to a special property of the similarity ordering on worlds provided by some contexts, rather than to a semantically encoded sensitivity to assignment.
The aim of this paper is to analyse the central argument of Cottingham’s (1998) Philosophy and the Good Life, and to strengthen and develop it against misinterpretation and objection. Cottingham’s argument is an objection to ‘ratiocentrism’, the view that the good life can be understood in terms of and attained by reason and strength of will. The objection begins from a proper understanding of akrasia, or weakness of will, but its focus, and the focus of this paper, is the relation between reason and the passions in the good life. Akrasia serves to illustrate ratiocentrism’s misunderstanding of this relation and of the nature of the passions themselves. In § I, I outline and clarify the objection. In § II, I present and provisionally elaborate on Cottingham’s diagnosis of what a corrected understanding of the passions makes necessary for the good life, viz. the rediscovery and reclamation of the source of our passions, our childhood past. In § III, I discuss whether ratiocentrism could accept and absorb the critique as developed so far. Cottingham (1998: 162) is aware that his claim, with its emphasis on self-knowledge, could be reinterpreted by ratiocentrism as no more than the need for reason to work with a different source of information regarding the passions in order to master them. I briefly present three further objections to show why this is a mistake. In § IV, I argue that Cottingham’s diagnosis is not quite right, and I seek to emphasise aspects of self-discovery that I believe Cottingham overlooks or underplays. What is needed is a set of interrelated dispositions, viz. acceptance, vulnerability, courage, and compassion; these can be inculcated and sustained by the journey Cottingham defends, but it is the dispositions, rather than the journey, that are properly considered a necessary part of the good life.
Michael Sandel’s latest book is not a scholarly work but is clearly intended as a work of public philosophy—a contribution to public rather than academic discourse. The book makes two moves. The first, which takes up most of it, is to demonstrate by means of a great many examples, mostly culled from newspaper stories, that markets and money corrupt—degrade—the goods they are used to allocate. The second follows from the first as Sandel’s proposed solution: we as a society should deliberate together about the proper meaning and purpose of various goods, relationships, and activities (such as baseball and education) and how they should be valued. Public philosophy is a different genre from academic philosophy, but that does not mean that it cannot be held to high standards. In my view, while this book does provide food for thought and food for conversation, it nevertheless has significant failings as a work of public philosophy rather than journalistic social activism on the model of Naomi Klein’s No Logo (1999).
Three-dimensional material models of molecules were used throughout the 19th century, either functioning as mere representations or opening new epistemic horizons. In this paper, two case studies are examined: the 1875 models of van ‘t Hoff and the 1890 models of Sachse. What is unique in these two case studies is that both models were not only folded, but were also conceptualized mathematically. When viewed in light of the chemical research of that period, both of these aspects were exceptional considered singly, and taken together they may be thought of as a subversion of the way molecules were chemically investigated in the 19th century. Concentrating on this unique shared characteristic in the models of van ‘t Hoff and the models of Sachse, this paper deals with the shifts and displacements between their operational methods and existence: between their technical and epistemological aspects and the fact that they were folded, which was forgotten or simply ignored in the subsequent development of chemistry.
An observation of Hume’s has received a lot of attention over the last decade and a half: Although we can standardly imagine the most implausible scenarios, we encounter resistance when imagining propositions at odds with established moral (or perhaps more generally evaluative) convictions. The literature is rife with ‘solutions’ to this so-called ‘Puzzle of Imaginative Resistance’. Few, however, question the plausibility of the empirical assumption at the heart of the puzzle. In this paper, we explore empirically whether the difficulty we witness in imagining certain propositions is indeed due to claim type (evaluative v. non-evaluative) or whether it is rather driven by mundane features of content. Our findings suggest that claim type plays but a marginal role, and that there might hence not be much of a ‘puzzle’ to be solved.
In recent years, there has been a heated debate about how to interpret findings that seem to show that humans rapidly and automatically calculate the visual perspectives of others. In the current study, we investigated the question of whether automatic interference effects found in the dot-perspective task (Samson, Apperly, Braithwaite, Andrews, & Bodley Scott, 2010) are the product of domain-specific perspective-taking processes or of domain-general “submentalizing” processes (Heyes, 2014). Previous attempts to address this question have done so by implementing inanimate controls, such as arrows, as stimuli. The rationale for this is that submentalizing processes that respond to directionality should be engaged by such stimuli, whereas domain-specific perspective-taking mechanisms, if they exist, should not. These previous attempts have been limited, however, by the implied intentionality of the stimuli they have used (e.g. arrows), which may have invited participants to imbue them with perspectival agency. Drawing inspiration from “novel entity” paradigms from infant gaze-following research, we designed a version of the dot-perspective task that allowed us to precisely control whether a central stimulus was viewed as animate or inanimate. Across four experiments, we found no evidence that automatic “perspective-taking” effects in the dot-perspective task are modulated by beliefs about the animacy of the central stimulus. Our results also suggest that these effects may be due to the task-switching elements of the dot-perspective paradigm, rather than automatic directional orienting. Together, these results indicate that neither the perspective-taking nor the standard submentalizing interpretations of the dot-perspective task are fully correct.
Debunking skeptics claim that our moral beliefs are formed by processes unsuited to identifying objective facts, such as emotions inculcated by our genes and culture; therefore, they say, even if there are objective moral facts, we probably don’t know them. I argue that the debunking skeptics cannot explain the pervasive trend toward liberalization of values over human history, and that the best explanation is the realist’s: humanity is becoming increasingly liberal because liberalism is the objectively correct moral stance.
A moral theory T is esoteric if and only if T is true but there are some individuals who, by the lights of T itself, ought not to embrace T, where to embrace T is to believe T and rely upon it in practical deliberation. Some philosophers hold that esotericism is a strong, perhaps even decisive, reason to reject a moral theory. However, proponents of this objection have often supposed its force is obvious and have said little to articulate it. I defend a version of this objection—namely, that, in light of the strongly first-personal epistemology of benefit and burden, esoteric theories fail to justify the allocation of benefits and burdens to which moral agents would be subject under such theories. Because of the holistic nature of moral-theory justification, this conclusion in turn implies that the entirety of a moral theory must be open to public scrutiny in order for the theory to be justified. I conclude by answering several objections to my account of the esotericism objection.
The fact of evolution raises important questions for the position of moral realism, because the origin of our moral dispositions in a contingent evolutionary process is on the face of it incompatible with the view that our moral beliefs track independent moral truths. Moreover, this meta-ethical worry seems to undermine the normative justification of our moral norms and beliefs. If we don’t have any grounds to believe that the source of our moral beliefs has any ontological authority, how can our moral judgments be justified in an objective way? In this chapter, I argue that while traditional moral realism is untenable in the light of evolution, normative justification should not be handed the same fate. It is precisely in the fact that moral norms and beliefs are grounded in evolved, innate and therefore universally shared intuitions that those norms and beliefs can be objective-for-us. Such an internalist justification allows us to differentiate moral right from wrong, not because some feature of the external world forces us to acknowledge this, but because our moral nature forces us to project this moral judgment on the world. What’s more, guided by this innate moral compass we can both assess and realize moral progress.
This article offers a novel solution to the problem of material constitution: by including non-concrete objects among the parts of material objects, we can avoid having a statue and its constituent piece of clay composed of all the same proper parts. Non-concrete objects—objects that aren’t concrete, but possibly are—have been used in defense of the claim that everything necessarily exists. But the account offered shows that non-concreta are independently useful in other domains as well. The resulting view falls under a ‘nonmaterial partist’ class of views that includes, in particular, Laurie Paul’s and Kathrin Koslicki’s constitution views; ones where material objects have properties or structures as parts respectively. The article gives reasons for preferring the non-concretist solution over these other non-material partist views and defends it against objections.
Suppose you can save only one of two groups of people from harm, with one person in one group, and five persons in the other group. Are you obligated to save the greater number? While common sense seems to say ‘yes’, the numbers skeptic says ‘no’. Numbers Skepticism has been partly motivated by the anti-consequentialist thought that the goods, harms and well-being of individual people do not aggregate in any morally significant way. However, even many non-consequentialists think that Numbers Skepticism goes too far in rejecting the claim that you ought to save the greater number. Besides the prima facie implausibility of Numbers Skepticism, Michael Otsuka has developed an intriguing argument against this position. Otsuka argues that Numbers Skepticism, in conjunction with an independently plausible moral principle, leads to inconsistent choices regarding what ought to be done in certain circumstances. This inconsistency in turn provides us with a good reason to reject Numbers Skepticism. Kirsten Meyer offers a notable challenge to Otsuka’s argument. I argue that Meyer’s challenge can be met, and then offer my own reasons for rejecting Otsuka’s argument. In light of these criticisms, I then develop an improved, yet structurally similar argument to Otsuka’s argument. I argue for the slightly different conclusion that the view proposed by John Taurek that ‘the numbers don’t count’ leads to inconsistent choices, which in turn provides us with a good reason to reject Taurek’s position.
There has been a growing charge that perdurantism—with its bloated ontology of very person-like objects that coincide with persons—implies the repugnant conclusion that we are morally obliged to be feckless. I argue that this charge critically overlooks the epistemic situation—what I call the ‘veil of ignorance’—that perdurantists find themselves in. Though the veil of ignorance still requires an alteration of our commonsense understanding of the demands on action, I argue for two conclusions. The first is that the alteration that is required isn’t a moral one, but rather an alteration of prudential reasoning. Second, and more importantly, this alteration isn’t necessarily a repugnant one. In fact, given that it prudentially pushes one towards greater impartiality, it may be seen as a point in favor of perdurantism.
The ‘Big 3’ theories of well-being—hedonism, desire-satisfactionism, and objective list theory—attempt to explain why certain things are good for people by appealing to prudentially good-making properties. But they don’t attempt to explain why the properties they advert to make something good for a person. Perfectionism, the view that well-being consists in nature-fulfilment, is often considered a competitor to these views (or else a version of the objective list theory). However, I argue that perfectionism is best understood as explaining why certain properties are prudentially good-making. This version of perfectionism is compatible with each of the Big 3, and, I argue, quite attractive.
Standard Analytic Epistemology (SAE) names a contingently clustered class of methods and theses that have dominated English-speaking epistemology for about the past half-century. The major contemporary theories of SAE include versions of foundationalism, coherentism, reliabilism, and contextualism. While proponents of SAE don’t agree about how to define naturalized epistemology, most agree that a thoroughgoing naturalism in epistemology can’t work. For the purposes of this paper, we will suppose that a naturalistic theory of epistemology takes as its core, as its starting-point, an empirical theory. The standard argument against naturalistic approaches to epistemology is that empirical theories are essentially descriptive, while epistemology is essentially prescriptive, and a descriptive theory cannot yield normative, evaluative prescriptions. In short, naturalistic theories cannot overcome the is-ought divide. Our main goal in this paper is to show that the standard argument against naturalized epistemology has it almost exactly backwards.
I motivate “Origin Conventionalism”—the view that which facts about one’s origins are essential to one’s existence in part depend on our person-directed attitudes. One important upshot of the view is that it offers a novel and attractive solution to the Nonidentity Problem. The Nonidentity Problem typically assumes that the sperm-egg pair from which a person originates is essential to that person’s existence; if so, then for many future persons that come into existence under adverse conditions, had those conditions not been realized, the individuals wouldn’t have existed. This is problematic since it delivers the counter-intuitive conclusion that it’s not wrong to bring about such adverse conditions since they don’t harm anyone. Origin Conventionalism, in contrast, holds that whether a person’s sperm-egg origin is essential to their existence depends on their person-directed attitudes. I argue that this provides a unique and attractive way of preserving the intuition that the actions in the ‘nonidentity cases’ are morally wrong because of the potential harm done to the individuals in question.
Why should a thought experiment, an experiment that exists only in people’s minds, alter our fundamental beliefs about reality? After all, isn’t reasoning from the imaginary to the real a sign of psychosis? A historical survey of how thought experiments have shaped our physical laws might lead one to believe that it’s not the case that the laws of physics lie: it’s that they don’t even pretend to tell the truth. My aim in this paper is to defend an account of thought experiments that fits smoothly into our understanding of the historical trajectory of actual thought experiments and that explains how any rational person could allow an imagined, unrealized (or unrealizable) situation to change their conception of the universe.
This paper explores the epistemological significance of the view that we can literally see, hear, and touch evaluative properties (the high-level theory of value perception). My central contention is that, from the perspective of epistemology, the question of whether there are such high-level experiences doesn’t matter. Insofar as there are such experiences, they most plausibly emerged through the right kind of interaction with evaluative capacities that are not literally perceptual (e.g., of the sort involved in imaginative evaluative reflection). But even if these other evaluative capacities turn out not to alter the content of perceptual experience, they would still be sufficient to do all of the justificatory work that high-level experiences are meant to do. I close by observing that it may matter a great deal whether a certain other picture of value perception is true. This alternative picture has it that desires and/or emotions are perceptual-like experiences of value.