Although they are continually compositionally reconstituted and reconfigured, organisms nonetheless persist as ontologically unified beings over time – but in virtue of what? A common answer is: in virtue of their continued possession of the capacity for morphological invariance which persists through, and in spite of, their mereological alteration. While we acknowledge that organisms' capacity for the "stability of form" – homeostasis – is an important aspect of their diachronic unity, we argue that this capacity is derived from, and grounded in, a more primitive one – namely, the homeodynamic capacity for the "specified variation of form". In introducing a novel type of causal power – a 'structural power' – we claim that it is the persistence of their dynamic potential to produce a specified series of structurally adaptive morphologies which grounds organisms' privileged status as metaphysically "one over many" over time.
Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such "all-accuracy" or "purely alethic" approaches can accommodate and justify evidential Bayesian norms. In response, proponents of purely alethic approaches, such as Pettigrew (2013b) and Joyce (2016), have argued that scoring rule arguments provide us with compatible and purely alethic justifications for the traditional Bayesian norms, including evidential norms. In this paper I raise several challenges to this claim. First, I argue that many of the justifications these scoring rule arguments provide are not compatible. Second, I raise worries for the claim that these scoring rule arguments provide purely alethic justifications. Third, I turn to assess the more general question of whether purely alethic justifications for evidential norms are even possible, and argue that, without making some contentious assumptions, they are not. Fourth, I raise some further worries for the possibility of providing purely alethic justifications for content-sensitive evidential norms, like the Principal Principle.
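For readers unfamiliar with the machinery, the scoring rules at issue measure the inaccuracy of a credence function at a world; a standard example (illustrative here, not the paper's own choice) is the Brier score:

$$\mathcal{I}(c, w) \;=\; \sum_{A \in \mathcal{F}} \big(c(A) - v_w(A)\big)^2,$$

where $c(A)$ is the agent's credence in proposition $A$ and $v_w(A)$ is $A$'s truth value (1 or 0) at world $w$. Scoring rule arguments then contend that norms such as Probabilism fall out of the injunction to minimize expected inaccuracy so measured.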
In recent work, Callender and Cohen (2009) and Hoefer (2007) have proposed variants of the account of chance proposed by Lewis (1994). One of the ways in which these accounts diverge from Lewis’s is that they allow special sciences and the macroscopic realm to have chances that are autonomous from those of physics and the microscopic realm. A worry for these proposals is that autonomous chances may place incompatible constraints on rational belief. I examine this worry, and attempt to determine (i) what kinds of conflicts would be problematic, and (ii) whether these proposals lead to problematic conflicts. After working through a pair of cases, I conclude that these proposals do give rise to problematic conflicts.
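The shape of the worry can be put schematically (my gloss, assuming the standard chance-credence principle): if physics assigns $ch_{micro}(A) = x$ while an autonomous special science assigns $ch_{macro}(A) = y$ with $x \neq y$, then any principle of the form

$$cr(A \mid ch(A) = z) = z$$

issues incompatible constraints on the credences of an agent who knows both chance assignments.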
This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’ treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they’re not as bad as those of Elga’s account, and no worse than those of Lewis’ account.
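For context, the two familiar answers assign the following credences upon waking (standard in the literature, not specific to this paper's argument):

$$P_{Elga}(\text{Heads}) = \tfrac{1}{3}, \qquad P_{Elga}(\text{Heads} \mid \text{Monday}) = \tfrac{1}{2};$$
$$P_{Lewis}(\text{Heads}) = \tfrac{1}{2}, \qquad P_{Lewis}(\text{Heads} \mid \text{Monday}) = \tfrac{2}{3}.$$

Lewis's conditional credence – assigning 2/3 to Heads upon learning that it is Monday, about a fair coin toss – is among the counterintuitive consequences at issue.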
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
Although contemporary metaphysics has recently undergone a neo-Aristotelian revival wherein dispositions, or capacities, are now commonplace in empirically grounded ontologies, being routinely utilised in theories of causality and modality, a central Aristotelian concept has yet to be given serious attention – the doctrine of hylomorphism. The reason for this is clear: while the Aristotelian ontological distinction between actuality and potentiality has proven to be a fruitful conceptual framework with which to model the operation of the natural world, the distinction between form and matter has yet to similarly earn its keep. In this chapter, I offer a first step toward showing that the hylomorphic framework is up to that task. To do so, I return to the birthplace of that doctrine - the biological realm. Utilising recent advances in developmental biology, I argue that the hylomorphic framework is an empirically adequate and conceptually rich explanatory schema with which to model the nature of organisms.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
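Schematically (a standard way of putting the contrast; the paper's own formulations may differ in detail): Conditionalization updates one's previous credences on the newly acquired evidence, whereas an ur-prior rule determines one's credences at each time by conditionalizing a fixed hypothetical prior $up$ on one's total evidence at that time:

$$\text{Conditionalization:}\quad cr_{t+1}(\cdot) = cr_t(\cdot \mid E), \qquad \text{Ur-Prior Rule:}\quad cr_t(\cdot) = up(\cdot \mid E_t^{total}).$$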
This paper examines two mistakes regarding David Lewis’ Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
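For reference, the Principal Principle in its usual schematic form: where $cr$ is a reasonable initial credence function, $\langle ch_t(A) = x \rangle$ the proposition that the chance of $A$ at time $t$ is $x$, and $E$ any admissible evidence,

$$cr(A \mid \langle ch_t(A) = x \rangle \wedge E) = x.$$

The disputes the paper addresses concern which propositions $E$ count as admissible and which time index the principle should carry.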
Several variants of Lewis's Best System Account of Lawhood have been proposed that avoid its commitment to perfectly natural properties. There has been little discussion of the relative merits of these proposals, and little discussion of how one might extend this strategy to provide natural property-free variants of Lewis's other accounts, such as his accounts of duplication, intrinsicality, causation, counterfactuals, and reference. We undertake these projects in this paper. We begin by providing a framework for classifying and assessing the variants of the Best System Account. We then evaluate these proposals, and identify the most promising candidates. We go on to develop a proposal for systematically modifying Lewis's other accounts so that they, too, avoid commitment to perfectly natural properties. We conclude by briefly considering a different route one might take to developing natural property-free versions of Lewis's other accounts, drawing on recent work by Williams.
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
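The bare schema of the rule, alongside Jeffrey Conditionalization (relevant because the paper questions whether the latter really generalizes the former): upon a learning experience bearing on the proposition $E$, or on the partition $\{E_i\}$,

$$\text{Conditionalization:}\quad cr_{new}(A) = cr_{old}(A \mid E); \qquad \text{Jeffrey:}\quad cr_{new}(A) = \sum_i cr_{old}(A \mid E_i)\, cr_{new}(E_i).$$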
A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
I argue that the theory of chance proposed by David Lewis has three problems: (i) it is time asymmetric in a manner incompatible with some of the chance theories of physics, (ii) it is incompatible with statistical mechanical chances, and (iii) the content of Lewis's Principal Principle depends on how admissibility is cashed out, but there is no agreement as to what admissible evidence should be. I propose two modifications of Lewis's theory which resolve these difficulties. I conclude by tentatively proposing a third modification of Lewis's theory, one which explains many of the common features shared by the chance theories of physics.
In Reasons and Persons, Parfit (1984) posed a challenge: provide a satisfying normative account that solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox. In response, some have suggested that we look toward person-affecting views of morality for a solution. But the person-affecting views that have been offered so far have been unable to satisfy Parfit's four requirements, and these views have been subject to a number of independent complaints. This paper describes a person-affecting account which meets Parfit's challenge. The account satisfies Parfit's four requirements, and avoids many of the criticisms that have been raised against person-affecting views.
In “Bayesianism, Infinite Decisions, and Binding”, Arntzenius et al. (Mind 113:251–283, 2004) present cases in which agents who cannot bind themselves are driven by standard decision theory to choose sequences of actions with disastrous consequences. They defend standard decision theory by arguing that if a decision rule leads agents to disaster only when they cannot bind themselves, this should not be taken to be a mark against the decision rule. I show that this claim has surprising implications for a number of other debates in decision theory. I then assess the plausibility of this claim, and suggest that it should be rejected.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The first of these articles provides a brief sketch of statistical mechanics, and discusses the indifference approach to statistical mechanical probabilities.
Dispositional properties are often referred to as ‘causal powers’, but what does dispositional causation amount to? Any viable theory must account for two fundamental aspects of the metaphysics of causation – the causal complexity and context sensitivity of causal interactions. The theory of mutual manifestations attempts to do so by locating the complexity and context sensitivity within the nature of dispositions themselves. But is this theory an acceptable first step towards a viable theory of dispositional causation? This paper argues that the reconceptualization that the theory entails comes at too high a price, and is an unnecessary step in the wrong direction: these two central aspects concerning the metaphysics of causation can and should be accounted for in a dispositional account of causation without it.
According to the proponents of Developmental Systems Theory and the Causal Parity Thesis, the privileging of the genome as “first among equals” with respect to the development of phenotypic traits is more a reflection of our own heuristic prejudice than of ontology - the underlying causal structures responsible for that specified development no more single out the genome as primary than they do other broadly “environmental” factors. Parting with the methodology of the popular responses to the Thesis, this paper offers a novel criterion for ‘causal primacy’, one that is grounded in the ontology of the unique causal role of dispositional properties. This paper argues that, if the genome is conceptualised as realising dispositional properties that are “directed toward” phenotypic traits, the parity of ‘causal roles’ between genetic and extra-genetic factors is no longer apparent, and further, that the causal primacy of the genome is both plausible and defensible.
Standard decision theory has trouble handling cases involving acts without finite expected values. This paper has two aims. First, building on earlier work by Colyvan (2008), Easwaran (2014), and Lauwers and Vallentyne (2016), it develops a proposal for dealing with such cases, Difference Minimizing Theory. Difference Minimizing Theory provides satisfactory verdicts in a broader range of cases than its predecessors. And it vindicates two highly plausible principles of standard decision theory, Stochastic Equivalence and Stochastic Dominance. The second aim is to assess some recent arguments against Stochastic Equivalence and Stochastic Dominance. If successful, these arguments refute Difference Minimizing Theory. This paper contends that these arguments are not successful.
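The two principles can be stated as follows (standard formulations; the paper's own statements may differ in detail): for acts $A$ and $B$, and any utility level $k$,

$$\text{Stochastic Equivalence:}\quad \forall k,\ \Pr\nolimits_A(u \geq k) = \Pr\nolimits_B(u \geq k) \;\Rightarrow\; A \sim B;$$
$$\text{Stochastic Dominance:}\quad \forall k,\ \Pr\nolimits_A(u \geq k) \geq \Pr\nolimits_B(u \geq k) \;\Rightarrow\; A \succeq B.$$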
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The second of these articles discusses the regularity approach to statistical mechanical probabilities, and describes some areas where further research is needed.
In identifying intrinsic molecular chance and extrinsic adaptive pressures as the only causally relevant factors in the process of evolution, the theoretical perspective of the Modern Synthesis had a major impact on the perceived tenability of an ontology of dispositional properties. However, since the late 1970s, an increasing number of evolutionary biologists have challenged the descriptive and explanatory adequacy of this “chance alone, extrinsic only” understanding of evolutionary change. Because morphological studies of homology, convergence, and teratology have revealed a space of possible forms and phylogenetic trajectories that is considerably more restricted than expected, evo-devo has focused on the causal contribution of intrinsic developmental processes to the course of evolution. Evo-devo’s investigation into the developmental structure of the modality of morphology – including both the possibility and impossibility of organismal form – has led to the utilisation of a number of dispositional concepts that emphasise the tendency of the evolutionary process to change along certain routes. In this sense, and in contrast to the perspective of the Modern Synthesis, evo-devo can be described as a “science of dispositions.” This chapter discusses the recent philosophical literature on dispositional properties in evo-devo, exploring debates about both the metaphysical and epistemological aspects of the central dispositional concepts utilised in contemporary evo-devo and addressing the epistemological question of how dispositional properties challenge existing explanatory models in evolutionary biology.
According to dispositionalism, de re modality is grounded in the intrinsic natures of dispositional properties. Those properties are able to serve as the ground of de re modal truths, it is said, because they bear a special relation to counterfactual conditionals, one of truthmaking. However, because dispositionalism purports to ground de re modality only on the intrinsic natures of dispositional properties, it had better be the case that they do not play that truthmaking role merely in virtue of their being embedded in some particular, extrinsic causal context. This paper examines a recent argument against dispositionalism that purports to show that the intrinsicality of that relation cannot be maintained, due to the ceteris paribus nature of the counterfactuals that dispositions make true. When two prominent responses are examined, both are found wanting: at best, they require unjustified special pleading, and at worst, they amount to little more than ad hoc conceptual trickery.
A collection of original and innovative essays that compare the justice issues raised by climate engineering to the justice issues raised by competing approaches to solving the climate problem.
According to contemporary ‘process ontology’, organisms are best conceptualised as spatio-temporally extended entities whose mereological composition is fundamentally contingent and whose essence consists in changeability. In contrast to the Aristotelian precepts of classical ‘substance ontology’, from the four-dimensional perspective of this framework, the identity of an organism is grounded not in certain collections of privileged properties, or features which it could not fail to possess, but in the succession of diachronic relations by which it persists, or ‘perdures’ as one entity over time. In this paper, I offer a novel defence of substance ontology by arguing that the coherency and plausibility of the radical reconceptualisation of organisms proffered by process ontology ultimately depends upon its making use of the ‘substantial’ principles it purports to replace.
Bio-ontologies are essential tools for accessing and analyzing the rapidly growing pool of plant genomic and phenomic data. Ontologies provide structured vocabularies to support consistent aggregation of data and a semantic framework for automated analyses and reasoning. They are a key component of the Semantic Web. This paper provides background on what bio-ontologies are, why they are relevant to botany, and the principles of ontology development. It includes an overview of ontologies and related resources that are relevant to plant science, with a detailed description of the Plant Ontology (PO). We discuss the challenges of building an ontology that covers all green plants (Viridiplantae). Key results: Ontologies can advance plant science in four key areas: (1) comparative genetics, genomics, phenomics, and development; (2) taxonomy and systematics; (3) semantic applications; and (4) education. Conclusions: Bio-ontologies offer a flexible framework for comparative plant biology, based on common botanical understanding. As genomic and phenomic data become available for more species, we anticipate that the annotation of data with ontology terms will become less centralized, while at the same time, the need for cross-species queries will become more common, causing more researchers in plant science to turn to ontologies.
As there is a neo-Aristotelian revival currently taking place within contemporary metaphysics, and dispositions, or causal powers, are now being routinely utilised in theories of causality and modality, more attention is beginning to be paid to a central Aristotelian concern: the metaphysics of substantial unity, and the doctrine of hylomorphism. In this paper, I distinguish two strands of hylomorphism present in the contemporary literature and argue that not only does each engender unique conceptual difficulties, but neither adequately captures the metaphysics of Aristotelian hylomorphism. Thus both strands of contemporary hylomorphism, I argue, fundamentally misunderstand what substantial unity amounts to in the hylomorphic framework – namely, the metaphysical inseparability of matter and form.
The biological sciences have always proven a fertile ground for philosophical analysis, one from which has grown a rich tradition stemming from Aristotle and flowering with Darwin. And although contemporary philosophy is increasingly becoming conceptually entwined with the study of the empirical sciences, with the data of the latter now being regularly utilised in the establishment and defence of the frameworks of the former – a practice especially prominent in the philosophy of physics – the development of that tradition hasn’t received the wider attention it so thoroughly deserves. This review will briefly introduce some recent significant topics of debate within the philosophy of biology, focusing on those whose metaphysical themes (in everything from composition to causation) are likely to be of wide-reaching, cross-disciplinary interest.
This is a review of Toby Handfield's book, "A Philosophical Guide to Chance", which discusses Handfield's Debunking Argument against realist accounts of chance.
One popular approach to statistical mechanics understands statistical mechanical probabilities as measures of rational indifference. Naive formulations of this ``indifference approach'' face reversibility worries - while they yield the right prescriptions regarding future events, they yield the wrong prescriptions regarding past events. This paper begins by showing how the indifference approach can overcome the standard reversibility worries by appealing to the Past Hypothesis. But, the paper argues, positing a Past Hypothesis doesn't free the indifference approach from all reversibility worries. For while appealing to the Past Hypothesis allows it to escape one kind of reversibility worry, it makes it susceptible to another - the Meta-Reversibility Objection. And there is no easy way for the indifference approach to escape the Meta-Reversibility Objection. As a result, reversibility worries pose a steep challenge to the viability of the indifference approach.
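Schematically (a standard way of rendering the view, not a quotation from the paper): the indifference approach takes the probability of a proposition $A$ to be given by the uniform (Liouville) measure $\mu$ conditionalized on the system's current macrostate $M$; adding the Past Hypothesis $PH$ conditionalizes on the low-entropy initial macrostate as well:

$$\Pr(A) = \frac{\mu(A \cap M \cap PH)}{\mu(M \cap PH)}.$$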
Theories that use expected utility maximization to evaluate acts have difficulty handling cases with infinitely many utility contributions. In this paper I present and motivate a way of modifying such theories to deal with these cases, employing what I call “direct difference taking”. This proposal has a number of desirable features: it’s natural and well-motivated, it satisfies natural dominance intuitions, and it yields plausible prescriptions in a wide range of cases. I then compare my account to the most plausible alternative, a proposal offered by Arntzenius (2014). I argue that while Arntzenius’s proposal has many attractive features, it runs into a number of problems which direct difference taking avoids.
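A minimal computational sketch of the difference-taking idea (my illustration under simplifying assumptions, not the paper's formal account; the function and payoff schemes below are hypothetical): when each act's own expected utility diverges, one can still compare the acts by taking utility differences state by state and checking whether the expected difference converges.

    def expected_difference(p, u_a, u_b, states):
        """Approximate E[U_A - U_B] = sum_i p(i) * (u_a(i) - u_b(i)).
        This sum may converge even when E[U_A] and E[U_B] both diverge."""
        return sum(p(i) * (u_a(i) - u_b(i)) for i in states)

    # Hypothetical St. Petersburg-style pair: state i occurs with
    # probability 2^-(i+1), so each act's expected utility is an
    # infinite sum of 1s; act B pays one extra utile in every state.
    p   = lambda i: 2.0 ** -(i + 1)
    u_a = lambda i: 2.0 ** (i + 1)
    u_b = lambda i: 2.0 ** (i + 1) + 1.0

    print(expected_difference(p, u_a, u_b, range(60)))  # ~ -1.0: B is better

The point of the toy construction: expected utility maximization is silent here, since both expectations are infinite, while the state-by-state difference delivers a determinate verdict in B's favour.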
As biological and biomedical research increasingly references the environmental context of the biological entities under study, the need for formalisation and standardisation of environment descriptors is growing. The Environment Ontology (ENVO) is a community-led, open project which seeks to provide an ontology for specifying a wide range of environments relevant to multiple life science disciplines and, through an open participation model, to accommodate the terminological requirements of all those needing to annotate data using ontology classes. This paper summarises ENVO’s motivation, content, structure, adoption, and governance approach.
This article examines two questions about scientists’ search for knowledge. First, which search strategies generate discoveries effectively? Second, is it advantageous to diversify search strategies? We argue pace Weisberg and Muldoon, “Epistemic Landscapes and the Division of Cognitive Labor”, that, on the first question, a search strategy that deliberately seeks novel research approaches need not be optimal. On the second question, we argue they have not shown epistemic reasons exist for the division of cognitive labor, identifying the errors that led to their conclusions. Furthermore, we generalize the epistemic landscape model, showing that one should be skeptical about the benefits of social learning in epistemically complex environments.
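A toy rendering of the modelling framework at issue (my sketch only; Weisberg and Muldoon's agent-based model, with its 'control', 'follower', and 'maverick' strategies, is considerably richer, and the functions make_landscape and hill_climb below are my own hypothetical constructions): agents search a grid of epistemic significance values, and one can total the significance their visited patches uncover.

    import random

    def make_landscape(size=50, n_peaks=3, seed=0):
        """A grid of epistemic significance: a few conical peaks, zero elsewhere."""
        rng = random.Random(seed)
        peaks = [(rng.randrange(size), rng.randrange(size), rng.uniform(5, 10))
                 for _ in range(n_peaks)]
        return [[sum(max(0.0, h - ((x - px) ** 2 + (y - py) ** 2) ** 0.5)
                     for px, py, h in peaks)
                 for y in range(size)]
                for x in range(size)]

    def hill_climb(land, start, max_steps=200):
        """Greedy local search: repeatedly move to the best neighbour; return visited cells."""
        size = len(land)
        x, y = start
        visited = {(x, y)}
        for _ in range(max_steps):
            nbrs = [((x + dx) % size, (y + dy) % size)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
            bx, by = max(nbrs, key=lambda c: land[c[0]][c[1]])
            if land[bx][by] <= land[x][y]:
                break  # local peak: this strategy stops here
            x, y = bx, by
            visited.add((x, y))
        return visited

    land = make_landscape()
    rng = random.Random(1)
    starts = [(rng.randrange(50), rng.randrange(50)) for _ in range(10)]
    explored = set().union(*(hill_climb(land, s) for s in starts))
    print(sum(land[x][y] for x, y in explored))  # significance uncovered by 10 hill-climbers

Swapping in other search strategies and comparing the totals they uncover is the basic move behind claims about which strategies, and which mixtures of strategies, are epistemically effective.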
I aim to show that the failure of the California Class Size Reduction initiative highlights an important class of situations in the application of evidence to policy. There are some circumstances in which the implementation of a policy will be self-defeating. The introduction of the factor assumed to have causal efficacy into the target population can lead to changes in the conditions of the target population that amount to interfering factors. Because these interfering factors are a direct result of the policy implementation they should be relatively easy to predict, and so part of the tricky issue of judging where evidence is relevant should in some circumstances be relatively straightforward. The failure of the California Class Size Reduction initiative also shows how important it is to identify the correct causal factor. The more accurate the attribution of causality, the less susceptible it will be to interfering factors and breaks in the causal chain.
The Plant Ontology (PO) is a community resource consisting of standardized terms, definitions, and logical relations describing plant structures and development stages, augmented by a large database of annotations from genomic and phenomic studies. This paper describes the structure of the ontology and the design principles we used in constructing PO terms for plant development stages. It also provides details of the methodology and rationale behind our revision and expansion of the PO to cover development stages for all plants, particularly the land plants (bryophytes through angiosperms). As a case study to illustrate the general approach, we examine variation in gene expression across embryo development stages in Arabidopsis and maize, demonstrating how the PO can be used to compare patterns of expression across stages and in developmentally different species. Although many genes appear to be active throughout embryo development, we identified a small set of uniquely expressed genes for each stage of embryo development and also between the two species. Evaluating the different sets of genes expressed during embryo development in Arabidopsis or maize may inform future studies of the divergent developmental pathways observed in monocotyledonous versus dicotyledonous species. The PO and its annotation database make plant data for any species more discoverable and accessible through common formats, thus providing support for applications in plant pathology, image analysis, and comparative development and evolution.
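The stage-comparison logic described here reduces to set operations over annotation sets; a minimal sketch (with purely hypothetical gene identifiers and simplified stage labels, not real PO annotation data):

    # Hypothetical expression data keyed by (species, development stage);
    # real annotations would use numeric PO identifiers.
    expressed = {
        ("arabidopsis", "globular stage"): {"geneA", "geneB", "geneC"},
        ("arabidopsis", "heart stage"):    {"geneB", "geneC", "geneD"},
        ("maize",       "globular stage"): {"geneB", "geneE"},
    }

    def uniquely_expressed(species, stage):
        """Genes recorded at this (species, stage) and at no other recorded key."""
        target = expressed[(species, stage)]
        others = set().union(*(genes for key, genes in expressed.items()
                               if key != (species, stage)))
        return target - others

    print(uniquely_expressed("arabidopsis", "globular stage"))  # {'geneA'}

Because the stage terms form a shared vocabulary, the same query runs unchanged across species, which is the cross-species comparability the PO is designed to provide.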
In this article, we propose the Fair Priority Model for COVID-19 vaccine distribution, and emphasize three fundamental values we believe should be considered when distributing a COVID-19 vaccine among countries: Benefiting people and limiting harm, prioritizing the disadvantaged, and equal moral concern for all individuals. The Priority Model addresses these values by focusing on mitigating three types of harms caused by COVID-19: death and permanent organ damage, indirect health consequences, such as health care system strain and stress, as well as economic destruction. It proposes proceeding in three phases: the first addresses premature death, the second long-term health issues and economic harms, and the third aims to contain viral transmission fully and restore pre-pandemic activity. To those who may deem an ethical framework irrelevant because of the belief that many countries will pursue "vaccine nationalism," we argue such a framework still has broad relevance. Reasonable national partiality would permit countries to focus on vaccine distribution within their borders up until the rate of transmission is below 1, at which point there would not be sufficient vaccine-preventable harm to justify retaining a vaccine. When a government reaches the limit of national partiality, it should release vaccines for other countries. We also argue against two other recent proposals. Distributing a vaccine proportional to a country's population mistakenly assumes that equality requires treating differently situated countries identically. Prioritizing countries according to the number of front-line health care workers, the proportion of the population over 65, and the number of people with comorbidities within each country may exacerbate disadvantage and end up giving the vaccine in large part to wealthy nations.
I offer a new argument for the elimination of ‘beliefs’ from cognitive science based on Wimsatt’s concept of robustness and a related concept of fragility. Theoretical entities are robust if multiple independent means of measurement produce invariant results in detecting them. Theoretical entities are fragile when multiple independent means of detecting them produce highly variant results. I argue that sufficiently fragile theoretical entities do not exist. Recent studies in psychology show radical variance between what self-report and non-verbal behaviour indicate about participants’ beliefs. This is evidence that ‘belief’ is fragile, and is thus a strong candidate for elimination.
Which way does causation proceed? The pattern in the material world seems to be upward: particles to molecules to organisms to brains to mental processes. In contrast, the principles of quantum mechanics allow us to see a pattern of downward causation. These new ideas describe sets of multiple levels in which each level influences the levels below it through generation and selection. Top-down causation makes exciting sense of the world: we can find analogies in psychology, in the formation of our minds, in locating the source of consciousness, and even in the possible logic of belief in God.
Modern physics has cast doubt on Newton's idea of particles with definite properties. Do we have to go back to Aristotle for a new understanding of the ultimate nature of substance? If we ask, ‘what is the nature of substance?’, we might be told that this substance is salt, that one is copper, or that the atomic nucleus is a mixture of protons and neutrons. But what are all these substances? What do they have in common which makes them substances? We don't seem to think that such things as colours, numbers, or shapes are by themselves ‘substantial enough’ to be substances in their own right. We therefore change our question to ‘what is it to be a substance?’, or to ‘what is the ultimate nature of the simplest substances?’.
This study examines the relation of language use to a person’s ability to perform categorization tasks and to assess their own abilities in those categorization tasks. A silent rhyming task was used to confirm that a group of people with post-stroke aphasia (PWA) had corresponding covert language production (or “inner speech”) impairments. The performance of the PWA was then compared to that of age- and education-matched healthy controls on three kinds of categorization tasks and on metacognitive self-assessments of their performance on those tasks. The PWA showed no deficits in their ability to categorize objects for any of the three trial types (visual, thematic, and categorial). However, on the categorial trials, their metacognitive assessments of whether they had categorized correctly were less reliable than those of the control group. The categorial trials were distinguished from the others by the fact that the categorization could not be based on some immediately perceptible feature or on the objects’ being found together in a type of scenario or setting. This result offers preliminary evidence for a link between covert language use and a specific form of metacognition.
Being born into a family structure—being born of a mother—is key to being human. It is, for Jacques Lacan, essential to the formation of human desire. It is also part of the structure of analogy in the Thomistic thought of Erich Przywara. AI may well increase exponentially in sophistication, and even achieve human-like qualities; but it will only ever form an imaginary mirroring of genuine human persons—an imitation that is in fact morbid and dehumanising. Taking Lacan and Przywara at a point of convergence on this topic offers important insight into human exceptionalism.
Two controversies exist regarding the appropriate characterization of hierarchical and adaptive evolution in natural populations. In biology, there is the Wright-Fisher controversy over the relative roles of random genetic drift, natural selection, population structure, and interdemic selection in adaptive evolution begun by Sewall Wright and Ronald Aylmer Fisher. There is also the Units of Selection debate, spanning both the biological and the philosophical literature and including the impassioned group-selection debate. Why do these two discourses exist separately, and interact relatively little? We postulate that the reason for this schism can be found in the differing focus of each controversy, a deep difference itself determined by distinct general styles of scientific research guiding each discourse. That is, the Wright-Fisher debate focuses on adaptive process, and tends to be instructed by the mathematical modeling style, while the focus of the Units of Selection controversy is adaptive product, and is typically guided by the function style. The differences between the two discourses can be usefully tracked by examining their interpretations of two contested strategies for theorizing hierarchical selection: horizontal and vertical averaging.
What would the Merleau-Ponty of Phenomenology of Perception have thought of the use of his phenomenology in the cognitive sciences? This question raises the issue of Merleau-Ponty’s conception of the relationship between the sciences and philosophy, and of what he took the philosophical significance of his phenomenology to be. In this article I suggest an answer to this question through a discussion of certain claims made in connection to the “post-cognitivist” approach to cognitive science by Hubert Dreyfus, Shaun Gallagher and Francisco Varela, Evan Thompson and Eleanor Rosch. I suggest that these claims are indicative of an appropriation of Merleau-Ponty’s thought that he would have welcomed as innovative science. Despite this, I argue that he would have viewed this use of his work as potentially occluding the full philosophical significance that he believed his phenomenological investigations to contain.
The National Center for Biomedical Ontology is a consortium that comprises leading informaticians, biologists, clinicians, and ontologists, funded by the National Institutes of Health (NIH) Roadmap, to develop innovative technology and methods that allow scientists to record, manage, and disseminate biomedical information and knowledge in machine-processable form. The goals of the Center are (1) to help unify the divergent and isolated efforts in ontology development by promoting high quality open-source, standards-based tools to create, manage, and use ontologies, (2) to create new software tools so that scientists can use ontologies to annotate and analyze biomedical data, (3) to provide a national resource for the ongoing evaluation, integration, and evolution of biomedical ontologies and associated tools and theories in the context of driving biomedical projects (DBPs), and (4) to disseminate the tools and resources of the Center and to identify, evaluate, and communicate best practices of ontology development to the biomedical community. Through the research activities within the Center, collaborations with the DBPs, and interactions with the biomedical community, our goal is to help scientists to work more effectively in the e-science paradigm, enhancing experiment design, experiment execution, data analysis, information synthesis, hypothesis generation and testing, and understanding of human disease.
It is my goal in this paper to offer a strategy for translating universal statements about utopia into particular statements. This is accomplished by drawing out their implicit, temporally embedded, points of reference. Universal statements of the kind I find troublesome are those of the form ‘Utopia is x’, where ‘x’ can be anything from ‘the receding horizon’ to ‘the nation of the virtuous’. To such statements, I want to put the questions: ‘Which utopias?’; ‘In what sense?’; and ‘When was that, is that, or will that be, the case for utopias?’ Through an exploration of these lines of questioning, I arrive at three archetypes of utopian theorizing which serve to provide the answers: namely, utopian historicism, utopian presentism, and utopian futurism. The employment of these archetypes temporally grounds statements about utopia in the past, present, or future, and thus forces discussion of discrete particulars instead of abstract universals with no meaningful referents.
This article attempts to explain the value that we assign to the presence of friends at the time when life is ending. It first shows that Aristotle’s treatment of friendship does not provide a clear account of such value. It then uses J. L. Austin’s notion of performativity to supplement one recent theory of friendship – given by Dean Cocking and Jeanette Kennett – in such a way that that theory can then account for friendship’s special value at our time of death.
J. L. Austin showed that performative speech acts can fail in various ways, and that the ways in which they fail can often be revealing, but he was not concerned with understanding performative failures that occur in the context of poetry. Geoffrey Hill suggests, in both his poetry and his prose writings, that these failures are more interesting than Austin realized. This article corrects Maximilian de Gaynesford’s misunderstanding of Hill’s treatment of this point. It then explains the way in which Hill’s understanding of the performative restrictions on poetry relate to his conception of poetry’s role, analogous to that of the Saturnalian misruler, in establishing the authority of ordinary language.
Based on a close reading of the debate between Rawls and Sen on primary goods versus capabilities, I argue that liberal theory cannot adequately respond to Sen’s critique within a conventionally neutralist framework. In support of the capability approach, I explain why and how it defends a more robust conception of opportunity and freedom, along with public debate on substantive questions about well-being and the good life. My aims are: to show that Sen’s capability approach is at odds with Rawls’s political liberal version of neutrality; to carve out a third space in the neutrality debate; and to begin to develop, from Sen’s approach, the idea of public value liberalism as a position that falls within that third space. Based on a close reading of the debate between Rawls and Sen on primary goods versus capabilities, I argue that liberal theory is incapable, within a conventional neutralist framework, of responding adequately to injustices in the domain of health. Drawing on the capability approach, I explain why and how this approach defends a more robust conception of opportunity and freedom, along with public debate on substantive questions concerning well-being and the good life. My aims are: to clarify the relation between Rawls’s neutralism and his defence of primary goods; to demonstrate the implications of Sen’s capability critique; and to sketch a third position in the debate between neutrality and perfectionism – namely, that of a perfectionism motivated by considerations of legitimacy.