This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’ treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they’re not as bad as those of Elga’s account, and no worse than those of Lewis’ account.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
This paper examines two mistakes regarding David Lewis’ Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
The advent of contemporary evolutionary theory ushered in the eventual decline of Aristotelian Essentialism (Æ) – for it is widely assumed that essence does not, and cannot, have any proper place in the age of evolution. This paper argues that this assumption is a mistake: if Æ can be suitably evolved, it need not face extinction. In it, I claim that if that theory’s fundamental ontology consists of dispositional properties, and if its characteristic metaphysical machinery is interpreted within the framework of contemporary evolutionary developmental biology, an evolved essentialism is available. The reformulated theory of Æ offered in this paper not only fails to fall prey to the typical collection of criticisms, but is also independently both theoretically and empirically plausible. The paper contends that, properly understood, essence belongs in the age of evolution.
Although they are continually compositionally reconstituted and reconfigured, organisms nonetheless persist as ontologically unified beings over time – but in virtue of what? A common answer is: in virtue of their continued possession of the capacity for morphological invariance which persists through, and in spite of, their mereological alteration. While we acknowledge that organisms’ capacity for the “stability of form” – homeostasis – is an important aspect of their diachronic unity, we argue that this capacity is derived from, and grounded in, a more primitive one – namely, the homeodynamic capacity for the “specified variation of form”. In introducing a novel type of causal power – a ‘structural power’ – we claim that it is the persistence of their dynamic potential to produce a specified series of structurally adaptive morphologies which grounds organisms’ privileged status as metaphysically “one over many” over time.
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
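The core rule under discussion is standard: on learning evidence E (and nothing stronger), one’s new credence in each hypothesis H should be one’s old conditional credence P(H | E). A minimal sketch of this update, with an illustrative coin example (the hypotheses, priors, and likelihoods are hypothetical, not drawn from the paper):

```python
# A minimal sketch of Conditionalization: the new credence in each
# hypothesis H is the old conditional credence P(H | E), computed here
# via Bayes' theorem from priors and likelihoods.

def conditionalize(prior, likelihood):
    """prior: dict mapping hypothesis -> P(H);
    likelihood: dict mapping hypothesis -> P(E | H).
    Returns the posterior dict mapping hypothesis -> P(H | E)."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    p_e = sum(joint.values())  # P(E), by the law of total probability
    return {h: joint[h] / p_e for h in joint}

# Illustrative example: a fair coin vs. a two-headed coin, after
# observing that the coin landed heads.
prior = {"fair": 0.5, "two-headed": 0.5}
likelihood_heads = {"fair": 0.5, "two-headed": 1.0}
posterior = conditionalize(prior, likelihood_heads)
```

Issues the paper raises (e.g., continuous evidence, sequential vs. interval updates) concern how and when such a computation is to be applied, which this sketch leaves open.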
In Reasons and Persons, Parfit (1984) posed a challenge: provide a satisfying normative account that solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox. In response, some have suggested that we look toward person-affecting views of morality for a solution. But the person-affecting views that have been offered so far have been unable to satisfy Parfit's four requirements, and these views have been subject to a number of independent complaints. This paper describes a person-affecting account which meets Parfit's challenge. The account satisfies Parfit's four requirements, and avoids many of the criticisms that have been raised against person-affecting views.
A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
In “Bayesianism, Infinite Decisions, and Binding”, Arntzenius et al. (Mind 113:251–283, 2004) present cases in which agents who cannot bind themselves are driven by standard decision theory to choose sequences of actions with disastrous consequences. They defend standard decision theory by arguing that if a decision rule leads agents to disaster only when they cannot bind themselves, this should not be taken to be a mark against the decision rule. I show that this claim has surprising implications for a number of other debates in decision theory. I then assess the plausibility of this claim, and suggest that it should be rejected.
Although contemporary metaphysics has recently undergone a neo-Aristotelian revival wherein dispositions, or capacities, are now commonplace in empirically grounded ontologies, being routinely utilised in theories of causality and modality, a central Aristotelian concept has yet to be given serious attention – the doctrine of hylomorphism. The reason for this is clear: while the Aristotelian ontological distinction between actuality and potentiality has proven to be a fruitful conceptual framework with which to model the operation of the natural world, the distinction between form and matter has yet to similarly earn its keep. In this chapter, I offer a first step toward showing that the hylomorphic framework is up to that task. To do so, I return to the birthplace of that doctrine – the biological realm. Utilising recent advances in developmental biology, I argue that the hylomorphic framework is an empirically adequate and conceptually rich explanatory schema with which to model the nature of organisms.
I argue that the theory of chance proposed by David Lewis has three problems: (i) it is time asymmetric in a manner incompatible with some of the chance theories of physics, (ii) it is incompatible with statistical mechanical chances, and (iii) the content of Lewis's Principal Principle depends on how admissibility is cashed out, but there is no agreement as to what admissible evidence should be. I propose two modifications of Lewis's theory which resolve these difficulties. I conclude by tentatively proposing a third modification of Lewis's theory, one which explains many of the common features shared by the chance theories of physics.
Dispositional properties are often referred to as ‘causal powers’, but what does dispositional causation amount to? Any viable theory must account for two fundamental aspects of the metaphysics of causation – the causal complexity and context sensitivity of causal interactions. The theory of mutual manifestations attempts to do so by locating the complexity and context sensitivity within the nature of dispositions themselves. But is this theory an acceptable first step towards a viable theory of dispositional causation? This paper argues that the reconceptualization that the theory entails comes at too high a price, and is an unnecessary step in the wrong direction: these two central aspects concerning the metaphysics of causation can and should be accounted for in a dispositional account of causation without it.
Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such “all-accuracy” or “purely alethic” approaches can accommodate and justify evidential Bayesian norms. In response, proponents of purely alethic approaches, such as Pettigrew (2013b) and Joyce (2016), have argued that scoring rule arguments provide us with compatible and purely alethic justifications for the traditional Bayesian norms, including evidential norms. In this paper I raise several challenges to this claim. First, I argue that many of the justifications these scoring rule arguments provide are not compatible. Second, I raise worries for the claim that these scoring rule arguments provide purely alethic justifications. Third, I turn to assess the more general question of whether purely alethic justifications for evidential norms are even possible, and argue that, without making some contentious assumptions, they are not. Fourth, I raise some further worries for the possibility of providing purely alethic justifications for content-sensitive evidential norms, like the Principal Principle.
According to the proponents of Developmental Systems Theory and the Causal Parity Thesis, the privileging of the genome as “first among equals” with respect to the development of phenotypic traits is more a reflection of our own heuristic prejudice than of ontology – the underlying causal structures responsible for that specified development no more single out the genome as primary than they do other broadly “environmental” factors. Parting with the methodology of the popular responses to the Thesis, this paper offers a novel criterion for ‘causal primacy’, one that is grounded in the ontology of the unique causal role of dispositional properties. This paper argues that, if the genome is conceptualised as realising dispositional properties that are “directed toward” phenotypic traits, the parity of ‘causal roles’ between genetic and extra-genetic factors is no longer apparent, and further, that the causal primacy of the genome is both plausible and defensible.
As a neo-Aristotelian revival is currently taking place within contemporary metaphysics, and dispositions, or causal powers, are now routinely utilised in theories of causality and modality, more attention is beginning to be paid to a central Aristotelian concern: the metaphysics of substantial unity, and the doctrine of hylomorphism. In this paper, I distinguish two strands of hylomorphism present in the contemporary literature and argue that not only does each engender unique conceptual difficulties, but neither adequately captures the metaphysics of Aristotelian hylomorphism. Thus both strands of contemporary hylomorphism, I argue, fundamentally misunderstand what substantial unity amounts to in the hylomorphic framework – namely, the metaphysical inseparability of matter and form.
A collection of original and innovative essays that compare the justice issues raised by climate engineering to the justice issues raised by competing approaches to solving the climate problem.
According to contemporary ‘process ontology’, organisms are best conceptualised as spatio-temporally extended entities whose mereological composition is fundamentally contingent and whose essence consists in changeability. In contrast to the Aristotelian precepts of classical ‘substance ontology’, from the four-dimensional perspective of this framework, the identity of an organism is grounded not in certain collections of privileged properties, or features which it could not fail to possess, but in the succession of diachronic relations by which it persists, or ‘perdures’ as one entity over time. In this paper, I offer a novel defence of substance ontology by arguing that the coherency and plausibility of the radical reconceptualisation of organisms proffered by process ontology ultimately depends upon its making use of the ‘substantial’ principles it purports to replace.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The first of these articles provides a brief sketch of statistical mechanics, and discusses the indifference approach to statistical mechanical probabilities.
Standard decision theory has trouble handling cases involving acts without finite expected values. This paper has two aims. First, building on earlier work by Colyvan (2008), Easwaran (2014), and Lauwers and Vallentyne (2016), it develops a proposal for dealing with such cases, Difference Minimizing Theory. Difference Minimizing Theory provides satisfactory verdicts in a broader range of cases than its predecessors. And it vindicates two highly plausible principles of standard decision theory, Stochastic Equivalence and Stochastic Dominance. The second aim is to assess some recent arguments against Stochastic Equivalence and Stochastic Dominance. If successful, these arguments refute Difference Minimizing Theory. This paper contends that these arguments are not successful.
Theories that use expected utility maximization to evaluate acts have difficulty handling cases with infinitely many utility contributions. In this paper I present and motivate a way of modifying such theories to deal with these cases, employing what I call “Direct Difference Taking”. This proposal has a number of desirable features: it’s natural and well-motivated, it satisfies natural dominance intuitions, and it yields plausible prescriptions in a wide range of cases. I then compare my account to the most plausible alternative, a proposal offered by Arntzenius (2014: 31–58). I argue that while Arntzenius’s proposal has many attractive features, it runs into a number of problems which Direct Difference Taking avoids.
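The difficulty these abstracts point to can be illustrated with a standard example rather than the papers’ own proposals: in the St. Petersburg game, each outcome contributes 1 to the expected utility, so the partial sums grow without bound and expected utility maximization delivers no finite value to compare. (This is a familiar textbook case; it is not an implementation of Difference Minimizing Theory or Direct Difference Taking.)

```python
# The St. Petersburg game: with probability 2**-n you receive 2**n utility,
# for n = 1, 2, 3, ...  Each term contributes (2**-n) * (2**n) = 1, so the
# partial expected utilities diverge as more terms are included.

def partial_expected_utility(n_terms):
    """Sum of the first n_terms contributions to the game's expected utility."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, n_terms + 1))

# The partial sums grow linearly without bound: no finite expected value exists.
partials = [partial_expected_utility(n) for n in (10, 100, 1000)]
```

Proposals like those discussed above aim to rank acts of this kind by comparing differences between acts rather than their (undefined) absolute expected utilities.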
According to dispositionalism, de re modality is grounded in the intrinsic natures of dispositional properties. Those properties are able to serve as the ground of de re modal truths, it is said, because they bear a special relation to counterfactual conditionals, one of truthmaking. However, because dispositionalism purports to ground de re modality only on the intrinsic natures of dispositional properties, it had better be the case that they do not play that truthmaking role merely in virtue of their being embedded in some particular, extrinsic causal context. This paper examines a recent argument against dispositionalism that purports to show that the intrinsicality of that relation cannot be maintained, due to the ceteris paribus nature of the counterfactuals that dispositions make true. When two prominent responses are examined, both are found wanting: at best, they require unjustified special pleading, and at worst, they amount to little more than ad hoc conceptual trickery.
In recent work, Callender and Cohen (2009) and Hoefer (2007) have proposed variants of the account of chance proposed by Lewis (1994). One of the ways in which these accounts diverge from Lewis’s is that they allow special sciences and the macroscopic realm to have chances that are autonomous from those of physics and the microscopic realm. A worry for these proposals is that autonomous chances may place incompatible constraints on rational belief. I examine this worry, and attempt to determine (i) what kinds of conflicts would be problematic, and (ii) whether these proposals lead to problematic conflicts. After working through a pair of cases, I conclude that these proposals do give rise to problematic conflicts.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The second of these articles discusses the regularity approach to statistical mechanical probabilities, and describes some areas where further research is needed.
In identifying intrinsic molecular chance and extrinsic adaptive pressures as the only causally relevant factors in the process of evolution, the theoretical perspective of the Modern Synthesis had a major impact on the perceived tenability of an ontology of dispositional properties. However, since the late 1970s, an increasing number of evolutionary biologists have challenged the descriptive and explanatory adequacy of this “chance alone, extrinsic only” understanding of evolutionary change. Because morphological studies of homology, convergence, and teratology have revealed a space of possible forms and phylogenetic trajectories that is considerably more restricted than expected, evo-devo has focused on the causal contribution of intrinsic developmental processes to the course of evolution. Evo-devo’s investigation into the developmental structure of the modality of morphology – including both the possibility and impossibility of organismal form – has led to the utilisation of a number of dispositional concepts that emphasise the tendency of the evolutionary process to change along certain routes. In this sense, and in contrast to the perspective of the Modern Synthesis, evo-devo can be described as a “science of dispositions.” This chapter discusses the recent philosophical literature on dispositional properties in evo-devo, exploring debates about both the metaphysical and epistemological aspects of the central dispositional concepts utilised in contemporary evo-devo and addressing the epistemological question of how dispositional properties challenge existing explanatory models in evolutionary biology.
Several variants of Lewis's Best System Account of Lawhood have been proposed that avoid its commitment to perfectly natural properties. There has been little discussion of the relative merits of these proposals, and little discussion of how one might extend this strategy to provide natural property-free variants of Lewis's other accounts, such as his accounts of duplication, intrinsicality, causation, counterfactuals, and reference. We undertake these projects in this paper. We begin by providing a framework for classifying and assessing the variants of the Best System Account. We then evaluate these proposals, and identify the most promising candidates. We go on to develop a proposal for systematically modifying Lewis's other accounts so that they, too, avoid commitment to perfectly natural properties. We conclude by briefly considering a different route one might take to developing natural property-free versions of Lewis's other accounts, drawing on recent work by Williams.
Evidential Uniqueness is the thesis that, for any batch of evidence, there’s a unique doxastic state that a subject with that evidence should have. One of the most common kinds of objection to views that violate Evidential Uniqueness is the arbitrariness objection – an objection to the effect that views that don’t satisfy Evidential Uniqueness lead to unacceptable arbitrariness. The goal of this paper is to examine a variety of arbitrariness objections that have appeared in the literature, and to assess the extent to which these objections bolster the case for Evidential Uniqueness. After examining a number of different arbitrariness objections, I’ll conclude that, by and large, these objections do little to bolster the case for Evidential Uniqueness.
One popular approach to statistical mechanics understands statistical mechanical probabilities as measures of rational indifference. Naive formulations of this “indifference approach” face reversibility worries – while they yield the right prescriptions regarding future events, they yield the wrong prescriptions regarding past events. This paper begins by showing how the indifference approach can overcome the standard reversibility worries by appealing to the Past Hypothesis. But, the paper argues, positing a Past Hypothesis doesn't free the indifference approach from all reversibility worries. For while appealing to the Past Hypothesis allows it to escape one kind of reversibility worry, it makes it susceptible to another – the Meta-Reversibility Objection. And there is no easy way for the indifference approach to escape the Meta-Reversibility Objection. As a result, reversibility worries pose a steep challenge to the viability of the indifference approach.
Cognitive abilities cannot be measured directly. What we can measure is individual variation in task performance. In this paper, we first make the case for why we should be interested in mapping individual differences in task performance on to particular cognitive abilities: we suggest that it is crucial for examining the causes and consequences of variation both within and between species. As a case study, we examine whether multiple measures of inhibitory control for non-human animals do indeed produce correlated task performance; however, no clear pattern emerges that would support the notion of a common cognitive ability underpinning individual differences in performance. We advocate a psychometric approach involving a three-step programme to make theoretical and empirical progress: first, we need tasks that reveal signature limits in performance. Second, we need to assess the reliability of individual differences in task performance. Third, multi-trait multi-method test batteries will be instrumental in validating cognitive abilities. Together, these steps will help us to establish what varies between individuals that could impact their fitness and ultimately shape the course of the evolution of animal minds. Finally, we propose executive functions, including working memory, inhibitory control and attentional shifting, as a sensible starting point for this endeavour.
The biological sciences have always proven a fertile ground for philosophical analysis, one from which has grown a rich tradition stemming from Aristotle and flowering with Darwin. And although contemporary philosophy is increasingly becoming conceptually entwined with the study of the empirical sciences – with the data of the latter now being regularly utilised in the establishment and defence of the frameworks of the former, a practice especially prominent in the philosophy of physics – the development of that tradition hasn’t received the wider attention it so thoroughly deserves. This review will briefly introduce some recent significant topics of debate within the philosophy of biology, focusing on those whose metaphysical themes (in everything from composition to causation) are likely to be of wide-reaching, cross-disciplinary interest.
What endogenous factors contribute to minority (Red Queen) or majority (Red King) domination under conditions of coercive bargaining? We build on previous work demonstrating minority disadvantage in non-coercive bargaining games to show that under neutral initial conditions, majorities are advantaged in high conflict situations, and minorities are advantaged in low conflict games. These effects are a function of the relationship between (1) relative proportions of the majority and minority groups and (2) costs of conflict. Although both Red King and Red Queen effects can occur, we further show that agents’ increased initial propensity toward conflict advantages majorities.
This is a review of Toby Handfield's book, "A Philosophical Guide to Chance", that discusses Handfield's Debunking Argument against realist accounts of chance.
Bio-ontologies are essential tools for accessing and analyzing the rapidly growing pool of plant genomic and phenomic data. Ontologies provide structured vocabularies to support consistent aggregation of data and a semantic framework for automated analyses and reasoning. They are a key component of the Semantic Web. This paper provides background on what bio-ontologies are, why they are relevant to botany, and the principles of ontology development. It includes an overview of ontologies and related resources that are relevant to plant science, with a detailed description of the Plant Ontology (PO). We discuss the challenges of building an ontology that covers all green plants (Viridiplantae). Key results: Ontologies can advance plant science in four key areas: (1) comparative genetics, genomics, phenomics, and development; (2) taxonomy and systematics; (3) semantic applications; and (4) education. Conclusions: Bio-ontologies offer a flexible framework for comparative plant biology, based on common botanical understanding. As genomic and phenomic data become available for more species, we anticipate that the annotation of data with ontology terms will become less centralized, while at the same time, the need for cross-species queries will become more common, causing more researchers in plant science to turn to ontologies.
A workshop was held August 26–28, 2015, by the Earth-Life Science Institute (ELSI) Origins Network (EON, see Appendix I) at the Tokyo Institute of Technology. This meeting gathered a diverse group of around 40 scholars researching the origins of life (OoL) from various perspectives with the intent to find common ground, identify key questions and investigations for progress, and guide EON by suggesting a roadmap of activities. Specific challenges that the attendees were encouraged to address included the following: What key questions, ideas, and investigations should the OoL research community address in the near and long term? How can this community better organize itself and prioritize its efforts? What roles can particular subfields play, and what can ELSI and EON do to facilitate research progress? (See also Appendix II.) The present document is a product of that workshop; a white paper that serves as a record of the discussion that took place and a guide and stimulus to the solution of the most urgent and important issues in the study of the OoL. This paper is not intended to be comprehensive or a balanced representation of the opinions of the entire OoL research community. It is intended to present a number of important position statements that contain many aspirational goals and suggestions as to how progress can be made in understanding the OoL. The key role played in the field by current societies and recurring meetings over the past many decades is fully acknowledged, including the International Society for the Study of the Origin of Life (ISSOL) and its official journal Origins of Life and Evolution of Biospheres, as well as the International Society for Artificial Life (ISAL).
As biological and biomedical research increasingly reference the environmental context of the biological entities under study, the need for formalisation and standardisation of environment descriptors is growing. The Environment Ontology (ENVO) is a community-led, open project which seeks to provide an ontology for specifying a wide range of environments relevant to multiple life science disciplines and, through an open participation model, to accommodate the terminological requirements of all those needing to annotate data using ontology classes. This paper summarises ENVO’s motivation, content, structure, adoption, and governance approach.
Biological ontologies are used to organize, curate, and interpret the vast quantities of data arising from biological experiments. While this works well when using a single ontology, integrating multiple ontologies can be problematic, as they are developed independently, which can lead to incompatibilities. The Open Biological and Biomedical Ontologies Foundry was created to address this by facilitating the development, harmonization, application, and sharing of ontologies, guided by a set of overarching principles. One challenge in reaching these goals was that the OBO principles were not originally encoded in a precise fashion, and interpretation was subjective. Here we show how we have addressed this by formally encoding the OBO principles as operational rules and implementing a suite of automated validation checks and a dashboard for objectively evaluating each ontology’s compliance with each principle. This entailed a substantial effort to curate metadata across all ontologies and to coordinate with individual stakeholders. We have applied these checks across the full OBO suite of ontologies, revealing areas where individual ontologies require changes to conform to our principles. Our work demonstrates how a sizable federated community can be organized and evaluated on objective criteria that help improve overall quality and interoperability, which is vital for the sustenance of the OBO project and towards the overall goals of making data FAIR. Competing Interest Statement: The authors have declared no competing interest.
The Plant Ontology (PO) is a community resource consisting of standardized terms, definitions, and logical relations describing plant structures and development stages, augmented by a large database of annotations from genomic and phenomic studies. This paper describes the structure of the ontology and the design principles we used in constructing PO terms for plant development stages. It also provides details of the methodology and rationale behind our revision and expansion of the PO to cover development stages for all plants, particularly the land plants (bryophytes through angiosperms). As a case study to illustrate the general approach, we examine variation in gene expression across embryo development stages in Arabidopsis and maize, demonstrating how the PO can be used to compare patterns of expression across stages and in developmentally different species. Although many genes appear to be active throughout embryo development, we identified a small set of uniquely expressed genes for each stage of embryo development and also between the two species. Evaluating the different sets of genes expressed during embryo development in Arabidopsis or maize may inform future studies of the divergent developmental pathways observed in monocotyledonous versus dicotyledonous species. The PO and its annotation database make plant data for any species more discoverable and accessible through common formats, thus providing support for applications in plant pathology, image analysis, and comparative development and evolution.
In temporal binding, the temporal interval between one event and another, occurring some time later, is subjectively compressed. We discuss two ways in which temporal binding has been conceptualized. In studies showing temporal binding between a voluntary action and its causal consequences, such binding is typically interpreted as providing a measure of an implicit or pre-reflective “sense of agency”. However, temporal binding has also been observed in contexts not involving voluntary action, but only the passive observation of a cause-effect sequence. In those contexts, it has been interpreted as a top-down effect on perception reflecting a belief in causality. These two views need not be in conflict with one another, if one thinks of them as concerning two separate mechanisms through which temporal binding can occur. In this paper, we explore an alternative possibility: that there is a unitary way of explaining temporal binding both within and outside the context of voluntary action as a top-down effect on perception reflecting a belief in causality. Any such explanation needs to account for ways in which agency, and factors connected with agency, have been shown to affect the strength of temporal binding. We show that principles of causal inference and causal selection already familiar from the literature on causal learning have the potential to explain why the strength of people’s causal beliefs can be affected by the extent to which they are themselves actively involved in bringing about events, thus in turn affecting binding.
In this article, we propose the Fair Priority Model for COVID-19 vaccine distribution, and emphasize three fundamental values we believe should be considered when distributing a COVID-19 vaccine among countries: benefiting people and limiting harm, prioritizing the disadvantaged, and equal moral concern for all individuals. The Fair Priority Model addresses these values by focusing on mitigating three types of harms caused by COVID-19: death and permanent organ damage; indirect health consequences, such as health care system strain and stress; and economic destruction. It proposes proceeding in three phases: the first addresses premature death, the second long-term health issues and economic harms, and the third aims to contain viral transmission fully and restore pre-pandemic activity.

To those who may deem an ethical framework irrelevant because of the belief that many countries will pursue "vaccine nationalism," we argue such a framework still has broad relevance. Reasonable national partiality would permit countries to focus on vaccine distribution within their borders up until the rate of transmission is below 1, at which point there would not be sufficient vaccine-preventable harm to justify retaining a vaccine. When a government reaches the limit of national partiality, it should release vaccines for other countries.

We also argue against two other recent proposals. Distributing a vaccine proportional to a country's population mistakenly assumes that equality requires treating differently situated countries identically. Prioritizing countries according to the number of front-line health care workers, the proportion of the population over 65, and the number of people with comorbidities within each country may exacerbate disadvantage and end up giving the vaccine in large part to wealthy nations.
I offer a new argument for the elimination of ‘beliefs’ from cognitive science based on Wimsatt’s concept of robustness and a related concept of fragility. Theoretical entities are robust if multiple independent means of measurement produce invariant results in detecting them. Theoretical entities are fragile when multiple independent means of detecting them produce highly variant results. I argue that sufficiently fragile theoretical entities do not exist. Recent studies in psychology show radical variance between what self-report and non-verbal behaviour indicate about participants’ beliefs. This is evidence that ‘belief’ is fragile, and is thus a strong candidate for elimination.

1 Introduction
2 Robustness and Fragility
2.1 A historical example of robustness
2.2 Fragility and elimination
3 The Received View
4 Evidence for the Fragility of Belief
4.1 Contamination and fragility
4.2 Implicit association tests and fragility
5 Attempts to Preserve the Belief Category for Cognitive Science
5.1 Beliefs and aliefs
5.2 Contradictory beliefs
5.3 In-between beliefs and the unity assumption
5.4 Belief sub-classes
5.5 Self-deception
6 Alternative Mental States
7 Conclusion
A scientific community can be modeled as a collection of epistemic agents attempting to answer questions, in part by communicating about their hypotheses and results. We can treat the pathways of scientific communication as a network. When we do, it becomes clear that the interaction between the structure of the network and the nature of the question under investigation affects epistemic desiderata, including accuracy and speed to community consensus. Here we build on previous work, both our own and others’, in order to get a firmer grasp on precisely which features of scientific communities interact with which features of scientific questions in order to influence epistemic outcomes.
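The kind of model at issue can be sketched in code. The following is a toy network-epistemology simulation in the spirit of Bala–Goyal/Zollman bandit models, not the authors’ own implementation; the payoff rates, update rule, network sizes, and stopping criteria are illustrative assumptions. Agents choose between a safe act with known success rate 0.5 and a risky act whose true success rate is 0.6; each holds a credence that the risky act is the better one (against an alternative rate of 0.4), experiments when it seems better, and shares results with network neighbours.

```python
import random

def run_community(adj, true_p=0.6, alt_p=0.4, pulls=10, max_rounds=2000, seed=0):
    """Simulate Bayesian agents on a communication network until consensus,
    abandonment of the risky act, or a round limit. Returns (status, rounds)."""
    rng = random.Random(seed)
    n = len(adj)
    credence = [rng.uniform(0.01, 0.99) for _ in range(n)]  # P(risky act is better)
    for rounds in range(1, max_rounds + 1):
        # Agents who currently favour the risky act generate evidence this round.
        results = {}
        for i in range(n):
            if credence[i] > 0.5:
                results[i] = sum(rng.random() < true_p for _ in range(pulls))
        if not results:
            return "abandoned", rounds  # no one experiments any more
        # Each agent updates by Bayes on its own and its neighbours' results.
        for i in range(n):
            for j in [i] + adj[i]:
                if j in results:
                    s = results[j]
                    odds = credence[i] / (1 - credence[i])
                    odds *= (true_p / alt_p) ** s * \
                            ((1 - true_p) / (1 - alt_p)) ** (pulls - s)
                    credence[i] = min(odds / (1 + odds), 1 - 1e-12)  # clamp
        if all(c > 0.99 for c in credence):
            return "consensus", rounds
    return "undecided", max_rounds

# Compare a sparsely connected cycle with a densely connected complete network.
cycle = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
complete = {i: [j for j in range(8) if j != i] for i in range(8)}
res_cycle = run_community(cycle)
res_complete = run_community(complete)
print("cycle:", res_cycle, " complete:", res_complete)
```

Varying the network (cycle vs. complete) and the difficulty of the question (the gap between `true_p` and `alt_p`) lets one probe how structure and question jointly shape speed to consensus and the risk of the community abandoning the better act.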
Being born into a family structure—being born of a mother—is key to being human. It is, for Jacques Lacan, essential to the formation of human desire. It is also part of the structure of analogy in the Thomistic thought of Erich Przywara. AI may well increase exponentially in sophistication, and even achieve human-like qualities; but it will only ever form an imaginary mirroring of genuine human persons—an imitation that is in fact morbid and dehumanising. Taking Lacan and Przywara at a point of convergence on this topic offers important insight into human exceptionalism.
This study examines the relation of language use to a person’s ability to perform categorization tasks and to assess their own abilities in those categorization tasks. A silent rhyming task was used to confirm that a group of people with post-stroke aphasia (PWA) had corresponding covert language production (or “inner speech”) impairments. The performance of the PWA was then compared to that of age- and education-matched healthy controls on three kinds of categorization tasks and on metacognitive self-assessments of their performance on those tasks. The PWA showed no deficits in their ability to categorize objects for any of the three trial types (visual, thematic, and categorial). However, on the categorial trials, their metacognitive assessments of whether they had categorized correctly were less reliable than those of the control group. The categorial trials were distinguished from the others by the fact that the categorization could not be based on some immediately perceptible feature or on the objects’ being found together in a type of scenario or setting. This result offers preliminary evidence for a link between covert language use and a specific form of metacognition.
It is widely accepted that the way information transfers across networks depends importantly on the structure of the network. Here, we show that the mechanism of information transfer is crucial: in many respects the effect of the specific transfer mechanism swamps network effects. Results are demonstrated in terms of three different types of transfer mechanism: germs, genes, and memes. With an emphasis on the specific case of transfer between sub-networks, we explore both the dynamics of each of these across networks and a measure of their comparative fitness. Germ and meme transfer exhibit very different dynamics across linked networks. For germs, measured in terms of time to total infection, network type rather than degree of linkage between sub-networks is the primary factor. For memes or belief transfer, measured in terms of time to consensus, it is the opposite: degree of linkage trumps network type in importance. The dynamics of genetic information transfer is unlike either germs or memes. Transfer of genetic information is robust across network differences to which both germs and memes prove sensitive. We also consider function: how well germ, gene, and meme transfer mechanisms can meet their respective objectives of infecting the population, mixing and transferring genetic information, and spreading a message. A shared formal measure of fitness is introduced for purposes of comparison, again with an emphasis on linked sub-networks. Meme transfer proves superior to transfer by genetic reproduction on that measure, with both memes and genes superior to infection dynamics across all network types. What kinds of network structure optimize fitness also differ among the three. Both germs and genes show fairly stable fitness with added links between sub-networks, but genes show greater sensitivity to the structure of sub-networks at issue. Belief transfer, in contrast to the other two, shows a clear decline in fitness with increasingly connected networks.
When it comes to understanding how information moves on networks, our results indicate that questions of information dynamics on networks cannot be answered in terms of networks alone. A primary role is played by the specific mechanism of information transfer at issue. We must first ask about how a particular type of information moves.
In order to understand the transmission of a disease across a population we will have to understand not only the dynamics of contact infection but the transfer of health-care beliefs and resulting health-care behaviors across that population. This paper is a first step in that direction, focusing on the contrasting role of linkage or isolation between sub-networks in (a) contact infection and (b) belief transfer. Using both analytical tools and agent-based simulations we show that it is the structure of a network that is primary for predicting contact infection—whether the networks or sub-networks at issue are distributed ring networks or total networks (hubs, wheels, small world, random, or scale-free for example). Measured in terms of time to total infection, degree of linkage between sub-networks plays a minor role. The case of belief is importantly different. Using a simplified model of belief reinforcement, and measuring belief transfer in terms of time to community consensus, we show that degree of linkage between sub-networks plays a major role in social communication of beliefs. Here, in contrast to the case of contact infection, network type turns out to be of relatively minor importance. What you believe travels differently. In a final section we show that the pattern of belief transfer exhibits a classic power law regardless of the type of network involved.
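The infection side of this contrast can be illustrated with a minimal susceptible-infected (SI) sketch. This is our own illustration, not the paper’s model: the network sizes, infection probability, and bridge placement are assumed for the example. It compares time to total infection on a ring versus a total (complete) network, and across two ring sub-networks joined by differing numbers of bridge links.

```python
import random

def si_time_to_total_infection(adj, p=0.5, seed=0):
    """Steps until every node is infected, starting from node 0 (SI model):
    each step, every infected node infects each susceptible neighbour w.p. p."""
    rng = random.Random(seed)
    infected, steps = {0}, 0
    while len(infected) < len(adj):
        new = set()
        for i in infected:
            for j in adj[i]:
                if j not in infected and rng.random() < p:
                    new.add(j)
        infected |= new
        steps += 1
    return steps

def ring(n):
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def total(n):  # complete network: everyone adjacent to everyone
    return {i: [j for j in range(n) if j != i] for i in range(n)}

def linked_rings(n, k):
    """Two ring sub-networks of size n joined by k evenly spaced bridge links."""
    adj = ring(n)
    adj.update({n + i: [n + (i - 1) % n, n + (i + 1) % n] for i in range(n)})
    for b in range(k):
        a, c = (b * n) // k, n + (b * n) // k
        adj[a].append(c)
        adj[c].append(a)
    return adj

def mean_time(adj, trials=50):
    return sum(si_time_to_total_infection(adj, seed=s) for s in range(trials)) / trials

n = 30
t_ring = mean_time(ring(n))
t_total = mean_time(total(n))
t_link1 = mean_time(linked_rings(n, 1))
t_link5 = mean_time(linked_rings(n, 5))
print(f"ring: {t_ring:.1f}  total: {t_total:.1f}  "
      f"linked rings k=1: {t_link1:.1f}  k=5: {t_link5:.1f}")
```

On runs of this sketch, time to total infection differs dramatically between ring and total networks, while adding bridge links between ring sub-networks changes it comparatively little, consistent with the abstract’s claim that network type, not degree of linkage, is primary for contact infection.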
In this paper we make a simple theoretical point using a practical issue as an example. The simple theoretical point is that robustness is not 'all or nothing': in asking whether a system is robust one has to ask 'robust with respect to what property?' and 'robust over what set of changes in the system?' The practical issue used to illustrate the point is an examination of degrees of linkage between sub-networks and a pointed contrast in robustness and fragility between the dynamics of (1) contact infection and (2) information transfer or belief change. Time to infection across linked sub-networks, it turns out, is fairly robust with regard to the degree of linkage between them. Time to infection is fragile and sensitive, however, with regard to the type of sub-network involved: total, ring, small world, random, or scale-free. Aspects of robustness and fragility are reversed where it is belief updating with reinforcement rather than infection that is at issue. In information dynamics, the pattern of time to consensus is robust across changes in network type but remarkably fragile with respect to degree of linkage between sub-networks. These results have important implications for public health interventions in realistic social networks, particularly with an eye to ethnic and socio-economic sub-communities, and in social networks with sub-communities changing in structure or linkage.
Beyond belief change and meme adoption, both genetics and infection have been spoken of in terms of information transfer. What we examine here, concentrating on the specific case of transfer between sub-networks, are the differences in network dynamics in these cases: the different network dynamics of germs, genes, and memes. Germs and memes, it turns out, exhibit a very different dynamics across networks. For infection, measured in terms of time to total infection, it is network type rather than degree of linkage between sub-networks that is of primary importance. For belief transfer, measured in terms of time to consensus, it is degree of linkage rather than network type that is crucial. Genes model each of these other dynamics in part, but match neither in full. For genetics, like belief transfer and unlike infection, network type makes little difference. Like infection and unlike belief, on the other hand, the dynamics of genetic information transfer within single and between linked networks are much the same. In ways both surprising and intriguing, transfer of genetic information seems to be robust across network differences crucial for the other two.