As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage’s postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
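The dominance relation at issue here can be made concrete with a small sketch. This is an illustrative first-order stochastic dominance check for finite lotteries, not code from the paper; the example lottery values (a 50/50 gamble on 0 or 10 versus a sure 4) are hypothetical:

```python
from itertools import chain

def cdf(lottery, t):
    """P(X <= t) for a finite lottery given as {outcome: probability}."""
    return sum(p for x, p in lottery.items() if x <= t)

def dominates(a, b):
    """A (weakly) first-order stochastically dominates B iff A's CDF lies
    at or below B's at every outcome threshold, i.e. A gives at least t
    with at least as much probability as B, for every t."""
    points = sorted(set(chain(a, b)))
    return all(cdf(a, t) <= cdf(b, t) for t in points)

# Higher expected value (5 > 4), yet neither option dominates the other
# "in a vacuum": the gamble can do worse than the sure thing.
gamble = {0: 0.5, 10: 0.5}
sure = {4: 1.0}
no_dominance = not dominates(gamble, sure) and not dominates(sure, gamble)
```

This is exactly the sense in which an expectation-maximizing gamble can fail to dominate its alternative absent background uncertainty.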
There is a widespread attitude in epistemology that, if you know on the basis of perception, then you couldn’t have been wrong as a matter of chance. Despite the apparent intuitive plausibility of this attitude, which I’ll refer to here as “stochastic infallibilism”, it fundamentally misunderstands the way that human perceptual systems actually work. Perhaps the most important lesson of signal detection theory (SDT) is that our percepts are inherently subject to random error, and here I’ll highlight some key empirical research that underscores this point. In doing so, it becomes clear that we are in fact quite willing to attribute knowledge to S that p even when S’s perceptual belief that p could have been randomly false. In short, perceptual processes can randomly fail, and perceptual knowledge is stochastically fallible. The narrow implication here is that any epistemological account that entails stochastic infallibilism, like safety, is simply untenable. More broadly, this myth of stochastic infallibilism provides a valuable illustration of the importance of integrating empirical findings into epistemological thinking.
Stochastic independence has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory. Bayesian decision theorists such as Savage can be criticized for being silent about stochastic independence. From their current preference axioms, they can derive no more than the definitional properties of a probability measure. In a new framework of twofold uncertainty, we introduce preference axioms that entail not only these definitional properties, but also the stochastic independence of the two sources of uncertainty. This goes some way towards filling a curious lacuna in Bayesian decision theory.
I defend an analog of probabilism that characterizes rationally coherent estimates for chances. Specifically, I demonstrate the following accuracy-dominance result for stochastic theories in the C*-algebraic framework: supposing an assignment of chance values is possible if and only if it is given by a pure state on a given algebra, your estimates for chances avoid accuracy-dominance if and only if they are given by a state on that algebra. When your estimates avoid accuracy-dominance (roughly: when you cannot guarantee that other estimates would be more accurate), I say that they are sufficiently coherent. In formal epistemology and quantum foundations, the notion of rational coherence that gets more attention requires that you never allow for a sure loss (or “Dutch book”) in a given sort of betting game; I call this notion full coherence. I characterize when these two notions of rational coherence align, and I show that there is a quantum state giving estimates that are sufficiently coherent, but not fully coherent.
The iterated Prisoner’s Dilemma has become the standard model for the evolution of cooperative behavior within a community of egoistic agents, frequently cited for implications in both sociology and biology. Due primarily to the work of Axelrod (1980a, 1980b, 1984, 1985), a strategy of tit for tat (TFT) has established a reputation as being particularly robust. Nowak and Sigmund (1992) have shown, however, that in a world of stochastic error or imperfect communication, it is not TFT that finally triumphs in an ecological model based on population percentages (Axelrod and Hamilton 1981), but ‘generous tit for tat’ (GTFT), which repays cooperation with a probability of cooperation approaching 1 but forgives defection with a probability of 1/3. In this paper, we consider a spatialized instantiation of the stochastic Prisoner’s Dilemma, using two-dimensional cellular automata (Wolfram, 1984, 1986; Gutowitz, 1990) to model the spatial dynamics of populations of competing strategies. The surprising result is that in the spatial model it is not GTFT but still more generous strategies that are favored. The optimal strategy within this spatial ecology appears to be a form of ‘bending over backwards’, which returns cooperation for defection with a probability of 2/3, a rate twice as generous as GTFT.
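The effect of forgiveness under noise can be sketched with a minimal self-play simulation. This is not the authors' cellular-automaton model or the Nowak–Sigmund ecology: the payoff values (5, 3, 1, 0) and the one-sided error model (an intended cooperation occasionally flips to defection) are simplifying assumptions made here for illustration:

```python
import random

T, R, P, S = 5, 3, 1, 0  # assumed standard PD payoffs (temptation, reward, punishment, sucker)

def tft(opp_last, rng):
    return opp_last  # cooperate iff the opponent cooperated last round

def make_gtft(forgiveness):
    def gtft(opp_last, rng):
        # cooperate after cooperation; forgive a defection with fixed probability
        return opp_last or rng.random() < forgiveness
    return gtft

def average_payoff(strategy, rounds=20000, noise=0.05, seed=1):
    """Average per-round payoff when one strategy plays itself under noise."""
    rng = random.Random(seed)
    m1 = m2 = True  # both open with cooperation
    total = 0
    for _ in range(rounds):
        i1, i2 = strategy(m2, rng), strategy(m1, rng)
        m1 = i1 and rng.random() >= noise  # noise turns cooperation into defection
        m2 = i2 and rng.random() >= noise
        if m1 and m2:
            total += R
        elif not m1 and not m2:
            total += P
        else:
            total += T if not m1 else S  # player 1's payoff only
    return total / rounds

tft_avg = average_payoff(tft)
gtft_avg = average_payoff(make_gtft(1 / 3))
```

Under this error model mutual defection is absorbing for TFT self-play, while GTFT's 1/3 forgiveness restores cooperation after errors, so GTFT's long-run average payoff comes out higher.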
Philosophers of biology have worked extensively on how we ought best to interpret the probabilities which arise throughout evolutionary theory. In spite of this substantial work, however, much of the debate has remained persistently intractable. I offer the example of Bayesian models of divergence time estimation as a case study in how we might bring further resources from the biological literature to bear on these debates. These models offer us an example in which a number of different sources of uncertainty are combined to produce an estimate for a complex, unobservable quantity. These models have been carefully analyzed in recent biological work, which has determined the relationship between these sources of uncertainty, both quantitatively and qualitatively. I suggest here that this case shows us the limitations of univocal analyses of probability in evolution, as well as the simple dichotomy between “subjective” and “objective” probabilities, and I conclude by gesturing toward ways in which we might introduce more sophisticated interpretive taxonomies of probability as a path toward advancing debates on probability in the life sciences.
In this note I examine some implications of stochastic interpretations of quantum mechanics for the concept of "charge without charge" presented by Wheeler and Misner. I argue that if a stochastic interpretation of quantum mechanics were correct, then certain shortcomings of the "charge without charge" concept could be overcome.
Intentional sampling methods are non-probabilistic procedures that select a group of individuals for a sample with the purpose of meeting specific prescribed criteria. Intentional sampling methods are intended for exploratory research or pilot studies where tight budget constraints preclude the use of traditional randomized representative sampling. The possibility of subsequently generalizing statistically from such deterministic samples to the general population has long been a subject of argument and debate. Nevertheless, the intentional sampling techniques developed in this paper explore pragmatic strategies for overcoming some of the real or perceived shortcomings and limitations of intentional sampling in practical applications.
We characterize those identities and independencies which hold for all probability functions on a unary language satisfying the Principle of Atom Exchangeability. We then show that if this is strengthened to the requirement that Johnson's Sufficientness Principle holds, thus giving Carnap's Continuum of inductive methods for languages with at least two predicates, then new and somewhat inexplicable identities and independencies emerge, the latter even in the case of Carnap's Continuum for the language with just a single predicate.
Standard decision theory has trouble handling cases involving acts without finite expected values. This paper has two aims. First, building on earlier work by Colyvan (2008), Easwaran (2014), and Lauwers and Vallentyne (2016), it develops a proposal for dealing with such cases, Difference Minimizing Theory. Difference Minimizing Theory provides satisfactory verdicts in a broader range of cases than its predecessors. And it vindicates two highly plausible principles of standard decision theory, Stochastic Equivalence and Stochastic Dominance. The second aim is to assess some recent arguments against Stochastic Equivalence and Stochastic Dominance. If successful, these arguments refute Difference Minimizing Theory. This paper contends that these arguments are not successful.
The article examines how failure, especially in so-called 'stochastic' arts or sciences like medicine and navigation, stimulated reflections about the nature of the knowledge required of a genuine art (techne) or science.
It has become customary to conceptualize the living cell as an intricate piece of machinery, different to a man-made machine only in terms of its superior complexity. This familiar understanding grounds the conviction that a cell's organization can be explained reductionistically, as well as the idea that its molecular pathways can be construed as deterministic circuits. The machine conception of the cell owes a great deal of its success to the methods traditionally used in molecular biology. However, the recent introduction of novel experimental techniques capable of tracking individual molecules within cells in real time is leading to the rapid accumulation of data that are inconsistent with an engineering view of the cell. This paper examines four major domains of current research in which the challenges to the machine conception of the cell are particularly pronounced: cellular architecture, protein complexes, intracellular transport, and cellular behaviour. It argues that a new theoretical understanding of the cell is emerging from the study of these phenomena which emphasizes the dynamic, self-organizing nature of its constitution, the fluidity and plasticity of its components, and the stochasticity and non-linearity of its underlying processes.
Defenders of deontological constraints in normative ethics face a challenge: how should an agent decide what to do when she is uncertain whether some course of action would violate a constraint? The most common response to this challenge has been to defend a threshold principle on which it is subjectively permissible to act iff the agent's credence that her action would be constraint-violating is below some threshold t. But the threshold approach seems arbitrary and unmotivated: what would possibly determine where the threshold should be set, and why should there be any precise threshold at all? Threshold views also seem to violate ought agglomeration, since a pair of actions each of which is below the threshold for acceptable moral risk can, in combination, exceed that threshold. In this paper, I argue that stochastic dominance reasoning can vindicate and lend rigor to the threshold approach: given characteristically deontological assumptions about the moral value of acts, it turns out that morally safe options will stochastically dominate morally risky alternatives when and only when the likelihood that the risky option violates a moral constraint is greater than some precisely definable threshold (in the simplest case, .5). I also show how, in combination with the observation that deontological moral evaluation is relativized to particular choice situations, this approach can overcome the agglomeration problem. This allows the deontologist to give a precise and well-motivated response to the problem of uncertainty.
The aim of this paper is to examine the considerations on stochastic arts in Antiquity and to show how Galen’s analysis concerning the “art of conjecturing” constitutes a preferable alternative to the traditional ways used by philosophers to explain the inherent fallibility in the medical art. By distinguishing the scientific diagnosis from the conjectural one, Galen encompasses all cases relevant to the medical art. The former, because of its general nature, can be theorized. As for the latter, it concerns only the individual and is therefore not likely to enter the theory, except by the method that makes it possible. This paper attempts to outline the modus operandi underlying the technical conjecture in order to illustrate Galen’s deftly developed position on a subject almost exclusively investigated by philosophers until then.
This essay frames systemic patterns of mental abuse against women of color and Indigenous women on Turtle Island (North America) in terms of larger design-of-distribution strategies in settler colonial societies, as these societies use various forms of social power to distribute, reproduce, and automate social inequalities (including public health precarities and mortality disadvantages) that skew socio-economic gain continuously toward white settler populations and their descendants. It departs from traditional studies in gender-based violence research that frame mental abuses such as gaslighting--commonly understood as mental manipulation through lying or deceit--stochastically, as chance-driven interpersonal phenomena. Building on structural analyses of knowledge in political epistemology (Dotson 2012, Berenstain 2016), political theory (Davis and Ernst 2017), and Indigenous social theory (Tuck and Yang 2012), I develop the notion of cultural gaslighting to refer to the social and historical infrastructural support mechanisms that disproportionately produce abusive mental ambients in settler colonial cultures in order to further the ends of cultural genocide and dispossession. I conclude by proposing a social epidemiological account of gaslighting that a) highlights the public health harms of abusive ambients for minority populations, b) illuminates the hidden rules of social structure in settler colonial societies, and c) amplifies the corresponding need for structural reparations.
In this paper I propose an interpretation of classical statistical mechanics that centers on taking seriously the idea that probability measures represent complete states of statistical mechanical systems. I show how this leads naturally to the idea that the stochasticity of statistical mechanics is associated directly with the observables of the theory rather than with the microstates (as traditional accounts would have it). The usual assumption that microstates are representationally significant in the theory is therefore dispensable, a consequence which suggests interesting possibilities for developing non-equilibrium statistical mechanics and investigating inter-theoretic answers to the foundational questions of statistical mechanics.
Dilation occurs when an interval probability estimate of some event E is properly included in the interval probability estimate of E conditional on every event F of some partition, which means that one’s initial estimate of E becomes less precise no matter how an experiment turns out. Critics maintain that dilation is a pathological feature of imprecise probability models, while others have thought the problem is with Bayesian updating. However, two points are often overlooked: (1) knowing that E is stochastically independent of F (for all F in a partition of the underlying state space) is sufficient to avoid dilation, but (2) stochastic independence is not the only independence concept at play within imprecise probability models. In this paper we give a simple characterization of dilation formulated in terms of deviation from stochastic independence, propose a measure of dilation, and distinguish between proper and improper dilation. Through this we revisit the most sensational examples of dilation, which play up independence between dilator and dilatee, and find the sensationalism undermined by either fallacious reasoning with imprecise probabilities or improperly constructed imprecise probability models.
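The phenomenon itself is easy to reproduce numerically. In this toy sketch (not the paper's characterization or its proposed measure), the credal set contains every joint distribution over events E and F whose marginal P(E) is exactly 1/2, with the dependence between E and F left entirely open; conditioning on F then dilates the precise point estimate to the full unit interval:

```python
import numpy as np

# Parameterize joints by a = P(E & F) and c = P(not-E & F); the marginal
# constraint P(E) = 1/2 fixes P(E & not-F) = 1/2 - a and P(not-E & not-F) = 1/2 - c.
grid = np.linspace(0.0, 0.5, 201)
conditionals = [
    a / (a + c)
    for a in grid
    for c in grid
    if a + c > 0  # require P(F) > 0 so the conditional is defined
]

lower, upper = min(conditionals), max(conditionals)  # interval for P(E | F)
```

Every member of the set assigns P(E) exactly 1/2, yet after learning F (or, by symmetry, its negation) the estimate spans [0, 1]: strictly less precise however the experiment turns out. This is exactly the case where stochastic independence of E and F is not known, so point (1) of the abstract does not apply.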
The primary quantum mechanical equation of motion entails that measurements typically do not have determinate outcomes, but result in superpositions of all possible outcomes. Dynamical collapse theories (e.g. GRW) supplement this equation with a stochastic Gaussian collapse function, intended to collapse the superposition of outcomes into one outcome. But the Gaussian collapses are imperfect in a way that leaves the superpositions intact. This is the tails problem. There are several ways of making this problem more precise. But many authors dismiss the problem without considering the more severe formulations. Here I distinguish four distinct tails problems. The first (bare tails problem) and second (structured tails problem) exist in the literature. I argue that while the first is a pseudo-problem, the second has not been adequately addressed. The third (multiverse tails problem) reformulates the second to account for recently discovered dynamical consequences of collapse. Finally the fourth (tails problem dilemma) shows that solving the third by replacing the Gaussian with a non-Gaussian collapse function introduces new conflict with relativity theory.
The conspicuous similarities between interpretive strategies in classical statistical mechanics and in quantum mechanics may be grounded on their employment of common implementations of probability. The objective probabilities which represent the underlying stochasticity of these theories can be naturally associated with three of their common formal features: initial conditions, dynamics, and observables. Various well-known interpretations of the two theories line up with particular choices among these three ways of implementing probability. This perspective has significant application to debates on primitive ontology and to the quantum measurement problem.
The current assessment of behaviors in the inventories to diagnose autism spectrum disorders (ASD) focuses on observation and discrete categorizations. Behaviors require movements, yet measurements of physical movements are seldom included. Their inclusion, however, could provide an objective characterization of behavior to help unveil interactions between the peripheral and the central nervous systems. Such interactions are critical for the development and maintenance of spontaneous autonomy, self-regulation and voluntary control. At present, current approaches cannot deal with the heterogeneous, dynamic and stochastic nature of development. Accordingly, they leave no avenues for real-time or longitudinal assessments of change in a coping system continuously adapting and developing compensatory mechanisms. We offer a new unifying statistical framework to reveal re-afferent kinesthetic features of the individual with ASD. The new methodology is based on the non-stationary stochastic patterns of minute fluctuations (micro-movements) inherent to our natural actions. Such patterns of behavioral variability provide re-entrant sensory feedback contributing to the autonomous regulation and coordination of the motor output. From an early age, this feedback supports centrally driven volitional control and fluid, flexible transitions between intentional and spontaneous behaviors. We show that in ASD there is a disruption in the maturation of this form of proprioception. Despite this disturbance, each individual has unique adaptive compensatory capabilities that we can unveil and exploit to evoke faster and more accurate decisions. Measuring the kinesthetic re-afference in tandem with stimuli variations we can detect changes in their micro-movements indicative of a more predictive and reliable kinesthetic percept.
Our methods address the heterogeneity of ASD with a personalized approach grounded in the inherent sensory-motor abilities that the individual has already developed.
When philosophers of physics explore the nature of chance, they usually look to quantum mechanics. When philosophers of biology explore the nature of chance, they usually look to microevolutionary phenomena, such as mutation or random drift. What has been largely overlooked is the role of chance in macroevolution. The stochastic models of paleobiology employ conceptions of chance that are similar to those at the microevolutionary level, yet different from the conceptions of chance often associated with quantum mechanics and Laplacean determinism.
Most defenders of the new mechanistic approach accept ontic constraints for successful scientific explanation (Illari 2013; Craver 2014). The minimal claim is that scientific explanations have objective truthmakers, namely mechanisms that exist in the physical world independently of any observer and that cause or constitute the phenomena-to-be-explained. How can this idea be applied to type-level explanations? Many authors at least implicitly assume that in order for mechanisms to be the truthmakers of type-level explanation they need to be regular (Andersen 2012; Sheredos 2015). One problem with this assumption is that most mechanisms are (highly) stochastic in the sense that they “fail more often than they succeed” (Bogen 2005; Andersen 2012). How can a mechanism type whose instances are more likely not to produce an instance of a particular phenomenon type be the truthmaker of the explanation of that particular phenomenon type? In this paper, I will give an answer to this question. I will analyze the notion of regularity and I will discuss Andersen's suggestion for how to cope with stochastic mechanisms. I will argue that her suggestion cannot account for all kinds of stochastic mechanisms and does not provide an answer as to why regularity grounds type-level explanation. According to my analysis, a mechanistic type-level explanation is true if and only if at least one of the following two conditions is satisfied: the mechanism brings about the phenomenon more often than any other phenomenon (comparative regularity) or the phenomenon is more often brought about by the mechanism than by any other mechanism/causal sequence (comparative reverse regularity).
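The two comparative conditions can be stated operationally over a record of runs. In this minimal sketch (the mechanism and phenomenon names are hypothetical), a mechanism that fails more often than it succeeds can still satisfy comparative regularity, so long as no rival outcome does better:

```python
from collections import Counter

def grounds_explanation(runs, mech, phen):
    """runs: list of (mechanism, phenomenon) pairs observed across instances.
    True iff comparative regularity or comparative reverse regularity holds."""
    outcomes_of_mech = Counter(p for m, p in runs if m == mech)
    producers_of_phen = Counter(m for m, p in runs if p == phen)
    # comparative regularity: mech yields phen more often than any other phenomenon
    comparative = all(outcomes_of_mech[phen] > n
                      for q, n in outcomes_of_mech.items() if q != phen)
    # comparative reverse regularity: phen is produced by mech more often
    # than by any other mechanism
    reverse = all(producers_of_phen[mech] > n
                  for q, n in producers_of_phen.items() if q != mech)
    return comparative or reverse

# A stochastic mechanism M that succeeds only 40% of the time, failing into
# two distinct outcomes the other 60%: comparative regularity still holds.
runs = ([("M", "P")] * 40 + [("M", "fail-1")] * 30 + [("M", "fail-2")] * 30
        + [("N", "P")] * 10)
ok = grounds_explanation(runs, "M", "P")
```

The example shows why the disjunctive condition is weaker than bare regularity: 40% success suffices when the failures are dispersed across outcomes.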
A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of event. Neither can probabilistic risk analysis. This paper will argue that the approach that is referred to as engineering safety could be applied to reducing the risk from black swan extinction events. It will also propose a conceptual sketch of how such a strategy may be implemented: isolated, self-sufficient, and continuously manned underground refuges. Some characteristics of such refuges are also described, in particular the psychosocial aspects. Furthermore, it is argued that this safety-barrier implementation of the engineering safety strategy would be effective and plausible and could reduce the risk of an extinction event in a wide range of possible scenarios. Considering the staggering opportunity cost of an existential catastrophe, such strategies ought to be explored more vigorously.
In previous work, we studied four well known systems of qualitative probabilistic inference, and presented data from computer simulations in an attempt to illustrate the performance of the systems. These simulations evaluated the four systems in terms of their tendency to license inference to accurate and informative conclusions, given incomplete information about a randomly selected probability distribution. In our earlier work, the procedure used in generating the unknown probability distribution (representing the true stochastic state of the world) tended to yield probability distributions with moderately high entropy levels. In the present article, we present data charting the performance of the four systems when reasoning in environments of various entropy levels. The results illustrate variations in the performance of the respective reasoning systems that derive from the entropy of the environment, and allow for a more inclusive assessment of the reliability and robustness of the four systems.
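The tendency toward moderately high entropy is easy to check for one natural generation procedure. The authors' own procedure is not reproduced here; this sketch assumes uniform sampling on the probability simplex (a flat Dirichlet) as a stand-in, which already yields mean entropies well above half the maximum:

```python
import numpy as np

rng = np.random.default_rng(0)
n_outcomes, n_samples = 4, 2000

# Uniform sampling on the simplex = Dirichlet with all concentration parameters 1.
dists = rng.dirichlet(np.ones(n_outcomes), size=n_samples)

# Shannon entropy in nats of each sampled distribution (0 * log 0 taken as 0).
entropies = -np.sum(np.where(dists > 0, dists * np.log(dists), 0.0), axis=1)
mean_entropy = entropies.mean()
max_entropy = np.log(n_outcomes)  # ~1.386 nats for 4 outcomes
```

For four outcomes the mean works out near 1.08 nats, roughly 78% of the maximum, which is one concrete sense of "moderately high entropy".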
For aggregative theories of moral value, it is a challenge to rank worlds that each contain infinitely many valuable events. And, although there are several existing proposals for doing so, few provide a cardinal measure of each world's value. This raises the even greater challenge of ranking lotteries over such worlds—without a cardinal value for each world, we cannot apply expected value theory. How then can we compare such lotteries? To date, we have just one method for doing so (proposed separately by Arntzenius, Bostrom, and Meacham), which is to compare the prospects for value at each individual location, and to then represent and compare lotteries by their expected values at each of those locations. But, as I show here, this approach violates several key principles of decision theory and generates some implausible verdicts. I propose an alternative—one which delivers plausible rankings of lotteries, which is implied by a plausible collection of axioms, and which can be applied alongside almost any ranking of infinite worlds.
In recent years, the branching spacetime (BST) interpretation of quantum mechanics has come under study by a number of philosophers, physicists and mathematicians. This paper points out some implications of the BST interpretation for two areas of quantum physics: (1) quantum gravity, and (2) stochastic interpretations of quantum mechanics.
No two individuals with the autism diagnosis are ever the same—yet many practitioners and parents can recognize signs of ASD very rapidly with the naked eye. What, then, is this phenotype of autism that shows itself across such distinct clinical presentations and heterogeneous developments? The “signs” seem notoriously slippery and resistant to the behavioral threshold categories that make up current assessment tools. Part of the problem is that cognitive and behavioral “abilities” typically are theorized as high-level disembodied and modular functions—that are assessed discretely (impaired, normal, enhanced) to define a spectral syndrome. Even as biology reminds us that organic developing bodies are not made up of independent switches, we remain often seduced by the simplicity of mechanistic and cognitive models. Developmental disorders such as autism have accordingly been theorized as due to different modular dysfunctions—typically of cortical origin, i.e., failures of “theory of mind” (Baron-Cohen et al., 1985), of the “mirror neuron system” (Ramachandran and Oberman, 2006), of “weak central coherence” (Happe and Frith, 2006) or of the balance of “empathizing” and “systemizing” (Baron-Cohen, 2009), just to list a few.

The broad array of autonomic (Ming et al., 2005; Cheshire, 2012) and sensorimotor (Damasio and Maurer, 1978; Maurer and Damasio, 1982; Donnellan and Leary, 1995; Leary and Hill, 1996; Donnellan and Leary, 2012; Donnellan et al., 2012) differences experienced and reported by people with autism have by such theories typically been sidelined as “co-morbidities,” possibly sharing genetic causes, but rendered as incidental and decisively behaviorally irrelevant symptoms—surely disconnected from cognition. But what if the development of cortically based mental processes and autonomous control relies on the complexities and proper function of the peripheral nervous systems?
Through such an “embodied” lens, the heterogeneous symptoms of autism invite new interpretations. We propose here that many behavioral-level findings can be re-defined as downstream effects of how developing nervous systems attempt to cope with and adapt to the challenges of having various noisy, unpredictable, and unreliable peripheral inputs.
Short descriptor (Électre, 2019): A study of one of the main axes of reflection of the French philosopher of science and of nature Raymond Ruyer (1902–1987). Relying on the discoveries about embryogenesis, and also with the use of information theory, Ruyer proposed an interpretation of the main unifying concepts of mechanistic cybernetics.

This book offers an in-depth study of one of the main axes in the reflection of the French philosopher of science and nature Raymond Ruyer: cybernetics. In a text summarising his own development, Ruyer stated about the philosophical critique of information theory that it "is what can give the most long-lasting hope of getting to something like a new theology." After propounding a structuralist philosophy, and distinguishing between form and structure, to then modify it in the light of discoveries in embryogenesis, Ruyer offered a re-evaluation of the unifying concepts of mechanistic cybernetics. Thinking about this cybernetics and about information theory, in particular about the origin of information, he defended the idea that mechanistic cybernetics was in reality an inverted reading of the true one, which would allow us to read in experience itself the traces of the morphogenetic power, apprehended as the axiological field. On some transversal points—the development of forms in biology and genetics, the stochastic genesis of order, the identification of information with either psychological and mental or physical reality, behaviour, and the access to meaning—this work presents Ruyer's main ideas while situating them in the context of the breadth of others' contributions. It ends by determining what is theological and axiological in this project for a metaphysics which, although unfinished, is nevertheless the most impressive conceived in France in the last century. Available on i6doc dot com (ISBN 978-2-930517-56-8; pdf 978-2-930517-57-5).
Starting from a recent paper by S. Kaufmann, we introduce a notion of conjunction of two conditional events and then we analyze it in the setting of coherence. We give a representation of the conjoined conditional and we show that this new object is a conditional random quantity, whose set of possible values normally contains the probabilities assessed for the two conditional events. We examine some cases of logical dependencies, where the conjunction is a conditional event; moreover, we give the lower and upper bounds on the conjunction. We also examine an apparent paradox concerning stochastic independence which can actually be explained in terms of uncorrelation. We briefly introduce the notions of disjunction and iterated conditioning and we show that the usual probabilistic properties still hold.
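The lower and upper bounds on the conjunction can be made concrete in a few lines of code. A minimal sketch, assuming the bounds take the Fréchet-Hoeffding form standard in this coherence-based setting (the function name is ours, not the paper's):

```python
def conjunction_bounds(x, y):
    """Coherent lower/upper bounds for the prevision of the
    conjunction of two conditional events, given the assessments
    P(A|H) = x and P(B|K) = y (Frechet-Hoeffding form, assumed
    here as an illustration)."""
    return max(x + y - 1.0, 0.0), min(x, y)

lo, hi = conjunction_bounds(0.7, 0.6)
```

For x = 0.7 and y = 0.6, coherence only constrains the prevision of the conjunction to lie in the interval [0.3, 0.6]; any value in that interval is admissible.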
Risk communication has generally been categorized as a warning act, performed in order to prevent or minimize risk. On the other hand, risk analysis has underscored the role played by information in reducing uncertainty about risk. In both approaches the focus is on the safety aspects related to the protection of the right to health. However, there are cases where a risk cannot possibly be avoided or uncertainty reduced; this holds, for instance, for the declaration of side effects associated with pharmaceutical products, or when a decision about drug approval or withdrawal must be delivered on the available evidence. In these cases, risk communication seems to accomplish tasks other than preventing risk or reducing uncertainty. The present paper analyzes the legal instruments which have been developed in order to control and manage the risks related to drugs (such as the notions of "development risk" and "residual risk") and relates them to different kinds of uncertainty. These are conceptualized as epistemic, ecological, metric, ethical, and stochastic, depending on their nature. By referring to this taxonomy, different functions of pharmaceutical risk communication are identified and connected with the legal tools of uncertainty management. The purpose is to distinguish the different functions of risk communication and to make explicit their different legal nature and implications.
The modern synthesis in evolutionary biology is taken to be that period in which a consensus developed among biologists about the major causes of evolution, a consensus that informed research in evolutionary biology for at least a half century. As such, it is a particularly fruitful period to consider when reflecting on the meaning and role of chance in evolutionary explanation. Biologists of this period make reference to "chance" and loose cognates of "chance," such as: "random," "contingent," "accidental," "haphazard," or "stochastic." Of course, what an author might mean by "chance" in any specific context varies.

In the following, we first offer a historiographical note on the synthesis. Second, we introduce five ways in which synthesis authors spoke about chance. We do not take these to be an exhaustive taxonomy of all possible ways in which chance meaningfully figures in explanations in evolutionary biology; these are simply five common uses of the term by biologists of this period. They will serve to organize our summary of the collected references to chance and the analysis and discussion of the following questions:
• What did synthesis authors understand by chance?
• How did these authors see chance operating in evolution?
• Did their appeals to chance increase or decrease over time during the synthesis? That is, was there a "hardening" of the synthesis, as Gould (1983) claimed?
The scientific status of evolutionary theory seems to be more or less perennially under question. I am not referring here (just) to the silliness of young Earth creationism (Pigliucci 2002; Boudry and Braeckman 2010), or even of the barely more intellectually sophisticated so-called Intelligent Design theory (Recker 2010; Brigandt this volume), but rather to discussions among scientists and philosophers of science concerning the epistemic status of evolutionary theory (Sober 2010). As we shall see in what follows, this debate has a long history, dating all the way back to Darwin, and it is in great part rooted in the fundamental dichotomy between what French biologist and Nobel laureate Jacques Monod (1971) called chance and necessity, i.e., the inevitable and inextricable interplay of deterministic and stochastic mechanisms operating during the course of evolution.
Some Stochastic Processes with Jumps (MỘT SỐ QUÁ TRÌNH NGẪU NHIÊN CÓ BƯỚC NHẢY). Hoàng Thị Phương Thảo. Doctoral dissertation, VNU University of Science, Vietnam National University, Hanoi. Hà Nội, 2015.
We extend previous work on cooperation to some related questions regarding the evolution of simple forms of communication. The evolution of cooperation within the iterated Prisoner's Dilemma has been shown to follow different patterns, with significantly different outcomes, depending on whether the features of the model are classically perfect or stochastically imperfect (Axelrod 1980a, 1980b, 1984, 1985; Axelrod and Hamilton, 1981; Nowak and Sigmund, 1990, 1992; Sigmund 1993). Our results here show that the same holds for communication. Within a simple model, the evolution of communication seems to require a stochastically imperfect world.
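The contrast between a classically perfect and a stochastically imperfect world can be illustrated with a toy simulation of two tit-for-tat players; this is a hedged sketch of the general idea, not the authors' actual model:

```python
import random

def play_tft_pair(rounds=1000, noise=0.0, rng=None):
    """Two tit-for-tat players in an iterated Prisoner's Dilemma.
    With probability `noise`, a player's intended move is flipped
    (a stochastically imperfect world).  Returns the fraction of
    rounds in which both players cooperated."""
    rng = rng or random.Random()
    a_prev, b_prev = "C", "C"            # tit-for-tat opens by cooperating
    both_coop = 0
    for _ in range(rounds):
        a = b_prev if rng.random() >= noise else ("D" if b_prev == "C" else "C")
        b = a_prev if rng.random() >= noise else ("D" if a_prev == "C" else "C")
        both_coop += (a == "C" and b == "C")
        a_prev, b_prev = a, b
    return both_coop / rounds
```

In the perfect world (noise = 0.0), mutual cooperation persists forever; even a small execution noise triggers echoing retaliations, so the cooperation rate falls below 1.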
Agent-causal libertarians maintain we are irreducible agents who, by acting, settle matters that aren't already settled. This implies that the neural matters underlying the exercise of our agency don't conform to deterministic laws, but it does not appear to exclude the possibility that they conform to statistical laws. However, Pereboom (Noûs 29:21-45, 1995; Living without Free Will, Cambridge University Press, Cambridge, 2001; in: Nadelhoffer (ed) The Future of Punishment, Oxford University Press, New York, 2013) has argued that, if these neural matters conform to either statistical or deterministic physical laws, the complete conformity of an irreducible agent's settling of matters with what should be expected given the applicable laws would involve coincidences too wild to be credible. Here, I show that Pereboom's argument depends on the assumption that, at times, the antecedent probability that certain behavior will occur applies on each of a number of occasions, and is incapable of changing as a result of what one does from one occasion to the next. There is, however, no evidence that this assumption is true. The upshot is that the wild coincidence objection is an empirical objection lacking empirical support. Thus, it is not a compelling argument against agent-causal libertarianism.
The current rise of neurodevelopmental disorders poses a critical need to detect risk early in order to intervene rapidly. One of the tools pediatricians use to track development is the standard growth chart. Growth charts, however, are limited in predicting possible neurodevelopmental issues: they rely on linear models and assumptions of normality for physical growth data, obscuring key statistical information about possible neurodevelopmental risk in growth data that actually show accelerated, non-linear rates of change and variability encompassing skewed distributions. Here, we use new analytics to profile growth data from 36 newborn babies that were tracked longitudinally for 5 months. By switching to incremental (velocity-based) growth charts and combining these dynamic changes with underlying fluctuations in motor performance, understood as the transition from spontaneous random noise to a systematic signal, we demonstrate a method to detect very early stunting in the development of voluntary neuromotor control and to flag risk of neurodevelopmental derailment.
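The switch from raw size to incremental (velocity-based) charts amounts to tracking successive differences of the longitudinal measurements. A minimal sketch, with hypothetical numbers, illustrative only and not the authors' actual analytics:

```python
def growth_velocities(measurements):
    """Incremental (velocity-based) series: the successive
    differences of longitudinally recorded growth measurements,
    i.e. the rate of change an incremental chart would track
    instead of raw size."""
    return [b - a for a, b in zip(measurements, measurements[1:])]

# Hypothetical monthly lengths in cm for one infant:
velocities = growth_velocities([50.0, 54.5, 58.0, 60.5])  # [4.5, 3.5, 2.5]
```

A flattening or erratic velocity series, rather than the raw size itself, is the kind of dynamic signal the paper combines with motor-performance fluctuations.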
Optimization and Stochastic Processes Applied to Economy and Finance. Textbook for the BM&F-USP (Brazilian Mercantile and Futures Exchange - University of Sao Paulo) Master's degree program in Finance.
In this article I examine two mathematical definitions of observational equivalence, one proposed by Charlotte Werndl and based on manifest isomorphism, and the other based on Ornstein and Weiss's ε-congruence. I argue, for two related reasons, that neither can function as a purely mathematical definition of observational equivalence. First, each definition admits of counterexamples; second, overcoming these counterexamples will introduce non-mathematical premises about the systems in question. Accordingly, the prospects for a broadly applicable and purely mathematical definition of observational equivalence are unpromising. Despite this critique, I suggest that Werndl's proposals are valuable because they clarify the distinction between provable and unprovable elements in arguments for observational equivalence.
Bell-type inequalities are proven using oversimplified probabilistic models and/or counterfactual definiteness (CFD). If setting-dependent variables describing measuring instruments are correctly introduced, none of these inequalities may be proven. In spite of this, belief in a mysterious quantum nonlocality is not fading. Computer simulations of Bell tests allow one to study the different ways in which the experimental data might have been created. They also allow for the generation of various counterfactual experiments' outcomes, such as repeated or simultaneous measurements performed in different settings on the same "photon-pair", and so forth. They allow for reinforcing or relaxing CFD compliance and/or for studying the impact of various "photon identification procedures", mimicking those used in real experiments. Data samples consistent with quantum predictions may be generated by using a specific setting-dependent identification procedure, which reflects the active role of instruments during the measurement process. Each of the setting-dependent data samples is consistent with a specific setting-dependent probabilistic model which may not be deduced using non-contextual local realistic or stochastic hidden variables. In this paper, we discuss the results of these simulations. Since the data samples are generated in a locally causal way, these simulations provide additional strong arguments for closing the door on quantum nonlocality.
By means of a probabilistic mathematical model, we bring into discussion the origin of life as a stochastic process. We consider only the chance of information emergence in the proteome and genome under ideal thermodynamic and chemical conditions. For a more realistic model, we used as a parameter the information amount in the N. equitans genome, the simplest known today, as the equivalent of the first living cell that could have emerged on the primitive Earth. We estimated the probability of information emergence by chance as being smaller than 10^(-500,000). Considering the necessary ideal conditions for information emergence, the probability of the origin of life would be even smaller.
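The order of magnitude of such a probability is easy to reproduce as a back-of-the-envelope calculation. The sketch below is not the paper's actual model; it only computes the log-probability that a uniformly random nucleotide sequence matches one fixed target, using an illustrative genome-scale length:

```python
import math

def chance_assembly_log10(length, alphabet=4):
    """log10 of the probability that a uniformly random sequence
    over `alphabet` symbols matches one fixed target of the given
    length: p = alphabet**(-length)."""
    return -length * math.log10(alphabet)

# Illustrative round number near the N. equitans genome size (~490 kb):
log_p = chance_assembly_log10(490_000)   # about -295009.4
```

Even this crude single-target bound gives a probability around 10^(-295,000); the paper's own model, which also accounts for the proteome, reports a bound below 10^(-500,000).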
The monograph is an English, expanded and revised version of the book Cheshko, V. T., Ivanitskaya, L.V., & Glazko, V.I. (2018). Anthropocene. Philosophy of Biotechnology. Moscow, Course. The manuscript was completed on November 15, 2019. It is a study devoted to the development of the concept of a stable evolutionary human strategy as a unique phenomenon of global evolution. With equal right, this study could have been entitled "Biotechnology as a result and factor of the evolutionary process". The choice in favor of "The Evolutionary Metaphysics of Human Enhancement Technologies" (Cheshko, 2012; Glazko et al., 2016) was made in accordance with the basic principle of modern post-academic and human-dimensional science, of which biotechnology is a classic example. The concepts "metaphysics of evolution" and "evolutionary metaphysics" are used in several ways in modern philosophical discourse; in any case, their meanings contain a logical or associative reference to the teleological nature of the evolutionary process (Hull, 1967, 1989; Apel, 1995; Faye, 2016; Dupre, 2017; Rose, 2018, etc.). In our study, "evolutionary metaphysics" serves to denote the thesis of the rationalization and technologization of global evolution and of anthropogenesis in particular. At the same time, the postulate of an open future remains relevant in relation to the results of the evolutionary process. The theory of the evolution of complex systems, including human ones, and the algorithm for constructing it are a synthesis of evolutionary epistemology, philosophical anthropology and a concrete scientific empirical basis in modern science. In other words, natural philosophy is regaining the status of a basic element of theoretical science in the era of technology-driven evolution. The co-evolutionary concept of a 3-modal stable evolutionary strategy of Homo sapiens is developed.
The concept is based on the principle of the evolutionary complementarity of anthropogenesis: the value of evolutionary risk and the evolutionary path of human evolution are defined by descriptive (evolutionary efficiency) and creative-teleological (evolutionary correctness) parameters simultaneously, which cannot be instrumentally reduced to one another. The resulting volume of both parameters defines the vectors of biological, social, cultural and techno-rationalistic human evolution through a two-gear mechanism of genetic-cultural co-evolution and techno-humanitarian balance. The resultant of each of them can be estimated by the ratio of the socio-psychological predispositions toward humanization/dehumanization in mentality. An explanatory model and a methodology for evaluating the creative-teleological evolutionary-risk component of the NBIC technological complex are proposed. An integral part of the model is evolutionary semantics (a time-varying semantic code governing the compliance of the biological, socio-cultural and techno-rationalist adaptive modules of the human stable evolutionary strategy). Three clarifications seem necessary. First, the logical construct "evolutionary metaphysics" contains an internal contradiction, because it unites two alternative explanatory models. "Metaphysics", as the subject, implies the deducibility of the process from an initial general abstract principle, so that the outcome of the development of the object is uniquely determined by the initial conditions. The predicate "evolutionary" means a stochastic mechanism realizing the same principle by memorizing and replicating random choices, in all variants of the post-Darwinian paradigm. In philosophy, random choice corresponds to the category of "free will" of a rational agent; in evolutionary theory, the same phenomenon is reflected in the concept of "covariant replication". The authors attempt to synthesize both of these models in a single transdisciplinary theoretical framework.
Secondly, the interpretation of the term "evolutionary (adaptive) strategy" differs from the classical definition. The difference is that the adaptive strategy in this context is equivalent to survival, i.e. it includes both adaptation to the environment and the transformation (construction) of the environment in accordance with the objectives of survival. To emphasize this difference, the authors use the verbal construction "adaptive" (rather than "evolutionary") strategy as the more adequate one; in all other cases, the two terms may be regarded as synonymous. Thirdly, the initial two essays of this series were published in one book in 2012. Their main goal was the development of a logically consistent methodological concept of the stable adaptive (evolutionary) strategy of hominins and the argumentation of its heuristic possibilities as a transdisciplinary scientific paradigm of modern anthropology. The task was to demonstrate the possibilities of the SESH concept in describing and explaining the evolutionary prospects of the interaction of social organization and technology (techno-humanitarian balance) and the associated biological and cultural mechanisms of the genesis of religion (gene-cultural co-evolution). In other words, it was related to the sphere of cultural and philosophical anthropology, i.e. to the axiological component of any theoretical constructions describing the behavior of self-organizing systems with human participation. In contrast, the present work is an attempt to introduce this concept into the sphere of biological anthropology; consequently, its main goal is to demonstrate the possibility of verifying its main provisions by means of procedures developed by natural science, i.e. it refers to the descriptive component of the same theoretical constructions.
In the future, the result of this should be methods for assessing, calculating and predicting the risk of the loss of a person's biological and cultural identity associated with the permanent and continuously deepening development of science and technology.
The problem with reductionism in biology is not the reduction, but the implicit attitude of determinism that usually accompanies it. Methodological reductionism is supported by deterministic beliefs, but making such a connection is problematic when it is based on an idea of determinism as fixed predictability. Conflating determinism with predictability gives rise to inaccurate models that overlook the dynamic complexity of our world, as well as ignore our epistemic limitations when we try to model it. Furthermore, the assumption of a strictly deterministic framework is unnecessarily hindering to biology. By removing the dogma of determinism, biological methods, including reductive methods, can be expanded to include stochastic models and probabilistic interpretations. Thus, the dogma of reductionism can be saved once its ties with determinism are severed. In this paper, I analyze two problems that have faced molecular biology for the last 50 years: protein folding and cancer. Both cases demonstrate the long influence of reductionism and determinism on molecular biology, as well as how abandoning determinism has opened the door to more probabilistic and unconstrained reductive methods in biology.
The problem of cancer is examined from the metaphysical standpoint of essence and ground. An essentialist definition of cancer is assumed that would be valid in all possible worlds in which cancer could logically exist. The grounds of cancer are then examined and elucidated. Two grounding properties of cancer are identified and discussed: symmetry-breaking and computational intelligence. Each examination leads to concrete conclusions for novel therapeutic approaches and a more fundamental understanding of what cancer is at bottom. Other possible cancer-grounding properties, related to evolution, adaptability and stochastic features, are identified for future work. This approach is novel and offers new solutions to the problem of cancer.
Spontaneous collapse theories of quantum mechanics turn the usual Schrödinger equation into a stochastic dynamical law. In this paper, I focus on the GRW theory. Two philosophical issues can be raised about GRW: (i) the ontology of the theory, in particular the nature of the wave function and its role within the theory, and (ii) the interpretation of the objective probabilities involved in the dynamics of the theory. In recent years, it has been claimed that we can take advantage of dispositional properties in order to develop an ontology for GRW theory, and also in order to ground the objective probabilities which are postulated by it. However, in this paper, I argue that the dispositional interpretations which have been discussed in the literature so far are either flawed or, at best, incomplete. If we want to endorse a dispositional interpretation of GRW theory, we thus need an extended account that specifies the precise nature of those properties and also makes clear how they can correctly ground all the probabilities postulated by the theory. Thus, after having introduced several different kinds of probabilistic dispositions, I try to fill the gap in the literature by proposing a novel and complete dispositional account of GRW, based on what I call spontaneous weighted multi-track propensities. I claim that such an account can satisfy both of our desiderata.
I analyze Deleuze's concept of temporality in terms of its ontological and axiological (political and aesthetic) aspects. For Deleuze, the concept of temporality is non-monolithic, in the senses that it is modified throughout his works (the monographs, lectures, and the works co-authored with Félix Guattari) and that it is developed through reference to a dizzying array of concepts, thinkers, artistic works, and social phenomena.

I observe that Deleuze's concept of temporality involves a complex ontology of difference, which I elaborate through reference to Deleuze's analyses of Ancient Greek and Stoic conceptualizations of time. From Plato through to Chrysippus, temporality gradually comes to be identified as a form that comprehends the variation of particulars. Deleuze modifies the ancients' concept of time to suggest that time obtains as a form of ceaseless ontological variation. Through reference to Deleuze's reading of Gilbert Simondon, I further suggest that Deleuze tends to conceive of temporality as an ontogenetic force which participates in the complex process of individuation.

A standout feature of this dissertation is an analysis of how Deleuze's concept of temporality is modified in his works on cinema. In Cinema 1: The Movement-Image and Cinema 2: The Time-Image, temporality comes to be characterized as something other than the measure of the movement of existents. In his detailed analyses of Bergson (in Cinema 1: The Movement-Image, Cinema 2: The Time-Image, and Bergsonism), Deleuze suggests that time involves an actualization of aspects of a virtual past as contemporaneous with the lived present. While not an outright denial of the relation of temporal succession, Deleuze's claim implies a diminishment of this relation's significance in an adequate elaboration of the nature of temporality.
Further, I observe, through reference to Deleuze's readings of Marx, Kierkegaard, and Spinoza, that the (explicitly temporal) change of societal forms of economic organization is non-reducible to that suggested by linear evolution. The claim is that putatively discrete modes of economic organization do not enjoy temporal displacement with respect to one another. This suggests that linear evolutionary models of societal development are inadequate, and further implies that temporality is non-reducible to the relation of temporal succession. In concrete terms, societal change is characterized as immanent temporal variation.

Taken together, these analyses yield the conclusion that Deleuze tends to conceive of the nature of temporality as involving the ongoing realization of multiple (non-identical, sometimes contrary) aspects of a stochastic process of creation that is expressed in ontogenetic circumstances, social evolution, literary works, and filmic works.
Opinions are rarely binary; they can be held with different degrees of conviction, and this expanded attitude spectrum can affect the influence one opinion has on others. Our goal is to understand how different aspects of influence lead to recognizable spatio-temporal patterns of opinions and their strengths. To do this, we introduce a stochastic spatial agent-based model of opinion dynamics that includes a spectrum of opinion strengths and various possible rules for how the opinion strength of one individual affects the influence that this individual has on others. Through simulations, we find that even a small amount of amplification of opinion strength through interaction with like-minded neighbors can tip the scales in favor of polarization and deadlock.
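A minimal version of such a stochastic spatial agent-based model can be sketched in a few lines; the specific update rule below (amplification among like-minded neighbors, drift toward opposed ones) is our illustrative assumption, not the authors' exact model:

```python
import random

def step(grid, amplify=0.1, n_updates=100, rng=random):
    """One round of a toy opinion model on an n x n periodic grid.
    Each cell holds a signed opinion strength in [-1, 1].  A random
    cell interacts with a random neighbor: agreement in sign
    amplifies the cell's own strength; disagreement pulls the cell
    toward the neighbor's opinion."""
    n = len(grid)
    for _ in range(n_updates):
        i, j = rng.randrange(n), rng.randrange(n)
        di, dj = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        nb = grid[(i + di) % n][(j + dj) % n]
        me = grid[i][j]
        if me * nb > 0:                      # like-minded: amplify strength
            me += amplify if me > 0 else -amplify
        else:                                # opposed: drift toward neighbor
            me += 0.5 * (nb - me)
        grid[i][j] = max(-1.0, min(1.0, me))
    return grid
```

Raising `amplify` makes extreme opinions self-reinforcing, which is the kind of mechanism the paper identifies as tipping the dynamics toward polarization and deadlock.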
We describe a software system for the analysis of defined benefit actuarial plans. The system uses a recursive formulation of the actuarial stochastic processes to implement precise and efficient computations of individual and group cash flows.
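Recursive formulations of actuarial quantities are standard in this setting. As a hedged illustration of the style of computation (a textbook recursion, not the system's actual code), the present value of a whole-life annuity-due satisfies a(x) = 1 + v * p(x) * a(x+1):

```python
def annuity_due(x, max_age, p, v):
    """Present value of a whole-life annuity-due of 1 per year,
    via the backward recursion a(x) = 1 + v * p(x) * a(x + 1),
    with a(max_age) = 1.  p(x) is the one-year survival
    probability at age x; v is the annual discount factor."""
    if x >= max_age:
        return 1.0
    return 1.0 + v * p(x) * annuity_due(x + 1, max_age, p, v)
```

With p(x) = 1 and v = 1, the recursion simply counts the payments: annuity_due(0, 3, lambda x: 1.0, 1.0) evaluates to 4.0. Group cash flows are then sums of such per-member values.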
The Generalized Poisson Distribution (GPD) adds an extra parameter to the usual Poisson distribution. This parameter induces a loss of homogeneity in the stochastic processes modeled by the distribution, making the generalized distribution a useful model for counting processes where the occurrence of events is not homogeneous. This model creates the need for an inferential procedure to test the value of this extra parameter. The FBST (Full Bayesian Significance Test) is a Bayesian hypothesis testing procedure capable of providing an evidence measure on sharp hypotheses (those for which the dimension of the parametric space under the null hypothesis is smaller than that of the full parametric space). The goal of this work is to study the empirical properties of the FBST for testing the nullity of the extra parameter of the generalized Poisson distribution. Numerical experiments show a better performance of the FBST with respect to the classical likelihood ratio test, and suggest that the FBST is an efficient and robust tool for this application.
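Consul's generalized Poisson pmf makes the role of the extra parameter explicit. A sketch (illustrative, evaluated in log space for numerical stability) that reduces to the ordinary Poisson pmf when the extra parameter λ is zero:

```python
import math

def gpd_pmf(k, theta, lam):
    """Generalized Poisson pmf (Consul):
    P(X = k) = theta * (theta + k*lam)**(k - 1) * exp(-theta - k*lam) / k!
    Evaluated in log space; lam = 0 recovers Poisson(theta)."""
    return math.exp(math.log(theta) + (k - 1) * math.log(theta + k * lam)
                    - theta - k * lam - math.lgamma(k + 1))

def poisson_pmf(k, theta):
    """Ordinary Poisson pmf, for comparison."""
    return theta ** k * math.exp(-theta) / math.factorial(k)
```

Testing the nullity of the extra parameter, as the paper does with the FBST, then amounts to the sharp hypothesis λ = 0 inside the full (θ, λ) parameter space.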
Cognitive scientists used to deem reasoning either a higher cognitive process based on the manipulation of abstract rules or a higher cognitive process that is stochastic rather than involving abstract rules. I maintain that these different perspectives are closely intertwined with a theoretical and methodological endorsement of either cognitivism or connectionism, the two prevailing and opposed paradigms in cognitive science. I aim to extol the virtues of connectionist models of enthymematic reasoning by the following means: (1) via the phenomenon of the creative enthymeme, viz. the inference where one cannot even articulate the missing premise, I introduce a connectionist mechanism of pattern recognition as underlying expertise; (2) via the Gestalt switch or Gestalt click, I demonstrate how the pattern recognition of an expert and that of a novice can be construed as qualitatively different, and not merely a matter of faster reasoning.