In this article, we discuss the benefits of Bayesian statistics and how to utilize them in studies of moral education. To demonstrate concrete examples of the applications of Bayesian statistics to studies of moral education, we reanalyzed two previously collected data sets: one small data set collected from a moral educational intervention experiment, and one big data set from a large-scale Defining Issues Test-2 survey. The results suggest that Bayesian analysis of data sets collected from moral educational studies can provide additional useful statistical information, particularly that associated with the strength of evidence supporting alternative hypotheses, which is not provided by the classical frequentist approach focusing on p-values. Finally, we introduce several practical guidelines pertaining to how to utilize Bayesian statistics, including the utilization of the newly developed free statistical software JASP (Jeffreys's Amazing Statistics Program) and thresholding based on Bayes factors, to scholars in the field of moral education.
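The Bayes-factor thresholding the abstract recommends can be illustrated with a minimal sketch that is not from the paper itself: a Bayes factor comparing a point null (success probability 0.5) against a uniform alternative in a binomial setting, read against the conventional Jeffreys-style scale (values above 3 as moderate evidence, above 10 as strong evidence). The function name and the example counts are hypothetical.

```python
from math import comb

def bf10_binomial(k, n):
    """Bayes factor BF10 for k successes in n trials:
    H1: theta ~ Uniform(0, 1) versus H0: theta = 0.5."""
    m1 = 1.0 / (n + 1)           # marginal likelihood under the uniform prior
    m0 = comb(n, k) * 0.5 ** n   # likelihood under the point null
    return m1 / m0

# 15 successes in 20 trials: BF10 of about 3.2, i.e. moderate evidence for H1;
# 10 in 20 gives a BF10 below 1, i.e. the data favour the null.
bf = bf10_binomial(15, 20)
```

The closed form for the uniform-prior marginal (1/(n+1)) is exact for the Beta(1,1) prior; JASP reports Bayes factors of this general kind for its Bayesian binomial test.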
This paper puts forward the hypothesis that the distinctive features of quantum statistics are exclusively determined by the nature of the properties it describes. In particular, all statistically relevant properties of identical quantum particles in many-particle systems are conjectured to be irreducible, ‘inherent’ properties only belonging to the whole system. This allows one to explain quantum statistics without endorsing the ‘Received View’ that particles are non-individuals, or postulating that quantum systems obey peculiar probability distributions, or assuming that there are primitive restrictions on the range of states accessible to such systems. With this, the need for an unambiguously metaphysical explanation of certain physical facts is acknowledged and satisfied.
The spin-statistics connection is derived in a simple manner under the postulates that the original and the exchange wave functions are simply added, and that the azimuthal phase angle, which defines the orientation of the spin part of each single-particle spin-component eigenfunction in the plane normal to the spin-quantization axis, is exchanged along with the other parameters. The spin factor (−1)2s belongs to the exchange wave function when this function is constructed so as to get the spinor ambiguity under control. This is achieved by effecting the exchange of the azimuthal angle by means of rotations and admitting only rotations in one sense. The procedure works in Galilean as well as in Lorentz-invariant quantum mechanics. Relativistic quantum field theory is not required.
Statistical evidence is crucial throughout disparate impact’s three-stage analysis: during (1) the plaintiff’s prima facie demonstration of a policy’s disparate impact; (2) the defendant’s job-related business necessity defense of the discriminatory policy; and (3) the plaintiff’s demonstration of an alternative policy without the same discriminatory impact. The circuit courts are split on a vital question about the “practical significance” of statistics at Stage 1: Are “small” impacts legally insignificant? For example, is an employment policy that causes a one percent disparate impact an appropriate policy for redress through disparate impact litigation? This circuit split calls for a comprehensive analysis of practical significance testing across disparate impact’s stages. Importantly, courts and commentators use “practical significance” ambiguously between two aspects of practical significance: the magnitude of an effect and confidence in statistical evidence. For example, at Stage 1 courts might ask whether statistical evidence supports a disparate impact (a confidence inquiry) and whether such an impact is large enough to be legally relevant (a magnitude inquiry). Disparate impact’s texts, purposes, and controlling interpretations are consistent with confidence inquiries at all three stages, but not magnitude inquiries. Specifically, magnitude inquiries are inappropriate at Stages 1 and 3—there is no discriminatory impact or reduction too small or subtle for the purposes of the disparate impact analysis. Magnitude inquiries are appropriate at Stage 2, when an employer defends a discriminatory policy on the basis of its job-related business necessity.
Statistics play a critical role in biological and clinical research. To promote logically consistent representation and classification of statistical entities, we have developed the Ontology of Biological and Clinical Statistics (OBCS). OBCS extends the Ontology of Biomedical Investigations (OBI), an OBO Foundry ontology supported by some 20 communities. Currently, OBCS contains 686 terms, including 381 classes imported from OBI and 147 classes specific to OBCS. The goal of this paper is to present OBCS for community critique and to describe a number of use cases designed to illustrate its potential applications. The OBCS project and source code are available at http://obcs.googlecode.com.
The last two decades have seen a welcome proliferation of the collection and dissemination of data on social progress, as well as considered public debates rethinking existing standards of measuring the progress of societies. These efforts are to be welcomed. However, they are only a nascent step on a longer road to the improved measurement of social progress. In this paper, I focus on the central role that gender should take in future efforts to measure progress in securing human rights, with a particular focus on anti-poverty rights. I proceed in four parts. First, I argue that measurement of human rights achievements and human rights deficits is entailed by the recognition of human rights, and that adequate measurement of human rights must be genuinely gender-sensitive. Second, I argue that existing systems of information collection currently fail rights holders, especially women, by failing to adequately gather information on the degree to which their rights are secure. If my first two claims are correct, this failure represents a serious injustice, and in particular an injustice for women. Third, I make recommendations regarding changes to existing information collection that would generate gender-sensitive measures of anti-poverty rights. Fourth, I conclude by responding to various objections that have been raised regarding the rise of indicators to track human rights.
This paper, “Cultural Statistics, the Media and the Planning and Development of Calabar, Nigeria”, stresses the need for the use of Cultural Statistics and effective media communication in the planning and development of Calabar, the Cross River State capital. This position is anchored on the fact that in virtually every sphere of life there can be no development without planning, and there can be no proper planning without accurate data or information. Cultural Statistics and effective use of the media thus become imperative in the planning and development of Calabar, especially as the Cross River State capital is fast becoming an internationally recognized cultural city, due largely to its annual Calabar Festival and Carnival. The paper argues, among other things, that cultural statistics and the use of the media will reposition the city of Calabar, not only in terms of development but also in marketing and branding, taking into consideration the new economy and globalization, which involve technology, creativity, human capital and capacity for innovation. The paper concludes that although some effort has been made by the Cross River State government in gathering and publishing some cultural information in brochures and other periodicals, a deliberate and conscientious effort will need to be made by the relevant government authorities to collect, collate, analyze and interpret cultural data in Calabar and project it in the media, with a view to enhancing the planning and development of the Cross River State capital so as to truly make it a tourism and cultural haven in Nigeria and on the continent of Africa.
The purpose of this paper is twofold: 1) to highlight the widely ignored but fundamental problem of ‘superpopulations’ for the use of inferential statistics in development studies. We do not dwell on this problem, however, as it has been sufficiently discussed in older papers by statisticians that social scientists have nevertheless long chosen to ignore; the interested reader can turn to those for greater detail. 2) to show that descriptive statistics both avoid the problem of superpopulations and can be a powerful tool when used correctly. A few examples are provided. The paper ends with considerations of some of the reasons we think are behind the adherence to methods that are known to be inapplicable to many of the types of questions asked in development studies yet are still widely practiced.
We start with the ambition -- dating back to the early days of the semantic web -- of assembling a significant portion of human knowledge into a contradiction-free form using semantic web technology. We argue that this would not be desirable, because there are concepts, known as essentially contested concepts, whose definitions are contentious due to deep-seated ethical disagreements. Further, we argue that the nineteenth-century hermeneutical tradition has a great deal to say, both about the ambition and about why it fails. We conclude with some remarks about statistics.
Statistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem. Terms in OBCS, including ‘data collection’, ‘data transformation in statistics’, ‘data visualization’, ‘statistical data analysis’, and ‘drawing a conclusion based on data’, cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. By representing statistics-related terms and their relations in a rigorous fashion, OBCS facilitates standard data analysis and integration, and supports reproducible biological and clinical research.
If the goal of statistical analysis is to form justified credences based on data, then an account of the foundations of statistics should explain what makes credences justified. I present a new account called statistical reliabilism (SR), on which credences resulting from a statistical analysis are justified (relative to alternatives) when they are in a sense closest, on average, to the corresponding objective probabilities. This places (SR) in the same vein as recent work on the reliabilist justification of credences generally [Dunn, 2015, Tang, 2016, Pettigrew, 2018], but it has the advantage of being action-guiding in that knowledge of objective probabilities is not required to identify the best-justified available credences. The price is that justification is relativized to a specific class of candidate objective probabilities, and to a particular choice of reliability measure. On the other hand, I show that (SR) has welcome implications for frequentist-Bayesian reconciliation, including a clarification of the use of priors; complementarity between probabilist and fallibilist [Gelman and Shalizi, 2013, Mayo, 2018] approaches towards statistical foundations; and the justification of credences outside of formal statistical settings. Regarding the latter, I demonstrate how the insights of statistics may be used to amend other reliabilist accounts so as to render them action-guiding. I close by discussing new possible research directions for epistemologists and statisticians (and other applied users of probability) raised by the (SR) framework.
This paper discusses the issue of the identity and individuality (or lack thereof) of quantum mechanical particles. It first reconstructs, on the basis of the extant literature, a general argument in favour of the conclusion that such particles are not individual objects. Then, it critically assesses each one of the argument’s premises. The upshot is that, in fact, there is no compelling reason for believing that quantum particles are not individual objects.
There are two motivations commonly ascribed to historical actors for taking up statistics: to reduce complicated data to a mean value (e.g., Quetelet), and to take account of diversity (e.g., Galton). Different motivations will, it is assumed, lead to different methodological decisions in the practice of the statistical sciences. Karl Pearson and W. F. R. Weldon are generally seen as following directly in Galton’s footsteps. I argue for two related theses in light of this standard interpretation, based on a reading of several sources in which Weldon, independently of Pearson, reflects on his own motivations. First, while Pearson does approach statistics from this "Galtonian" perspective, he is, consistent with his positivist philosophy of science, utilizing statistics to simplify the highly variable data of biology. Weldon, on the other hand, is brought to statistics by a rich empiricism and a desire to preserve the diversity of biological data. Secondly, we have here a counterexample to the claim that divergence in motivation will lead to a corresponding separation in methodology. Pearson and Weldon, despite embracing biometry for different reasons, settled on precisely the same set of statistical tools for the investigation of evolution.
The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference, with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative strength of evidence for hypotheses. We suggest that rather than mere statistical reform, what is needed is a better understanding of the different modes of statistical inference and a better understanding of how statistical inference relates to scientific inference.
Perhaps the topic of acceptable risk never had a sexier and more succinct introduction than the one Edward Norton, playing an automobile company executive, gave it in Fight Club: “Take the number of vehicles in the field (A), multiply it by the probable rate of failure (B), and multiply the result by the average out of court settlement (C). A*B*C=X. If X is less than the cost of the recall, we don’t do one.” Of course, this dystopic scene also gets to the heart of the issue in another way: acceptable risk deals with mathematical calculations about the value of life, injury, and emotional wreckage, making calculation a difficult matter ethically, politically, and economically. This entry will explore the history of this idea, focusing on its development alongside statistics into its wide importance today.
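The narrator's formula is simple enough to state as code. The figures below are invented for illustration (the scene gives no numbers), and the point, as the entry notes, is precisely what such an expected-value calculation leaves out.

```python
def recall_decision(vehicles, failure_rate, settlement, recall_cost):
    """The Fight Club formula: X = A * B * C; recall only if X exceeds the recall cost."""
    expected_liability = vehicles * failure_rate * settlement  # X = A * B * C
    decision = "recall" if expected_liability > recall_cost else "no recall"
    return decision, expected_liability

# Hypothetical figures: a million vehicles, a 0.01% failure rate, a $50,000
# average settlement, against an $8M recall -- X is about $5M, so no recall.
decision, liability = recall_decision(1_000_000, 0.0001, 50_000, 8_000_000)
```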
My dissertation explores the ways in which Rudolf Carnap sought to make philosophy scientific by further developing recent interpretive efforts to explain Carnap’s mature philosophical work as a form of engineering. It does this by looking in detail at his philosophical practice in his most sustained mature project, his work on pure and applied inductive logic. I, first, specify the sort of engineering Carnap is engaged in as involving an engineering design problem and then draw out the complications of design problems from current work in history of engineering and technology studies. I then model Carnap’s practice based on those lessons and uncover ways in which Carnap’s technical work in inductive logic takes some of these lessons on board. This shows ways in which Carnap’s philosophical project subtly changes right through his late work on induction, providing an important corrective to interpretations that ignore the work on inductive logic. Specifically, I show that paying attention to the historical details of Carnap’s attempt to apply his work in inductive logic to decision theory and theoretical statistics in the 1950s and 1960s helps us understand how Carnap develops and rearticulates the philosophical point of the practical/theoretical distinction in his late work, offering thus a new interpretation of Carnap’s technical work within the broader context of philosophy of science and analytical philosophy in general.
This dissertation shows how initial conditions play a special role in the explanation of contingent and irregular outcomes, including, in the form of geographic context, the special case of uneven development in the social sciences. The dissertation develops a general theory of this role, recognizes its empirical limitations in the social sciences, and considers how it might be applied to the question of uneven development. The primary purpose of the dissertation is to identify and correct theoretical problems in the study of uneven development; it is not intended to be an empirical study. Chapter 1 introduces the basic problem, and discusses why it has become especially salient in debates concerning uneven development. Chapter 2 develops an argument for the importance of initial conditions in the philosophy of science, developed specifically in the context of the Bhaskar/Cartwright ‘open systems’ (and by extension, ‘exogenous factor’) emphasis on the ubiquity of contingency in the universe and rejection of explanation based on laws of nature (regularity accounts) of causation.
Chapter 3 makes three claims concerning the concept of contingency, especially as related to the study of society: 1) that there are eight distinct uses of the word contingency, and its many meanings are detrimental to clarity of discussion and thought in history and the social sciences; 2) that it is possible to impose some order on these different uses by developing a classification of contingency into three types based on assumptions concerning possible worlds and determinism; 3) that one of the classes is a special use of the word without relevance to the social sciences, while the two remaining classes are nothing more than a variety of the ‘no hidden factors’ argument in the debate on indeterminism and determinism (and thus related to the concept of spacetime trajectories caused by initial conditions and the interference of these in the form of ‘exogenous factors’ with ‘open systems’). Chapter 4 addresses the concept of explanation based on initial conditions together with laws of nature, which is widely associated with determinism; in the social sciences determinism has frequently been rejected due to the moral dilemmas it is perceived as presenting, and the chapter considers problems with this view. Chapter 5 considers attitudes among geographers, economists, and historians towards using geographic factors as initial conditions in explanation and how they might acceptably be used, in particular their role in ‘anchoring’ aspatial theories of social processes to real-world distributions. Chapter 6 considers the relationship of the statistical methods common in development studies with the trend towards integrating geographical factors into econometric development studies. It introduces the statistical argument on ‘apparent populations’ that arrives at conclusions concerning determinism consistent with Chapters 2 and 3 of the dissertation.
The need for the visual interpretation of data with descriptive statistics and maps and their utility in the study of uneven development is discussed with a number of examples. Chapter 7 applies these concepts to the ‘institutions versus geography’ debate in development studies, using Acemoglu, Johnson and Robinson’s 2002 ‘reversal of fortune’ argument as a primary example. Chapter 8 considers possible directions for future work, both theoretical and empirical. Chapter 9 concludes with a discussion of additional possible objections to the use of initial conditions as exogenous factors in explanation.
Scientists use models to understand the natural world, and it is important not to conflate model and nature. As an illustration, we distinguish three different kinds of populations in studies of ecology and evolution: theoretical, laboratory, and natural populations, exemplified by the work of R.A. Fisher, Thomas Park, and David Lack, respectively. Biologists are rightly concerned with all three types of populations. We examine the interplay between these different kinds of populations, and their pertinent models, in three examples: the notion of “effective” population size, the work of Thomas Park on /Tribolium/ populations, and model-based clustering algorithms such as /Structure/. Finally, we discuss ways to move safely between three distinct population types while avoiding confusing models and reality.
The reward system of science is the priority rule. The first scientist making a new discovery is rewarded with prestige, while second runners get little or nothing. Michael Strevens, following Philip Kitcher, defends this reward system, arguing that it incentivizes an efficient division of cognitive labor. I argue that this assessment depends on strong implicit assumptions about the replicability of findings. I question these assumptions on the basis of metascientific evidence and argue that the priority rule systematically discourages replication. My analysis leads us to qualify Kitcher and Strevens’s contention that a priority-based reward system is normatively desirable for science.
Many oppose the use of profile evidence against defendants at trial, even when the statistical correlations are reliable and the jury is free from prejudice. The literature has struggled to justify this opposition. We argue that admitting profile evidence is objectionable because it violates what we call “equal protection”—that is, a right of innocent defendants not to be exposed to higher ex ante risks of mistaken conviction compared to other innocent defendants facing similar charges. We also show why admitting other forms of evidence, such as eyewitness, trace, and motive evidence, does not violate equal protection.
Conciliationism faces a challenge that has not been satisfactorily addressed. There are clear cases of epistemically significant merely possible disagreement, but there are also clear cases where merely possible disagreement is epistemically irrelevant. Conciliationists have not yet accounted for this asymmetry. In this paper, we propose that the asymmetry can be explained by positing a selection constraint on all cases of peer disagreement—whether actual or merely possible. If a peer’s opinion was not selected in accordance with the proposed constraint, then it lacks epistemic significance. This allows us to distinguish the epistemically significant cases of merely possible disagreement from the insignificant ones.
This paper shows how the classical finite probability theory (with equiprobable outcomes) can be reinterpreted and recast as the quantum probability calculus of a pedagogical or toy model of quantum mechanics over sets (QM/sets). There have been several previous attempts to develop a quantum-like model with the base field of ℂ replaced by ℤ₂. Since there are no inner products on vector spaces over finite fields, the problem is to define the Dirac brackets and the probability calculus. The previous attempts all required the brackets to take values in ℤ₂. But the usual QM brackets <ψ|ϕ> give the "overlap" between states ψ and ϕ, so for subsets S,T⊆U, the natural definition is <S|T>=|S∩T| (taking values in the natural numbers). This allows QM/sets to be developed with a full probability calculus that turns out to be a non-commutative extension of classical Laplace-Boole finite probability theory. The pedagogical model is illustrated by giving simple treatments of the indeterminacy principle, the double-slit experiment, Bell's Theorem, and identical particles in QM/Sets. A more technical appendix explains the mathematics behind carrying some vector space structures between QM over ℂ and QM/Sets over ℤ₂.
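The bracket definition <S|T>=|S∩T| is concrete enough to sketch in a few lines of Python. The function names are ours, and the probability rule is the Laplace-Boole equiprobable-outcome rule the abstract alludes to: the probability of observing outcome u when "measuring" state S in the U-basis is |{u}∩S|/|S|.

```python
def bracket(S, T):
    """QM/sets bracket <S|T> = |S ∩ T|, taking values in the natural numbers."""
    return len(S & T)

def prob(u, S):
    """Probability of outcome u on measuring state S in the U-basis:
    the Laplace-Boole rule |{u} ∩ S| / |S|.  Note bracket(S, S) = |S|."""
    return bracket({u}, S) / bracket(S, S)

# A state S = {a, b, c} assigns each of its three outcomes probability 1/3,
# recovering classical equiprobable finite probability.
S = {'a', 'b', 'c'}
```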
Modern scientific cosmology pushes the boundaries of knowledge and the knowable. This is prompting questions on the nature of scientific knowledge. A central issue is what defines a 'good' model. When addressing global properties of the Universe or its initial state this becomes a particularly pressing issue. How to assess the probability of the Universe as a whole is empirically ambiguous, since we can examine only part of a single realisation of the system under investigation: at some point, data will run out. We review the basics of applying Bayesian statistical explanation to the Universe as a whole. We argue that a conventional Bayesian approach to model inference generally fails in such circumstances, and cannot resolve, e.g., the so-called 'measure problem' in inflationary cosmology. Implicit and non-empirical valuations inevitably enter model assessment in these cases. This undermines the possibility to perform Bayesian model comparison. One must therefore either stay silent, or pursue a more general form of systematic and rational model assessment. We outline a generalised axiological Bayesian model inference framework, based on mathematical lattices. This extends inference based on empirical data (evidence) to additionally consider the properties of model structure (elegance) and model possibility space (beneficence). We propose this as a natural and theoretically well-motivated framework for introducing an explicit, rational approach to theoretical model prejudice and inference beyond data.
To support my PhD thesis and the results of my grant research in 1999, I asked 1) the prominent chemist Antonín Holý, author of substances to treat hepatitis and HIV, about the indivisibility of art and science (published in the Slovak Narodna Obroda and the Czech blisty.cz); 2) the distinguished economist William Baumol about alternative activities (published in the Slovak Nove Slovo, the Czech Respekt and blisty.cz); 3) Nobel Laureate Clive Granger about the significance of economics (published in 2004 in the Czech weekly Tyden). The interviews were exhibited at The Ice House, Holland Park, W8 6LU, from 18 October to 3 November 2013.
What does it look like when a group of scientists set out to re-envision an entire field of biology in symbolic and formal terms? I analyze the founding and articulation of Numerical Taxonomy between 1950 and 1970, the period when it set out a radical new approach to classification and founded a tradition of mathematics in systematic biology. I argue that introducing mathematics in a comprehensive way also requires re-organizing the daily work of scientists in the field. Numerical taxonomists sought to establish a mathematical method for classification that was universal to every type of organism, and I argue this intrinsically implicated them in a qualitative re-organization of the work of all systematists. I also discuss how Numerical Taxonomy’s re-organization of practice became entrenched across systematic biology even as opposing schools produced their own competing mathematical methods. In this way, the structure of the work process became more fundamental than the methodological theories that motivated it.
Standard statistical measures of strength of association, although pioneered by Pearson deliberately to be acausal, are nowadays routinely used to measure causal efficacy. But their acausal origins have left them ill suited to this latter purpose. I distinguish between two different conceptions of causal efficacy, and argue that: 1) both conceptions can be useful; 2) the statistical measures only attempt to capture the first of them; 3) they are not fully successful even at this; and 4) an alternative definition more squarely based on causal thinking not only captures the second conception, it can also capture the first one better.
I give a pedagogical derivation of the Cramér-Rao bound, which places a lower bound on the variance of the estimators used in statistical point estimation and is commonly used to give numerical estimates of the systematic uncertainties in a measurement.
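For reference, the bound in its standard form (this is the textbook statement, not an excerpt from the paper): for an unbiased estimator $\hat\theta$ of a parameter $\theta$, computed from data $X$ with density $f(x;\theta)$ satisfying the usual regularity conditions,

```latex
\operatorname{Var}(\hat\theta) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) \;=\; \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\ln f(X;\theta)\right)^{\!2}\right]
\;=\; -\,\mathbb{E}\!\left[\frac{\partial^{2}}{\partial\theta^{2}}\ln f(X;\theta)\right],
```

where $I(\theta)$ is the Fisher information. An estimator attaining the bound is called efficient; for $n$ independent, identically distributed observations the Fisher information scales as $n I_{1}(\theta)$, so the attainable variance falls as $1/n$.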
Recently we proposed “quantum language” (or, “the linguistic Copenhagen interpretation of quantum mechanics”), which is characterized not only as the metaphysical and linguistic turn of quantum mechanics but also as the linguistic turn of Descartes=Kant epistemology. Namely, quantum language is the scientific final goal of dualistic idealism. It has great power to describe classical systems as well as quantum systems. Thus, we believe that quantum language is the language in which science is written. The purpose of this preprint is to examine and assert our belief (i.e., “proposition in quantum language” ⇔ “scientific proposition”). We believe that making such a language is one of the main themes of scientific philosophy.
We live in a beautiful and uniform world, a world where everything probable is possible. Since the epic theory of relativity, many scientists have embarked on a pursuit of astonishing theoretical fantasies, abandoning the prudent and logical path of scientific inquiry. The theory is a complex theoretical framework that facilitates the understanding of the universal laws of physics. It is based on the abstract concept of the space-time continuum fabric, and it is well suited for interpreting cosmic events. However, it is not well suited for handling small, local topics such as global warming, local energy issues, and common humanity matters generally. We now put forward many fancy theories and spend unimaginable effort to validate them, even when we are perhaps headed in the wrong direction. For example, in our times matters of climate change are debated by politicians based on economic considerations that are as illogical as they come. The venerable path of the scientific method, developed over centuries by prominent scientists and philosophers, has been willingly ignored and abandoned for various prejudiced purposes. Contact email: gondork@yahoo.com .
Spinoza’s geometrical approach to his masterwork, the Ethics, can be represented by a digraph, a mathematical object whose properties have been extensively studied. The paper describes a project that developed a series of computer programs to analyze the Ethics as a digraph. The paper presents a statistical analysis of the distribution of the elements of the Ethics. It applies a network statistic, betweenness, to analyze the relative importance to Spinoza’s argument of the individual propositions. The paper finds that a small percentage of the propositions greatly exceed the majority in this importance. The paper then describes two logical structures that appear in the Ethics and argues that they result in redundancy, in the sense that about ten percent of the propositions could have been eliminated. The appendices list these structures and describe how the resources of the study can be made available to readers.
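Betweenness of the kind applied here can be illustrated with a short brute-force sketch. The proposition labels and edges below are hypothetical, not Spinoza's actual dependency graph, and a production analysis would use an optimized algorithm such as Brandes's rather than this pairwise enumeration.

```python
from collections import deque
from itertools import permutations

def shortest_paths(graph, s, t):
    """All shortest directed paths from s to t in an adjacency-dict digraph."""
    dist = {s: 0}
    queue = deque([s])
    while queue:                          # BFS for shortest distances from s
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    if t not in dist:
        return []
    def extend(path):                     # grow paths along strictly increasing dist
        u = path[-1]
        if u == t:
            return [path]
        return [p for w in graph[u]
                if w in dist and dist[w] == dist[u] + 1 and dist[w] <= dist[t]
                for p in extend(path + [w])]
    return extend([s])

def betweenness(graph):
    """For each node, sum over ordered pairs (s, t) the fraction of
    shortest s-t paths that pass through it as an intermediate node."""
    score = {v: 0.0 for v in graph}
    for s, t in permutations(graph, 2):
        paths = shortest_paths(graph, s, t)
        if not paths:
            continue
        for v in graph:
            if v not in (s, t):
                score[v] += sum(v in p for p in paths) / len(paths)
    return score

# Hypothetical miniature dependency digraph: an axiom A1 feeds two propositions
# that both converge on P3, which alone supports P4 -- so P3 scores highest,
# the kind of bottleneck proposition the paper's analysis is after.
demo = {'A1': ['P1', 'P2'], 'P1': ['P3'], 'P2': ['P3'], 'P3': ['P4'], 'P4': []}
scores = betweenness(demo)
```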
One of the reasons why most of us feel puzzled about the problem of abortion is that we want, and do not want, to allow to the unborn child the rights that belong to adults and children. When we think of a baby about to be born it seems absurd to think that the next few minutes or even hours could make so radical a difference to its status; yet as we go back in the life of the fetus we are more and more reluctant to say that this is a human being and must be treated as such. No doubt this is the deepest source of our dilemma, but it is not the only one. For we are also confused about the general question of what we may and may not do where the interests of human beings conflict. We have strong intuitions about certain cases; saying, for instance, that it is all right to raise the level of education in our country, though statistics allow us to predict that a rise in the suicide rate will follow, while it is not all right to kill the feeble-minded to aid cancer research. It is not easy, however, to see the principles involved, and one way of throwing light on the abortion issue will be by setting up parallels involving adults or children once born. So we will be able to isolate the “equal rights” issue and should be able to make some advance...
Recently, the practice of deciding legal cases on purely statistical evidence has been widely criticised. Many feel uncomfortable with finding someone guilty on the basis of bare probabilities, even though the chance of error might be stupendously small. This is an important issue: with the rise of DNA profiling, courts are increasingly faced with purely statistical evidence. A prominent line of argument—endorsed by Blome-Tillmann 2017; Smith 2018; and Littlejohn 2018—rejects the use of such evidence by appealing to epistemic norms that apply to individual inquirers. My aim in this paper is to rehabilitate purely statistical evidence by arguing that, given the broader aims of legal systems, there are scenarios in which relying on such evidence is appropriate. Along the way I explain why popular arguments appealing to individual epistemic norms to reject legal reliance on bare statistics are unconvincing, by showing that courts and individuals face different epistemic predicaments (in short, individuals can hedge when confronted with statistical evidence, whilst legal tribunals cannot). I also correct some misconceptions about legal practice that have found their way into the recent literature.
Universities and funders in many countries have been using the Journal Impact Factor (JIF) as an indicator for research and grant assessment despite its controversial nature as a statistical representation of scientific quality. This study investigates how changes in the JIF over the years can affect its role in research evaluation and science management, using JIF data from annual Journal Citation Reports (JCR) to illustrate the changes. The descriptive statistics show an increase in the median JIF for the top 50 journals in the JCR, from 29.300 in 2017 to 33.162 in 2019. Moreover, on average, elite journal families have up to 27 journals in the top 50. In the group of journals with a JIF lower than 1, the proportion shrank by 14.53% in the 2015–2019 period. The findings suggest a potential ‘JIF bubble period’; science policymakers, universities, public fund managers, and other stakeholders should pay closer attention to JIF as a criterion for quality assessment to ensure more efficient science management.
After decades of intense debate over the old pessimistic induction (Laudan, 1977; Putnam, 1978), it has now become clear that it has at least the following four problems. First, it overlooks the fact that present theories are more successful than past theories. Second, it commits the fallacy of biased statistics. Third, it erroneously groups together past theories from different fields of science. Fourth, it misses the fact that some theoretical components of past theories were preserved. I argue that these four problems entitle us to construct what I call the grand pessimistic induction: since the old pessimistic induction has infinitely many hidden problems, the new pessimistic induction (Stanford, 2006) also has infinitely many hidden problems.
The study of cultural evolution has taken on an increasingly interdisciplinary and diverse approach in explicating phenomena of cultural transmission and adoption. Inspired by this computational movement, this study uses Bayesian network analysis, combining both the frequentist and the Hamiltonian Markov chain Monte Carlo (MCMC) approaches, to investigate the highly representative elements in the cultural evolution of a Vietnamese city’s architecture in the early 20th century. With a focus on the façade design of 68 old houses in Hanoi’s Old Quarter (based on 78 data lines extracted from 248 photos), the study argues that it is plausible to look at the aesthetics, architecture, and designs of the house façade to find traces of cultural evolution in Vietnam, which went through more than six decades of French colonization and centuries of sociocultural influence from China. The in-depth technical analysis, though refuting the presumed model of probabilistic dependency among the variables, yields several results, the most notable of which is the strong influence of Buddhism over the decorations of the house façade. In particular, in the top 5 networks with the best Bayesian Information Criterion (BIC) scores and p<0.05, the variable for decorations (DC) always has a direct probabilistic dependency on the variable B for Buddhism. The paper then checks the robustness of these models using the Hamiltonian MCMC method and finds that the posterior distributions of the models’ coefficients all satisfy the technical requirements. Finally, this study suggests integrating Bayesian statistics in the social sciences in general, and in the study of cultural evolution and architectural transformation in particular.
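The BIC-based model ranking the abstract describes can be sketched with the criterion's standard formula. The log-likelihood values below are invented placeholders for illustration; only the sample size of 78 data lines comes from the abstract.

```python
import math

# Sketch of BIC model comparison (lower BIC wins): the criterion trades
# goodness of fit against model complexity. Log-likelihoods here are
# made-up placeholders, not the paper's actual fits.
def bic(log_likelihood, k, n):
    """BIC = k * ln(n) - 2 * ln(L), with k free parameters and n observations."""
    return k * math.log(n) - 2.0 * log_likelihood

n = 78  # data lines in the study
candidates = {
    "DC depends on B": bic(-40.0, 2, n),  # decorations <- Buddhism edge
    "DC independent":  bic(-48.0, 1, n),  # no edge: fewer parameters, worse fit
}
best = min(candidates, key=candidates.get)
print(best)  # -> DC depends on B
```

With these assumed numbers the extra parameter's penalty (ln 78 ≈ 4.36) is far smaller than the fit it buys, so the dependency model wins, which is the kind of comparison behind the paper's top-5 network ranking.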
For centuries, the major story of enlightenment was that education is and should be the cornerstone of our society. We try to educate people to make them respectable members of society, something which we inherently relate to being "better persons", firmly believing that education makes humans less prone to evil. Today, modern research seems to validate that premise: statistics verify that more education results in less crime. But is this picture accurate, and does this mean anything regarding morality per se? This paper tries to examine the facts with a more critical eye and determine whether education is indeed a source of ethics or not. The results of the analysis show that what we understand as education is not only unrelated to ethics but can also be a factor resulting in the degradation of morality in humans. Rousseau's arguments against science and the arts are reinforced with arguments stemming from other great philosophers and from modern experience itself. Using modern statistical analysis regarding the correlation of crime and education, and through the examination of the modern regression in ethical issues, it becomes evident that education cannot and should not be a source of ethics. Knowing what is ethical is not as important as living an ethical life. The Pharisees were the first to be denied entrance to the kingdom of God. As Oscar Wilde once said, "Education is an admirable thing, but it is well to remember from time to time that nothing that is worth knowing can be taught".
Is there some general reason to expect organisms that have beliefs to have false beliefs? And after you observe that an organism occasionally occupies a given neural state that you think encodes a perceptual belief, how do you evaluate hypotheses about the semantic content that that state has, where some of those hypotheses attribute beliefs that are sometimes false while others attribute beliefs that are always true? To address the first of these questions, we discuss evolution by natural selection and show how organisms that are risk-prone in the beliefs they form can be fitter than organisms that are risk-free. To address the second question, we discuss a problem that is widely recognized in statistics – the problem of over-fitting – and one influential device for addressing that problem, the Akaike Information Criterion (AIC). We then use AIC to solve epistemological versions of the disjunction and distality problems, which are two key problems concerning what it is for a belief state to have one semantic content rather than another.
Cancer research is experiencing ‘paradigm instability’, since there are two rival theories of carcinogenesis confronting each other, namely the somatic mutation theory and the tissue organization field theory. Despite this theoretical uncertainty, a huge quantity of data is available thanks to the improvement of genome sequencing techniques. Some authors think that the development of new statistical tools will be able to overcome the lack of a shared theoretical perspective on cancer by amalgamating as much data as possible. We think instead that a deeper understanding of cancer can be achieved by means of more theoretical work, rather than by merely accumulating more data. To support our thesis, we introduce the analytic view of theory development, which rests on the concept of plausibility, and make clear in what sense plausibility and probability are distinct concepts. Then, the concept of plausibility is used to point out the ineliminable role played by the epistemic subject in the development of statistical tools and in the process of theory assessment. We then move to address a central issue in cancer research, namely the relevance of computational tools developed by bioinformaticists to detect driver mutations in the debate between the two main rival theories of carcinogenesis. Finally, we briefly extend our considerations on the role that plausibility plays in evidence amalgamation from cancer research to the more general issue of the divergences between frequentists and Bayesians in the philosophy of medicine and statistics. We argue that taking plausibility-based considerations into account can clarify some epistemological shortcomings that afflict both these perspectives.
We address the question whether there is an explanation for the fact that, as Fodor put it, the micro-level “converges on stable macro-level properties”, and whether there are lessons from this explanation for other issues in the vicinity. We argue that stability in large systems can be understood in terms of statistical limit theorems. In the thermodynamic limit of infinite system size N → ∞, systems will have strictly stable macroscopic properties, in the sense that transitions between different macroscopic phases of matter (if there are any) will not occur in finite time. Indeed, stability in this sense is a consequence of the absence of fluctuations, as (large) fluctuations would be required to induce such macroscopic transformations. These properties can be understood in terms of coarse-grained descriptions, and the statistical limit theorems for independent or weakly dependent random variables describing the behaviour of averages and the statistics of fluctuations in the large-system limit. We argue that RNG analyses applied to off-critical systems can provide a rationalization for the applicability of these limit theorems. Furthermore, we discuss some related issues, such as the role of the infinite-system idealization.
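The limit-theorem mechanism behind this stability can be illustrated with a small simulation: the spread of sample means of independent random variables shrinks roughly like 1/√N, so averages over large systems look macroscopically stable. The numbers of trials and system sizes below are arbitrary choices for the demonstration.

```python
import random
import statistics

# Law-of-large-numbers sketch: the spread of sample means shrinks as the
# number N of summed variables grows (roughly like 1/sqrt(N)), which is
# the statistical mechanism behind stable macro-level averages.
random.seed(0)

def spread_of_means(n, trials=200):
    """Standard deviation of the sample mean of n uniforms, across trials."""
    means = [statistics.fmean(random.random() for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

small_n, large_n = spread_of_means(10), spread_of_means(1000)
print(small_n > large_n)  # fluctuations are larger in the smaller system
```

The 100-fold increase in N cuts the fluctuation scale by roughly a factor of ten, a finite-size shadow of the N → ∞ limit in which macroscopic transitions driven by fluctuations do not occur.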
In his classic book The Foundations of Statistics, Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countably infinite unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems, without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
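The expected-utility ranking delivered by the representation theorem can be made concrete with a toy example. All probabilities, utilities, and state/consequence names below are invented for illustration; they are not drawn from Savage's text.

```python
# Toy sketch of Savage's setup: acts are functions from states to
# consequences, and a well-behaved preference relation ranks acts by
# expected utility. All numbers and labels here are invented.
p = {"rain": 0.3, "shine": 0.7}                   # subjective probabilities over states
u = {"wet": 0.0, "dry": 1.0, "encumbered": 0.8}   # utilities of consequences
acts = {
    "take umbrella":  {"rain": "dry", "shine": "encumbered"},
    "leave umbrella": {"rain": "wet", "shine": "dry"},
}

def expected_utility(act):
    """Sum of utility of each state's consequence, weighted by probability."""
    return sum(p[s] * u[act[s]] for s in p)

preferred = max(acts, key=lambda a: expected_utility(acts[a]))
print(preferred)  # -> take umbrella
```

A constant act in this notation would map every state to the same consequence, e.g. `{"rain": "dry", "shine": "dry"}`; the paper's second result concerns how few such acts the derivation of utilities really needs.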
In recent years, semiotics has become an innovative theoretical framework in mathematics education. The purpose of this article is to show that semiotics can be used to explain learning as a process of experimenting with and communicating about one's own representations of mathematical problems. As a paradigmatic example, we apply a Peircean semiotic framework to answer the question of how students learned the concept of "distribution" in a statistics course by "diagrammatic reasoning" and by developing "hypostatic abstractions", that is, by forming new mathematical objects which can be used as means for communication and further reasoning. Peirce's semiotic terminology is used as an alternative to notions such as modeling, symbolizing, and reification. We will show that it is a precise instrument of analysis with regard to the complexity of learning and of communication in the mathematics classroom.
Entropy is ubiquitous in physics, and it plays important roles in numerous other disciplines ranging from logic and statistics to biology and economics. However, a closer look reveals a complicated picture: entropy is defined differently in different contexts, and even within the same domain different notions of entropy are at work. Some of these are defined in terms of probabilities, others are not. The aim of this chapter is to arrive at an understanding of some of the most important notions of entropy and to clarify the relations between them. After setting the stage by introducing the thermodynamic entropy, we discuss notions of entropy in information theory, statistical mechanics, dynamical systems theory, and fractal geometry.
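One of the probability-based notions the chapter surveys, the Shannon entropy of information theory, can be computed directly from its definition; the two example distributions below are arbitrary illustrations.

```python
import math

# Shannon entropy: H(p) = -sum_i p_i * log2(p_i), measured in bits.
# It is maximal for a uniform distribution and lower the more the
# distribution is concentrated on one outcome.
def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair = shannon_entropy([0.5, 0.5])    # a fair coin carries 1 bit
biased = shannon_entropy([0.9, 0.1])  # a biased coin carries less
print(fair, biased)
```

The `if p > 0` guard implements the usual convention that 0 · log 0 = 0, so distributions with impossible outcomes are handled without error.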
The article begins by describing two longstanding problems associated with direct inference. One problem concerns the role of uninformative frequency statements in inferring probabilities by direct inference. A second problem concerns the role of frequency statements with gerrymandered reference classes. I show that past approaches to the problem associated with uninformative frequency statements yield the wrong conclusions in some cases. I propose a modification of Kyburg’s approach to the problem that yields the right conclusions. Past theories of direct inference have postponed treatment of the problem associated with gerrymandered reference classes by appealing to an unexplicated notion of projectability. I address the lacuna in past theories by introducing criteria for being a relevant statistic. The prescription that only relevant statistics play a role in direct inference corresponds to the sort of projectability constraints envisioned by past theories.
Virtues are dispositions that make their bearers admirable. Dispositions can be studied scientifically by systematically varying whether their alleged bearers are in (or take themselves to be in) the dispositions' eliciting conditions. In recent decades, empirically-minded philosophers looked to social and personality psychology to study the extent to which ordinary humans embody dispositions traditionally considered admirable in the Aristotelian tradition. This led some to conclude that virtues are not attainable ideals, and that we should focus our ethical reflection and efforts more on jerry-rigging our environments than on improving our characters. Most virtue ethicists resisted this reorientation. However, much of the scientific evidence on which the controversy was based has failed to replicate, raising the question of how much faith we should place in methodologically suspect studies. In this paper, I assess the state of the debate and recommend best practices for a renewed interdisciplinary investigation of virtues and vices in which philosophical expertise related to conceptualization and theorizing is essentially intertwined with scientific expertise related to operationalization, measurement, and statistics.
Teller argued that violations of Bell’s inequalities are to be explained by interpreting quantum entangled systems according to ‘relational holism’, that is, by postulating that they exhibit irreducible (‘inherent’) relations. Teller also suggested a possible application of this idea to quantum statistics. However, the basic proposal was not explained in detail nor has the additional idea about statistics been articulated in further work. In this article, I reconsider relational holism, amending it and spelling it out as appears necessary for a proper assessment, and application, of the position. †To contact the author, please write to: FB Philosophie-Zukunftskolleg, University of Konstanz, Universitätstraße 10, 78464 Konstanz, Germany; e-mail: matteo.morganti@uni-konstanz.de.
Research in ecology and evolutionary biology (evo-eco) often tries to emulate the “hard” sciences such as physics and chemistry, but to many of its practitioners feels more like the “soft” sciences of psychology and sociology. I argue that this schizophrenic attitude is the result of a lack of appreciation of the full consequences of the peculiarity of the evo-eco sciences as lying in between a-historical disciplines such as physics and completely historical ones such as paleontology. Furthermore, evo-eco researchers have gotten stuck on mathematically appealing but philosophically simplistic concepts such as null hypotheses and p-values defined according to the frequentist approach in statistics, with the consequence of having been unable to fully embrace the complexity and subtlety of the problems with which ecologists and evolutionary biologists deal. I review and discuss some literature in ecology, philosophy of science, and psychology to show that a more critical methodological attitude can be liberating for the evo-eco scientist and can lead to a more fecund and enjoyable practice of ecology and evolutionary biology. With this aim, I briefly cover concepts such as the method of multiple hypotheses, Bayesian analysis, and strong inference.