The spin-statistics connection is derived in a simple manner under the postulates that the original and the exchange wave functions are simply added, and that the azimuthal phase angle, which defines the orientation of the spin part of each single-particle spin-component eigenfunction in the plane normal to the spin-quantization axis, is exchanged along with the other parameters. The spin factor (−1)^{2s} belongs to the exchange wave function when this function is constructed so as to get the spinor ambiguity under control. This is achieved by effecting the exchange of the azimuthal angle by means of rotations and admitting only rotations in one sense. The procedure works in Galilean as well as in Lorentz-invariant quantum mechanics. Relativistic quantum field theory is not required.
The paper investigates the understanding of quantum indistinguishability after quantum information, in comparison with the “classical” quantum mechanics based on the separable complex Hilbert space. The two oppositions implicit in the concept of quantum indistinguishability, “distinguishability / indistinguishability” and “classical / quantum”, can be interpreted as two “missing” bits of classical information, which must be added after the teleportation of quantum information in order to restore the initial state unambiguously. That new understanding of quantum indistinguishability is linked to the distinction between classical (Maxwell-Boltzmann) and quantum (either Fermi-Dirac or Bose-Einstein) statistics. The latter can be generalized to classes of wave functions (“empty” qubits) and represented exhaustively in Hilbert arithmetic, and is thereby connectible to the foundations of mathematics, more precisely to the interrelations of propositional logic and set theory sharing the structure of Boolean algebra, and to two anti-isometric copies of Peano arithmetic.
Statistics play a critical role in biological and clinical research. To promote logically consistent representation and classification of statistical entities, we have developed the Ontology of Biological and Clinical Statistics (OBCS). OBCS extends the Ontology of Biomedical Investigations (OBI), an OBO Foundry ontology supported by some 20 communities. Currently, OBCS contains 686 terms, including 381 classes imported from OBI and 147 classes specific to OBCS. The goal of this paper is to present OBCS for community critique and to describe a number of use cases designed to illustrate its potential applications. The OBCS project and source code are available at http://obcs.googlecode.com.
The law views with suspicion statistical evidence, even evidence that is probabilistically on a par with direct, individual evidence that the law is in no way suspicious of. But it has proved remarkably hard either to justify this suspicion or to debunk it. In this paper, we connect the discussion of statistical evidence to broader epistemological discussions of similar phenomena. We highlight Sensitivity – the requirement that a belief be counterfactually sensitive to the truth in a specific way – as a way of epistemically explaining the legal suspicion towards statistical evidence. Still, we do not think of this as a satisfactory vindication of the reluctance to rely on statistical evidence. Knowledge – and Sensitivity, and indeed epistemology in general – are of little, if any, legal value. Instead, we tell an incentive-based story vindicating the suspicion towards statistical evidence. We conclude by showing that the epistemological story and the incentive-based story are closely and interestingly related, and by offering initial thoughts about the role of statistical evidence in morality.
In this paper epistemological, ontological and sociological questions concerning the statistical significance of sharp hypotheses in scientific research are investigated within the framework provided by Cognitive Constructivism and the FBST (Full Bayesian Significance Test). The constructivist framework is contrasted with the traditional epistemological settings for orthodox Bayesian and frequentist statistics provided by Decision Theory and Falsificationism.
Many who think that naked statistical evidence alone is inadequate for a trial verdict think that use of probability is the problem, and something other than probability – knowledge, full belief, causal relations – is the solution. I argue that the issue of whether naked statistical evidence is weak can be formulated within the probabilistic idiom, as the question whether likelihoods or only posterior probabilities should be taken into account in our judgment of a case. This question also identifies a major difference between the Process Reliabilist and Probabilistic Tracking views of knowledge and other concepts. Though both are externalist, probabilistic epistemic theories, Tracking does and Process Reliabilism does not put conditions on likelihoods. So Tracking implies that a naked statistic is not adequate evidence about an individual, and does not yield knowledge, whereas the Reliabilist thinks it gives justified belief and knowledge. Not only does the Tracking view imply that naked statistical evidence is insufficient for a verdict, but it gives us resources to explain why, in terms of risk and the special conditions of a trial.
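The likelihood/posterior contrast drawn in the abstract above can be sketched numerically. The figures below are hypothetical illustrations, not taken from the paper: an 80%-accurate eyewitness and an 80% base rate can yield the same posterior probability of guilt while carrying very different likelihood ratios.

```python
# Hedged sketch with hypothetical numbers: two bodies of evidence can
# support the same posterior probability of guilt while differing
# sharply in their likelihood structure.

def posterior(prior, p_e_given_guilt, p_e_given_innocence):
    """Bayes' rule: P(guilt | evidence)."""
    num = p_e_given_guilt * prior
    return num / (num + p_e_given_innocence * (1 - prior))

# Eyewitness who is 80% accurate, starting from a flat 50% prior:
# the testimony's likelihood ratio is 0.8 / 0.2 = 4.
lr_eyewitness = 0.8 / 0.2
p_eyewitness = posterior(0.5, 0.8, 0.2)

# Naked statistic: 80% of a reference class is guilty, so the prior is
# already 0.8; class membership holds for guilty and innocent members
# alike, so its likelihood ratio is 1 and the posterior stays at the prior.
lr_statistic = 1.0 / 1.0
p_statistic = posterior(0.8, 1.0, 1.0)

print(p_eyewitness, p_statistic)    # same posterior for both
print(lr_eyewitness, lr_statistic)  # very different likelihood ratios
```

On a view that puts conditions on likelihoods, such as Tracking, only the first kind of evidence discriminates between guilt and innocence, even though the posteriors coincide.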
In this article, we discuss the benefits of Bayesian statistics and how to utilize them in studies of moral education. To demonstrate concrete examples of the applications of Bayesian statistics to studies of moral education, we reanalyzed two previously collected data sets: one small data set collected from a moral educational intervention experiment, and one big data set from a large-scale Defining Issues Test-2 survey. The results suggest that Bayesian analysis of data sets collected from moral educational studies can provide additional useful statistical information, particularly that associated with the strength of evidence supporting alternative hypotheses, which has not been provided by the classical frequentist approach focusing on P-values. Finally, we introduce several practical guidelines pertaining to how to utilize Bayesian statistics, including the utilization of newly developed free statistical software, Jeffreys's Amazing Statistics Program, and thresholding based on Bayes Factors, to scholars in the field of moral education.
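As a hedged illustration of the Bayes-factor thresholding the abstract mentions: the numbers below are made up, and the BIC-based approximation is one common route to a Bayes factor, not necessarily the method used in the paper.

```python
import math

# Hedged sketch: approximate a Bayes factor from hypothetical BIC values.
# BF10 ≈ exp((BIC_null - BIC_alt) / 2) is a common BIC-based approximation;
# the fit statistics below are invented for illustration.

def bic_bayes_factor(bic_null, bic_alt):
    """Approximate Bayes factor favoring the alternative model."""
    return math.exp((bic_null - bic_alt) / 2.0)

# Hypothetical model fits from an intervention study:
bf10 = bic_bayes_factor(bic_null=210.0, bic_alt=204.0)  # exp(3), about 20

# Conventional Jeffreys-style reading: a Bayes factor above 10 is often
# described as strong evidence for the alternative hypothesis.
strong_evidence = bf10 > 10
```

Unlike a bare P-value, the Bayes factor directly quantifies how much better the alternative hypothesis predicts the data than the null, which is the "strength of evidence" information the abstract highlights.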
A hitherto neglected form of explanation is explored, especially its role in population genetics. “Statistically abstractive explanation” (SA explanation) mandates the suppression of factors probabilistically relevant to an explanandum when these factors are extraneous to the theoretical project being pursued. When these factors are suppressed, the explanandum is rendered uncertain. But this uncertainty traces to the theoretically constrained character of SA explanation, not to any real indeterminacy. Random genetic drift is an artifact of such uncertainty, and it is therefore wrong to reify it as a cause of evolution or as a process in its own right.
In order to perform certain actions – such as incarcerating a person or revoking parental rights – the state must establish certain facts to a particular standard of proof. These standards – such as preponderance of evidence and beyond reasonable doubt – are often interpreted as likelihoods or epistemic confidences. Many theorists construe them numerically; beyond reasonable doubt, for example, is often construed as 90 to 95% confidence in the guilt of the defendant. A family of influential cases suggests standards of proof should not be interpreted numerically. These ‘proof paradoxes’ illustrate that purely statistical evidence can warrant high credence in a disputed fact without satisfying the relevant legal standard. In this essay I evaluate three influential attempts to explain why merely statistical evidence cannot satisfy legal standards.
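The numerical reading of standards of proof described above can be restated in odds form. A minimal sketch (the 50% and 95% thresholds are the conventional readings the abstract cites; nothing below comes from the paper itself): a confidence threshold p corresponds to posterior odds p/(1−p), and Bayes' rule in odds form shows what likelihood ratio the evidence must supply.

```python
# Hedged sketch: converting confidence thresholds for legal standards
# of proof into posterior odds via odds = p / (1 - p).

def to_odds(p):
    """Posterior odds corresponding to a probability threshold."""
    return p / (1.0 - p)

# 'Preponderance of the evidence' read as just over 50% confidence:
# posterior odds just above 1:1.
preponderance = to_odds(0.5)

# 'Beyond reasonable doubt' read as 95% confidence: odds of 19:1.
brd = to_odds(0.95)

# Odds-form Bayes: posterior_odds = prior_odds * likelihood_ratio.
# Starting from even (1:1) prior odds, the evidence must carry a
# likelihood ratio of about 19 to reach the 95% reading.
required_lr = brd / 1.0
```

The proof paradoxes turn precisely on cases where purely statistical evidence clears such a numerical threshold yet intuitively fails the legal standard.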
Recent attempts to resolve the Paradox of the Gatecrasher rest on a now familiar distinction between individual and bare statistical evidence. This paper investigates two such approaches, the causal approach to individual evidence and a recently influential (and award-winning) modal account that explicates individual evidence in terms of Nozick's notion of sensitivity. This paper offers counterexamples to both approaches, explicates a problem concerning necessary truths for the sensitivity account, and argues that either view is implausibly committed to the impossibility of no-fault wrongful convictions. The paper concludes that the distinction between individual and bare statistical evidence cannot be maintained in terms of causation or sensitivity. We have to look elsewhere for a solution to the Paradox of the Gatecrasher.
Martin Smith has recently proposed, in this journal, a novel and intriguing approach to puzzles and paradoxes in evidence law arising from the evidential standard of the Preponderance of the Evidence. According to Smith, the relation of normic support provides us with an elegant solution to those puzzles. In this paper I develop a counterexample to Smith’s approach and argue that normic support can neither account for our reluctance to base affirmative verdicts on bare statistical evidence nor resolve the pertinent paradoxes. Normic support is, as a consequence, not a successful epistemic anti-luck condition.
One finds, in Maxwell's writings on thermodynamics and statistical physics, a conception of the nature of these subjects that differs in interesting ways from the way that they are usually conceived. In particular, though – in agreement with the currently accepted view – Maxwell maintains that the second law of thermodynamics, as originally conceived, cannot be strictly true, the replacement he proposes is different from the version accepted by most physicists today. The modification of the second law accepted by most physicists is a probabilistic one: although statistical fluctuations will result in occasional spontaneous differences in temperature or pressure, there is no way to predictably and reliably harness these to produce large violations of the original version of the second law. Maxwell advocates a version of the second law that is strictly weaker: the validity of even this probabilistic version is limited in scope to situations in which we are dealing with large numbers of molecules en masse and have no ability to manipulate individual molecules. Connected with this is his treatment of the thermodynamic concepts of heat, work, and entropy: on the Maxwellian view, these are concepts that must be relativized to the means we have available for gathering information about and manipulating physical systems. The Maxwellian view is one that deserves serious consideration in discussions of the foundations of statistical mechanics. It has relevance for the project of recovering thermodynamics from statistical mechanics because, in such a project, it matters which version of the second law we are trying to recover.
Many philosophers have argued that statistical evidence regarding group characteristics (particularly stereotypical ones) can create normative conflicts between the requirements of epistemic rationality and our moral obligations to each other. In a recent paper, Johnson-King and Babic argue that such conflicts can usually be avoided: what ordinary morality requires, they argue, epistemic rationality permits. In this paper, we show that as data gets large, Johnson-King and Babic’s approach becomes less plausible. More constructively, we build on their project and develop a generalized model of reasoning about stereotypes under which one can indeed avoid normative conflicts, even in a big data world, when data contain some noise. In doing so, we also articulate a general approach to rational belief updating for noisy data.
Recently, the practice of deciding legal cases on purely statistical evidence has been widely criticised. Many feel uncomfortable with finding someone guilty on the basis of bare probabilities, even though the chance of error might be stupendously small. This is an important issue: with the rise of DNA profiling, courts are increasingly faced with purely statistical evidence. A prominent line of argument—endorsed by Blome-Tillmann 2017; Smith 2018; and Littlejohn 2018—rejects the use of such evidence by appealing to epistemic norms that apply to individual inquirers. My aim in this paper is to rehabilitate purely statistical evidence by arguing that, given the broader aims of legal systems, there are scenarios in which relying on such evidence is appropriate. Along the way I explain why popular arguments appealing to individual epistemic norms to reject legal reliance on bare statistics are unconvincing, by showing that courts and individuals face different epistemic predicaments (in short, individuals can hedge when confronted with statistical evidence, whilst legal tribunals cannot). I also correct some misconceptions about legal practice that have found their way into the recent literature.
The conspicuous similarities between interpretive strategies in classical statistical mechanics and in quantum mechanics may be grounded on their employment of common implementations of probability. The objective probabilities which represent the underlying stochasticity of these theories can be naturally associated with three of their common formal features: initial conditions, dynamics, and observables. Various well-known interpretations of the two theories line up with particular choices among these three ways of implementing probability. This perspective has significant application to debates on primitive ontology and to the quantum measurement problem.
Statistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem. Terms in OBCS, including ‘data collection’, ‘data transformation in statistics’, ‘data visualization’, ‘statistical data analysis’, and ‘drawing a conclusion based on data’, cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. By representing statistics-related terms and their relations in a rigorous fashion, OBCS facilitates standard data analysis and integration, and supports reproducible biological and clinical research.
Why does classical equilibrium statistical mechanics work? Malament and Zabell (1980) noticed that, for ergodic dynamical systems, the unique absolutely continuous invariant probability measure is the microcanonical. Earman and Rédei (1996) replied that systems of interest are very probably not ergodic, so that absolutely continuous invariant probability measures very distant from the microcanonical exist. In response I define the generalized properties of epsilon-ergodicity and epsilon-continuity, I review computational evidence indicating that systems of interest are epsilon-ergodic, I adapt Malament and Zabell’s defense of absolute continuity to support epsilon-continuity, and I prove that, for epsilon-ergodic systems, every epsilon-continuous invariant probability measure is very close to the microcanonical.
According to the Rational Threshold View, a rational agent believes p if and only if her credence in p is equal to or greater than a certain threshold. One of the most serious challenges for this view is the problem of statistical evidence: statistical evidence is often not sufficient to make an outright belief rational, no matter how probable the target proposition is given such evidence. This indicates that rational belief is not as sensitive to statistical evidence as rational credence. The aim of this paper is twofold. First, we argue that, in addition to playing a decisive role in rationalizing outright belief, non-statistical evidence also plays a preponderant role in rationalizing credence. More precisely, when both types of evidence are present in a context, non-statistical evidence should receive a heavier weight than statistical evidence in determining rational credence. Second, based on this result, we argue that a modified version of the Rational Threshold View can avoid the problem of statistical evidence. We conclude by suggesting a possible explanation of the varying sensitivity to different types of evidence for belief and credence based on the respective aims of these attitudes.
A question, long discussed by legal scholars, has recently provoked a considerable amount of philosophical attention: ‘Is it ever appropriate to base a legal verdict on statistical evidence alone?’ Many philosophers who have considered this question reject legal reliance on bare statistics, even when the odds of error are extremely low. This paper develops a puzzle for the dominant theories concerning why we should eschew bare statistics. Namely, there seem to be compelling scenarios in which there are multiple sources of incriminating statistical evidence. As we conjoin together different types of statistical evidence, it becomes increasingly incredible to suppose that a positive verdict would be impermissible. I suggest that none of the dominant views in the literature can easily accommodate such cases, and close by offering a diagnosis of my own.
This paper offers a new angle on the common idea that the process of science does not support epistemic diversity. Under minimal assumptions on the nature of journal editing, we prove that editorial procedures, even when impartial in themselves, disadvantage less prominent research programs. This purely statistical bias in article selection further skews existing differences in the success rate and hence attractiveness of research programs, and exacerbates the reputation difference between the programs. After a discussion of the modeling assumptions, the paper ends with a number of recommendations that may help promote scientific diversity through editorial decision making.
A popular informal argument suggests that statistics about the preponderance of criminal involvement among particular demographic groups partially justify others in making defensive mistakes against members of the group. One could worry that evidence-relative accounts of moral rights vindicate this argument. After constructing the strongest form of this objection, I offer several replies: most demographic statistics face an unmet challenge from reference class problems; even those that meet it fail to ground non-negligible conditional probabilities; even if they did, they would introduce new costs likely to cancel out any justificatory contribution of the statistic; and even if they didn’t, demographic facts are the wrong sort to make a moral difference to agents’ negative rights. I conclude that the popular argument should be rejected, and that evidence-relative theories do not have the worrisome implication.
Over the years, mathematics and statistics have become increasingly important in the social sciences. A look at history quickly confirms this claim. At the beginning of the 20th century most theories in the social sciences were formulated in qualitative terms, while quantitative methods did not play a substantial role in their formulation and establishment. Moreover, many practitioners considered mathematical methods to be inappropriate and simply unsuited to foster our understanding of the social domain. Notably, the famous Methodenstreit also concerned the role of mathematics in the social sciences. Here, mathematics was considered to be the method of the natural sciences from which the social sciences had to be separated during the period of maturation of these disciplines. All this changed by the end of the century. By then, mathematical, and especially statistical, methods were standardly used, and their value in the social sciences became relatively uncontested. The use of mathematical and statistical methods is now ubiquitous: almost all social sciences rely on statistical methods to analyze data and form hypotheses, and almost all of them use (to a greater or lesser extent) a range of mathematical methods to help us understand the social world. A further indication of the increasing importance of mathematical and statistical methods in the social sciences is the formation of new subdisciplines, and the establishment of specialized journals and societies. Indeed, subdisciplines such as Mathematical Psychology and Mathematical Sociology emerged, and corresponding journals such as The Journal of Mathematical Psychology (since 1964), The Journal of Mathematical Sociology (since 1976), Mathematical Social Sciences (since 1980), as well as the online journals Journal of Artificial Societies and Social Simulation (since 1998) and Mathematical Anthropology and Cultural Theory (since 2000), were established.
What is more, societies such as the Society for Mathematical Psychology (since 1976) and the Mathematical Sociology Section of the American Sociological Association (since 1996) were founded. Similar developments can be observed in other countries. The mathematization of economics set in somewhat earlier (Vazquez 1995; Weintraub 2002). However, the use of mathematical methods in economics started booming only in the second half of the last century (Debreu 1991). Contemporary economics is dominated by the mathematical approach, although a certain style of doing economics has come increasingly under attack in the last decade or so. Recent developments in behavioral economics and experimental economics can also be understood as a reaction against the dominance (and limitations) of an overly mathematical approach to economics. There are similar debates in other social sciences. It is, however, important to stress that problems of one method (such as axiomatization or the use of set theory) can hardly be taken as a sign of the bankruptcy of mathematical methods in the social sciences tout court. This chapter surveys mathematical and statistical methods used in the social sciences and discusses some of the philosophical questions they raise. It is divided into two parts. Sections 1 and 2 are devoted to mathematical methods, and Sections 3 to 7 to statistical methods. As several other chapters in this handbook provide detailed accounts of various mathematical methods, our remarks about the latter will be rather short and general. Statistical methods, on the other hand, will be discussed in depth.
This paper puts forward the hypothesis that the distinctive features of quantum statistics are exclusively determined by the nature of the properties it describes. In particular, all statistically relevant properties of identical quantum particles in many-particle systems are conjectured to be irreducible, ‘inherent’ properties only belonging to the whole system. This allows one to explain quantum statistics without endorsing the ‘Received View’ that particles are non-individuals, or postulating that quantum systems obey peculiar probability distributions, or assuming that there are primitive restrictions on the range of states accessible to such systems. With this, the need for an unambiguously metaphysical explanation of certain physical facts is acknowledged and satisfied.
Agent-causal libertarians maintain we are irreducible agents who, by acting, settle matters that aren’t already settled. This implies that the neural matters underlying the exercise of our agency don’t conform to deterministic laws, but it does not appear to exclude the possibility that they conform to statistical laws. However, Pereboom (Noûs 29:21–45, 1995; Living without free will, Cambridge University Press, Cambridge, 2001; in: Nadelhoffer (ed) The future of punishment, Oxford University Press, New York, 2013) has argued that, if these neural matters conform to either statistical or deterministic physical laws, the complete conformity of an irreducible agent’s settling of matters with what should be expected given the applicable laws would involve coincidences too wild to be credible. Here, I show that Pereboom’s argument depends on the assumption that, at times, the antecedent probability that certain behavior will occur applies on each of a number of occasions and is incapable of changing as a result of what one does from one occasion to the next. There is, however, no evidence that this assumption is true. The upshot is that the wild-coincidence objection is an empirical objection lacking empirical support. Thus, it isn’t a compelling argument against agent-causal libertarianism.
This chapter will review selected aspects of the terrain of discussions about probabilities in statistical mechanics (with no pretensions to exhaustiveness, though the major issues will be touched upon), and will argue for a number of claims. None of the claims to be defended is entirely original, but all deserve emphasis. The first, and least controversial, is that probabilistic notions are needed to make sense of statistical mechanics. The reason for this is the same reason that convinced Maxwell, Gibbs, and Boltzmann that probabilities would be needed, namely, that the second law of thermodynamics, which in its original formulation says that certain processes are impossible, must, on the kinetic theory, be replaced by a weaker formulation according to which what the original version deems impossible is merely improbable. Second is that we ought not take the standard measures invoked in equilibrium statistical mechanics as giving, in any sense, the correct probabilities about microstates of the system. We can settle for a much weaker claim: that the probabilities for outcomes of experiments yielded by the standard distributions are effectively the same as those yielded by any distribution that we should take as a representing probabilities over microstates. Lastly, (and most controversially): in asking about the status of probabilities in statistical mechanics, the familiar dichotomy between epistemic probabilities (credences, or degrees of belief) and ontic (physical) probabilities is insufficient; the concept of probability that is best suited to the needs of statistical mechanics is one that combines epistemic and physical considerations.
In some cases, there appears to be an asymmetry in the evidential value of statistical and more individualized evidence. For example, while I may accept that Alex is guilty based on eyewitness testimony that is 80% likely to be accurate, it does not seem permissible to do so based on the fact that 80% of a group that Alex is a member of are guilty. In this paper I suggest that rather than reflecting a deep defect in statistical evidence, this asymmetry might arise from a general constraint on rational inquiry. Plausibly the degree of evidential support needed to justify taking a proposition to be true depends on the stakes of error. While relying on statistical evidence plausibly raises the stakes by introducing new kinds of risk to members of the reference class, paradigmatically ‘individualized’ evidence – evidence tracing back to Alex’s voluntary behavior – can lower the stakes. The net result explains the apparent evidential asymmetry without positing a deep difference in the brute justificatory power of different types of evidence.
The paper takes a closer look at the role of knowledge and evidence in legal theory. In particular, the paper examines a puzzle arising from the evidential standard Preponderance of the Evidence and its application in civil procedure. Legal scholars have argued since at least the 1940s that the rule of the Preponderance of the Evidence gives rise to a puzzle concerning the role of statistical evidence in judicial proceedings, sometimes referred to as the Problem of Bare Statistical Evidence. While this puzzle has led to the development of a multitude of accounts and approaches in the legal literature, I argue here that the problem can be resolved fairly straightforwardly within a knowledge-first framework.
In this paper I propose an interpretation of classical statistical mechanics that centers on taking seriously the idea that probability measures represent complete states of statistical mechanical systems. I show how this leads naturally to the idea that the stochasticity of statistical mechanics is associated directly with the observables of the theory rather than with the microstates (as traditional accounts would have it). The usual assumption that microstates are representationally significant in the theory is therefore dispensable, a consequence which suggests interesting possibilities for developing non-equilibrium statistical mechanics and investigating inter-theoretic answers to the foundational questions of statistical mechanics.
Statistical evidence – say, that 95% of your co-workers badmouth each other – can never render resenting your colleague appropriate, in the way that other evidence (say, the testimony of a reliable friend) can. The problem of statistical resentment is to explain why. We put the problem of statistical resentment in several wider contexts: the context of the problem of statistical evidence in legal theory; the epistemological context, with problems like the lottery paradox for knowledge, epistemic impurism and doxastic wrongdoing; and the context of a wider set of examples of responses and attitudes that seem not to be appropriately groundable in statistical evidence. Regrettably, we do not come up with a fully general, fully adequate, fully unified account of all the phenomena discussed. But we give reasons to believe that no such account is forthcoming, and we sketch a somewhat messier account that may be the best that can be had here.
This paper, “Cultural Statistics, the Media and the Planning and Development of Calabar, Nigeria”, stresses the need for the use of Cultural Statistics and effective media communication in the planning and development of Calabar, the Cross River State capital. This position is anchored on the fact that in virtually every sphere of life there can be no development without planning, and no proper planning without accurate data or information. Cultural Statistics and effective use of the media thus become imperative in the planning and development of Calabar, especially as the Cross River State capital is fast becoming an internationally recognized cultural city, due largely to its annual Calabar Festival and Carnival. Among other things, the paper argues that Cultural Statistics and the use of the media will reposition the city of Calabar, not only in terms of development but also in marketing and branding, taking into consideration the new economy and globalization, which involve technology, creativity, human capital and capacity for innovation. The paper concludes that although some effort has been made by the Cross River State government in gathering and publishing cultural information in brochures and other periodicals, a deliberate and conscientious effort must still be made by the relevant government authorities to collect, collate, analyze and interpret cultural data in Calabar and project it in the media, with a view to enhancing the planning and development of the state capital and truly making it a tourism and cultural haven in Nigeria and on the African continent.
Suppose that the word of an eyewitness makes it 80% probable that A committed a crime, and that B is drawn from a population in which the incidence rate of that crime is 80%. Many philosophers and legal theorists have held that if this is our only evidence against those parties, then (i) we may be justified in finding against A but not against B; but (ii) doing so incurs a loss in the accuracy of our findings. This paper argues against (ii). It argues that accuracy considerations can motivate taking different attitudes towards individualized and statistical evidence even across cases where they generate the same probability that the defendant is guilty.
This paper has three main objectives: (a) Discuss the formal analogy between some important symmetry-invariance arguments used in physics, probability and statistics. Specifically, we will focus on Noether’s theorem in physics, the maximum entropy principle in probability theory, and de Finetti-type theorems in Bayesian statistics; (b) Discuss the epistemological and ontological implications of these theorems, as they are interpreted in physics and statistics. Specifically, we will focus on the positivist (in physics) or subjective (in statistics) interpretations vs. objective interpretations that are suggested by symmetry and invariance arguments; (c) Introduce the cognitive constructivism epistemological framework as a solution that overcomes the realism-subjectivism dilemma and its pitfalls. The work of the physicist and philosopher Max Born will be particularly important in our discussion.
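The maximum entropy principle invoked in objective (a) can be illustrated with a minimal numerical check: in the unconstrained case, the uniform distribution maximizes Shannon entropy over a finite outcome space. The sketch below assumes nothing from the paper beyond that elementary fact.

```python
import math
import random

# Toy check of the maximum entropy principle: among probability
# distributions over n outcomes with no constraint beyond normalization,
# the uniform distribution maximizes Shannon entropy H(p) = -sum p log p.

def entropy(p):
    """Shannon entropy (natural log) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

n = 4
uniform = [1 / n] * n

rng = random.Random(42)
for _ in range(1000):
    w = [rng.random() for _ in range(n)]
    total = sum(w)
    p = [x / total for x in w]          # a random normalized distribution
    assert entropy(p) <= entropy(uniform) + 1e-12

# For the uniform distribution, H equals log(n), the maximal value.
```

Under a moment constraint (e.g., a fixed mean), the same principle instead singles out exponential-family distributions, which is where the analogy to de Finetti-type theorems becomes substantive.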
The Generality Problem is widely recognized to be a serious problem for reliabilist theories of justification. James R. Beebe's Statistical Solution is one of only a handful of attempted solutions that has garnered serious attention in the literature. In their recent response to Beebe, Julien Dutant and Erik J. Olsson successfully refute Beebe's Statistical Solution. This paper presents a New Statistical Solution that countenances Dutant and Olsson's objections, dodges the serious problems that trouble rival solutions, and retains the theoretical virtues that made Beebe's solution so attractive in the first place. There indeed exists a principled, rigorous, conceptually sparse, and plausible solution to the Generality Problem: it is the New Statistical Solution.
There are two motivations commonly ascribed to historical actors for taking up statistics: to reduce complicated data to a mean value (e.g., Quetelet), and to take account of diversity (e.g., Galton). Different motivations will, it is assumed, lead to different methodological decisions in the practice of the statistical sciences. Karl Pearson and W. F. R. Weldon are generally seen as following directly in Galton’s footsteps. I argue for two related theses in light of this standard interpretation, based on a reading of several sources in which Weldon, independently of Pearson, reflects on his own motivations. First, while Pearson does approach statistics from this "Galtonian" perspective, he is, consistent with his positivist philosophy of science, utilizing statistics to simplify the highly variable data of biology. Weldon, on the other hand, is brought to statistics by a rich empiricism and a desire to preserve the diversity of biological data. Second, we have here a counterexample to the claim that divergence in motivation will lead to a corresponding separation in methodology. Pearson and Weldon, despite embracing biometry for different reasons, settled on precisely the same set of statistical tools for the investigation of evolution.
In a quantum universe with a strong arrow of time, it is standard to postulate that the initial wave function started in a particular macrostate---the special low-entropy macrostate selected by the Past Hypothesis. Moreover, there is an additional postulate about statistical mechanical probabilities according to which the initial wave function is a "typical" choice in the macrostate. Together, they support a probabilistic version of the Second Law of Thermodynamics: typical initial wave functions will increase in entropy. Hence, there are two sources of randomness in such a universe: the quantum-mechanical probabilities of the Born rule and the statistical mechanical probabilities of the Statistical Postulate. I propose a new way to understand time's arrow in a quantum universe. It is based on what I call the Thermodynamic Theories of Quantum Mechanics. According to this perspective, there is a natural choice for the initial quantum state of the universe, which is given not by a wave function but by a density matrix. The density matrix plays a microscopic role: it appears in the fundamental dynamical equations of those theories. The density matrix also plays a macroscopic / thermodynamic role: it is exactly the projection operator onto the Past Hypothesis subspace. Thus, given an initial subspace, we obtain a unique choice of the initial density matrix. I call this property "the conditional uniqueness" of the initial quantum state. The conditional uniqueness provides a new and general strategy to eliminate statistical mechanical probabilities in the fundamental physical theories, by which we can reduce the two sources of randomness to only the quantum mechanical one. I also explore the idea of an absolutely unique initial quantum state, in a way that might realize Penrose's idea of a strongly deterministic universe.
The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference, with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative strength of evidence for hypotheses. We suggest that rather than mere statistical reform, what is needed is a better understanding of the different modes of statistical inference and of how statistical inference relates to scientific inference.
Over the past fifteen years there has been a considerable amount of debate concerning what theoretical population dynamic models tell us about the nature of natural selection and drift. On the causal interpretation, these models describe the causes of population change. On the statistical interpretation, the models specify statistical parameters that explain, predict, and quantify changes in population structure, without identifying the causes of those changes. Selection and drift are part of a statistical description of population change; they are not discrete, apportionable causes. Our objective here is to provide a definitive statement of the statistical position, so as to allay some confusions in the current literature. We outline four commitments that are central to statisticalism. They are: 1. Natural selection is a higher-order effect; 2. Trait fitness is primitive; 3. Modern Synthesis (MS)-models are substrate neutral; 4. MS-selection and drift are model-relative.
The article focuses on the prosecutor's fallacy and the interrogator's fallacy, two kinds of reasoning in inferring a suspect's guilt. The prosecutor's fallacy conflates two conditional probabilities, an error encouraged by the prosecutor's inclination to establish strong evidence that will indict the defendant. The article provides a comprehensive discussion of Gerd Gigerenzer's discourse on a criminal case in Germany, explaining the perils of the prosecutor's fallacy in his application of probability to practical problems. It also discusses the interrogator's fallacy, introduced by Robert A. J. Matthews as the erroneous assumption that confessional evidence can never reduce the probability of guilt.
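The structure of the prosecutor's fallacy can be made concrete with a small Bayes' theorem calculation. The numbers below are hypothetical illustrations, not figures from Gigerenzer's case: the fallacy lies in equating P(match | innocent) with P(innocent | match), ignoring the prior base rate of potential perpetrators.

```python
# Hypothetical example: a forensic match occurs with probability
# 1 in 10,000 if the suspect is innocent, and with certainty if guilty.

def posterior_guilt(p_match_given_innocent, p_match_given_guilty, prior_guilt):
    """Bayes' theorem: P(guilty | match)."""
    p_match = (p_match_given_guilty * prior_guilt
               + p_match_given_innocent * (1 - prior_guilt))
    return p_match_given_guilty * prior_guilt / p_match

# Suppose one guilty person among 100,000 possible perpetrators.
prior = 1 / 100_000
p = posterior_guilt(1 / 10_000, 1.0, prior)
# The match raises the probability of guilt only to about 9%,
# even though P(match | innocent) is a mere 0.01%: reading the
# 0.01% as "99.99% probability of guilt" is the fallacy.
```

The same function also shows why the error is tempting: with a strong prior (say 0.5 from independent evidence), the posterior really is close to 1, so the fallacy and the correct calculation happen to agree.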
The major competing statistical paradigms share a common remarkable but unremarked thread: in many of their inferential applications, different probability interpretations are combined. How this plays out in different theories of inference depends on the type of question asked. We distinguish four question types: confirmation, evidence, decision, and prediction. We show that Bayesian confirmation theory mixes what are intuitively “subjective” and “objective” interpretations of probability, whereas the likelihood-based account of evidence melds three conceptions of what constitutes an “objective” probability.
The exponential growth of social data in both volume and complexity has increasingly exposed many of the shortcomings of the conventional frequentist approach to statistics, and the scientific community has called for more careful use of the approach and its inferences. Meanwhile, the alternative method, Bayesian statistics, still faces considerable barriers to more widespread application. The bayesvl R package is an open program, designed for implementing Bayesian modeling and analysis using the Stan language's No-U-Turn Sampler (NUTS). The package combines the ability to construct Bayesian network models using directed acyclic graphs (DAGs), the Markov chain Monte Carlo (MCMC) simulation technique, and the graphic capability of the ggplot2 package. As a result, it can improve the user experience and intuitive understanding when constructing and analyzing Bayesian network models. A case example is offered to illustrate the usefulness of the package for Big Data analytics and cognitive computing.
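bayesvl itself is an R package, but the MCMC simulation technique it builds on can be sketched in a few lines. The following toy random-walk Metropolis sampler (in Python, targeting an assumed standard normal density) illustrates the general accept/reject idea that NUTS refines with gradient information; it is a sketch of the technique, not of the package's API.

```python
import math
import random

# Minimal random-walk Metropolis sampler for a standard normal target.
# NUTS, the sampler used by Stan (and hence by bayesvl), is a far more
# efficient Hamiltonian variant of this same accept/reject scheme.

def log_target(x):
    return -0.5 * x * x  # log-density of N(0, 1), up to a constant

def metropolis(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

draws = metropolis(20_000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
# mean and var should approximate 0 and 1, the moments of the target.
```

In practice one would discard an initial warm-up segment and monitor convergence diagnostics, which is part of what packages like bayesvl automate.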
This is a trend study of school size, location and enrolment figures of junior secondary schools in Akwa Ibom State of Nigeria covering 2008–2016, with implications for sustainable development. The study followed the ex-post facto research design. The study was a census, hence the entire population of 227 public secondary schools was used. Secondary quantitative data, obtained using the “School Size, Location and Enrolment Figures Checklist (SSLEFC)”, were analysed using descriptive statistics, while line graphs and bar charts were used to illustrate the statistical trends. The hypotheses were tested using the independent t-test. Findings showed that higher rates of enrolment were recorded in large and urban schools than in small and rural schools respectively. The mean differences in the enrolment trend between urban and rural schools were statistically significant. It was concluded that there is an upward trend in enrolment in all the schools from 2008–2013 and a downward trend from 2015–2016. Based on this conclusion, implications were discussed, and it was recommended, among others, that infrastructural provisions and an adequate supply of qualified personnel be allocated evenly to urban and rural schools, to discourage rural-urban migration, promote active rural participation in education, and foster sustainability in schools.
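The independent t-test used for the hypotheses can be sketched as follows. The enrolment figures below are hypothetical, not the study's data, and the statistic computed is the Welch-type variant, which does not assume equal variances.

```python
import math

# Welch's independent-samples t statistic, comparing mean enrolment
# between two groups of schools. All figures are hypothetical.

def welch_t(a, b):
    """t statistic for the difference of means of samples a and b."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

urban = [820, 760, 905, 870, 790]   # hypothetical urban enrolments
rural = [310, 280, 350, 330, 295]   # hypothetical rural enrolments
t = welch_t(urban, rural)
# A large |t| means the observed mean difference would be very
# unlikely under the null hypothesis of equal group means.
```

With real data one would compare t against the t distribution with Welch-adjusted degrees of freedom to obtain a p-value; library routines handle that step.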
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The first of these articles provides a brief sketch of statistical mechanics, and discusses the indifference approach to statistical mechanical probabilities.
This article analyzes the role of entropy in Bayesian statistics, focusing on its use as a tool for detection, recognition and validation of eigen-solutions. “Objects as eigen-solutions” is a key metaphor of the cognitive constructivism epistemological framework developed by the philosopher Heinz von Foerster. Special attention is given to some objections to the concepts of probability, statistics and randomization posed by George Spencer-Brown, a figure of great influence in the field of radical constructivism.
There are three main traditional accounts of vagueness: one takes it as a genuinely metaphysical phenomenon, one takes it as a phenomenon of ignorance, and one takes it as a linguistic or conceptual phenomenon. In this paper I first very briefly present these views, especially the epistemicist and supervaluationist strategies, and briefly point to some well-known problems that the views carry. I then examine a 'statistical epistemicist' account of vagueness that is designed to avoid precisely these problems: a view that accounts for the phenomenon of vagueness as arising from our linguistic practices, while insisting that meaning supervenes on use, and that our use of vague terms does yield sharp and precise meanings, of which we are ignorant, thus allowing bivalence to hold.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The second of these articles discusses the regularity approach to statistical mechanical probabilities, and describes some areas where further research is needed.
If the goal of statistical analysis is to form justified credences based on data, then an account of the foundations of statistics should explain what makes credences justified. I present a new account called statistical reliabilism (SR), on which credences resulting from a statistical analysis are justified (relative to alternatives) when they are in a sense closest, on average, to the corresponding objective probabilities. This places (SR) in the same vein as recent work on the reliabilist justification of credences generally [Dunn 2015; Tang 2016; Pettigrew 2018], but it has the advantage of being action-guiding, in that knowledge of objective probabilities is not required to identify the best-justified available credences. The price is that justification is relativized to a specific class of candidate objective probabilities, and to a particular choice of reliability measure. On the other hand, I show that (SR) has welcome implications for frequentist-Bayesian reconciliation, including a clarification of the use of priors; complementarity between probabilist and fallibilist [Gelman and Shalizi 2013; Mayo 2018] approaches towards statistical foundations; and the justification of credences outside of formal statistical settings. Regarding the latter, I demonstrate how the insights of statistics may be used to amend other reliabilist accounts so as to render them action-guiding. I close by discussing new possible research directions for epistemologists and statisticians (and other applied users of probability) raised by the (SR) framework.
The history of statistics is filled with many controversies, in which the prime focus has been the difference in the “interpretation of probability” between Frequentist and Bayesian theories. Many philosophical arguments have been elaborated to examine the problems of both theories based on this dichotomized view of statistics, including the well-known stopping-rule problem and the catch-all hypothesis problem. However, there are also several “hybrid” approaches in theory, practice, and philosophical analysis. This poses many fundamental questions. This paper reviews three cases and argues that the interpretation problem of probability is insufficient to begin a philosophical analysis of the current issues in the field of statistics. A novel viewpoint is proposed to examine the relationship between the stopping-rule problem and the catch-all hypothesis problem.
The purpose of this paper is twofold: (1) to highlight the widely ignored but fundamental problem of ‘superpopulations’ for the use of inferential statistics in development studies. We do not dwell on this problem, however, as it has been sufficiently discussed in older papers by statisticians that social scientists have nevertheless long chosen to ignore; the interested reader can turn to those for greater detail. (2) To show that descriptive statistics both avoid the problem of superpopulations and can be a powerful tool when used correctly. A few examples are provided. The paper ends with consideration of some of the reasons we think lie behind the adherence to methods that are known to be inapplicable to many of the types of questions asked in development studies yet are still widely practiced.
This short article is based on my special lecture entitled "Aristotle and the Philosophy of Education" at Tamagawa University Research Institute in Tokyo on September 19, 2015, transcribed from a recording of the spoken lecture with some corrections. The lecture delivered on that day consists of two parts: referring to historical research and a statistical survey, the first half focuses on uncovering the fact that the philosophy of education has been slighted in both Japanese and Western academia, and that this holds for both Aristotelian and contemporary studies in the philosophy of education; the second half describes two possible research projects, namely, a classical-philosophical and interpretative study of Aristotle's philosophy of education, and a contemporary-philosophical and Aristotelian study of the philosophy of education. Some of these topics have been published but others have not. This short article concerns the first half of the lecture and measures the gap between philosophy and the philosophy of education in Japan through a statistical survey of the largest competitive research funding database in Japan: KAKEN.