Results for 'Frequentist'

34 found
  1. Reviving Frequentism.Mario Hubert - 2021 - Synthese 199:5255–5584.
    Philosophers now seem to agree that frequentism is an untenable strategy to explain the meaning of probabilities. Nevertheless, I want to revive frequentism, and I will do so by grounding probabilities on typicality in the same way as the thermodynamic arrow of time can be grounded on typicality within statistical mechanics. This account, which I will call typicality frequentism, will evade the major criticisms raised against previous forms of frequentism. In this theory, probabilities arise within a physical theory from statistical (...)
    6 citations
  2. Bayesian versus frequentist clinical trials.David Teira - 2011 - In Fred Gifford (ed.), Philosophy of Medicine. Boston: Elsevier. pp. 255-297.
    I will open the first part of this paper by trying to elucidate the frequentist foundations of RCTs. I will then present a number of methodological objections against the viability of these inferential principles in the conduct of actual clinical trials. In the following section, I will explore the main ethical issues in frequentist trials, namely those related to randomisation and the use of stopping rules. In the final section of the first part, I will analyse why RCTs (...)
    4 citations
  3. A Frequentist Solution to Lindley & Phillips’ Stopping Rule Problem in Ecological Realm.Adam P. Kubiak - 2014 - Zagadnienia Naukoznawstwa 50 (200):135-145.
    In this paper I provide a frequentist philosophical-methodological solution for the stopping rule problem presented by Lindley & Phillips in 1976, which is set in the ecological realm of testing koalas’ sex ratio. I deliver criteria for discerning a stopping rule, a body of evidence, and a model that are epistemically more appropriate for testing the hypothesis of the case studied, by appealing to a physical notion of probability and by analyzing the content of possible formulations of evidence, assumptions of models and (...)
    1 citation
  4. Improving Bayesian statistics understanding in the age of Big Data with the bayesvl R package.Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Manh-Toan Ho, Manh-Tung Ho & Peter Mantello - 2020 - Software Impacts 4 (1):100016.
    The exponential growth of social data both in volume and complexity has increasingly exposed many of the shortcomings of the conventional frequentist approach to statistics. The scientific community has called for careful usage of the approach and its inferences. Meanwhile, the alternative method, Bayesian statistics, still faces considerable barriers toward more widespread application. The bayesvl R package is an open-source program designed for implementing Bayesian modeling and analysis using the Stan language’s no-U-turn sampler (NUTS). The package combines the (...)
    6 citations
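    The entry above describes a NUTS-based Bayesian workflow. As a rough illustration of that kind of workflow (not of the bayesvl API itself), here is a minimal Python sketch using PyMC, whose default MCMC sampler is also the no-U-turn sampler; the toy data, priors, and variable names are hypothetical.
      import numpy as np
      import pymc as pm

      # Hypothetical stand-in for a social-data outcome variable.
      rng = np.random.default_rng(42)
      y = rng.normal(loc=1.0, scale=2.0, size=50)

      with pm.Model():
          # Weakly informative priors; the choices are illustrative only.
          mu = pm.Normal("mu", mu=0.0, sigma=10.0)
          sigma = pm.HalfNormal("sigma", sigma=10.0)
          pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
          # pm.sample() uses the no-U-turn sampler (NUTS) by default.
          idata = pm.sample(draws=1000, tune=1000, chains=2, random_seed=42)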
  5. Cultural evolution in Vietnam’s early 20th century: a Bayesian networks analysis of Hanoi Franco-Chinese house designs.Quan-Hoang Vuong, Quang-Khiem Bui, Viet-Phuong La, Thu-Trang Vuong, Manh-Toan Ho, Hong-Kong T. Nguyen, Hong-Ngoc Nguyen, Kien-Cuong P. Nghiem & Manh-Tung Ho - 2019 - Social Sciences and Humanities Open 1 (1):100001.
    The study of cultural evolution has taken on an increasingly interdisciplinary and diverse approach in explicating phenomena of cultural transmission and adoption. Inspired by this computational movement, this study uses Bayesian networks analysis, combining both the frequentist and the Hamiltonian Markov chain Monte Carlo (MCMC) approaches, to investigate the highly representative elements in the cultural evolution of a Vietnamese city’s architecture in the early 20th century. With a focus on the façade design of 68 old houses in Hanoi’s Old (...)
    11 citations
  6. Why do we need to employ Bayesian statistics and how can we employ it in studies of moral education?: With practical guidelines to use JASP for educators and researchers.Hyemin Han - 2018 - Journal of Moral Education 47 (4):519-537.
    In this article, we discuss the benefits of Bayesian statistics and how to utilize them in studies of moral education. To demonstrate concrete examples of the applications of Bayesian statistics to studies of moral education, we reanalyzed two data sets previously collected: one small data set collected from a moral educational intervention experiment, and one big data set from a large-scale Defining Issues Test-2 survey. The results suggest that Bayesian analysis of data sets collected from moral educational studies can provide (...)
    8 citations
  7. Cognitive Constructivism, Eigen-Solutions, and Sharp Statistical Hypotheses.Julio Michael Stern - 2007 - Cybernetics and Human Knowing 14 (1):9-36.
    In this paper epistemological, ontological and sociological questions concerning the statistical significance of sharp hypotheses in scientific research are investigated within the framework provided by Cognitive Constructivism and the FBST (Full Bayesian Significance Test). The constructivist framework is contrasted with the traditional epistemological settings for orthodox Bayesian and frequentist statistics provided by Decision Theory and Falsificationism.
    16 citations
  8. Evidence amalgamation, plausibility, and cancer research.Marta Bertolaso & Fabio Sterpetti - 2019 - Synthese 196 (8):3279-3317.
    Cancer research is experiencing ‘paradigm instability’, since there are two rival theories of carcinogenesis confronting each other, namely the somatic mutation theory and the tissue organization field theory. Despite this theoretical uncertainty, a huge quantity of data is available thanks to the improvement of genome sequencing techniques. Some authors think that the development of new statistical tools will be able to overcome the lack of a shared theoretical perspective on cancer by amalgamating as many data as possible. We think instead (...)
    6 citations
  9. Exploring the association between character strengths and moral functioning.Hyemin Han, Kelsie J. Dawson, David I. Walker, Nghi Nguyen & Youn-Jeng Choi - 2023 - Ethics and Behavior 33 (4):286-303.
    We explored the relationship between 24 character strengths measured by the Global Assessment of Character Strengths (GACS), which was revised from the original VIA instrument, and moral functioning comprising postconventional moral reasoning, empathic traits and moral identity. Bayesian Model Averaging (BMA) was employed to explore the best models, which were more parsimonious than full regression models estimated through frequentist regression, predicting moral functioning indicators with the 24 candidate character strength predictors. Our exploration was conducted with a dataset collected from (...)
    2 citations
  10. Resurrecting logical probability.James Franklin - 2001 - Erkenntnis 55 (2):277-305.
    The logical interpretation of probability, or "objective Bayesianism'' – the theory that (some) probabilities are strictly logical degrees of partial implication – is defended. The main argument against it is that it requires the assignment of prior probabilities, and that any attempt to determine them by symmetry via a "principle of insufficient reason" inevitably leads to paradox. Three replies are advanced: that priors are imprecise or of little weight, so that disagreement about them does not matter, within limits; that it (...)
    27 citations
  11. Hypothetical Frequencies as Approximations.Jer Steeger - 2024 - Erkenntnis 89 (4):1295-1325.
    Hájek (Erkenntnis 70(2):211–235, 2009) argues that probabilities cannot be the limits of relative frequencies in counterfactual infinite sequences. I argue for a different understanding of these limits, drawing on Norton’s (Philos Sci 79(2):207–232, 2012) distinction between approximations (inexact descriptions of a target) and idealizations (separate models that bear analogies to the target). Then, I adapt Hájek’s arguments to this new context. These arguments provide excellent reasons not to use hypothetical frequencies as idealizations, but no reason not to use them as (...)
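    For readers unfamiliar with the target of Hájek's argument, the hypothetical-frequency analysis under discussion identifies a probability with a limiting relative frequency in an infinite sequence of trials (a standard textbook gloss, not a quotation from the paper):
      \[
        \mathrm{P}(A) \;=\; \lim_{n \to \infty} \frac{\#\{\, i \le n : \omega_i \in A \,\}}{n},
      \]
    where ω1, ω2, … is a hypothetical infinite extension of the actual trials. Hájek's worries turn on the facts that such limits may fail to exist and that they depend on the ordering of the sequence; Steeger's proposal is to treat these limits as approximations rather than idealizations.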
  12. Environmental genotoxicity evaluation: Bayesian approach for a mixture statistical model.Julio Michael Stern, Angela Maria de Souza Bueno, Carlos Alberto de Braganca Pereira & Maria Nazareth Rabello-Gay - 2002 - Stochastic Environmental Research and Risk Assessment 16:267–278.
    The data analyzed in this paper are part of the results described in Bueno et al. (2000). Three cytogenetic endpoints were analyzed in three populations of a species of wild rodent – Akodon montensis – living in an industrial, an agricultural, and a preservation area at the Itajaí Valley, State of Santa Catarina, Brazil. The polychromatic/normochromatic ratio, the mitotic index, and the frequency of micronucleated polychromatic erythrocytes were used in an attempt to establish a genotoxic profile of each area. It (...)
  13. Bayesian Test of Significance for Conditional Independence: The Multinomial Model.Julio Michael Stern, Pablo de Morais Andrade & Carlos Alberto de Braganca Pereira - 2014 - Entropy 16:1376-1395.
    Conditional independence tests have received special attention lately in machine learning and computational intelligence related literature as an important indicator of the relationship among the variables used by their models. In the field of probabilistic graphical models, which includes Bayesian network models, conditional independence tests are especially important for the task of learning the probabilistic graphical model structure from data. In this paper, we propose the full Bayesian significance test for tests of conditional independence for discrete datasets. The full Bayesian (...)
    1 citation
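    For context on the test family used here, the Full Bayesian Significance Test measures evidence in favor of a sharp hypothesis via the posterior mass of its tangential set. A schematic statement in its simplest (flat-reference) form, not the paper's specific multinomial construction, is:
      \[
        s^{*} = \sup_{\theta \in \Theta_H} p(\theta \mid x), \qquad
        T = \{\theta : p(\theta \mid x) > s^{*}\}, \qquad
        \mathrm{ev}(H) = 1 - \int_{T} p(\theta \mid x)\, d\theta ,
      \]
    so that ev(H) is high when little posterior mass sits at points more probable than the best point inside the sharp hypothesis H.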
  14. Constructive Verification, Empirical Induction, and Falibilist Deduction: A Threefold Contrast.Julio Michael Stern - 2011 - Information 2 (4):635-650.
    This article explores some open questions related to the problem of verification of theories in the context of empirical sciences by contrasting three epistemological frameworks. Each of these epistemological frameworks is based on a corresponding central metaphor, namely: (a) Neo-empiricism and the gambling metaphor; (b) Popperian falsificationism and the scientific tribunal metaphor; (c) Cognitive constructivism and the object as eigen-solution metaphor. Each of these epistemological frameworks has also historically co-evolved with a certain statistical theory and method for testing (...)
    12 citations
  15. Statistical Inference and the Replication Crisis.Lincoln J. Colling & Dénes Szűcs - 2018 - Review of Philosophy and Psychology 12 (1):121-147.
    The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference, with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative strength of (...)
    1 citation
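    The contrast drawn in the abstract can be stated compactly (a standard gloss, not the authors' own formalism): frequentist inference controls long-run error rates, while Bayesian inference quantifies relative evidential strength through the Bayes factor.
      \[
        \Pr(\text{reject } H_0 \mid H_0 \text{ true}) \le \alpha
        \qquad \text{versus} \qquad
        \mathrm{BF}_{10} = \frac{p(D \mid H_1)}{p(D \mid H_0)} .
      \]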
  16. Unit Roots: Bayesian Significance Test.Julio Michael Stern, Marcio Alves Diniz & Carlos Alberto de Braganca Pereira - 2011 - Communications in Statistics 40 (23):4200-4213.
    The unit root problem plays a central role in empirical applications in the time series econometric literature. However, significance tests developed under the frequentist tradition present various conceptual problems that jeopardize the power of these tests, especially for small samples. Bayesian alternatives, although having interesting interpretations and being precisely defined, experience problems because the hypothesis of interest in this case is sharp or precise. The Bayesian significance test used in this article, for the unit (...)
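    For readers outside econometrics, the unit-root hypothesis in its simplest AR(1) form (a standard gloss; the article's own setup may be richer) is
      \[
        y_t = \rho\, y_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim \text{i.i.d.}(0, \sigma^2), \qquad H_0 : \rho = 1 ,
      \]
    and it is the sharpness of H0, a single point ρ = 1, that the abstract identifies as the source of difficulty for conventional Bayesian tests.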
  17. Statistical Significance Testing in Economics.William Peden & Jan Sprenger - 2022 - In Conrad Heilmann & Julian Reiss (eds.), Routledge Handbook of Philosophy of Economics. Routledge.
    The origins of testing scientific models with statistical techniques go back to 18th century mathematics. However, the modern theory of statistical testing was primarily developed through the work of Sir R.A. Fisher, Jerzy Neyman, and Egon Pearson in the inter-war period. Some of Fisher's papers on testing were published in economics journals (Fisher, 1923, 1935) and exerted a notable influence on the discipline. The development of econometrics and the rise of quantitative economic models in the mid-20th century made statistical significance (...)
  18. Testing the Independence of Poisson Variates under the Holgate Bivariate Distribution: The Power of a New Evidence Test.Julio Michael Stern & Shelemyahu Zacks - 2002 - Statistics and Probability Letters 60:313-320.
    A new Evidence Test is applied to the problem of testing whether two Poisson random variables are dependent. The dependence structure is that of Holgate’s bivariate distribution. This bivariate distribution depends on three parameters, 0 < θ1, θ2 < ∞ and 0 < θ3 < min(θ1, θ2). The Evidence Test was originally developed as a Bayesian test, but in the present paper it is compared to the best known test of the hypothesis of independence in a frequentist framework. It (...)
    10 citations
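    The dependence structure named in the abstract can be made concrete with a short simulation. The sketch below uses the trivariate-reduction construction usually associated with Holgate's bivariate Poisson distribution and assumes, as the stated constraint 0 < θ3 < min(θ1, θ2) suggests, that θ1 and θ2 are the marginal means and θ3 the covariance; the function name and defaults are hypothetical.
      import numpy as np

      def holgate_bivariate_poisson(theta1, theta2, theta3, size, seed=None):
          """Draw (X, Y) via trivariate reduction: X = U1 + U3, Y = U2 + U3 with
          independent Poisson parts, so X ~ Poisson(theta1), Y ~ Poisson(theta2)
          and Cov(X, Y) = theta3; theta3 = 0 corresponds to independence."""
          assert 0 <= theta3 < min(theta1, theta2)
          rng = np.random.default_rng(seed)
          u1 = rng.poisson(theta1 - theta3, size)
          u2 = rng.poisson(theta2 - theta3, size)
          u3 = rng.poisson(theta3, size)
          return u1 + u3, u2 + u3

      x, y = holgate_bivariate_poisson(3.0, 2.0, 0.5, size=100_000, seed=1)
      print(np.cov(x, y)[0, 1])  # sample covariance, close to theta3 = 0.5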
  19. Are ecology and evolutionary biology “soft” sciences?Massimo Pigliucci - 2002 - Annales Zoologici Fennici 39:87-98.
    Research in ecology and evolutionary biology (evo-eco) often tries to emulate the “hard” sciences such as physics and chemistry, but to many of its practitioners it feels more like the “soft” sciences of psychology and sociology. I argue that this schizophrenic attitude is the result of a lack of appreciation of the full consequences of the peculiarity of the evo-eco sciences as lying in between a-historical disciplines such as physics and completely historical ones such as paleontology. Furthermore, evo-eco researchers have gotten stuck (...)
    4 citations
  20. Karl Pearson and the Logic of Science: Renouncing Causal Understanding (the Bride) and Inverted Spinozism.Julio Michael Stern - 2018 - South American Journal of Logic 4 (1):219-252.
    Karl Pearson is the leading figure of twentieth-century statistics. He and his co-workers crafted the core of the theory, methods and language of frequentist or classical statistics – the prevalent inductive logic of contemporary science. However, before working in statistics, K. Pearson had other interests in life, namely, in this order, philosophy, physics, and biological heredity. Key concepts of his philosophical and epistemological system of anti-Spinozism (a form of transcendental idealism) are carried over to his subsequent works on (...)
    1 citation
  21. D. H. Mellor, The Matter of Chance.Luke Glynn - 2011 - British Journal for the Philosophy of Science 62 (4):899-906.
    Though almost forty years have elapsed since its first publication, it is a testament to the philosophical acumen of its author that 'The Matter of Chance' contains much that is of continued interest to the philosopher of science. Mellor advances a sophisticated propensity theory of chance, arguing that this theory makes better sense than its rivals (in particular subjectivist, frequentist, logical and classical theories) of ‘what professional usage shows to be thought true of chance’ (p. xi) – in particular (...)
    1 citation
  22. Reliable credence and the foundations of statistics.Jesse Clifton - manuscript
    If the goal of statistical analysis is to form justified credences based on data, then an account of the foundations of statistics should explain what makes credences justified. I present a new account called statistical reliabilism (SR), on which credences resulting from a statistical analysis are justified (relative to alternatives) when they are in a sense closest, on average, to the corresponding objective probabilities. This places (SR) in the same vein as recent work on the reliabilist justification of credences generally (...)
  23. ‘The Innocent v The Fickle Few’: How Jurors Understand Random-Match-Probabilities and Judges’ Directions when Reasoning about DNA and Refuting Evidence.Michelle B. Cowley-Cunningham - 2017 - Journal of Forensic Science and Criminal Investigation 3 (5):April/May 2017.
    DNA evidence is one of the most significant modern advances in the search for truth since cross-examination, but its format as a random-match-probability makes it difficult for people to assign an appropriate probative value (Koehler, 2001). While Frequentist theories propose that the presentation of the match as a frequency rather than a probability facilitates more accurate assessment (e.g., Slovic et al., 2000), Exemplar-Cueing Theory predicts that the subjective weight assigned may be affected by the frequency or probability (...)
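    A purely illustrative calculation (mine, not the paper's) of why format matters: a random-match-probability expressed as a frequency invites thinking about how many coincidental matches a reference population would contain,
      \[
        \mathbb{E}[\text{coincidental matches}] = N \times \mathrm{RMP}, \qquad \text{e.g. } 1{,}000{,}000 \times \tfrac{1}{1{,}000} = 1{,}000 ,
      \]
    whereas the same value presented as the probability 0.001 cues no such exemplars, which is the kind of contrast Exemplar-Cueing Theory examines.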
  24. (2 other versions)Probability and Randomness.Antony Eagle - 2016 - In Alan Hájek & Christopher Hitchcock (eds.), The Oxford Handbook of Probability and Philosophy. Oxford: Oxford University Press. pp. 440-459.
    Early work on the frequency theory of probability made extensive use of the notion of randomness, conceived of as a property possessed by disorderly collections of outcomes. Growing out of this work, a rich mathematical literature on algorithmic randomness and Kolmogorov complexity developed through the twentieth century, but largely lost contact with the philosophical literature on physical probability. The present chapter begins with a clarification of the notions of randomness and probability, conceiving of the former as a property of a (...)
    5 citations
  25. Novelty versus Replicability: Virtues and Vices in the Reward System of Science.Felipe Romero - 2017 - Philosophy of Science 84 (5):1031-1043.
    The reward system of science is the priority rule. The first scientist making a new discovery is rewarded with prestige, while runners-up get little or nothing. Michael Strevens, following Philip Kitcher, defends this reward system, arguing that it incentivizes an efficient division of cognitive labor. I argue that this assessment depends on strong implicit assumptions about the replicability of findings. I question these assumptions on the basis of metascientific evidence and argue that the priority rule systematically discourages replication. My (...)
    23 citations
  26. Probability and Informed Consent.Nir Ben-Moshe, Benjamin A. Levinstein & Jonathan Livengood - 2023 - Theoretical Medicine and Bioethics 44 (6):545-566.
    In this paper, we illustrate some serious difficulties involved in conveying information about uncertain risks and securing informed consent for risky interventions in a clinical setting. We argue that in order to secure informed consent for a medical intervention, physicians often need to do more than report a bare, numerical probability value. When probabilities are given, securing informed consent generally requires communicating how probability expressions are to be interpreted and communicating something about the quality and quantity of the evidence for (...)
  27. Revisiting the two predominant statistical problems: the stopping-rule problem and the catch-all hypothesis problem.Yusaku Ohkubo - 2021 - Annals of the Japan Association for Philosophy of Science 30:23-41.
    The history of statistics is filled with many controversies, in which the prime focus has been the difference in the “interpretation of probability” between Frequentist and Bayesian theories. Many philosophical arguments have been elaborated to examine the problems of both theories based on this dichotomized view of statistics, including the well-known stopping-rule problem and the catch-all hypothesis problem. However, there are also several “hybrid” approaches in theory, practice, and philosophical analysis. This poses many fundamental questions. This paper (...)
  28. Classical versus Bayesian Statistics.Eric Johannesson - 2020 - Philosophy of Science 87 (2):302-318.
    In statistics, there are two main paradigms: classical and Bayesian statistics. The purpose of this article is to investigate the extent to which classicists and Bayesians can agree. My conclusion is that, in certain situations, they cannot. The upshot is that, if we assume that the classicist is not allowed to have a higher degree of belief in a null hypothesis after he has rejected it than before, then he has to either have trivial or incoherent credences to begin with (...)
    2 citations
  29. A more principled use of the p-value? Not so fast: a critique of Colquhoun’s argument.Ognjen Arandjelovic - 2019 - Royal Society Open Science 6 (5):181519.
    The usefulness of the statistic known as the p-value, as a means of quantifying the strength of evidence for the presence of an effect from empirical data, has long been questioned in the statistical community. In recent years there has been a notable increase in the awareness of both fundamental and practical limitations of the statistic within the target research fields, and especially biomedicine. In this article I analyse the recently published article which, in summary, argues that with a better (...)
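    For reference, the statistic at issue is standardly defined as follows (a textbook gloss, not the article's own formulation): for a test statistic T with observed value t_obs,
      \[
        p = \Pr\left( T \ge t_{\mathrm{obs}} \mid H_0 \right),
      \]
    i.e. the probability, under the null hypothesis, of data at least as extreme as those observed; it is not the probability that the null hypothesis is true, and much of the debate the article engages concerns what further inferences this quantity can license.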
  30. Hypothesis Testing, “Dutch Book” Arguments, and Risk.Daniel Malinsky - 2015 - Philosophy of Science 82 (5):917-929.
    “Dutch Book” arguments and references to gambling theorems are typical in the debate between Bayesians and scientists committed to “classical” statistical methods. These arguments have rarely convinced non-Bayesian scientists to abandon certain conventional practices, partially because many scientists feel that gambling theorems have little relevance to their research activities. In other words, scientists “don’t bet.” This article examines one attempt, by Schervish, Seidenfeld, and Kadane, to progress beyond such apparent stalemates by connecting “Dutch Book”–type mathematical results with principles actually endorsed (...)
    2 citations
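    A minimal worked example of the kind of result invoked in this debate (a textbook Dutch Book, not the theorem of Schervish, Seidenfeld, and Kadane discussed in the article): an agent with incoherent credences P(A) = 0.6 and P(¬A) = 0.6 treats $0.60 as a fair price for a $1 bet on each proposition, so a bookie can sell both bets, yielding
      \[
        \underbrace{0.60 + 0.60}_{\text{prices paid}} \;-\; \underbrace{1.00}_{\text{guaranteed payout}} \;=\; 0.20 \quad \text{sure loss},
      \]
    since exactly one of the bets pays off whatever happens; coherence (here, P(A) + P(¬A) = 1) is what rules such books out.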
  31. Practical foundations for probability: Prediction methods and calibration.Benedikt Höltgen - manuscript
    Although probabilistic statements are ubiquitous, probability is still poorly understood. This shows itself, for example, in the mere stipulation of policies like expected utility maximisation and in disagreements about the correct interpretation of probability. In this work, we provide an account of probabilistic predictions that explains when, how, and why they can be useful for decision-making. We demonstrate that a calibration criterion on finite sets of predictions allows one to anticipate the distribution of utilities that a given policy will yield. (...)
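    As a rough illustration of what a calibration criterion on a finite set of predictions can look like (the binning scheme and names below are hypothetical, not the paper's own criterion), one can group forecasts and compare each group's average forecast with the observed frequency of the event:
      import numpy as np

      def calibration_table(probs, outcomes, n_bins=10):
          """Group forecasts into probability bins and report, per bin, the mean
          forecast, the empirical frequency of the event, and the bin count."""
          probs = np.asarray(probs, dtype=float)
          outcomes = np.asarray(outcomes, dtype=float)
          edges = np.linspace(0.0, 1.0, n_bins + 1)
          rows = []
          for lo, hi in zip(edges[:-1], edges[1:]):
              in_bin = (probs >= lo) & (probs < hi) if hi < 1.0 else (probs >= lo)
              if in_bin.any():
                  rows.append((lo, hi, probs[in_bin].mean(),
                               outcomes[in_bin].mean(), int(in_bin.sum())))
          return rows  # well-calibrated forecasts: mean forecast ~ observed frequency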
  32. Why Inferential Statistics are Inappropriate for Development Studies and How the Same Data Can be Better Used.Clint Ballinger - manuscript
    The purpose of this paper is twofold: 1) to highlight the widely ignored but fundamental problem of ‘superpopulations’ for the use of inferential statistics in development studies. We do not dwell on this problem, however, as it has been sufficiently discussed in older papers by statisticians that social scientists have nevertheless long chosen to ignore; the interested reader can turn to those for greater detail. 2) to show that descriptive statistics both avoid the problem of superpopulations and (...)
  33. Statistical significance under low power: A Gettier case?Daniel Dunleavy - 2020 - Journal of Brief Ideas.
    A brief idea on statistics and epistemology.
  34. Initial Conditions as Exogenous Factors in Spatial Explanation.Clint Ballinger - 2008 - Dissertation, University of Cambridge
    This dissertation shows how initial conditions play a special role in the explanation of contingent and irregular outcomes, including, in the form of geographic context, the special case of uneven development in the social sciences. The dissertation develops a general theory of this role, recognizes its empirical limitations in the social sciences, and considers how it might be applied to the question of uneven development. The primary purpose of the dissertation is to identify and correct theoretical problems in the study (...)
    2 citations