Results for 'Frequentist statistics'

1000+ found
  1. Reviving Frequentism.Mario Hubert - 2021 - Synthese 199:5255–5284.
    Philosophers now seem to agree that frequentism is an untenable strategy to explain the meaning of probabilities. Nevertheless, I want to revive frequentism, and I will do so by grounding probabilities on typicality in the same way as the thermodynamic arrow of time can be grounded on typicality within statistical mechanics. This account, which I will call typicality frequentism, will evade the major criticisms raised against previous forms of frequentism. In this theory, probabilities arise within a physical theory from statistical (...)
    3 citations
  2. A Frequentist Solution to Lindley & Phillips’ Stopping Rule Problem in Ecological Realm.Adam P. Kubiak - 2014 - Zagadnienia Naukoznawstwa 50 (200):135-145.
    In this paper I provide a frequentist philosophical-methodological solution for the stopping rule problem presented by Lindley & Phillips in 1976, which is set in the ecological realm of testing koalas’ sex ratio. I deliver criteria for discerning a stopping rule, evidence, and a model that are epistemically more appropriate for testing the hypothesis of the case studied, by appealing to a physical notion of probability and by analyzing the content of possible formulations of evidence, assumptions of models and (...)
    1 citation
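    Lindley & Phillips’ stopping rule problem, which Kubiak’s paper addresses, can be made concrete with their classic numbers: the same observed data (9 successes and 3 failures) receive different one-sided p-values depending on whether the design fixed n = 12 in advance (binomial sampling) or stopped at the 3rd failure (negative binomial sampling). A minimal sketch of that standard calculation (an illustration, not code from the paper):

```python
from math import comb

# Observed data: 9 successes and 3 failures; H0: p = 0.5, tested one-sided.

# Binomial design: n = 12 fixed in advance; p-value = P(X >= 9 | n=12, p=0.5).
p_binomial = sum(comb(12, k) * 0.5**12 for k in range(9, 13))

# Negative binomial design: sample until the 3rd failure; the number of
# successes K before stopping has P(K = k) = C(k+2, 2) * 0.5**(k+3).
# p-value = P(K >= 9) = 1 - P(K <= 8).
p_negbinomial = 1 - sum(comb(k + 2, 2) * 0.5**(k + 3) for k in range(9))

print(f"binomial p-value:          {p_binomial:.4f}")     # 0.0730
print(f"negative binomial p-value: {p_negbinomial:.4f}")  # 0.0327
```

    Identical data, different stopping rules, and the p-values land on opposite sides of 0.05; this is the tension with the likelihood principle that the stopping-rule literature turns on.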
  3. Statistical Inference and the Replication Crisis.Lincoln J. Colling & Dénes Szűcs - 2018 - Review of Philosophy and Psychology 12 (1):121-147.
    The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference, with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative (...)
    1 citation
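    The contrast Colling and Szűcs draw can be seen in a toy example (my illustration, not the paper’s): after 9 heads in 12 tosses, the frequentist reports a one-sided p-value against p = 0.5, while the Bayesian reports the posterior probability that the coin favours heads. With a uniform Beta(1, 1) prior the posterior is Beta(10, 4), whose CDF at 0.5 reduces to a binomial tail sum for integer parameters, so plain Python suffices:

```python
from math import comb

heads, tails, n = 9, 3, 12

# Frequentist error control: one-sided p-value P(X >= 9 | n=12, p=0.5).
p_value = sum(comb(n, k) * 0.5**n for k in range(heads, n + 1))

# Bayesian evidence quantification: uniform prior -> Beta(10, 4) posterior.
# For integer a, b: P(Beta(a, b) <= x) = P(Binomial(a + b - 1, x) >= a).
a, b = heads + 1, tails + 1
m = a + b - 1
post_leq_half = sum(comb(m, j) * 0.5**m for j in range(a, m + 1))
post_gt_half = 1 - post_leq_half

print(f"one-sided p-value: {p_value:.3f}")     # 0.073
print(f"P(p > 0.5 | data): {post_gt_half:.3f}")  # 0.954
```

    The two numbers answer different questions: the p-value bounds long-run error rates, while the posterior directly quantifies support for the hypothesis given this data and prior.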
  4. Statistical Significance Testing in Economics.William Peden & Jan Sprenger - 2021 - In Conrad Heilmann & Julian Reiss (eds.), The Routledge Handbook of the Philosophy of Economics.
    The origins of testing scientific models with statistical techniques go back to 18th century mathematics. However, the modern theory of statistical testing was primarily developed through the work of Sir R.A. Fisher, Jerzy Neyman, and Egon Pearson in the inter-war period. Some of Fisher's papers on testing were published in economics journals (Fisher, 1923, 1935) and exerted a notable influence on the discipline. The development of econometrics and the rise of quantitative economic models in the mid-20th century made statistical significance (...)
  5. Classical versus Bayesian Statistics.Eric Johannesson - 2020 - Philosophy of Science 87 (2):302-318.
    In statistics, there are two main paradigms: classical and Bayesian statistics. The purpose of this article is to investigate the extent to which classicists and Bayesians can agree. My conclusion is that, in certain situations, they cannot. The upshot is that, if we assume that the classicist is not allowed to have a higher degree of belief in a null hypothesis after he has rejected it than before, then he has to either have trivial or incoherent credences to (...)
    2 citations
  6. Statistical significance under low power: A Gettier case?Daniel Dunleavy - 2020 - Journal of Brief Ideas.
    A brief idea on statistics and epistemology.
  7. Cognitive Constructivism, Eigen-Solutions, and Sharp Statistical Hypotheses.Julio Michael Stern - 2007 - Cybernetics and Human Knowing 14 (1):9-36.
    In this paper epistemological, ontological and sociological questions concerning the statistical significance of sharp hypotheses in scientific research are investigated within the framework provided by Cognitive Constructivism and the FBST (Full Bayesian Significance Test). The constructivist framework is contrasted with the traditional epistemological settings for orthodox Bayesian and frequentist statistics provided by Decision Theory and Falsificationism.
    16 citations
  8. Revisiting the two predominant statistical problems: the stopping-rule problem and the catch-all hypothesis problem.Yusaku Ohkubo - 2021 - Annals of the Japan Association for Philosophy of Science 30:23-41.
    The history of statistics is filled with many controversies, in which the prime focus has been the difference in the “interpretation of probability” between Frequentist and Bayesian theories. Many philosophical arguments have been elaborated to examine the problems of both theories based on this dichotomized view of statistics, including the well-known stopping-rule problem and the catch-all hypothesis problem. However, there are also several “hybrid” approaches in theory, practice, and philosophical analysis. This poses many fundamental questions. (...)
  9. Improving Bayesian statistics understanding in the age of Big Data with the bayesvl R package.Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Manh-Toan Ho, Manh-Tung Ho & Peter Mantello - 2020 - Software Impacts 4 (1):100016.
    The exponential growth of social data both in volume and complexity has increasingly exposed many of the shortcomings of the conventional frequentist approach to statistics. The scientific community has called for careful usage of the approach and its inference. Meanwhile, the alternative method, Bayesian statistics, still faces considerable barriers toward a more widespread application. The bayesvl R package is an open program, designed for implementing Bayesian modeling and analysis using the Stan language’s no-U-turn sampler (NUTS). The package (...)
    3 citations
  10. Why Inferential Statistics are Inappropriate for Development Studies and How the Same Data Can be Better Used.Clint Ballinger - manuscript
    The purpose of this paper is twofold: 1) to highlight the widely ignored but fundamental problem of ‘superpopulations’ for the use of inferential statistics in development studies. We do not dwell on this problem, however, as it has been sufficiently discussed in older papers by statisticians, which social scientists have nevertheless long chosen to ignore; the interested reader can turn to those for greater detail. 2) to show that descriptive statistics both avoid the problem of (...)
  11. Why do we need to employ Bayesian statistics and how can we employ it in studies of moral education?: With practical guidelines to use JASP for educators and researchers.Hyemin Han - 2018 - Journal of Moral Education 47 (4):519-537.
    In this article, we discuss the benefits of Bayesian statistics and how to utilize them in studies of moral education. To demonstrate concrete examples of the applications of Bayesian statistics to studies of moral education, we reanalyzed two data sets previously collected: one small data set collected from a moral educational intervention experiment, and one big data set from a large-scale Defining Issues Test-2 survey. The results suggest that Bayesian analysis of data sets collected from moral educational studies (...)
    8 citations
  12. Reliable credence and the foundations of statistics.Jesse Clifton - manuscript
    If the goal of statistical analysis is to form justified credences based on data, then an account of the foundations of statistics should explain what makes credences justified. I present a new account called statistical reliabilism (SR), on which credences resulting from a statistical analysis are justified (relative to alternatives) when they are in a sense closest, on average, to the corresponding objective probabilities. This places (SR) in the same vein as recent work on the reliabilist justification of credences (...)
  13. Environmental genotoxicity evaluation: Bayesian approach for a mixture statistical model.Julio Michael Stern, Angela Maria de Souza Bueno, Carlos Alberto de Braganca Pereira & Maria Nazareth Rabello-Gay - 2002 - Stochastic Environmental Research and Risk Assessment 16:267–278.
    The data analyzed in this paper are part of the results described in Bueno et al. (2000). Three cytogenetic endpoints were analyzed in three populations of a species of wild rodent – Akodon montensis – living in an industrial, an agricultural, and a preservation area at the Itajaí Valley, State of Santa Catarina, Brazil. The polychromatic/normochromatic ratio, the mitotic index, and the frequency of micronucleated polychromatic erythrocytes were used in an attempt to establish a genotoxic profile of each area. It (...)
  14. Constructive Verification, Empirical Induction, and Falibilist Deduction: A Threefold Contrast.Julio Michael Stern - 2011 - Information 2 (4):635-650.
    This article explores some open questions related to the problem of verification of theories in the context of empirical sciences by contrasting three epistemological frameworks. Each of these epistemological frameworks is based on a corresponding central metaphor, namely: (a) Neo-empiricism and the gambling metaphor; (b) Popperian falsificationism and the scientific tribunal metaphor; (c) Cognitive constructivism and the object as eigen-solution metaphor. Each of these epistemological frameworks has also historically co-evolved with a certain statistical theory and method for testing (...)
    11 citations
  15. Novelty versus Replicability: Virtues and Vices in the Reward System of Science.Felipe Romero - 2017 - Philosophy of Science 84 (5):1031-1043.
    The reward system of science is the priority rule. The first scientist making a new discovery is rewarded with prestige, while second runners get little or nothing. Michael Strevens, following Philip Kitcher, defends this reward system, arguing that it incentivizes an efficient division of cognitive labor. I argue that this assessment depends on strong implicit assumptions about the replicability of findings. I question these assumptions on the basis of metascientific evidence and argue that the priority rule systematically discourages replication. My (...)
    21 citations
  16. Initial Conditions as Exogenous Factors in Spatial Explanation.Clint Ballinger - 2008 - Dissertation, University of Cambridge
    This dissertation shows how initial conditions play a special role in the explanation of contingent and irregular outcomes, including, in the form of geographic context, the special case of uneven development in the social sciences. The dissertation develops a general theory of this role, recognizes its empirical limitations in the social sciences, and considers how it might be applied to the question of uneven development. The primary purpose of the dissertation is to identify and correct theoretical problems in the study (...)
    2 citations
  17. A more principled use of the p-value? Not so fast: a critique of Colquhoun’s argument.Ognjen Arandjelovic - 2019 - Royal Society Open Science 6 (5):181519.
    The usefulness of the statistic known as the p-value, as a means of quantifying the strength of evidence for the presence of an effect from empirical data, has long been questioned in the statistical community. In recent years there has been a notable increase in the awareness of both fundamental and practical limitations of the statistic within the target research fields, and especially biomedicine. In this article I analyse the recently published article which, in summary, argues that with a better (...)
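    The style of argument at issue in Colquhoun’s article and this critique can be sketched numerically (my illustrative reconstruction, not either author’s code): the false positive risk, i.e. the probability that a result significant at level α is in fact a false positive, depends on statistical power and on the base rate of true hypotheses, not on α alone:

```python
def false_positive_risk(alpha: float, power: float, prior_true: float) -> float:
    """P(H0 true | test rejected), under fixed alpha, power, and base rate."""
    false_positives = alpha * (1 - prior_true)  # rejections when H0 is true
    true_positives = power * prior_true         # rejections when H0 is false
    return false_positives / (false_positives + true_positives)

# Illustrative numbers: 10% of tested hypotheses true, alpha = 0.05.
print(false_positive_risk(alpha=0.05, power=0.8, prior_true=0.1))  # ~0.36
print(false_positive_risk(alpha=0.05, power=0.2, prior_true=0.1))  # ~0.69
```

    Low power inflates the risk sharply, which connects this critique to the low-power worries raised elsewhere in these results.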
  18. Cultural evolution in Vietnam’s early 20th century: a Bayesian networks analysis of Hanoi Franco-Chinese house designs.Quan-Hoang Vuong, Quang-Khiem Bui, Viet-Phuong La, Thu-Trang Vuong, Manh-Toan Ho, Hong-Kong T. Nguyen, Hong-Ngoc Nguyen, Kien-Cuong P. Nghiem & Manh-Tung Ho - 2019 - Social Sciences and Humanities Open 1 (1):100001.
    The study of cultural evolution has taken on an increasingly interdisciplinary and diverse approach in explicating phenomena of cultural transmission and adoptions. Inspired by this computational movement, this study uses Bayesian networks analysis, combining both the frequentist and the Hamiltonian Markov chain Monte Carlo (MCMC) approach, to investigate the highly representative elements in the cultural evolution of a Vietnamese city’s architecture in the early 20th century. With a focus on the façade design of 68 old houses in Hanoi’s Old (...)
    10 citations
  19. Evidence amalgamation, plausibility, and cancer research.Marta Bertolaso & Fabio Sterpetti - 2019 - Synthese 196 (8):3279-3317.
    Cancer research is experiencing ‘paradigm instability’, since there are two rival theories of carcinogenesis confronting each other, namely the somatic mutation theory and the tissue organization field theory. Despite this theoretical uncertainty, a huge quantity of data is available thanks to the improvement of genome sequencing techniques. Some authors think that the development of new statistical tools will be able to overcome the lack of a shared theoretical perspective on cancer by amalgamating as many data as possible. We think instead (...)
    6 citations
  20. Are ecology and evolutionary biology “soft” sciences?Massimo Pigliucci - 2002 - Annales Zoologici Fennici 39:87-98.
    Research in ecology and evolutionary biology (evo-eco) often tries to emulate the “hard” sciences such as physics and chemistry, but to many of its practitioners it feels more like the “soft” sciences of psychology and sociology. I argue that this schizophrenic attitude is the result of a lack of appreciation of the full consequences of the peculiarity of the evo-eco sciences as lying in between a-historical disciplines such as physics and completely historical ones like paleontology. Furthermore, evo-eco researchers have gotten stuck (...)
    4 citations
  21. Hypothesis Testing, “Dutch Book” Arguments, and Risk.Daniel Malinsky - 2015 - Philosophy of Science 82 (5):917-929.
    “Dutch Book” arguments and references to gambling theorems are typical in the debate between Bayesians and scientists committed to “classical” statistical methods. These arguments have rarely convinced non-Bayesian scientists to abandon certain conventional practices, partially because many scientists feel that gambling theorems have little relevance to their research activities. In other words, scientists “don’t bet.” This article examines one attempt, by Schervish, Seidenfeld, and Kadane, to progress beyond such apparent stalemates by connecting “Dutch Book”–type mathematical results with principles actually endorsed (...)
    2 citations
  22. Karl Pearson and the Logic of Science: Renouncing Causal Understanding (the Bride) and Inverted Spinozism.Julio Michael Stern - 2018 - South American Journal of Logic 4 (1):219-252.
    Karl Pearson is the leading figure of twentieth-century statistics. He and his co-workers crafted the core of the theory, methods and language of frequentist or classical statistics – the prevalent inductive logic of contemporary science. However, before working in statistics, K. Pearson had other interests in life, namely, in this order, philosophy, physics, and biological heredity. Key concepts of his philosophical and epistemological system of anti-Spinozism (a form of transcendental idealism) are carried over to his (...)
    1 citation
  23. Bayesian versus frequentist clinical trials.David Teira - 2011 - In Fred Gifford (ed.), Philosophy of Medicine. Amsterdam: Elsevier. pp. 255-297.
    I will open the first part of this paper by trying to elucidate the frequentist foundations of RCTs. I will then present a number of methodological objections against the viability of these inferential principles in the conduct of actual clinical trials. In the following section, I will explore the main ethical issues in frequentist trials, namely those related to randomisation and the use of stopping rules. In the final section of the first part, I will analyse why RCTs (...)
    4 citations
  24. Probabilities in Statistical Mechanics.Wayne C. Myrvold - 2016 - In Alan Hájek & Christopher Hitchcock (eds.), The Oxford Handbook of Probability and Philosophy. Oxford: Oxford University Press. pp. 573-600.
    This chapter will review selected aspects of the terrain of discussions about probabilities in statistical mechanics (with no pretensions to exhaustiveness, though the major issues will be touched upon), and will argue for a number of claims. None of the claims to be defended is entirely original, but all deserve emphasis. The first, and least controversial, is that probabilistic notions are needed to make sense of statistical mechanics. The reason for this is the same reason that convinced Maxwell, Gibbs, and (...)
    22 citations
  25. Knowledge, Evidence, and Naked Statistics.Sherrilyn Roush - 2023 - In Luis R. G. Oliveira (ed.), Externalism about Knowledge. Oxford: Oxford University Press.
    Many who think that naked statistical evidence alone is inadequate for a trial verdict think that use of probability is the problem, and something other than probability – knowledge, full belief, causal relations – is the solution. I argue that the issue of whether naked statistical evidence is weak can be formulated within the probabilistic idiom, as the question whether likelihoods or only posterior probabilities should be taken into account in our judgment of a case. This question also identifies a (...)
    1 citation
  26. Testing the Independence of Poisson Variates under the Holgate Bivariate Distribution: The Power of a New Evidence Test.Julio Michael Stern & Shelemyahu Zacks - 2002 - Statistics and Probability Letters 60:313-320.
    A new Evidence Test is applied to the problem of testing whether two Poisson random variables are dependent. The dependence structure is that of Holgate’s bivariate distribution. This bivariate distribution depends on three parameters, 0 < theta_1, theta_2 < infty, and 0 < theta_3 < min(theta_1, theta_2). The Evidence Test was originally developed as a Bayesian test, but in the present paper it is compared to the best-known test of the hypothesis of independence in a frequentist framework. It (...)
    10 citations
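    Holgate’s bivariate Poisson has a simple additive construction: X = U1 + U3 and Y = U2 + U3 with independent U1 ~ Poisson(theta_1 - theta_3), U2 ~ Poisson(theta_2 - theta_3), and U3 ~ Poisson(theta_3), so that X ~ Poisson(theta_1), Y ~ Poisson(theta_2), and Cov(X, Y) = theta_3. A simulation sketch of that construction (illustrative only; it does not implement the paper’s Evidence Test):

```python
import math
import random

def poisson(lam: float, rng: random.Random) -> int:
    """Knuth's multiplication method; adequate for the small rates used here."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def holgate_pair(t1: float, t2: float, t3: float, rng: random.Random):
    """One draw (X, Y) from Holgate's bivariate Poisson, 0 < t3 < min(t1, t2)."""
    u1 = poisson(t1 - t3, rng)
    u2 = poisson(t2 - t3, rng)
    u3 = poisson(t3, rng)  # shared component induces the dependence
    return u1 + u3, u2 + u3

rng = random.Random(0)
t1, t2, t3 = 1.5, 1.0, 0.5
draws = [holgate_pair(t1, t2, t3, rng) for _ in range(100_000)]
mx = sum(x for x, _ in draws) / len(draws)
my = sum(y for _, y in draws) / len(draws)
cov = sum((x - mx) * (y - my) for x, y in draws) / len(draws)
print(f"means ~ ({mx:.2f}, {my:.2f}), covariance ~ {cov:.2f}")
```

    Testing independence here amounts to testing the sharp hypothesis theta_3 = 0, which is exactly the kind of precise null the Bayesian significance test in this line of work is designed for.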
  27. Statistical Evidence, Sensitivity, and the Legal Value of Knowledge.David Enoch, Levi Spectre & Talia Fisher - 2012 - Philosophy and Public Affairs 40 (3):197-224.
    The law views with suspicion statistical evidence, even evidence that is probabilistically on a par with direct, individual evidence that the law is in no way suspicious of. But it has proved remarkably hard either to justify this suspicion or to debunk it. In this paper, we connect the discussion of statistical evidence to broader epistemological discussions of similar phenomena. We highlight Sensitivity – the requirement that a belief be counterfactually sensitive to the truth in a specific way – as (...)
    103 citations
  28. Statistical resentment, or: what’s wrong with acting, blaming, and believing on the basis of statistics alone.David Enoch & Levi Spectre - 2021 - Synthese 199 (3-4):5687-5718.
    Statistical evidence—say, that 95% of your co-workers badmouth each other—can never render resenting your colleague appropriate, in the way that other evidence (say, the testimony of a reliable friend) can. The problem of statistical resentment is to explain why. We put the problem of statistical resentment in several wider contexts: The context of the problem of statistical evidence in legal theory; the epistemological context—with problems like the lottery paradox for knowledge, epistemic impurism and doxastic wrongdoing; and the context of a (...)
    10 citations
  29. Unit Roots: Bayesian Significance Test.Julio Michael Stern, Marcio Alves Diniz & Carlos Alberto de Braganca Pereira - 2011 - Communications in Statistics 40 (23):4200-4213.
    The unit root problem plays a central role in empirical applications in the time series econometric literature. However, significance tests developed under the frequentist tradition present various conceptual problems that jeopardize the power of these tests, especially for small samples. Bayesian alternatives, although having interesting interpretations and being precisely defined, experience problems due to the fact that the hypothesis of interest in this case is sharp or precise. The Bayesian significance test used in this article, for the unit (...)
  30. Rehabilitating Statistical Evidence.Lewis Ross - 2019 - Philosophy and Phenomenological Research 102 (1):3-23.
    Recently, the practice of deciding legal cases on purely statistical evidence has been widely criticised. Many feel uncomfortable with finding someone guilty on the basis of bare probabilities, even though the chance of error might be stupendously small. This is an important issue: with the rise of DNA profiling, courts are increasingly faced with purely statistical evidence. A prominent line of argument—endorsed by Blome-Tillmann 2017; Smith 2018; and Littlejohn 2018—rejects the use of such evidence by appealing to epistemic norms that (...)
    23 citations
  31. Demographic statistics in defensive decisions.Renée Jorgensen Bolinger - 2019 - Synthese 198 (5):4833-4850.
    A popular informal argument suggests that statistics about the preponderance of criminal involvement among particular demographic groups partially justify others in making defensive mistakes against members of the group. One could worry that evidence-relative accounts of moral rights vindicate this argument. After constructing the strongest form of this objection, I offer several replies: most demographic statistics face an unmet challenge from reference class problems; even those that meet it fail to ground non-negligible conditional probabilities; even if they did, (...)
    4 citations
  32. On statistical criteria of algorithmic fairness.Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of false (...)
    34 citations
  33. Statistical Evidence, Normalcy, and the Gatecrasher Paradox.Michael Blome-Tillmann - 2020 - Mind 129 (514):563-578.
    Martin Smith has recently proposed, in this journal, a novel and intriguing approach to puzzles and paradoxes in evidence law arising from the evidential standard of the Preponderance of the Evidence. According to Smith, the relation of normic support provides us with an elegant solution to those puzzles. In this paper I develop a counterexample to Smith’s approach and argue that normic support can neither account for our reluctance to base affirmative verdicts on bare statistical evidence nor resolve the pertinent (...)
    5 citations
  34. Merely statistical evidence: when and why it justifies belief.Paul Silva - 2023 - Philosophical Studies 180 (9):2639-2664.
    It is one thing to hold that merely statistical evidence is _sometimes_ insufficient for rational belief, as in typical lottery and profiling cases. It is another thing to hold that merely statistical evidence is _always_ insufficient for rational belief. Indeed, there are cases where statistical evidence plainly does justify belief. This project develops a dispositional account of the normativity of statistical evidence, where the dispositions that ground justifying statistical evidence are connected to the goals (= proper function) of objects. There (...)
    6 citations
  35. The Statistical Nature of Causation.David Papineau - 2022 - The Monist 105 (2):247-275.
    Causation is a macroscopic phenomenon. The temporal asymmetry displayed by causation must somehow emerge along with other asymmetric macroscopic phenomena like entropy increase and the arrow of radiation. I shall approach this issue by considering ‘causal inference’ techniques that allow causal relations to be inferred from sets of observed correlations. I shall show that these techniques are best explained by a reduction of causation to structures of equations with probabilistically independent exogenous terms. This exogenous probabilistic independence imposes a recursive order (...)
    4 citations
  36. Statistical Mechanical Imperialism.Brad Weslake - 2014 - In Alastair Wilson (ed.), Chance and Temporal Asymmetry. Oxford: Oxford University Press. pp. 241-257.
    I argue against the claim, advanced by David Albert and Barry Loewer, that all non-fundamental laws can be derived from those required to underwrite the second law of thermodynamics.
    11 citations
  37. Statistical mechanics and thermodynamics: A Maxwellian view.Wayne C. Myrvold - 2011 - Studies in History and Philosophy of Science Part A 42 (4):237-243.
    One finds, in Maxwell's writings on thermodynamics and statistical physics, a conception of the nature of these subjects that differs in interesting ways from the way that they are usually conceived. In particular, though—in agreement with the currently accepted view—Maxwell maintains that the second law of thermodynamics, as originally conceived, cannot be strictly true, the replacement he proposes is different from the version accepted by most physicists today. The modification of the second law accepted by most physicists is a probabilistic (...)
    16 citations
  38. When statistical evidence is not specific enough.Marcello Di Bello - 2021 - Synthese 199 (5-6):12251-12269.
    Many philosophers have pointed out that statistical evidence, or at least some forms of it, lack desirable epistemic or non-epistemic properties, and that this should make us wary of litigations in which the case against the defendant rests in whole or in part on statistical evidence. Others have responded that such broad reservations about statistical evidence are overly restrictive since appellate courts have expressed nuanced views about statistical evidence. In an effort to clarify and reconcile, I put forward an interpretive (...)
    2 citations
  39. Statistical Thinking between Natural and Social Sciences and the Issue of the Unity of Science: from Quetelet to the Vienna Circle.Donata Romizi - 2012 - In Dennis Dieks, Wenceslao J. Gonzalez, Stephan Hartmann, Michael Stöltzner & Marcel Weber (eds.), Probabilities, Laws, and Structures. Springer Verlag.
    The application of statistical methods and models both in the natural and social sciences is nowadays a trivial fact which nobody would deny. Bold analogies even suggest the application of the same statistical models to fields as different as statistical mechanics and economics, among them the case of the young and controversial discipline of Econophysics . Less trivial, however, is the answer to the philosophical question, which has been raised ever since the possibility of “commuting” statistical thinking and models between (...)
  40. An Alternative Interpretation of Statistical Mechanics.C. D. McCoy - 2020 - Erkenntnis 85 (1):1-21.
    In this paper I propose an interpretation of classical statistical mechanics that centers on taking seriously the idea that probability measures represent complete states of statistical mechanical systems. I show how this leads naturally to the idea that the stochasticity of statistical mechanics is associated directly with the observables of the theory rather than with the microstates (as traditional accounts would have it). The usual assumption that microstates are representationally significant in the theory is therefore dispensable, a consequence which suggests (...)
    11 citations
  41. Disparate Statistics.Kevin P. Tobia - 2017 - Yale Law Journal 126 (8):2382-2420.
    Statistical evidence is crucial throughout disparate impact’s three-stage analysis: during (1) the plaintiff’s prima facie demonstration of a policy’s disparate impact; (2) the defendant’s job-related business necessity defense of the discriminatory policy; and (3) the plaintiff’s demonstration of an alternative policy without the same discriminatory impact. The circuit courts are split on a vital question about the “practical significance” of statistics at Stage 1: Are “small” impacts legally insignificant? For example, is an employment policy that causes a one percent (...)
  42. Foundation of statistical mechanics: Mechanics by itself.Orly Shenker - 2017 - Philosophy Compass 12 (12):e12465.
    Statistical mechanics is a strange theory. Its aims are debated, its methods are contested, its main claims have never been fully proven, and their very truth is challenged, yet at the same time, it enjoys huge empirical success and gives us the feeling that we understand important phenomena. What is this weird theory, exactly? Statistical mechanics is the name of the ongoing attempt to apply mechanics, together with some auxiliary hypotheses, to explain and predict certain phenomena, above all those described (...)
    17 citations
  43. A statistical learning approach to a problem of induction.Kino Zhao - manuscript
    At its strongest, Hume's problem of induction denies the existence of any well justified assumptionless inductive inference rule. At its weakest, it challenges our ability to articulate and apply good inductive inference rules. This paper examines an analysis that is closer to the latter camp. It reviews one answer to this problem drawn from the VC theorem in statistical learning theory and argues for its inadequacy. In particular, I show that it cannot be computed, in general, whether we are in (...)
  44. Neutrosophic Statistics is an extension of Interval Statistics, while Plithogenic Statistics is the most general form of statistics (second version).Florentin Smarandache - 2022 - International Journal of Neutrosophic Science 19 (1):148-165.
    In this paper, we prove that Neutrosophic Statistics is more general than Interval Statistics, since it may deal with all types of indeterminacies (with respect to the data, inferential procedures, probability distributions, graphical representations, etc.), it allows the reduction of indeterminacy, and it uses the neutrosophic probability that is more general than imprecise and classical probabilities and has more detailed corresponding probability density functions. Interval Statistics, by contrast, only deals with indeterminacy that can be represented by intervals. And (...)
    1 citation
  45. Statistics as Figleaves.Felix Bräuer - 2023 - Topoi 42 (2):433-443.
    Recently, Jennifer Saul (“Racial Figleaves, the Shifting Boundaries of the Permissible, and the Rise of Donald Trump”, 2017; “Racist and Sexist Figleaves”, 2021) has explored the use of what she calls “figleaves” in the discourse on race and gender. Following Saul, a figleaf is an utterance that, for some portion of the audience, blocks the conclusion that some other utterance, R, or the person who uttered R is racist or sexist. Such racial and gender figleaves are pernicious, says Saul, because, (...)
  46. Legal proof and statistical conjunctions.Lewis D. Ross - 2020 - Philosophical Studies 178 (6):2021-2041.
    A question, long discussed by legal scholars, has recently provoked a considerable amount of philosophical attention: ‘Is it ever appropriate to base a legal verdict on statistical evidence alone?’ Many philosophers who have considered this question reject legal reliance on bare statistics, even when the odds of error are extremely low. This paper develops a puzzle for the dominant theories concerning why we should eschew bare statistics. Namely, there seem to be compelling scenarios in which there are multiple (...)
    5 citations
  47. Quantum Foundations of Statistical Mechanics and Thermodynamics.Orly Shenker - 2022 - In Eleanor Knox & Alastair Wilson (eds.), The Routledge Companion to Philosophy of Physics. London, UK: Routledge. pp. Ch. 29.
    Statistical mechanics is often taken to be the paradigm of a successful inter-theoretic reduction, which explains the high-level phenomena (primarily those described by thermodynamics) by using the fundamental theories of physics together with some auxiliary hypotheses. In my view, the scope of statistical mechanics is wider since it is the type-identity physicalist account of all the special sciences. But in this chapter, I focus on the more traditional and less controversial domain of this theory, namely, that of explaining the thermodynamic (...)
    2 citations
  48. Statistical Discrimination.Annabelle Lever - 2016 - The Philosophers Magazine 7 (2).
  49. Inherent Properties and Statistics with Individual Particles in Quantum Mechanics.Matteo Morganti - 2009 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 40 (3):223-231.
    This paper puts forward the hypothesis that the distinctive features of quantum statistics are exclusively determined by the nature of the properties it describes. In particular, all statistically relevant properties of identical quantum particles in many-particle systems are conjectured to be irreducible, ‘inherent’ properties only belonging to the whole system. This allows one to explain quantum statistics without endorsing the ‘Received View’ that particles are non-individuals, or postulating that quantum systems obey peculiar probability distributions, or assuming that there (...)
    17 citations
  50. A new statistical solution to the generality problem.Samuel Kampa - 2018 - Episteme 15 (2):228-244.
    The Generality Problem is widely recognized to be a serious problem for reliabilist theories of justification. James R. Beebe's Statistical Solution is one of only a handful of attempted solutions that has garnered serious attention in the literature. In their recent response to Beebe, Julien Dutant and Erik J. Olsson successfully refute Beebe's Statistical Solution. This paper presents a New Statistical Solution that countenances Dutant and Olsson's objections, dodges the serious problems that trouble rival solutions, and retains the theoretical virtues (...)
    9 citations
1 — 50 / 1000