27 found
  1. The Psychology of The Two Envelope Problem.J. S. Markovitch - manuscript
    This article concerns the psychology of the paradoxical Two Envelope Problem. The goal is to find instructive variants of the envelope-switching problem that admit clear-cut resolution while still retaining paradoxical features. By relocating the original problem into different contexts involving commutes and playing cards, the reader is presented with a succession of resolved paradoxes that reduce the confusion arising from the parent paradox. The aim is to reduce confusion by understanding how we sometimes misread mathematical statements; or, (...)
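The switching argument behind the parent paradox can be checked directly. Below is a minimal Monte Carlo sketch (my own illustration, not one of the article's commute or playing-card variants), assuming one envelope holds x and the other 2x: always switching and never switching come out the same on average, against the naive claim that switching yields 5/4 of what you hold.

```python
# Monte Carlo sketch of the Two Envelope Problem (illustrative, not from
# the article): one envelope holds x, the other 2x; you pick at random.
import random

def trial(switch: bool, x: float = 100.0) -> float:
    envelopes = [x, 2 * x]
    pick = random.randrange(2)   # choose an envelope at random
    if switch:
        pick = 1 - pick          # take the other envelope instead
    return envelopes[pick]

random.seed(0)
n = 100_000
keep = sum(trial(switch=False) for _ in range(n)) / n
switch = sum(trial(switch=True) for _ in range(n)) / n
print(keep, switch)  # both come out close to 150.0 = (x + 2x) / 2
```

Both strategies average (x + 2x)/2; the "expected gain of 5x/4 from switching" misreads which quantity is held fixed.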
  2. Legal Burdens of Proof and Statistical Evidence.Georgi Gardiner - forthcoming - In James Chase & David Coady (eds.), The Routledge Handbook of Applied Epistemology. Routledge.
    In order to perform certain actions – such as incarcerating a person or revoking parental rights – the state must establish certain facts to a particular standard of proof. These standards – such as preponderance of evidence and beyond reasonable doubt – are often interpreted as likelihoods or epistemic confidences. Many theorists construe them numerically; beyond reasonable doubt, for example, is often construed as 90 to 95% confidence in the guilt of the defendant. A family of influential cases suggests (...)
  3. Sample Representation in the Social Sciences.Kino Zhao - forthcoming - Synthese:1-19.
    The social sciences face a problem of sample non-representation, where the majority of samples consist of undergraduate students from Euro-American institutions. The problem has been identified for decades with little trend of improvement. In this paper, I trace the history of sampling theory. The dominant framework, called the design-based approach, takes random sampling as the gold standard. The idea is that a sampling procedure that is maximally uninformative prevents samplers from introducing arbitrary bias, thus preserving sample representation. I show how (...)
  4. Causal Inference From Noise.Nevin Climenhaga, Lane DesAutels & Grant Ramsey - 2021 - Noûs 55 (1):152-170.
    "Correlation is not causation" is one of the mantras of the sciences—a cautionary warning especially to fields like epidemiology and pharmacology where the seduction of compelling correlations naturally leads to causal hypotheses. The standard view from the epistemology of causation is that to tell whether one correlated variable is causing the other, one needs to intervene on the system—the best sort of intervention being a trial that is both randomized and controlled. In this paper, we argue that some purely correlational (...)
  5. Statistical Inference and the Replication Crisis.Lincoln J. Colling & Dénes Szűcs - 2021 - Review of Philosophy and Psychology 12 (1):121-147.
    The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference, with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative strength of evidence for hypotheses. (...)
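The contrast the abstract describes can be made concrete with a toy coin-flip example (the numbers are mine, not the authors'): a result that a Frequentist test rejects at the 0.05 level can leave the Bayes factor for the point null against a uniform alternative near 1, i.e. the two frameworks read the same data very differently.

```python
# Illustrative numbers: 115 heads in 200 flips under H0: fair coin.
# Two-sided p-value rejects at 0.05, yet the Bayes factor for
# H0 (p = 1/2) against H1 (bias ~ Uniform[0,1]) is close to 1.
from math import comb

n, k = 200, 115
pmf = lambda j: comb(n, j) * 0.5 ** n  # Binomial(n, 1/2) pmf
# two-sided p-value: outcomes at least as extreme as k
p_value = sum(pmf(j) for j in range(n + 1) if abs(j - n / 2) >= abs(k - n / 2))
# under H1 the marginal probability of any count k is 1 / (n + 1)
bf01 = pmf(k) / (1.0 / (n + 1))
print(round(p_value, 3), round(bf01, 2))
```

The p-value is about 0.04 (reject), while BF01 is slightly above 1 (mildly favouring the null), a small-scale version of the Jeffreys-Lindley tension.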
  6. Statistical Significance Under Low Power: A Gettier Case?Daniel Dunleavy - 2020 - Journal of Brief Ideas.
  7. “Repeated Sampling From the Same Population?” A Critique of Neyman and Pearson’s Responses to Fisher.Mark Rubin - 2020 - European Journal for Philosophy of Science 10 (3):1-15.
    Fisher criticised the Neyman-Pearson approach to hypothesis testing by arguing that it relies on the assumption of “repeated sampling from the same population.” The present article considers the responses to this criticism provided by Pearson and Neyman. Pearson interpreted alpha levels in relation to imaginary replications of the original test. This interpretation is appropriate when test users are sure that their replications will be equivalent to one another. However, by definition, scientific researchers do not possess sufficient knowledge about the relevant (...)
  8. Conditional Degree of Belief and Bayesian Inference.Jan Sprenger - 2020 - Philosophy of Science 87 (2):319-335.
    Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey’s classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian (...)
  9. ISR Centre Publishes an Article Celebrating the 130th Anniversary of President Hồ Chí Minh's Birth.Hồ Mạnh Toàn - 2020 - ISR Phenikaa 2020 (5):1-3.
    The new article, published on 19 May 2020 with doctoral researcher Nguyễn Minh Hoàng of the ISR Centre as corresponding author, presents a Bayesian statistical approach to the study of social-science data. It is a result of the research direction of the SDAG research group, set out from 18 May 2019.
  10. An Automatic Ockham’s Razor for Bayesians?Gordon Belot - 2019 - Erkenntnis 84 (6):1361-1367.
    It is sometimes claimed that the Bayesian framework automatically implements Ockham’s razor—that conditionalizing on data consistent with both a simple theory and a complex theory more or less inevitably favours the simpler theory. It is shown here that the automatic razor doesn’t in fact cut it for certain mundane curve-fitting problems.
  11. Evidence Amalgamation, Plausibility, and Cancer Research.Marta Bertolaso & Fabio Sterpetti - 2019 - Synthese 196 (8):3279-3317.
    Cancer research is experiencing ‘paradigm instability’, since two rival theories of carcinogenesis confront each other: the somatic mutation theory and the tissue organization field theory. Despite this theoretical uncertainty, a huge quantity of data is available thanks to improvements in genome sequencing techniques. Some authors think that the development of new statistical tools will be able to overcome the lack of a shared theoretical perspective on cancer by amalgamating as many data as possible. We think instead (...)
  12. Multiple Regression Is Not Multiple Regressions: The Meaning of Multiple Regression and the Non-Problem of Collinearity.Michael B. Morrissey & Graeme D. Ruxton - 2018 - Philosophy, Theory, and Practice in Biology 10 (3).
    Simple regression (regression analysis with a single explanatory variable) and multiple regression (regression models with multiple explanatory variables) typically correspond to very different biological questions. The former uses regression lines to describe univariate associations; the latter describes the partial, or direct, effects of multiple variables, conditioned on one another. We suspect that the superficial similarity of simple and multiple regression leads to confusion in their interpretation. A clear understanding of these methods is essential, as they underlie a large range of (...)
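The distinction between univariate associations and partial effects can be illustrated with simulated data (a sketch of my own, not an example from the paper): when x2 is merely correlated with the true cause x1, the simple slope of y on x2 is large, while the multiple-regression coefficient of x2 is near zero, because the two analyses answer different questions.

```python
# Simulated data: y depends on x1 only; x2 is correlated with x1.
# Compare the simple slope of y on x2 with its partial coefficient.
import random

random.seed(1)
n = 5000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.8 * a + random.gauss(0, 0.6) for a in x1]  # correlated with x1
y = [1.0 * a + random.gauss(0, 0.5) for a in x1]   # x2 plays no causal role

def centered_dot(u, v):
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v))

S11, S22, S12 = centered_dot(x1, x1), centered_dot(x2, x2), centered_dot(x1, x2)
S1y, S2y = centered_dot(x1, y), centered_dot(x2, y)

simple_b2 = S2y / S22                       # slope from y ~ x2 alone
det = S11 * S22 - S12 ** 2
partial_b2 = (S11 * S2y - S12 * S1y) / det  # x2 coefficient in y ~ x1 + x2
print(round(simple_b2, 2), round(partial_b2, 2))
```

The simple slope lands near 0.8 (the induced correlation), while the partial coefficient is near its true value of 0.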
  13. Imprecise Probability and the Measurement of Keynes's "Weight of Arguments".William Peden - 2018 - IfCoLog Journal of Logics and Their Applications 5 (4):677-708.
    Many philosophers argue that Keynes’s concept of the “weight of arguments” is an important aspect of argument appraisal. The weight of an argument is the quantity of relevant evidence cited in the premises. However, this dimension of argumentation does not have a received method for formalisation. Kyburg has suggested a measure of weight that uses the degree of imprecision in his system of “Evidential Probability” to quantify weight. I develop and defend this approach to measuring weight. I illustrate the usefulness (...)
  14. A Vietnamese Scientist Stands as Sole Author in a Leading Data-Science Journal of Nature Research.Thùy Dương - 2017 - Dân Trí Online 2017 (10).
    Dân Trí (25/10/2017): For the first time, a Vietnamese scientist, with research carried out entirely (100%) in Vietnam, has been published as sole author in Scientific Data, a leading data-science journal in the publishing portfolio of the prestigious Nature Research.
  15. The Value of Uncertainty: Evaluating the Precision of Physical Measurements and the Limits of Experimental Knowledge.Fabien Grégis - 2016 - Dissertation, Université Sorbonne Paris Cité / Université Paris Diderot (Paris 7)
    Abstract: A measurement result is never absolutely accurate: it is affected by an unknown “measurement error” which characterizes the discrepancy between the obtained value and the “true value” of the quantity intended to be measured. As a consequence, to be acceptable, a measurement result cannot take the form of a unique numerical value but has to be accompanied by an indication of its “measurement uncertainty”, which enunciates a state of doubt. What, though, is the value of measurement uncertainty? What (...)
  16. Philosophy as Conceptual Engineering: Inductive Logic in Rudolf Carnap's Scientific Philosophy.Christopher F. French - 2015 - Dissertation, University of British Columbia
    My dissertation explores the ways in which Rudolf Carnap sought to make philosophy scientific, further developing recent interpretive efforts to explain Carnap’s mature philosophical work as a form of engineering. It does this by looking in detail at his philosophical practice in his most sustained mature project, his work on pure and applied inductive logic. I first specify the sort of engineering Carnap is engaged in as involving an engineering design problem and then draw out the complications of design (...)
  17. Hypothesis Testing, “Dutch Book” Arguments, and Risk.Daniel Malinsky - 2015 - Philosophy of Science 82 (5):917-929.
    “Dutch Book” arguments and references to gambling theorems are typical in the debate between Bayesians and scientists committed to “classical” statistical methods. These arguments have rarely convinced non-Bayesian scientists to abandon certain conventional practices, partially because many scientists feel that gambling theorems have little relevance to their research activities. In other words, scientists “don’t bet.” This article examines one attempt, by Schervish, Seidenfeld, and Kadane, to progress beyond such apparent stalemates by connecting “Dutch Book”–type mathematical results with principles actually endorsed (...)
  18. A Merton Model of Credit Risk with Jumps.Hoang Thi Phuong Thao & Quan-Hoang Vuong - 2015 - Journal of Statistics Applications and Probability Letters 2 (2):97-103.
    In this note, we consider a Merton model for default risk, where the firm’s value is driven by a Brownian motion and a compound Poisson process.
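A minimal simulation sketch of such a model, with illustrative parameters of my own choosing (not the authors'): firm value follows geometric Brownian motion plus lognormal jumps arriving at Poisson times, and "default" is read off as the terminal value falling below a debt level D.

```python
# Jump-diffusion firm value in the spirit of a Merton default model:
# GBM diffusion plus a compound Poisson jump component (parameters are
# illustrative only).
import math
import random

random.seed(42)
V0, D, T = 100.0, 70.0, 1.0               # initial value, debt level, horizon
mu, sigma = 0.05, 0.20                    # drift and diffusion volatility
lam, jump_mu, jump_sd = 0.5, -0.10, 0.15  # jump intensity, log-jump sizes

def terminal_value() -> float:
    # diffusion part over [0, T]
    log_v = (math.log(V0) + (mu - 0.5 * sigma ** 2) * T
             + sigma * math.sqrt(T) * random.gauss(0, 1))
    # compound Poisson part: jumps at exponential inter-arrival times
    t = random.expovariate(lam)
    while t < T:
        log_v += random.gauss(jump_mu, jump_sd)  # one lognormal jump
        t += random.expovariate(lam)
    return math.exp(log_v)

n = 50_000
p_default = sum(terminal_value() < D for _ in range(n)) / n
print(p_default)  # Monte Carlo estimate of P(V_T < D)
```

With these parameters the estimated default probability comes out in the high single digits of percent; the jump component makes the left tail heavier than pure GBM would.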
  19. Review of “Desrosières, Alain (2014), Prouver et gouverner. Une analyse politique des statistiques publiques”. [REVIEW]Marc-Kevin Daoust - 2014 - Science Ouverte 1:1-7.
    Prouver et gouverner studies the role of institutions, conventions, and normative issues in the construction of quantitative indicators. Desrosières holds that one cannot study the scientific development of statistics without taking into account its institutional development, in particular the role of the state, in the constitution of the discipline.
  20. A Frequentist Solution to Lindley & Phillips’ Stopping Rule Problem in Ecological Realm.Adam P. Kubiak - 2014 - Zagadnienia Naukoznawstwa 50 (200):135-145.
    In this paper I provide a frequentist philosophical-methodological solution to the stopping rule problem presented by Lindley and Phillips in 1976, set in the ecological context of testing koalas’ sex ratio. I deliver criteria for discerning a stopping rule, evidence, and a model that are epistemically more appropriate for testing the hypothesis of the case studied, by appealing to the physical notion of probability and by analyzing the content of possible formulations of evidence, assumptions of models and meaning (...)
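The stopping-rule problem at issue can be reproduced with the classic numbers from Lindley and Phillips (my reconstruction; the koala case in the paper may use different figures): the same data, 9 of one sex and 3 of the other in 12 observations, yield different one-sided p-values under a fair-ratio null depending on whether n = 12 was fixed in advance or sampling stopped at the 3rd observation of the minority sex.

```python
# Same data, two sampling designs, two p-values under H0: ratio 1/2.
from math import comb

males, females = 9, 3
n = males + females

# Binomial design: n fixed at 12, p-value = P(X >= 9)
p_binom = sum(comb(n, k) for k in range(males, n + 1)) / 2 ** n

# Negative binomial design: stop at the 3rd female,
# p-value = P(at least 9 males before the 3rd female)
p_negbin = 1.0 - sum(comb(y + females - 1, females - 1) * 0.5 ** (y + females)
                     for y in range(males))
print(round(p_binom, 4), round(p_negbin, 4))  # 0.073 vs. 0.0327
```

At the conventional 0.05 level the two designs disagree about significance, which is exactly the likelihood-principle tension the paper addresses from a frequentist standpoint.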
  21. OBCS: The Ontology of Biological and Clinical Statistics.Jie Zheng, Marcelline R. Harris, Anna Maria Masci, Yu Lin, Alfred Hero, Barry Smith & Yongqun He - 2014 - Proceedings of the Fifth International Conference on Biomedical Ontology 1327:65.
    Statistics play a critical role in biological and clinical research. To promote logically consistent representation and classification of statistical entities, we have developed the Ontology of Biological and Clinical Statistics (OBCS). OBCS extends the Ontology of Biomedical Investigations (OBI), an OBO Foundry ontology supported by some 20 communities. Currently, OBCS contains 686 terms, including 381 classes imported from OBI and 147 classes specific to OBCS. The goal of this paper is to present OBCS for community critique and to describe a (...)
  22. The Undetectable Difference: An Experimental Look at the ‘Problem’ of P-Values.William M. Goodman - 2010 - Statistical Literacy Website/Papers: Www.Statlit.Org/Pdf/2010GoodmanASA.Pdf.
    In the face of continuing assumptions by many scientists and journal editors that p-values provide a gold standard for inference, counter warnings are published periodically. But the core problem is not with p-values, per se. A finding that “p-value is less than α” could merely signal that a critical value has been exceeded. The question is why, when estimating a parameter, we provide a range (a confidence interval), but when testing a hypothesis about a parameter (e.g. µ = x) we (...)
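Goodman's contrast between a bare "p-value is less than α" verdict and an interval estimate can be sketched with made-up summary statistics: two studies that both reject H0: µ = 0 at the 0.05 level carry very different information once a 95% interval is reported alongside the verdict.

```python
# Two hypothetical studies, both "significant", very different intervals.
import math

def summary(mean, sd, n):
    se = sd / math.sqrt(n)
    z = mean / se                 # test statistic for H0: mu = 0
    half = 1.96 * se              # 95% z-interval half-width
    return z, (mean - half, mean + half)

z_a, ci_a = summary(mean=0.10, sd=1.0, n=500)  # barely significant, tiny effect
z_b, ci_b = summary(mean=2.00, sd=1.0, n=500)  # hugely significant, large effect
print(z_a > 1.96, ci_a)
print(z_b > 1.96, ci_b)
```

Both tests exceed the critical value, but only the intervals reveal that one effect is practically negligible while the other is large, which is the information a bare significance verdict discards.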
  23. Steps Towards a Unified Basis for Scientific Models and Methods.Inge S. Helland - 2010 - World Scientific.
    The book attempts to build a bridge across three cultures: mathematical statistics, quantum theory and chemometrical methods.
  24. Pearson’s Wrong Turning: Against Statistical Measures of Causal Efficacy.Robert Northcott - 2005 - Philosophy of Science 72 (5):900-912.
    Standard statistical measures of strength of association, although pioneered by Pearson deliberately to be acausal, nowadays are routinely used to measure causal efficacy. But their acausal origins have left them ill suited to this latter purpose. I distinguish between two different conceptions of causal efficacy, and argue that: (1) both conceptions can be useful; (2) the statistical measures only attempt to capture the first of them; (3) they are not fully successful even at this; (4) an alternative definition more squarely (...)
  25. Bayes's Theorem. [REVIEW]Massimo Pigliucci - 2005 - Quarterly Review of Biology 80 (1):93-95.
    About a British Academy collection of papers on Bayes' famous theorem.
  26. 50 Years of Successful Predictive Modeling Should Be Enough: Lessons for Philosophy of Science.Michael Bishop & J. D. Trout - 2002 - Philosophy of Science 69 (S3):S197-S208.
    Our aim in this paper is to bring the woefully neglected literature on predictive modeling to bear on some central questions in the philosophy of science. The lesson of this literature is straightforward: For a very wide range of prediction problems, statistical prediction rules (SPRs), often rules that are very easy to implement, make predictions that are as reliable as, and typically more reliable than, human experts. We will argue that the success of SPRs forces us to reconsider our views (...)
  27. Derivation of the Cramer-Rao Bound.Ryan Reece - manuscript
    I give a pedagogical derivation of the Cramer-Rao bound, which places a lower bound on the variance of the estimators used in statistical point estimation and is commonly used to give numerical estimates of the systematic uncertainties in a measurement.
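The bound can be checked numerically (my own sketch, not part of the note's derivation): for n Bernoulli(p) trials the Fisher information is n/(p(1-p)), so the Cramer-Rao bound for an unbiased estimator of p is p(1-p)/n, which the sample mean attains.

```python
# Monte Carlo check that the sample mean's variance matches the
# Cramer-Rao bound p(1-p)/n for Bernoulli(p) data.
import random

random.seed(7)
p, n, reps = 0.3, 50, 20_000
crb = p * (1 - p) / n  # Cramer-Rao bound = 0.0042

estimates = []
for _ in range(reps):
    sample = [1 if random.random() < p else 0 for _ in range(n)]
    estimates.append(sum(sample) / n)  # sample mean = MLE of p

mean_hat = sum(estimates) / reps
var_hat = sum((e - mean_hat) ** 2 for e in estimates) / reps
print(crb, round(var_hat, 5))  # empirical variance sits at the bound
```

The sample mean is an efficient estimator here, so the simulated variance matches the bound rather than merely exceeding it; a less efficient estimator would land above.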