Related

Contents (59 found; showing 1-50)
  1. Surreal Probabilities. J. Dmitri Gallow - manuscript
    We will flip a fair coin infinitely many times. Al calls the first flip, claiming it will land heads. Betty calls every odd numbered flip, claiming they will all land heads. Carl calls every flip bar none, claiming they will all land heads. Pre-theoretically, it seems that Al's claim is infinitely more likely than Betty's, and that Betty's claim is infinitely more likely than Carl's. But standard, real-valued probability theory says that, while Al's claim is infinitely more likely than Betty's, (...)
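A quick worked version of the contrast described above, under the standard real-valued treatment of a fair coin (an editorial gloss, not the paper's own notation):

```latex
P(\text{Al is right}) = \tfrac{1}{2}, \qquad
P(\text{Betty is right}) = \lim_{n \to \infty}\left(\tfrac{1}{2}\right)^{n} = 0, \qquad
P(\text{Carl is right}) = 0 .
```

Betty's and Carl's claims both receive probability 0, so the intuitive judgment that Betty's claim is infinitely more likely than Carl's has no real-valued representation; that is the gap a surreal-valued theory is meant to fill.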
  2. Practical foundations for probability: Prediction methods and calibration. Benedikt Höltgen - manuscript
    Although probabilistic statements are ubiquitous, probability is still poorly understood. This shows itself, for example, in the mere stipulation of policies like expected utility maximisation and in disagreements about the correct interpretation of probability. In this work, we provide an account of probabilistic predictions that explains when, how, and why they can be useful for decision-making. We demonstrate that a calibration criterion on finite sets of predictions allows one to anticipate the distribution of utilities that a given policy will yield. (...)
  3. (2 other versions) Linguistic Copenhagen interpretation of quantum mechanics: Quantum Language [Ver. 6] (6th edition). Shiro Ishikawa - manuscript
    Recently I proposed “quantum language” (or, “the linguistic Copenhagen interpretation of quantum mechanics”), which was not only characterized as the metaphysical and linguistic turn of quantum mechanics but also the linguistic turn of Descartes=Kant epistemology. Namely, quantum language is the scientific final goal of dualistic idealism. It has a great power to describe classical systems as well as quantum systems. In this research report, quantum language is seen as a fundamental theory of statistics and reveals the true nature of statistics.
  4. Scientific Realism vs. Anti-Realism: Toward a Common Ground. Hanti Lin - manuscript
    The debate between scientific realism and anti-realism remains at a stalemate, making reconciliation seem hopeless. Yet, important work remains: exploring a common ground, even if only to uncover deeper points of disagreement and, ideally, to benefit both sides of the debate. I propose such a common ground. Specifically, many anti-realists, such as instrumentalists, have yet to seriously engage with Sober's call to justify their preferred version of Ockham's razor through a positive account. Meanwhile, realists face a similar challenge: providing a (...)
  5. Frequentist Statistics as Internalist Reliabilism. Hanti Lin - manuscript
    There has long been an impression that reliabilism implies externalism and that frequentist statistics is externalist because it is reliabilist. I argue, however, that frequentist statistics can be plausibly understood as a form of internalist reliabilism -- internalist in the conventional sense but reliabilist in certain unconventional yet intriguing ways. More importantly, I develop the thesis that reliabilism does not imply externalism, not by stretching the meaning of 'reliabilism' merely to break the implication, but in order to better understand frequentist (...)
    1 citation
  6. The Psychology of The Two Envelope Problem. J. S. Markovitch - manuscript
    This article concerns the psychology of the paradoxical Two Envelope Problem. The goal is to find instructive variants of the envelope-switching problem that are capable of clear-cut resolution while still retaining paradoxical features. By relocating the original problem into different contexts involving commutes and playing cards, the article presents the reader with a succession of resolved paradoxes that reduce the confusion arising from the parent paradox. The aim is to reduce confusion by understanding how we sometimes misread mathematical statements; or, (...)
  7. Preregistration Does Not Improve the Transparent Evaluation of Severity in Popper’s Philosophy of Science or When Deviations are Allowed. Mark Rubin - manuscript
    One justification for preregistering research hypotheses, methods, and analyses is that it improves the transparent evaluation of the severity of hypothesis tests. In this article, I consider two cases in which preregistration does not improve this evaluation. First, I argue that, although preregistration can facilitate the transparent evaluation of severity in Mayo’s error statistical philosophy of science, it does not facilitate this evaluation in Popper’s theory-centric approach. To illustrate, I show that associated concerns about Type I error rate inflation are (...)
  8. The Logit Model Measurement Problem. Stella Fillmore-Patrick - forthcoming - Philosophy of Science.
    Traditional wisdom dictates that statistical model outputs are estimates, not measurements. Despite this, statistical models are employed as measurement instruments in the social sciences. In this article, I scrutinize the use of a specific model—the logit model—for psychological measurement. Given the adoption of a criterion for measurement that I call comparability, I show that the logit model fails to yield measurements due to properties that follow from its fixed residual variance.
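For orientation, the fixed residual variance mentioned above is visible in the standard latent-variable presentation of the logit model (a textbook formulation, not quoted from the article):

```latex
y^{*} = \beta x + \varepsilon, \qquad \varepsilon \sim \mathrm{Logistic}(0,1), \qquad
\operatorname{Var}(\varepsilon) = \frac{\pi^{2}}{3}, \qquad
P(y = 1 \mid x) = \frac{1}{1 + e^{-\beta x}} .
```

Because the error variance is fixed by convention rather than estimated, coefficients are identified only relative to that scale, which is the kind of property that can threaten comparability across groups or model specifications.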
  9. Convergence to the Truth. Hanti Lin - forthcoming - In Kurt Sylvan, Ernest Sosa, Jonathan Dancy & Matthias Steup (eds.), The Blackwell Companion to Epistemology, 3rd edition. Wiley Blackwell.
    This article reviews and develops an epistemological tradition in philosophy of science, called convergentism, which holds that inference methods should be assessed in terms of their abilities to converge to the truth. This tradition is compared with three competing ones: (1) explanationism, which holds that theory choice should be guided by a theory's overall balance of explanatory virtues, such as simplicity and fit with data; (2) instrumentalism, according to which scientific inference should be driven by the goal of obtaining useful (...)
    2 citations
  10. A comparison of imprecise Bayesianism and Dempster–Shafer theory for automated decisions under ambiguity. Mantas Radzvilas, William Peden, Daniele Tortoli & Francesco De Pretis - forthcoming - Journal of Logic and Computation.
    Ambiguity occurs insofar as a reasoner lacks information about the relevant physical probabilities. There are objections to the application of standard Bayesian inductive logic and decision theory in contexts of significant ambiguity. A variety of alternative frameworks for reasoning under ambiguity have been proposed. Two of the most prominent are Imprecise Bayesianism and Dempster–Shafer theory. We compare these inductive logics with respect to the Ambiguity Dilemma, which is a problem that has been raised for Imprecise Bayesianism. We develop an agent-based (...)
  11. Just probabilities. Chad Lee-Stronach - 2024 - Noûs 58 (4):948-972.
    I defend the thesis that legal standards of proof are reducible to thresholds of probability. Many reject this thesis because it appears to permit finding defendants liable solely on the basis of statistical evidence. To the contrary, I argue – by combining Thomson's (1986) causal analysis of legal evidence with formal methods of causal inference – that legal standards of proof can be reduced to probabilities, but that deriving these probabilities involves more than just statistics.
    1 citation
  12. Type I error rates are not usually inflated. Mark Rubin - 2024 - Journal of Trial and Error 1.
    The inflation of Type I error rates is thought to be one of the causes of the replication crisis. Questionable research practices such as p-hacking are thought to inflate Type I error rates above their nominal level, leading to unexpectedly high levels of false positives in the literature and, consequently, unexpectedly low replication rates. In this article, I offer an alternative view. I argue that questionable and other research practices do not usually inflate relevant Type I error rates. I begin (...)
    2 citations
  13. Quantum Indeterminism, Free Will, and Self-Causation. Marco Masi - 2023 - Journal of Consciousness Studies 30 (5-6):32–56.
    A view that emancipates free will by means of quantum indeterminism is frequently rejected based on arguments pointing out its incompatibility with what we know about quantum physics. However, if one carefully examines what classical physical causal determinism and quantum indeterminism are according to physics, it becomes clear what they really imply, and especially what they do not imply, for agent-causation theories. Here, we will make necessary conceptual clarifications on some aspects of physical determinism and indeterminism, review some of the major objections (...)
    1 citation
  14. A Dilemma for Solomonoff Prediction. Sven Neth - 2023 - Philosophy of Science 90 (2):288-306.
    The framework of Solomonoff prediction assigns prior probability to hypotheses inversely proportional to their Kolmogorov complexity. There are two well-known problems. First, the Solomonoff prior is relative to a choice of Universal Turing machine. Second, the Solomonoff prior is not computable. However, there are responses to both problems. Different Solomonoff priors converge with more and more data. Further, there are computable approximations to the Solomonoff prior. I argue that there is a tension between these two responses. This is because computable (...)
    3 citations
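The prior at issue, in its usual simplified statement, where K_U is Kolmogorov complexity relative to a chosen universal Turing machine U:

```latex
m_{U}(h) \;\propto\; 2^{-K_{U}(h)} .
```

Both difficulties the abstract mentions are visible here: the prior depends on the choice of U, and it inherits the non-computability of K_U.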
  15. Merely statistical evidence: when and why it justifies belief. Paul Silva - 2023 - Philosophical Studies 180 (9):2639-2664.
    It is one thing to hold that merely statistical evidence is sometimes insufficient for rational belief, as in typical lottery and profiling cases. It is another thing to hold that merely statistical evidence is always insufficient for rational belief. Indeed, there are cases where statistical evidence plainly does justify belief. This project develops a dispositional account of the normativity of statistical evidence, where the dispositions that ground justifying statistical evidence are connected to the goals (= proper function) of objects. There (...)
    9 citations
  16. Essential materials for Bayesian Mindsponge Framework analytics. Aisdl Team - 2023 - Sm3D Science Portal.
    Acknowledging that many members of the SM3D Portal need reference documents related to Bayesian Mindsponge Framework (BMF) analytics to conduct research projects effectively, we present the essential materials and most up-to-date studies employing the method in this post. By summarizing all the publications and preprints associated with BMF analytics, we also aim to help researchers reduce the time and effort for information seeking, enhance proactive self-learning, and facilitate knowledge exchange and community dialogue through transparency.
  17. When is an Ensemble like a Sample? Corey Dethier - 2022 - Synthese 200 (52):1-22.
    Climate scientists often apply statistical tools to a set of different estimates generated by an “ensemble” of models. In this paper, I argue that the resulting inferences are justified in the same way as any other statistical inference: what must be demonstrated is that the statistical model that licenses the inferences accurately represents the probabilistic relationship between data and target. This view of statistical practice is appropriately termed “model-based,” and I examine the use of statistics in climate fingerprinting to show (...)
    2 citations
  18. There is Cause to Randomize. Cristian Larroulet Philippi - 2022 - Philosophy of Science 89 (1):152-170.
    While practitioners think highly of randomized studies, some philosophers argue that there is no epistemic reason to randomize. Here I show that their arguments do not entail their conclusion. Moreover, I provide novel reasons for randomizing in the context of interventional studies. The overall discussion provides a unified framework for assessing baseline balance, one that holds for interventional and observational studies alike. The upshot: practitioners’ strong preference for randomized studies can be defended in some cases, while still offering a nuanced (...)
    3 citations
  19. Explanatory reasoning in the material theory of induction. William Peden - 2022 - Metascience 31 (3):303-309.
    In his recent book, John Norton has created a theory of inference to the best explanation, within the context of his "material theory of induction". I apply it to the problem of scientific explanations that are false: if we want the theories in our explanations to be true, then why do historians and scientists often say that false theories explained phenomena? I also defend Norton's theory against some possible objections.
    1 citation
  20. Statistical Significance Testing in Economics. William Peden & Jan Sprenger - 2022 - In Conrad Heilmann & Julian Reiss (eds.), Routledge Handbook of Philosophy of Economics. Routledge.
    The origins of testing scientific models with statistical techniques go back to 18th century mathematics. However, the modern theory of statistical testing was primarily developed through the work of Sir R.A. Fisher, Jerzy Neyman, and Egon Pearson in the inter-war period. Some of Fisher's papers on testing were published in economics journals (Fisher, 1923, 1935) and exerted a notable influence on the discipline. The development of econometrics and the rise of quantitative economic models in the mid-20th century made statistical significance (...)
  21. Distention for Sets of Probabilities. Rush T. Stewart & Michael Nielsen - 2022 - Philosophy of Science 89 (3):604-620.
    Bayesians often appeal to “merging of opinions” to rebut charges of excessive subjectivity. But what happens in the short run is often of greater interest than what happens in the limit. Seidenfeld and coauthors use this observation as motivation for investigating the counterintuitive short run phenomenon of dilation, since, they allege, dilation is “the opposite” of asymptotic merging of opinions. The measure of uncertainty relevant for dilation, however, is not the one relevant for merging of opinions. We explicitly investigate the (...)
  22. A preamble about doing research that sells. Quan-Hoang Vuong - 2022 - In Quan-Hoang Vuong, Minh-Hoang Nguyen & Viet-Phuong La (eds.), The mindsponge and BMF analytics for innovative thinking in social sciences and humanities. Berlin, Germany: De Gruyter.
    Being a researcher is challenging, especially in the beginning. Early Career Researchers (ECRs) need achievements to secure and expand their careers. In today’s academic landscape, researchers are under many pressures: data collection costs, the expectation of novelty, analytical skill requirements, lengthy publishing process, and the overall competitiveness of the career. Innovative thinking and the ability to turn good ideas into good papers are the keys to success.
    1 citation
  23. Causal Inference from Noise. Nevin Climenhaga, Lane DesAutels & Grant Ramsey - 2021 - Noûs 55 (1):152-170.
    "Correlation is not causation" is one of the mantras of the sciences—a cautionary warning especially to fields like epidemiology and pharmacology where the seduction of compelling correlations naturally leads to causal hypotheses. The standard view from the epistemology of causation is that to tell whether one correlated variable is causing the other, one needs to intervene on the system—the best sort of intervention being a trial that is both randomized and controlled. In this paper, we argue that some purely correlational (...)
    4 citations
  24. Why Simpler Computer Simulation Models Can Be Epistemically Better for Informing Decisions. Casey Helgeson, Vivek Srikrishnan, Klaus Keller & Nancy Tuana - 2021 - Philosophy of Science 88 (2):213-233.
    For computer simulation models to usefully inform climate risk management, uncertainties in model projections must be explored and characterized. Because doing so requires running the model many times (...)
    3 citations
  25. Revisiting the two predominant statistical problems: the stopping-rule problem and the catch-all hypothesis problem. Yusaku Ohkubo - 2021 - Annals of the Japan Association for Philosophy of Science 30:23-41.
    The history of statistics is filled with many controversies, in which the prime focus has been the difference in the “interpretation of probability” between Frequentist and Bayesian theories. Many philosophical arguments have been elaborated to examine the problems of both theories based on this dichotomized view of statistics, including the well-known stopping-rule problem and the catch-all hypothesis problem. However, there are also several “hybrid” approaches in theory, practice, and philosophical analysis. This poses many fundamental questions. This paper (...)
  26. Hacking, Ian (1936–). Samuli Reijula - 2021 - Routledge Encyclopedia of Philosophy.
    Ian Hacking (born in 1936, Vancouver, British Columbia) is best known for his work in the philosophy of the natural and social sciences, but his contributions to philosophy are broad, spanning many areas and traditions. In his detailed case studies of the development of probabilistic and statistical reasoning, Hacking pioneered the naturalistic approach in the philosophy of science. Hacking’s research on social constructionism, transient mental illnesses, and the looping effect of human kinds makes use of historical materials to shed (...)
  27. When to adjust alpha during multiple testing: a consideration of disjunction, conjunction, and individual testing. Mark Rubin - 2021 - Synthese 199 (3-4):10969-11000.
    Scientists often adjust their significance threshold during null hypothesis significance testing in order to take into account multiple testing and multiple comparisons. This alpha adjustment has become particularly relevant in the context of the replication crisis in science. The present article considers the conditions in which this alpha adjustment is appropriate and the conditions in which it is inappropriate. A distinction is drawn between three types of multiple testing: disjunction testing, conjunction testing, and individual testing. It is argued that alpha (...)
    7 citations
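A minimal sketch of the distinction between disjunction testing and individual testing, using a Bonferroni-style adjustment as the running example (illustrative numbers, not code from the article):

```python
# Disjunction testing: the claim of interest is "at least one of the m effects
# is real", so the familywise alpha is split across the m tests (Bonferroni).
# Individual testing: each hypothesis is evaluated on its own and keeps the
# unadjusted alpha.

p_values = [0.012, 0.030, 0.047]   # hypothetical p-values for m = 3 tests
alpha = 0.05
m = len(p_values)

disjunction_rejections = [p < alpha / m for p in p_values]   # adjusted threshold
individual_rejections = [p < alpha for p in p_values]        # unadjusted threshold

print(disjunction_rejections)  # [True, False, False]
print(individual_rejections)   # [True, True, True]
```

The same p-values license different conclusions depending on which kind of claim is being tested, which is why the appropriateness of an alpha adjustment turns on the type of testing rather than on the mere presence of multiple tests.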
  28. Sample representation in the social sciences. Kino Zhao - 2021 - Synthese (10):9097-9115.
    The social sciences face a problem of sample non-representation, where the majority of samples consist of undergraduate students from Euro-American institutions. The problem has been identified for decades with little trend of improvement. In this paper, I trace the history of sampling theory. The dominant framework, called the design-based approach, takes random sampling as the gold standard. The idea is that a sampling procedure that is maximally uninformative prevents samplers from introducing arbitrary bias, thus preserving sample representation. I show how (...)
    4 citations
  29. القراءة الحرة بين الواقع والمأمول [Free Reading between Reality and Aspiration]. مصطفى الرقي - 2021 - In Seddik Sadiki Amari et al. (eds.), واحات زيز وغريس: المجال والإنسان والمجتمع [The Oases of Ziz and Ghris: Territory, People, and Society]. pp. 133-148.
    Free reading is an important topic that has not received the attention it deserves. It is the road to knowledge and the civilizational key that, since the dawn of Islamic history, has opened for our community the doors of learning, knowledge, creativity, and civilizational presence; for this reason the terms reading, book, and knowledge occupy a distinguished place in its intellectual system. A society that does not read is a society afflicted with cultural illiteracy, one that creates its own obstacles to progress. It is difficult, if not impossible, to engage with the sciences and bodies of knowledge in their various fields, domains, levels, forms, and manifestations outside of reading. Reading, then, is a process (...)
  30. Are Scientific Models of life Testable? A lesson from Simpson's Paradox. Prasanta S. Bandyopadhyay, Don Dcruz, Nolan Grunska & Mark Greenwood - 2020 - Sci 1 (3).
    We address the need for a model by considering two competing theories regarding the origin of life: (i) the Metabolism First theory, and (ii) the RNA World theory. We discuss two interrelated points, namely: (i) Models are valuable tools for understanding both the processes and intricacies of origin-of-life issues, and (ii) Insights from models also help us to evaluate the core objection to origin-of-life theories, called “the inefficiency objection”, which is commonly raised by proponents of both the Metabolism First theory (...)
  31. Statistical significance under low power: A Gettier case? Daniel Dunleavy - 2020 - Journal of Brief Ideas.
    A brief idea on statistics and epistemology.
  32. Classical versus Bayesian Statistics. Eric Johannesson - 2020 - Philosophy of Science 87 (2):302-318.
    In statistics, there are two main paradigms: classical and Bayesian statistics. The purpose of this article is to investigate the extent to which classicists and Bayesians can agree. My conclusion is that, in certain situations, they cannot. The upshot is that, if we assume that the classicist is not allowed to have a higher degree of belief in a null hypothesis after he has rejected it than before, then he has to either have trivial or incoherent credences to begin with (...)
    2 citations
  33. “Repeated sampling from the same population?” A critique of Neyman and Pearson’s responses to Fisher. Mark Rubin - 2020 - European Journal for Philosophy of Science 10 (3):1-15.
    Fisher criticised the Neyman-Pearson approach to hypothesis testing by arguing that it relies on the assumption of “repeated sampling from the same population.” The present article considers the responses to this criticism provided by Pearson and Neyman. Pearson interpreted alpha levels in relation to imaginary replications of the original test. This interpretation is appropriate when test users are sure that their replications will be equivalent to one another. However, by definition, scientific researchers do not possess sufficient knowledge about the relevant (...)
    5 citations
  34. Conditional Degree of Belief and Bayesian Inference. Jan Sprenger - 2020 - Philosophy of Science 87 (2):319-335.
    Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey’s classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian (...)
    9 citations
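Stated schematically, the alignment the abstract asks about is (an editorial paraphrase, not the paper's notation):

```latex
\mathrm{Cr}(E \mid H) \;=\; P_{H}(E),
```

where Cr is the agent's credence function and P_H is the probability distribution specified by the statistical hypothesis H.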
  35. Trung tâm ISR có bài ra mừng 130 năm Ngày sinh Chủ tịch Hồ Chí Minh [The ISR Centre publishes an article marking the 130th anniversary of President Ho Chi Minh's birth]. Hồ Mạnh Toàn - 2020 - ISR Phenikaa 2020 (5):1-3.
    The article, published on 19 May 2020 with doctoral researcher Nguyễn Minh Hoàng, a researcher at the ISR Centre, as corresponding author, presents a Bayesian statistical approach to studying social science data. It is a result of the research direction of the SDAG group, set out clearly as early as 18 May 2019.
  36. A more principled use of the p-value? Not so fast: a critique of Colquhoun’s argument. Ognjen Arandjelovic - 2019 - Royal Society Open Science 6 (5):181519.
    The usefulness of the statistic known as the p-value as a means of quantifying the strength of evidence for the presence of an effect from empirical data has long been questioned in the statistical community. In recent years there has been a notable increase in the awareness of both fundamental and practical limitations of the statistic within the target research fields, and especially biomedicine. In this article I analyse the recently published article which, in summary, argues that with a better (...)
  37. Evidence amalgamation, plausibility, and cancer research. Marta Bertolaso & Fabio Sterpetti - 2019 - Synthese 196 (8):3279-3317.
    Cancer research is experiencing ‘paradigm instability’, since there are two rival theories of carcinogenesis which confront themselves, namely the somatic mutation theory and the tissue organization field theory. Despite this theoretical uncertainty, a huge quantity of data is available thanks to the improvement of genome sequencing techniques. Some authors think that the development of new statistical tools will be able to overcome the lack of a shared theoretical perspective on cancer by amalgamating as many data as possible. We think instead (...)
    6 citations
  38. An Automatic Ockham’s Razor for Bayesians? Gordon Belot - 2018 - Erkenntnis 84 (6):1361-1367.
    It is sometimes claimed that the Bayesian framework automatically implements Ockham’s razor—that conditionalizing on data consistent with both a simple theory and a complex theory more or less inevitably favours the simpler theory. It is shown here that the automatic razor doesn’t in fact cut it for certain mundane curve-fitting problems.
  39. Statistical Inference and the Replication Crisis. Lincoln J. Colling & Dénes Szűcs - 2018 - Review of Philosophy and Psychology 12 (1):121-147.
    The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative strength of evidence for hypotheses. (...)
    1 citation
  40. Legal Burdens of Proof and Statistical Evidence. Georgi Gardiner - 2018 - In David Coady & James Chase (eds.), Routledge Handbook of Applied Epistemology. New York: Routledge, Taylor & Francis Group.
    In order to perform certain actions – such as incarcerating a person or revoking parental rights – the state must establish certain facts to a particular standard of proof. These standards – such as preponderance of evidence and beyond reasonable doubt – are often interpreted as likelihoods or epistemic confidences. Many theorists construe them numerically; beyond reasonable doubt, for example, is often construed as 90 to 95% confidence in the guilt of the defendant. A family of influential cases suggests (...)
    34 citations
  41. Multiple Regression Is Not Multiple Regressions: The Meaning of Multiple Regression and the Non-Problem of Collinearity. Michael B. Morrissey & Graeme D. Ruxton - 2018 - Philosophy, Theory, and Practice in Biology 10 (3).
    Simple regression (regression analysis with a single explanatory variable), and multiple regression (regression models with multiple explanatory variables), typically correspond to very different biological questions. The former use regression lines to describe univariate associations. The latter describe the partial, or direct, effects of multiple variables, conditioned on one another. We suspect that the superficial similarity of simple and multiple regression leads to confusion in their interpretation. A clear understanding of these methods is essential, as they underlie a large range of (...)
    1 citation
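A small simulated illustration of the contrast drawn above, with coefficient values chosen for the example (not the authors' data or code):

```python
# Simple vs. multiple regression on correlated predictors: the simple
# regression of y on x1 reports the marginal association (absorbing x2's
# effect), while the multiple regression reports x1's partial effect.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)   # x2 is correlated with x1
y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)    # true partial effects: 1.0, 2.0

b_simple = np.polyfit(x1, y, 1)[0]              # slope of y on x1 alone

X = np.column_stack([np.ones(n), x1, x2])       # intercept, x1, x2
coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares fit

print(round(b_simple, 2))   # ~2.6 = 1.0 + 2.0 * 0.8 (marginal association)
print(round(coef[1], 2))    # ~1.0 (partial effect of x1, holding x2 fixed)
```

The two coefficients answer different questions about the same data, which is the sense in which multiple regression is not simply a bundle of simple regressions.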
  42. Imprecise Probability and the Measurement of Keynes's "Weight of Arguments". William Peden - 2018 - IfCoLog Journal of Logics and Their Applications 5 (4):677-708.
    Many philosophers argue that Keynes’s concept of the “weight of arguments” is an important aspect of argument appraisal. The weight of an argument is the quantity of relevant evidence cited in the premises. However, this dimension of argumentation does not have a received method for formalisation. Kyburg has suggested a measure of weight that uses the degree of imprecision in his system of “Evidential Probability” to quantify weight. I develop and defend this approach to measuring weight. I illustrate the usefulness (...)
    2 citations
  43. Nhà khoa học Việt đứng tên một mình trên tạp chí hàng đầu về khoa học dữ liệu của Nature Research [A Vietnamese scientist publishes as sole author in Nature Research's leading data science journal]. Thùy Dương - 2017 - Dân Trí Online 2017 (10).
    Dân Trí (25 October 2017): For the first time, a Vietnamese scientist, whose research was carried out entirely in Vietnam, has published a sole-authored article in Scientific Data, a leading data science journal in the publication portfolio of the prestigious Nature Research.
  44. La valeur de l'incertitude : l'évaluation de la précision des mesures physiques et les limites de la connaissance expérimentale [The value of uncertainty: evaluating the precision of physical measurements and the limits of experimental knowledge]. Fabien Grégis - 2016 - Dissertation, Université Sorbonne Paris Cité (Université Paris Diderot, Paris 7)
    A measurement result is never absolutely accurate: it is affected by an unknown “measurement error” which characterizes the discrepancy between the obtained value and the “true value” of the quantity intended to be measured. As a consequence, to be acceptable a measurement result cannot take the form of a unique numerical value, but has to be accompanied by an indication of its “measurement uncertainty”, which enunciates a state of doubt. What, though, is the value of measurement uncertainty? What (...)
  45. Simpson's Paradox and Causality. Prasanta S. Bandyopadhyay, Mark Greenwood, Don Dcruz & Venkata Raghavan - 2015 - American Philosophical Quarterly 52 (1):13-25.
    There are three questions associated with Simpson’s Paradox (SP): (i) Why is SP paradoxical? (ii) What conditions generate SP? and (iii) What should be done about SP? By developing a logic-based account of SP, it is argued that (i) and (ii) must be divorced from (iii). This account shows that (i) and (ii) have nothing to do with causality, which plays a role only in addressing (iii). A counterexample is also presented against the causal account. Finally, the causal and logic-based (...)
    3 citations
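For readers new to the paradox, a standard toy illustration of the reversal (classic textbook-style numbers, not data from the article):

```python
# Simpson's Paradox: the treatment looks better within each subgroup,
# yet worse once the subgroups are pooled.
def rate(recovered, total):
    return recovered / total

# (recovered, total) by treatment and severity
treat_mild, ctrl_mild = (81, 87), (234, 270)
treat_severe, ctrl_severe = (192, 263), (55, 80)

print(rate(*treat_mild) > rate(*ctrl_mild))       # True  (0.93 vs 0.87)
print(rate(*treat_severe) > rate(*ctrl_severe))   # True  (0.73 vs 0.69)

pooled_treat = (81 + 192, 87 + 263)
pooled_ctrl = (234 + 55, 270 + 80)
print(rate(*pooled_treat) < rate(*pooled_ctrl))   # True  (0.78 vs 0.83)
```

Which comparison should guide action is exactly question (iii) above, and on the article's account it is only there that causal considerations enter.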
  46. Philosophy as conceptual engineering: Inductive logic in Rudolf Carnap's scientific philosophy. Christopher F. French - 2015 - Dissertation, University of British Columbia
    My dissertation explores the ways in which Rudolf Carnap sought to make philosophy scientific by further developing recent interpretive efforts to explain Carnap’s mature philosophical work as a form of engineering. It does this by looking in detail at his philosophical practice in his most sustained mature project, his work on pure and applied inductive logic. I, first, specify the sort of engineering Carnap is engaged in as involving an engineering design problem and then draw out the complications of design (...)
    1 citation
  47. Hypothesis Testing, “Dutch Book” Arguments, and Risk. Daniel Malinsky - 2015 - Philosophy of Science 82 (5):917-929.
    “Dutch Book” arguments and references to gambling theorems are typical in the debate between Bayesians and scientists committed to “classical” statistical methods. These arguments have rarely convinced non-Bayesian scientists to abandon certain conventional practices, partially because many scientists feel that gambling theorems have little relevance to their research activities. In other words, scientists “don’t bet.” This article examines one attempt, by Schervish, Seidenfeld, and Kadane, to progress beyond such apparent stalemates by connecting “Dutch Book”–type mathematical results with principles actually endorsed (...)
    2 citations
  48. A Merton Model of Credit Risk with Jumps. Hoang Thi Phuong Thao & Quan-Hoang Vuong - 2015 - Journal of Statistics Applications and Probability Letters 2 (2):97-103.
    In this note, we consider a Merton model for default risk, where the firm’s value is driven by a Brownian motion and a compound Poisson process.
    6 citations
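One standard way to write jump-diffusion firm-value dynamics of the kind described, as a sketch (the note's exact specification may differ):

```latex
V_t \;=\; V_0 \exp\!\Big(\big(\mu - \tfrac{\sigma^{2}}{2}\big)t + \sigma W_t\Big)\prod_{i=1}^{N_t} Y_i ,
```

with W a Brownian motion, N a Poisson process of intensity lambda independent of W, and Y_i i.i.d. positive jump multipliers; in a Merton-style model, default is assessed by comparing the firm's value at maturity with the face value of its debt.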
  49. Compte rendu de « Desrosières, Alain (2014), Prouver et gouverner. Une analyse politique des statistiques publiques ». [REVIEW] Marc-Kevin Daoust - 2014 - Science Ouverte 1:1-7.
    Prouver et gouverner examines the role of institutions, conventions, and normative concerns in the construction of quantitative indicators. Desrosières holds that one cannot study the scientific development of statistics without taking into account its institutional development, in particular the role of the state, in the constitution of the discipline.
  50. A Frequentist Solution to Lindley & Phillips’ Stopping Rule Problem in Ecological Realm. Adam P. Kubiak - 2014 - Zagadnienia Naukoznawstwa 50 (200):135-145.
    In this paper I provide a frequentist philosophical-methodological solution to the stopping-rule problem presented by Lindley & Phillips in 1976, set in the ecological context of testing koalas’ sex ratio. I provide criteria for discerning a stopping rule, a body of evidence, and a model that are epistemically more appropriate for testing the hypothesis of the case studied, by appealing to a physical notion of probability and by analyzing the content of possible formulations of evidence, assumptions of models and meaning (...)
    1 citation
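The numbers behind the Lindley & Phillips example are easy to reproduce. A hedged sketch follows, using the usual presentation of 9 successes and 3 failures in 12 Bernoulli trials with a one-sided test of theta = 1/2 (the paper recasts the case in terms of koala sex ratios):

```python
# Same data, two stopping rules, two frequentist p-values.
from math import comb

# Stopping rule 1: fix the number of trials at n = 12 (binomial sampling).
p_binomial = sum(comb(12, k) for k in range(9, 13)) / 2**12

# Stopping rule 2: sample until the 3rd failure (negative binomial sampling).
# P(at least 9 successes before the 3rd failure) = P(at most 2 failures in 11 trials).
p_neg_binomial = sum(comb(11, j) for j in range(3)) / 2**11

print(round(p_binomial, 4))      # 0.073  -> not significant at the 0.05 level
print(round(p_neg_binomial, 4))  # 0.0327 -> significant at the 0.05 level
```

The same observations yield different frequentist verdicts under the two stopping rules; the paper's criteria for choosing a stopping rule, evidence, and model are meant to adjudicate such cases.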