References
  • What Is Epistemically Wrong with Research Affected by Sponsorship Bias? The Evidential Account. Alexander Reutlinger - 2020 - European Journal for Philosophy of Science 10 (2):1-26.
    Biased research occurs frequently in the sciences. In this paper, I will focus on one particular kind of biased research: research that is subject to sponsorship bias. I will address the following epistemological question: what precisely is epistemically wrong with biased research of this kind? I will defend the evidential account of epistemic wrongness: that is, research affected by sponsorship bias is epistemically wrong if and only if the researchers in question make false claims about the evidential support of some (...)
  • Scientific Ignorance: Probing the Limits of Scientific Research and Knowledge Production. Manuela Fernández Pinto - 2019 - Theoria. An International Journal for Theory, History and Foundations of Science 34 (2):195.
    The aim of the paper is to clarify the concept of scientific ignorance: what it is, what its sources are, and when it is epistemically detrimental to science. I present a taxonomy of scientific ignorance, distinguishing between intrinsic and extrinsic sources. I argue that the latter can create a detrimental epistemic gap, which has significant epistemic and social consequences. I provide three examples from medical research to illustrate this point. To conclude, I claim that while some types of scientific ignorance (...)
  • Foundations of a Probabilistic Theory of Causal Strength. Jan Sprenger - 2018 - Philosophical Review 127 (3):371-398.
    This paper develops axiomatic foundations for a probabilistic-interventionist theory of causal strength. Transferring methods from Bayesian confirmation theory, I proceed in three steps: I develop a framework for defining and comparing measures of causal strength; I argue that no single measure can satisfy all natural constraints; I prove two representation theorems for popular measures of causal strength: Pearl's causal effect measure and Eells' difference measure. In other words, I demonstrate that these two measures can be derived from a set of plausible (...)
  • Prediction in Epidemiology and Medicine. Jonathan Fuller, Alex Broadbent & Luis J. Flores - 2015 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences.
  • Effectiveness of Medical Interventions. Jacob Stegenga - 2015 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 54:34-44.
    To be effective, a medical intervention must improve one's health by targeting a disease. The concept of disease, though, is controversial. Among the leading accounts of disease (naturalism, normativism, hybridism, and eliminativism), I defend a version of hybridism. A hybrid account of disease holds that for a state to be a disease, that state must both (i) have a constitutive causal basis and (ii) cause harm. The dual requirement of hybridism entails that a medical intervention, to be deemed effective, must target either (...)
  • Three Arguments for Absolute Outcome Measures. Jan Sprenger & Jacob Stegenga - 2017 - Philosophy of Science 84 (5):840-852.
    Data from medical research are typically summarized with various types of outcome measures. We present three arguments in favor of absolute over relative outcome measures. The first argument is from cognitive bias: relative measures promote the reference class fallacy and the overestimation of treatment effectiveness. The second argument is decision-theoretic: absolute measures are superior to relative measures for making a decision between interventions. The third argument is causal: interpreted as measures of causal strength, absolute measures satisfy a set of desirable (...)
  • Hollow Hunt for Harms. Jacob Stegenga - 2016 - Perspectives on Science 24 (5):481-504.
    Harms of medical interventions are systematically underestimated in clinical research. Numerous factors (conceptual, methodological, and social) contribute to this underestimation. I articulate the depth of such underestimation by describing these factors at the various stages of clinical research. Before any evidence is gathered, the way harms are operationalized in clinical research contributes to their underestimation. Medical interventions are first tested in phase 1 'first in human' trials, but evidence from these trials is rarely published, despite the fact that such trials provide the (...)
  • Can the Behavioral Sciences Self-Correct? A Social Epistemic Study. Felipe Romero - 2016 - Studies in History and Philosophy of Science Part A 60:55-69.
    Advocates of the self-corrective thesis argue that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist statistics may converge upon (...)
  • Evaluating Facts and Facting Evaluations: On the Fact-Value Relationship in HTA. Bjorn Hofmann, Ken Bond & Lars Sandman - 2018 - Journal of Evaluation in Clinical Practice 24 (5):957-965.
    Health technology assessment (HTA) is an evaluation of health technologies in terms of facts and evidence. However, the relationship between facts and values in HTA is still not clear. This is problematic in an era of fake facts and truth production. Accordingly, the objective of this study is to clarify the relationship between facts and values in HTA. We start with the perspectives of the traditional positivist account of evaluating facts and the social-constructivist account of facting values. Our analysis reveals diverse (...)
  • Epistemology of Causal Inference in Pharmacology: Towards a Framework for the Assessment of Harms. Jürgen Landes, Barbara Osimani & Roland Poellinger - 2018 - European Journal for Philosophy of Science 8 (1):3-49.
    Philosophical discussions on causal inference in medicine are stuck in dyadic camps, each defending one kind of evidence or method rather than another as best support for causal hypotheses. Whereas Evidence Based Medicine advocates the use of Randomised Controlled Trials and systematic reviews of RCTs as gold standard, philosophers of science emphasise the importance of mechanisms and their distinctive informational contribution to causal inference and assessment. Some have suggested the adoption of a pluralistic approach to causal inference, and an inductive (...)
  • In Defense of Meta-Analysis. Bennett Holman - 2019 - Synthese 196 (8):3189-3211.
    Arguments that medical decision making should rely on a variety of evidence often begin from the claim that meta-analysis has been shown to be problematic. In this paper, I first examine Stegenga’s argument that meta-analysis requires multiple decisions and thus fails to provide an objective ground for medical decision making. Next, I examine three arguments from social epistemologists that contend that meta-analyses are systematically biased in ways not appreciated by standard epistemology. In most cases I show that critiques of meta-analysis (...)
  • The Myth and Fallacy of Simple Extrapolation in Medicine. Jonathan Fuller - forthcoming - Synthese:1-21.
    Simple extrapolation is the orthodox approach to extrapolating from clinical trials in evidence-based medicine: extrapolate the relative effect size from the trial unless there is a compelling reason not to do so. I argue that this method relies on a myth and a fallacy. The myth of simple extrapolation is the idea that the relative risk is a ‘golden ratio’ that is usually transportable due to some special mathematical or theoretical property. The fallacy of simple extrapolation is an unjustified argument (...)
  • Validity Beyond Measurement: Why Psychometric Validity Is Insufficient for Valid Psychotherapy Research. Femke L. Truijens, Shana Cornelis, Mattias Desmet, Melissa M. De Smet & Reitske Meganck - 2019 - Frontiers in Psychology 10.