  • The Risk GP Model: The standard model of prediction in medicine. Jonathan Fuller & Luis J. Flores - 2015 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 54:49-61.
    With the ascent of modern epidemiology in the twentieth century came a new standard model of prediction in public health and clinical medicine. In this article, we describe the structure of the model. The standard model uses epidemiological measures (most commonly, risk measures) to predict outcomes (prognosis) and effect sizes (treatment) in a patient population that can then be transformed into probabilities for individual patients. In the first step, a risk measure in a study population is generalized or extrapolated to a target (...)
  • Measuring effectiveness. Jacob Stegenga - 2015 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 54:62-71.
    Measuring the effectiveness of medical interventions faces three epistemological challenges: the choice of good measuring instruments, the use of appropriate analytic measures, and the use of a reliable method of extrapolating measures from an experimental context to a more general context. In practice each of these challenges contributes to overestimating the effectiveness of medical interventions. These challenges suggest the need for corrective normative principles. The instruments employed in clinical research should measure patient-relevant and disease-specific parameters, and should not be sensitive (...)
  • Rationality and the generalization of randomized controlled trial evidence. Jonathan Fuller - 2013 - Journal of Evaluation in Clinical Practice 19 (4):644-647.
    Over the past several decades, we devoted much energy to generating, reviewing and summarizing evidence. We have given far less attention to the issue of how to thoughtfully apply the evidence once we have it. That’s fine if all we care about is that our clinical decisions are evidence-based, but not so good if we also want them to be well-reasoned. Let us not forget that evidence based medicine (EBM) grew out of an interest in making medicine ‘rational’, with the (...)
  • Explanation, understanding, objectivity and experience. Michael Loughlin, Robyn Bluhm, Drozdstoj S. Stoyanov, Stephen Buetow, Ross E. G. Upshur, Kirstin Borgerson, Maya J. Goldenberg & Elselijn Kingma - 2013 - Journal of Evaluation in Clinical Practice 19 (3):415-421.
  • Robustness, evidence, and uncertainty: an exploration of policy applications of robustness analysis. Nicolas Wüthrich - unknown
    Policy-makers face an uncertain world. One way of getting a handle on decision-making in such an environment is to rely on evidence. Despite the recent rise of post-fact figures in politics, evidence-based policy-making takes centre stage in policy-setting institutions. Often, however, policy-makers face large volumes of evidence from different sources. Robustness analysis can, prima facie, handle this evidential diversity. Roughly, a hypothesis is supported by robust evidence if the different evidential sources are in agreement. In this thesis, I strengthen the (...)
  • Research report appraisal: how much understanding is enough? Martin Lipscomb - 2014 - Nursing Philosophy 15 (3):157-170.
    When appraising research papers, how much understanding is enough? More specifically, in deciding whether research results can inform practice, do appraisers need to substantively understand how findings are derived or is it sufficient simply to grasp that suitable analytic techniques were chosen and used by researchers? The degree or depth of understanding that research appraisers need to attain before findings can legitimately/sensibly inform practice is underexplored. In this paper it is argued that, where knowledge/justified beliefs derived from research evidence prompt (...)
  • The myth and fallacy of simple extrapolation in medicine. Jonathan Fuller - 2019 - Synthese 198 (4):2919-2939.
    Simple extrapolation is the orthodox approach to extrapolating from clinical trials in evidence-based medicine: extrapolate the relative effect size from the trial unless there is a compelling reason not to do so. I argue that this method relies on a myth and a fallacy. The myth of simple extrapolation is the idea that the relative risk is a ‘golden ratio’ that is usually transportable due to some special mathematical or theoretical property. The fallacy of simple extrapolation is an unjustified argument (...)
  • Medical necessity under weak evidence and little or perverse regulatory gatekeeping. John P. A. Ioannidis - 2023 - Clinical Ethics 18 (3):330-334.
    Medical necessity (claiming that a medical intervention or care is, at minimum, reasonable, appropriate and acceptable) depends on empirical evidence and on the interpretation of that evidence. Evidence and its interpretation define the standard of care. This commentary argues that both the evidence base and its interpretation are currently weak gatekeepers. Empirical meta-research suggests that very few medical interventions have high quality evidence in support of their effectiveness and very few of them also have relatively thorough assessments of (...)