Citations of:

Against external validity

Synthese 196 (8):3103-3121 (2019)

  • Disagreement about Evidence-based Policy.Nick Cowen & Nancy Cartwright - forthcoming - In Maria Baghramian, J. Adam Carter & Rach Cosker-Rowland (eds.), Routledge Handbook of Disagreement. Routledge.
    Evidence-based policy (EBP) is a popular research paradigm in the applied social sciences and within government agencies. Informally, EBP represents an explicit commitment to applying scientific methods to public affairs, in contrast to ideologically-driven or merely intuitive “common-sense” approaches to public policy. More specifically, the EBP paradigm places great weight on the results of experimental research designs, especially randomised controlled trials (RCTs), and systematic literature reviews that place evidential weight on experimental results. One hope is that such research designs and (...)
  • Extrapolation of Experimental Results through Analogical Reasoning from Latent Classes.Gerdien G. van Eersel, Julian Reiss & Gabriela V. Koppenol-Gonzalez - 2019 - Philosophy of Science 86 (2):219-235.
    In the human sciences, experimental research is used to establish causal relationships. However, the extrapolation of these results to the target population can be problematic. To facilitate extrapolation, we propose to use the statistical technique Latent Class Regression Analysis in combination with the analogical reasoning theory for extrapolation. This statistical technique can identify latent classes that differ in the effect of X on Y. In order to extrapolate by means of analogical reasoning, one can characterize the latent classes by a (...)
  • Treatment effectiveness, generalizability, and the explanatory/pragmatic-trial distinction.Steven Tresker - 2022 - Synthese 200 (4):1-29.
    The explanatory/pragmatic-trial distinction enjoys a burgeoning philosophical and medical literature and a significant contingent of support among philosophers and healthcare stakeholders as an important way to assess the design and results of randomized controlled trials. A major motivation has been the need to provide relevant, generalizable data to drive healthcare decisions. While talk of pragmatic and explanatory trials could be seen as convenient shorthand, the distinction can also be seen as harboring deeper issues related to inferential strategies used to evaluate (...)
  • Evidence amalgamation in the sciences: an introduction.Roland Poellinger, Jürgen Landes & Samuel C. Fletcher - 2019 - Synthese 196 (8):3163-3188.
    Amalgamating evidence from heterogeneous sources and across levels of inquiry is becoming increasingly important in many pure and applied sciences. This special issue provides a forum for researchers from diverse scientific and philosophical perspectives to discuss evidence amalgamation, its methodologies, its history, its pitfalls, and its potential. We situate the contributions therein within six themes from the broad literature on this subject: the variety-of-evidence thesis, the philosophy of meta-analysis, the role of robustness/sensitivity analysis for evidence amalgamation, its bearing on questions (...)
  • Two Strands of Field Experiments in Economics: A Historical-Methodological Analysis.Michiru Nagatsu & Judith Favereau - 2020 - Philosophy of the Social Sciences 50 (1):45-77.
    While the history and methodology of laboratory experiments in economics have been extensively studied by philosophers, those of field experiments have not attracted much attention until recently. What is the historical context in which field experiments have been advocated? And what are the methodological rationales for conducting experiments in the field as opposed to in the lab? This article addresses these questions by combining historical and methodological perspectives. In terms of history, we show that the movement toward field experiments in (...)
  • Co-production and economics: insights from the constructive use of experimental games in adaptive resource management.Michiru Nagatsu - 2021 - Journal of Economic Methodology 28 (1):134-142.
    I envision new directions in the methodology of experimental games in the field of developmental, environmental and resource economics. Although there have been extensive discussions on experimenta...
  • Is meta-analysis of RCTs assessing the efficacy of interventions a reliable source of evidence for therapeutic decisions?Mariusz Maziarz - 2022 - Studies in History and Philosophy of Science Part A 91 (C):159-167.
  • Causal Pluralism in Medicine and its Implications for Clinical Practice.Mariusz Maziarz - forthcoming - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie:1-22.
    Existing philosophical views on which concept of causality is adequate to medicine are deeply divided. We approach this question and offer two arguments in favor of pluralism regarding concepts of causality. First, we analyze the three main types of research designs (randomized-controlled trials, observational epidemiology and laboratory research). We argue, using examples, that they allow for making causal conclusions that are best understood differently in each case (in agreement with a version of manipulationist, probabilistic and mechanistic definitions, respectively). (...)
  • Some Explanatory Issues with Woodward’s Notion of Intervention.Dalibor Makovník - 2023 - International Studies in the Philosophy of Science 36 (4):299-315.
    James Woodward’s manipulationist counterfactual theory of explanation offers strong tools for an adequate approach to explanation endeavours. One of these tools is the notion of intervention, which serves as a guiding principle for identifying explanations as causal, thus preserving the unidirectionality of explanatory praxis. Nevertheless, in this paper, I argue that in some cases of explanation, this notion has a rather redundant role since it is either impossible to define or it can be replaced by other types of manipulations or (...)
  • Assessing the Overall Validity of Randomised Controlled Trials.Alexander Krauss - 2021 - International Studies in the Philosophy of Science 34 (3):159-182.
    In the biomedical, behavioural and social sciences, the leading method for estimating causal effects is the randomised controlled trial (RCT), generally viewed as both the source and justification of the most valid evidence. In studying the foundation and theory behind RCTs, the existing literature analyses important single issues and biases in isolation that influence causal outcomes in trials (such as randomisation, statistical probabilities and placebos). The common account of biased causal inference is described in a general way (...)
  • What’s (successful) extrapolation?Donal Khosrowi - 2021 - Journal of Economic Methodology 29 (2):140-152.
    Extrapolating causal effects is becoming an increasingly important kind of inference in Evidence-Based Policy, development economics, and microeconometrics more generally. While several strategies have been proposed to aid with extrapolation, the existing methodological literature has left our understanding of what extrapolation consists of and what constitutes successful extrapolation underdeveloped. This paper addresses this lack in understanding by offering a novel account of successful extrapolation. Building on existing contributions pertaining to the challenges involved in extrapolation, this more nuanced and comprehensive account (...)
  • When Experiments Need Models.Donal Khosrowi - 2021 - Philosophy of the Social Sciences 51 (4):400-424.
    This paper argues that an important type of experiment-target inference, extrapolating causal effects, requires models to be successful. Focusing on extrapolation in Evidence-Based Policy, it is ar...
  • Extrapolating from experiments, confidently.Donal Khosrowi - 2023 - European Journal for Philosophy of Science 13 (2):1-28.
    Extrapolating causal effects from experiments to novel populations is a common practice in evidence-based policy, development economics and other social science areas. Drawing on experimental evidence of policy effectiveness, analysts aim to predict the effects of policies in new populations, which might differ importantly from experimental populations. Existing approaches made progress in articulating the sorts of similarities one needs to assume to enable such inferences. It is also recognized, however, that many of these assumptions will remain surrounded by significant uncertainty in (...)
  • Experimental practices and objectivity in the social sciences: re-embedding construct validity in the internal–external validity distinction.María Jiménez-Buedo & Federica Russo - 2021 - Synthese 199 (3-4):9549-9579.
    The experimental revolution in the social sciences is one of the most significant methodological shifts undergone by the field since the ‘quantitative revolution’ in the nineteenth century. One of the often valued features of social science experimentation is precisely the fact that there are clear methodological rules regarding hypothesis testing that come from the methods of the natural sciences and from the methodology of RCTs in the biomedical sciences, and that allow for the adjudication among contentious causal claims. We examine (...)
  • Data quality, experimental artifacts, and the reactivity of the psychological subject matter.Uljana Feest - 2022 - European Journal for Philosophy of Science 12 (1):1-25.
    While the term “reactivity” has come to be associated with specific phenomena in the social sciences, having to do with subjects’ awareness of being studied, this paper takes a broader stance on this concept. I argue that reactivity is a ubiquitous feature of the psychological subject matter and that this fact is a precondition of experimental research, while also posing potential problems for the experimenter. The latter are connected to the worry about distorted data and experimental artifacts. But what are (...)
  • Wishful Intelligibility, Black Boxes, and Epidemiological Explanation.Marina DiMarco - 2021 - Philosophy of Science 88 (5):824-834.
    Epidemiological explanation often has a “black box” character, meaning the intermediate steps between cause and effect are unknown. Filling in black boxes is thought to improve causal inferences by making them intelligible. I argue that adding information about intermediate causes to a black box explanation is an unreliable guide to pragmatic intelligibility because it may mislead us about the stability of a cause. I diagnose a problem that I call wishful intelligibility, which occurs when scientists misjudge the limitations of certain (...)
  • Making Intelligence: Ethics, IQ, and ML Benchmarks.Borhane Blili-Hamelin & Leif Hancox-Li - manuscript
    The ML community recognizes the importance of anticipating and mitigating the potential negative impacts of benchmark research. In this position paper, we argue that more attention needs to be paid to areas of ethical risk that lie at the technical and scientific core of ML benchmarks. We identify overlooked structural similarities between human IQ and ML benchmarks. Human intelligence and ML benchmarks share similarities in setting standards for describing, evaluating and comparing performance on tasks relevant to intelligence. This enables us (...)