  • Experimentation in Cognitive Neuroscience and Cognitive Neurobiology. Jacqueline Anne Sullivan - 2014 - In Jens Clausen & Neil Levy (eds.), Handbook of Neuroethics. Springer.
    Neuroscience is a laboratory-based science that spans multiple levels of analysis from molecular genetics to behavior. At every level of analysis experiments are designed in order to answer empirical questions about phenomena of interest. Understanding the nature and structure of experimentation in neuroscience is fundamental for assessing the quality of the evidence produced by such experiments and the kinds of claims that are warranted by the data. This article provides a general conceptual framework for thinking about evidence and experimentation in (...)
  • Prediction in selectionist evolutionary theory. Rasmus Grønfeldt Winther - 2009 - Philosophy of Science 76 (5):889-901.
    Selectionist evolutionary theory has often been faulted for not making novel predictions that are surprising, risky, and correct. I argue that it in fact exhibits the theoretical virtue of predictive capacity in addition to two other virtues: explanatory unification and model fitting. Two case studies show the predictive capacity of selectionist evolutionary theory: parallel evolutionary change in E. coli, and the origin of eukaryotic cells through endosymbiosis.
  • How to discount double-counting when it counts: Some clarifications. Deborah G. Mayo - 2008 - British Journal for the Philosophy of Science 59 (4):857-879.
    The issues of double-counting, use-constructing, and selection effects have long been the subject of debate in the philosophical as well as statistical literature. I have argued that it is the severity, stringency, or probativeness of the test—or lack of it—that should determine if a double-use of data is admissible. Hitchcock and Sober ([2004]) question whether this ‘severity criterion' can perform its intended job. I argue that their criticisms stem from a flawed interpretation of the severity criterion. Taking their criticism as (...)
  • Use-novel predictions and Mendeleev’s periodic table: response to Scerri and Worrall. Samuel Schindler - 2008 - Studies in History and Philosophy of Science Part A 39 (2):265-269.
    In this paper I comment on a recent paper by Scerri and Worrall (Scerri, E., & Worrall, J. (2001). Prediction and the periodic table. Studies in History and Philosophy of Science, 32, 407–452) about the role temporally novel and use-novel predictions played in the acceptance of Mendeleev’s periodic table after the proposal of the latter in 1869. Scerri and Worrall allege that whereas temporally novel predictions—despite Brush’s earlier claim to the contrary—did not carry any special epistemic weight, use-novel predictions did indeed contribute to (...)
  • Is fertility virtuous in its own right? Daniel Nolan - 1999 - British Journal for the Philosophy of Science 50 (2):265-282.
    Fertility is often counted among the virtues which are desirable for scientific theories to possess. In this paper I discuss the several species of theoretical virtue called 'fertility', and argue in each case that the desirability of 'fertility' can be explicated in terms of other, more fundamental theoretical virtues.
  • Severe testing as a basic concept in a Neyman–Pearson philosophy of induction. Deborah G. Mayo & Aris Spanos - 2006 - British Journal for the Philosophy of Science 57 (2):323-357.
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities (...)
  • Understanding Stability in Cognitive Neuroscience Through Hacking's Lens. Jacqueline Anne Sullivan - 2021 - Philosophical Inquiries 1 (1):189-208.
    Ian Hacking instigated a revolution in 20th century philosophy of science by putting experiments (“interventions”) at the top of a philosophical agenda that historically had focused nearly exclusively on representations (“theories”). In this paper, I focus on a set of conceptual tools Hacking (1992) put forward to understand how laboratory sciences become stable and to explain what such stability meant for the prospects of unity of science and kind discovery in experimental science. I first use Hacking’s tools to understand sources (...)
  • Semantic Externalism and Empirical Underdetermination: A Response to a Challenge to Scientific Realism [Externalismo semántico y subdeterminación empírica. Respuesta a un desafío al realismo científico]. Marc Jiménez Rolland - 2017 - Dissertation, Universidad Autónoma Metropolitana
    I offer an explicit account of the underdetermination thesis as well as of the many challenges it poses to scientific realism; a way to answer to these challenges is explored and outlined, by shifting attention to the content of theories. I argue that, even if we have solid grounds (as I contend we do) to support that some varieties of the underdetermination thesis are true, scientific realism can still offer an adequate picture of the aims and achievements of science.
  • Identifying logical evidence. Ben Martin - 2020 - Synthese 198 (10):9069-9095.
    Given the plethora of competing logical theories of validity available, it’s understandable that there has been a marked increase in interest in logical epistemology within the literature. If we are to choose between these logical theories, we require a good understanding of the suitable criteria we ought to judge according to. However, so far there’s been a lack of appreciation of how logical practice could support an epistemology of logic. This paper aims to correct that error, by arguing for a (...)
  • Judging Mechanistic Neuroscience: A Preliminary Conceptual-Analytic Framework for Evaluating Scientific Evidence in the Courtroom. Jacqueline Anne Sullivan & Emily Baron - 2018 - Psychology, Crime and Law (00):00-00.
    The use of neuroscientific evidence in criminal trials has been steadily increasing. Despite progress made in recent decades in understanding the mechanisms of psychological and behavioral functioning, neuroscience is still in an early stage of development and its potential for influencing legal decision-making is highly contentious. Scholars disagree about whether or how neuroscientific evidence might impact prescriptions of criminal culpability, particularly in instances in which evidence of an accused’s history of mental illness or brain abnormality is offered to support a (...)
  • Optogenetics, Pluralism, and Progress. Jacqueline Anne Sullivan - 2018 - Philosophy of Science 85 (00):1090-1101.
    Optogenetic techniques are described as “revolutionary” for the unprecedented causal control they allow neuroscientists to exert over neural activity in awake behaving animals. In this paper, I demonstrate by means of a case study that optogenetic techniques will only illuminate causal links between the brain and behavior to the extent that their error characteristics are known and, further, that determining these error characteristics requires comparison of optogenetic techniques with techniques having well known error characteristics and consideration of the broader neural (...)
  • Reliability and Validity of Experiment in the Neurobiology of Learning and Memory. Jacqueline Anne Sullivan - 2007 - Dissertation, University of Pittsburgh
  • Some surprising facts about surprising facts. Deborah G. Mayo - 2014 - Studies in History and Philosophy of Science Part A 45:79-86.
    A common intuition about evidence is that if data x have been used to construct a hypothesis H, then x should not be used again in support of H. It is no surprise that x fits H, if H was deliberately constructed to accord with x. The question of when and why we should avoid such “double-counting” continues to be debated in philosophy and statistics. It arises as a prohibition against data mining, hunting for significance, tuning on the signal, and (...)
  • The multiplicity of experimental protocols: A challenge to reductionist and non-reductionist models of the unity of neuroscience. Jacqueline A. Sullivan - 2009 - Synthese 167 (3):511-539.
    Descriptive accounts of the nature of explanation in neuroscience and the global goals of such explanation have recently proliferated in the philosophy of neuroscience and with them new understandings of the experimental practices of neuroscientists have emerged. In this paper, I consider two models of such practices; one that takes them to be reductive; another that takes them to be integrative. I investigate those areas of the neuroscience of learning and memory from which the examples used to substantiate these models (...)
  • Gauge symmetry and the Theta vacuum. Richard Healey - 2009 - In Mauricio Suárez, Mauro Dorato & Miklós Rédei (eds.), EPSA Philosophical Issues in the Sciences: Launch of the European Philosophy of Science Association. Dordrecht, Netherlands: Springer. pp. 105-116.
    According to conventional wisdom, local gauge symmetry is not a symmetry of nature, but an artifact of how our theories represent nature. But a study of the so-called theta-vacuum appears to refute this view. The ground state of a quantized non-Abelian Yang-Mills gauge theory is characterized by a real-valued, dimensionless parameter theta—a fundamental new constant of nature. The structure of this vacuum state is often said to arise from a degeneracy of the vacuum of the corresponding classical theory, which degeneracy (...)
  • Duhem's problem, the Bayesian way, and error statistics, or "what's belief got to do with it?" Deborah G. Mayo - 1997 - Philosophy of Science 64 (2):222-244.
    I argue that the Bayesian Way of reconstructing Duhem's problem fails to advance a solution to the problem of which of a group of hypotheses ought to be rejected or "blamed" when experiment disagrees with prediction. But scientists do regularly tackle and often enough solve Duhemian problems. When they do, they employ a logic and methodology which may be called error statistics. I discuss the key properties of this approach which enable it to split off the task of testing auxiliary (...)
  • Explanation v. Prediction: Which Carries More Weight? Peter Achinstein - 1994 - PSA Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994 (2):156-164.
    According to a standard view, predictions of new phenomena provide stronger evidence for a theory than explanations of old ones. More guardedly, a theory that predicts phenomena that did not prompt the initial formulation of that theory is better supported by those phenomena than is a theory by known phenomena that generated the theory in the first place. So say various philosophers of science, including William Whewell (1847) in the 19th century and Karl Popper (1959) in the 20th, to mention (...)
  • From a boson to the standard model Higgs: a case study in confirmation and model dynamics. Cristin Chall, Martin King, Peter Mättig & Michael Stöltzner - 2019 - Synthese 198 (Suppl 16):3779-3811.
    Our paper studies the anatomy of the discovery of the Higgs boson at the Large Hadron Collider and its influence on the broader model landscape of particle physics. We investigate the phases of this discovery, which led to a crucial reconfiguration of the model landscape of elementary particle physics and eventually to a confirmation of the standard model. A keyword search of preprints covering the electroweak symmetry breaking sector of particle physics, along with an examination of physicists' own understanding of (...)
  • Curve Fitting, the Reliability of Inductive Inference, and the Error-Statistical Approach. Aris Spanos - 2007 - Philosophy of Science 74 (5):1046-1066.
    The main aim of this paper is to revisit the curve fitting problem using the reliability of inductive inference as a primary criterion for the ‘fittest' curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-of-fit as the (...)
  • Ducks, Rabbits, and Normal Science: Recasting the Kuhn’s-Eye View of Popper’s Demarcation of Science. Deborah G. Mayo - 1996 - British Journal for the Philosophy of Science 47 (2):271-290.
    Kuhn maintains that what marks the transition to a science is the ability to carry out ‘normal’ science—a practice he characterizes as abandoning the kind of testing that Popper lauds as the hallmark of science. Examining Kuhn's own contrast with Popper, I propose to recast Kuhnian normal science. Thus recast, it is seen to consist of severe and reliable tests of low-level experimental hypotheses (normal tests) and is, indeed, the place to look to demarcate science. While thereby vindicating Kuhn on (...)
  • Predictivism and avoidance of ad hoc-ness: An empirical study. Samuel Schindler - 2024 - Studies in History and Philosophy of Science Part A 104:68-77.
  • On the Elusive Formalisation of the Risky Condition for Hypothesis Testing. José Díez & Albert Solé - 2022 - International Studies in the Philosophy of Science 34 (4):199-219.
    In this paper, we examine possible formalisations of the riskiness condition for hypothesis testing. First, we informally introduce derivability and riskiness as testing conditions together with the corresponding arguments for refutation and confirmation. Then, we distinguish two different senses of confirmation and focus our discussion on one of them with the aid of a historical example. In the remaining sections, we offer a brief overview of the main references to the risky condition in the literature and scrutinise different options for (...)
  • A Practical and Practice-Sensitive Account of Science as Problem-Solving. Frédéric-Ismaël Banville - unknown
    Philosophers of science have recently begun to pay more attention to scientific practice, moving away from the discipline’s focus on theories. The creation of the Society for Philosophy of Science in Practice in 2006, the emergence of scholarship on experimental practice (e.g. Sullivan 2009; 2010; 2016), and work on the tools scientists use to construct explanations and theories (e.g. Feest 2011) all point to a disciplinary shift towards a more practice-conscious philosophy of science. In addition, scholars (...)
  • How to be a scientific realist (if at all): a study of partial realism. Dean Peters - 2012 - Dissertation, London School of Economics
    "Partial realism" is a common position in the contemporary philosophy of science literature. It states that the "essential" elements of empirically successful scientific theories accurately represent corresponding features the world. This thesis makes several novel contributions related to this position. Firstly, it offers a new definition of the concept of “empirical success”, representing a principled merger between the use-novelty and unification accounts. Secondly, it provides a comparative critical analysis of various accounts of which elements are "essential" to the success of (...)
  • Creativity, conservativeness & the social epistemology of science. Adrian Currie - 2019 - Studies in History and Philosophy of Science Part A 76:1-4.
  • Evidence in Neuroimaging: Towards a Philosophy of Data Analysis. Jessey Wright - 2017 - Dissertation, The University of Western Ontario
    Neuroimaging technology is the most widely used tool to study human cognition. While originally a promising tool for mapping the content of cognitive theories onto the structures of the brain, recently developed tools for the analysis, handling and sharing of data have changed the theoretical landscape of cognitive neuroscience. Even with these advancements philosophical analyses of evidence in neuroimaging remain skeptical of the promise of neuroimaging technology. These views often treat the analysis techniques used to make sense of data produced (...)
  • Novelty, coherence, and Mendeleev’s periodic table. Samuel Schindler - 2014 - Studies in History and Philosophy of Science Part A 45:62-69.
    Predictivism is the view that successful predictions of “novel” evidence carry more confirmational weight than accommodations of already known evidence. Novelty, in this context, has traditionally been conceived of as temporal novelty. However temporal predictivism has been criticized for lacking a rationale: why should the time order of theory and evidence matter? Instead, it has been proposed, novelty should be construed in terms of use-novelty, according to which evidence is novel if it was not used in the construction of a (...)
  • Demonstrative induction, old and new evidence and the accuracy of the electrostatic inverse square law. Ronald Laymon - 1994 - Synthese 99 (1):23-58.
    Maxwell claimed that the electrostatic inverse square law could be deduced from Cavendish's spherical condenser experiment. This is true only if the accuracy claims made by Cavendish and Maxwell are ignored, for both used the inverse square law as a premise in their analyses of experimental accuracy. By so doing, they assumed the very law the accuracy of which the Cavendish experiment was supposed to test. This paper attempts to make rational sense of this apparently circular procedure and to relate (...)
  • Preregistration and Predictivism. Hong Hui Choi - 2024 - Synthese 204 (6):1-22.
    In recent years, several scientific disciplines have been undergoing replication crises, and in response, preregistration has been offered as a solution to replicability problems. In this paper, I will draw connections between this new focus on preregistration and an older debate in the philosophy of science, namely predictivism—the thesis that predictions are epistemically superior to accommodations. Specifically, I shall argue that predictivism justifies preregistration. As it turns out, predictivists of all stripes have subtly different reasons to support preregistration. This unity (...)
  • Model-Selection Theory: The Need for a More Nuanced Picture of Use-Novelty and Double-Counting. Katie Steele & Charlotte Werndl - 2018 - British Journal for the Philosophy of Science 69 (2):351-375.
    This article argues that common intuitions regarding (a) the specialness of ‘use-novel’ data for confirmation and (b) that this specialness implies the ‘no-double-counting rule’, which says that data used in ‘constructing’ (calibrating) a model cannot also play a role in confirming the model’s predictions, are too crude. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in (...)
  • Bayesian pseudo-confirmation, use-novelty, and genuine confirmation. Gerhard Schurz - 2014 - Studies in History and Philosophy of Science Part A 45:87-96.
    According to the comparative Bayesian concept of confirmation, rationalized versions of creationism come out as empirically confirmed. From a scientific viewpoint, however, they are pseudo-explanations because with their help all kinds of experiences are explainable in an ex-post fashion, by way of ad-hoc fitting of an empirically empty theoretical framework to the given evidence. An alternative concept of confirmation that attempts to capture this intuition is the use novelty criterion of confirmation. Serious objections have been raised against this criterion. In (...)
  • Novelty and the 1919 Eclipse Experiments. Robert G. Hudson - 2003 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 34 (1):107-129.
    In her 1996 book, Error and the Growth of Experimental Knowledge, Deborah Mayo argues that use- (or heuristic) novelty is not a criterion we need to consider in assessing the evidential value of observations. Using the notion of a “severe” test, Mayo claims that such novelty is valuable only when it leads to severity, and never otherwise. To illustrate her view, she examines the historical case involving the famous 1919 British eclipse expeditions that generated observations supporting Einstein's theory of gravitation (...)
  • On the predilections for predictions. David Harker - 2008 - British Journal for the Philosophy of Science 59 (3):429-453.
    Scientific theories are developed in response to a certain set of phenomena and subsequently evaluated, at least partially, in terms of the quality of fit between those same theories and appropriately distinctive phenomena. To differentiate between these two stages it is popular to describe the former as involving the accommodation of data and the latter as involving the prediction of data. Predictivism is the view that, ceteris paribus, correctly predicting data confers greater confirmation than successfully accommodating data. In this paper, (...)
  • The epistemological status of scientific theories: An investigation of the structural realist account. Ioannis Votsis - 2004 - Dissertation, London School of Economics
    In this dissertation, I examine a view called ‘Epistemic Structural Realism’, which holds that we can, at best, have knowledge of the structure of the physical world. Put crudely, we can know physical objects only to the extent that they are nodes in a structure. In the spirit of Occam’s razor, I argue that, given certain minimal assumptions, epistemic structural realism provides a viable and reasonable scientific realist position that is less vulnerable to anti-realist arguments than any of its rivals.
  • The error statistical philosopher as normative naturalist. Deborah Mayo & Jean Miller - 2008 - Synthese 163 (3):305-314.
    We argue for a naturalistic account for appraising scientific methods that carries non-trivial normative force. We develop our approach by comparison with Laudan’s (American Philosophical Quarterly 24:19–31, 1987, Philosophy of Science 57:20–33, 1990) “normative naturalism” based on correlating means (various scientific methods) with ends (e.g., reliability). We argue that such a meta-methodology based on means–ends correlations is unreliable and cannot achieve its normative goals. We suggest another approach for meta-methodology based on a conglomeration of tools and strategies (from statistical modeling, (...)
  • Did Pearson reject the Neyman-Pearson philosophy of statistics? Deborah G. Mayo - 1992 - Synthese 90 (2):233-262.
    I document some of the main evidence showing that E. S. Pearson rejected the key features of the behavioral-decision philosophy that became associated with the Neyman-Pearson Theory of statistics (NPT). I argue that NPT principles arose not out of behavioral aims, where the concern is solely with behaving correctly sufficiently often in some long run, but out of the epistemological aim of learning about causes of experimental results (e.g., distinguishing genuine from spurious effects). The view Pearson did hold gives a (...)
  • Neither truth nor empirical adequacy explain novel success. E. C. Barnes - 2002 - Australasian Journal of Philosophy 80 (4):418-431.
  • The generalizability crisis. Tal Yarkoni - 2022 - Behavioral and Brain Sciences 45:e1.
    Most theories and hypotheses in psychology are verbal in nature, yet their evaluation overwhelmingly relies on inferential statistical procedures. The validity of the move from qualitative to quantitative analysis depends on the verbal and statistical expressions of a hypothesis being closely aligned – that is, that the two must refer to roughly the same set of hypothetical observations. Here, I argue that many applications of statistical inference in psychology fail to meet this basic condition. Focusing on the most widely used (...)
  • Prediction versus accommodation and the risk of overfitting. Christopher Hitchcock & Elliott Sober - 2004 - British Journal for the Philosophy of Science 55 (1):1-34.
    When a scientist uses an observation to formulate a theory, it is no surprise that the resulting theory accurately captures that observation. However, when the theory makes a novel prediction—when it predicts an observation that was not used in its formulation—this seems to provide more substantial confirmation of the theory. This paper presents a new approach to the vexed problem of understanding the epistemic difference between prediction and accommodation. In fact, there are several problems that need to be disentangled; in all of them, the (...)
  • Bayes and beyond. Geoffrey Hellman - 1997 - Philosophy of Science 64 (2):191-221.
    Several leading topics outstanding after John Earman's Bayes or Bust? are investigated further, with emphasis on the relevance of Bayesian explication in epistemology of science, despite certain limitations. (1) Dutch Book arguments are reformulated so that their independence from utility and preference in epistemic contexts is evident. (2) The Bayesian analysis of the Quine-Duhem problem is pursued; the phenomenon of a "protective belt" of auxiliary statements around reasonably successful theories is explicated. (3) The Bayesian approach to understanding the superiority of (...)
  • Dynamics of Theory Change: The Role of Predictions. Stephen G. Brush - 1994 - PSA Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994 (2):132-145.
    “What did the President know and when did he know it?” Senator Howard Baker, Watergate hearings, 1973. Why do scientists accept or reject theories? More specifically: why do they change from one theory to another? What is the role of empirical tests in the evaluation of theories? This paper focuses on a narrowly-defined question: in judging theories, do scientists give greater weight (other things being equal) to successful novel predictions than to successful deductions of previously-known facts? The affirmative answer is called the “predictivist thesis” (Maher (...)
  • Legacy Data, Radiocarbon Dating, and Robustness Reasoning. Alison Wylie - manuscript
    *PSA 2016, symposium on “Data in Time: Epistemology of Historical Data” organized by Sabina Leonelli, 5 November 2016* *See published version: "Radiocarbon Dating in Archaeology: Triangulation and Traceability" in Data Journeys in the Sciences (2020) - link below* Archaeologists put a premium on pressing “legacy data” into service, given the notoriously selective and destructive nature of their practices of data capture. Legacy data consist of material and records that have been assembled over decades, sometimes centuries, often by means and for purposes (...)
  • Novel work on problems of novelty? Comments on Hudson. Deborah G. Mayo - 2003 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 34 (1):131-134.
  • Selective Scientific Realism, Constructive Empiricism, and the Unification of Theories. Steven Savitt - 1993 - Midwest Studies in Philosophy 18 (1):154-165.
  • Clarifying the “adequate evidence condition” in educational issues and research: A Lakoffian view. Steven I. Miller & Janet Fredericks - 1992 - Educational Theory 42 (4):461-472.
  • Making contact with observations. Ioannis Votsis - 2009 - In Mauricio Suárez, Mauro Dorato & Miklós Rédei (eds.), EPSA Philosophical Issues in the Sciences: Launch of the European Philosophy of Science Association. Dordrecht, Netherlands: Springer. pp. 267-277.
    A stalwart view in the philosophy of science holds that, even when broadly construed so as to include theoretical auxiliaries, theories cannot make direct contact with observations. This view owes much to Bogen and Woodward’s influential distinction between data and phenomena. According to them, data are typically the kind of things that are observable or measurable like "bubble chamber photographs, patterns of discharge in electronic particle detectors and records of reaction times and error rates in various psychological experiments". Phenomena are (...)
  • Novelty, severity, and history in the testing of hypotheses: The case of the top quark. Kent W. Staley - 1996 - Philosophy of Science 63 (3):255.
    It is sometimes held that facts confirm a hypothesis only if they were not used in the construction of that hypothesis. This requirement of "use novelty" introduces a historical aspect into the assessment of evidence claims. I examine a methodological principle invoked by physicists in the experimental search for the top quark that bears a striking resemblance to this view. However, this principle is better understood, both historically and philosophically, in terms of the need to conduct a severe test than (...)
  • Freud's 'tally' argument, placebo control treatments, and the evaluation of psychotherapy. John D. Greenwood - 1996 - Philosophy of Science 63 (4):605-621.
    In this paper it is suggested that Freud's 'tally argument' (Grünbaum 1984) is not best interpreted as a risky claim concerning the efficacy of psychoanalytic therapy, but as a risky claim concerning the implications of theoretical psychoanalytic explanations of the efficacy of psychoanalytic therapy. Despite the fact that Freud never empirically established that these implications hold, the 'tally argument' does draw attention to a critical distinction that is too often neglected in contemporary empirical studies of psychoanalysis and other forms of (...)