  • Making coherent senses of success in scientific modeling. Beckett Sterner & Christopher DiTeresi - 2021 - European Journal for Philosophy of Science 11 (1):1-20.
    Making sense of why something succeeded or failed is central to scientific practice: it provides an interpretation of what happened, i.e. a hypothesized explanation for the results, that informs scientists’ deliberations over their next steps. In philosophy, the realism debate has dominated the project of making sense of scientists’ success and failure claims, restricting its focus to whether truth or reliability best explain science’s most secure successes. Our aim, in contrast, will be to expand and advance the practice-oriented project sketched (...)
  • Why computer simulations are not inferences, and in what sense they are experiments. Florian J. Boge - 2018 - European Journal for Philosophy of Science 9 (1):1-30.
    The question of where, between theory and experiment, computer simulations (CSs) fall on the methodological map is one of the central questions in the epistemology of simulation (cf. Saam Journal for General Philosophy of Science, 48, 293–309, 2017). The two extremes on the map have them either be a kind of experiment in their own right (e.g. Barberousse et al. Synthese, 169, 557–574, 2009; Morgan 2002, 2003, Journal of Economic Methodology, 12(2), 317–329, 2005; Morrison Philosophical Studies, 143, 33–57, 2009; Morrison (...)
  • Jean Perrin and the Philosophers’ Stories: The Role of Multiple Determination in Determining Avogadro’s Number. Klodian Coko - 2020 - Hopos: The Journal of the International Society for the History of Philosophy of Science 10 (1):143-193.
    The French physicist Jean Baptiste Perrin is widely credited with providing the conclusive argument for atomism. The most well-known part of Perrin’s argument is his description of thirteen different procedures for determining Avogadro’s number (N): the number of atoms, ions, and molecules contained in a gram-atom, gram-ion, and gram-mole of a substance, respectively. Because of its success in ending the atomism debates, Perrin’s argument has been the focus of much philosophical interest. Various philosophers, however, have reached different conclusions, not only (...)
  • Non-Bayesian Accounts of Evidence: Howson’s Counterexample Countered. Gordon Brittan, Mark L. Taper & Prasanta S. Bandyopadhyay - 2016 - International Studies in the Philosophy of Science 30 (3):291-298.
    There is a debate in Bayesian confirmation theory between subjective and non-subjective accounts of evidence. Colin Howson has provided a counterexample to our non-subjective account of evidence: the counterexample refers to a case in which there is strong evidence for a hypothesis, but the hypothesis is highly implausible. In this article, we contend that, by supposing that strong evidence for a hypothesis makes the hypothesis more believable, Howson conflates the distinction between confirmation and evidence. We demonstrate that Howson’s counterexample fails (...)
  • Evidence in Neuroimaging: Towards a Philosophy of Data Analysis. Jessey Wright - 2017 - Dissertation, The University of Western Ontario
    Neuroimaging technology is the most widely used tool to study human cognition. While originally a promising tool for mapping the content of cognitive theories onto the structures of the brain, recently developed tools for the analysis, handling and sharing of data have changed the theoretical landscape of cognitive neuroscience. Even with these advancements, philosophical analyses of evidence in neuroimaging remain skeptical of the promise of neuroimaging technology. These views often treat the analysis techniques used to make sense of data produced (...)
  • Non-cognitive Values and Methodological Learning in the Decision-Oriented Sciences. Oliver Todt & José Luis Luján - 2017 - Foundations of Science 22 (1):215-234.
    The function and legitimacy of values in decision making is a critically important issue in the contemporary analysis of science. It is particularly relevant for some of the more application-oriented areas of science, specifically decision-oriented science in the field of regulation of technological risks. Our main objective in this paper is to assess the diversity of roles that non-cognitive values related to decision making can adopt in the kinds of scientific activity that underlie risk regulation. We start out, first, by (...)
  • Science is judgement, not only calculation: a reply to Aris Spanos’s review of The cult of statistical significance. Stephen T. Ziliak & Deirdre Nansen McCloskey - 2008 - Erasmus Journal for Philosophy and Economics 1 (1):165-170.
  • Base empírica global de contrastación, base empírica local de contrastación y aserción empírica de una teoría [Global empirical basis of testing, local empirical basis of testing, and the empirical claim of a theory]. Pablo Lorenzano - 2012 - Agora 31 (2):71-107.
    The aim of this article is to contribute to the discussion about the so-called “empirical claim” and “empirical basis” of theory testing. First, the proposals of reconceptualization of the standard notions of partial potential model, intended application and empirical claim of a theory made by Balzer (1982, 1988, 1997a, 1997b, 2006, Balzer, Lauth & Zoubek 1993) and Gähde (1996, 2002, 2008) will be discussed. Then, the distinction between “global” and “local empirical basis” will be introduced, linking it with that (...)
  • Objective evidence and rules of strategy: Achinstein on method: Peter Achinstein: Evidence and method: Scientific strategies of Isaac Newton and James Clerk Maxwell. Oxford and New York: Oxford University Press, 2013, 177pp, $24.95 HB. William L. Harper, Kent W. Staley, Henk W. de Regt & Peter Achinstein - 2014 - Metascience 23 (3):413-442.
  • Disciplinary authority and accountability in scientific practice and learning. Michael Ford - 2008 - Science Education 92 (3):404-423.
  • In favour of a Millian proposal to reform biomedical research. Julian Reiss - 2010 - Synthese 177 (3):427-447.
    One way to make philosophy of science more socially relevant is to attend to specific scientific practices that affect society to a great extent. One such practice is biomedical research. This paper looks at contemporary U.S. biomedical research in particular and argues that it suffers from important epistemic, moral and socioeconomic failings. It then discusses and criticises existing approaches to improve on the status quo, most prominently by Thomas Pogge (a political philosopher), Joseph Stiglitz (a Nobel-prize winning economist) and James (...)
  • Evidence and Justification in Groups with Conflicting Background Beliefs. Kent W. Staley - 2010 - Episteme 7 (3):232-247.
    Some prominent accounts of scientific evidence treat evidence as an unrelativized concept. But whether belief in a hypothesis is justified seems relative to the epistemic situation of the believer. The issue becomes yet more complicated in the context of group epistemic agents, for then one confronts the problem of relativizing to an epistemic situation that may include conflicting beliefs. As a step toward resolution of these difficulties, an ideal of justification is here proposed that incorporates both an unrelativized evidence requirement (...)
  • Novel work on problems of novelty? Comments on Hudson. Deborah G. Mayo - 2003 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 34 (1):131-134.
  • From data to phenomena: a Kantian stance. Michela Massimi - 2011 - Synthese 182 (1):101-116.
    This paper investigates some metaphysical and epistemological assumptions behind Bogen and Woodward’s data-to-phenomena inferences. I raise a series of points and suggest an alternative possible Kantian stance about data-to-phenomena inferences. I clarify the nature of the suggested Kantian stance by contrasting it with McAllister’s view about phenomena as patterns in data sets.
  • Evidence and experimental design in sequential trials. Jan Sprenger - 2009 - Philosophy of Science 76 (5):637-649.
    To what extent does the design of statistical experiments, in particular sequential trials, affect their interpretation? Should postexperimental decisions depend on the observed data alone, or should they account for the stopping rule used? Bayesians and frequentists are apparently deadlocked in their controversy over these questions. To resolve the deadlock, I suggest a three‐part strategy that combines conceptual, methodological, and decision‐theoretic arguments. This approach maintains the pre‐experimental relevance of experimental design and stopping rules but vindicates their evidential, postexperimental irrelevance. (...)
  • What is a data model?: An anatomy of data analysis in high energy physics. Antonis Antoniou - 2021 - European Journal for Philosophy of Science 11 (4):1-33.
    Many decades ago Patrick Suppes argued rather convincingly that theoretical hypotheses are not confronted with the direct, raw results of an experiment; rather, they are typically compared with models of data. What exactly is a data model, however? And how do the interactions of particles at the subatomic scale give rise to the huge volumes of data that are then moulded into a polished data model? The aim of this paper is to answer these questions by presenting a detailed case (...)
  • The Rationality of Science and the Inevitability of Defining Prior Beliefs in Empirical Research. Ulrich Dettweiler - 2019 - Frontiers in Psychology 10:481878.
  • (1 other version)What distinguishes data from models? Sabina Leonelli - 2019 - European Journal for Philosophy of Science 9 (2):22.
    I propose a framework that explicates and distinguishes the epistemic roles of data and models within empirical inquiry through consideration of their use in scientific practice. After arguing that Suppes’ characterization of data models falls short in this respect, I discuss a case of data processing within exploratory research in plant phenotyping and use it to highlight the difference between practices aimed to make data usable as evidence and practices aimed to use data to represent a specific phenomenon. I then (...)
  • Climate Simulations: Uncertain Projections for an Uncertain World. Rafaela Hillerbrand - 2014 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 45 (1):17-32.
    Between the fourth and the recent fifth IPCC report, science as well as policy making have made great advances in dealing with uncertainties in global climate models. However, the uncertainties public decision making has to deal with go well beyond what is currently addressed by policy makers and climatologists alike. It is shown in this paper that within an anthropocentric framework, a whole hierarchy of models from various scientific disciplines is needed for political decisions as regards climate change. Via what (...)
  • Curve Fitting, the Reliability of Inductive Inference, and the Error‐Statistical Approach. Aris Spanos - 2007 - Philosophy of Science 74 (5):1046-1066.
    The main aim of this paper is to revisit the curve fitting problem using the reliability of inductive inference as a primary criterion for the ‘fittest’ curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-of-fit as the (...)
  • (1 other version)How experimental algorithmics can benefit from Mayo’s extensions to Neyman–Pearson theory of testing. Thomas Bartz-Beielstein - 2008 - Synthese 163 (3):385-396.
    Although theoretical results for several algorithms in many application domains have been presented in recent decades, not all algorithms can be analyzed fully theoretically. Experimentation is necessary. The analysis of algorithms should follow the same principles and standards of other empirical sciences. This article focuses on stochastic search algorithms, such as evolutionary algorithms or particle swarm optimization. Stochastic search algorithms tackle hard real-world optimization problems, e.g., problems from chemical engineering, airfoil optimization, or bioinformatics, where classical methods from mathematical optimization fail. (...)
  • The 'requirement of total evidence' and its role in phylogenetic systematics. Kirk Fitzhugh - 2006 - Biology and Philosophy 21 (3):309-351.
    The question of whether or not to partition data for the purposes of inferring phylogenetic hypotheses remains controversial. Opinions have been especially divided since Kluge's (1989, Systematic Zoology 38, 7–25) claim that data partitioning violates the requirement of total evidence (RTE). Unfortunately, advocacy for or against the RTE has not been based on accurate portrayals of the requirement. The RTE is a basic maxim for non-deductive inference, stipulating that evidence must be considered if it has relevance to an inference. Evidence (...)
  • The problem of model selection and scientific realism. Stanislav Larski - unknown
    This thesis has two goals. Firstly, we consider the problem of model selection for the purposes of prediction. In modern science predictive mathematical models are ubiquitous and can be found in such diverse fields as weather forecasting, economics, ecology, mathematical psychology, sociology, etc. It is often the case that for a given domain of inquiry there are several plausible models, and the issue then is how to discriminate between them – this is the problem of model selection. We consider approaches (...)
  • A Battle in the Statistics Wars: a simulation-based comparison of Bayesian, Frequentist and Williamsonian methodologies. Mantas Radzvilas, William Peden & Francesco De Pretis - 2021 - Synthese 199 (5-6):13689-13748.
    The debates between Bayesian, frequentist, and other methodologies of statistics have tended to focus on conceptual justifications, sociological arguments, or mathematical proofs of their long run properties. Both Bayesian statistics and frequentist (“classical”) statistics have strong cases on these grounds. In this article, we instead approach the debates in the “Statistics Wars” from a largely unexplored angle: simulations of different methodologies’ performance in the short to medium run. We conducted a large number of simulations using a straightforward decision problem based (...)
  • Productive theory-ladenness in fMRI. M. Emrah Aktunc - 2019 - Synthese 198 (9):7987-8003.
    Several developments for diverse scientific goals, mostly in physics and physiology, had to take place, which eventually gave us fMRI as one of the central research paradigms of contemporary cognitive neuroscience. This technique stands on solid foundations established by the physics of magnetic resonance and the physiology of hemodynamics and is complemented by computational and statistical techniques. I argue, and support using concrete examples, that these foundations give rise to a productive theory-ladenness in fMRI, which enables researchers to identify and (...)
  • Duhem’s problem revisited: logical versus epistemic formulations and solutions. Michael Dietrich & Phillip Honenberger - 2020 - Synthese 197 (1):337-354.
    When the results of an experiment appear to disconfirm a hypothesis, how does one know whether it’s the hypothesis, or rather some auxiliary hypothesis or assumption, that is at fault? Philosophers’ answers to this question, now known as “Duhem’s problem,” have differed widely. Despite these differences, we affirm Duhem’s original position that the logical structure of this problem alone does not allow a solution. A survey of philosophical approaches to Duhem’s problem indicates that what allows any philosopher, or scientists for (...)
  • The objectivity of Subjective Bayesianism. Jan Sprenger - 2018 - European Journal for Philosophy of Science 8 (3):539-558.
    Subjective Bayesianism is a major school of uncertain reasoning and statistical inference. It is often criticized for a lack of objectivity: it opens the door to the influence of values and biases, evidence judgments can vary substantially between scientists, and it is not suited for informing policy decisions. My paper rebuts these concerns by connecting the debates on scientific objectivity and statistical method. First, I show that the above concerns arise equally for standard frequentist inference with null hypothesis significance tests. Second, (...)
  • On the Concepts of Logical Fallacy and Logical Error. Marcin Koszowy - unknown
  • Reconsidering authority. Michael Strevens - 2007 - In Tamar Szabó Gendler & John Hawthorne (eds.), Oxford Studies in Epistemology: Volume 3. Oxford University Press UK. pp. 294-330.
    How should we regard the weight we give to a proposition on the grounds of its being endorsed by an authority? I examine this question as it is raised within the epistemology of science, and I argue that “authority-based weight” should receive special handling, for the following reason. Our assessments of other scientists’ competence or authority are nearly always provisional, in the sense that to save time and money, they are not made nearly as carefully as they could be; indeed, they are (...)
  • (1 other version)How to discount double-counting when it counts: Some clarifications. Deborah G. Mayo - 2008 - British Journal for the Philosophy of Science 59 (4):857-879.
    The issues of double-counting, use-constructing, and selection effects have long been the subject of debate in the philosophical as well as statistical literature. I have argued that it is the severity, stringency, or probativeness of the test—or lack of it—that should determine if a double-use of data is admissible. Hitchcock and Sober ([2004]) question whether this ‘severity criterion' can perform its intended job. I argue that their criticisms stem from a flawed interpretation of the severity criterion. Taking their criticism as (...)
  • Incommensurability and theory comparison in experimental biology. Marcel Weber - 2002 - Biology and Philosophy 17 (2):155-169.
    Incommensurability of scientific theories, as conceived by Thomas Kuhn and Paul Feyerabend, is thought to be a major or even insurmountable obstacle to the empirical comparison of these theories. I examine this problem in light of a concrete case from the history of experimental biology, namely the oxidative phosphorylation controversy in biochemistry (ca. 1961-1977). After a brief historical exposition, I show that the two main competing theories which were the subject of the ox-phos controversy instantiate some of the characteristic features of incommensurable theories, namely translation failure, non-corresponding predictions, and different (...)
  • Error Rates and Uncertainty Reduction in Rule Discovery. M. Emrah Aktunc, Ceren Hazar & Emre Baytimur - 2020 - Review of Philosophy and Psychology 12 (2):435-452.
    Three new versions of Wason’s 2-4-6 rule discovery task incorporating error rates or feedback of uncertainty reduction, inspired by the error-statistical account in philosophy of science, were employed. In experiments 1 and 2, participants were instructed that some experimenter feedback would be erroneous. The results showed that performance was impaired when there was probabilistic error. In experiment 3, participants were given uncertainty reduction feedback as they generated different number triples and the negative effects of probabilistic error were not observed. These (...)
  • Was regression to the mean really the solution to Darwin’s problem with heredity?: Essay Review of Stigler, Stephen M. 2016. The Seven Pillars of Statistical Wisdom. Cambridge, Massachusetts: Harvard University Press. [REVIEW] Adam Krashniak & Ehud Lamm - 2017 - Biology and Philosophy (5):1-10.
    Statistical reasoning is an integral part of modern scientific practice. In The Seven Pillars of Statistical Wisdom Stephen Stigler presents seven core ideas, or pillars, of statistical thinking and the historical developments of each of these pillars, many of which were concurrent with developments in biology. Here we focus on Stigler’s fifth pillar, regression, and his discussion of how regression to the mean came to be thought of as a solution to a challenge for the theory of natural selection. Stigler (...)
  • (1 other version)Confirmation of Scientific Hypotheses as Relations. Aysel Dogan - 2005 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 36 (2):243-259.
    In spite of several attempts to explicate the relationship between a scientific hypothesis and evidence, the issue still cries for a satisfactory solution. Logical approaches to confirmation, such as the hypothetico-deductive method and the positive instance account of confirmation, are problematic because of their neglect of the semantic dimension of hypothesis confirmation. Probabilistic accounts of confirmation are no better than logical approaches in this regard. An outstanding probabilistic account of confirmation, the Bayesian approach, for instance, is found to be defective (...)
  • Justifying inference to the best explanation as a practical meta-syllogism on dialectical structures. Gregor Betz - 2013 - Synthese 190 (16):3553-3578.
    This article discusses how inference to the best explanation can be justified as a practical meta-argument. It is, firstly, justified as a practical argument insofar as accepting the best explanation as true can be shown to further a specific aim. And because this aim is a discursive one which proponents can rationally pursue in — and relative to — a complex controversy, namely maximising the robustness of one’s position, IBE can be conceived, secondly, as a meta-argument. (...)
  • Strategies for securing evidence through model criticism. Kent W. Staley - 2012 - European Journal for Philosophy of Science 2 (1):21-43.
    Some accounts of evidence regard it as an objective relationship holding between data and hypotheses, perhaps mediated by a testing procedure. Mayo’s error-statistical theory of evidence is an example of such an approach. Such a view leaves open the question of when an epistemic agent is justified in drawing an inference from such data to a hypothesis. Using Mayo’s account as an illustration, I propose a framework for addressing the justification question via a relativized notion, which I designate security, (...)
  • In Pursuit of Resistance: Pragmatic Recommendations for Doing Science within One’s Means. [REVIEW] Amy McLaughlin - 2011 - European Journal for Philosophy of Science 1 (3):353-371.
    Charles Peirce’s model of inquiry is supposed to demarcate appropriate methods of inquiry from specious ones. Cheryl Misak points out that Peirce’s explicit account fails, but can nevertheless be rescued by elements of his own system. While Misak’s criticism is apropos, her own attempt to fortify Peirce’s account does not succeed, as it falls prey to the same criticism she raises against Peirce’s explicit account. The account provided in this paper—the ‘open path’ alternative—draws from Peirce’s corollary to his “first (...)
  • Current issues in medical epistemology and statistics: a view from the frontline of medicine. John H. Park - 2022 - Synthese 200 (5):1-25.
    Clinical trials play a prominent role in medicine today, but are not without controversy. These issues start from the day physicians begin their specialization process in medical school and continue into their day-to-day practice as attendings, with referral patterns and resulting financial incentives. This, combined with a lack of training in basic issues of epistemology and statistics, allows poor interpretations of clinical trials to reign free. A proposal to integrate the notion of severity to help remedy these issues is made (...)
  • Polycratic hierarchies and networks: what simulation-modeling at the LHC can teach us about the epistemology of simulation. Florian J. Boge & Christian Zeitnitz - 2020 - Synthese 199 (1-2):445-480.
    Large scale experiments at CERN’s Large Hadron Collider rely heavily on computer simulations, a fact that has recently caught philosophers’ attention. CSs obviously require appropriate modeling, and it is a common assumption among philosophers that the relevant models can be ordered into hierarchical structures. Focusing on LHC’s ATLAS experiment, we will establish three central results here: with some distinct modifications, individual components of ATLAS’ overall simulation infrastructure can be ordered into hierarchical structures. Hence, to a good degree of approximation, hierarchical (...)
  • (1 other version)The Diversity Principle and the Little Scientist Hypothesis. Daniel Osherson & Riccardo Viale - 2000 - Foundations of Science 5 (2):239-253.
    The remarkable transition from helpless infant to sophisticated five-year-old has long captured the attention of scholars interested in the discovery of knowledge. To explain these achievements, developmental psychologists often compare children's discovery procedures to those of professional scientists. For the child to be qualified as a ``little scientist'', however, intellectual development must be shown to derive from rational hypothesis selection in the face of evidence. In the present paper we focus on one dimension of rational theory-choice, namely, the relation between hypothesis confirmation and evidence diversity. Psychological research suggests cultural (...)
  • Two Impossibility Results for Measures of Corroboration. Jan Sprenger - 2018 - British Journal for the Philosophy of Science 69 (1):139-159.
    According to influential accounts of scientific method, such as critical rationalism, scientific knowledge grows by repeatedly testing our best hypotheses. But despite the popularity of hypothesis tests in statistical inference and science in general, their philosophical foundations remain shaky. In particular, the interpretation of non-significant results—those that do not reject the tested hypothesis—poses a major philosophical challenge. To what extent do they corroborate the tested hypothesis, or provide a reason to accept it? Popper sought measures of corroboration that could (...)
  • Critical rationalism and engineering: methodology. Mark Staples - 2015 - Synthese 192 (1):337-362.
    Engineering deals with different problem situations than science, and theories in engineering are different to theories in science. So, the growth of knowledge in engineering is also different to that in science. Nonetheless, methodological issues in engineering epistemology can be explored by adapting frameworks already established in the philosophy of science. In this paper I use critical rationalism and Popper’s three worlds framework to investigate error elimination and the growth of knowledge in engineering. I discuss engineering failure arising from the (...)
  • (1 other version)Interpretive praxis and theory‐networks. Sangwon Lee - 2006 - Pacific Philosophical Quarterly 87 (2):213-230.
    I develop the idea of what I call an interpretive praxis as a generalized procedure for analyzing how experimenters can formulate observable predictions, discern real effects from experimental artifacts, and compare predictions with data. An interpretive praxis requires theories – theories not only about instruments and the interpretation of phenomena, but also theories that connect the use of instruments and interpretation of phenomena to high‐level theory. I will call all such theories that enable experimentation to work intermediate theories. I offer (...)
  • Deflationary Methodology and Rationality of Science. Thomas Nickles - 1996 - Philosophica 58 (2).
    The last forty years have produced a dramatic reversal in leading accounts of science. Once thought necessary to explain scientific progress, a rigid method of science is now widely considered impossible. Study of products yields to study of processes and practices, unity gives way to diversity, generality to particularity, logic to luck, and final justification to heuristic scaffolding. I sketch the story, from Bacon and Descartes to the present, of the decline and fall of traditional scientific method, conceived as The (...)
  • Error and inference: an outsider stand on a frequentist philosophy. Christian P. Robert - 2013 - Theory and Decision 74 (3):447-461.
    This paper is an extended review of the book Error and Inference, edited by Deborah Mayo and Aris Spanos, about their frequentist and philosophical perspective on hypothesis testing and on criticisms of alternatives like the Bayesian approach.
  • Error and inference: Recent exchanges on experimental reasoning, reliability, and the objectivity and rationality of science * edited by Deborah G. Mayo and Aris Spanos. [REVIEW] N. Jones - 2011 - Analysis 71 (2):406-408.
    When do data provide good evidence for a hypothesis, evidence that warrants an inference to the hypothesis? Standard answers either reject the legitimacy of induction or else allow warranted inference from data to hypothesis when there are suitable relationships between and among the data and hypotheses. The severity account rejects all of these, maintaining instead that the good evidence relation concerns not only relations between data and hypotheses but also the methods for obtaining the data and the sensitivity of these (...)
  • Comparativist rationality and. Kristin Shrader-Frechette - 2004 - Topoi 23 (2):153-163.
    US testing of nuclear weapons has resulted in about 800,000 premature fatal cancers throughout the globe, and the nuclear tests of China, France, India, Russia, and the UK have added to this total. Surprisingly, however, these avoidable deaths have not received much attention, as compared, for example, to the smaller number of US fatalities on 9-11-01. This essay (1) surveys the methods and models used to assess effects of low-dose ionizing radiation from above-ground nuclear weapons tests and (2) explains some (...)
  • How experimental algorithmics can benefit from Mayo’s extensions to Neyman–Pearson theory of testing. Thomas Bartz-Beielstein - 2008 - Synthese 163 (3):385-396.
    Although theoretical results for several algorithms in many application domains were presented during the last decades, not all algorithms can be analyzed fully theoretically. Experimentation is necessary. The analysis of algorithms should follow the same principles and standards of other empirical sciences. This article focuses on stochastic search algorithms, such as evolutionary algorithms or particle swarm optimization. Stochastic search algorithms tackle hard real-world optimization problems, e.g., problems from chemical engineering, airfoil optimization, or bioinformatics, where classical methods from mathematical optimization fail. (...)
  • Probabilidad inicial y éxito probabilístico. Valeriano Iranzo - 2009 - Análisis Filosófico 29 (1):39-71.
    A controversial issue in Bayesian confirmation theory is the status of prior probabilities. Although the dominant tendency among Bayesians is to hold that the only legitimate constraint on the values of these probabilities is formal consistency with the theorems of the mathematical theory of probability, other authors, proponents of what has come to be called "objective Bayesianism", defend the appropriateness of additional constraints. My proposal, within the framework of objective Bayesianism, takes up a suggestion (...)
  • Phenomenology and qualitative research: Amedeo Giorgi's hermetic epistemology. John Paley - 2018 - Nursing Philosophy 19 (3):e12212.
    Amedeo Giorgi has published a review article devoted to Phenomenology as Qualitative Research: A Critical Analysis of Meaning Attribution. However, anyone reading this article, but unfamiliar with the book, will get a distorted view of what it is about, whom it is addressed to, what it seeks to achieve and how it goes about presenting its arguments. Not mildly distorted, in need of the odd correction here and there, but systematically misrepresented. The article is a study in misreading. Giorgi misreads (...)