The problem of demarcating what is scientific from what is pseudoscientific or merely unscientific - in other words, the problem of defining scientificity - remains open. The modern debate was first structured around Karl Popper's falsificationist epistemology of the 1930s, before diversifying a few decades later. His central idea is that what makes something scientific is not so much how well it agrees with the data, but rather to what extent it might not have done so. Since the second half of the century, and in the wake of criticisms raised against falsificationism(s), such as the Duhem-Quine thesis, most approaches to the problem of scientific demarcation are now multicriteria and holistic. The approach presented in this paper, however, does not follow the same guideline. The present work can be seen as an attempt to adapt Popper's (sophisticated) falsificationism to a model-based view of scientific knowledge. Using formalization and focusing on a particular epistemic unit of analysis (namely, empirical models) makes it possible to define the Popperian corroboration degree properly and to view scientificity as the maximization of this degree of corroboration over all available models and data. We eventually recover, in a natural way, well-accepted scientificity criteria (empirical adequacy, Lakatos's progressive problemshifts, the balance between strength and simplicity, parsimony, and coherence) as special cases of this general scientificity principle. From this viewpoint, the language dependency of our empirical knowledge no longer appears as a limitation of falsificationism but as one more reason to take it as a good epistemological framework.
Model-data symbiosis is the view that there is an interdependent and mutually beneficial relationship between data and models, whereby models are not only data-laden, but data are also model-laden or model-filtered. In this paper I elaborate and defend the second, more controversial, component of the symbiosis view. In particular, I construct a preliminary taxonomy of the different ways in which theoretical and simulation models are used in the production of data sets. These include data conversion, data correction, data interpolation, data scaling, data fusion, data assimilation, and synthetic data. Each is defined and briefly illustrated with an example from the geosciences. I argue that model-filtered data are typically more accurate and reliable than the so-called raw data, and hence beneficially serve the epistemic aims of science. By illuminating the methods by which raw data are turned into scientifically useful data sets, this taxonomy provides a foundation for developing a more adequate philosophy of data.
I argue that normative formal epistemology (NFE) is best understood as modelling, in the sense that modelling is the reconstruction of its methodology on which NFE is doing best. I focus on Bayesianism and show that it has the characteristics of modelling. But modelling is a scientific enterprise, while NFE is normative. I thus develop an account of normative models on which they are idealised representations put to normative purposes. Normative assumptions, such as the transitivity of comparative credence, are characterised as modelling idealisations motivated normatively. I then survey the landscape of methodological options: what might formal epistemologists be up to? I argue that the choice is essentially binary: modelling or theorising. If NFE is theorising, it is doing very poorly: generating false claims with no clear methodology for separating out what is to be taken seriously. Modelling, by contrast, is a successful methodology precisely suited to the management of useful falsehoods. Regarding NFE as modelling is not costless, however. First, our normative inferences are less direct and are muddied by the presence of descriptive idealisations. Second, our models are purpose-specific and limited in their scope. I close with suggestions for how to adapt our practice.
It is often claimed that one can avoid the kind of underdetermination that is a typical consequence of symmetries in physics by stipulating that symmetry-related models represent the same state of affairs (Leibniz Equivalence). But recent commentators (Dasgupta 2011; Pooley 2021; Pooley and Read 2021; Teitel 2021a) have responded that claims about the representational capacities of models are irrelevant to the issue of underdetermination, which concerns possible worlds themselves. In this paper I distinguish two versions of this objection: (1) that a theory’s formalism does not (fully) determine the space of physical possibilities, and (2) that the relevant notion of possibility is not physical possibility. I offer a refutation of each.
Depression is a widespread and debilitating disorder, but developing effective treatments has proven challenging. Despite success in animal models, many treatments fail in human trials. While various factors contribute to this translational failure, standardization practices in animal research are often overlooked. This paper argues that certain standardization choices in behavioral neuroscience research on depression can limit the generalizability of results from rodents to humans. This raises ethical and scientific concerns, including the wasteful use of animals and a lack of progress in treating human patients. To address these issues, animal ethics committees can establish reasonable expectations for preclinical research and highlight the impact of standardization on generalizability. Such efforts can help improve translation from animal models to human patients and ultimately benefit those suffering from depression.
This article addresses the contributions of the literature on the new mechanistic philosophy of science to the scientific practice of model building in ecology. This is reflected in a one-to-one interdisciplinary collaboration between an ecologist and a philosopher of science during science-in-the-making. We argue that the identification, reconstruction, and understanding of mechanisms is context-sensitive, and that in this case study mechanistic modeling played not a normative but a heuristic role. We expect our study to provide useful epistemic tools for the improvement of empirically-driven work in the debates about mechanistic explanation of ecological phenomena.
Sometimes, scientific models are either intended to represent, or plausibly interpreted as representing, nonactual but possible targets. Call this “hypothetical modeling”. This paper raises two epistemological challenges concerning hypothetical modeling. To begin with, I observe that, given common philosophical assumptions about the scope of objective possibility, hypothetical models are fallible with respect to what is objectively possible. There is thus a need to distinguish between accurate and inaccurate hypothetical modeling. The first epistemological challenge is that no account of the epistemology of hypothetical models seems to cohere with the most characteristic function of scientific modeling in general, i.e., surrogative representation. The second epistemological challenge is a version of the “reliability challenges” familiar from other areas. There is a challenge to explain how hypothetical models could be a reliable guide to what is possible, given that they are not and cannot be compared against their nonactual targets and updated accordingly. I close with some brief remarks on possible solutions to these challenges.
In the context of the pluralization, sophistication, and combination of formal models, it is becoming difficult to propose uniform – or even comparable – model comparison practices. This paper outlines a broad and classificatory comparative epistemology of models. The aim of this epistemology is to propose applicable, and if necessary rectifiable, conceptual tools that can be useful to modellers as well as to historians and epistemologists. The notion of a model characteristic vector – incorporating concepts of the function, nature, principle, and use of the model – is introduced to this end. This proposal has been developed to support the writing of a better-equipped comparative and interdisciplinary history of models and simulations.
Neuroscientists have in recent years turned to building models that aim to generate predictions rather than explanations. This “predictive turn” has swept across domains including law, marketing, and neuropsychiatry. Yet the norms of prediction remain undertheorized relative to those of explanation. I examine two styles of predictive modeling and show how they exemplify the normative dynamics at work in prediction. I propose an account of how predictive models, conceived of as technological devices for aiding decision-making, can come to be adequate for purposes that are defined by both their guiding research questions and their larger social context of application.
In an ever-changing world, when we search for answers to our present challenges, it can be tricky to extrapolate from past realities where science-based issues are concerned. Climate change, public health, and artificial intelligence exemplify issues in which scientific evidence is often challenged, as false beliefs can drive the design of public policies and legislation. Therefore, how can we foresee whether science can tip the scales of political legislation? In this article, we outline how models of historical cases can be used to predict and understand how scientific evidence can influence the emergence (or fall-back) of science-based legislation. We also present frameworks for using past episodes from the history of science, and insights from alternative history, to build epistemic models, drawing on previous successful approaches to modelling the spread of scientific misinformation. These models can improve the accuracy of the design of possible alternative realities, which can prove insightful for present decision-making methodologies.
Examining AI spirituality can illuminate problematic assumptions about human spirituality and AI cognition, suggest possible directions for AI development, reduce uncertainty about future AI, and yield a methodological lens sufficient to investigate human-AI sociotechnical interaction and morality. Incompatible philosophical assumptions about human spirituality and AI limit investigations of both and suggest a vast gulf between them. An emergentist approach can replace dualist assumptions about human spirituality and identify emergent behavior in AI computation to overcome overly reductionist assumptions about computation. Using general systems theory to organize models of human experience yields insight into human morality and spirituality, upon which AI modeling can also draw. In this context, the pragmatist Josiah Royce’s semiotic philosophy of spirituality identifies unanticipated overlap between symbolic AI and spirituality and suggests criteria for a human-AI community focused on modeling morality that would result in an emergent Interpreter-Spirit sufficient to influence the ongoing development of human and AI morality and spirituality.
This paper discusses modeling from the artifactual perspective. The artifactual approach conceives of models as erotetic devices. They are purpose-built systems of dependencies that are constrained in view of answering a pending scientific question, motivated by theoretical or empirical considerations. In treating models as artifacts, the artifactual approach is able to address the various languages of the sciences that are overlooked by the traditional accounts, which concentrate on the relationship of representation in an abstract and general manner. In contrast, the artifactual approach focuses on the epistemic affordances of different kinds of external representational and other tools employed in model construction. In doing so, the artifactual account gives a unified treatment of different model types, as it circumvents the tendency of the fictional and other representational approaches to separate model systems from their “model descriptions”.
The Série Investigação Filosófica, an initiative of the Núcleo de Ensino e Pesquisa em Filosofia of the Department of Philosophy at UFPel and of the Grupo de Pesquisa Investigação Filosófica of the Department of Philosophy at UNIFAP, published under the NEPFil online imprint and by the Editora da Universidade Federal de Pelotas with financial support from the John Templeton Foundation, has as its primary aim the publication of Portuguese translations of texts selected from various internationally recognized platforms, such as the Stanford Encyclopedia of Philosophy. The general objective of the series is to make available bibliographic materials that are relevant both as teaching material and for philosophical inquiry itself.
The notion of understanding occupies an increasingly prominent place in contemporary epistemology, philosophy of science, and moral theory. A central and ongoing debate about the nature of understanding is how it relates to the truth. In a series of influential contributions, Catherine Elgin has used a variety of familiar motivations for antirealism in philosophy of science to defend a non-factive theory of understanding. Key to her position are: (i) the fact that false theories can contribute to the upwards trajectory of scientific understanding, and (ii) the essential role of inaccurate idealisations in scientific research. Using Elgin’s arguments as a foil, I show that a strictly factive theory of understanding has resources with which to offer a unified response to both the problem of idealisations and the role of false theories in the upwards trajectory of scientific understanding. Hence, strictly factive theories of understanding are viable notwithstanding these forceful criticisms.
We were slightly concerned, upon reading Eric Winsberg, Jason Brennan, and Chris Surprenant’s reply to our paper “Were Lockdowns Justified? A Return to the Facts and Evidence”, that they may have fundamentally misunderstood the nature of our argument. We therefore issue the following clarification, along with a comment on our motivations for writing such a piece, for the interested reader.
Models not only represent but may also influence their targets in important ways. While models’ ability to influence outcomes has been studied in the context of economic models, often under the label ‘performativity’, we argue that this phenomenon also pertains to epidemiological models, such as those used for forecasting the trajectory of the Covid-19 pandemic. After identifying three ways in which a model by the Covid-19 Response Team at Imperial College London may have influenced scientific advice, policy, and individual responses, we consider the implications of epidemiological models’ performative capacities. We argue, first, that performativity may impair models’ ability to successfully predict the course of an epidemic; but second, that it may provide an additional sense in which these models can be successful, namely by changing the course of an epidemic.
Carrie Figdor argues for literalism, a semantic claim about psychological predicates, on the basis of a scientific claim about the nature of psychological properties. I argue that her scientific claim is based on controversial interpretations of scientific modelling, and that even if it were correct it would not justify her claims that psychological predicates are undergoing radical conceptual change.
From the beginning of time, humans believed they were the center of the universe. Such important beings could be nowhere else than at the very epicenter of existence, with all other things revolving around them. Was this an arrogant position? Only time will tell. What is certain is that, just as some people were once so certain of their significance, aeons later other people became too confident in their unimportance. In such a context, the Earth quickly lost its privileged position at the center of the universe and, along with this, the ideas of absolute motion and time became unbearable for the modern intellect, which saw nothing but relativeness in everything. After years of accepting the ideas of relativity at face value without doubting them, scientists are now mature enough to start questioning everything as any true scientist would do, including their own basic assumptions. And one would be surprised to see that the basic assumptions of today’s science in physics (and cosmology alike) rest on philosophically dogmatic beliefs that humans are nothing more than insignificant specks of dust. These specks cannot be in any privileged position in the cosmos, nor can their frames of reference. These specks cannot be living on a planet that is not moving while everything else is. There can be no hint of our importance whatsoever. Hence the Copernican principle that has poisoned scientific thinking for aeons now. When one analyzes the evidence offered by science to support the idea of relativity, though, one sees that the same evidence can more easily and simply fit into a model where the Earth stands still. Yet scientists preferred to revamp all of physics by introducing the totally unintuitive ideas of relativity – including the absolute limit of the speed of light – rather than even admit the possibility of humans having any notion of a central position in the cosmos. True scientists, though, should examine all possible explanations, including those that do not fit their beliefs. To the dismay of the many modern scientists who blindly accept the validity of the theory of relativity at face value, the movement towards a true and honest post-modern science where all assumptions are questioned necessarily passes through a place where the Earth we live on stands still. Non-relativistic explanations of the Michelson-Morley experiment, related to a motionless Earth or to the ether, are viable alternatives that deserve their place in modern scientific thought.
Quantum mechanics was reformulated as an information theory involving a generalized kind of information, namely quantum information, at the end of the last century. Quantum mechanics is the most fundamental physical theory, referring to everything that claims to be physical. Any physical entity turns out to be quantum information in the final analysis. A quantum bit is the unit of quantum information, and it is a generalization of the unit of classical information, the bit, just as quantum information itself is a generalization of classical information. Classical information refers to finite series or sets, while quantum information refers to infinite ones. Quantum information, like classical information, is a dimensionless quantity. Quantum information can be considered a “bridge” between the mathematical and the physical. The standard and common scientific epistemology takes for granted the gap between mathematical models and physical reality. The conception of truth as adequacy is what is able to carry “over” that gap. One should therefore explain how quantum information, being a continuous transition between the physical and the mathematical, may refer to truth as adequacy and thus to the usual scientific epistemology and methodology. If quantum information is the overall substance of anything claiming to be physical, one can ask how different, dimensional physical quantities appear. Quantum information can be discussed as the counterpart of action: quantum information is what is conserved, while action is what is changed, in virtue of Emmy Noether’s fundamental theorems (1918). The gap between mathematical models and physical reality, which needs truth as adequacy to be overcome, is replaced by the openness of choice. That openness in turn can be interpreted as the openness of the present, a different concept of truth recalling Heidegger’s notion of “unconcealment” (ἀλήθεια). Quantum information, as what is conserved, can be thought of as the conservation of that openness.
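For reference (standard textbook material, not drawn from the paper): the sense in which a qubit generalizes a bit can be written as $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ with $\alpha, \beta \in \mathbb{C}$ and $|\alpha|^2 + |\beta|^2 = 1$; the classical bit corresponds to the two special cases $(\alpha, \beta) = (1, 0)$ and $(\alpha, \beta) = (0, 1)$, whereas a qubit ranges over a continuum of such amplitudes.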
This paper adds to the philosophical literature on mechanistic explanation by elaborating two related explanatory functions of idealisation in mechanistic models. The first function involves explaining the presence of structural/organizational features of mechanisms by reference to their role as difference-makers for performance requirements. The second involves tracking counterfactual dependency relations between features of mechanisms and features of mechanistic explanandum phenomena. To make these functions salient, we relate our discussion to an exemplar from systems biological research on the mechanism for countering heat shock (the heat shock response system) in Escherichia coli bacteria. This research also reinforces a more general lesson: ontic constraint accounts in the literature on mechanistic explanation provide insufficiently informative normative appraisals of mechanistic models. We close by outlining an alternative view on the explanatory norms governing mechanistic representation.
The lack of explainability of machine learning (ML) techniques raises operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application treated as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation carried out for a mathematical and causal model of a physical phenomenon. The major difference is that an ML method does not claim to represent a causal relation between the input parameters, which may moreover be high-dimensional, and the output parameters. We show in this article the value of drawing epistemological distinctions between the different epistemic functions of a model, on the one hand, and between the epistemic function and the use of a model, on the other. Finally, the last part of this article presents our ongoing work on the evaluation of an explanation, which may be more persuasive than informative and may thereby raise ethical problems.
This book analyses the impact computerization has had on contemporary science and explains the origins, technical nature, and epistemological consequences of the current decisive interplay between technology and science: an intertwining of formalism, computation, data acquisition, data, and visualization, and how these factors have led to the spread of simulation models since the 1950s. Using historical, comparative, and interpretative case studies from a range of disciplines, with a particular emphasis on the case of plant studies, the author shows how and why computers, data treatment devices, and programming languages have occasioned a gradual but irresistible and massive shift from mathematical models to computer simulations.
This chapter elaborates and develops the thesis originally put forward by Mary Morgan (2005) that some mathematical models may surprise us, but that none of them can completely confound us, i.e. leave us unable to produce an ex post theoretical understanding of the outcome of the model calculations. This chapter objects that what is certainly true of classical mathematical models is not, however, true of pluri-formalized simulations with multiple axiomatic bases. It thus proposes to show that - and why - some of the computational simulations that are now booming in the sciences not only surprise us but also confound us. To do so, it also shows that the concept of weak emergence, initially due to Mark A. Bedau (1997), needs to be elaborated and articulated with some new precision.
One of the most conspicuous features of contemporary modeling practices is the dissemination of mathematical and computational methods across disciplinary boundaries. We study this process through two applications of the Ising model: the Sherrington-Kirkpatrick model of spin glasses and the Hopfield model of associative memory. The Hopfield model successfully transferred some basic ideas and mathematical methods originally developed within the study of magnetic systems to the field of neuroscience. As an analytical resource we use Paul Humphreys's discussion of computational and theoretical templates. We argue that model templates are crucial for intra- and interdisciplinary theoretical transfer. A model template is an abstract conceptual idea associated with particular mathematical forms and computational methods.
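To make the kind of template transfer described above concrete, here is a minimal, hypothetical sketch (not taken from the paper or from Humphreys) of a Hopfield associative memory in Python: the binary ±1 units and Hebbian couplings reuse the mathematical form of Ising spins and exchange interactions, and asynchronous updates never increase the Ising-style energy E = -(1/2) s^T W s.

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product storage; each row of `patterns` is a +/-1 vector."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)  # no self-coupling, as in the Ising template
    return W

def recall(W, state, steps=200, seed=0):
    """Asynchronous single-unit updates, the analogue of zero-temperature spin flips."""
    rng = np.random.default_rng(seed)
    s = state.astype(float).copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1                 # corrupt one "spin"
print(recall(W, noisy))        # typically recovers the stored pattern
```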
The behavior/structure methodological dichotomy as a locus of scientific inquiry is closely related to the issue of modeling and theory change in scientific explanation. Given that the traditional tension between structure and behavior in scientific modeling is likely here to stay, considering the relevant precedents in the history of ideas could help us better understand this theoretical struggle. This better understanding might open up unforeseen possibilities and new instantiations, particularly concerning the proposed technological modification of the human condition. The structure of this paper is twofold. First, the contributions of three philosophers better known in the humanities than in the study of science proper are laid out. The key theoretical notions interweaving the whole narrative are those of mechanization, constructability, and simulation. They provide the conceptual bridge between these classical thinkers and the following section, in which a panoramic view of three significant experimental approaches in contemporary scientific research is displayed, suggesting that their undisclosed ontological premises have deep roots in the Western tradition of the humanities. This ontological lock between core humanist ideals and recent research in biology and nanoscience is ultimately suggested as responsible for pervasively altering what is canonically understood as “human”.
The widely used notion of activity is increasingly present in computer science. However, because this notion is used in specific contexts, it has become vague. Here, the notion of activity is scrutinized in various contexts and, accordingly, put in perspective. It is discussed through four scientific disciplines: computer science, biology, economics, and epistemology. The definition of activity usually used in simulation is extended to new qualitative and quantitative definitions. In computer science, biology, and economics, the new simulation-activity definition is first applied critically; activity is then discussed more generally. In epistemology, activity is discussed, in a prospective way, as a possible framework for models of human beliefs and knowledge.
This is the introduction to the special issue of the Spanish journal Ágora-Papeles de Filosofía (31/2, 2012) devoted to new Ibero-American contributions to metatheoretical structuralism.
Some philosophers of science – the present author included – appeal to fiction as an interpretation of the practice of modeling. This raises the specter of an incompatibility with realism, since fiction-making is essentially non-truth-regulated. I argue that the prima facie conflict can be resolved in two ways, each involving a distinct notion of fiction and a corresponding formulation of realism. The main goal of the paper is to describe these two packages. Toward the end I comment on how to choose between them.
In physics, complexity first enters through statistical physics, then appears in the study of collective behaviors in condensed matter and soft matter, and most recently in the new theory of change. Reductionism and emergence are not opposing approaches but complementary ones.
In this paper I challenge Paolo Palmieri’s reading of the Mach-Vailati debate on Archimedes’s proof of the law of the lever. I argue that the actual import of the debate concerns the possible epistemic (as opposed to merely pragmatic) role of mathematical arguments in empirical physics, and that, construed in this light, Vailati has the upper hand. This claim is defended by showing that Archimedes’s proof of the law of the lever is not a way of appealing to a non-empirical source of information, but a way of explicating the mathematical structure that can represent the empirical information at our disposal in the most general way.
Now that complex agent-based models and computer simulations have spread across economics and the social sciences - as in most sciences of complex systems - epistemological puzzles (re)emerge. We introduce new epistemological tools so as to show to what precise extent each author is right when he focuses on some empirical, instrumental, or conceptual significance of his model or simulation. By distinguishing between models and simulations, between types of models, between types of computer simulations, and between types of empiricity, section 2 provides conceptual tools to explain the rationale of the diverse epistemological positions presented in section 1. Finally, we claim that careful attention to the real multiplicity of the denotational powers of the symbols at stake, and then to the implicit routes of reference operated by models and computer simulations, is necessary to determine, in each case, the proper epistemic status and credibility of a given model and/or simulation.
In a growing number of scientific fields - the natural sciences, the human sciences, as well as the sciences of artefacts - simulation no longer plays the role of a temporary substitute for a theory still in gestation because not yet elaborated; that is, it no longer systematically plays the role of a provisional model or of a schema serving to condense measurements. This is because it does not have the nature of a graphic, linguistic, or mathematical sign. On the contrary, it increasingly plays the role of a detailed replication of the real: it is an intuition reconstructed with accuracy but without rationalizing and generalizing abstraction, and thus without iconic transfiguration. One of the limits of Bachelard's dialectical epistemology is that it does not provide the concepts needed to think these contemporary simulation practices with sufficient distinctness. This is why the author proposes the idea that contemporary science now develops not according to the materialism/rationalism dialectic alone, but according to the triad rationalism/computationalism/materialism.
In this comment on the work of Ulises Moulines I shall not address the interesting analysis of the ontological commitments on which the treatment of the so-called «data models» depends, nor shall I debate the general metaphysical principles proposed in his approach, adopting an experimentalist, instrumentalist, anti-realist, positivist, or empirical stance. I shall focus on the last part of his article, in which he elaborates on the links between Wesley Salmon's causalist approach and the structuralist analysis of explanation viewed as theoretical embedding, as he relates it to the structural analysis of theoretical terms in light of a certain general shared understanding of epistemology's job.