The focus in the literature on scientific explanation has shifted in recent years towards model-based approaches. The idea that there are simple and true laws of nature has met with objections from philosophers such as Nancy Cartwright (1983) and Paul Teller (2001), and this has made a strictly Hempelian D-N style explanation largely irrelevant to the explanatory practices of science (Hempel & Oppenheim, 1948). Much of science does not involve subsuming particular events under laws of nature. It is increasingly recognized that science across the disciplines is to some degree a patchwork of scientific models, with different methods, strategies, and with varying degrees of successful prediction and explanation. And so accounts of scientific explanation have reflected this change of perspective and model-based approaches have flourished in the explanation literature (Batterman, 2002b; Bokulich, 2008; Craver, 2006; Woodward, 2003).
Disagreement about how best to think of the relation between theories and the realities they represent has a longstanding and venerable history. We take up this debate in relation to the free energy principle (FEP) - a contemporary framework in computational neuroscience, theoretical biology and the philosophy of cognitive science. The FEP is very ambitious, extending from the brain sciences to the biology of self-organisation. In this context, some find apparent discrepancies between the map (the FEP) and the territory (target systems) a compelling reason to defend instrumentalism about the FEP. We take this to be misguided. We identify an important fallacy made by those defending instrumentalism about the FEP. We call it the literalist fallacy: this is the fallacy of inferring the truth of instrumentalism based on the claim that the properties of FEP models do not literally map onto real-world, target systems. We conclude that scientific realism about the FEP is a live and tenable option.
It is sometimes said that simulation can serve as epistemic substitute for experimentation. Such a claim might be suggested by the fast-spreading use of computer simulation to investigate phenomena not accessible to experimentation (in astrophysics, ecology, economics, climatology, etc.). But what does that mean? The paper starts with a clarification of the terms of the issue and then focuses on two powerful arguments for the view that simulation and experimentation are ‘epistemically on a par’. One is based on the claim that, in experimentation, no less than in simulation, it is not the system under study that is manipulated but a system that ‘stands-in’ for it. The other one highlights the pervasive use of models in experimentation. It will be argued that these arguments, as compelling as they might seem, are each based on a mistaken interpretation of experimentation and that, far from simulation and experimentation being epistemically on a par, they do not have the same epistemic function, do not produce the same kind of epistemic results.
What is a model? Surprisingly, in philosophical texts, this question is asked (sometimes), but almost never answered. Instead of a general answer, usually, some classification of models is considered. The broadest possible definition of modeling could sound as follows: a model is anything that is (or could be) used, for some purpose, in place of something else. If the purpose is “answering questions”, then one has a cognitive model. Could such a broad definition be useful? Isn't it empty? Can one derive useful consequences from it? I'm trying to show that there are a lot of them.
This paper represents a philosophical experiment inspired by the formalist philosophy of mathematics. In the formalist picture of cognition, the principal act of knowledge generation is represented as tentative postulation – as introduction of a new knowledge construct followed by exploration of the consequences that can be derived from it. Depending on the result, the new construct may be accepted as normative, rejected, modified, etc. Languages and means of reasoning are generated and selected in a similar process. In the formalist picture, all kinds of “truth” are detected intra-theoretically. Some knowledge construct may be considered as “true”, if it is accepted in a particular normative knowledge system. Some knowledge construct may be considered as persistently true, if it remains invariant during the evolution of some knowledge system for a sufficiently long time. And, if you wish, you may consider some knowledge construct as absolutely true, if you do not intend to abandon it in your knowledge system. And finally, in the formalist picture, all kinds of ontologies generated by humans can be demystified by reconstructing them within the basic solipsist ontology simply as hypothetical branches of it.
First, I propose a new argument in favor of the Dappled World perspective introduced by Nancy Cartwright. There are systems for which detailed models can't exist in the natural world. And this has nothing to do with the limitations of human minds or technical resources. The limitation is built into the very principle of modeling: we are trying to replace some system by another one. In full detail, this may be impossible. Secondly, I'm trying to refine the Dappled World perspective by applying the correct distinction between models and theories. At the level of models, because of the above-mentioned limitations, we will always have only a patchwork of models, each very restricted in its application scope. And at the level of theories, we will never have a single complete Theory of Everything that would allow us, without additional postulates, to generate all the models we may need for surviving in this world.
This paper constitutes a radical departure from the existing philosophical literature on models, modeling-practices, and model-based science. I argue that the various entities and practices called 'models' and 'modeling-practices' are too diverse, too context-sensitive, and serve too many scientific purposes and roles to allow for a general philosophical analysis. From this recognition an alternative view emerges that I shall dub model anarchism.
In this paper I argue against a deflationist view that as representational vehicles symbols and models do their jobs in essentially the same way. I argue that symbols are conventional vehicles whose chief function is denotation while models are epistemic vehicles whose chief function is showing what their targets are like in the relevant aspects. It is further pointed out that models usually do not rely on similarity or some such relations to relate to their targets. For that referential relation they rely instead on symbols (names and labels) given to them and their parts. And a Goodmanian view on pictures of fictional characters reveals the distinction between symbolic and model representations.
Contemporary action theory is generally concerned with giving theories of action ontology. In this paper, we make the novel proposal that the standard view in action theory—the Causal Theory of Action—should be recast as a “model”, akin to the models constructed and investigated by scientists. Such models often consist in fictional, hypothetical, or idealized structures, which are used to represent a target system indirectly via some resemblance relation. We argue that recasting the Causal Theory as a model can not only accomplish the goals of causal theorists, but also give the theory greater flexibility in responding to common objections.
Maps and mapping raise questions about models and modeling in science. This chapter archives map discourse in the founding generation of philosophers of science (e.g., Rudolf Carnap, Nelson Goodman, Thomas Kuhn, and Stephen Toulmin) and in the subsequent generation (e.g., Philip Kitcher, Helen Longino, and Bas van Fraassen). In focusing on these two original framing generations of philosophy of science, I intend to remove us from the heat of contemporary discussions of abstraction, representation, and practice of science and thereby see in a more distant and neutral light the many productive ways in which maps can stand in analytically for scientific theories and models. The chapter concludes by complementing the map analogy – i.e., a scientific theory is a map of the world – with a model analogy, viz., a scientific model is a vehicle for understanding.
The practice of scientific modelling often resorts to hypothetical, false, idealised, targetless, partial, generalised, and other types of modelling that appear to have at least partially non-actual targets. In this paper, I will argue that we can avoid a commitment to non-actual targets by sketching a framework where models are understood as having networks of possibilities as their targets. This raises a further question: what are the truthmakers for the modal claims that we can derive from models? I propose that we can find truthmakers for the modal claims derived from models in actuality, even in the case of supposedly non-actual targets. I then put this framework to use by examining a case study concerning the modelling of superheavy elements.
A well known conception of axiomatization has it that an axiomatized theory must be interpreted, or otherwise coordinated with reality, in order to acquire empirical content. An early version of this account is often ascribed to key figures in the logical empiricist movement, and to central figures in the early “formalist” tradition in mathematics as well. In this context, Reichenbach’s “coordinative definitions” are regarded as investing abstract propositions with empirical significance. We argue that over-emphasis on the abstract elements of this approach fails to appreciate a rich tradition of empirical axiomatization in the late nineteenth and early twentieth centuries, evident in particular in the work of Moritz Pasch, Heinrich Hertz, David Hilbert, and Reichenbach himself. We claim that such over-emphasis leads to a misunderstanding of the role of empirical facts in Reichenbach’s approach to the axiomatization of a physical theory, and of the role of Reichenbach’s coordinative definitions in particular.
This paper discusses modeling from the artifactual perspective. The artifactual approach conceives models as erotetic devices. They are purpose-built systems of dependencies that are constrained in view of answering a pending scientific question, motivated by theoretical or empirical considerations. In treating models as artifacts, the artifactual approach is able to address the various languages of sciences that are overlooked by the traditional accounts that concentrate on the relationship of representation in an abstract and general manner. In contrast, the artifactual approach focuses on epistemic affordances of different kinds of external representational and other tools employed in model construction. In doing so, the artifactual account gives a unified treatment of different model types as it circumvents the tendency of the fictional and other representational approaches to separate model systems from their “model descriptions”.
The epistemic value of models has traditionally been approached from a representational perspective. This paper argues that the artifactual approach evades the problem of accounting for representation and better accommodates the modal dimension of modeling. From an artifactual perspective, models are viewed as erotetic vehicles constrained by their construction and available representational tools. The modal dimension of modeling is approached through two case studies. The first portrays mathematical modeling in economics, while the other discusses the modeling practice of synthetic biology, which exploits and combines models in various modes and media. Neither model intends to represent any actual target system. Rather, they are constructed to study possible mechanisms through the construction of a model system with built-in dependencies.
Several philosophers of science claim that scientific toy models afford knowledge of possibility, but answers to the question of why toy models can be expected to competently play this role are scarce. The main line of reply is that toy models support possibility claims insofar as they are credible. I raise a challenge for this credibility-thesis, drawing on a familiar problem for imagination-based modal epistemologies, and argue that it remains unanswered in the current literature. The credibility-thesis has a long way to go if it is to account for the epistemic merits of toy models.
Over the last decades, network-based approaches have become highly popular in diverse fields of biology, including neuroscience, ecology, molecular biology and genetics. While these approaches continue to grow very rapidly, some of their conceptual and methodological aspects still require a programmatic foundation. This challenge particularly concerns the question of whether a generalized account of explanatory, organisational and descriptive levels of networks can be applied universally across biological sciences. To this end, this highly interdisciplinary theme issue focuses on the definition, motivation and application of key concepts in biological network science, such as explanatory power of distinctively network explanations, network levels, and network hierarchies.
Three metascientific concepts that have been object of philosophical analysis are the concepts of law, model and theory. The aim of this article is to present the explication of these concepts, and of their relationships, made within the framework of Sneedean or Metatheoretical Structuralism (Balzer et al. 1987), and of their application to a case from the realm of biology: Population Dynamics. The analysis carried out will make it possible to support, contrary to what some philosophers of science in general and of biology in particular hold, the following claims: a) there are "laws" in biological sciences, b) many of the heterogeneous and different "models" of biology can be accommodated under some "theory", and c) this is exactly what confers great unifying power to biological theories.
Ecological-enactive approaches to cognition aim to explain cognition in terms of the dynamic coupling between agent and environment. Accordingly, cognition of one’s immediate environment (which is sometimes labeled “basic” cognition) depends on enaction and the picking up of affordances. However, ecological-enactive views supposedly fail to account for what is sometimes called “higher” cognition, i.e., cognition about potentially absent targets, which therefore can only be explained by postulating representational content. This challenge levelled against ecological-enactive approaches highlights a putative explanatory gap between basic and higher cognition. In this paper, we examine scientific cognition—a paradigmatic case of higher cognition—and argue that it shares fundamental features with basic cognition, for enaction and affordance selection are central to the scientific enterprise. Our argument focuses on modeling, and on how models promote scientific understanding. We base our argument on a non-representational account of scientific understanding and on material engagement theory, whereby models are conceived as material objects designed for scientific engagements. Having done so, we conclude that the explanatory gap is significantly less threatening to the ecological-enactive approach than it might appear.
Theoretical models are widely held as sources of knowledge of reality. Imagination is vital to their development and to the generation of plausible hypotheses about reality. But how can imagination, which is typically held to be completely free, effectively instruct us about reality? In this paper I argue that the key to answering this question is in constrained uses of imagination. More specifically, I identify make-believe as the right notion of imagination at work in modelling. I propose the first overarching taxonomy of types of constraints on scientific imagination that enables knowledge of reality. And I identify two main kinds of knowledge enabled by models, knowledge of the imaginary scenario specified by models and knowledge of reality.
Discussion of modeling within philosophy of science has focused on how models, understood as finished products, represent the world. This approach has some issues accounting for the value of modeling in situations where there are controversies as to which should be the object of representation. In this work I show that a historical analysis of modeling complements the aforementioned representational program, since it allows us to examine processes of integration of analogies that play a role in the generation of criteria of relevance, which are important for the configuration of the object of research. This, in turn, shows that there are norms in modeling practices whose historical reconstruction is relevant for their philosophical analysis.
Many philosophers have drawn parallels between scientific models and fictions. In this paper I will be concerned with a recent version of the analogy, which compares models to the imagined characters of fictional literature. Though versions of the position differ, the shared idea is that modeling essentially involves imagining concrete systems analogously to the way that we imagine characters and events in response to works of fiction. Advocates of this view argue that imagining concrete systems plays an ineliminable role in the practice of modeling that cannot be captured by other accounts. The approach thus leaves open what we should say about the ontological status of model-systems, and here advocates differ among themselves, defending a variety of realist or anti-realist positions. I argue that this debate over the ontological status of model-systems is misguided. If model-systems are the kinds of objects fictional realists posit, they can play no role in explaining the epistemology of modeling for an advocate of this approach. So they are at best superfluous. Defenders of the approach should focus on developing an account of the epistemological role of imagining model-systems.
In recent decades, philosophers of science have devoted considerable efforts to understand what models represent. One popular position is that models represent fictional situations. Another position states that, though models often involve fictional elements, they represent real objects or scenarios. Though these two positions may seem to be incompatible, I believe it is possible to reconcile them. Using a threefold distinction between different signs proposed by Peirce, I develop an argument based on a proposal recently made by Kralemann and Lattman (in Synthese 190:3397–3420, 2013) that shows that the two aforementioned positions can be reconciled by distinguishing different ways in which a model representation can be used. In particular, on the basis of Peirce’s distinction between icons, indices and symbols, I argue that models can sometimes function as icons, sometimes as indexes and sometimes as symbols, depending on the context in which they are considered and the use that they are developed for, because they all have iconic, indexical and symbolic features. In addition, I show that conceiving models as signs enables us to develop an account of scientific representation that meets the main desiderata that Shech (in Synthese 192:3463–3485, 2015) presents.
The aim of this dissertation is to comprehensively study various robustness arguments proposed in the literature from Levins to Lloyd as well as the opposition offered to them and pose enquiry into the degree of epistemic virtue that they provide to the model prediction results with respect to climate science and modeling. Another critical issue that this dissertation strives to examine is that of the actual epistemic notion that is operational when scientists and philosophers appeal to robustness. In attempting to explicate this idea, the discussion turns to arguments provided by Schupbach, who completely rejects probabilistic independence in favour of explanatory reasoning, Stegenga and Menon, who still see some value in probabilistic independence, and Winsberg, who applies Schupbach’s approach to climate science, going beyond models to involve multi-modal evidence. After an exhaustive discussion on these arguments, this dissertation attempts to provide a thorough and updated notion of robustness in climate modeling and climate science.
It is plausible to think that, in order to actively employ models in their inquiries, scientists should be aware of their existence. The question is especially puzzling for realists in the case of abstract models, since it is not obvious how this is possible. Interestingly, though, this question has drawn little attention in the relevant literature. Perhaps the most obvious choice for a realist is appealing to intuition. In this paper, I argue that if scientific models were abstract entities, one could not be aware of them intuitively. I deploy my argumentation by building on Chudnoff’s elaboration on intuitive awareness. Furthermore, I shortly discuss some other options to which realists could turn in order to address the question of awareness.
The work done in the philosophy of modeling by Vaihinger (1876), Craik (1943), Rosenblueth and Wiener (1945), Apostel (1960), Minsky (1965), Klaus (1966) and Stachowiak (1973) is still almost completely neglected in the mainstream literature. However, this work seems to contain original ideas worth discussing. For example, the idea that diverse functions of models can be better structured as follows: in fact, models perform only a single function – they replace their target systems, but for different purposes. Another example: the idea that all of cognition is cognition in models or by means of models. Even perception, reflexes and instincts (animal and human) can be best analyzed as modeling. The paper presents an analysis of the above-mentioned work.
This book analyses the impact computerization has had on contemporary science and explains the origins, technical nature and epistemological consequences of the current decisive interplay between technology and science: an intertwining of formalism, computation, data acquisition, data and visualization and how these factors have led to the spread of simulation models since the 1950s. Using historical, comparative and interpretative case studies from a range of disciplines, with a particular emphasis on the case of plant studies, the author shows how and why computers, data treatment devices and programming languages have occasioned a gradual but irresistible and massive shift from mathematical models to computer simulations.
The common conception of justice as reciprocity seemingly is inapplicable to relations between non-overlapping generations. This is a challenge also to John Rawls’s theory of justice as fairness. This text responds to this by way of reinterpreting and developing Rawls’s theory. First, by examining the original position as a model, some revisions of it are shown to be wanting. Second, by drawing on the methodology of constructivism, an alternative solution is proposed: an amendment to the primary goods named ‘sustainability of values’. This revised original position lends support to intergenerational justice as fairness.
The general aim of this article is to carry out a reconstruction of the theory of Population Dynamics (DP) in Ecology, according to Castle’s (2001) general stance with regard to the semantic view of theories, but doing it within the framework of metatheoretical structuralism. Thus, we will first identify Population Dynamics’ basic theory-element: its core K(DP) – with the class of potential models, the class of models (through the identification of its fundamental law) and the class of partial potential models (though leaving aside the identification of its constraints and its intertheoretical links) –, and its domain of intended applications I(DP). Then, we will establish the general guiding lines of its theory-net, developing in some detail one of its main lines of specialization – namely, that related to the so-called “continuous growing” of the considered populations –, with DP’s principal “models”, and leave the development of the other of its main lines of specialization – namely, that related to the so-called “discrete growing” of the considered populations – for a future publication.
I argue that verbal models should be included in a philosophical account of the scientific practice of modelling. Weisberg (2013) has directly opposed this thesis on the grounds that verbal structures, if they are used in science, only merely describe models. I look at examples from Darwin's On the Origin of Species (1859) of verbally constructed narratives that I claim model the general phenomenon of evolution by natural selection. In each of the cases I look at, a particular scenario is described that involves at least some fictitious elements but represents the salient causal components of natural selection. I stress the importance of prioritising observation of scientific practice for the philosophy of modelling and I suggest that there are other likely model types that are excluded from philosophical accounts.
In this paper, I characterize visual epistemic representations as concrete two- or three-dimensional tools for conveying information about aspects of their target systems or phenomena of interest. I outline two features of successful visual epistemic representation: that the vehicle of representation contain sufficiently accurate information about the phenomenon of interest for the user’s purpose, and that it convey this information to the user in a manner that makes it readily available to her. I argue that actual epistemic representation may involve tradeoffs between these and is successful to the extent that they are present.
Science continually contributes new models and rethinks old ones. The way inferences are made is constantly being re-evaluated. The practice and achievements of science are both shaped by this process, so it is important to understand how models and inferences are made. But, despite the relevance of models and inference in scientific practice, these concepts still remain controversial in many respects. The attempt to understand the ways models and inferences are made basically opens two roads. The first one is to produce an analysis of the role that models and inferences play in science. The second one is to produce an analysis of the way models and inferences are constructed, especially in the light of what science tells us about our cognitive abilities. The papers collected in this volume go both ways.
The book answers long-standing questions on scientific modeling and inference across multiple perspectives and disciplines, including logic, mathematics, physics and medicine. The different chapters cover a variety of issues, such as the role models play in scientific practice; the way science shapes our concept of models; ways of modeling the pursuit of scientific knowledge; the relationship between our concept of models and our concept of science. The book also discusses models and scientific explanations; models in the semantic view of theories; the applicability of mathematical models to the real world and their effectiveness; the links between models and inferences; and models as a means for acquiring new knowledge. It analyzes different examples of models in physics, biology, mathematics and engineering. Written for researchers and graduate students, it provides a cross-disciplinary reference guide to the notion and the use of models and inferences in science.
This paper examines the adequacy of causal graph theory as a tool for modeling biological phenomena and formalizing biological explanations. I point out that the causal graph approach reaches its limits when it comes to modeling biological phenomena that involve complex spatial and structural relations. Using a case study from molecular biology, DNA-binding and -recognition of proteins, I argue that causal graph models fail to adequately represent and explain causal phenomena in this field. The inadequacy of these models is due to their failure to include relevant spatial and structural information in a way that does not render the model non-explanatory, unmanageable, or inconsistent with basic assumptions of causal graph theory.
In the quest for a new social turn in philosophy of science, exploring the prospects of a Vygotskian perspective could be of significant interest, especially due to his emphasis on the role of culture and socialisation in the development of cognitive functions. However, a philosophical reassessment of Vygotsky's ideas in general has yet to be done. As a step in this direction, I attempt to elaborate an approach to scientific representations by drawing inspiration from Vygotsky. Specifically, I work upon Vygotsky’s understanding of the nature and function of concepts, mediation and the zone of proximal development. I maintain that scientific representations mediate scientific cognition in a tool-like fashion (like Vygotsky’s signs). Scientific representations are consciously acquired through deliberate inquiry in a specific context, where they become part of a whole system, reflecting the social practices related to scientific inquiry, just as scientific concepts do in Vygotsky’s understanding. They surrogate the real processes or effects under study, by conveying some of the features of the represented systems. Vygotsky’s solution to the problem of the ontological status of concepts points to an analogous understanding of abstract models, which should be regarded neither as fictions nor as abstract objects. I elucidate these views by using the examples of the double-helix model of DNA structure and the development of our understanding of the photoelectric effect.
Two metascientific concepts that have been ― and still are ― object of philosophical analysis are the concepts of model and theory. But while the concept of scientific theory was one of the concepts to which philosophers of science devoted most attention during the 20th century, it is only in recent decades that the concept of scientific model has come to occupy a central position in philosophical reflection. However, it has done so in such a way that, at present, as Jim Bogen states on the back cover of the book Scientific Models in the Philosophy of Science, by Daniela Bailer-Jones, “[t]he standard philosophical literature on the role of models in scientific reasoning is voluminous, disorganized, and confusing”. In spite of this, one of the axes that would allow us to organize at least part of this literature, and with which Bailer-Jones’ book closes, is that which is identified as one of the “contemporary philosophical issues: how theories and models relate to each other” (Bailer-Jones 2009, p. 208). That is why, in this introduction to the special issues of Metatheoria devoted to the topic of “Models and Theories in Biology”, we will present the main advances that have been made in the philosophical analysis of the concepts of model and theory in general and in biology in particular, and we will also do the same with the answers that have been given to the problem of “how theories and models relate to each other”.
In an attempt to determine the epistemic status of computer simulation results, philosophers of science have recently explored the similarities and differences between computer simulations and experiments. One question that arises is whether, and if so when, simulation results constitute novel empirical data. It is often supposed that computer simulation results could never be empirical or novel because simulations never interact with their targets and cannot go beyond their programming. This paper argues against this position by examining whether, and under what conditions, the features of empiricality and novelty can be displayed by computer simulation data. I show that, to the extent that certain familiar measurement results have these features, so can some computer simulation results.
When comparing alternative courses of action, modern military decision makers often must consider both the military effectiveness and the ethical consequences of the available alternatives. The basis, design, calibration, and performance of a principles-based computational model of ethical considerations in military decision making are reported in this article. The relative ethical violation (REV) model comparatively evaluates alternative military actions based upon the degree to which they violate contextually relevant ethical principles. It is based on a set of specific ethical principles deemed by philosophers and ethicists to be relevant to military courses of action. To calibrate the REV model, a survey of expert and non-expert human decision makers was conducted regarding the relative ethical violation of alternative actions for a set of specially designed calibration scenarios. Perhaps unsurprisingly, the survey showed that people, even experts, disagreed greatly amongst themselves regarding the scenarios' ethical considerations. Despite this disagreement, two significant results emerged. First, after calibration the REV model performed very well in terms of replicating the ethical assessments of human experts for the calibration scenarios. The REV model outperformed an earlier model based on tangible consequences rather than ethical principles; that earlier model performed comparably to human experts, the experts outperformed human non-experts, and the non-experts outperformed random selection of actions. All of these performance comparisons were measured quantitatively and confirmed with suitable statistical tests. Second, although humans tended to value some principles over others, none of the ethical principles involved, not even the principle of not harming civilians, completely overshadowed all of the other principles.
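The core idea of principle-weighted comparison described in this abstract can be illustrated with a toy sketch. This is not the calibrated REV model itself; the principle names, weights, and ratings below are purely hypothetical, standing in for values that would in practice come from expert surveys and calibration scenarios.

```python
# Toy sketch of principle-weighted ethical scoring (hypothetical, not the actual REV model).
# Each candidate action is rated 0..1 on how strongly it violates each ethical
# principle; actions are ranked by a weighted sum of violations (lower = better).

PRINCIPLES = ["harm_to_civilians", "proportionality", "honesty"]  # illustrative names only


def relative_ethical_violation(ratings, weights):
    """Weighted sum of per-principle violation ratings for one action."""
    return sum(weights[p] * ratings[p] for p in PRINCIPLES)


def rank_actions(actions, weights):
    """Return action names ordered from least to most ethically violating."""
    return sorted(actions, key=lambda name: relative_ethical_violation(actions[name], weights))


if __name__ == "__main__":
    # Hypothetical weights, standing in for values fitted to expert survey data.
    weights = {"harm_to_civilians": 0.6, "proportionality": 0.3, "honesty": 0.1}
    actions = {
        "airstrike": {"harm_to_civilians": 0.8, "proportionality": 0.5, "honesty": 0.0},
        "blockade":  {"harm_to_civilians": 0.3, "proportionality": 0.4, "honesty": 0.0},
        "negotiate": {"harm_to_civilians": 0.0, "proportionality": 0.1, "honesty": 0.2},
    }
    print(rank_actions(actions, weights))
```

Note that no single weight dominates the ranking outright, echoing the abstract's second finding that even the principle of not harming civilians did not completely overshadow the others.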
Modeling is central to scientific inquiry. It also depends heavily upon the imagination. In modeling, scientists seem to turn their attention away from the complexity of the real world to imagine a realm of perfect spheres, frictionless planes and perfectly rational agents. Modeling poses many questions. What are models? How do they relate to the real world? Recently, a number of philosophers have addressed these questions by focusing on the role of the imagination in modeling. Some have also drawn parallels between models and fiction. This chapter examines these approaches to scientific modeling and considers the challenges they face.
This paper explains and defends the idea that metaphysical necessity is the strongest kind of objective necessity. Plausible closure conditions on the family of objective modalities are shown to entail that the logic of metaphysical necessity is S5. Evidence is provided that some objective modalities are studied in the natural sciences. In particular, the modal assumptions implicit in physical applications of dynamical systems theory are made explicit by using such systems to define models of a modal temporal logic. Those assumptions arguably include some necessitist principles.

Too often, philosophers have discussed 'metaphysical' modality (possibility, contingency, necessity) in isolation. Yet metaphysical modality is just a special case of a broad range of modalities, which we may call 'objective' by contrast with epistemic and doxastic modalities, and indeed deontic and teleological ones (compare the distinction between objective probabilities and epistemic or subjective probabilities). Thus metaphysical possibility, physical possibility and immediate practical possibility are all types of objective possibility. We should study the metaphysics and epistemology of metaphysical modality as part of a broader study of the metaphysics and epistemology of the objective modalities, on pain of radical misunderstanding. Since objective modalities are in general open to, and receive, natural scientific investigation, we should not treat the metaphysics and epistemology of metaphysical modality in isolation from the metaphysics and epistemology of the natural sciences.

In what follows, Section 1 gives a preliminary sketch of metaphysical modality and its place in the general category of objective modality. Section 2 reviews some familiar forms of scepticism about metaphysical modality in that light.
Later sections explore a few of the many ways in which natural science deals with questions of objective modality, including questions of quantified modal logic.
Robustness is often presented as a guideline for distinguishing the true or real from mere appearances or artifacts. Most recent discussions of robustness have focused on the kind of derivational robustness analysis introduced by Levins, while the related but distinct idea of robustness as multiple accessibility, defended by Wimsatt, has received less attention. In this paper, I argue that the latter kind of robustness, when properly understood, can provide justification for ontological commitments. The idea is that we are justified in believing that the things studied by science are real insofar as we have robust evidence for them. I develop and analyze this idea in detail and, based on concrete examples, show that it plays an important role in science. Finally, I demonstrate how robustness can be used to clarify the debate on scientific realism and to formulate new arguments.
Artificial models of cognition serve different purposes, and their use determines the way they should be evaluated. There are also models that do not represent any particular biological agents, and there is controversy as to how they should be assessed. At the same time, modelers do evaluate such models as better or worse. There is also a widespread tendency to call for publicly available standards of replicability and benchmarking for such models. In this paper, I argue that proper evaluation of models does not depend on whether they target real biological agents; instead, the standards of evaluation depend on the use of models rather than on the reality of their targets. I discuss how models are validated depending on their use and argue that all-encompassing benchmarks for models may be well beyond reach.
This chapter elaborates and develops the thesis, originally put forward by Mary Morgan (2005), that some mathematical models may surprise us, but that none of them can completely confound us, i.e. leave us unable to produce an ex post theoretical understanding of the outcome of the model calculations. This chapter objects that what is certainly true of classical mathematical models is not, however, true of pluri-formalized simulations with multiple axiomatic bases. It thus proposes to show that, and why, some of the computational simulations now booming in the sciences not only surprise us but also confound us. To do so, it also shows that the concept of weak emergence, initially due to Mark A. Bedau (1997), needs to be elaborated and articulated with some new precision.
I discuss here the definition of computer simulations, and more specifically the views of Humphreys, who considers that an object is simulated when a computer provides a solution to a computational model, which in turn represents the object of interest. I argue that Humphreys's concepts cannot fully and successfully analyse a case of contemporary simulation in physics that is more complex than the examples considered so far in the philosophical literature. I therefore modify Humphreys's definition of simulation: I allow for several successive layers of computational models, and I discuss the relations that exist between these models, the computer, and the object under study. An aim of my proposal is to clarify the distinction between computational models and numerical methods, and to better understand the representational and computational functions of models in simulations.
In the last few decades the role played by models and modeling activities has become a central topic in the scientific enterprise. In particular, it has been highlighted both that the development of models constitutes a crucial step in understanding the world and that the developed models operate as mediators between theories and the world. This perspective is exploited here to address the question of whether error-based and uncertainty-based modeling of measurement are incompatible, and thus alternatives to one another, as is sometimes claimed nowadays. The crucial problem is whether assuming this standpoint implies definitively renouncing any role for truth and the related concepts, particularly accuracy, in measurement. It is argued here that the well-known objections against true values in measurement, which would lead us to reject the concept of accuracy as non-operational, or to retain it as only qualitative, derive from an unclear distinction between three distinct processes: the metrological characterization of measuring systems, their calibration, and finally measurement itself. Under the hypotheses that (1) the concept of true value is related to the model of a measurement process, (2) the concept of uncertainty is related to the connection between such a model and the world, and (3) accuracy is a property of measuring systems (and not of measurement results) while uncertainty is a property of measurement results (and not of measuring systems), not only the compatibility but in fact the joint need for error-based and uncertainty-based modeling emerges.
The five studies of this special section investigate the role of models and similar representational tools in interdisciplinarity. These studies were all written by philosophers of science, who focused on interdisciplinary episodes between disciplines and sub-disciplines ranging from physics, chemistry and biology to the computational sciences, sociology and economics. There are three reasons we present these divergent studies in collective form. First, we want to establish model-exchange as a kind of interdisciplinary event. The five case studies, which are summarized in Section 2 below, show the relevance of this kind. Arguing for the relative unity of these cases will, we hope, re-orient the current debate over interdisciplinarity so as to reflect more appropriately the importance of this kind. We discuss our view of the current state of the debate in Section 3. The evidence from these cases also helps us to develop, in Section 4, a taxonomy of interdisciplinary model exchanges: a taxonomy, we would like to add, that might be useful for the discussion of interdisciplinary exchanges beyond the context of models and their transfer. The second reason for presenting these studies together is that they provide an important source of evidence for the philosophy of science. Over the last three decades, philosophy of science has increasingly differentiated into philosophies of various disciplines. This differentiation, in our view, has greatly increased our understanding of the scientific practices of the respective disciplines, including the epistemological and methodological standards and conventions on which these practices are based and by which they are evaluated. But it has also made it harder to compare these practices and standards across disciplines. The studies we present here are case studies of interdisciplinary exchange: they focus on the transfer, collaborative construction or parallel use of models and similar representational tools.
They therefore provide a unique opportunity to investigate various disciplinary treatments of the same or at least similar representational tools. This allows the identification and comparison of different disciplinary practices and their underlying conventions, as well as of the respective normative standards of their evaluation. By tracing the paths along which models travel between disciplines and research fields, we can observe to what extent discipline-external practices associated with an adopted tool are retained or replaced by discipline-internal practices. This generates invaluable information about both disciplinarity and interdisciplinarity. We develop this argument further in Section 5. The third reason for presenting these studies in collective form is that their philosophical analysis also has important normative implications for the notion of interdisciplinarity itself. Too often in the current (non-philosophical) discourse, interdisciplinarity is cast as an exclusively integrative project: interdisciplinary exchange is often claimed to be successful only if the involved disciplines become mutually more integrated as a consequence of this process. In contrast, many of the cases presented here show that interdisciplinary exchange can be scientifically highly successful even if, at the end of the exchange, disciplinary borders remain fully intact. Indeed, borders can be fruitfully crossed without any integration across them. Such considerations of divergence in scientific practices and tools offer arguments against a naïve plea for unitarian or non-pluralist versions of interdisciplinarity. Disciplinary divergences may have their justifications, and attempting exchanges that require the reduction of these divergences may consequently not be justified. We pursue this argument further in Section 6.
This paper distinguishes three concepts of "race": bio-genomic cluster/race, biological race, and social race. We map out realism, antirealism, and conventionalism about each of these in three important historical episodes: Frank Livingstone and Theodosius Dobzhansky in 1962, A.W.F. Edwards' 2003 response to Lewontin (1972), and contemporary discourse. Semantics is especially crucial to the first episode, while normativity is central to the second. Upon inspection, each episode also reveals a variety of commitments to the metaphysics of race. We conclude by interrogating the relevance of these scientific discussions for political positions and a post-racial future.
In most accounts of the realization of computational processes by physical mechanisms, it is presupposed that there is a one-to-one correspondence between the causally active states of the physical process and the states of the computation. Yet such proposals either stipulate that only one model of computation is implemented, or they do not reflect upon the variety of models that could be implemented physically.

In this paper, I claim that mechanistic accounts of computation should allow for a broad variation of models of computation. In particular, some non-standard models should not be excluded a priori. The relationship between mathematical models of computation and mechanistically adequate models is studied in more detail.