Models as Make-Believe offers a new approach to scientific modelling by looking to an unlikely source of inspiration: the dolls and toy trucks of children's games of make-believe.
Causal models provide a framework for making counterfactual predictions, making them useful for evaluating the truth conditions of counterfactual sentences. However, current causal models for counterfactual semantics face logical limitations compared to the alternative similarity-based approaches: they only apply to a limited subset of counterfactuals and the connection to counterfactual logic is not straightforward. This paper offers a causal framework for the semantics of counterfactuals which improves upon these logical issues. It extends the causal approach to counterfactuals to handle more complex counterfactuals, including backtracking counterfactuals and those with logically complex antecedents. It also uses the notion of causal worlds to define a selection function and shows that this selection function satisfies familiar logical properties. While some limitations still arise, especially regarding counterfactuals which require breaking the laws of the causal model, this model improves upon many of the existing logical limitations of causal models.
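As a generic illustration of how causal (structural-equation) frameworks evaluate counterfactuals by intervention, and not as a rendering of this paper's specific selection-function semantics, the following sketch uses an invented three-variable model; all names and equations are hypothetical.

    # Toy structural causal model: rain -> sprinkler -> wet grass (hypothetical example).
    # A counterfactual "had the sprinkler been off, the grass would not be wet" is
    # evaluated by replacing the sprinkler's equation with a fixed value (an
    # intervention) and recomputing the downstream variables.

    def solve(rain, sprinkler_override=None):
        """Compute all variables of the model, optionally intervening on 'sprinkler'."""
        sprinkler = (not rain) if sprinkler_override is None else sprinkler_override
        wet_grass = rain or sprinkler
        return {"rain": rain, "sprinkler": sprinkler, "wet_grass": wet_grass}

    # Actual world: no rain, so the sprinkler is on and the grass is wet.
    actual = solve(rain=False)

    # Counterfactual world: intervene to turn the sprinkler off, keep 'rain' as it actually was.
    counterfactual = solve(rain=actual["rain"], sprinkler_override=False)

    print(actual["wet_grass"])          # True
    print(counterfactual["wet_grass"])  # False: the counterfactual consequent holds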
In this paper I propose an account of representation for scientific models based on Kendall Walton’s ‘make-believe’ theory of representation in art. I first set out the problem of scientific representation and respond to a recent argument due to Craig Callender and Jonathan Cohen, which aims to show that the problem may be easily dismissed. I then introduce my account of models as props in games of make-believe and show how it offers a solution to the problem. Finally, I demonstrate an important advantage my account has over other theories of scientific representation. All existing theories analyse scientific representation in terms of relations, such as similarity or denotation. By contrast, my account does not take representation in modelling to be essentially relational. For this reason, it can accommodate a group of models often ignored in discussions of scientific representation, namely models which are representational but which represent no actual object.
Batterman and Rice ([2014]) argue that minimal models possess explanatory power that cannot be captured by what they call ‘common features’ approaches to explanation. Minimal models are explanatory, according to Batterman and Rice, not in virtue of accurately representing relevant features, but in virtue of answering three questions that provide a ‘story about why large classes of features are irrelevant to the explanandum phenomenon’ ([2014], p. 356). In this article, I argue, first, that a method (the renormalization group) they propose to answer the three questions cannot answer them, at least not by itself. Second, I argue that answers to the three questions are unnecessary to account for the explanatoriness of their minimal models. Finally, I argue that a common features account, what I call the ‘generalized ontic conception of explanation’, can capture the explanatoriness of minimal models.
We argue that concerns about double-counting—using the same evidence both to calibrate or tune climate models and also to confirm or verify that the models are adequate—deserve more careful scrutiny in climate modelling circles. It is widely held that double-counting is bad and that separate data must be used for calibration and confirmation. We show that this is far from obviously true, and that climate scientists may be confusing their targets. Our analysis turns on a Bayesian/relative-likelihood approach to incremental confirmation. According to this approach, double-counting is entirely proper. We go on to discuss plausible difficulties with calibrating climate models, and we distinguish more and less ambitious notions of confirmation. Strong claims of confirmation may not, in many cases, be warranted, but it would be a mistake to regard double-counting as the culprit.
1 Introduction; 2 Remarks about Models and Adequacy-for-Purpose; 3 Evidence for Calibration Can Also Yield Comparative Confirmation; 3.1 Double-counting I; 3.2 Double-counting II; 4 Climate Science Examples: Comparative Confirmation in Practice; 4.1 Confirmation due to better and worse best fits; 4.2 Confirmation due to more and less plausible forcings values; 5 Old Evidence; 6 Doubts about the Relevance of Past Data; 7 Non-comparative Confirmation and Catch-Alls; 8 Climate Science Example: Non-comparative Confirmation and Catch-Alls in Practice; 9 Concluding Remarks.
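To illustrate the relative-likelihood idea in the simplest possible terms (this is an invented toy example, not the authors' climate-model analysis), the sketch below tunes one model's free parameter on a data set and then uses the very same data to compare it against an untuned rival:

    import math

    def gaussian_loglik(data, mu, sigma=1.0):
        """Log-likelihood of the data under a Gaussian with mean mu (toy stand-in for a model)."""
        return sum(-0.5 * math.log(2 * math.pi * sigma**2)
                   - (x - mu)**2 / (2 * sigma**2) for x in data)

    data = [0.8, 1.1, 0.9, 1.2, 1.0]   # invented observations

    # Calibration: tune model A's free parameter to the same data (here, by the sample mean).
    mu_a = sum(data) / len(data)
    # Model B has no free parameter; its prediction is fixed in advance.
    mu_b = 0.0

    # Relative-likelihood comparison on the data that were also used for tuning.
    log_ratio = gaussian_loglik(data, mu_a) - gaussian_loglik(data, mu_b)
    print(f"log likelihood ratio (tuned A vs fixed B): {log_ratio:.2f}")
    # A positive ratio favours A; on the relative-likelihood view this comparative
    # confirmation is legitimate even though the data were also used for calibration.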
Detailed examinations of scientific practice have revealed that the use of idealized models in the sciences is pervasive. These models play a central role in not only the investigation and prediction of phenomena, but in their received scientific explanations as well. This has led philosophers of science to begin revising the traditional philosophical accounts of scientific explanation in order to make sense of this practice. These new model-based accounts of scientific explanation, however, raise a number of key questions: Can the fictions and falsehoods inherent in the modeling practice do real explanatory work? Do some highly abstract and mathematical models exhibit a noncausal form of scientific explanation? How can one distinguish an exploratory "how-possibly" model explanation from a genuine "how-actually" model explanation? Do modelers face tradeoffs such that a model that is optimized for yielding explanatory insight, for example, might fail to be the most predictively accurate, and vice versa? This chapter explores the various answers that have been given to these questions.
Kripke models, interpreted realistically, have difficulty making sense of the thesis that there might have existed things that do not in fact exist, since a Kripke model in which this thesis is true requires a model structure in which there are possible worlds with domains that contain things that do not exist. This paper argues that we can use Kripke models as representational devices that allow us to give a realistic interpretation of a modal language. The method of doing this is sketched, with the help of an analogy with a Galilean relativist theory of spatial properties and relations.
In this topical section, we highlight the next step of research on modeling, aiming to contribute to the emerging literature that radically refrains from approaching modeling as a scientific endeavor. Modeling surpasses “doing science” because it is frequently incorporated into decision-making processes in politics and management, i.e., areas which are not solely epistemically oriented. We do not refer to the production of models in academia for abstract or imaginary applications in practical fields, but instead highlight the real entwinement of science and policy and the real erosion of their boundaries. Models in decision making – due to their strong entwinement with policy and management – are utilized differently than models in science; they are employed for different purposes and with different constraints. We claim that “being a part of decision-making” implies that models are elements of a very particular situation, in which knowledge about the present and the future is limited but the dependence of decisions on the future is distinct. Emphasis on the future indicates that decisions are made about actions that have severe and lasting consequences. In these specific situations, models enable not only the acquisition of knowledge (the primary goal of science) but also deciding upon actions that change the course of events. As a result, there are specific ways to construct effective models and justify their results. Although some studies have explored this topic, our understanding of how models contribute to decision making outside of science remains fragmentary. This topical section aims to fill this gap in research and formulate an agenda for additional and more systematic investigations in the field.
This book analyses the impact computerization has had on contemporary science and explains the origins, technical nature and epistemological consequences of the current decisive interplay between technology and science: an intertwining of formalism, computation, data acquisition, data and visualization and how these factors have led to the spread of simulation models since the 1950s. Using historical, comparative and interpretative case studies from a range of disciplines, with a particular emphasis on the case of plant studies, the author shows how and why computers, data treatment devices and programming languages have occasioned a gradual but irresistible and massive shift from mathematical models to computer simulations.
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
This article shows how the MISS account of models—as isolations and surrogate systems—accommodates and elaborates Sugden’s account of models as credible worlds and Hausman’s account of models as explorations. Theoretical models typically isolate by means of idealization, and they are representatives of some target system, which prompts issues of resemblance between the two to arise. Models as representations are constrained both ontologically (by their targets) and pragmatically (by the purposes and audiences of the modeller), and these relations are coordinated by a model commentary. Surrogate models are often about single mechanisms. They are distinguishable from substitute models, which are examined without any concern about their connections with the target. Models as credible worlds are surrogate models that are believed to provide access to their targets on account of their credibility (of which a few senses are distinguished).
The geosciences include a wide spectrum of disciplines ranging from paleontology to climate science, and involve studies of a vast range of spatial and temporal scales, from the deep-time history of microbial life to the future of a system no less immense and complex than the entire Earth. Modeling is thus a central and indispensable tool across the geosciences. Here, we review both the history and current state of model-based inquiry in the geosciences. Research in these fields makes use of a wide variety of models, such as conceptual, physical, and numerical models, and more specifically cellular automata, artificial neural networks, agent-based models, coupled models, and hierarchical models. We note the increasing demands to incorporate biological and social systems into geoscience modeling, challenging the traditional boundaries of these fields. Understanding and articulating the many different sources of scientific uncertainty – and finding tools and methods to address them – has been at the forefront of most research in geoscience modeling. We discuss not only structural model uncertainties, parameter uncertainties, and solution uncertainties, but also the diverse sources of uncertainty arising from the complex nature of geoscience systems themselves. Without an examination of the geosciences, our philosophies of science and our understanding of the nature of model-based science are incomplete.
Despite an enormous philosophical literature on models in science, surprisingly little has been written about data models and how they are constructed. In this paper, I examine the case of how paleodiversity data models are constructed from the fossil data. In particular, I show how paleontologists are using various model-based techniques to correct the data. Drawing on this research, I argue for the following related theses: First, the 'purity' of a data model is not a measure of its epistemic reliability. Instead it is the fidelity of the data that matters. Second, the fidelity of a data model in capturing the signal of interest is a matter of degree. Third, the fidelity of a data model can be improved 'vicariously', such as through the use of post hoc model-based correction techniques. And, fourth, data models, like theoretical models, should be assessed as adequate (or inadequate) for particular purposes.
This contribution provides an assessment of the epistemological role of scientific models. The prevalent view that all scientific models are representations of the world is rejected. This view points to a unified way of resolving epistemic issues for scientific models. The emerging consensus in philosophy of science that models have many different epistemic roles in science is presented and defended.
In this article, I argue that the use of scientific models that attribute intentional content to complex systems bears a striking similarity to the way in which statistical descriptions are used. To demonstrate this, I compare and contrast an intentional model with a statistical model, and argue that key similarities between the two give us compelling reasons to consider both as a type of phenomenological model. I then demonstrate how intentional descriptions play an important role in scientific methodology as a type of phenomenological model, and argue that this makes them as essential as any other model of this type.
Some philosophers of science – the present author included – appeal to fiction as an interpretation of the practice of modeling. This raises the specter of an incompatibility with realism, since fiction-making is essentially non-truth-regulated. I argue that the prima facie conflict can be resolved in two ways, each involving a distinct notion of fiction and a corresponding formulation of realism. The main goal of the paper is to describe these two packages. Toward the end I comment on how to choose among them.
We introduce a novel point of view on the “models as mediators” framework in order to emphasize certain important epistemological questions about models in science which have so far been little investigated. To illustrate how this perspective can help answer these kinds of questions, we explore the use of simplified models in high energy physics research beyond the Standard Model. We show in detail how the construction of simplified models is grounded in the need to mitigate pressing epistemic problems concerning the uncertainty inherent in the present theoretical and experimental contexts.
Historically embryogenesis has been among the most philosophically intriguing phenomena. In this paper I focus on one aspect of biological development that was particularly perplexing to the ancients: self-organisation. For many ancients, the fact that an organism determines the important features of its own development required a special model for understanding how this was possible. This was especially true for Aristotle, Alexander, and Simplicius, who all looked to contemporary technology to supply that model. However, they did not all agree on what kind of device should be used. In this paper I explore the way these ancients made use of technology as a model for the developing embryo. I argue that their different choices of device reveal fundamental differences in the way each thinker understood the nature of biological development itself. In the final section of the paper I challenge the traditional view (dating back to Alexander's interpretation of Aristotle) that the use of automata in GA can simply be read off from their use in the de motu.
In what follows, I will give examples of the sorts of step that can be taken towards spelling out the intuition that, after all, good models might be true. Along the way, I provide an outline of my account of models as ontologically and pragmatically constrained representations. And I emphasize the importance of examining models as functionally composed systems in which different components play different roles and only some components serve as relevant truth bearers. This disputes the standard approach that proceeds by simply counting true and false elements in models in their entirety and concludes that models are false since they contain so many false elements. I call my alternative the functional decomposition approach.
In this paper we define intensional models for the classical theory of types, thus arriving at an intensional type logic ITL. Intensional models generalize Henkin's general models and have a natural definition. As a class they do not validate the axiom of Extensionality. We give a cut-free sequent calculus for type theory and show completeness of this calculus with respect to the class of intensional models via a model existence theorem. After this we turn our attention to applications. Firstly, it is argued that, since ITL is truly intensional, it can be used to model ascriptions of propositional attitude without predicting logical omniscience. In order to illustrate this a small fragment of English is defined and provided with an ITL semantics. Secondly, it is shown that ITL models contain certain objects that can be identified with possible worlds. Essential elements of modal logic become available within classical type theory once the axiom of Extensionality is given up.
Boolean-valued models of set theory were independently introduced by Scott, Solovay and Vopěnka in 1965, offering a natural and rich alternative for describing forcing. The original method was adapted by Takeuti, Titani, Kozawa and Ozawa to lattice-valued models of set theory. After this, Löwe and Tarafder proposed a class of algebras based on a certain kind of implication which satisfy several axioms of ZF. From this class, they found a specific 3-valued model called PS3 which satisfies all the axioms of ZF, and can be expanded with a paraconsistent negation *, thus obtaining a paraconsistent model of ZF. The logic (PS3, *) coincides (up to language) with da Costa and D'Ottaviano logic J3, a 3-valued paraconsistent logic that has been proposed independently in the literature by several authors and with different motivations, such as CluNs, LFI1 and MPT. We propose in this paper a family of algebraic models of ZFC based on LPT0, another linguistic variant of J3 introduced by us in 2016. The semantics of LPT0, as well as of its first-order version QLPT0, is given by twist structures defined over Boolean algebras. From this, it is possible to adapt the standard Boolean-valued models of (classical) ZFC to twist-valued models of an expansion of ZFC by adding a paraconsistent negation. We argue that the implication operator of LPT0 is more suitable for a paraconsistent set theory than the implication of PS3, since it allows for genuinely inconsistent sets w such that [(w = w)] = 1/2. This implication is not a 'reasonable implication' as defined by Löwe and Tarafder. This suggests that 'reasonable implication algebras' are just one way to define a paraconsistent set theory. Our twist-valued models are adapted to provide a class of twist-valued models for (PS3, *), thus generalizing Löwe and Tarafder's result. It is shown that they are in fact models of ZFC (not only of ZF).
Many have expected that understanding the evolution of norms should, in some way, bear on our first-order normative outlook: How norms evolve should shape which norms we accept. But recent philosophy has not done much to shore up this expectation. Most existing discussions of evolution and norms either jump headlong into the is/ought gap or else target meta-ethical issues, such as the objectivity of norms. My aim in this paper is to sketch a different way in which evolutionary considerations can feed into normative thinking—focusing on stability. I will discuss two forms of argument that utilize information about social stability drawn from evolutionary models and employ it to assess claims in political philosophy. One such argument treats stability as a feature of social states that may be taken into account alongside other features. The other uses stability as a constraint on the realization of social ideals, via a version of the ought-implies-can maxim. These forms of argument are not new; indeed they have a history going back at least to early modern philosophy. But their marriage with evolutionary information is relatively recent, has a significantly novel character, and has received little attention in recent moral and political philosophy.
Analogue models are actual physical setups used to model something else. They are especially useful when what we wish to investigate is difficult to observe or experiment upon due to size or distance in space or time: for example, if the thing we wish to investigate is too large, too far away, takes place on a time scale that is too long, does not yet exist or has ceased to exist. The range and variety of analogue models is too extensive to attempt a survey. In this article, I describe and discuss several different analogue model experiments, the results of those model experiments, and the basis for constructing them and interpreting their results. Examples of analogue models for surface waves in lakes, for earthquakes and volcanoes in geophysics, and for black holes in general relativity, are described, with a focus on examining the bases for claims that these analogues are appropriate analogues of what they are used to investigate. A table showing three different kinds of bases for reasoning using analogue models is provided. Finally, it is shown how the examples in this article counter three common misconceptions about the use of analogue models in physics.
This paper discusses critically what simulation models of the evolution of cooperation can possibly prove by examining Axelrod’s “Evolution of Cooperation” (1984) and the modeling tradition it has inspired. Hardly any of the many simulation models in this tradition have been applicable empirically. Axelrod’s role model suggested a research design that seemingly allowed general conclusions to be drawn from simulation models even if the mechanisms that drive the simulation could not be identified empirically. But this research design was fundamentally flawed. At best such simulations can claim to prove logical possibilities, i.e. they prove that certain phenomena are possible as the consequence of the modeling assumptions built into the simulation, but not that they are possible or can be expected to occur in reality. I suggest several requirements under which proofs of logical possibilities can nevertheless be considered useful. Sadly, most Axelrod-style simulations do not meet these requirements. It would be better not to use this kind of simulation at all.
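For readers unfamiliar with the kind of simulation at issue, the following is a minimal Axelrod-style sketch of an iterated prisoner's dilemma tournament with the standard payoff matrix and two textbook strategies; it is illustrative only and makes none of the empirical claims the paper criticizes.

    import itertools

    # Standard prisoner's dilemma payoffs for the row player: T > R > P > S.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def tit_for_tat(opponent_history):
        """Cooperate first, then copy the opponent's previous move."""
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=200):
        """Iterated game; each strategy sees only the opponent's past moves."""
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    strategies = {"tit_for_tat": tit_for_tat, "always_defect": always_defect}
    for (name_a, strat_a), (name_b, strat_b) in itertools.combinations_with_replacement(strategies.items(), 2):
        print(name_a, "vs", name_b, "->", play(strat_a, strat_b))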
In this paper we investigate composition models of incarnation, according to which Christ is a compound of qualitatively and numerically different constituents. We focus on three-part models, according to which Christ is composed of a divine mind, a human mind, and a human body. We consider four possible relational structures that the three components could form. We argue that a ‘hierarchy of natures’ model, in which the human mind and body are united to each other in the normal way, and in which they are jointly related to the divine mind by the relation of co-action, is the most metaphysically plausible model. Finally, we consider the problem of how Christ can be a single person even when his components may be considered persons. We argue that an Aristotelian metaphysics, according to which identity is a matter of function, offers a plausible solution: Christ's components may acquire a radically new identity through being parts of the whole, which enables them to be reidentified as parts, not persons.
Cognitive agents, whether human or computer, that engage in natural-language discourse and that have beliefs about the beliefs of other cognitive agents must be able to represent objects the way they believe them to be and the way they believe others believe them to be. They must be able to represent other cognitive agents both as objects of beliefs and as agents of beliefs. They must be able to represent their own beliefs, and they must be able to represent beliefs as objects of beliefs. These requirements raise questions about the number of tokens of the belief representation language needed to represent believers and propositions in their normal roles and in their roles as objects of beliefs. In this paper, we explicate the relations among nodes, mental tokens, concepts, actual objects, concepts in the belief spaces of an agent and the agent's model of other agents, concepts of other cognitive agents, and propositions. We extend, deepen, and clarify our theory of intensional knowledge representation for natural-language processing, as presented in previous papers and in light of objections raised by others. The essential claim is that tokens in a knowledge-representation system represent only intensions and not extensions. We are pursuing this investigation by building CASSIE, a computer model of a cognitive agent and, to the extent she works, a cognitive agent herself. CASSIE's mind is implemented in the SNePS knowledge-representation and reasoning system.
I explore a challenge that idealisations pose to scientific realism and argue that the realist can best accommodate idealisations by capitalising on certain modal features of idealised models that are underwritten by laws of nature.
The central idea is that the cerebral cortex is a model building machine, where regularities in the world serve as templates for the models it builds. First it is shown how this idea can be naturalized, and how the representational contents of our internal models depend upon the evolutionarily endowed design principles of our model building machine. Current neuroscience suggests a powerful form that these design principles may take, allowing our brains to uncover deep structures of the world hidden behind surface sensory stimulation, the individuals, kinds, and properties that form the objects of human perception and thought. It is then shown how this account solves various problems that arose for previous attempts at naturalizing intentionality, and also how it supports rather than undermines folk psychology. As in the parable of the blind men and the elephant, the seemingly unrelated pieces of earlier theories (information, causation, isomorphism, success, and teleology) emerge as different aspects of the evolved model-building mechanism that explains the intentional features of our kind of mind.
Can purely predictive models be useful in investigating causal systems? I argue ‘yes’. Moreover, in many cases not only are they useful, they are essential. The alternative is to stick to models or mechanisms drawn from well-understood theory. But a necessary condition for explanation is empirical success, and in many cases in social and field sciences such success can only be achieved by purely predictive models, not by ones drawn from theory. Alas, the attempt to use theory to achieve explanation or insight without empirical success therefore fails, leaving us with the worst of both worlds – neither prediction nor explanation. Best go with empirical success by any means necessary. I support these methodological claims via case studies of two impressive feats of predictive modelling: opinion polling of political elections, and weather forecasting.
A semantics is presented for Storrs McCall's separate axiomatizations of Aristotle's accepted and rejected polysyllogisms. The polysyllogisms under discussion are made up of either assertoric or apodeictic propositions. The semantics is given by associating a property with a pair of sets: one set consists of things having the property essentially and the other of things having it accidentally. A completeness proof and a semantic decision procedure are given.
In this article, I argue that intentional psychology (i.e. the interpretation of human behaviour in terms of intentional states and propositional attitudes) plays an essential role in the sciences of the mind. However, this role is not one of identifying scientifically respectable states of the world. Rather, I argue that intentional psychology acts as a type of phenomenological model, as opposed to a mechanistic one. I demonstrate that, like other phenomenological models in science, intentional psychology is a methodological tool with its own benefits and insights that complements our mechanistic understanding of systems. As a result, intentional psychology's distinctive scientific benefit is its ability to model systems in unique, non-mechanistic, ways. This allows us to generate predictions, necessary for various scientific tasks, that we cannot otherwise generate using the mechanistic models of neuroscience and cognitive psychology.
I critically examine the semantic view of theories to reveal the following results. First, models in science are not the same as models in mathematics, as holders of the semantic view claim. Second, when several examples of the semantic approach are examined in detail no common thread is found between them, except their close attention to the details of model building in each particular science. These results lead me to propose a deflationary semantic view, which is simply that model construction is an important component of theorizing in science. This deflationary view is consistent with a naturalized approach to the philosophy of science.
Learning is fundamentally about action, enabling the successful navigation of a changing and uncertain environment. The experience of pain is central to this process, indicating the need for a change in action so as to mitigate potential threat to bodily integrity. This review considers the application of Bayesian models of learning in pain that inherently accommodate uncertainty and action, which, we shall propose, are essential in understanding learning in both acute and persistent cases of pain.
In this paper, we apply the perspective of intra-organismal ecology by investigating a family of ecological models suitable to describe a gene therapy for a particular metabolic disorder, the adenosine deaminase deficiency (ADA-SCID). The gene therapy is modeled as the prospective ecological invasion of an organ (here, bone marrow) by genetically modified stem cells, which then operate niche construction in the cellular environment by releasing an enzyme they synthesize. We show that depending on the chosen order (a choice that cannot be made on a priori assumptions), different kinds of dynamics are expected, possibly leading to different therapeutic strategies. This drives us to discuss several features of the extension of ecology to intra-organismal ecology.
The Capgras delusion is a condition in which a person believes that an imposter has replaced some close friend or relative. Recent theorists have appealed to Bayesianism to help explain both why a subject with the Capgras delusion adopts this delusional belief and why it persists despite counter-evidence. The Bayesian approach is useful for addressing these questions; however, the main proposal of this essay is that Capgras subjects also have a delusional conception of epistemic possibility: more specifically, they think more things are possible, given what is known, than non-delusional subjects do. I argue that this is a central way in which their thinking departs from ordinary cognition and that it cannot be characterized in Bayesian terms. Thus, in order to fully understand the cognitive processing involved in the Capgras delusion, we must move beyond Bayesianism.
1 The Simple Bayesian Model; 2 Anomalous Evidence and the Capgras Delusion; 3 Impaired Reasoning; 4 Setting Priors; 5 Epistemic Modality; 6 Delusions of Possibility; 7 Delusions of Possibility in Different Contexts; 8 How Many Factors?
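As a gloss on what a 'simple Bayesian model' of delusional belief adoption typically amounts to in this literature (the hypothesis labels and numbers below are invented for illustration, not taken from the essay), one can compute the posterior probability of the imposter hypothesis after anomalous affective evidence:

    def posterior(prior_imposter, lik_evidence_given_imposter, lik_evidence_given_spouse):
        """Posterior probability of the imposter hypothesis after the anomalous evidence (Bayes' theorem)."""
        prior_spouse = 1.0 - prior_imposter
        joint_imposter = prior_imposter * lik_evidence_given_imposter
        joint_spouse = prior_spouse * lik_evidence_given_spouse
        return joint_imposter / (joint_imposter + joint_spouse)

    # Invented numbers: the anomalous lack of affective response is far more expected
    # if the person were an imposter than if they were the familiar spouse.
    print(posterior(prior_imposter=0.001,
                    lik_evidence_given_imposter=0.9,
                    lik_evidence_given_spouse=0.01))
    # The posterior rises by roughly two orders of magnitude over the prior; the essay's
    # point is that such updating alone may not capture the Capgras subject's altered
    # sense of what is epistemically possible in the first place.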
This paper discusses critically what simulation models of the evolution of cooperation can possibly prove by examining Axelrod’s “Evolution of Cooperation” and the modeling tradition it has inspired. Hardly any of the many simulation models of the evolution of cooperation in this tradition have been applicable empirically. Axelrod’s role model suggested a research design that seemingly allowed general conclusions to be drawn from simulation models even if the mechanisms that drive the simulation could not be identified empirically. But this research design was fundamentally flawed, because it is not possible to draw general empirical conclusions from theoretical simulations. At best such simulations can claim to prove logical possibilities, i.e. they prove that certain phenomena are possible as the consequence of the modeling assumptions built into the simulation, but not that they are possible or can be expected to occur in reality. I suggest several requirements under which proofs of logical possibilities can nevertheless be considered useful. Sadly, most Axelrod-style simulations do not meet these requirements. I contrast this with Schelling’s neighborhood segregation model, the core mechanism of which can be retraced empirically.
This paper is about modeling morality, with a proposal as to the best way to do it. There is the small problem, however, of continuing disagreements over what morality actually is, and so what is worth modeling. This paper resolves this problem around an understanding of the purpose of a moral model, and from this purpose approaches the best way to model morality.
Attempts to introduce Gestalt theory into the realm of visual neuroscience are discussed on both theoretical and experimental grounds. To define the framework in which these proposals can be defended, this paper outlines the characteristics of a standard model, which qualifies as a received view in the visual neurosciences, and of the research into natural image statistics. The objections to the standard model and the main questions of the natural image research are presented. On these grounds, this paper defends the view that Gestalt psychology and experimental phenomenology provide a contribution to the research into perception by the construction of phenomenological models for an ecologically meaningful interpretation of the empirical evidence and the hypothetical constructs of the natural image research within visual neuroscience. A formal framework for the phenomenological models is proposed, wherein Gestalt theoretical principles and empirical evidence are represented in terms of topological properties and relationships, which account for the order and structures that make the environment accessible to observers at a relevant behavioural level. It is finally argued that these models allow us to evaluate the principles and the empirical evidence of various natures which are integrated from different fields into the research into perception, and in particular into the visual neurosciences.
Biomedical science has been remarkably successful in explaining illness by categorizing diseases and then by identifying localizable lesions such as a virus or neoplasm in the body that cause those diseases. Not surprisingly, researchers have aspired to apply this powerful paradigm to addiction. So, for example, in a review of the neuroscience of addiction literature, Hyman and Malenka (2001, p. 695) acknowledge a general consensus among addiction researchers that “[a]ddiction can appropriately be considered as a chronic medical illness.” Like other diseases, “Once addiction has taken hold, it tends to follow a chronic course” (Koob and Le Moal 2006, p. ?). Working from this perspective, much effort has gone into characterizing the symptomology of addiction and the brain changes that underlie them. Evidence for involvement of dopamine transmission changes in the ventral tegmental area (VTA) and nucleus accumbens (NAc) has received the greatest attention. Kauer and Malenka (2007, p. 844) put it well: “drugs of abuse can co-opt synaptic plasticity mechanisms in brain circuits involved in reinforcement and reward processing”. Our goal in this chapter is to provide an explicit description of the assumptions of medical models, the different forms they may take, and the challenges they face in providing explanations with solid evidence of addiction.
In this paper, we discuss the perspective of intra-organismal ecology by investigating a family of ecological models. We consider two types of models. First order models describe the population dynamics as being directly affected by ecological factors (here understood as nutrients, space, etc.). They might be thought of as analogous to Aristotelian physics. Second order models describe the population dynamics as being indirectly affected, the ecological factors now affecting the derivative of the growth rate (that is, the population acceleration), possibly through an impact on non-genetically inherited factors. Second order models might be thought of as analogous to Galilean physics. In the joint paper, we apply these ideas to a situation of gene therapy.
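To make the first-order/second-order contrast concrete, here is a schematic numerical sketch (invented equations and parameters, not the paper's actual models) in which the same ecological factor either sets the growth rate directly or drives its derivative:

    def first_order(n0=10.0, resource=0.5, steps=100, dt=0.1):
        """dN/dt = resource * N : the ecological factor sets the growth rate directly."""
        n = n0
        for _ in range(steps):
            n += resource * n * dt
        return n

    def second_order(n0=10.0, r0=0.0, resource=0.5, steps=100, dt=0.1):
        """dr/dt = resource, dN/dt = r * N : the factor drives the change of the growth rate."""
        n, r = n0, r0
        for _ in range(steps):
            r += resource * dt
            n += r * n * dt
        return n

    print(first_order(), second_order())
    # The same ecological factor yields different population dynamics depending on the
    # order chosen, which is why the choice of order matters for the gene-therapy
    # scenarios discussed in the companion paper above.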
Many biological investigations are organized around a small group of species, often referred to as ‘model organisms’, such as the fruit fly Drosophila melanogaster. The terms ‘model’ and ‘modelling’ also occur in biology in association with mathematical and mechanistic theorizing, as in the Lotka–Volterra model of predator-prey dynamics. What is the relation between theoretical models and model organisms? Are these models in the same sense? We offer an account on which the two practices are shown to have different epistemic characters. Theoretical modelling is grounded in explicit and known analogies between model and target. By contrast, inferences from model organisms are empirical extrapolations. Often such extrapolation is based on shared ancestry, sometimes in conjunction with other empirical information. One implication is that such inferences are unique to biology, whereas theoretical models are common across many disciplines. We close by discussing the diversity of uses to which model organisms are put, suggesting how these relate to our overall account.
1 Introduction; 2 Volterra and Theoretical Modelling; 3 Drosophila as a Model Organism; 4 Generalizing from Work on Model Organisms; 5 Phylogenetic Inference and Model Organisms; 6 Further Roles of Model Organisms; 6.1 Preparative experimentation; 6.2 Model organisms as paradigms; 6.3 Model organisms as theoretical models; 6.4 Inspiration for engineers; 6.5 Anchoring a research community; 7 Conclusion.
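Since the Lotka–Volterra predator-prey model is cited here as the paradigm of theoretical modelling, a minimal numerical sketch of its standard textbook equations (with arbitrary parameter values chosen for illustration) may help fix ideas:

    def lotka_volterra(prey=10.0, pred=5.0, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4,
                       dt=0.01, steps=5000):
        """Euler integration of dx/dt = alpha*x - beta*x*y and dy/dt = delta*x*y - gamma*y."""
        trajectory = []
        for _ in range(steps):
            dprey = (alpha * prey - beta * prey * pred) * dt
            dpred = (delta * prey * pred - gamma * pred) * dt
            prey, pred = prey + dprey, pred + dpred
            trajectory.append((prey, pred))
        return trajectory

    traj = lotka_volterra()
    print(traj[-1])   # prey and predator numbers oscillate rather than settling to a fixed point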
Building upon Nancy Cartwright's discussion of models in How the Laws of Physics Lie, this paper addresses solid state research in transition metal oxides. Historical analysis reveals that in this domain models function both as the culmination of phenomenology and the commencement of theoretical explanation. Those solid state chemists who concentrate on the description of phenomena pertinent to specific elements or compounds assess models according to different standards than those who seek explanation grounded in approximate applications of the Schroedinger equation. Accurate accounts of scientific debate in this field must include both perspectives.
In most accounts of realization of computational processes by physical mechanisms, it is presupposed that there is one-to-one correspondence between the causally active states of the physical process and the states of the computation. Yet such proposals either stipulate that only one model of computation is implemented, or they do not reflect upon the variety of models that could be implemented physically. In this paper, I claim that mechanistic accounts of computation should allow for a broad variation of models of computation. In particular, some non-standard models should not be excluded a priori. The relationship between mathematical models of computation and mechanistically adequate models is studied in more detail.
Prediction Error Minimization theory (PEM) is one of the most promising attempts to model perception in current science of mind, and it has recently been advocated by some prominent philosophers such as Andy Clark and Jakob Hohwy. Briefly, PEM maintains that “the brain is an organ that on average and over time continually minimizes the error between the sensory input it predicts on the basis of its model of the world and the actual sensory input” (Hohwy 2014, p. 2). An interesting debate has arisen with regard to which is the more adequate epistemological interpretation of PEM. Indeed, Hohwy maintains that given that PEM supports an inferential view of perception and cognition, PEM has to be considered as conveying an internalist epistemological perspective. Contrary to this view, Clark maintains that it would be incorrect to interpret in such a way the indirectness of the link between the world and our inner model of it, and that PEM may well be combined with an externalist epistemological perspective. The aim of this paper is to assess those two opposite interpretations of PEM. Moreover, it will be suggested that Hohwy’s position may be considerably strengthened by adopting Carlo Cellucci’s view on knowledge (2013).
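As a toy gloss on the quoted idea of continual prediction-error minimization (a schematic illustration, not Hohwy's or Clark's own formal model), the following sketch lets an internal estimate track a noisy sensory signal by repeatedly reducing the gap between predicted and actual input:

    import random

    def minimise_prediction_error(signal, learning_rate=0.2):
        """Update an internal estimate so that, on average, prediction error shrinks over time."""
        estimate = 0.0
        for sensory_input in signal:
            prediction_error = sensory_input - estimate   # mismatch between model and world
            estimate += learning_rate * prediction_error  # revise the internal model
        return estimate

    # Noisy sensory input centred on a hidden value of 5.0 (invented data).
    random.seed(0)
    signal = [5.0 + random.gauss(0, 1) for _ in range(200)]
    print(minimise_prediction_error(signal))  # ends up close to 5.0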
Recent philosophical attention to climate models has highlighted their weaknesses and uncertainties. Here I address the ways that models gain support through observational data. I review examples of model fit, variety of evidence, and independent support for aspects of the models, contrasting my analysis with that of other philosophers. I also investigate model robustness, which often emerges when comparing climate models simulating the same time period or set of conditions. Starting from Michael Weisberg’s analysis of robustness, I conclude that his approach involves a version of reasoning from variety of evidence, enabling this robustness to be a confirmatory virtue.
The Marburg neo-Kantians argue that Hermann von Helmholtz's empiricist account of the a priori does not account for certain knowledge, since it is based on a psychological phenomenon, trust in the regularities of nature. They argue that Helmholtz's account raises the 'problem of validity' (Gueltigkeitsproblem): how to establish a warranted claim that observed regularities are based on actual relations. I reconstruct Heinrich Hertz's and Ludwig Wittgenstein's Bild theoretic answer to the problem of validity: that scientists and philosophers can depict the necessary a priori constraints on states of affairs in a given system, and can establish whether these relations are actual relations in nature. The analysis of necessity within a system is a lasting contribution of the Bild theory. However, Hertz and Wittgenstein argue that the logical and mathematical sentences of a Bild are rules, tools for constructing relations, and the rules themselves are meaningless outside the theory. Carnap revises the argument for validity by attempting to give semantic rules for translation between frameworks. Russell and Quine object that pragmatics better accounts for the role of a priori reasoning in translating between frameworks. The conclusion of the tale, then, is a partial vindication of Helmholtz's original account.
The theory of imperatives is philosophically relevant since, in building it, some long-standing problems need to be addressed, and presumably some new ones are waiting to be discovered. The relevance of the theory of imperatives for philosophical research is remarkable, but usually recognized only within the field of practical philosophy. Nevertheless, the emphasis can be put on problems of theoretical philosophy. Proper understanding of imperatives is likely to raise doubts about some of our deeply entrenched and tacit presumptions. In philosophy of language it is the presumption that declaratives provide the paradigm for sentence form; in philosophy of science it is the belief that theory construction is independent from the language practice; in logic it is the conviction that logical meaning relations are constituted out of logical terminology; in ontology it is the view that language use is free from ontological commitments. The list is not exhaustive; it includes only those presumptions that this paper concerns.
Models treating the simple properties of social groups have a common shortcoming. Typically, they focus on the local properties of group members and the features of the world with which group members interact. I consider economic models of bureaucratic corruption, to show that (a) simple properties of groups are often constituted by the properties of the wider population, and (b) even sophisticated models are commonly inadequate to account for many simple social properties. Adequate models and social policies must treat certain factors that are not local to individual members of the group, even if those factors are not causally connected to those individuals. Key Words: individualism • corruption • supervenience • model • cause.