Models as Make-Believe offers a new approach to scientific modelling by looking to an unlikely source of inspiration: the dolls and toy trucks of children's games of make-believe.
This paper constructs a model of metaphysical indeterminacy that can accommodate a kind of ‘deep’ worldly indeterminacy that arguably arises in quantum mechanics via the Kochen-Specker theorem, and that is incompatible with prominent theories of metaphysical indeterminacy such as that in Barnes and Williams (2011). We construct a variant of Barnes and Williams's theory that avoids this problem. Our version builds on situation semantics and uses incomplete, local situations rather than possible worlds to build a model. We evaluate the resulting theory and contrast it with similar alternatives, concluding that our model successfully captures deep indeterminacy.
In this paper I propose an account of representation for scientific models based on Kendall Walton’s ‘make-believe’ theory of representation in art. I first set out the problem of scientific representation and respond to a recent argument due to Craig Callender and Jonathan Cohen, which aims to show that the problem may be easily dismissed. I then introduce my account of models as props in games of make-believe and show how it offers a solution to the problem. Finally, I demonstrate an important advantage my account has over other theories of scientific representation. All existing theories analyse scientific representation in terms of relations, such as similarity or denotation. By contrast, my account does not take representation in modelling to be essentially relational. For this reason, it can accommodate a group of models often ignored in discussions of scientific representation, namely models which are representational but which represent no actual object.
Despite an enormous philosophical literature on models in science, surprisingly little has been written about data models and how they are constructed. In this paper, I examine the case of how paleodiversity data models are constructed from the fossil data. In particular, I show how paleontologists are using various model-based techniques to correct the data. Drawing on this research, I argue for the following related theses: first, the ‘purity’ of a data model is not a measure of its epistemic reliability. Instead it is the fidelity of the data that matters. Second, the fidelity of a data model in capturing the signal of interest is a matter of degree. Third, the fidelity of a data model can be improved ‘vicariously’, such as through the use of post hoc model-based correction techniques. And, fourth, data models, like theoretical models, should be assessed as adequate for particular purposes.
Although some previous studies have investigated the relationship between moral foundations and moral judgment development, the methods used have not been able to fully explore the relationship. In the present study, we used Bayesian Model Averaging (BMA) in order to address the limitations in traditional regression methods that have been used previously. Results showed consistency with previous findings that binding foundations are negatively correlated with post-conventional moral reasoning and positively correlated with maintaining norms and personal interest schemas. In addition to previous studies, our results showed a positive correlation between individualizing foundations and post-conventional moral reasoning. Implications are discussed, as well as a detailed explanation of the novel BMA method, in order to allow others in the field of moral education to use it in their own studies.
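The BMA idea referred to in this abstract can be sketched in miniature. The following toy example is not from the study; the candidate "models", the coin-toss data, and the equal prior weights are all illustrative assumptions. It shows only the core move that distinguishes BMA from traditional model selection: predictions are averaged across models with weights proportional to each model's evidence, rather than committing to a single best model.

```python
import math

# Illustrative sketch of Bayesian Model Averaging (BMA), not the
# study's actual analysis. Two hypotheses about a coin's bias serve
# as the 'models'; all numbers are made up for the example.

def binom_loglik(p, heads, tails):
    """Log-likelihood of the observed tosses under bias p."""
    return heads * math.log(p) + tails * math.log(1 - p)

heads, tails = 7, 3
models = {"fair": 0.5, "biased": 0.7}   # hypothesised biases (assumed)
logliks = {m: binom_loglik(p, heads, tails) for m, p in models.items()}

# With equal priors, posterior model weights are normalised likelihoods.
z = sum(math.exp(v) for v in logliks.values())
weights = {m: math.exp(v) / z for m, v in logliks.items()}

# The BMA prediction for the next toss averages each model's prediction.
p_next = sum(weights[m] * models[m] for m in models)
```

The averaging step is why BMA can sidestep a limitation of traditional regression: uncertainty about which model is correct is carried into the final estimate instead of being discarded.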
We argue that concerns about double-counting—using the same evidence both to calibrate or tune climate models and also to confirm or verify that the models are adequate—deserve more careful scrutiny in climate modelling circles. It is widely held that double-counting is bad and that separate data must be used for calibration and confirmation. We show that this is far from obviously true, and that climate scientists may be confusing their targets. Our analysis turns on a Bayesian/relative-likelihood approach to incremental confirmation. According to this approach, double-counting is entirely proper. We go on to discuss plausible difficulties with calibrating climate models, and we distinguish more and less ambitious notions of confirmation. Strong claims of confirmation may not, in many cases, be warranted, but it would be a mistake to regard double-counting as the culprit.
1 Introduction
2 Remarks about Models and Adequacy-for-Purpose
3 Evidence for Calibration Can Also Yield Comparative Confirmation
3.1 Double-counting I
3.2 Double-counting II
4 Climate Science Examples: Comparative Confirmation in Practice
4.1 Confirmation due to better and worse best fits
4.2 Confirmation due to more and less plausible forcings values
5 Old Evidence
6 Doubts about the Relevance of Past Data
7 Non-comparative Confirmation and Catch-Alls
8 Climate Science Example: Non-comparative Confirmation and Catch-Alls in Practice
9 Concluding Remarks
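The relative-likelihood point in this abstract can be illustrated with a toy calculation. Everything numerical below is invented for the example: two Gaussian "models" with different assumed error variances each get to calibrate a free parameter (the mean) on the same data, and yet the data still comparatively confirm one model over the other, because even at its best-fitting parameter one model assigns the data a lower likelihood.

```python
import math

# Toy illustration of comparative confirmation despite double-counting.
# The data and both 'climate models' are purely illustrative stand-ins.

data = [1.1, 0.9, 1.2, 1.0, 0.8, 1.05]

def best_fit_loglik(variance):
    """Gaussian model with a free mean and a fixed variance.
    Calibration = maximum likelihood: the best-fit mean is the
    sample mean, computed from the same data being evaluated."""
    mu = sum(data) / len(data)
    return sum(-0.5 * math.log(2 * math.pi * variance)
               - (x - mu) ** 2 / (2 * variance) for x in data)

# Model A assumes tight errors; Model B assumes inflated errors.
ll_a = best_fit_loglik(variance=0.02)
ll_b = best_fit_loglik(variance=1.0)

# Relative likelihood at the calibrated parameters: the same data that
# tuned both models still favours Model A comparatively.
relative_likelihood = math.exp(ll_a - ll_b)
```

The point of the sketch is structural, not numerical: calibration fixes the free parameter, and the residual likelihood comparison is what does the (comparative) confirming.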
In this article, I explore the compatibility of inference to the best explanation (IBE) with several influential models and accounts of scientific explanation. First, I explore the different conceptions of IBE and limit my discussion to two: the heuristic conception and the objective Bayesian conception. Next, I discuss five models of scientific explanation with regard to each model’s compatibility with IBE. I argue that Philip Kitcher’s unificationist account supports IBE; Peter Railton’s deductive-nomological-probabilistic model, Wesley Salmon’s statistical-relevance model, and Bas van Fraassen’s erotetic account are incompatible with IBE; and Wesley Salmon’s causal-mechanical model is merely consistent with IBE. In short, many influential models of scientific explanation do not support IBE. I end by outlining three possible conclusions to draw: (1) either philosophers of science or defenders of IBE have seriously misconstrued the concept of explanation, (2) philosophers of science and defenders of IBE do not use the term ‘explanation’ univocally, and (3) the ampliative conception of IBE, which is compatible with any model of scientific explanation, deserves a closer look.
Many biological investigations are organized around a small group of species, often referred to as ‘model organisms’, such as the fruit fly Drosophila melanogaster. The terms ‘model’ and ‘modelling’ also occur in biology in association with mathematical and mechanistic theorizing, as in the Lotka–Volterra model of predator-prey dynamics. What is the relation between theoretical models and model organisms? Are these models in the same sense? We offer an account on which the two practices are shown to have different epistemic characters. Theoretical modelling is grounded in explicit and known analogies between model and target. By contrast, inferences from model organisms are empirical extrapolations. Often such extrapolation is based on shared ancestry, sometimes in conjunction with other empirical information. One implication is that such inferences are unique to biology, whereas theoretical models are common across many disciplines. We close by discussing the diversity of uses to which model organisms are put, suggesting how these relate to our overall account.
1 Introduction
2 Volterra and Theoretical Modelling
3 Drosophila as a Model Organism
4 Generalizing from Work on Model Organisms
5 Phylogenetic Inference and Model Organisms
6 Further Roles of Model Organisms
6.1 Preparative experimentation
6.2 Model organisms as paradigms
6.3 Model organisms as theoretical models
6.4 Inspiration for engineers
6.5 Anchoring a research community
7 Conclusion
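The Lotka–Volterra model named in this abstract is a standard pair of coupled differential equations. As a minimal sketch, the following integrates them with a simple forward Euler step; the parameter values, initial populations, and step size are illustrative assumptions, not drawn from the abstract or the paper.

```python
# Minimal sketch of the Lotka-Volterra predator-prey equations:
#   dx/dt = alpha*x - beta*x*y   (prey)
#   dy/dt = delta*x*y - gamma*y  (predator)
# All parameter values below are illustrative assumptions.

def lotka_volterra(prey, pred, alpha=1.0, beta=0.1, delta=0.075,
                   gamma=1.5, dt=0.001, steps=10000):
    """Integrate the equations with forward Euler; return the
    trajectory as a list of (prey, predator) pairs."""
    traj = [(prey, pred)]
    for _ in range(steps):
        dx = alpha * prey - beta * prey * pred
        dy = delta * prey * pred - gamma * pred
        prey, pred = prey + dx * dt, pred + dy * dt
        traj.append((prey, pred))
    return traj

traj = lotka_volterra(prey=10.0, pred=5.0)
```

The characteristic output is a pair of out-of-phase population oscillations; the analogy between these equations and real predator-prey systems is exactly the kind of explicit, known model-target analogy the abstract contrasts with extrapolation from model organisms.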
Detailed examinations of scientific practice have revealed that the use of idealized models in the sciences is pervasive. These models play a central role in not only the investigation and prediction of phenomena, but in their received scientific explanations as well. This has led philosophers of science to begin revising the traditional philosophical accounts of scientific explanation in order to make sense of this practice. These new model-based accounts of scientific explanation, however, raise a number of key questions: Can the fictions and falsehoods inherent in the modeling practice do real explanatory work? Do some highly abstract and mathematical models exhibit a noncausal form of scientific explanation? How can one distinguish an exploratory "how-possibly" model explanation from a genuine "how-actually" model explanation? Do modelers face tradeoffs such that a model that is optimized for yielding explanatory insight, for example, might fail to be the most predictively accurate, and vice versa? This chapter explores the various answers that have been given to these questions.
This paper introduces and defends an account of model-based science that I dub model pluralism. I argue that despite a growing awareness in the philosophy of science literature of the multiplicity, diversity, and richness of models and modeling practices, more radical conclusions follow from this recognition than have previously been inferred. Going against the tendency within the literature to generalize from single models, I explicate and defend the following two core theses: first, any successful analysis of models must target sets of models, their multiplicity of functions within science, and their scientific context and history; and second, for almost any aspect x of phenomenon y, scientists require multiple models to achieve scientific goal z.
This paper constitutes a radical departure from the existing philosophical literature on models, modeling-practices, and model-based science. I argue that the various entities and practices called 'models' and 'modeling-practices' are too diverse, too context-sensitive, and serve too many scientific purposes and roles to allow for a general philosophical analysis. From this recognition an alternative view emerges that I shall dub model anarchism.
Causal models show promise as a foundation for the semantics of counterfactual sentences. However, current approaches face limitations compared to the alternative similarity theory: they only apply to a limited subset of counterfactuals and the connection to counterfactual logic is not straightforward. This paper addresses these difficulties using exogenous interventions, where causal interventions change the values of exogenous variables rather than structural equations. This model accommodates judgments about backtracking counterfactuals, extends to logically complex counterfactuals, and validates familiar principles of counterfactual logic. This combines the interventionist intuitions of the causal approach with the logical advantages of the similarity approach.
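The contrast the abstract draws can be made concrete with a toy structural-equation sketch. This is not the paper's formal model; the variable names (U, A, B) and equations are illustrative. The point shown is that intervening on an exogenous variable U, rather than severing a structural equation (as the usual do-operator does), lets effects "backtrack" through the intact equations.

```python
# Toy structural-equation model illustrating an exogenous intervention.
# Equations (illustrative): A := U, B := A, with U exogenous.

def solve(u):
    """Run the structural equations forward from exogenous value u."""
    a = u      # A inherits its value from the exogenous U
    b = a      # B is causally downstream of A
    return {"U": u, "A": a, "B": b}

actual = solve(u=0)        # the actual course of events
counterfact = solve(u=1)   # exogenous intervention: set U to 1

# Because the intervention targets U rather than replacing A's equation,
# every variable downstream of U updates together: a backtracking-style
# counterfactual evaluated with interventionist machinery.
```

Under the standard do-operator, do(A=1) would instead cut A off from U, so U would retain its actual value; the exogenous-intervention variant keeps all structural equations intact.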
I propose a distinct type of robustness, which I suggest can support a confirmatory role in scientific reasoning, contrary to the usual philosophical claims. In model robustness, repeated production of the empirically successful model prediction or retrodiction against a background of independently supported and varying model constructions, within a group of models containing a shared causal factor, may suggest how confident we can be in the causal factor and predictions/retrodictions, especially once supported by a variety-of-evidence framework. I present climate models of greenhouse gas global warming of the 20th Century as an example, and emphasize climate scientists’ discussions of robust models and causal aspects. The account is intended as applicable to a broad array of sciences that use complex modeling techniques.
Batterman and Rice ([2014]) argue that minimal models possess explanatory power that cannot be captured by what they call ‘common features’ approaches to explanation. Minimal models are explanatory, according to Batterman and Rice, not in virtue of accurately representing relevant features, but in virtue of answering three questions that provide a ‘story about why large classes of features are irrelevant to the explanandum phenomenon’ ([2014], p. 356). In this article, I argue, first, that a method (the renormalization group) they propose to answer the three questions cannot answer them, at least not by itself. Second, I argue that answers to the three questions are unnecessary to account for the explanatoriness of their minimal models. Finally, I argue that a common features account, what I call the ‘generalized ontic conception of explanation’, can capture the explanatoriness of minimal models.
One striking feature of the contemporary modelling practice is its interdisciplinary nature. The same equation forms, and mathematical and computational methods, are used across different disciplines, as well as within the same discipline. Are there, then, differences between intra- and interdisciplinary transfer, and can the comparison between the two provide more insight on the challenges of interdisciplinary theoretical work? We will study the development and various uses of the Ising model within physics, contrasting them to its applications to socio-economic systems. While the renormalization group methods justify the transfer of the Ising model within physics – by ascribing them to the same universality class – its application to socio-economic phenomena has no such theoretical grounding. As a result, the insights gained by modelling socio-economic phenomena by the Ising model may remain limited.
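For readers unfamiliar with the model the abstract transfers across disciplines, here is a minimal 2D Ising model with Metropolis dynamics. The lattice size, inverse temperature, and sweep count are illustrative choices, not values from the paper.

```python
import math
import random

# Minimal 2D Ising model with single-spin Metropolis updates on a
# periodic lattice. Parameters below are illustrative assumptions.

def metropolis_ising(n=16, beta=0.6, sweeps=200, seed=0):
    """Return an n x n lattice of +/-1 spins after Metropolis sweeps
    at inverse temperature beta (nearest-neighbour coupling J=1)."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    for _ in range(sweeps * n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j] +
              spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2 * spins[i][j] * nb        # energy cost of flipping spin
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1
    return spins

spins = metropolis_ising()
# Magnetization per spin, a standard order parameter for the model.
m = abs(sum(s for row in spins for s in row)) / (16 * 16)
```

At this illustrative beta (below the critical temperature) the lattice typically develops large aligned domains; behaviour near the critical point is what the renormalization-group analysis mentioned in the abstract classifies into universality classes.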
In this topical section, we highlight the next step of research on modeling aiming to contribute to the emerging literature that radically refrains from approaching modeling as a scientific endeavor. Modeling surpasses “doing science” because it is frequently incorporated into decision-making processes in politics and management, i.e., areas which are not solely epistemically oriented. We do not refer to the production of models in academia for abstract or imaginary applications in practical fields, but instead highlight the real entwinement of science and policy and the real erosion of their boundaries. Models in decision making – due to their strong entwinement with policy and management – are utilized differently than models in science; they are employed for different purposes and with different constraints. We claim that “being a part of decision-making” implies that models are elements of a very particular situation, in which knowledge about the present and the future is limited but dependence of decisions on the future is distinct. Emphasis on the future indicates that decisions are made about actions that have severe and lasting consequences. In these specific situations, models enable not only the acquisition of knowledge (the primary goal of science) but also enable deciding upon actions that change the course of events. As a result, there are specific ways to construct effective models and justify their results. Although some studies have explored this topic, our understanding of how models contribute to decision making outside of science remains fragmentary. This topical section aims to fill this gap in research and formulate an agenda for additional and more systematic investigations in the field.
The geosciences include a wide spectrum of disciplines ranging from paleontology to climate science, and involve studies of a vast range of spatial and temporal scales, from the deep-time history of microbial life to the future of a system no less immense and complex than the entire Earth. Modeling is thus a central and indispensable tool across the geosciences. Here, we review both the history and current state of model-based inquiry in the geosciences. Research in these fields makes use of a wide variety of models, such as conceptual, physical, and numerical models, and more specifically cellular automata, artificial neural networks, agent-based models, coupled models, and hierarchical models. We note the increasing demands to incorporate biological and social systems into geoscience modeling, challenging the traditional boundaries of these fields. Understanding and articulating the many different sources of scientific uncertainty – and finding tools and methods to address them – has been at the forefront of most research in geoscience modeling. We discuss not only structural model uncertainties, parameter uncertainties, and solution uncertainties, but also the diverse sources of uncertainty arising from the complex nature of geoscience systems themselves. Without an examination of the geosciences, our philosophies of science and our understanding of the nature of model-based science are incomplete.
I provide a theory of causation within the causal modeling framework. In contrast to most of its predecessors, this theory is model-invariant in the following sense: if the theory says that C caused (didn't cause) E in a causal model, M, then it will continue to say that C caused (didn't cause) E once we've removed an inessential variable from M. I suggest that, if this theory is true, then we should understand a cause as something which transmits deviant or non-inertial behavior to its effect.
I argue that by considering Kant’s engagement with previous theorists of natural right, we can gain a clearer understanding of how he transformed the discipline from its foundations. To do this, I focus my analysis on Kant’s (critical) reception of two models of natural right with which he was very familiar: one from Alexander Baumgarten’s Elements of First Practical Philosophy [Initia philosophiae practicae primae], the other from Gottfried Achenwall’s Natural Law [Ius naturae]. The Initia served as a basis for Kant’s lectures on moral philosophy for over three decades and may thus be considered as having played an important role in shaping his practical philosophy as a whole. Achenwall’s Ius naturae was the textbook that Kant employed in his lectures on natural right for over two decades. I argue that Kant distances himself from previous models of natural right in three main regards: the identification of moral laws with laws of nature, the normative connection between the principles of right and a natural end, and the place of God as the author of moral laws. I then briefly discuss what Kant retains of Baumgarten’s and Achenwall’s models of natural right. Kant’s Metaphysics of Morals preserves the view that a rational doctrine of right belongs to a broader systematic doctrine that comprehends the entire system of duties. I address the question of the division of the Sittenlehre into two branches and claim that, whereas Kant did not consider Baumgarten’s answer to the problem of distinguishing juridical from ethical principles to be satisfactory, he found a key piece of his own resolution in Achenwall’s Ius naturae, namely the definition of right as a power to coerce.
Kripke models, interpreted realistically, have difficulty making sense of the thesis that there might have existed things that do not in fact exist, since a Kripke model in which this thesis is true requires a model structure in which there are possible worlds with domains that contain things that do not exist. This paper argues that we can use Kripke models as representational devices that allow us to give a realistic interpretation of a modal language. The method of doing this is sketched, with the help of an analogy with a Galilean relativist theory of spatial properties and relations.
Models are indispensable tools of scientific inquiry, and one of their main uses is to improve our understanding of the phenomena they represent. How do models accomplish this? And what does this tell us about the nature of understanding? While much recent work has aimed at answering these questions, philosophers' focus has been squarely on models in empirical science. I aim to show that pure mathematics also deserves a seat at the table. I begin by presenting two cases: Cramér’s random model of the prime numbers and the function field model of the integers. These cases show that mathematicians, like empirical scientists, rely on unrealistic models to gain understanding of complex phenomena. They also have important implications for some much-discussed theses about scientific understanding. First, modeling practices in mathematics confirm that one can gain understanding without obtaining an explanation. Second, these cases undermine the popular thesis that unrealistic models confer understanding by imparting counterfactual knowledge.
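Cramér's random model, the first case this abstract names, is easy to simulate: each integer n ≥ 3 is declared "prime" independently with probability 1/log n, and the count of such pseudo-primes up to N then tracks the prime-counting estimates of the prime number theorem. The following sketch is illustrative only; the bound N and the seed are arbitrary choices, not from the paper.

```python
import math
import random

# Simulation of Cramér's random model of the primes: integer n >= 3
# is 'prime' with probability 1/log(n), independently. N and the seed
# are illustrative assumptions.

def cramer_primes(N, seed=0):
    rng = random.Random(seed)
    return [n for n in range(3, N + 1) if rng.random() < 1 / math.log(n)]

N = 100_000
pseudo = cramer_primes(N)

# The exact expected count under the model, a logarithmic-integral-style
# sum that the simulated count should approximate closely.
expected = sum(1 / math.log(n) for n in range(3, N + 1))
```

The model is unrealistic in exactly the way the abstract highlights (real primes are not independent random events, and are all odd past 2), yet its statistics illuminate genuine conjectures about prime gaps.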
We critically engage two traditional views of scientific data and outline a novel philosophical view that we call the pragmatic-representational view of data. On the PR view, data are representations that are the product of a process of inquiry, and they should be evaluated in terms of their adequacy or fitness for particular purposes. Some important implications of the PR view for data assessment, related to misrepresentation, context-sensitivity, and complementary use, are highlighted. The PR view provides insight into the common but little-discussed practices of iteratively reusing and repurposing data, which result in many datasets’ having a phylogeny—an origin and complex evolutionary history—that is relevant to their evaluation and future use. We relate these insights to the open-data and data-rescue movements, and highlight several future avenues of research that build on the PR view of data.
Animal models have long been used to investigate human mental disorders, including depression, anxiety, and schizophrenia. This practice is usually justified in terms of the benefits (to humans) outweighing the costs (to the animals). I argue on utility maximization grounds that we should phase out animal models in neuropsychiatric research. Our leading theories of how human minds and behavior evolved invoke sociocultural factors whose relation to nonhuman minds, societies, and behavior has not been homologized. Thus it is not at all clear that we are gaining the epistemic or clinical benefits we want from this animal-based research.
This book analyses the impact computerization has had on contemporary science and explains the origins, technical nature and epistemological consequences of the current decisive interplay between technology and science: an intertwining of formalism, computation, data acquisition, data treatment and visualization, and how these factors have led to the spread of simulation models since the 1950s.

Using historical, comparative and interpretative case studies from a range of disciplines, with a particular emphasis on the case of plant studies, the author shows how and why computers, data treatment devices and programming languages have occasioned a gradual but irresistible and massive shift from mathematical models to computer simulations.
I analyse three of the most interesting and extensive approaches to theoretical models: the classical ones proposed by Peter Achinstein and Michael Redhead, and the relatively rarely analysed approach of Ryszard Wójcicki, belonging to a later phase of his research, in which he gave up applying the conceptual apparatus of logical semantics. I take into consideration the approaches to theoretical models in which they are qualified as models representing reality. That is why I omit Max Black’s and Mary Hesse’s concepts of such models, as those two concepts belong to the analogue model group if we consider the main function of a model of a given class as its classification criterion. My main focus is on theoretical models with representative functions and, in a broader context, on the question of representation.
Cognitive agents, whether human or computer, that engage in natural-language discourse and that have beliefs about the beliefs of other cognitive agents must be able to represent objects the way they believe them to be and the way they believe others believe them to be. They must be able to represent other cognitive agents both as objects of beliefs and as agents of beliefs. They must be able to represent their own beliefs, and they must be able to represent beliefs as objects of beliefs. These requirements raise questions about the number of tokens of the belief representation language needed to represent believers and propositions in their normal roles and in their roles as objects of beliefs. In this paper, we explicate the relations among nodes, mental tokens, concepts, actual objects, concepts in the belief spaces of an agent and the agent's model of other agents, concepts of other cognitive agents, and propositions. We extend, deepen, and clarify our theory of intensional knowledge representation for natural-language processing, as presented in previous papers and in light of objections raised by others. The essential claim is that tokens in a knowledge-representation system represent only intensions and not extensions. We are pursuing this investigation by building CASSIE, a computer model of a cognitive agent and, to the extent she works, a cognitive agent herself. CASSIE's mind is implemented in the SNePS knowledge-representation and reasoning system.
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
Historically embryogenesis has been among the most philosophically intriguing phenomena. In this paper I focus on one aspect of biological development that was particularly perplexing to the ancients: self-organisation. For many ancients, the fact that an organism determines the important features of its own development required a special model for understanding how this was possible. This was especially true for Aristotle, Alexander, and Simplicius, who all looked to contemporary technology to supply that model. However, they did not all agree on what kind of device should be used. In this paper I explore the way these ancients made use of technology as a model for the developing embryo. I argue that their different choices of device reveal fundamental differences in the way each thinker understood the nature of biological development itself. In the final section of the paper I challenge the traditional view (dating back to Alexander's interpretation of Aristotle) that the use of automata in GA can simply be read off from their use in the de motu.
I argue that verbal models should be included in a philosophical account of the scientific practice of modelling. Weisberg (2013) has directly opposed this thesis on the grounds that verbal structures, if they are used in science, only merely describe models. I look at examples from Darwin's On the Origin of Species (1859) of verbally constructed narratives that I claim model the general phenomenon of evolution by natural selection. In each of the cases I look at, a particular scenario is described that involves at least some fictitious elements but represents the salient causal components of natural selection. I stress the importance of prioritising observation of scientific practice for the philosophy of modelling, and I suggest that there are other likely model types that are excluded from philosophical accounts.
Three metascientific concepts that have been objects of philosophical analysis are the concepts of law, model and theory. The aim of this article is to present the explication of these concepts, and of their relationships, made within the framework of Sneedean or Metatheoretical Structuralism (Balzer et al. 1987), and of their application to a case from the realm of biology: Population Dynamics. The analysis carried out will make it possible to support, contrary to what some philosophers of science in general and of biology in particular hold, the following claims: a) there are "laws" in biological sciences, b) many of the heterogeneous and different "models" of biology can be accommodated under some "theory", and c) this is exactly what confers great unifying power on biological theories.
Marc Artiga (2020). "Models, information and meaning." Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 82:101284.
There has recently been an explosion of formal models of signalling, which have been developed to learn about different aspects of meaning. This paper discusses whether that success can also be used to provide an original naturalistic theory of meaning in terms of information or some related notion. In particular, it argues that, although these models can teach us a lot about different aspects of content, at the moment they fail to support the idea that meaning just is some kind of information. As an alternative, I suggest a more modest approach to the relationship between informational notions used in models and semantic properties in the natural world.
In what follows, I will give examples of the sorts of step that can be taken towards spelling out the intuition that, after all, good models might be true. Along the way, I provide an outline of my account of models as ontologically and pragmatically constrained representations. And I emphasize the importance of examining models as functionally composed systems in which different components play different roles and only some components serve as relevant truth bearers. This disputes the standard approach that proceeds by simply counting true and false elements in models in their entirety and concludes that models are false since they contain so many false elements. I call my alternative the functional decomposition approach.
Mathematical models provide explanations of limited power of specific aspects of phenomena. One way of articulating their limits here, without denying their essential powers, is in terms of contrastive explanation.
This paper is about modeling morality, with a proposal as to the best way to do it. There is a small problem, however, in continuing disagreements over what morality actually is, and so over what is worth modeling. This paper resolves this problem around an understanding of the purpose of a moral model, and from this purpose approaches the best way to model morality.
The book answers long-standing questions on scientific modeling and inference across multiple perspectives and disciplines, including logic, mathematics, physics and medicine. The different chapters cover a variety of issues, such as the role models play in scientific practice; the way science shapes our concept of models; ways of modeling the pursuit of scientific knowledge; the relationship between our concept of models and our concept of science. The book also discusses models and scientific explanations; models in the semantic view of theories; the applicability of mathematical models to the real world and their effectiveness; the links between models and inferences; and models as a means for acquiring new knowledge. It analyzes different examples of models in physics, biology, mathematics and engineering. Written for researchers and graduate students, it provides a cross-disciplinary reference guide to the notion and the use of models and inferences in science.
In the past few years, social epistemologists have developed several formal models of the social organisation of science. While their robustness and representational adequacy have been analysed at length, the function of these models has begun to be discussed in more general terms only recently. In this article, I will interpret many of the current formal models of the scientific community as representing the latest development of what I will call the ‘Kuhnian project’. These models share with Kuhn a number of questions about the relation between individuals and communities. At the same time, they also inherit some of Kuhn’s problematic characterisations of the scientific community. In particular, current models of the social organisation of science represent the scientific community as essentially value-free. This may put into question both their representational adequacy and their normative ambitions. In the end, it will be shown that the discussion on the formal models of the scientific community may contribute in fruitful ways to the ongoing debates on value judgements in science.
I explore a challenge that idealisations pose to scientific realism and argue that the realist can best accommodate idealisations by capitalising on certain modal features of idealised models that are underwritten by laws of nature.
Under the independence and competence assumptions of Condorcet’s classical jury model, the probability of a correct majority decision converges to certainty as the jury size increases, a seemingly unrealistic result. Using Bayesian networks, we argue that the model’s independence assumption requires that the state of the world (guilty or not guilty) is the latest common cause of all jurors’ votes. But often – arguably in all courtroom cases and in many expert panels – the latest such common cause is a shared ‘body of evidence’ observed by the jurors. In the corresponding Bayesian network, the votes are direct descendants not of the state of the world, but of the body of evidence, which in turn is a direct descendant of the state of the world. We develop a model of jury decisions based on this Bayesian network. Our model permits the possibility of misleading evidence, even for a maximally competent observer, which cannot easily be accommodated in the classical model. We prove that (i) the probability of a correct majority verdict converges to the probability that the body of evidence is not misleading, a value typically below 1; (ii) depending on the required threshold of ‘no reasonable doubt’, it may be impossible, even in an arbitrarily large jury, to establish guilt of a defendant ‘beyond any reasonable doubt’.
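The convergence result described here can be illustrated with a small Monte Carlo sketch (all parameters and names are illustrative, not drawn from the paper): jurors read a shared, possibly misleading body of evidence rather than observing the state of the world directly, so the majority's accuracy plateaus at the probability that the evidence is sound rather than converging to 1.

```python
import random

def simulate(jury_size, trials=5000, p_sound=0.9, juror_competence=0.75):
    # Hypothetical parameters: p_sound is the chance the shared body of
    # evidence points toward the true state; juror_competence is each
    # juror's chance of reading the evidence correctly.
    correct = 0
    for _ in range(trials):
        truth = random.random() < 0.5                      # guilty with prob 0.5
        evidence = truth if random.random() < p_sound else not truth
        guilty_votes = sum(
            (evidence if random.random() < juror_competence else not evidence)
            for _ in range(jury_size)
        )
        majority_says_guilty = guilty_votes > jury_size / 2
        correct += (majority_says_guilty == truth)
    return correct / trials

# As the jury grows, accuracy approaches p_sound (here 0.9), not certainty.
for n in [1, 11, 101]:
    print(n, round(simulate(n), 3))
```

This contrasts with the classical Condorcet setup, where votes descend directly from the state of the world and the same loop would drive accuracy toward 1.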
This article offers a characterization of what I call multiple-models juxtaposition (MMJ), a strategy for managing trade-offs among modeling desiderata. MMJ displays models of distinct phenomena to...
This chapter has two aims. The first aim is to compare and contrast three different conceptual-explanatory models for thinking about mental illness with an eye towards identifying the assumptions upon which each model is based, and exploring the model’s advantages and limitations in clinical contexts. Major Depressive Disorder is used as an example to illustrate these points. The second aim is to address the question of what conceptual-theoretical framework for thinking about mental illness is most likely to facilitate the discovery of causes and treatments of mental illness in research contexts. To this end, the National Institute of Mental Health’s Research Domain Criteria (RDoC) Project is briefly considered.
Bayesian models of legal arguments generally aim to produce a single integrated model, combining each of the legal arguments under consideration. This combined approach implicitly assumes that variables and their relationships can be represented without any contradiction or misalignment, and in a way that makes sense with respect to the competing argument narratives. This paper describes a novel approach to compare and ‘average’ Bayesian models of legal arguments that have been built independently and with no attempt to make them consistent in terms of variables, causal assumptions or parameterization. The approach involves assessing how well competing models of legal arguments explain or predict facts uncovered before or during the trial process. Those models that are more heavily disconfirmed by the facts are given lower weight, as model plausibility measures, in the Bayesian model comparison and averaging framework adopted. In this way a plurality of arguments is allowed, yet a single judgement based on all arguments is possible and rational.
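The weighting scheme described here can be sketched in a toy form (the model names, likelihoods and guilt probabilities below are invented for illustration and are not the authors' implementation): each competing argument model assigns a likelihood to the facts uncovered at trial, and posterior model weights then follow from Bayes' theorem, so heavily disconfirmed models contribute little to the averaged verdict probability.

```python
# Toy sketch of Bayesian model averaging over competing argument models.
# Likelihood each hypothetical model assigns to the observed facts:
likelihoods = {"prosecution": 0.02, "defence": 0.10, "alternative": 0.01}
priors = {m: 1 / 3 for m in likelihoods}          # equal prior plausibility

# Posterior model weights: disconfirmed models receive low weight.
evidence = sum(priors[m] * likelihoods[m] for m in likelihoods)
weights = {m: priors[m] * likelihoods[m] / evidence for m in likelihoods}

# Each model's probability of guilt, averaged by the weights above:
p_guilt = {"prosecution": 0.95, "defence": 0.15, "alternative": 0.40}
averaged = sum(weights[m] * p_guilt[m] for m in weights)
print({m: round(w, 3) for m, w in weights.items()})
print(round(averaged, 3))
```

The single averaged figure is what permits a judgement based on all arguments at once, while each argument retains its own internal model.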
Economic models describe individuals in terms of underlying characteristics, such as taste for some good, sympathy level for another player, time discount rate, risk attitude, and so on. In real life, such characteristics change through experiences: taste for Mozart changes through listening to it, sympathy for another player through observing his moves, and so on. Models typically ignore change, not just for simplicity but also because it is unclear how to incorporate change. I introduce a general axiomatic framework for defining, analysing and comparing rival models of change. I show that seemingly basic postulates on modelling change together have strong implications, like irrelevance of the order in which someone has his experiences and ‘linearity’ of change. This is a step towards placing the modelling of change on solid axiomatic grounds and enabling non-arbitrary incorporation of change into economic models.
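One way to see why ‘linearity’ goes hand in hand with order-irrelevance (a toy numerical illustration; the additive update rule and numbers are mine, not the paper's) is that if each experience shifts a characteristic by a fixed amount, any permutation of the same experiences yields the same final value.

```python
from itertools import permutations

def apply_experiences(initial, shifts):
    # Linear (additive) change: each experience shifts the characteristic
    # by a fixed amount, independent of the history of prior experiences.
    level = initial
    for s in shifts:
        level += s
    return level

taste_for_mozart = 0.2
experiences = [0.3, -0.1, 0.25]   # hypothetical listening experiences

# Every ordering of the same experiences ends at the same level:
results = {round(apply_experiences(taste_for_mozart, p), 10)
           for p in permutations(experiences)}
print(results)   # a single value: order is irrelevant under linearity
```

A history-dependent update rule (e.g. one where later experiences count for less) would break this, which is why order-irrelevance is a substantive implication rather than a harmless simplification.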
Abstract: Is there something specific about modelling that distinguishes it from many other theoretical endeavours? We consider Michael Weisberg’s thesis that modelling is a form of indirect representation through a close examination of the historical roots of the Lotka–Volterra model. While Weisberg discusses only Volterra’s work, we also study Lotka’s very different design of the Lotka–Volterra model. We will argue that while there are elements of indirect representation in both Volterra’s and Lotka’s modelling approaches, they are largely due to two other features of contemporary model construction processes that Weisberg does not explicitly consider: the methods-drivenness and outcome-orientedness of modelling.
1 Introduction
2 Modelling as Indirect Representation
3 The Design of the Lotka–Volterra Model by Volterra
3.1 Volterra’s method of hypothesis
3.2 The construction of the Lotka–Volterra model by Volterra
4 The Design of the Lotka–Volterra Model by Lotka
4.1 Physical biology according to Lotka
4.2 Lotka’s systems approach and the Lotka–Volterra model
5 Philosophical Discussion: Strategies and Tools of Modelling
5.1 Volterra’s path from the method of isolation to the method of hypothesis
5.2 The template-based approach of Lotka
5.3 Modelling: methods-driven and outcome-oriented
6 Conclusion
Computational chemistry grew in a new era of “desktop modeling,” which coincided with a growing demand for modeling software, especially from the pharmaceutical industry. Parameterization of models in computational chemistry is an arduous enterprise, and we argue that this activity leads, in this specific context, to tensions among scientists regarding the epistemic opacity and transparency of parameterized methods and the software implementing them. We relate one flame war from the Computational Chemistry mailing list in order to assess in detail the relationships between modeling methods, parameterization, software and the various forms of their enclosure or disclosure. Our claim is that parameterization issues are an important and often neglected source of epistemic opacity and that this opacity is entangled in methods and software alike. Models and software must be addressed together to understand the epistemological tensions at stake.
In this article, I argue that the use of scientific models that attribute intentional content to complex systems bears a striking similarity to the way in which statistical descriptions are used. To demonstrate this, I compare and contrast an intentional model with a statistical model, and argue that key similarities between the two give us compelling reasons to consider both as a type of phenomenological model. I then demonstrate how intentional descriptions play an important role in scientific methodology as a type of phenomenological model, and argue that this makes them as essential as any other model of this type.
Autonomist accounts of cognitive science suggest that cognitive model building and theory construction (can or should) proceed independently of findings in neuroscience. Common functionalist justifications of autonomy rely on there being relatively few constraints between neural structure and cognitive function (e.g., Weiskopf, 2011). In contrast, an integrative mechanistic perspective stresses the mutual constraining of structure and function (e.g., Piccinini & Craver, 2011; Povich, 2015). In this paper, I show how model-based cognitive neuroscience (MBCN) epitomizes the integrative mechanistic perspective and concentrates the most revolutionary elements of the cognitive neuroscience revolution (Boone & Piccinini, 2016). I also show how the prominent subset account of functional realization supports the integrative mechanistic perspective I take on MBCN and use it to clarify the intralevel and interlevel components of integration.
In this paper I investigate Putnam’s model-theoretic argument from a transcendent standpoint, in spite of Putnam’s well-known objections to such a standpoint. This transcendence, however, requires ascent to something more like a Tarskian meta-level than what Putnam regards as a “God’s eye view”. Still, it is methodologically quite powerful, leading to a significant increase in our investigative tools. The result is a shift from Putnam’s skeptical conclusion to a new understanding of realism, truth, correspondence, knowledge, and theories, or certain aspects thereof, based on, among other things, a better understanding of what models are designed (and not designed) to do.
People often assume that everyone can be divided by sex/gender (that is, by physical and social characteristics having to do with maleness and femaleness) into two tidy categories: male and female. Careful thought, however, leads us to reject that simple ‘binary’ picture, since not all people fall precisely into one group or the other. But if we do not think of sex/gender in terms of those two categories, how else might we think of it? Here I consider four distinct models; each model correctly captures some features of sex/gender, and so each is appropriate in some contexts. But the first three models are inadequate when tough questions arise, like whether trans women should be admitted as students at a women’s college or when it is appropriate for intersex athletes to compete in women’s athletic events. (‘Trans’ refers to the wide range of people who have an atypical gender identity for someone of their birth-assigned sex, and ‘intersex’ refers to people whose bodies naturally develop with markedly different physical sex characteristics than are paradigmatic of either men or women.) Such questions of inclusion and exclusion matter enormously to the people whose lives are affected by them, but ordinary notions of sex/gender offer few answers. The fourth model I describe is especially designed to make those hard decisions easier by providing a process to clarify what matters.
This paper concerns model-checking of fragments and extensions of CTL* on infinite-state Presburger counter systems, where the states are vectors of integers and the transitions are determined by means of relations definable within Presburger arithmetic. In general, reachability properties of counter systems are undecidable, but we have identified a natural class of admissible counter systems (ACS) for which we show that the quantification over paths in CTL* can be simulated by quantification over tuples of natural numbers, eventually allowing translation of the whole Presburger-CTL* into Presburger arithmetic, thereby enabling effective model checking. We provide evidence that our results are close to optimal with respect to the class of counter systems described above.
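A counter system in the sense described pairs integer-vector states with transitions given by Presburger-definable guards and updates. The following toy sketch (the system and its transitions are mine, chosen for illustration) shows such a system with two counters and a bounded breadth-first reachability check; the bound is needed precisely because, as the abstract notes, reachability for counter systems is undecidable in general.

```python
from collections import deque

# Toy counter system over two counters (x, y), with transitions whose
# guards and updates are Presburger-definable (illustrative choices):
#   t1: x := x + 1
#   t2: if x > 0 then x := x - 1, y := y + 1
transitions = [
    lambda x, y: (x + 1, y),
    lambda x, y: (x - 1, y + 1) if x > 0 else None,
]

def reachable(start, target, bound=50):
    # Bounded BFS: we only explore counter values up to `bound`, since
    # unbounded reachability is undecidable for general counter systems.
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for t in transitions:
            nxt = t(*state)
            if nxt is not None and nxt not in seen and max(nxt) <= bound:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable((0, 0), (0, 3)))   # reachable via alternating t1, t2 steps
```

The admissible class the paper identifies is what allows such questions to be answered exactly, by translating path quantification into Presburger arithmetic rather than by bounded search.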