The prospects and limitations of defining truth in a finite model in the same language whose truth one is considering are thoroughly examined. It is shown that in contradistinction to Tarski's undefinability theorem for arithmetic, it is in a definite sense possible in this case to define truth in the very language whose truth is in question.
For computer simulation models to usefully inform climate risk management decisions, uncertainties in model projections must be explored and characterized. Because doing so requires running the model many times over, and because computing resources are finite, uncertainty assessment is more feasible using models that need less computer processor time. Such models are generally simpler in the sense of being more idealized, or less realistic. So modelers face a trade-off between realism and extent of uncertainty quantification. Seeing this trade-off for the important epistemic issue that it is requires a shift in perspective from the established simplicity literature in philosophy of science.
One of the main motivations for having a compositional semantics is the account of the productivity of natural languages. Formal languages are often part of the account of productivity, i.e., of how beings with finite capacities are able to produce and understand a potentially infinite number of sentences, by offering a model of this process. This account of productivity consists in the generation of proofs in a formal system, which is taken to represent the way speakers grasp the meaning of an indefinite number of sentences. The informational basis is restricted to what is represented in the lexicon. This constraint is considered a requirement for the account of productivity, or at least of an important feature of productivity, namely, that we can grasp automatically the meaning of a huge number of complex expressions, far beyond what can be memorized. However, empirical results in psycholinguistics, and especially particular patterns of ERP, show that the brain integrates information from different sources very fast, without any felt effort on the part of the speaker. This shows that formal procedures do not explain productivity. However, formal models are still useful in the account of how we get at the semantic value of a complex expression, once we have the meanings of its parts, even if there is no formal explanation of how we get at those meanings. A practice-oriented view of modeling gives an adequate interpretation of this result: formal compositional semantics may be a useful model for some explanatory purposes concerning natural languages, without being a good model for dealing with other explananda.
All reasoners described in the most widespread models of a rational reasoner exhibit logical omniscience, which is impossible for finite reasoners (real reasoners). The most common strategy for dealing with the problem of logical omniscience is to interpret the models using a notion of beliefs different from explicit beliefs. For example, the models could be interpreted as describing the beliefs that the reasoner would hold if the reasoner were able to reason indefinitely (stable beliefs). Then the models would describe maximum rationality, which a finite reasoner can only approach in the limit of a reasoning sequence. This strategy has important consequences for epistemology. If a finite reasoner can only approach maximum rationality in the limit of a reasoning sequence, then the efficiency of reasoning is epistemically (and not only pragmatically) relevant. In this paper, I present an argument to this conclusion and discuss its consequences, for example, the vindication of the principle 'no rationality through brute-force'.
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
Many biological investigations are organized around a small group of species, often referred to as ‘model organisms’, such as the fruit fly Drosophila melanogaster. The terms ‘model’ and ‘modelling’ also occur in biology in association with mathematical and mechanistic theorizing, as in the Lotka–Volterra model of predator-prey dynamics. What is the relation between theoretical models and model organisms? Are these models in the same sense? We offer an account on which the two practices are shown to have different epistemic characters. Theoretical modelling is grounded in explicit and known analogies between model and target. By contrast, inferences from model organisms are empirical extrapolations. Often such extrapolation is based on shared ancestry, sometimes in conjunction with other empirical information. One implication is that such inferences are unique to biology, whereas theoretical models are common across many disciplines. We close by discussing the diversity of uses to which model organisms are put, suggesting how these relate to our overall account. Outline: 1 Introduction; 2 Volterra and Theoretical Modelling; 3 Drosophila as a Model Organism; 4 Generalizing from Work on Model Organisms; 5 Phylogenetic Inference and Model Organisms; 6 Further Roles of Model Organisms; 6.1 Preparative experimentation; 6.2 Model organisms as paradigms; 6.3 Model organisms as theoretical models; 6.4 Inspiration for engineers; 6.5 Anchoring a research community; 7 Conclusion.
Recent philosophical attention to climate models has highlighted their weaknesses and uncertainties. Here I address the ways that models gain support through observational data. I review examples of model fit, variety of evidence, and independent support for aspects of the models, contrasting my analysis with that of other philosophers. I also investigate model robustness, which often emerges when comparing climate models simulating the same time period or set of conditions. Starting from Michael Weisberg’s analysis of robustness, I conclude that his approach involves a version of reasoning from variety of evidence, enabling this robustness to be a confirmatory virtue.
Models as Make-Believe offers a new approach to scientific modelling by looking to an unlikely source of inspiration: the dolls and toy trucks of children's games of make-believe.
We argue that concerns about double-counting—using the same evidence both to calibrate or tune climate models and also to confirm or verify that the models are adequate—deserve more careful scrutiny in climate modelling circles. It is widely held that double-counting is bad and that separate data must be used for calibration and confirmation. We show that this is far from obviously true, and that climate scientists may be confusing their targets. Our analysis turns on a Bayesian/relative-likelihood approach to incremental confirmation. According to this approach, double-counting is entirely proper. We go on to discuss plausible difficulties with calibrating climate models, and we distinguish more and less ambitious notions of confirmation. Strong claims of confirmation may not, in many cases, be warranted, but it would be a mistake to regard double-counting as the culprit. Outline: 1 Introduction; 2 Remarks about Models and Adequacy-for-Purpose; 3 Evidence for Calibration Can Also Yield Comparative Confirmation; 3.1 Double-counting I; 3.2 Double-counting II; 4 Climate Science Examples: Comparative Confirmation in Practice; 4.1 Confirmation due to better and worse best fits; 4.2 Confirmation due to more and less plausible forcings values; 5 Old Evidence; 6 Doubts about the Relevance of Past Data; 7 Non-comparative Confirmation and Catch-Alls; 8 Climate Science Example: Non-comparative Confirmation and Catch-Alls in Practice; 9 Concluding Remarks.
Multialgebras (or hyperalgebras or non-deterministic algebras) have been much studied in mathematics and in computer science. In 2016 Carnielli and Coniglio introduced a class of multialgebras called swap structures, as a semantic framework for dealing with several Logics of Formal Inconsistency (or LFIs) that cannot be semantically characterized by a single finite matrix. In particular, these LFIs are not algebraizable by the standard tools of abstract algebraic logic. In this paper, the first steps towards a theory of non-deterministic algebraization of logics by swap structures are given. Specifically, a formal study of swap structures for LFIs is developed, by adapting concepts of universal algebra to multialgebras in a suitable way. A decomposition theorem similar to Birkhoff's representation theorem is obtained for each class of swap structures. Moreover, when applied to the 3-valued algebraizable logics J3 and Ciore, their classes of algebraic models are retrieved, and the swap structures semantics become twist structures semantics (as independently introduced by M. Fidel and D. Vakarelov). This fact, together with the existence of a functor from the category of Boolean algebras to the category of swap structures for each LFI (which is closely connected with Kalman's functor), suggests that swap structures can be seen as non-deterministic twist structures. This opens new avenues for dealing with non-algebraizable logics by the more general methodology of multialgebraic semantics.
This article shows how the MISS account of models—as isolations and surrogate systems—accommodates and elaborates Sugden’s account of models as credible worlds and Hausman’s account of models as explorations. Theoretical models typically isolate by means of idealization, and they are representatives of some target system, which prompts issues of resemblance between the two to arise. Models as representations are constrained both ontologically (by their targets) and pragmatically (by the purposes and audiences of the modeller), and these relations are coordinated by a model commentary. Surrogate models are often about single mechanisms. They are distinguishable from substitute models, which are examined without any concern about their connections with the target. Models as credible worlds are surrogate models that are believed to provide access to their targets on account of their credibility (of which a few senses are distinguished).
Classical logic is usually interpreted as the logic of propositions. But from Boole's original development up to modern categorical logic, there has always been the alternative interpretation of classical logic as the logic of subsets of any given (nonempty) universe set. Partitions on a universe set are dual to subsets of a universe set in the sense of the reverse-the-arrows category-theoretic duality--which is reflected in the duality between quotient objects and subobjects throughout algebra. Hence the idea arises of a dual logic of partitions. That dual logic is described here. Partition logic is at the same mathematical level as subset logic since models for both are constructed from (partitions on or subsets of) arbitrary unstructured sets with no ordering relations, compatibility or accessibility relations, or topologies on the sets. Just as Boole developed logical finite probability theory as a quantitative treatment of subset logic, applying the analogous mathematical steps to partition logic yields a logical notion of entropy so that information theory can be refounded on partition logic. But the biggest application is that when partition logic and the accompanying logical information theory are "lifted" to complex vector spaces, then the mathematical framework of quantum mechanics is obtained. Partition logic models indefiniteness (i.e., numerical attributes on a set become more definite as the inverse-image partition becomes more refined) while subset logic models the definiteness of classical physics (an entity either definitely has a property or definitely does not). Hence partition logic provides the backstory so the old idea of "objective indefiniteness" in QM can be fleshed out to a full interpretation of quantum mechanics.
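The logical entropy this abstract alludes to has a simple closed form: for a partition π on a finite universe U, h(π) = 1 − Σ_B (|B|/|U|)², i.e., the probability that two independent random draws from U fall in different blocks of π. A minimal sketch of that formula (the function name is mine, not the paper's):

```python
from fractions import Fraction

def logical_entropy(partition, universe):
    """h(pi) = 1 - sum over blocks B of (|B|/|U|)^2: the probability
    that two independent uniform draws from U are distinguished
    (put in different blocks) by the partition pi."""
    n = len(universe)
    return 1 - sum(Fraction(len(block), n) ** 2 for block in partition)

U = {1, 2, 3, 4}
coarse = [{1, 2}, {3, 4}]        # h = 1 - (1/4 + 1/4) = 1/2
fine = [{1}, {2}, {3}, {4}]      # h = 1 - 4*(1/16)   = 3/4
# Refining a partition makes more distinctions, so entropy rises:
assert logical_entropy(coarse, U) < logical_entropy(fine, U)
```

Refinement monotonicity, shown in the last assertion, is the quantitative counterpart of the abstract's point that attributes "become more definite as the inverse-image partition becomes more refined."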
In this paper I propose an account of representation for scientific models based on Kendall Walton’s ‘make-believe’ theory of representation in art. I first set out the problem of scientific representation and respond to a recent argument due to Craig Callender and Jonathan Cohen, which aims to show that the problem may be easily dismissed. I then introduce my account of models as props in games of make-believe and show how it offers a solution to the problem. Finally, I demonstrate an important advantage my account has over other theories of scientific representation. All existing theories analyse scientific representation in terms of relations, such as similarity or denotation. By contrast, my account does not take representation in modelling to be essentially relational. For this reason, it can accommodate a group of models often ignored in discussions of scientific representation, namely models which are representational but which represent no actual object.
We call for a new philosophical conception of models in physics. Some standard conceptions take models to be useful approximations to theorems, which are the chief means to test theories. Hence the heuristics of model building is dictated by the requirements and practice of theory-testing. In this paper we argue that a theory-driven view of models cannot account for common procedures used by scientists to model phenomena. We illustrate this thesis with a case study: the construction of one of the first comprehensive models of superconductivity, by the London brothers in 1934. Instead of a theory-driven view of models, we suggest a phenomenologically driven one.
Detailed examinations of scientific practice have revealed that the use of idealized models in the sciences is pervasive. These models play a central role in not only the investigation and prediction of phenomena, but in their received scientific explanations as well. This has led philosophers of science to begin revising the traditional philosophical accounts of scientific explanation in order to make sense of this practice. These new model-based accounts of scientific explanation, however, raise a number of key questions: Can the fictions and falsehoods inherent in the modeling practice do real explanatory work? Do some highly abstract and mathematical models exhibit a noncausal form of scientific explanation? How can one distinguish an exploratory "how-possibly" model explanation from a genuine "how-actually" model explanation? Do modelers face tradeoffs such that a model that is optimized for yielding explanatory insight, for example, might fail to be the most predictively accurate, and vice versa? This chapter explores the various answers that have been given to these questions.
We investigate an enrichment of the propositional modal language ℒ with a "universal" modality ■ having semantics x ⊧ ■φ iff ∀y(y ⊧ φ), and a countable set of "names" - a special kind of propositional variables ranging over singleton sets of worlds. The obtained language ℒc proves to have great expressive power. It is equivalent with respect to modal definability to another enrichment ℒ([≠]) of ℒ, where [≠] is an additional "difference" modality with the semantics x ⊧ [≠]φ iff ∀y(y ≠ x → y ⊧ φ). Model-theoretic characterizations of modal definability in these languages are obtained. Further we consider deductive systems in ℒc. Strong completeness of the normal ℒc logics is proved with respect to models in which all worlds are named. Every ℒc-logic axiomatized by formulae containing only names (but not propositional variables) is proved to be strongly frame-complete. Problems concerning transfer of properties ([in]completeness, filtration, finite model property, etc.) from ℒ to ℒc are discussed. Finally, further perspectives for names in a multimodal environment are briefly sketched.
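The two semantic clauses quoted in this abstract are simple enough to state directly in code. A toy sketch under assumed names (the worlds and atomic extensions are illustrative, not from the paper): the universal modality ignores the evaluation point, while the difference modality quantifies over every world except it.

```python
W = {"w1", "w2", "w3"}                        # assumed toy set of worlds
ext = {"p": {"w1", "w2", "w3"}, "q": {"w1"}}  # extensions of two atoms

def sat_universal(phi_ext, worlds):
    """x |= [u]phi  iff  for all y: y |= phi.
    The evaluation point x is irrelevant, so it is not a parameter."""
    return worlds <= phi_ext

def sat_difference(phi_ext, worlds, x):
    """x |= [!=]phi  iff  for all y != x: y |= phi."""
    return (worlds - {x}) <= phi_ext

assert sat_universal(ext["p"], W)             # p holds everywhere
assert not sat_universal(ext["q"], W)         # q fails at w2, w3
assert not sat_difference(ext["q"], W, "w2")  # q still fails at w3
```

The sketch also makes visible why ■ is definable from [≠]: ■φ is equivalent to φ ∧ [≠]φ at any evaluation point.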
The geosciences include a wide spectrum of disciplines ranging from paleontology to climate science, and involve studies of a vast range of spatial and temporal scales, from the deep-time history of microbial life to the future of a system no less immense and complex than the entire Earth. Modeling is thus a central and indispensable tool across the geosciences. Here, we review both the history and current state of model-based inquiry in the geosciences. Research in these fields makes use of a wide variety of models, such as conceptual, physical, and numerical models, and more specifically cellular automata, artificial neural networks, agent-based models, coupled models, and hierarchical models. We note the increasing demands to incorporate biological and social systems into geoscience modeling, challenging the traditional boundaries of these fields. Understanding and articulating the many different sources of scientific uncertainty – and finding tools and methods to address them – has been at the forefront of most research in geoscience modeling. We discuss not only structural model uncertainties, parameter uncertainties, and solution uncertainties, but also the diverse sources of uncertainty arising from the complex nature of geoscience systems themselves. Without an examination of the geosciences, our philosophies of science and our understanding of the nature of model-based science are incomplete.
As the ongoing literature on the paradoxes of the Lottery and the Preface reminds us, the nature of the relation between probability and rational acceptability remains far from settled. This article provides a novel perspective on the matter by exploiting a recently noted structural parallel with the problem of judgment aggregation. After offering a number of general desiderata on the relation between finite probability models and sets of accepted sentences in a Boolean sentential language, it is noted that a number of these constraints will be satisfied if and only if acceptable sentences are true under all valuations in a distinguished non-empty set W. Drawing inspiration from distance-based aggregation procedures, various scoring rule based membership conditions for W are discussed and a possible point of contact with ranking theory is considered. The paper closes with various suggestions for further research.
Levins and Lewontin have contributed significantly to our philosophical understanding of the structures, processes, and purposes of biological mathematical theorizing and modeling. Here I explore their separate and joint pleas to avoid making abstract and ideal scientific models ontologically independent by confusing or conflating our scientific models and the world. I differentiate two views of theorizing and modeling, orthodox and dialectical, in order to examine Levins and Lewontin’s, among others, advocacy of the latter view. I compare the positions of these two views with respect to four points regarding ontological assumptions: (1) the origin of ontological assumptions, (2) the relation of such assumptions to the formal models of the same theory, (3) their use in integrating and negotiating different formal models of distinct theories, and (4) their employment in explanatory activity. Dialectical is here used in both its Hegelian–Marxist sense of opposition and tension between alternative positions and in its Platonic sense of dialogue between advocates of distinct theories. I investigate three case studies, from Levins and Lewontin as well as from a recent paper of mine, that show the relevance and power of the dialectical understanding of theorizing and modeling.
This paper aims 1) to introduce the notion of theoretical story as a resource and source of constraint for the construction and assessment of models of phenomena; 2) to show the relevance of this notion for a better understanding of the role and nature of values in scientific activity. The reflection on the role of values and value judgments in scientific activity should be attentive, I will argue, to the distinction between models and the theoretical story that guides and constrains their construction. The aim of scientific activity is to develop understanding of phenomena, and something that serves this aim and contributes to the development of understanding has a cognitive value. Cognitive values are the features that something that plays a role in scientific activity should have so that it can serve its aim. I will focus my attention on the features of the theoretical story and of the models.
By pure calculus of names we mean a quantifier-free theory, based on the classical propositional calculus, which defines predicates known from Aristotle’s syllogistic and Leśniewski’s Ontology. For a large fragment of the theory, decision procedures defined by a combination of simple syntactic operations and models in two-membered domains can be used. We compare the system which employs ‘ε’ as the only specific term with the system enriched with the functors of Syllogistic. In the former, we do not need an empty name in the model, so we are able to construct a 3-valued matrix, while for the latter, for which an empty name is necessary, the respective matrices are 4-valued.
Now that complex agent-based models and computer simulations have spread across economics and the social sciences, as in most sciences of complex systems, epistemological puzzles (re)emerge. We introduce new epistemological tools to show to what precise extent each author is right when he focuses on some empirical, instrumental or conceptual significance of his model or simulation. By distinguishing between models and simulations, between types of models, between types of computer simulations and between types of empiricity, section 2 gives conceptual tools to explain the rationale of the diverse epistemological positions presented in section 1. Finally, we claim that careful attention to the real multiplicity of denotational powers of the symbols at stake, and then to the implicit routes of reference operated by models and computer simulations, is necessary to determine, in each case, the proper epistemic status and credibility of a given model and/or simulation.
In Economics Rules, Dani Rodrik (2015) argues that what makes economics powerful despite the limitations of each and every model is its diversity of models. Rodrik suggests that the diversity of models in economics improves its explanatory capacities, but he does not fully explain how. I offer a clearer picture of how models relate to explanations of particular economic facts or events, and suggest that the diversity of models is a means to better economic explanations.
William James advocated a form of finite theism, motivated by epistemological and moral concerns with scholastic theism and pantheism. In this article, I elaborate James’s case for finite theism and his strategy for dealing with these concerns, which I dub the problems of suffering. I contend that James is at the very least implicitly aware that the problem of suffering is not so much one generic problem but a family of related problems. I argue that one of James’s great contributions to philosophical theism is his advocacy for the view that adequate theistic philosophizing is not so much about cracking this family of problems, but finding a version of the problem to embrace.
Summary: Analogue models are actual physical setups used to model something else. They are especially useful when what we wish to investigate is difficult to observe or experiment upon due to size or distance in space or time: for example, if the thing we wish to investigate is too large, too far away, takes place on a time scale that is too long, does not yet exist or has ceased to exist. The range and variety of analogue models is too extensive to attempt a survey. In this article, I describe and discuss several different analogue model experiments, the results of those model experiments, and the basis for constructing them and interpreting their results. Examples of analogue models for surface waves in lakes, for earthquakes and volcanoes in geophysics, and for black holes in general relativity, are described, with a focus on examining the bases for claims that these analogues are appropriate analogues of what they are used to investigate. A table showing three different kinds of bases for reasoning using analogue models is provided. Finally, it is shown how the examples in this article counter three common misconceptions about the use of analogue models in physics.
Three-dimensional material models of molecules were used throughout the 19th century, either functioning as a mere representation or opening new epistemic horizons. In this paper, two case studies are examined: the 1875 models of van ’t Hoff and the 1890 models of Sachse. What is unique in these two case studies is that both models were not only folded, but were also conceptualized mathematically. When viewed in light of the chemical research of that period, both of these aspects were, considered in their singularity, exceptional; taken together, they may be thought of as a subversion of the way molecules were chemically investigated in the 19th century. Concentrating on this unique shared characteristic in the models of van ’t Hoff and the models of Sachse, this paper deals with the shifts and displacements between their operational methods and existence: between their technical and epistemological aspects and the fact that they were folded, which was forgotten or simply ignored in the subsequent development of chemistry.
In most accounts of the realization of computational processes by physical mechanisms, it is presupposed that there is a one-to-one correspondence between the causally active states of the physical process and the states of the computation. Yet such proposals either stipulate that only one model of computation is implemented, or they do not reflect upon the variety of models that could be implemented physically. In this paper, I claim that mechanistic accounts of computation should allow for a broad variation of models of computation. In particular, some non-standard models should not be excluded a priori. The relationship between mathematical models of computation and mechanistically adequate models is studied in more detail.
Can purely predictive models be useful in investigating causal systems? I argue ‘yes’. Moreover, in many cases not only are they useful, they are essential. The alternative is to stick to models or mechanisms drawn from well-understood theory. But a necessary condition for explanation is empirical success, and in many cases in social and field sciences such success can only be achieved by purely predictive models, not by ones drawn from theory. Alas, the attempt to use theory to achieve explanation or insight without empirical success therefore fails, leaving us with the worst of both worlds – neither prediction nor explanation. Best go with empirical success by any means necessary. I support these methodological claims via case studies of two impressive feats of predictive modelling: opinion polling of political elections, and weather forecasting.
It is plausible to think that, in order to actively employ models in their inquiries, scientists should be aware of their existence. The question is especially puzzling for realists in the case of abstract models, since it is not obvious how this is possible. Interestingly, though, this question has drawn little attention in the relevant literature. Perhaps the most obvious choice for a realist is appealing to intuition. In this paper, I argue that if scientific models were abstract entities, one could not be aware of them intuitively. I develop my argument by building on Chudnoff’s elaboration of intuitive awareness. Furthermore, I briefly discuss some other options to which realists could turn in order to address the question of awareness.
In recent decades, philosophers of science have devoted considerable effort to understanding what models represent. One popular position is that models represent fictional situations. Another position states that, though models often involve fictional elements, they represent real objects or scenarios. Though these two positions may seem to be incompatible, I believe it is possible to reconcile them. Using a threefold distinction between different signs proposed by Peirce, I develop an argument based on a proposal recently made by Kralemann and Lattman (in Synthese 190:3397–3420, 2013) that shows that the two aforementioned positions can be reconciled by distinguishing different ways in which a model representation can be used. In particular, on the basis of Peirce’s distinction between icons, indices and symbols, I argue that models can sometimes function as icons, sometimes as indices and sometimes as symbols, depending on the context in which they are considered and the use for which they are developed, because they all have iconic, indexical and symbolic features. In addition, I show that conceiving models as signs enables us to develop an account of scientific representation that meets the main desiderata that Shech (in Synthese 192:3463–3485, 2015) presents.
In this topical section, we highlight the next step of research on modeling aiming to contribute to the emerging literature that radically refrains from approaching modeling as a scientific endeavor. Modeling surpasses “doing science” because it is frequently incorporated into decision-making processes in politics and management, i.e., areas which are not solely epistemically oriented. We do not refer to the production of models in academia for abstract or imaginary applications in practical fields, but instead highlight the real entwinement of science and policy and the real erosion of their boundaries. Models in decision making – due to their strong entwinement with policy and management – are utilized differently than models in science; they are employed for different purposes and with different constraints. We claim that “being a part of decision-making” implies that models are elements of a very particular situation, in which knowledge about the present and the future is limited but dependence of decisions on the future is distinct. Emphasis on the future indicates that decisions are made about actions that have severe and lasting consequences. In these specific situations, models enable not only the acquisition of knowledge (the primary goal of science) but also enable deciding upon actions that change the course of events. As a result, there are specific ways to construct effective models and justify their results. Although some studies have explored this topic, our understanding of how models contribute to decision making outside of science remains fragmentary. This topical section aims to fill this gap in research and formulate an agenda for additional and more systematic investigations in the field.
This essay aims to provide a self-contained introduction to time in relativistic cosmology that clarifies both how questions about the nature of time should be posed in this setting and the extent to which they have been or can be answered empirically. The first section below recounts the loss of Newtonian absolute time with the advent of special and general relativity, and the partial recovery of absolute time in the form of cosmic time in some cosmological models. Section II considers the beginning and end of time in a broader class of models in which there is not an analog of Newtonian absolute time. As we will see, reasonable physical assumptions imply that the universe is finite to the past, and Section III turns to consideration of the “beginning” itself. We critically review conventional wisdom that a “singularity” reveals flaws in general relativity and briefly assess ways of avoiding the singularity.
A general class of labeled sequent calculi is investigated, and necessary and sufficient conditions are given for when such a calculus is sound and complete for a finite-valued logic if the labels are interpreted as sets of truth values. Furthermore, it is shown that any finite-valued logic can be given an axiomatization by such a labeled calculus using arbitrary "systems of signs," i.e., sets of sets of truth values, as labels. The number of labels needed is logarithmic in the number of truth values, and it is shown that this bound is tight.
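The logarithmic bound in the abstract above can be made concrete with a small sketch (illustrative code, not from the paper; the particular encoding is a hypothetical choice): give each of n truth values a distinct bit-vector of length ⌈log₂ n⌉, and read each bit position as one "sign", i.e., the set of truth values whose bit at that position is 1. The membership patterns across these signs then distinguish all n truth values.

```python
from math import ceil, log2

def signs_for(truth_values):
    """Build ceil(log2(n)) 'signs' (sets of truth values) whose
    membership patterns separate all n truth values.
    Illustrative encoding only, not the paper's construction."""
    n = len(truth_values)
    k = max(1, ceil(log2(n)))
    # Sign i collects the truth values whose i-th bit (by index) is 1.
    return [{v for j, v in enumerate(truth_values) if (j >> i) & 1}
            for i in range(k)]

vals = ["0", "1/3", "2/3", "1"]      # a four-valued logic
signs = signs_for(vals)
# Each value's pattern of memberships across the signs is distinct.
patterns = [tuple(v in s for s in signs) for v in vals]
assert len(set(patterns)) == len(vals)
print(len(signs))  # 2 signs suffice for 4 truth values
```

With four truth values, two signs suffice; since k signs can only separate 2^k values, fewer than ⌈log₂ n⌉ signs never do, which is the sense in which the bound is tight.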
Starting from Hegel's philosophy of nature, this essay develops a critique of economic models and market society, based on Hegel's notion of what it takes for a formally described system to be embodied and real.
In this paper I distinguish various ways in which empirical claims about evolutionary and ecological models can be supported by data. I describe three basic factors bearing on the confirmation of empirical claims: fit of the model to data, independent testing of various aspects of the model, and variety of evidence. A brief description of the kinds of confirmation is followed by examples of each kind, drawn from a range of evolutionary and ecological theories. I conclude that the greater complexity and precision of my approach, as compared to, for instance, a Popperian approach, can facilitate detailed analysis and comparison of empirical claims.
In the last decades a growing body of literature in Artificial Intelligence (AI) and Cognitive Science (CS) has approached the problem of narrative understanding by means of computational systems. Narrative, in fact, is a ubiquitous element in our everyday activity, and the ability to generate and understand stories, and their structures, is a crucial mark of our intelligence. However, despite the fact that, from a historical standpoint, narrative (and narrative structures) have been an important topic of investigation in both these areas, a more comprehensive approach coupling them with narratology, digital humanities and literary studies was still lacking. With the aim of filling this empty space, in the last years a multidisciplinary effort has been made to create an international meeting open to computer scientists, psychologists, digital humanists, linguists, narratologists, etc. This event has been named CMN (for Computational Models of Narrative) and was launched in 2009 by the MIT scholars Mark A. Finlayson and Patrick H. Winston.
We separate metaphysical from epistemic questions in the evaluation of models, taking into account the distinctive functions of models as opposed to theories. The examples are very varied.
In what follows, I will give examples of the sorts of steps that can be taken towards spelling out the intuition that, after all, good models might be true. Along the way, I provide an outline of my account of models as ontologically and pragmatically constrained representations. And I emphasize the importance of examining models as functionally composed systems in which different components play different roles and only some components serve as relevant truth bearers. This disputes the standard approach that proceeds by simply counting true and false elements in models in their entirety and concludes that models are false since they contain so many false elements. I call my alternative the functional decomposition approach.
The aim of this paper is to discuss the “Framework for M&S with Agents” (FMSA) proposed by Zeigler et al. [2000, 2009] in regard to the diverse epistemological aims of agent simulations in the social sciences. We first show that there surely are great similarities, hence that the aim to emulate a universal “automated modeler agent” opens new ways of interaction between these two domains of M&S with agents. For example, it can be shown that the multi-level conception at the core of the FMSA is similar in both contexts: the notions of “levels of system specification”, “behavior of models”, “simulator” and “endomorphic agents” can be partially translated into the terms of the “denotational hierarchy” (DH) recently introduced in a multi-level centered epistemology of M&S. Second, we suggest considering the question of the “credibility” of agent M&S in the social sciences when we do not try to emulate but only to simulate target systems. Whereas a stringent and standardized treatment of the heterogeneous internal relations (in the DH) between systems of formalisms is the key problem and the essential challenge in the scope of agent M&S driven engineering, it is also urgent to address the problem of the external relations (and of the external validity, hence of the epistemic power and credibility) of such levels of formalisms in the specific domains of agent M&S in the social sciences, especially when we intend to introduce the concepts of activity tracking.
Models treating the simple properties of social groups have a common shortcoming. Typically, they focus on the local properties of group members and the features of the world with which group members interact. I consider economic models of bureaucratic corruption, to show that (a) simple properties of groups are often constituted by the properties of the wider population, and (b) even sophisticated models are commonly inadequate to account for many simple social properties. Adequate models and social policies must treat certain factors that are not local to individual members of the group, even if those factors are not causally connected to those individuals. Key Words: individualism • corruption • supervenience • model • cause.
The aim of this paper is to show that every topological space gives rise to a wealth of topological models of the modal logic S4.1. The construction of these models is based on the fact that every space defines a Boolean closure algebra (to be called a McKinsey algebra) that neatly reflects the structure of the modal system S4.1. It is shown that the class of topological models based on McKinsey algebras contains a canonical model that can be used to prove a completeness theorem for S4.1. Further, it is shown that the McKinsey algebra MKX of a space X endowed with an alpha-topology satisfies Esakia's GRZ axiom.
In this paper I present a novel supertask in a Newtonian universe that destroys and creates infinite masses and energies, showing thereby that we can have infinite indeterminism. Previous supertasks have managed only to destroy or create finite masses and energies, thereby giving cases of only finite indeterminism. In the Nothing from Infinity paradox we will see an infinitude of finite masses and an infinitude of energy disappear entirely, and do so despite the conservation of energy in all collisions. I then show how this leads to the Infinity from Nothing paradox, in which we have the spontaneous eruption of infinite mass and energy out of nothing. I conclude by showing how our supertask models at least something of an old conundrum, the question of what happens when the immovable object meets the irresistible force.
Artificial models of cognition serve different purposes, and their use determines the way they should be evaluated. There are also models that do not represent any particular biological agents, and there is controversy as to how they should be assessed. At the same time, modelers do evaluate such models as better or worse. There is also a widespread tendency to call for publicly available standards of replicability and benchmarking for such models. In this paper, I argue that the proper evaluation of models does not depend on whether they target real biological agents; instead, the standards of evaluation depend on the use of models rather than on the reality of their targets. I discuss how models are validated depending on their use and argue that all-encompassing benchmarks for models may be well beyond reach.
“There’s Plenty of Room at the Bottom”, said the title of Richard Feynman’s seminal 1959 lecture at the California Institute of Technology. Fifty years on, nanotechnologies have led computer scientists to pay close attention to the links between physical reality and information processing. Not all the physical requirements of optimal computation are captured by traditional models; one still largely missing is reversibility. The dynamical laws of physics are reversible at the microphysical level, with distinct initial states of a system leading to distinct final states. On the other hand, as von Neumann already conjectured, irreversible information processing is expensive: to erase a single bit of information costs ~3 × 10⁻²¹ joules at room temperature. Information entropy is a thermodynamic cost, to be paid in non-computational energy dissipation. This paper addresses the problem by drawing on Edward Fredkin’s Finite Nature hypothesis: the ultimate nature of the universe is discrete and finite, satisfying the axioms of classical, atomistic mereology. The chosen model is a cellular automaton with reversible dynamics, capable of retaining memory of the information present at the beginning of the universe. Such a CA can implement the Boolean logical operations and the other building blocks of computation: it can develop and host all-purpose computers. The model is a candidate for the realization of computational systems capable of exploiting the resources of the physical world in an efficient way, for they can host logical circuits with negligible internal energy dissipation.
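The ~3 × 10⁻²¹ J figure cited above is the Landauer bound, k_B · T · ln 2, evaluated at room temperature. A quick arithmetic check (illustrative code, not from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_limit(temperature_kelvin):
    """Minimum energy dissipated by erasing one bit: k_B * T * ln 2."""
    return K_B * temperature_kelvin * math.log(2)

energy = landauer_limit(300)  # room temperature, ~300 K
print(f"{energy:.2e} J")      # 2.87e-21 J, i.e. ~3 x 10^-21 J as in the text
```

The bound scales linearly with temperature, which is why proposals for low-dissipation computing pair reversibility with cooling.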
In this article we raise a problem, and we offer two practical contributions to its solution. The problem is that academic communities interested in digital publishing do not have adequate tools to help them in choosing a publishing model that suits their needs. We believe that excessive focus on Open Access (OA) has obscured some important issues; moreover, exclusive emphasis on increasing openness has contributed to an agenda and to policies that show clear practical shortcomings. We believe that academic communities have different needs and priorities; therefore there cannot be a ranking of publishing models that fits all and is based on only one criterion or value. We thus believe that two things are needed. First, communities need help in working out what they want from their digital publications. Their needs and desiderata should be made explicit and their relative importance estimated. This exercise leads to the formulation and ordering of their objectives. Second, available publishing models should be assessed on the basis of these objectives, so as to choose one that satisfies them well. Accordingly we have developed a framework that assists communities in going through these two steps. The framework can be used informally, as a guide to the collection and systematic organization of the information needed to make an informed choice of publishing model. In order to do so it maps the values that should be weighed and the technical features that embed them. Building on our framework, we also offer a method to produce ordinal and cardinal scores of publishing models. When these techniques are applied the framework becomes a formal decision-making tool. Finally, the framework stresses that, while the OA movement tackles important issues in digital publishing, it cannot incorporate the whole range of values and interests that are at the core of academic publishing.
Therefore the framework suggests a broader agenda that is relevant to making better policy decisions around academic publishing and OA.
This article analyzes the value of geometric models for understanding matter, using the examples of the Platonic model of the four primary elements (fire, air, water, and earth) and the models of carbon atomic structures in the new science of crystallography. It is explained how the geometry of these models is built in order to discover the properties of matter: movement and stability for the primary elements, and hardness, softness and elasticity for the carbon atoms. These geometric models appear to have a double quality: first, they exhibit visually the scientific properties of matter, and second, they give us the possibility to visualize its whole nature. Geometric models appear to be the expression of the mind in the understanding of physical matter.
According to certain kinds of semantic dispositionalism, what an agent means by her words is grounded by her dispositions to complete simple tasks. This sort of position is often thought to avoid the finitude problem raised by Kripke against simpler forms of dispositionalism. The traditional objection is that, since words possess indefinite (or infinite) extensions, and our dispositions to use words are only finite, those dispositions prove inadequate to serve as ground for what we mean by our words. I argue that, even if such positions (emphasizing simple tasks) avoid the traditional form of Kripke's charge, they still succumb to special cases of the finitude problem. Furthermore, I show how such positions can be augmented so as to avoid even these special cases. Doing so requires qualifying the dispositions of interest as those possessed by the abstracted version of an actual agent (in contrast to, say, an idealized version of the agent). In addition to avoiding the finitude problem in its various forms, the position that results provides new materials for appreciating the role that abstracting models can play for a dispositionalist theory of meaning.
In most accounts of the realization of computational processes by physical mechanisms, it is presupposed that there is a one-to-one correspondence between the causally active states of the physical process and the states of the computation. Yet such proposals either stipulate that only one model of computation is implemented, or they do not reflect upon the variety of models that could be implemented physically. In this paper, I claim that mechanistic accounts of computation should allow for a broad variation of models of computation. In particular, some non-standard models should not be excluded a priori. The relationship between mathematical models of computation and mechanistically adequate models is studied in more detail.
This paper shows how the classical finite probability theory (with equiprobable outcomes) can be reinterpreted and recast as the quantum probability calculus of a pedagogical or "toy" model of quantum mechanics over sets (QM/sets). There are two parts. The notion of an "event" is reinterpreted from being an epistemological state of indefiniteness to being an objective state of indefiniteness. And the mathematical framework of finite probability theory is recast as the quantum probability calculus for QM/sets. The point is not to clarify finite probability theory but to elucidate quantum mechanics itself by seeing some of its quantum features in a classical setting.
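The classical starting point of this recasting, finite probability with equiprobable outcomes, where events are subsets of the outcome set and conditioning is set intersection, can be sketched as follows (illustrative code, not from the paper):

```python
from fractions import Fraction

def prob(event, outcomes):
    """Classical finite probability with equiprobable outcomes: Pr(A) = |A ∩ U| / |U|."""
    return Fraction(len(event & outcomes), len(outcomes))

def cond_prob(event, given, outcomes):
    """Conditioning as set intersection: Pr(A|B) = |A ∩ B| / |B|."""
    b = given & outcomes
    return Fraction(len(event & b), len(b))

U = {1, 2, 3, 4, 5, 6}           # outcomes of one fair die
evens = {2, 4, 6}
high = {4, 5, 6}
print(prob(evens, U))            # 1/2
print(cond_prob(evens, high, U)) # 2/3
```

The QM/sets recasting reinterprets these subset-events as objective states of indefiniteness, but the underlying calculus on equiprobable outcomes is just this one.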
This paper updates (2017) a previously-presented* model of models, which can be used to clarify discussion and analysis in a variety of disputes and debates, since many such discussions hinge on displaying or implying models about how things are related. Knowing about models does not itself supply any new information about our world, but it might help us to recognize when and how information is being conveyed on these matters, or where possibly it is being obscured. If a claim P is expressed in the context of a certain model, a disagreement about P may actually not be about whether the underlying model is accurate; it could reflect misunderstandings about the model’s assumptions or conventions. Also, in practice, constructed models are rarely fully elaborated in every detail, so misunderstandings can arise from apparent “gaps” in the model. In that case, developing and considering complementary versions of models can sometimes clarify ambiguities. Following the main analysis are several extended case examples illustrating the wide applicability of the models approach for clarifying, if not necessarily “resolving”, numerous debates and issues. For example, a famous “counter-example” by Nelson Goodman to a logical model proposed by Carnap is shown to be not as deal-breaking as historically presumed, if only plausible complementary models had been explored. And models sometimes presented as competing in Artificial Intelligence research, “Declarative” versus “Procedural”, can instead be viewed as good examples of “complementary” models. Also reconsidered in this light are some well-known competing positions, by Russell, Strawson, and Kripke, in the literature regarding names, pointing, definite descriptions, “using” sentences, and so on. All these concepts are locatable within the presently proposed models analysis.
Questions such as where “truth” resides (whether in sentences or in uses of sentences) are not settled in this paper; yet practical questions about how testable claims about the world can be expressed by models are clarified. * Original version presented at a University of Waterloo Philosophy Colloquium, 1984.