It is regrettably common for theorists to attempt to characterize the Humean dictum that one can’t get an ‘ought’ from an ‘is’ just in broadly logical terms. We here address an important new class of such approaches which appeal to model-theoretic machinery. Our complaint about these recent attempts is that they interfere with substantive debates about the nature of the ethical. This problem, developed in detail for Daniel Singer’s and Gillian Russell and Greg Restall’s accounts of Hume’s dictum, is of a general type arising for the use of model-theoretic structures in cashing out substantive philosophical claims: the question of whether an abstract model-theoretic structure successfully interprets something often involves taking a stand on non-trivial issues surrounding the thing. In the particular case of Hume’s dictum, given reasonable conceptual or metaphysical claims about the ethical, Singer’s and Russell and Restall’s accounts treat obviously ethical claims as descriptive and vice versa. Consequently, their model-theoretic characterizations of Hume’s dictum are not metaethically neutral. This encourages skepticism about whether model-theoretic machinery suffices to provide an illuminating distinction between the ethical and the descriptive.
We here make preliminary investigations into the model theory of De Morgan logics. We demonstrate that Łoś's Theorem holds with respect to these logics and make some remarks about standard model-theoretic properties in such contexts. More concretely, as a case study we examine the fate of Cantor's Theorem that the classical theory of dense linear orderings without endpoints is $\aleph_{0}$-categorical, and we show that the taking of ultraproducts commutes with previously established methods of constructing nonclassical structures, namely, Priest's Collapsing Lemma and Dunn's Theorem in 3-Valued Logic.
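As a hedged illustration of the Łoś-style transfer mentioned above — this is not the paper's own construction — the degenerate case of a principal ultrafilter can be sketched in a few lines: a first-order sentence holds in the ultraproduct exactly when the set of factor indices at which it holds belongs to the ultrafilter, so a principal ultrafilter lets its generating factor decide. All names (`structures`, `principal_ultrafilter`, and so on) are illustrative assumptions.

```python
# Toy illustration of a Łoś-style transfer for finite structures with one
# unary relation P, using a *principal* ultrafilter (the degenerate case,
# in which the ultraproduct is decided by a single factor).

structures = [
    ({0, 1, 2}, {0}),        # index 0: P holds of 0 only
    ({0, 1}, {0, 1}),        # index 1: P holds of every element
    ({0, 1, 2, 3}, set()),   # index 2: P holds of nothing
]

def satisfies_forall_P(struct):
    """Does the structure satisfy the sentence 'forall x. P(x)'?"""
    dom, P = struct
    return all(x in P for x in dom)

def principal_ultrafilter(k):
    """Sets of indices are 'large' iff they contain the generator k."""
    return lambda indices: k in indices

def ultraproduct_satisfies(structs, large):
    """Łoś's Theorem, instantiated for one sentence: the ultraproduct
    satisfies 'forall x. P(x)' iff the set of indices whose factor
    satisfies it is large."""
    true_at = {i for i, s in enumerate(structs) if satisfies_forall_P(s)}
    return large(true_at)

print(ultraproduct_satisfies(structures, principal_ultrafilter(1)))  # True
print(ultraproduct_satisfies(structures, principal_ultrafilter(2)))  # False
```

With a non-principal ultrafilter on an infinite index set the same biconditional is the substantive content of Łoś's Theorem; here it reduces to the single generating factor deciding the sentence.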
Models, Theories, and Language. Jan Faye - 2007 - In Filosofia, scienza e bioetica nel dibattito contemporaneo. Rome: Poligrafico e Zecca dello Stato. pp. 823-838.
The semantic view on theories has been much in vogue for over four decades as the successor of the syntactic view. In the present paper, I take issue with this approach by arguing that theories and models must be separated and that a theory should be considered to be a linguistic system consisting of a vocabulary and a set of rules for the use of that vocabulary.
In Rahul Banerjee and Bikas K. Chakrabarti (eds.), Progress in Brain Research, 168: 215-246. Amsterdam: Elsevier. Electronic offprint available upon request.
The primary aim of this paper is to critically evaluate the deductive model of ethical applications, which is based on normative ethical theories like deontology and consequentialism, and to show why a number of models have failed to furnish appropriate resolutions to practical moral problems. I call the deductive model a “Linear Mechanical Model” because its basic assumption is that if a normative theory is sacrosanct, then the case is as it is: the conclusion derived from the case will also be correct, true, and acceptable. However, traditional ethicists applied their ethical theories without knowing which moral theory was effective at the ground level of reality. The study will show readers how ethical theories conflict with each other in the case of euthanasia. More precisely: which ethical theories are to be applied for the resolution of ethical problems, meta-ethical or normative, or both? If normative theories are to be applied, how can the application take place when it is contrary to our experience that, in a situation of moral crisis, no one really applies a theory? My argument is that the linear model has failed because it is rigid, often ignores the agents’ intrinsic values, and leaves no space for amendment, no matter how bizarre the consequence is. Its alternative is the inductive model. For that, the paper will take three moral principles (autonomy, beneficence including non-maleficence, and justice) from Beauchamp & Childress.
This suggests that, to resolve value-laden moral problems, we should take steps such as a) recognising moral issues to start with; b) developing the moral imagination; c) sharpening analytical/critical skills; d) testing out disagreements; e) effecting decisions and behavior; and f) implementation, closure, and process, all of which are of vital importance. In other words, it starts with the free and informed consensus of all interested parties. But this model, too, has failed, because it cannot give a systematic organization to its way of resolution. Here, my argument is that the inductive model provides a resolution of the practical problem but ignores what is ethically obligatory, permissible, or wrong in that situation, and offers no appropriate suggestions in the case of a moral crisis.
What would be an adequate theory of social understanding? In the last decade, the philosophical debate has focused on Theory Theory, Simulation Theory and Interaction Theory as the three possible candidates. In the following, we look carefully at each of these and describe its main advantages and disadvantages. Based on this critical analysis, we formulate the need for a new account of social understanding. We propose the Person Model Theory as an independent new account which has greater explanatory power compared to the existing theories.
I provide a theory of causation within the causal modeling framework. In contrast to most of its predecessors, this theory is model-invariant in the following sense: if the theory says that C caused (didn't cause) E in a causal model, M, then it will continue to say that C caused (didn't cause) E once we've removed an inessential variable from M. I suggest that, if this theory is true, then we should understand a cause as something which transmits deviant or non-inertial behavior to its effect.
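Model-invariance can be sketched schematically — this is a toy counterfactual-dependence test, not the paper's own theory: removing an inessential intermediate variable D from a chain C → D → E should leave the causal verdict about C and E unchanged. The function names below are illustrative assumptions.

```python
# Toy structural-equation models: a full chain C -> D -> E, and a reduced
# model with the intermediate variable D removed. Causation is tested as
# counterfactual dependence: intervening on C changes E.

def model_full(c):
    d = c          # D := C
    e = d          # E := D
    return e

def model_reduced(c):
    return c       # E := C, with the inessential variable D removed

def caused(model, c_actual=1):
    """C caused E iff flipping the value of C flips the value of E."""
    return model(c_actual) != model(1 - c_actual)

# A model-invariant verdict: the answer survives removing D.
print(caused(model_full), caused(model_reduced))  # True True
```

A theory that is not model-invariant would let the verdict flip between the two models, which is the failure the abstract's theory is designed to rule out.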
Three metascientific concepts that have been objects of philosophical analysis are the concepts of law, model and theory. The aim of this article is to present the explication of these concepts, and of their relationships, made within the framework of Sneedean or Metatheoretical Structuralism (Balzer et al. 1987), and of their application to a case from the realm of biology: Population Dynamics. The analysis carried out will make it possible to support, contrary to what some philosophers of science in general and of biology in particular hold, the following claims: a) there are "laws" in biological sciences, b) many of the heterogeneous and different "models" of biology can be accommodated under some "theory", and c) this is exactly what confers great unifying power on biological theories.
In this paper we define intensional models for the classical theory of types, thus arriving at an intensional type logic ITL. Intensional models generalize Henkin's general models and have a natural definition. As a class they do not validate the axiom of Extensionality. We give a cut-free sequent calculus for type theory and show completeness of this calculus with respect to the class of intensional models via a model existence theorem. After this we turn our attention to applications. Firstly, it is argued that, since ITL is truly intensional, it can be used to model ascriptions of propositional attitude without predicting logical omniscience. In order to illustrate this a small fragment of English is defined and provided with an ITL semantics. Secondly, it is shown that ITL models contain certain objects that can be identified with possible worlds. Essential elements of modal logic become available within classical type theory once the axiom of Extensionality is given up.
Experiments in particle physics have hitherto failed to produce any significant evidence for the many explicit models of physics beyond the Standard Model (BSM) that had been proposed over the past decades. As a result, physicists have increasingly turned to model-independent strategies as tools in searching for a wide range of possible BSM effects. In this paper, we describe the Standard Model Effective Field Theory (SM-EFT) and analyse it in the context of the philosophical discussions about models, theories, and (bottom-up) effective field theories. We find that while the SM-EFT is a quantum field theory, assisting experimentalists in searching for deviations from the SM, in its general form it lacks some of the characteristic features of models. Those features only come into play if put in by hand or prompted by empirical evidence for deviations. Employing different philosophical approaches to models, we argue that the case study suggests that we should not adopt an overly permissive view of models, because such a view blurs the lines between the different stages of the SM-EFT research strategies and glosses over particle physicists' motivations for undertaking this bottom-up approach in the first place. Looking at EFTs from the perspective of modelling does not require taking a stance on some specific brand of realism or taking sides in the debate between reduction and emergence into which EFTs have recently been embedded.
Boolean-valued models of set theory were independently introduced by Scott, Solovay and Vopěnka in 1965, offering a natural and rich alternative for describing forcing. The original method was adapted by Takeuti, Titani, Kozawa and Ozawa to lattice-valued models of set theory. After this, Löwe and Tarafder proposed a class of algebras based on a certain kind of implication which satisfy several axioms of ZF. From this class, they found a specific 3-valued model called PS3 which satisfies all the axioms of ZF, and can be expanded with a paraconsistent negation *, thus obtaining a paraconsistent model of ZF. The logic (PS3,*) coincides (up to language) with da Costa and D'Ottaviano's logic J3, a 3-valued paraconsistent logic that has been proposed independently in the literature by several authors and with different motivations, under names such as CluNs, LFI1 and MPT. We propose in this paper a family of algebraic models of ZFC based on LPT0, another linguistic variant of J3 introduced by us in 2016. The semantics of LPT0, as well as of its first-order version QLPT0, is given by twist structures defined over Boolean algebras. From this, it is possible to adapt the standard Boolean-valued models of (classical) ZFC to twist-valued models of an expansion of ZFC obtained by adding a paraconsistent negation. We argue that the implication operator of LPT0 is more suitable for a paraconsistent set theory than the implication of PS3, since it allows for genuinely inconsistent sets w such that [(w = w)] = 1/2. This implication is not a 'reasonable implication' as defined by Löwe and Tarafder, which suggests that 'reasonable implication algebras' are just one way to define a paraconsistent set theory. Our twist-valued models are adapted to provide a class of twist-valued models for (PS3,*), thus generalizing Löwe and Tarafder's result. It is shown that they are in fact models of ZFC (not only of ZF).
Discussing theories at length, including their origin, development, and replacement by other theories, can help students understand both the objective and subjective aspects of the scientific process. Presenting theories in the form of models helps in this undertaking, and the history of science provides a number of suitable models. The paper describes specific examples that have been used in in-service courses for science teachers.
This article argues that common intuitions regarding (a) the specialness of ‘use-novel’ data for confirmation and (b) that this specialness implies the ‘no-double-counting rule’, which says that data used in ‘constructing’ (calibrating) a model cannot also play a role in confirming the model’s predictions, are too crude. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that on the Bayesian account of confirmation, and also on the standard classical hypothesis-testing account, claims (a) and (b) are not generally true; but for some select cases, it is possible to distinguish data used for calibration from use-novel data, where only the latter confirm. The more specialized classical model-selection methods, on the other hand, uphold a nuanced version of claim (a), but this comes apart from (b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised.
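The Bayesian point can be illustrated with a toy computation — our own sketch, not the article's climate case study: Bayes's theorem updates on data wherever they come from, drawing no intrinsic line between calibration data and use-novel data. The hypotheses and numbers below are illustrative assumptions.

```python
# Toy Bayesian computation: two coin-bias hypotheses, H1 (p = 0.8) and
# H2 (p = 0.5), with equal priors; data = 8 heads in 10 flips. Bayes's
# theorem draws no line between 'calibration' and 'use-novel' data: the
# same likelihoods drive the update regardless of how the data were used.

def posterior(prior_h1, likelihood_h1, likelihood_h2):
    p1 = prior_h1 * likelihood_h1
    p2 = (1 - prior_h1) * likelihood_h2
    return p1 / (p1 + p2)

heads, tails = 8, 2

def likelihood(p):
    return p**heads * (1 - p)**tails

post = posterior(0.5, likelihood(0.8), likelihood(0.5))
print(round(post, 3))  # 0.873: the data confirm H1 either way
```

This is what it means for claim (b) not to be generally true on the Bayesian account: nothing in the update rule itself enforces a no-double-counting restriction.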
The purpose of this article is to present several immediate consequences of the introduction of a new constant called Lambda, representing the object "nothing" or "void", into a standard set theory. The use of Lambda will appear natural thanks to its role as a condition of possibility of sets. On a conceptual level, the use of Lambda leads to a legitimation of the empty set and to a redefinition of the notion of set. It also makes clearly apparent the distinction between the empty set, the nothing and the ur-elements. On a technical level, we introduce the notion of pre-element and we suggest a formal definition of the nothing distinct from that of the null-class. Among other results, we get a relative resolution of the anomaly of the intersection of a family free of sets and the possibility of building the empty set from "nothing". The theory is presented with equi-consistency results. On both conceptual and technical levels, the introduction of Lambda leads to a resolution of Russell's puzzle of the null-class.
Because formal systems of symbolic logic inherently express and represent the deductive inference model, formal proofs to theorem consequences can be understood to represent sound deductive inference to deductive conclusions, without any need for other representations.
Schaffner’s model of theory reduction has played an important role in philosophy of science and philosophy of biology. Here, the model is found to be problematic because of an internal tension. Indeed, standard antireductionist external criticisms concerning reduction functions and laws in biology do not provide a full picture of the limits of Schaffner’s model. However, despite the internal tension, his model usefully highlights the importance of regulative ideals associated with the search for derivational, and embedding, deductive relations among mathematical structures in theoretical biology. A reconstructed Schaffnerian model could therefore shed light on mathematical theory development in the biological sciences and on the epistemology of mathematical practices more generally. *Received November 2006; revised March 2009.
Evolutionary anthropologists and archaeologists have been considerably successful in modelling the cumulative evolution of culture, of technological skills and knowledge in particular. Recently, one of these models has been introduced in the philosophy of science by De Cruz and De Smedt (Philos Stud 157:411–429, 2012), in an attempt to demonstrate that scientists may collectively come to hold more truth-approximating beliefs, despite the cognitive biases which they individually are known to be subject to. Here we identify a major shortcoming in that attempt: De Cruz & De Smedt’s mathematical model makes one particularly strong tractability assumption that causes the model to largely miss its target (namely, truth accumulation in science), and that moreover conflicts with empirical observations. The second, more constructive part of the paper presents an alternative, agent-based model, which allows one to much better examine the conditions for scientific progress and decline.
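A minimal agent-based sketch in the spirit described — not De Cruz and De Smedt's model, nor the authors' replacement — can show how biased individual updating may still yield population-level drift toward the truth. All parameters and function names below are illustrative assumptions.

```python
import random

# Schematic agent-based model: agents repeatedly receive noisy evidence
# for a true hypothesis and update with a confirmation bias that damps
# belief-lowering moves. Despite the individual-level bias, the
# population's mean credence drifts toward the truth.

def run(n_agents=50, rounds=100, accuracy=0.7, bias=0.5, seed=0):
    rng = random.Random(seed)
    credences = [0.5] * n_agents       # everyone starts undecided
    for _ in range(rounds):
        for i, c in enumerate(credences):
            evidence_up = rng.random() < accuracy   # evidence tracks truth
            # biased update: downward moves are damped by the bias factor
            step = 0.05 if evidence_up else -0.05 * (1 - bias)
            credences[i] = min(1.0, max(0.0, c + step))
    return sum(credences) / n_agents   # population mean credence

print(run() > 0.5)  # True: the population drifts toward the true hypothesis
```

Varying `accuracy` and `bias` is exactly the kind of condition-probing an agent-based model permits and a single strong tractability assumption forecloses.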
In recent decades, philosophers of science have devoted considerable efforts to understanding what models represent. One popular position is that models represent fictional situations. Another position states that, though models often involve fictional elements, they represent real objects or scenarios. Though these two positions may seem to be incompatible, I believe it is possible to reconcile them. Using a threefold distinction between different signs proposed by Peirce, I develop an argument based on a proposal recently made by Kralemann and Lattman (in Synthese 190:3397–3420, 2013) that shows that the two aforementioned positions can be reconciled by distinguishing different ways in which a model representation can be used. In particular, on the basis of Peirce’s distinction between icons, indices and symbols, I argue that models can sometimes function as icons, sometimes as indices and sometimes as symbols, depending on the context in which they are considered and the use that they are developed for, because they all have iconic, indexical and symbolic features. In addition, I show that conceiving of models as signs enables us to develop an account of scientific representation that meets the main desiderata that Shech (in Synthese 192:3463–3485, 2015) presents.
"Model theory" is generally understood to mean both metamathematics (or formal semantics) and the semantics of models in the non-formal sciences. This article deals with the theory of scientific models that Mario Bunge developed in Method, Models and Matter (1973). I analyze Bunge's theoretical integration of the formal sciences with the experimental or observational sciences, which rests on his philosophy of science. I compare it briefly with Gilles-Gaston Granger's theory of models, with the obvious aim of bringing out their similarities and dissimilarities, but also their common stumbling block: both make use of an unanalyzed concept whose epistemological function is nonetheless crucial and produces the same effects. At the center of Bunge's theory of models lies the concept of simulation, which I will compare with the one in use in computer science, nowadays widely applied to various sciences, social as well as natural. I will conclude with the methodological and metaphysical consequences of Bunge's theory of models.
De Neys (2021) argues that the debate between single- and dual-process theorists of thought has become both empirically intractable and scientifically inconsequential. I argue that this is true only under the traditional framing of the debate—when single- and dual-process theories are understood as claims about whether thought processes share the same defining properties (e.g., making mathematical judgments) or have two different defining properties (e.g., making mathematical judgments autonomously versus via access to a central working memory capacity), respectively. But if single- and dual-process theories are understood in cognitive modeling terms as claims about whether thought processes function to implement one or two broad types of algorithms, respectively, then the debate becomes scientifically consequential and, presumably, empirically tractable. So, I argue, the correct response to the current state of the debate is not to abandon it, as De Neys suggests, but to reframe it as a debate about cognitive models.
A comprehensible model is proposed, aimed at an analysis of the reasons for theory change in science. According to the model, the origins of scientific revolutions lie not in a clash of fundamental theories with facts, but of “old” fundamental theories with each other, leading to contradictions that can only be eliminated in a more general theory. The model is illustrated with reference to physics in the early 20th century, the three “old” theories in this case being Maxwellian electrodynamics, statistical mechanics and thermodynamics. A modern example, referring to the fusion of general relativity and quantum field theory, is highlighted. Key words: Popper, Kuhn, Lakatos, Feyerabend, Stepin, Bransky, Mamchur, mature theory, structure, Einstein, Lorentz, Boltzmann, Planck, Hawking, De Witt.
Currently there are at least four sizeable projects under way to establish the gravitational acceleration of massive antiparticles on earth. While general relativity and modern quantum theories strictly forbid any repulsive gravity, it has not yet been established experimentally that gravity is attraction only. With that in mind, the Elementary Process Theory (EPT) is a rather abstract theory that has been developed from the hypothesis that massive antiparticles are repulsed by the gravitational field of a body of ordinary matter: the EPT essentially describes the elementary processes by which the smallest massive systems have to interact with their environments for repulsive gravity to exist. In this paper we model a nonrelativistic, one-component massive system that evolves in time by the processes described by the EPT in an environment described by classical fields: the main result is a semi-classical model of a process at Planck scale by which a nonrelativistic, one-component system interacts with its environment, such that the interaction has both gravitational and electromagnetic aspects. Some worked-out examples are provided, among which the repulsion of an antineutron by the gravitational field of the earth. The general conclusion is that the semi-classical model of the EPT corresponds to non-relativistic classical mechanics. Further research is aimed at demonstrating that the EPT has a model that reproduces the successful predictions of general relativity.
The purpose of this paper is to show that the Elementary Process Theory (EPT) agrees with the knowledge of the physical world obtained from the successful predictions of Special Relativity (SR). To that end, a recently developed method is applied: a categorical model of the EPT that incorporates SR is fully specified. Ultimate constituents of the universe of the EPT are modeled as point-particles, gamma-rays, or time-like strings, all represented by integrable hyperreal functions on Minkowski space. This proves that the EPT agrees with SR.
A model-theoretic realist account of science places linguistic systems and their corresponding non-linguistic structures at different stages or different levels of abstraction of the scientific process. Apart from the obvious problem of underdetermination of theories by data, philosophers of science are also faced with the inverse (and very real) problem of overdetermination of theories by their empirical models, which is what this article will focus on. I acknowledge the contingency of the factors determining the nature – and choice – of a certain model at a certain time, but in my terms, this is a matter about which we can talk and whose structure we can formalise. In this article a mechanism for tracing "empirical choices" and their particularized observational-theoretical entanglements will be offered in the form of Yoav Shoham's version of non-monotonic logic. Such an analysis of the structure of scientific theories may clarify the motivations underlying choices in favor of certain empirical models (and not others) in a way that shows that "disentangling" theoretical and observation terms is more deeply model-specific than theory-specific. This kind of analysis offers a method for getting an articulable grip on the overdetermination of theories by their models – implied by empirical equivalence – which Kuipers' structuralist analysis of the structure of theories does not offer.
Pettit (2012) presents a model of popular control over government, according to which it consists in the government being subject to those policy-making norms that everyone accepts. In this paper, I provide a formal statement of this interpretation of popular control, which illuminates its relationship to other interpretations of the idea with which it is easily conflated, and which gives rise to a theorem, similar to the famous Gibbard-Satterthwaite theorem. The theorem states that if government policy is subject to popular control, as Pettit interprets it, and policy responds positively to changes in citizens' normative attitudes, then there is a single individual whose normative attitudes unilaterally determine policy. I use the model and theorem as an illustrative example to discuss the role of mathematics in normative political theory.
A practical viewpoint links reality, representation, and language to calculation via the concept of the Turing (1936) machine, the mathematical model of our computers. After the Gödel incompleteness theorems (1931) and the unsolvability of the so-called halting problem (Turing 1936; Church 1936) for a classical Turing machine, one of the simplest hypotheses to suggest is completeness for a pair of machines. That is consistent with the provability of completeness by means of two independent Peano arithmetics, discussed in Section I. Many modifications of Turing machines, including quantum ones, are examined in Section II with respect to the halting problem and completeness, and the model of two independent Turing machines seems to generalize them. That pair can then be postulated as the formal definition of reality, which is therefore complete, unlike either machine standalone, each remaining incomplete without its complementary counterpart. Representation is formally defined as a one-to-one mapping between the two Turing machines, and the set of all those mappings can be considered as "language", therefore including metaphors as mappings different from representation. Section III investigates that formal relation of "reality", "representation", and "language" modeled by (at least two) Turing machines. The independence of (two) Turing machines is interpreted by means of game theory, and especially of the Nash equilibrium, in Section IV. Choice, and information as the quantity of choices, are involved. That approach seems to be equivalent to one based on set theory and the concept of actual infinity in mathematics, while allowing practical implementations.
In this conceptual paper, the traditional conceptualization of sustainable entrepreneurship is challenged because of a fundamental tension between processes involved in sustainable development and processes involved in entrepreneurship: the concept of sustainable business models contains a paradox, because sustainability involves the reduction of information asymmetries, whereas entrepreneurship involves enhanced and secured levels of information asymmetries. We therefore propose a new and integrated theory of sustainable entrepreneurship that overcomes this paradox. The basic argument is that environmental problems have to be conceptualized as wicked problems or sustainability-related ecosystem failures. Because all actors involved in the entrepreneurial process are characterized by their epistemic insufficiency regarding the solving of these problems, the role of information in the sustainable entrepreneurial process changes. On the one hand, the reduction of information asymmetries primarily aims to enable actors to become critical of sustainable entrepreneurs’ actual business models. On the other hand, the epistemic insufficiency of sustainable entrepreneurs guarantees that information asymmetries remain as a source of new sustainable business opportunities. Three further characteristics of sustainable entrepreneurs are distinguished: sustainability and entrepreneurship-related risk-taking; sustainability and entrepreneurship-related self-efficacy; and the development of satisficing and open-ended solutions, together with multiple stakeholders.
A new non-Archimedean approach to interacting quantum fields is presented. In the proposed approach, a field operator φ(x,t) is no longer a standard tempered operator-valued distribution, but a non-classical operator-valued function. Using this novel approach, we prove that the quantum field theory with Hamiltonian P(φ)_4 exists and that the corresponding C^* algebra of bounded observables satisfies all the Haag-Kastler axioms except Lorentz covariance. We prove that the λ(φ^4)_4 quantum field theory model is Lorentz covariant.
A comprehensible model is proposed, aimed at an analysis of the reasons for theory change in science. According to the model, the origins of scientific revolutions lie not in a clash of fundamental theories with facts, but of “old” fundamental theories with each other, leading to contradictions that can only be eliminated in a more general theory. The model is illustrated with reference to physics in the early 20th century, the three “old” theories in this case being Maxwellian electrodynamics, statistical mechanics and thermodynamics. A modern example, referring to general relativity and quantum field theory, is considered. Key words: Popper, Kuhn, Stepin, Einstein.
This paper constructs a model of metaphysical indeterminacy that can accommodate a kind of ‘deep’ worldly indeterminacy that arguably arises in quantum mechanics via the Kochen-Specker theorem, and that is incompatible with prominent theories of metaphysical indeterminacy such as that in Barnes and Williams (2011). We construct a variant of Barnes and Williams's theory that avoids this problem. Our version builds on situation semantics and uses incomplete, local situations rather than possible worlds to build a model. We evaluate the resulting theory and contrast it with similar alternatives, concluding that our model successfully captures deep indeterminacy.
The incompleteness of set theory ZFC leads one to look for natural nonconservative extensions of ZFC in which one can prove statements independent of ZFC that appear to be “true”. One approach has been to add large cardinal axioms. Alternatively, one can investigate second-order expansions like Kelley-Morse class theory KM, or Tarski-Grothendieck set theory TG; the latter is a nonconservative extension of ZFC obtained from other axiomatic set theories by the inclusion of Tarski’s axiom, which implies the existence of inaccessible cardinals. See also the related set theory with a filter quantifier, ZF(aa). In this paper we deal with a set theory NC# ∞#, based on bivalent hyper-infinitary logic with a restricted Modus Ponens rule. Nonconservative extensions of the canonical internal set theories IST and HST are proposed.
The philosophy of mathematics has been accused of paying insufficient attention to mathematical practice: one way to cope with the problem, the one we will follow in this paper on extensive magnitudes, is to combine the 'history of ideas' and the 'philosophy of models' in a logical and epistemological perspective. The history of ideas allows the reconstruction of the theory of extensive magnitudes as a theory of ordered algebraic structures; the philosophy of models allows an investigation into the way epistemology might affect relevant mathematical notions. The article takes two historical examples as a starting point for the investigation of the role of numerical models in the construction of a system of non-Archimedean magnitudes. A brief exposition of the theories developed by Giuseppe Veronese and by Rodolfo Bettazzi at the end of the 19th century will throw new light on the role played by magnitudes and numbers in the development of the concept of a non-Archimedean order. Different ways of introducing non-Archimedean models will be compared and the influence of epistemological models will be evaluated. Particular attention will be devoted to the comparison between the models that oriented Veronese's and Bettazzi's works and the mathematical theories they developed, but also to the analysis of the way epistemological beliefs affected the concepts of continuity and measurement.
I explain why model theory is unsatisfactory as a semantic theory and has drawbacks as a tool for proofs on logic systems. I then motivate and develop an alternative, truth-valuational substitutional approach (TVS), and prove with it the soundness and completeness of the first-order Predicate Calculus with identity and of Modal Propositional Calculus. Modal logic is developed without recourse to possible worlds. Along the way I answer a variety of difficulties that have been raised against TVS and show that, as applied to several central questions, model-theoretic semantics can be considered TVS in disguise. The conclusion is that the truth-valuational substitutional approach is an adequate tool for many of our logic inquiries, conceptually preferable to model-theoretic semantics. Another conclusion is that formal logic is independent of semantics, apart from its use of the notion of truth, but that even with respect to it its assumptions are minimal.
In this paper I propose an account of representation for scientific models based on Kendall Walton's 'make-believe' theory of representation in art. I first set out the problem of scientific representation and respond to a recent argument due to Craig Callender and Jonathan Cohen, which aims to show that the problem may be easily dismissed. I then introduce my account of models as props in games of make-believe and show how it offers a solution to the problem. Finally, I demonstrate an important advantage my account has over other theories of scientific representation. All existing theories analyse scientific representation in terms of relations, such as similarity or denotation. By contrast, my account does not take representation in modelling to be essentially relational. For this reason, it can accommodate a group of models often ignored in discussions of scientific representation, namely models which are representational but which represent no actual object.
The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency at which we must aim in developing artificial agents comes into focus. This chapter explores these issues, and from its results details a novel approach to meeting the given conditions in a simple architecture of information processing.
This paper aims 1) to introduce the notion of theoretical story as a resource and source of constraint for the construction and assessment of models of phenomena; 2) to show the relevance of this notion for a better understanding of the role and nature of values in scientific activity. The reflection on the role of values and value judgments in scientific activity should be attentive, I will argue, to the distinction between models and the theoretical story that guides and constrains their construction. The aim of scientific activity is to develop understanding of phenomena, and something that serves this aim and contributes to the development of understanding has a cognitive value. Cognitive values are the features that something that plays a role in scientific activity should have so that it can serve its aim. I will focus my attention on the features of the theoretical story and of the models.
Causal models provide a framework for making counterfactual predictions, making them useful for evaluating the truth conditions of counterfactual sentences. However, current causal models for counterfactual semantics face limitations compared to the alternative similarity-based approach: they only apply to a limited subset of counterfactuals and the connection to counterfactual logic is not straightforward. This paper argues that these limitations arise from the theory of interventions where intervening on variables requires changing structural equations rather than the values of variables. Using an alternative theory of exogenous interventions, this paper extends the causal approach to counterfactuals to handle more complex counterfactuals, including backtracking counterfactuals and those with logically complex antecedents. The theory also validates familiar principles of counterfactual logic and offers an explanation for counterfactual disagreement and backtracking readings of forward counterfactuals.
One striking feature of contemporary modelling practice is its interdisciplinary nature. The same equation forms, and mathematical and computational methods, are used across different disciplines, as well as within the same discipline. Are there, then, differences between intra- and interdisciplinary transfer, and can the comparison between the two provide more insight into the challenges of interdisciplinary theoretical work? We will study the development and various uses of the Ising model within physics, contrasting them to its applications to socio-economic systems. While the renormalization group methods justify the transfer of the Ising model within physics – by ascribing them to the same universality class – its application to socio-economic phenomena has no such theoretical grounding. As a result, the insights gained by modelling socio-economic phenomena by the Ising model may remain limited.
Models, Information and Meaning. Marc Artiga - 2020 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 82:101284.
There has recently been an explosion of formal models of signalling, which have been developed to learn about different aspects of meaning. This paper discusses whether that success can also be used to provide an original naturalistic theory of meaning in terms of information or some related notion. In particular, it argues that, although these models can teach us a lot about different aspects of content, at the moment they fail to support the idea that meaning just is some kind of information. As an alternative, I suggest a more modest approach to the relationship between informational notions used in models and semantic properties in the natural world.
A platitude that took hold with Kuhn is that there can be several equally good ways of balancing theoretical virtues for theory choice. Okasha recently modelled theory choice using technical apparatus from the domain of social choice: famously, Arrow showed that no method of social choice can jointly satisfy four desiderata, and each of the desiderata in social choice has an analogue in theory choice. Okasha suggested that one can avoid the Arrow analogue for theory choice by employing a strategy used by Sen in social choice, namely, to enhance the information made available to the choice algorithms. I argue here that, despite Okasha's claims to the contrary, the information-enhancing strategy is not compelling in the domain of theory choice.
In this paper I apply the concept of _inter-Model Inconsistency in Set Theory_ (MIST), introduced by Carolin Antos (this volume), to select positions in the current universe-multiverse debate in philosophy of set theory: I reinterpret H. Woodin's _Ultimate L_, J. D. Hamkins' multiverse, S.-D. Friedman's hyperuniverse and the algebraic multiverse as normative strategies to deal with the situation of de facto inconsistency toleration in set theory as described by MIST. In particular, my aim is to situate these positions on the spectrum from inconsistency avoidance to inconsistency toleration. By doing so, I connect a debate in philosophy of set theory with a debate in philosophy of science about the role of inconsistencies in the natural sciences. While there are important differences, like the lack of threatening explosive inferences, I show how specific philosophical positions in the philosophy of set theory can be interpreted as reactions to a state of inconsistency similar to analogous reactions studied in the philosophy of science literature. My hope is that this transfer operation from philosophy of science to mathematics sheds a new light on the current discussion in philosophy of set theory; and that it can help to bring philosophy of mathematics and philosophy of science closer together.
Kripke models, interpreted realistically, have difficulty making sense of the thesis that there might have existed things that do not in fact exist, since a Kripke model in which this thesis is true requires a model structure in which there are possible worlds with domains that contain things that do not exist. This paper argues that we can use Kripke models as representational devices that allow us to give a realistic interpretation of a modal language. The method of doing this is sketched, with the help of an analogy with a Galilean relativist theory of spatial properties and relations.
I argue that normative formal epistemology (NFE) is best understood as modelling, in the sense that this is the reconstruction of its methodology on which NFE is doing best. I focus on Bayesianism and show that it has the characteristics of modelling. But modelling is a scientific enterprise, while NFE is normative. I thus develop an account of normative models on which they are idealised representations put to normative purposes. Normative assumptions, such as the transitivity of comparative credence, are characterised as modelling idealisations motivated normatively. I then survey the landscape of methodological options: what might formal epistemologists be up to? I argue the choice is essentially binary: modelling or theorising. If NFE is theorising it is doing very poorly: generating false claims with no clear methodology for separating out what is to be taken seriously. Modelling, by contrast, is a successful methodology precisely suited to the management of useful falsehoods. Regarding NFE as modelling is not costless, however. First, our normative inferences are less direct and are muddied by the presence of descriptive idealisations. Second, our models are purpose-specific and limited in their scope. I close with suggestions for how to adapt our practice.
Cognitive agents, whether human or computer, that engage in natural-language discourse and that have beliefs about the beliefs of other cognitive agents must be able to represent objects the way they believe them to be and the way they believe others believe them to be. They must be able to represent other cognitive agents both as objects of beliefs and as agents of beliefs. They must be able to represent their own beliefs, and they must be able to represent beliefs as objects of beliefs. These requirements raise questions about the number of tokens of the belief representation language needed to represent believers and propositions in their normal roles and in their roles as objects of beliefs. In this paper, we explicate the relations among nodes, mental tokens, concepts, actual objects, concepts in the belief spaces of an agent and the agent's model of other agents, concepts of other cognitive agents, and propositions. We extend, deepen, and clarify our theory of intensional knowledge representation for natural-language processing, as presented in previous papers and in light of objections raised by others. The essential claim is that tokens in a knowledge-representation system represent only intensions and not extensions. We are pursuing this investigation by building CASSIE, a computer model of a cognitive agent and, to the extent she works, a cognitive agent herself. CASSIE's mind is implemented in the SNePS knowledge-representation and reasoning system.
According to standard rational choice theory, as commonly used in political science and economics, an agent's fundamental preferences are exogenously fixed, and any preference change over decision options is due to Bayesian information learning. Although elegant and parsimonious, such a model fails to account for preference change driven by experiences or psychological changes distinct from information learning. We develop a model of non-informational preference change. Alternatives are modelled as points in some multidimensional space, only some of whose dimensions play a role in shaping the agent's preferences. Any change in these "motivationally salient" dimensions can change the agent's preferences. How it does so is described by a new representation theorem. Our model not only captures a wide range of frequently observed phenomena, but also generalizes some standard representations of preferences in political science and economics.
In this paper I investigate Putnam’s model-theoretic argument from a transcendent standpoint, in spite of Putnam’s well-known objections to such a standpoint. This transcendence, however, requires ascent to something more like a Tarskian meta-level than what Putnam regards as a “God’s eye view”. Still, it is methodologically quite powerful, leading to a significant increase in our investigative tools. The result is a shift from Putnam’s skeptical conclusion to a new understanding of realism, truth, correspondence, knowledge, and theories, or certain aspects thereof, based on, among other things, a better understanding of what models are designed (and not designed) to do.
The book answers long-standing questions on scientific modeling and inference across multiple perspectives and disciplines, including logic, mathematics, physics and medicine. The different chapters cover a variety of issues, such as the role models play in scientific practice; the way science shapes our concept of models; ways of modeling the pursuit of scientific knowledge; the relationship between our concept of models and our concept of science. The book also discusses models and scientific explanations; models in the semantic view of theories; the applicability of mathematical models to the real world and their effectiveness; the links between models and inferences; and models as a means for acquiring new knowledge. It analyzes different examples of models in physics, biology, mathematics and engineering. Written for researchers and graduate students, it provides a cross-disciplinary reference guide to the notion and the use of models and inferences in science.
Twenty-first century science faces a dilemma. Two of its well-verified foundation stones - relativity and quantum theory - have proven mutually inconsistent. Resolution of the conflict has resisted improvements in experimental precision, leaving some to believe that some fundamental understanding in our world-view may need modification or even radical reform. Employment of the wave-front model of electrodynamics, as a propagation process with a Markov property, may offer just such a clarification.
Autonomist accounts of cognitive science suggest that cognitive model building and theory construction (can or should) proceed independently of findings in neuroscience. Common functionalist justifications of autonomy rely on there being relatively few constraints between neural structure and cognitive function (e.g., Weiskopf, 2011). In contrast, an integrative mechanistic perspective stresses the mutual constraining of structure and function (e.g., Piccinini & Craver, 2011; Povich, 2015). In this paper, I show how model-based cognitive neuroscience (MBCN) epitomizes the integrative mechanistic perspective and concentrates the most revolutionary elements of the cognitive neuroscience revolution (Boone & Piccinini, 2016). I also show how the prominent subset account of functional realization supports the integrative mechanistic perspective I take on MBCN and use it to clarify the intralevel and interlevel components of integration.