Experimental modeling in biology involves the use of living organisms (not necessarily so-called "model organisms") in order to model or simulate biological processes. I argue here that experimental modeling is a bona fide form of scientific modeling that plays an epistemic role distinct from that of ordinary biological experiments. What distinguishes experimental models from ordinary experiments is that they use what I call "in vivo representations," where one kind of causal process is used to stand in for a physically different kind of process. I discuss the advantages of this approach in the context of evolutionary biology.
The fate of optimality modeling is typically linked to that of adaptationism: the two are thought to stand or fall together (Gould and Lewontin, Proc R Soc Lond B 205:581–598, 1979; Orzack and Sober, Am Nat 143(3):361–380, 1994). I argue here that this is mistaken. The debate over adaptationism has tended to focus on one particular use of optimality models, which I refer to here as their strong use. The strong use of an optimality model involves the claim that selection is the only important influence on the evolutionary outcome in question and is thus linked to adaptationism. However, biologists seldom intend this strong use of optimality models. One common alternative, which I term the weak use, simply involves the claim that an optimality model accurately represents the role of selection in bringing about the outcome. This and other weaker uses of optimality models insulate the optimality approach from criticisms of adaptationism, and they account for the prominence of optimality modeling (broadly construed) in population biology. The centrality of these uses of optimality models ensures a continuing role for the optimality approach, regardless of the fate of adaptationism.
This paper applies Causal Modeling Semantics (CMS; e.g., Galles and Pearl 1998; Pearl 2000; Halpern 2000) to the evaluation of the probability of counterfactuals with disjunctive antecedents. Standard CMS is limited to evaluating (the probability of) counterfactuals whose antecedent is a conjunction of atomic formulas. We extend this framework to disjunctive antecedents, and more generally to any Boolean combination of atomic formulas. Our main idea is to assign a probability to a counterfactual (A ∨ B) > C at a causal model M by looking at the probability of C in those submodels that truthmake A ∨ B (Briggs 2012; Fine 2016, 2017). The probability of (A ∨ B) > C is then calculated as the average of the probability of C in the truthmaking submodels, weighted by the inverse distance to the original model M. The latter is calculated on the basis of a proposal by Eva et al. (2019). Apart from solving a major problem in the research on counterfactuals, our paper shows how work in semantics, causal inference and formal epistemology can be fruitfully combined.
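Spelled out, the weighted average this abstract describes takes the following form; the notation (truthmaking submodels M_1, …, M_n and the distance function d) is a reconstruction from the abstract's own description, not the authors' formulation:

```latex
% Truthmaking submodels of M for A \lor B: M_1, \dots, M_n
% d(M, M_i): distance of submodel M_i from the original model M (per Eva et al. 2019)
P\big((A \lor B) > C\big) \;=\; \sum_{i=1}^{n} w_i \, P_{M_i}(C),
\qquad
w_i \;=\; \frac{1 / d(M, M_i)}{\sum_{j=1}^{n} 1 / d(M, M_j)}
```

Normalizing the inverse distances makes the weights sum to one, so nearer submodels count for more while the result remains a probability.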
Many in philosophy understand truth in terms of precise semantic values, true propositions. Following Braun and Sider, I say that in this sense almost nothing we say is, literally, true. I take the stand that this account of truth nonetheless constitutes a vitally useful idealization in understanding many features of the structure of language. The Fregean problem discussed by Braun and Sider concerns issues about application of language to the world. In understanding these issues I propose an alternative modeling tool summarized in the idea that inaccuracy of statements can be accommodated by their imprecision. This yields a pragmatist account of truth, but one not subject to the usual counterexamples. The account can also be viewed as an elaborated error theory. The paper addresses some prima facie objections and concludes with implications for how we address certain problems in philosophy.
The optimality approach to modeling natural selection has been criticized by many biologists and philosophers of biology. For instance, Lewontin (1979) argues that the optimality approach is a shortcut that will be replaced by models incorporating genetic information, if and when such models become available. In contrast, I think that optimality models have a permanent role in evolutionary study. I base my argument for this claim on what I think it takes to best explain an event. In certain contexts, optimality and game-theoretic models best explain some central types of evolutionary phenomena.
I consider the application of possibility semantics to the modeling of the indeterminacy of the future. I argue that interesting problems arise in connection with the addition of an object-language determinacy operator. I show that adding a two-dimensional layer to possibility semantics can help solve these problems.
Intellectualists about knowledge how argue that knowing how to do something is knowing the content of a proposition (i.e., a fact). An important component of this view is the idea that propositional knowledge is translated into behavior when it is presented to the mind in a peculiarly practical way. Until recently, however, intellectualists have not said much about what it means for propositional knowledge to be entertained under thought's practical guise. Carlotta Pavese fills this gap in the intellectualist view by modeling practical modes of thought after Fregean senses. In this paper, I take up her model and the presuppositions it is built upon, arguing that her view of practical thought is not positioned to account for much of what human agents are able to do.
Experimental activity is traditionally identified with testing the empirical implications or numerical simulations of models against data. In critical reaction to the ‘tribunal view’ on experiments, this essay will show the constructive contribution of experimental activity to the processes of modeling and simulating. Based on the analysis of a case in fluid mechanics, it will focus specifically on two aspects. The first is the controversial specification of the conditions in which the data are to be obtained. The second is conceptual clarification, with a redefinition of concepts central to the understanding of the phenomenon and the conditions of its occurrence.
Since the sixties, computational modeling has become increasingly important in both the physical and the social sciences, particularly in physics, theoretical biology, sociology, and economics. Since the eighties, philosophers too have begun to apply computational modeling to questions in logic, epistemology, philosophy of science, philosophy of mind, philosophy of language, philosophy of biology, ethics, and social and political philosophy. This chapter analyzes a selection of interesting examples in some of those areas.
Research in experimental philosophy has increasingly been turning to corpus methods to produce evidence for empirical claims, as they open up new possibilities for testing linguistic claims or studying concepts across time and cultures. The present article reviews the quasi-experimental studies that have been done using textual data from corpora in philosophy, with an eye for the modeling and experimental design that enable statistical inference. I find that most studies forego comparisons that could control for confounds, and that only a little less than half employ statistical testing methods to control for chance results. Furthermore, at least some researchers make modeling decisions that either do not take into account the nature of corpora and of the word-concept relationship, or undermine the experiment's capacity to answer research questions. I suggest that corpus methods could both provide more powerful evidence and gain more mainstream acceptance by improving their modeling practices.
Gender is both indeterminate and multifaceted: many individuals do not fit neatly into accepted gender categories, and a vast number of characteristics are relevant to determining a person's gender. This article demonstrates how these two features, taken together, enable gender to be modeled as a multidimensional sorites paradox. After discussing the diverse terminology used to describe gender, I extend Helen Daly's research into sex classifications in the Olympics and show how varying testosterone levels can be represented using a sorites argument. The most appropriate way of addressing the paradox that results, I propose, is to employ fuzzy logic. I then move beyond physiological characteristics and consider how gender portrayals in reality television shows align with Judith Butler's notion of performativity, thereby revealing gender to be composed of numerous criteria. Following this, I explore how various elements of gender can each be modeled as individual sorites paradoxes such that the overall concept forms a multidimensional paradox. Resolving this dilemma through fuzzy logic provides a novel framework for interpreting gender membership.
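For readers unfamiliar with the fuzzy-logic treatment of sorites arguments that this abstract invokes, here is a minimal sketch of graded membership; every criterion, threshold, and aggregation rule below is a hypothetical illustration, not drawn from the paper:

```python
# A minimal sketch of fuzzy membership: degrees in [0, 1] instead of sharp
# category boundaries. All numbers and the averaging rule are hypothetical.

def ramp(x, low, high):
    """Membership rising linearly from 0 (at `low`) to 1 (at `high`)."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def fuzzy_membership(criteria):
    """Aggregate several graded criteria (each in [0, 1]) into one degree.

    Averaging is one choice among many; fuzzy logics also use min (a t-norm)
    or weighted combinations, which is where multidimensionality enters.
    """
    return sum(criteria) / len(criteria)

# A graded physiological measurement plus two other graded criteria:
degree = fuzzy_membership([ramp(7.2, 2.0, 10.0), 0.8, 0.4])
print(round(degree, 2))
```

The point of the ramp is that no single small step flips membership from 0 to 1, which is how fuzzy logic dissolves a sorites; combining several such graded criteria yields the multidimensional version the abstract describes.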
Naturalistic theories of representation seek to specify the conditions that must be met for an entity to represent another entity. Although these approaches have been relatively successful in certain areas, such as communication theory or genetics, many doubt that they can be employed to naturalize complex cognitive representations. In this essay I identify some of the difficulties for developing a teleosemantic theory of cognitive representations and provide a strategy for accommodating them: to look into models of signaling in evolutionary game theory. I show how these models can be used to formulate teleosemantics and expand it in new directions.
In recent years, the emergence of a new trend in contemporary philosophy has been observed in the increasing usage of empirical research methods to conduct philosophical inquiries. Although philosophers primarily use secondary data from other disciplines or apply quantitative methods (experiments, surveys, etc.), the rise of qualitative methods (e.g., in-depth interviews, participant observations and qualitative text analysis) can also be observed. In this paper, I focus on how qualitative research methods can be applied within philosophy of science, namely within the philosophical debate on modeling. Specifically, I review my empirical investigations into the issues of model de-idealization, model justification and performativity.
We apply spatialized game theory and multi-agent computational modeling as philosophical tools: (1) for assessing the primary social psychological hypothesis regarding prejudice reduction, and (2) for pursuing a deeper understanding of the basic mechanisms of prejudice reduction.
In the last few decades the role played by models and modeling activities has become a central topic in the scientific enterprise. In particular, it has been highlighted both that the development of models constitutes a crucial step for understanding the world and that the developed models operate as mediators between theories and the world. This perspective is exploited here to address the issue of whether error-based and uncertainty-based modeling of measurement are incompatible, and thus alternatives to one another, as sometimes claimed nowadays. The crucial problem is whether assuming this standpoint implies definitively renouncing any role for truth and the related concepts, particularly accuracy, in measurement. It is argued here that the well-known objections against true values in measurement, which would lead to rejecting the concept of accuracy as non-operational, or to maintaining it as only qualitative, derive from an unclear distinction between three distinct processes: the metrological characterization of measuring systems, their calibration, and finally measurement. Under the hypotheses that (1) the concept of true value is related to the model of a measurement process, (2) the concept of uncertainty is related to the connection between such a model and the world, and (3) accuracy is a property of measuring systems (and not of measurement results) and uncertainty is a property of measurement results (and not of measuring systems), not only the compatibility but actually the conjoint need of error-based and uncertainty-based modeling emerges.
What structure of scientific communication and cooperation, between what kinds of investigators, is best positioned to lead us to the truth? Against an outline of standard philosophical characteristics and a recent turn to social epistemology, this paper surveys highlights within two strands of computational philosophy of science that attempt to work toward an answer to this question. Both strands emerge from abstract rational choice theory and the analytic tradition in philosophy of science rather than postmodern sociology of science. The first strand of computational research models the effect of communicative networks within groups, with conclusions regarding the potential benefit of limited communication. The second strand models the potential benefits of cognitive diversity within groups. Examples from each strand of research are used in analyzing what makes modeling of this sort both promising and distinctly philosophical, but are also used to emphasize possibilities for failure and inherent limitations as well.
Real-world economies are open-ended dynamic systems consisting of heterogeneous interacting participants. Human participants are decision-makers who strategically take into account the past actions and potential future actions of other participants. All participants are forced to be locally constructive, meaning their actions at any given time must be based on their local states; and participant actions at any given time affect future local states. Taken together, these essential properties imply that real-world economies are locally-constructive sequential games. This paper discusses a modeling approach, Agent-based Computational Economics (ACE), that permits researchers to study economic systems from this point of view. ACE modeling principles and objectives are first concisely presented and explained. The remainder of the paper then highlights challenging issues and edgier explorations that ACE researchers are currently pursuing.
Recently, Bechtel and Abrahamsen have argued that mathematical models study the dynamics of mechanisms by recomposing the components and their operations into an appropriately organized system. We will study this claim through the practice of combinational modeling in circadian clock research. In combinational modeling, experiments on model organisms and mathematical/computational models are combined with a new type of model—a synthetic model. We argue that the strategy of recomposition is more complicated than what Bechtel and Abrahamsen indicate. Moreover, synthetic modeling as a kind of material recomposition strategy also points beyond the mechanistic paradigm.
Modeling and simulation clearly have an upside. My discussion here will deal with the inevitable downside of modeling — the sort of things that can go wrong. It will set out a taxonomy for the pathology of models — a catalogue of the various ways in which model contrivance can go awry. In the course of that discussion, I also call on some of my past experience with models and their vulnerabilities.
The topics of modeling and information come together in at least two ways. Computational modeling and simulation play an increasingly important role in science, across disciplines from mathematics through physics to economics and political science. The philosophical questions at issue are questions as to what modeling and simulation are adding, altering, or amplifying in terms of scientific information. What changes with regard to information acquisition, theoretical development, or empirical confirmation with contemporary tools of computational modeling? In this sense the title of this article is read in the following way: What kind of information is modeling information? What kind of information does modeling give us?
This paper brings together Thompson's naive action explanation with interventionist modeling of causal structure to show how they work together to produce causal models that go beyond current modeling capabilities, when applied to specifically selected systems. By deploying well-justified assumptions about rationalization, we can strengthen existing causal modeling techniques' inferential power in cases where we take ourselves to be modeling causal systems that also involve actions. The internal connection between means and end exhibited in naive action explanation has a modal strength like that of distinctively mathematical explanation, rather than that of causal explanation. Because it is stronger than causation, it can be treated as if it were merely causal in a causal model without thereby overextending the justification it can provide for inferences. This chapter introduces and demonstrates the use of the Rationalization condition in causal modeling, where it is apt for the system(s) being modeled, and provides the basics for incorporating R variables into systems of variables and R arrows into DAGs. Use of the Rationalization condition supplements causal analysis with action analysis where it is apt.
Reflexive observations and observations of reflexivity: such agendas are by now standard practice in anthropology. Dynamic feedback loops between self and other, cause and effect, represented and representamen may no longer seem surprising; but, in spite of our enhanced awareness, little deliberate attention is devoted to modeling or grounding such phenomena. Attending to both linguistic and extra-linguistic modalities of chiasmus (the X figure), a group of anthropologists has recently embraced this challenge. Applied to contemporary problems in linguistic anthropology, chiasmus functions to highlight and enhance relationships of interdependence or symbiosis between contraries, including anthropology’s four fields, the nature of human being and facets of being human.
Unlike any other field, the science of morality has drawn attention from an extraordinarily diverse set of disciplines. An interdisciplinary research program has formed in which economists, biologists, neuroscientists, psychologists, and even philosophers have been eager to provide answers to puzzling questions raised by the existence of human morality. Models and simulations have, for a variety of reasons, played various important roles in this endeavor. Their use, however, has sometimes been deemed useless, trivial and inadequate. The role of models in the science of morality has been vastly underappreciated. This omission shall be remedied here, offering a much more positive picture of the contributions modelers have made to our understanding of morality.
Conscious experiences are characterized by mental qualities, such as those involved in seeing red, feeling pain, or smelling cinnamon. The standard framework for modeling mental qualities represents them via points in geometrical spaces, where distances between points inversely correspond to degrees of phenomenal similarity. This paper argues that the standard framework is structurally inadequate and develops a new framework that is more powerful and flexible. The core problem for the standard framework is that it cannot capture precision structure: for example, consider the phenomenal contrast between seeing an object as crimson in foveal vision versus merely as red in peripheral vision. The solution I favor is to model mental qualities using regions, rather than points. I explain how this seemingly simple formal innovation not only provides a natural way of modeling precision, but also yields a variety of further theoretical fruits: it enables us to formulate novel hypotheses about the space and structures of mental qualities, formally differentiate two dimensions of phenomenal similarity, generate a quantitative model of the phenomenal sorites, and define a measure of discriminatory grain. A noteworthy consequence is that the structure of the mental qualities of conscious experiences is fundamentally different from the structure of the perceptible qualities of external objects.
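A toy rendering of the region-based proposal may help; the one-dimensional quality space, the interval representation, and both measures below are my own illustrations of the abstract's idea, not the paper's formalism:

```python
# A minimal sketch: a mental quality as a region in quality space, with the
# region's size modeling precision (foveal crimson = narrow region,
# peripheral red = wide region). All numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Quality:
    lo: float  # region bounds along one quality dimension (e.g., hue)
    hi: float

    @property
    def precision(self):
        """Narrower regions model more precise experiences."""
        return 1.0 / (self.hi - self.lo)

def overlap(a: Quality, b: Quality) -> float:
    """Shared extent of two regions: one crude stand-in for phenomenal similarity."""
    return max(0.0, min(a.hi, b.hi) - max(a.lo, b.lo))

foveal_crimson = Quality(0.96, 0.99)   # narrow region: high precision
peripheral_red = Quality(0.85, 1.00)   # wide region: low precision
print(foveal_crimson.precision > peripheral_red.precision)  # True
print(round(overlap(foveal_crimson, peripheral_red), 2))    # 0.03
```

On a point-based model both experiences would collapse to single hue values, losing exactly the precision contrast the two regions make explicit.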
Philosophy can shed light on mathematical modeling and the juxtaposition of modeling and empirical data. This paper explores three philosophical traditions of the structure of scientific theory—Syntactic, Semantic, and Pragmatic—to show that each illuminates mathematical modeling. The Pragmatic View identifies four critical functions of mathematical modeling: (1) unification of both models and data, (2) model fitting to data, (3) mechanism identification accounting for observation, and (4) prediction of future observations. Such facets are explored using a recent exchange between two groups of mathematical modelers in plant biology. Scientific debate can arise from different modeling philosophies.
Across various fields it is argued that the self in part consists of an autobiographical self-narrative and that the self-narrative has an impact on agential behavior. Similarly, within action theory, it is claimed that the intentional structure of coherent long-term action is divided into a hierarchy of distal, proximal, and motor intentions. However, the concrete mechanisms for how narratives and distal intentions are generated and impact action are rarely fleshed out concretely. We here demonstrate how narratives and distal intentions can be generated within cognitive agents and how they can impact agential behavior over long time scales. We integrate narratives and distal intentions into the LIDA model, and demonstrate how they can guide agential action in a manner that is consistent with the Global Workspace Theory of consciousness. This paper serves both as an addition to the LIDA cognitive architecture and an elucidation of how narratives and distal intentions emerge and play their role in cognition and action.
A background assumption of this paper is that the repertoire of inference schemes available to humanity is not fixed, but subject to change as new schemes are invented or refined and as old ones are obsolesced or abandoned. This is particularly visible in areas like the health and environmental sciences, where enormous societal investment has been made in finding ways to reach more dependable conclusions. Computational modeling of argumentation, at least for the discourse in expert fields, will require the possibility of modeling change in a stock of schemes that may be applied to generate conclusions from data. We examine the Randomized Clinical Trial, an inference scheme established within medical science in the mid-20th century, and show that its successful defense by means of practical reasoning allowed for its black-boxing as an inference scheme that generates (and warrants belief in) conclusions about the effects of medical treatments. Modeling the use of a scheme is well understood; here we focus on modeling how the scheme comes to be established so that it is available for use.
One recent priority of the U.S. government is developing autonomous robotic systems. The U.S. Army has funded research to design a metric of evil to support military commanders with ethical decision-making and, in the future, allow robotic military systems to make autonomous ethical judgments. We use this particular project as a case study for efforts that seek to frame morality in quantitative terms. We report preliminary results from this research, describing the assumptions and limitations of a program that assesses the relative evil of two courses of action. We compare this program to other attempts to simulate ethical decision-making, assess possibilities for overcoming the trade-off between input simplification and output reliability, and discuss the responsibilities of users and designers in implementing such programs. We conclude by discussing the implications that this project highlights for the successes and challenges of developing automated mechanisms for ethical decision making.
Behavior oftentimes allows for many possible interpretations in terms of mental states, such as goals, beliefs, desires, and intentions. Reasoning about the relation between behavior and mental states is therefore considered to be an effortful process. We argue that people use simple strategies to deal with high cognitive demands of mental state inference. To test this hypothesis, we developed a computational cognitive model, which was able to simulate previous empirical findings: In two-player games, people apply simple strategies at first. They only start revising their strategies when these do not pay off. The model could simulate these findings by recursively attributing its own problem solving skills to the other player, thus increasing the complexity of its own inferences. The model was validated by means of a comparison with findings from a developmental study in which the children demonstrated similar strategic developments.
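The recursive strategy-attribution the abstract describes can be illustrated with a level-k toy model; the game, the strategies, and the recursion below are my construction, not the authors' cognitive model:

```python
# A toy sketch of recursive strategy attribution: an agent starts with a
# simple zero-order strategy and, when that stops paying off, models the
# opponent as a copy of itself one level down and best-responds to it.

def best_response(opponent_move):
    """In a toy matching-pennies-like game: pick the move that beats the opponent's."""
    return 1 - opponent_move

def predict(level, my_simple_move):
    """Level-0 plays a fixed simple move; level-k best-responds to a
    level-(k-1) opponent modeled as a copy of oneself."""
    move = my_simple_move
    for _ in range(level):
        move = best_response(move)  # "if I were them, I'd play this, so I play..."
    return move

for k in range(4):
    print(f"level {k}: play {predict(k, my_simple_move=0)}")
```

Each added level attributes the agent's own current skill to the other player, which is exactly the source of the growing inferential complexity the model exploits.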
The widely accepted two-dimensional circumplex model of emotions posits that most instances of human emotional experience can be understood within the two general dimensions of valence and activation. Currently, this model is facing some criticism, because complex emotions in particular are hard to define within only these two general dimensions. The present theory-driven study introduces an analytical approach that departs from the conventional two-dimensional paradigm. The main goal was to map and project the semantic emotion space in terms of the mutual positions of various prototypical emotion categories. Participants (N = 187; 54.5% females) judged 16 discrete emotions in terms of valence, intensity, controllability and utility. The results revealed that these four dimensional input measures were uncorrelated. This implies that valence, intensity, controllability and utility represented clearly different qualities of discrete emotions in the judgments of the participants. Based on these data, we constructed a 3D hypercube projection and compared it with various two-dimensional projections. This contrast enabled us to detect several sources of bias when working with the traditional two-dimensional analytical approach. Contrasting two-dimensional and three-dimensional projections revealed that the 2D models provided biased insights about how emotions are conceptually related to one another along multiple dimensions. The results of the present study point out the reductionist nature of the two-dimensional paradigm in the psychological theory of emotions and challenge the widely accepted circumplex model.
Although there have been efforts to integrate Semantic Web technologies with AI research on artificial agents, the two remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents, inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an epistemic ontology containing facts about knowledge, beliefs, perceptions and communication; 3) an ontology concerning future intentions, desires, and aversions; and, finally, 4) a deontic ontology for modeling obligations and prohibitions which limit agents’ actions. The architecture of the ontology framework is inspired by the deontic cognitive event calculus as well as by epistemic and deontic logic. We also describe a case study in which the proposed DCEO ontology supports autonomous vehicle navigation.
When game theory was introduced to biology, the components of classic game theory models were replaced with elements more befitting evolutionary phenomena. The actions of intelligent agents are replaced by phenotypic traits; utility is replaced by fitness; rational deliberation is replaced by natural selection. In this paper, I argue that this classic conception of comprehensive reapplication is misleading, for it overemphasizes the discontinuity between human behavior and evolved traits. Explicitly considering the representational roles of evolutionary game theory brings to attention neglected areas of overlap, as well as a range of evolutionary possibilities that are often overlooked. The clarifications this analysis provides are well-illustrated by—and particularly valuable for—game theoretic treatments of the evolution of social behavior.
This study provides a basic introduction to agent-based modeling (ABM) as a powerful blend of classical and constructive mathematics, with a primary focus on its applicability for social science research. The typical goals of ABM social science researchers are discussed along with the culture-dish nature of their computer experiments. The applicability of ABM for science more generally is also considered, with special attention to physics. Finally, two distinct types of ABM applications are summarized in order to illustrate concretely the duality of ABM: Real-world systems can not only be simulated with verisimilitude using ABM; they can also be efficiently and robustly designed and constructed on the basis of ABM principles.
Should objects count as necessarily having certain properties, despite their not having those properties when they do not exist? For example, should a cat that passes out of existence, and so no longer is a cat, nonetheless count as necessarily being a cat? In this essay I examine different ways of adapting Aldo Bressan’s MLν so that it can accommodate an affirmative answer to these questions. Anil Gupta, in The Logic of Common Nouns, creates a number of languages that have a kinship with Bressan’s MLν, three of which are also tailored to affirmatively answering these questions. After comparing their languages, I argue that metaphysicians and philosophers of language should prefer MLν to Gupta’s languages in most applications because it can accommodate essential properties, like being a cat, while being more uniform and less cumbersome.
We are increasingly exposed to polarized media sources, with clear evidence that individuals choose those sources closest to their existing views. We also have a tradition of open face-to-face group discussion in town meetings, for example. There are a range of current proposals to revive the role of group meetings in democratic decision-making. Here, we build a simulation that instantiates aspects of reinforcement theory in a model of competing social influences. What can we expect in the interaction of polarized media with group interaction along the lines of town meetings? Some surprises are evident from a computational model that includes both. Deliberative group discussion can be expected to produce opinion convergence. That convergence may not, however, be a cure for extreme views polarized at opposite ends of the opinion spectrum. In a large class of cases, we show that adding the influence of group meetings in an environment of self-selected media produces not a moderate central consensus but opinion convergence at one of the extremes defined by polarized media.
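A stripped-down sketch of the two competing pulls in such a simulation follows; the update rule, weights, and population size are guesses for illustration, not the authors' reinforcement-based model:

```python
# A minimal sketch: agents hold opinions in [0, 1]; each step they move toward
# the polarized media source nearest their current view (self-selected media)
# and toward the population mean (group discussion). Parameters hypothetical.

import random

random.seed(1)
N, STEPS = 100, 500
MEDIA = (0.0, 1.0)             # two outlets at opposite extremes
W_MEDIA, W_GROUP = 0.05, 0.10  # hypothetical influence weights

opinions = [random.random() for _ in range(N)]

for _ in range(STEPS):
    group_mean = sum(opinions) / N
    opinions = [
        x + W_MEDIA * (min(MEDIA, key=lambda m: abs(m - x)) - x)
          + W_GROUP * (group_mean - x)
        for x in opinions
    ]

# Depending on the parameters and the initial draw, the population either
# splits into two media-anchored camps or cascades to a single extreme;
# the paper's point is that group discussion need not yield a moderate center.
print("mean:", round(sum(opinions) / N, 3),
      "spread:", round(max(opinions) - min(opinions), 3))
```

Varying W_MEDIA against W_GROUP shows the trade-off the abstract discusses: discussion drives convergence, but it converges on whatever view the self-selected media environment has already tilted the group toward.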
The book will be useful for economists, finance and valuation professionals, market researchers, public policy analysts, data analysts, and teachers or students in graduate-level classes. It is aimed at students and beginners who are interested in forecasting, modeling and the analytics of economic processes and who want to get an idea of how these are implemented.
We develop an approach to the problem of de se belief usually expressed with the question: what does the shopper with the leaky sugar bag have to learn to know that s/he is the one making the mess? Where one might have thought that some special kind of “de se” belief explains the triggering of action, we maintain that this gets the order of explanation wrong. We sketch a very simple cognitive architecture that yields de se-like behavior, on which the action-triggering functionality of the belief-state is what counts it as de se, rather than some prior property of being “de se” explaining the triggering of action. This functionality shows that an action-triggering change in belief-state also undergirds a correlative change in the objective involved in the triggered action. This model is far too simple to have any claim to showing how the de se works for humans, but it shows, by illustration, that nothing mysteriously “subjective” need be involved in this aspect of self-conception. While our exposition is very different from those of Perry and Recanati, all three of us are developing the same kind of view.
Modeling is central to scientific inquiry. It also depends heavily upon the imagination. In modeling, scientists seem to turn their attention away from the complexity of the real world to imagine a realm of perfect spheres, frictionless planes and perfect rational agents. Modeling poses many questions. What are models? How do they relate to the real world? Recently, a number of philosophers have addressed these questions by focusing on the role of the imagination in modeling. Some have also drawn parallels between models and fiction. This chapter examines these approaches to scientific modeling and considers the challenges they face.
This is an excerpt of a report that highlights and explores five questions which arose from The Unity of Consciousness and Sensory Integration conference at Brown University in November of 2011. This portion of the report explores the question: How should we model the unity of consciousness?
Context has emerged as a central concept in a variety of contemporary approaches to reasoning. The conference at which the papers in this volume were presented, CONTEXT 2001, was the third international, interdisciplinary conference on the topic of context, and was held in Dundee, Scotland on July 27-30, 2001.
The thesis deals with the concept of truth and the paradoxes of truth. Philosophical theories usually consider the concept of truth from a wider perspective. They are concerned with questions such as: Is there any connection between truth and the world? And, if there is, what is the nature of the connection? Contrary to these theories, this analysis is of a logical nature. It deals with the internal semantic structure of language, the mutual semantic connection of sentences, above all the connection of sentences that speak about the truth of other sentences and the sentences whose truth they speak about. Truth paradoxes show that there is a problem in our basic understanding of language meaning, and they are a test for any proposed solution. It is important to make a distinction between the normative and the analytical aspect of a solution. The former tries to ensure that paradoxes will not emerge. The latter tries to explain paradoxes. Of course, the practical aspect of a solution is also important. It tries to ensure a good framework for the logical foundations of knowledge, for related problems in Artificial Intelligence, and for the analysis of natural language. Tarski's analysis emphasized the T-scheme as the basic intuitive principle for the concept of truth, but it also showed its inconsistency with classical logic. Tarski's solution is to preserve classical logic and to restrict the scheme: we can talk about the truth of sentences of a language only inside another, essentially richer metalanguage. This solution is in harmony with the idea of the reflexivity of thinking, and it has become very fertile for mathematics and science in general. But it has a normative nature: truth paradoxes are avoided in a way that, in such a frame, we cannot even express paradoxical sentences. It is also too restrictive because, for the same reason, we cannot express a situation in which there is a circular reference of some sentences to other sentences, no matter how common and harmless such a situation may be. Kripke showed that there is no natural restriction to the T-scheme and that we have to accept it. But then we must also accept the riskiness of sentences: the possibility that under some circumstances a sentence does not have a classical truth value but is undetermined. This leads to languages with three-valued semantics. Kripke did not give any definite model, but he gave a theoretical frame for the investigation of various models: each fixed point in each three-valued semantics can be a model for the concept of truth. The solutions also have a normative nature: we can express the paradoxical sentences, but we escape contradiction by declaring them undetermined. Such a solution could become an analytical solution only if we provide an analysis that shows in a substantial way that it models the concept of truth. Kripke took some steps in the direction of finding an analytical solution. He preferred the strong Kleene three-valued semantics, of which he wrote that it was "appropriate", but he did not explain why. One reason for this choice is probably that Kripke finds paradoxical sentences meaningful. This eliminates the weak Kleene three-valued semantics, which corresponds to the idea that paradoxical sentences are meaningless, and thus indeterminate. Another reason could be that the strong Kleene three-valued semantics has the so-called investigative interpretation.
According to this interpretation, this semantics corresponds to the classical determination of truth, whereby all sentences that do not have an already determined value are temporarily considered indeterminate. When we determine the truth value of these sentences, then we can also determine the truth value of the sentences that are composed of them. Kripke supplemented this investigative interpretation with an intuition about learning the concept of truth. That intuition deals with how we can teach someone who is a competent user of an initial language (without the predicate of truth T) to use sentences that contain the predicate T. That person knows which sentences of the initial language are true and which are not. We give her the rule to assign the attribute T to the former and deny it to the latter. In that way, some new sentences that contain the predicate of truth, and which were indeterminate until then, become determinate. So the person gets a new set of true and false sentences with which she continues the procedure. This intuition leads directly to the smallest fixed point of strong Kleene semantics as an analytically acceptable model for the logical notion of truth. However, since this process is usually saturated only at some transfinite ordinal, this intuition, by climbing the ordinals, increasingly becomes a metaphor. This thesis is an attempt to give an analytical solution to the truth paradoxes. It gives an analysis of why and how some sentences lack a classical truth value. The starting point is the basic intuition according to which paradoxical sentences are meaningful (because we understand well what they are talking about; moreover, we use this understanding in determining their truth values), but they witness the failure of the classical procedure of determining their truth value in some "extreme" circumstances. Paradoxes emerge because the classical procedure of truth value determination does not always give the classically supposed (and expected) answer. The analysis shows that such an assumption is an unjustified generalization from common situations to all situations. We can accept the classical procedure of truth value determination, and consequently the internal semantic structure of the language, but we must reject the universality of the exterior assumption that the procedure ends successfully. Awareness of this transforms paradoxes into normal situations inherent to the classical procedure. Some sentences, although meaningful, are not assigned a unique value when we evaluate them according to the classical truth conditions. We can assign to them the third value, "undetermined", as a sign of the definitive failure of the classical procedure. An analysis of the propagation of the failure through the structure of sentences gives exactly the strong Kleene three-valued semantics, not as an investigative procedure, as it occurs in Kripke, but as the classical truth determination procedure accompanied by the propagation of its own failure. An analysis of the circularities in the determination of the classical truth value gives a criterion for when the classical procedure succeeds and when it fails, i.e., when sentences will have a classical truth value and when they will not. It turns out that the truth values of sentences thus obtained give exactly the largest intrinsic fixed point of the strong Kleene three-valued semantics.
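The learning intuition behind the smallest fixed point can be illustrated computationally; the miniature language and its encoding below are my own simplification of the construction described here, not the thesis's formal apparatus:

```python
# A toy sketch of the Kripke-style construction: start with every sentence
# undetermined and repeatedly apply the determination rule, letting values
# propagate via strong Kleene evaluation until a fixed point is reached.

# Sentences: ("atom", truth_value), ("T", name), ("not", sentence)
sentences = {
    "snow": ("atom", True),          # an ordinary true sentence
    "s1":   ("T", "snow"),           # "'snow' is true"  -> determined at stage 1
    "s2":   ("T", "s1"),             # "'s1' is true"    -> determined a stage later
    "liar": ("not", ("T", "liar")),  # "this sentence is not true"
}

U = None  # the third value: undetermined

def eval_sent(sent, val):
    """Strong Kleene evaluation given the current partial valuation `val`."""
    kind = sent[0]
    if kind == "atom":
        return sent[1]
    if kind == "T":
        return val[sent[1]]          # T('s') inherits s's current value
    if kind == "not":
        v = eval_sent(sent[1], val)
        return U if v is U else not v

val = {name: U for name in sentences}   # stage 0: nothing determined yet
while True:
    new = {n: eval_sent(s, val) if val[n] is U else val[n]
           for n, s in sentences.items()}
    if new == val:                      # fixed point reached
        break
    val = new

print(val)  # snow, s1, s2 become True; liar stays undetermined
```

Each pass mimics one stage of the learner's procedure: newly determined sentences feed the next stage, values only ever get added (monotonicity), and the liar never receives a classical value, exactly as the smallest-fixed-point picture predicts for this finite toy case.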
In that way, an argument is given for that choice, among all fixed points of all monotone three-valued semantics, as the model of the logical concept of truth. An immediate mathematical description of the fixed point is given, too. It has also been shown how this language can be semantically completed to a classical language which in many respects appears a natural completion of the process of thinking about the truth values of the sentences of a given language. Thus the final model is a language that has one interpretation and two systems of sentence truth evaluation, a primary and a final evaluation. Through the symbol T, the language speaks of its primary truth valuation, which is precisely the largest intrinsic fixed point of the strong Kleene three-valued semantics. Its final truth valuation is the semantic completion of the first, in such a way that all sentences that are not true in the primary valuation are false in the final valuation.
Science continually contributes new models and rethinks old ones. The way inferences are made is constantly being re-evaluated. The practice and achievements of science are both shaped by this process, so it is important to understand how models and inferences are made. But, despite the relevance of models and inference in scientific practice, these concepts still remain controversial in many respects. The attempt to understand the ways models and inferences are made basically opens two roads. The first one is to produce an analysis of the role that models and inferences play in science. The second one is to produce an analysis of the way models and inferences are constructed, especially in the light of what science tells us about our cognitive abilities. The papers collected in this volume go both ways.
Ecological-enactive approaches to cognition aim to explain cognition in terms of the dynamic coupling between agent and environment. Accordingly, cognition of one’s immediate environment (which is sometimes labeled “basic” cognition) depends on enaction and the picking up of affordances. However, ecological-enactive views supposedly fail to account for what is sometimes called “higher” cognition, i.e., cognition about potentially absent targets, which therefore can only be explained by postulating representational content. This challenge levelled against ecological-enactive approaches highlights a putative explanatory gap between basic and higher cognition. In this paper, we examine scientific cognition—a paradigmatic case of higher cognition—and argue that it shares fundamental features with basic cognition, for enaction and affordance selection are central to the scientific enterprise. Our argument focuses on modeling, and on how models promote scientific understanding. We base our argument on a non-representational account of scientific understanding and on material engagement theory, whereby models are conceived as material objects designed for scientific engagement. Having done so, we conclude that the explanatory gap is significantly less threatening to the ecological-enactive approach than it might appear.
If we consider modeling not as a heap of contingent structures, but (where possible) as evolving coordinated systems of models, then we can reasonably explain as "direct representations" even some very complicated model-based cognitive situations. Scientific modeling is not as indirect as it may seem. "Direct theorizing" comes later, as the result of a successful model evolution.
Isabelle Peschard (Philosophy Department, San Francisco State University), “Making sense of modeling: beyond representation”, European Journal for Philosophy of Science 1(3): 335–352. DOI: 10.1007/s13194-011-0032-8.
It is my thesis that Renaissance classical translations and imitations were often works of political surrogacy in a literary environment characterized by harsh censorship. So, for instance, the works of Homer, Virgil, and Lucan were read as coded texts that ranged across the political spectrum.
Agent-based modeling is showing great promise in the social sciences. However, two misconceptions about the relation between social macroproperties and microproperties afflict agent-based models. These lead current models to systematically ignore factors relevant to the properties they intend to model, and to overlook a wide range of model designs. Correcting for these brings painful trade-offs, but has the potential to transform the utility of such models.
A physical model was used to show that the neural system can memorize a target position and can generate the motor and sensory events that move the arm to a target with greater accuracy. However, this cannot indicate in which coordinates the necessary computations are carried out. Turning off the lights, which cuts off one feedback path, causes the error to increase. The geometrical properties of arm kinematics and the properties of the kinesthetic and visual sensory systems should be better known before inferences about higher levels of processing can be drawn.
The present Yearbook (which is the fourth in the series) is subtitled Trends & Cycles. Already ancient historians (see, e.g., the second chapter of Book VI of Polybius' Histories) described rather well the cyclical component of historical dynamics, whereas new interesting analyses of such dynamics also appeared in the Medieval and Early Modern periods (see, e.g., Ibn Khaldūn 1958 [1377], or Machiavelli 1996 [1531]). This is not surprising, as cyclical dynamics were dominant in agrarian social systems. With modernization, trend dynamics became much more pronounced, and these are the trends to which students of modern societies pay more attention. Note that the term trend – as regards its contents and application – is tightly connected with formal mathematical analysis. Trends may be described by various equations – linear, exponential, power-law, etc. On the other hand, cliodynamic research has demonstrated that cyclical historical dynamics can also be modeled mathematically in a rather effective way (see, e.g., Usher 1989; Chu and Lee 1994; Turchin 2003, 2005a, 2005b; Turchin and Korotayev 2006; Turchin and Nefedov 2009; Nefedov 2004; Korotayev and Komarova 2004; Korotayev, Malkov, and Khaltourina 2006; Korotayev and Khaltourina 2006; Korotayev 2007; Grinin 2007), whereas the trend and cycle components of historical dynamics turn out to be of equal importance.
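As a purely illustrative sketch of the trend-plus-cycle decomposition mentioned here (not any of the cited cliodynamic models), one might write:

```python
# A minimal sketch: historical dynamics decomposed into a trend component
# plus a cyclical component, the two ingredients this Yearbook's title pairs.
# All functional forms and parameters are hypothetical illustrations.

import math

def trend(t, x0=1.0, r=0.02):
    """Exponential trend x0 * e^(r t); swap in t**k for a power-law trend."""
    return x0 * math.exp(r * t)

def cycle(t, amplitude=0.3, period=50.0):
    """A simple sinusoidal stand-in for, e.g., secular demographic cycles."""
    return amplitude * math.sin(2 * math.pi * t / period)

# A population-like index over two "cycles" of 50 time units each
for t in range(0, 101, 25):
    print(t, round(trend(t) * (1 + cycle(t)), 3))
```

The multiplicative combination makes the cyclical swings grow with the trend, one simple way of expressing the claim that the two components are of equal importance rather than one being noise around the other.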