There is a prevalent notion among cognitive scientists and philosophers of mind that computers are merely formal symbol manipulators, performing the actions they do solely on the basis of the syntactic properties of the symbols they manipulate. This view of computers has allowed some philosophers to divorce semantics from computational explanations. Semantic content, then, becomes something one adds to computational explanations to get psychological explanations. Other philosophers, such as Stephen Stich, have taken a stronger view, advocating doing away with semantics entirely. This paper argues that a correct account of computation requires us to attribute content to computational processes in order to explain which functions are being computed. This entails that computational psychology must countenance mental representations. Since anti-semantic positions are incompatible with computational psychology thus construed, they ought to be rejected. Lastly, I argue that in an important sense, computers are not formal symbol manipulators.
In this paper, the role of the environment and physical embodiment of computational systems for explanatory purposes will be analyzed. In particular, the focus will be on cognitive computational systems, understood in terms of mechanisms that manipulate semantic information. It will be argued that the role of the environment has long been appreciated, in particular in the work of Herbert A. Simon, which has inspired the mechanistic view on explanation. From Simon’s perspective, the embodied view on cognition seems natural, but it is nowhere near as critical as its proponents suggest. The only point of difference between Simon and embodied cognition is the significance of body-based off-line cognition; however, it will be argued that this is notoriously over-appreciated in the current debate. The new mechanistic view on explanation stresses that even if it is critical to situate a mechanism in its environment and study its physical composition, or realization, not all detail counts, and some bodily features of cognitive systems should be left out of explanations.
This essay considers what it means to understand natural language and whether a computer running an artificial-intelligence program designed to understand natural language does in fact do so. It is argued that a certain kind of semantics is needed to understand natural language, that this kind of semantics is mere symbol manipulation (i.e., syntax), and that, hence, it is available to AI systems. Recent arguments by Searle and Dretske to the effect that computers cannot understand natural language are discussed, and a prototype natural-language-understanding system is presented as an illustration.
The Computational Theory of Mind (CTM) holds that cognitive processes are essentially computational, and hence computation provides the scientific key to explaining mentality. The Representational Theory of Mind (RTM) holds that representational content is the key feature in distinguishing mental from non-mental systems. I argue that there is a deep incompatibility between these two theoretical frameworks, and that the acceptance of CTM provides strong grounds for rejecting RTM. The focal point of the incompatibility is the fact that representational content is extrinsic to formal procedures as such, and the intended interpretation of syntax makes no difference to the execution of an algorithm. So the unique 'content' postulated by RTM is superfluous to the formal procedures of CTM. And once these procedures are implemented in a physical mechanism, it is exclusively the causal properties of the physical mechanism that are responsible for all aspects of the system's behaviour. So once again, postulated content is rendered superfluous. To the extent that semantic content may appear to play a role in behaviour, it must be syntactically encoded within the system, and just as in a standard computational artefact, so too with the human mind/brain - it's pure syntax all the way down to the level of physical implementation. Hence 'content' is at most a convenient meta-level gloss, projected from the outside by human theorists, which itself can play no role in cognitive processing.
Contemporary philosophy and theoretical psychology are dominated by an acceptance of content-externalism: the view that the contents of one's mental states are constitutively, as opposed to causally, dependent on facts about the external world. In the present work, it is shown that content-externalism involves a failure to distinguish between semantics and pre-semantics---between, on the one hand, the literal meanings of expressions and, on the other hand, the information that one must exploit in order to ascertain their literal meanings. It is further shown that, given the falsity of content-externalism, the falsity of the Computational Theory of Mind (CTM) follows. It is also shown that CTM involves a misunderstanding of terms such as "computation," "syntax," "algorithm," and "formal truth." Novel analyses of the concepts expressed by these terms are put forth. These analyses yield clear, intuition-friendly, and extensionally correct answers to the questions "what are propositions?", "what is it for a proposition to be true?", and "what are the logical and psychological differences between conceptual (propositional) and non-conceptual (non-propositional) content?" Naively taking literal meaning to be in lockstep with cognitive content, Burge, Salmon, Falvey, and other semantic externalists have wrongly taken Kripke's correct semantic views to justify drastic and otherwise contraindicated revisions of commonsense. (Salmon: What is non-existent exists; at a given time, one can rationally accept a proposition and its negation. Burge: Somebody who is having a thought may be psychologically indistinguishable from somebody who is thinking nothing. Falvey: somebody who rightly believes himself to be thinking about water is psychologically indistinguishable from somebody who wrongly thinks himself to be doing so and who, indeed, isn't thinking about anything.) Given a few truisms concerning the differences between thought-borne and sentence-borne information, the data is easily modeled without conceding any legitimacy to any one of these rationality-dismantling atrocities. (It thus turns out, ironically, that no one has done more to undermine Kripke's correct semantic points than Kripke's own followers!).
The epistemic modal auxiliaries 'must' and 'might' are vehicles for expressing the force with which a proposition follows from some body of evidence or information. Standard approaches model these operators using quantificational modal logic, but probabilistic approaches are becoming increasingly influential. According to a traditional view, 'must' is a maximally strong epistemic operator and 'might' is a bare possibility one. A competing account---popular amongst proponents of a probabilistic turn---says that, given a body of evidence, 'must p' entails that Pr(p) is high but non-maximal and 'might p' that Pr(p) is significantly greater than 0. Drawing on several observations concerning the behavior of 'must', 'might' and similar epistemic operators in evidential contexts, deductive inferences, downplaying and retraction scenarios, and expressions of epistemic tension, I argue that those two influential accounts have systematic descriptive shortcomings. To better make sense of their complex behavior, I propose instead a broadly Kratzerian account according to which 'must p' entails that Pr(p) = 1 and 'might p' that Pr(p) > 0, given a body of evidence and a set of normality assumptions about the world. From this perspective, 'must' and 'might' are vehicles for expressing a common mode of reasoning whereby we draw inferences from specific bits of evidence against a rich set of background assumptions---some of which we represent as defeasible---which capture our general expectations about the world. I will show that the predictions of this Kratzerian account can be substantially refined once it is combined with a specific yet independently motivated 'grammatical' approach to the computation of scalar implicatures. Finally, I discuss some implications of these results for more general discussions concerning the empirical and theoretical motivation to adopt a probabilistic semantic framework.
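The truth conditions just described lend themselves to a simple computational gloss. The following minimal sketch assumes a toy finite set of worlds with a uniform distribution, hypothetical 'evidence' and 'normality' filters, and treats 'must p' as Pr(p) = 1 and 'might p' as Pr(p) > 0 relative to the worlds that survive both filters; it illustrates the stated entailments and is not the paper's formal semantics.

```python
# A minimal sketch, assuming a uniform distribution over a toy set of worlds.
# 'must p' holds iff Pr(p) = 1 and 'might p' iff Pr(p) > 0, where Pr is
# computed over the worlds compatible with the evidence and with a set of
# (defeasible) normality assumptions. All names below are hypothetical.

def probability(prop, worlds, evidence, normal):
    """Pr(prop) restricted to worlds passing both the evidence and normality filters."""
    live = [w for w in worlds if evidence(w) and normal(w)]
    if not live:
        return 0.0
    return sum(1 for w in live if prop(w)) / len(live)

def must(prop, worlds, evidence, normal):
    return probability(prop, worlds, evidence, normal) == 1.0

def might(prop, worlds, evidence, normal):
    return probability(prop, worlds, evidence, normal) > 0.0

# Toy example: worlds record whether it rained and whether the streets are wet.
worlds = [{"rain": True, "wet": True},
          {"rain": False, "wet": True},
          {"rain": False, "wet": False}]
evidence = lambda w: w["wet"]   # we observe that the streets are wet
normal = lambda w: True         # no defeasible normality assumption is active

print(must(lambda w: w["rain"], worlds, evidence, normal))   # False
print(might(lambda w: w["rain"], worlds, evidence, normal))  # True
```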
This paper argues that the idea of a computer is unique. Calculators and analog computers are not different ideas about computers, and nature does not compute by itself. Computers, once clearly defined in all their terms and mechanisms, rather than enumerated by behavioral examples, can be more than instrumental tools in science, and more than a source of analogies and taxonomies in philosophy. They can help us understand semantic content and its relation to form. This can be achieved because they have the potential to do more than calculators, which are computers that are designed not to learn. Today’s computers are not designed to learn; rather, they are designed to support learning; therefore, any theory of content tested by computers that currently exist must be of an empirical, rather than a formal, nature. If they are designed someday to learn, we will see a change in roles, requiring an empirical theory about the Turing architecture’s content, using the primitives of learning machines. This way of thinking, which I call the intensional view of computers, avoids the problems of analogies between minds and computers. It focuses on the constitutive properties of computers, such as showing clearly how they can help us avoid the infinite regress in interpretation, and how we can clarify the terms of the suggested mechanisms to facilitate a useful debate. Within the intensional view, syntax and content in the context of computers become two ends of physically realizing correspondence problems in various domains.
Both in formal and computational natural language semantics, the classical correspondence view of meaning – and, more specifically, the view that the meaning of a declarative sentence coincides with its truth conditions – is widely held. Truth (in the world or a situation) plays the role of the given, and meaning is analysed in terms of it. Both language and the world feature in this perspective on meaning, but language users are conspicuously absent. In contrast, the inferentialist semantics that Robert Brandom proposes in his magisterial book ‘Making It Explicit’ puts the language user centre stage. According to his theory of meaning, the utterance of a sentence is meaningful in as far as it is a move by a language user in a game of giving and asking for reasons (with reasons underwritten by a notion of good inferences). In this paper, I propose a proof-theoretic formalisation of the game of giving and asking for reasons that lends itself to computer implementation. In the current proposal, I flesh out an account of defeasible inferences, a variety of inferences which play a pivotal role in ordinary (and scientific) language use.
This paper defends the view that the Faculty of Language is compositional, i.e., that it computes the meaning of complex expressions from the meanings of their immediate constituents and their structure. I argue that compositionality and other competing constraints on the way in which the Faculty of Language computes the meanings of complex expressions should be understood as hypotheses about innate constraints of the Faculty of Language. I then argue that, unlike compositionality, most of the currently available non-compositional constraints predict incorrect patterns of early linguistic development. This supports the view that the Faculty of Language is compositional. More generally, this paper presents a way of framing the compositionality debate (by focusing on its implications for language acquisition) that can lead to its eventual resolution, so it will hopefully also interest theorists who disagree with its main conclusion.
Philosophical questions about minds and computation need to focus squarely on the mathematical theory of Turing machines (TM's). Surrogate TM's such as computers or formal systems lack abilities that make Turing machines promising candidates for possessors of minds. Computers are only universal Turing machines (UTM's)—a conspicuous but unrepresentative subclass of TM. Formal systems are only static TM's, which do not receive inputs from external sources. The theory of TM computation clearly exposes the failings of two prominent critiques, Searle's Chinese room (1980) and arguments from Gödel's Incompleteness theorems (e.g., Lucas, 1961; Penrose, 1989), both of which fall short of addressing the complete TM model. Both UTM-computers and formal systems provide an unsound basis for debate. In particular, their special natures easily foster the misconception that computation entails intrinsically meaningless symbol manipulation. This common view is incorrect with respect to full-fledged TM's, which can process inputs non-formally, i.e., in a subjective and dynamically evolving fashion. To avoid a distorted understanding of the theory of computation, philosophical judgments and discussions should be grounded firmly upon the complete Turing machine model, the proper model for real computers.
A propositional attitude (PA) is a belief, desire, fear, etc., that x is the case. This dissertation addresses the question of the semantic content of a specific kind of PA-instance: an instance of a belief of the form all Fs are Gs. The belief that all bachelors are sports fans has this form, while the belief that Spain is a country in Eastern Europe does not. Unlike a state of viewing the color of an orange, a belief-instance is semantically contentful because it has reference, a meaning, logical implications, or a truth-value. While the intrinsic semantics view holds that either concepts or abstract objects are the source of content for PAs, the extrinsic semantics view holds that symbols of a mental language provide this content. I argue that a successful theory of intentionality must explain: (1) the truth-preserving causal powers of PAs, (2) the failure of the deductive principle Substitutivity to preserve truth over sentences that ascribe PAs, and (3) the truth-evaluability of PAs. As an internalist version of the extrinsic semantics view, I first evaluate Fodor’s Computational Theory of Mind (CTM), which says the semantic ingredients of mental states are symbols governed by rules of a mental syntax. I argue that in order to meet (1), CTM would have to associate the causal patterns of each thought-type with the inferential relations of some proposition – in an arbitrary or question-begging way. I also evaluate Fodor’s causal theory, as an externalist version of the extrinsic semantics view. This view is that lawlike causal relations between mental symbols and objects determine the reference, and thus the truth-value, of a thought. I argue that Brian Loar’s circularity objection refutes the ability of this theory to meet (3); and I endorse the intrinsic semantics perspective. I evaluate Frege’s theory of abstract, mind-external, and intrinsically semantic objects (senses), as an attempt to meet condition (2). I conclude that mind-external universals are the source of the intrinsically semantic features of concepts. Finally, I put forth a theory called ‘Bare Property Intentionality’, which describes the features of intrinsically representative and semantic concepts that connect them to these universals.
This paper is concerned with the construction of theories of software systems yielding adequate predictions of their target systems’ computations. It is first argued that mathematical theories of programs are not able to provide predictions that are consistent with observed executions. Empirical theories of software systems are here introduced semantically, in terms of a hierarchy of computational models that are supplied by formal methods and testing techniques in computer science. Both deductive top-down and inductive bottom-up approaches to the discovery of semantic software theories are rejected in favour of the abductive process of hypothesising and refining models at each level in the hierarchy, until they become satisfactorily predictive. Empirical theories of computational systems are required to be modular, as most software verification and testing activities are modular. We argue that logical relations must thereby be defined among models representing different modules in a semantic theory of a modular software system. We also argue that scientific structuralism is unable to define the module relations needed in modular software theories. The algebraic Theory of Institutions is finally introduced to specify the logical structure of modular semantic theories of computational systems.
In this paper it is argued that the conjunction of linguistic ersatzism, the ontologically deflationary view that possible worlds are maximal and consistent sets of sentences, and possible world semantics, the view that the meaning of a sentence is the set of possible worlds at which it is true, implies that no actual speaker can effectively use virtually any language to successfully communicate information. This result is based on complexity issues that relate to our finite computational ability to deal with large bodies of information and a strong, but well motivated, assumption about the cognitive accessibility of meanings of sentences ersatzers seem to be implicitly committed to. It follows that linguistic ersatzism, possible world semantics, or both must be rejected.
In this reply to James H. Fetzer’s “Minds and Machines: Limits to Simulations of Thought and Action”, I argue that computationalism should not be the view that (human) cognition is computation, but that it should be the view that cognition (simpliciter) is computable. It follows that computationalism can be true even if (human) cognition is not the result of computations in the brain. I also argue that, if semiotic systems are systems that interpret signs, then both humans and computers are semiotic systems. Finally, I suggest that minds can be considered as virtual machines implemented in certain semiotic systems, primarily the brain, but also AI computers. In doing so, I take issue with Fetzer’s arguments to the contrary.
I critically examine the semantic view of theories to reveal the following results. First, models in science are not the same as models in mathematics, as holders of the semantic view claim. Second, when several examples of the semantic approach are examined in detail no common thread is found between them, except their close attention to the details of model building in each particular science. These results lead me to propose a deflationary semantic view, which is simply that model construction is an important component of theorizing in science. This deflationary view is consistent with a naturalized approach to the philosophy of science.
The semantic view of theories is normally considered to be an account of theories congenial to Scientific Realism. Recently, it has been argued that Ontic Structural Realism could be fruitfully applied, in combination with the semantic view, to some of the philosophical issues peculiarly related to biology. Given the central role that models have in the semantic view, and the relevance that mathematics has in the definition of the concept of model, the focus will be on population genetics, which is one of the most mathematized areas in biology. We will analyse some of the difficulties which arise when trying to use Ontic Structural Realism to account for evolutionary biology.
Recent work on the semantic view of scientific theories is highly critical of the position. This paper identifies two common criticisms of the view, describes two popular alternatives for responding to them, and argues those responses do not suffice. Subsequently, it argues that returning to Patrick Suppes’ interpretation of the position provides the conceptual resources for rehabilitating the semantic view.
Quantification, Negation, and Focus: Challenges at the Conceptual-Intentional Semantic Interface. Tista Bagchi, National Institute of Science, Technology, and Development Studies (NISTADS) and the University of Delhi. Since the proposal of Logical Form (LF) was put forward by Robert May in his 1977 MIT doctoral dissertation and was subsequently adopted into the overall architecture of language as conceived under Government-Binding Theory (Chomsky 1981), there has been a steady research effort to determine the nature of LF in language in light of structurally diverse languages around the world, which has ultimately contributed to the reinterpretation of LF as a Conceptual-Intentional (C-I) interface level between the computational syntactic component of the faculty of language and one or more interpretive faculties of the human mind. While this has opened up further possibilities of research in phenomena such as quantifier scope and scope interactions between negation, quantification, and focus, it has also given rise to a few real challenges to linguistic theory as well. Some of these are: (i) the split between lexical meaning – a matter supposedly belonging to the phase-wise selection of lexical arrays – and issues of semantic interpretation that arise purely from binding and scope phenomena (Mukherji 2010); (ii) partially relatedly, the level at which theta role assignment can be argued to take place, an issue that is taken up by me in Bagchi (2007); and (iii) how supposedly “pure” scopal phenomena relating to quantifiers, negation, and emphasizing expressions such as only and even (comparable to, e.g., Urdu/Hindi hii and bhii, Bangla –i and –o) also have dimensions of both focus and discourse reference. While recognizing all of these challenges, this talk aims to highlight particularly challenge (iii), both in terms of scholarship in the past and for the rich prospects for research on languages of south Asia with the semantics of quantification, negation, and focus in view. The scholarship of the past that I seek to relate this issue to is where, parallel to (and largely independently of) the research on LF that had been happening, Barwise and Cooper were developing their influential view of noun phrases as generalized quantifiers, culminating in their key 1981 article “Generalized Quantifiers and Natural Language”, while, independently, McCawley, in his 1981 book Everything that Linguists have Always Wanted to Know about Logic, established through argumentation that all noun phrases semantically behave like generalized quantified expressions (further elaborated by him in the second – 1994 – revised edition of his book). I seek to demonstrate, based on limited data analysis from selected languages of south Asia, that our current understanding of quantification, negation, and focus under the Minimalist view owes something significant to the two major, but now largely marginalized, works of scholarship, and that for the way forward it is essential to adopt a more formal-semantic approach as adopted by them and also by later works such as Denis Bouchard’s (1995) The Semantics of Syntax, Mats Rooth’s work on focus (e.g., Rooth 1996, “Focus”, in Shalom Lappin’s Handbook of Contemporary Semantic Theory), Heim and Kratzer’s Semantics in Generative Grammar (1998), and Yoad Winter’s (2002) Linguistic Inquiry article on semantic number, to cite just a few instances.
Deductive inference is usually regarded as being “tautological” or “analytical”: the information conveyed by the conclusion is contained in the information conveyed by the premises. This idea, however, clashes with the undecidability of first-order logic and with the (likely) intractability of Boolean logic. In this article, we address the problem both from the semantic and the proof-theoretical point of view. We propose a hierarchy of propositional logics that are all tractable (i.e. decidable in polynomial time), although by means of growing computational resources, and converge towards classical propositional logic. The underlying claim is that this hierarchy can be used to represent increasing levels of “depth” or “informativeness” of Boolean reasoning. Special attention is paid to the most basic logic in this hierarchy, the pure “intelim logic”, which satisfies all the requirements of a natural deduction system (allowing both introduction and elimination rules for each logical operator) while admitting of a feasible (quadratic) decision procedure. We argue that this logic is “analytic” in a particularly strict sense, in that it rules out any use of “virtual information”, which is chiefly responsible for the combinatorial explosion of standard classical systems. As a result, analyticity and tractability are reconciled and growing degrees of computational complexity are associated with the depth at which the use of virtual information is allowed.
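Schematically (a reconstruction from the description above, not the authors' own notation), the proposal amounts to a chain of consequence relations

```latex
\vdash_0 \;\subseteq\; \vdash_1 \;\subseteq\; \vdash_2 \;\subseteq\; \cdots,
\qquad
\bigcup_{k \ge 0} \vdash_k \;=\; \vdash_{\mathrm{CPL}},
```

where each \(\vdash_k\) is decidable in polynomial time, \(\vdash_0\) is the pure intelim logic with its quadratic decision procedure, and the index \(k\) bounds the depth at which appeal to "virtual information" is permitted.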
The paper develops some of the conclusions, reached in Floridi (2007), concerning the future developments of Information and Communication Technologies (ICTs) and their impact on our lives. The two main theses supported in that article were that, as the information society develops, the threshold between online and offline is becoming increasingly blurred, and that once there won't be any significant difference, we shall gradually re-conceptualise ourselves not as cyborgs but rather as inforgs, i.e. socially connected, informational organisms. In this paper, I look at the development of the so-called Semantic Web and Web 2.0 from this perspective and try to forecast their future. Regarding the Semantic Web, I argue that it is a clear and well-defined project, which, despite some authoritative views to the contrary, is not a promising reality and will probably fail in the same way AI has failed in the past. Regarding Web 2.0, I argue that, although it is a rather ill-defined project, which lacks a clear explanation of its nature and scope, it does have the potentiality of becoming a success (and indeed it is already, as part of the new phenomenon of Cloud Computing) because it leverages the only semantic engines available so far in nature, us. I conclude by suggesting what other changes might be expected in the future of our digital environment.
Computing and information, and their philosophy in the broad sense, play a most important scientific, technological and conceptual role in our world. This book collects together, for the first time, the views and experiences of some of the visionary pioneers and most influential thinkers in such a fundamental area of our intellectual development.
The mainstream view in cognitive science is that computation lies at the basis of and explains cognition. Our analysis reveals that there is no compelling evidence or argument for thinking that brains compute. It makes the case for inverting the explanatory order proposed by the computational basis of cognition thesis. We give reasons to reverse the polarity of standard thinking on this topic, and ask how it is possible that computation, natural and artificial, might be based on cognition and not the other way around.
In this paper, I revisit Frege's theory of sense and reference in the constructive setting of the meaning explanations of type theory, extending and sharpening a program–value analysis of sense and reference proposed by Martin-Löf building on previous work of Dummett. I propose a computational identity criterion for senses and argue that it validates what I see as the most plausible interpretation of Frege's equipollence principle for both sentences and singular terms. Before doing so, I examine Frege's implementation of his theory of sense and reference in the logical framework of Grundgesetze, his doctrine of truth values, and views on sameness of sense as equipollence of assertions.
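The program–value analysis mentioned here can be pictured with a toy example: two expressions may compute the same value (the same reference) by different programs (different senses). The sketch below uses a crude bytecode comparison as a hypothetical stand-in for an identity criterion on programs; it is purely illustrative and is not the computational identity criterion for senses proposed in the paper.

```python
# Toy illustration of the program-value analysis: reference = the value computed,
# sense = the program by which it is computed. The bytecode comparison is only a
# crude, hypothetical stand-in for a genuine identity criterion on programs.
from dis import Bytecode

double = lambda n: n + n   # one program
twice = lambda n: 2 * n    # a different program with the same values

same_reference = double(3) == twice(3)   # True: the two expressions co-refer at 3
same_program = ([(i.opname, i.argrepr) for i in Bytecode(double)] ==
                [(i.opname, i.argrepr) for i in Bytecode(twice)])   # False: the programs differ

print(same_reference, same_program)
```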
This paper proposes a new Separabilist account of thick concepts, called the Expansion View (or EV). According to EV, thick concepts are expanded contents of thin terms. An expanded content is, roughly, the semantic content of a predicate along with modifiers. Although EV is a form of Separabilism, it is distinct from the only kind of Separabilism discussed in the literature, and it has many features that Inseparabilists want from an account of thick concepts. EV can also give non-cognitivists a novel escape from the Anti-Disentangling Argument. §I explains the approach of all previous Separabilists, and argues that there’s no reason for Separabilists to take this approach. §II explains EV. §III fends off objections. And §IV explains how non-cognitivist proponents of EV can escape the Anti-Disentangling Argument.
Turner argues that computer programs must have purposes, that implementation is not a kind of semantics, and that computers might need to understand what they do. I respectfully disagree: Computer programs need not have purposes, implementation is a kind of semantic interpretation, and neither human computers nor computing machines need to understand what they do.
Although the view that sees proper names as referential singular terms is widely considered orthodoxy, the view that proper names are predicates is growing in popularity. This is partly because the orthodoxy faces two anomalies that Predicativism can solve: on the one hand, proper names can have multiple bearers. But multiple bearerhood is a problem for the idea that proper names have just one individual as referent. On the other hand, as Burge noted, proper names can have predicative uses. But the view that proper names are singular terms arguably does not have the resources to deal with Burge’s cases. In this paper I argue that the Predicate View of proper names is mistaken. I first argue against the syntactic evidence used to support the view and against the predicativist’s methodology of inferring a semantic account for proper names based on incomplete syntactic data. I also show that Predicativism can neither explain the behaviour of proper names in full generality, nor claim the fundamentality of predicative names. In developing my own view, however, I accept the insight that proper names in some sense express generality. Hence I propose that proper names—albeit fundamentally singular referential terms—express generality in two senses. First, by being used as predicates, since then they are true of many individuals; and second, by being referentially related to many individuals. I respond to the problem of multiple bearerhood by proposing that proper names are polyreferential, and also explain the behaviour of proper names in light of the wider phenomenon I call category change, and show how Polyreferentialism can account for all uses of proper names.
Objective. Conceptualization of the definition of space as a semantic unit of language consciousness. Materials & Methods. A structural-ontological approach is used in the work, the methodology of which has been tested and applied in order to analyze the subject matter area of psychology, psycholinguistics and other social sciences, as well as in interdisciplinary studies of complex systems. Mathematical representations of space as a set of parallel series of events (Alexandrov) were used as the initial theoretical basis of the structural-ontological analysis. In this case, understanding of an event was considered in the context of the definition adopted in computer science – a change in the object properties registered by the observer. Results. The negative nature of space realizes itself in the subject-object structure, the components interaction of which is characterized by a change – a key property of the system under study. The observer's registration of changes is accompanied by spatial focusing (situational concretization of the field of changes) and the relating of its results to the field of potentially distinguishable changes (subjective knowledge about the «changing world»). The indicated correlation performs the function of space identification in terms of recognizing its properties and their subjective significance, depending on the features of the observer's motivational sphere. As a result, the correction of the actual affective dynamics of the observer is carried out, which structures the current perception of space according to the principle of the semantic fractal. Fractalization is the formation of a subjective perception of space that supposes the establishment of semantic accordance between the situational field of changes, on the one hand, and the worldview, as well as the motivational characteristics of the observer, on the other. Conclusions. The performed structural-ontological analysis of the system formed by the interaction of the perceptual function of the psyche and the semantic field of the language made it possible to conceptualize space as a field of changes potentially distinguishable by the observer, structurally organized according to the principle of the semantic fractal. The compositional features of the fractalization process consist in the fact that the semantic fractal of space is relevant to the product of the difference between the situational field of changes and the field of potentially distinguishable changes, adjusted by the current configuration of the observer's value-needs hierarchy and reduced by his actual affective dynamics.
The dominant route to nondescriptivist views of normative and evaluative language is through the expressivist idea that normative terms have distinctive expressive roles in conveying our attitudes. This paper explores an alternative route based on two ideas. First, a core normative term ‘ought’ is a modal operator; and second, modal operators play a distinctive nonrepresentational role in generating meanings for the statements in which they figure. I argue that this provides for an attractive alternative to expressivist forms of nondescriptivism about normative language. In the final section of the paper, I explore ways it might be extended to evaluative language.
Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in cyberspace, a new class of interesting and important examples of hybrid evil has come to light. In this paper, it is called artificial evil (AE) and a case is made for considering it to complement ME and NE to produce a more adequate taxonomy. By isolating the features that have led to the appearance of AE, cyberspace is characterised as a self-contained environment that forms the essential component in any foundation of the emerging field of Computer Ethics (CE). It is argued that this goes some way towards providing a methodological explanation of why cyberspace is central to so many of CE's concerns; and it is shown how notions of good and evil can be formulated in cyberspace. Of considerable interest is how the propensity for an agent's action to be morally good or evil can be determined even in the absence of biologically sentient participants and thus allows artificial agents not only to perpetrate evil (and for that matter good) but conversely to 'receive' or 'suffer from' it. The thesis defended is that the notion of entropy structure, which encapsulates human value judgement concerning cyberspace in a formal mathematical definition, is sufficient to achieve this purpose and, moreover, that the concept of AE can be determined formally, by mathematical methods. A consequence of this approach is that the debate on whether CE should be considered unique, and hence developed as a Macroethics, may be viewed, constructively, in an alternative manner. The case is made that whilst CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the approach of standard Macroethics such as Utilitarianism and Deontologism and hence to prompt the search for a robust ethical theory that can deal with them successfully. The name Information Ethics (IE) is proposed for that theory. It is argued that the uniqueness of IE is justified by its being non-biologically biased and patient-oriented: IE is an Environmental Macroethics based on the concept of data entity rather than life. It follows that the novelty of CE issues such as AE can be appreciated properly because IE provides a new perspective (though not vice versa). In light of the discussion provided in this paper, it is concluded that Computer Ethics is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, Information Ethics.
In this paper I consider the idea of external language and examine the role it plays in our understanding of human linguistic practice. Following Michael Devitt, I assume that the subject matter of a linguistic theory is not a psychologically real computational module, but a semiotic system of physical entities equipped with linguistic properties. What are the physical items that count as linguistic tokens and in virtue of what do they possess phonetic, syntactic and semantic properties? According to Devitt, the entities in question are particular bursts of sound or bits of ink that count as standard linguistic entities — that is, strings of phonemes, sequences of words and sentences — in virtue of the conventional rules that constitute the structure of the linguistic reality. In my view, however, the bearers of linguistic properties should rather be understood as complex physical states of affairs — that I call, following Ruth G. Millikan, complete linguistic signs — within which one can single out their narrow and wide components, that is, (i) sounds or inscriptions produced by the speaker and (ii) salient aspects of the context of their production. Moreover, I do not share Devitt's view on the nature of linguistic properties. Even though I maintain the general idea of convention-based semantics — according to which semantic properties of linguistic tokens are essentially conventional — I reject the Lewisian robust account of conventionality. Following Millikan, I assume that language conventions involve neither regular conformity nor mutual understanding.
Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in cyberspace, a new class of interesting and important examples of hybrid evil has come to light. In this paper, it is called artificial evil (AE) and a case is made for considering it to complement ME and NE to produce a more adequate taxonomy. By isolating the features that have led to the appearance of AE, cyberspace is characterised as a self-contained environment that forms the essential component in any foundation of the emerging field of Computer Ethics (CE). It is argued that this goes some way towards providing a methodological explanation of why cyberspace is central to so many of CE’s concerns; and it is shown how notions of good and evil can be formulated in cyberspace. Of considerable interest is how the propensity for an agent’s action to be morally good or evil can be determined even in the absence of biologically sentient participants and thus allows artificial agents not only to perpetrate evil but conversely to ‘receive’ or ‘suffer from’ it. The thesis defended is that the notion of entropy structure, which encapsulates human value judgement concerning cyberspace in a formal mathematical definition, is sufficient to achieve this purpose and, moreover, that the concept of AE can be determined formally, by mathematical methods. A consequence of this approach is that the debate on whether CE should be considered unique, and hence developed as a Macroethics, may be viewed, constructively, in an alternative manner. The case is made that whilst CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the approach of standard Macroethics such as Utilitarianism and Deontologism and hence to prompt the search for a robust ethical theory that can deal with them successfully. The name Information Ethics (IE) is proposed for that theory. It is argued that the uniqueness of IE is justified by its being non-biologically biased and patient-oriented: IE is an Environmental Macroethics based on the concept of data entity rather than life. It follows that the novelty of CE issues such as AE can be appreciated properly because IE provides a new perspective. In light of the discussion provided in this paper, it is concluded that Computer Ethics is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, Information Ethics.
Cloud computing is considered a new type of technology; in fact, it is an extension of developments in information technology based on the pooling of resources and infrastructure to provide services that depend on the cloud: instead of existing on local servers or personal devices, these services and resources are gathered in the cloud and shared over the Internet. This technology has achieved an undeniable economic success, and its use offers many advantages; today it has become widely known and recognized and is entering the libraries' area day by day because of the benefits afforded by applications that can change the nature of library services through web-hosted services accessible via any internet-connected device. Libraries have indeed started using the services provided by this technology, especially in the fields of digitization, indexing, supply, storage, and the sharing and exchange of information resources in the virtual environment. For this reason, there is a need for scientific research to find out librarians' attitudes and motivations regarding the use of cloud computing in their field of work. This research examines librarians' attitudes towards the use of cloud computing according to the Technology Acceptance Model (TAM). The study used a descriptive analytical method with a questionnaire, and it covered all the libraries of Mentouri-Constantine1, the libraries of university Abdelhamid Mehri-Constantine2, the libraries of university Constantine 3, and the libraries of Emir Abdelkader university of the Islamic science. It reached a set of results, including: cloud computing is not used in a very large proportion by librarians, and they do not have a good cognitive level of its services; in return, they show an interest in the cloud as a technology that helps implement office work, improves performance efficiency, and enhances library services.
There is an ongoing debate on whether or to what degree computer simulations can be likened to experiments. Many philosophers are sceptical about whether a strict separation between the two categories is possible and deny that the materiality of experiments makes a difference (Morrison 2009, Parker 2009, Winsberg 2010). Some also like to describe computer simulations as a “third way” between experimental and theoretical research (Rohrlich 1990, Axelrod 2003, Kueppers/Lenhard 2005). In this article I defend the view that computer simulations are not experiments but that they are tools for evaluating the consequences of theories and theoretical assumptions. In order to do so, the (alleged) similarities and differences between simulations and experiments are examined. It is found that three fundamental differences between simulations and experiments remain: 1) Only experiments can generate new empirical data. 2) Only experiments can operate directly on the target system. 3) Experiments alone can be employed for testing fundamental hypotheses. As a consequence, experiments enjoy a distinct epistemic role in science that cannot completely be superseded by computer simulations. This finding, in connection with a discussion of border cases such as hybrid methods that combine measurement with simulation, shows that computer simulations can clearly be distinguished from empirical methods. It is important to understand that computer simulations are not experiments, because otherwise there is a danger of systematically underestimating the need for empirical validation of simulations.
Logics based on weak Kleene algebra (WKA) and related structures have been recently proposed as a tool for reasoning about flaws in computer programs. The key element of this proposal is the presence, in WKA and related structures, of a non-classical truth-value that is “contaminating” in the sense that whenever the value is assigned to a formula ϕ, any complex formula in which ϕ appears is assigned that value as well. Under such interpretations, the contaminating states represent occurrences of a flaw. However, since different programs and machines can interact with (or be nested into) one another, we need to account for different kinds of errors, and this calls for an evaluation of systems with multiple contaminating values. In this paper, we make steps toward these evaluation systems by considering two logics, HYB1 and HYB2, whose semantic interpretations account for two contaminating values beside classical values 0 and 1. In particular, we provide two main formal contributions. First, we give a characterization of their relations of (multiple-conclusion) logical consequence—that is, necessary and sufficient conditions for a set Δ of formulas to logically follow from a set Γ of formulas in HYB1 or HYB2. Second, we provide sound and complete sequent calculi for the two logics.
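The "contamination" behaviour described here is easy to state operationally. The following minimal sketch implements a weak-Kleene-style valuation with a single contaminating value 'e'; HYB1 and HYB2 add a second contaminating value and fix how the two interact, which the sketch deliberately leaves out.

```python
# A minimal sketch of a weak-Kleene-style valuation with one contaminating value 'e':
# whenever any subformula takes 'e', the whole formula does. Classical values are 0 and 1.
# How two distinct contaminating values interact (the HYB1/HYB2 case) is not modelled here.

def conj(x, y):
    if 'e' in (x, y):
        return 'e'        # contamination overrides classical evaluation
    return min(x, y)      # classical conjunction on 0/1

def disj(x, y):
    if 'e' in (x, y):
        return 'e'
    return max(x, y)      # classical disjunction on 0/1

def neg(x):
    return 'e' if x == 'e' else 1 - x

print(disj(1, 'e'))  # 'e' -- contrast with strong Kleene, where a true disjunct would give 1
```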
Relevant logics provide an alternative to classical implication that is capable of accounting for the relationship between the antecedent and the consequence of a valid implication. Relevant implication is usually explained in terms of information required to assess a proposition. By doing so, relevant implication introduces a number of cognitively relevant aspects in the definition of logical operators. In this paper, we aim to take a closer look at the cognitive features of relevant implication. For this purpose, we develop a cognitively-oriented interpretation of the semantics of relevant logics. In particular, we provide an interpretation of Routley-Meyer semantics in terms of conceptual spaces and we show that it meets the constraints of the algebraic semantics of relevant logic.
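For orientation, the standard Routley-Meyer evaluation clause for relevant implication, which the paper reinterprets in terms of conceptual spaces, evaluates a conditional at a point x of a frame equipped with a ternary relation R:

```latex
x \Vdash A \to B
\quad\Longleftrightarrow\quad
\forall y\,\forall z\;\bigl(Rxyz \;\wedge\; y \Vdash A \;\Rightarrow\; z \Vdash B\bigr).
```

The paper's contribution is to read the points and the ternary relation through conceptual spaces rather than as bare indices, while respecting the constraints of the algebraic semantics.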
I argue, in this thesis, that proper name reference is a wholly pragmatic phenomenon. The reference of a proper name is neither constitutive of, nor determined by, the semantic content of that name, but is determined, on an occasion of use, by pragmatic factors. The majority of views in the literature on proper name reference claim that reference is in some way determined by the semantics of the name, either because their reference simply constitutes their semantics (which generally requires a very fine-grained individuation of names), or because names have an indexical-like semantics that returns a referent given certain specific contextual parameters. I discuss and criticize these views in detail, arguing, essentially, in both cases, that there can be no determinate criteria for reference determination—a claim required by both types of semantic view. I also consider a less common view on proper name reference: that it is determined wholly by speakers’ intentions. I argue that the most plausible version of this view—a strong neo-Gricean position whereby all utterance content is determined by the communicative intentions of the speaker—is implausible in light of psychological data. In the positive part of my thesis, I develop a pragmatic view of proper name reference that is influenced primarily by the work of Charles Travis. I argue that the reference of proper names can only be satisfactorily accounted for by claiming that reference occurs not at the level of word meaning, but at the pragmatic level, on an occasion of utterance. I claim that the contextual mechanisms that determine the reference of a name on an occasion are the same kinds of thing that determine the truth-values of utterances according to Travis. Thus, names are, effectively, occasion sensitive in the way that Travis claims predicates and sentences (amongst other expressions) are. Finally, I discuss how further research might address how my pragmatic view of reference affects traditional issues in the literature on names, and the consequences of the view for the semantics of names.
In this paper, I show how semantic factors constrain the understanding of the computational phenomena to be explained so that they help build better mechanistic models. In particular, understanding what cognitive systems may refer to is important in building better models of cognitive processes. For that purpose, a recent study of some phenomena in rats that are capable of ‘entertaining’ future paths (Pfeiffer and Foster 2013) is analyzed. The case shows that the mechanistic account of physical computation may be complemented with semantic considerations, and in many cases, it actually should.
Throughout this paper, we are trying to show how and why our mathematical framework seems inappropriate for solving problems in the Theory of Computation. More exactly, the concept of turning back in time in paradoxes causes inconsistency in the modeling of the concept of time in some semantic situations. As we see in the first chapter, by introducing a version of the “Unexpected Hanging Paradox”, we first attempt to open a new explanation for some paradoxes. In the second step, by applying this paradox, it is demonstrated that any formalized system for the Theory of Computation based on Classical Logic and the Turing Model of Computation leads us to a contradiction. We conclude that our mathematical framework is inappropriate for the Theory of Computation. Furthermore, the result provides us with a reason why many problems in Complexity Theory resist solution. (This work was completed on 2017-05-02, posted on vixra on 2017-05-14, and presented at Unilog 2018, Vichy.)
Complex demonstratives, expressions of the form 'That F', 'These Fs', etc., have traditionally been taken to be referring terms. Yet they exhibit many of the features of quantified noun phrases. This has led some philosophers to suggest that demonstrative determiners are a special kind of quantifier, which can be paraphrased using a context sensitive definite description. Both these views contain elements of the truth, though each is mistaken. We advance a novel account of the semantic form of complex demonstratives that shows how to reconcile the view that they function like quantified noun phrases with the view that simple demonstratives function as context sensitive referring terms wherever they occur. If we are right, previous accounts of complex demonstratives have misconceived their semantic role; and philosophers relying on the majority view in employing complex demonstratives in analysis have proceeded on a false assumption.
Semantic information is usually supposed to satisfy the veridicality thesis: p qualifies as semantic information only if p is true. However, what it means for semantic information to be true is often left implicit, with correspondentist interpretations representing the most popular, default option. The article develops an alternative approach, namely a correctness theory of truth (CTT) for semantic information. This is meant as a contribution not only to the philosophy of information but also to the philosophical debate on the nature of truth. After the introduction, in Sect. 2, semantic information is shown to be translatable into propositional semantic information (i). In Sect. 3, i is polarised into a query (Q) and a result (R), qualified by a specific context, a level of abstraction and a purpose. This polarization is normalised in Sect. 4, where [Q + R] is transformed into a Boolean question and its relative yes/no answer [Q + A]. This completes the reduction of the truth of i to the correctness of A. In Sects. 5 and 6, it is argued that (1) A is the correct answer to Q if and only if (2) A correctly saturates Q by verifying and validating it (in the computer science’s sense of “verification” and “validation”); that (2) is the case if and only if (3) [Q + A] generates an adequate model (m) of the relevant system (s) identified by Q; that (3) is the case if and only if (4) m is a proxy of s (in the computer science’s sense of “proxy”) and (5) proximal access to m commutes with the distal access to s (in the category theory’s sense of “commutation”); and that (5) is the case if and only if (6) reading/writing (accessing, in the computer science’s technical sense of the term) m enables one to read/write (access) s. Sect. 7 provides some further clarifications about CTT, in the light of semantic paradoxes. Section 8 draws a general conclusion about the nature of CTT as a theory for systems designers not just systems users. In the course of the article all technical expressions from computer science are explained.
In a previous work we introduced the algorithm SQEMA for computing first-order equivalents and proving canonicity of modal formulae, and thus established a very general correspondence and canonical completeness result. SQEMA is based on transformation rules, the most important of which employs a modal version of a result by Ackermann that enables elimination of an existentially quantified predicate variable in a formula, provided a certain negative polarity condition on that variable is satisfied. In this paper we develop several extensions of SQEMA where that syntactic condition is replaced by a semantic one, viz. downward monotonicity. For the first, and most general, extension SSQEMA we prove correctness for a large class of modal formulae containing an extension of the Sahlqvist formulae, defined by replacing polarity with monotonicity. By employing a special modal version of Lyndon's monotonicity theorem and imposing additional requirements on the Ackermann rule we obtain restricted versions of SSQEMA which guarantee canonicity, too.
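For orientation, the Ackermann-style result referred to here can be stated in its standard first-order form (SQEMA employs a modal version of it): if the predicate variable P does not occur in A, and B(P) is negative in P (or, in the extensions discussed in the paper, downward monotone in P), then

```latex
\exists P\,\Bigl[\,\forall \bar{x}\,\bigl(A(\bar{x}) \rightarrow P(\bar{x})\bigr) \;\wedge\; B(P)\,\Bigr]
\;\;\equiv\;\;
B\bigl[P := A\bigr].
```

The equivalence is what licenses eliminating the second-order quantifier over P by substituting A for P, which is the heart of the elimination step in the algorithm.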
Along with many other languages, English has a relatively straightforward grammatical distinction between mass-occurrences of nouns and their count-occurrences. As the mass-count distinction, in my view, is best drawn between occurrences of expressions, rather than expressions themselves, it becomes important that there be some rule-governed way of classifying a given noun-occurrence into mass or count. The project of classifying noun-occurrences is the topic of Section II of this paper. Section III, the remainder of the paper, concerns the semantic differences between nouns in their mass-occurrences and those in their count-occurrences. As both the name view and the mixed view are, in my opinion, subject to serious difficulties discussed in Section III.1, I defend a version of the predicate view. Traditionally, nouns in their singular count-occurrences are also analyzed as playing the semantic role of a predicate. How, then, does the predicate view preserve the intuitive difference between nouns in their mass- and those in their count-occurrences? I suggest, in Section III.2, that there are different kinds of predicates: mass-predicates, e.g. ‘is hair’, singular count-predicates, e.g. ‘is a hair’, and plural count-predicates, e.g. ‘are hairs’. Mass-predicates and count-predicates, in my view, are not reducible to each other. The remainder of Section III takes a closer look at the differences and interrelations between these different kinds of predicates. Mass-predicates and count-predicates differ from each other truth-conditionally, and these truth-conditional differences turn out to have interesting implications, in particular concerning the part-whole relation and our practices of counting. But mass- and count-predicates are also related to each other through systematic entailment relations; these entailment relations are examined in Section III.4.
According to the singular conception of reality, there are objects and there are singular properties, i.e. properties that are instantiated by objects separately. It has been argued that semantic considerations about plurals give us reasons to embrace a plural conception of reality. This is the view that, in addition to singular properties, there are plural properties, i.e. properties that are instantiated jointly by many objects. In this article, I propose and defend a novel semantic account of plurals which dispenses with plural properties and thus undermines the semantic argument in favor of the plural conception of reality.
With increasing publication and data production, scientific knowledge presents not simply an achievement but also a challenge. Scientific publications and data are increasingly treated as resources that need to be digitally ‘managed.’ This gives rise to scientific Knowledge Management (KM): second-order scientific work aiming to systematically collect, take care of and mobilise first-hand disciplinary knowledge and data in order to provide new first-order scientific knowledge. We follow the work of Leonelli, Efstathiou and Hislop in our analysis of the use of KM in semantic systems biology. Through an empirical philosophical account of KM-enabled biological research, we argue that KM helps produce new first-order biological knowledge that did not exist before and that could not have been produced by traditional means. KM work is enabled by conceiving of ‘knowledge’ as an object for computational science: as explicated in the text of biological articles and computable via appropriate data and metadata. However, the founded knowledge concepts that enable computational KM risk counting only computationally tractable data as knowledge, underestimating practice-based knowing and its significance in ensuring the validity of ‘manageable’ knowledge as knowledge.
A graph-theoretic account of fibring of logics is developed, capitalizing on the interleaving characteristics of fibring at the linguistic, semantic and proof levels. Fibring of two signatures is seen as a multi-graph (m-graph) where the nodes and the m-edges include the sorts and the constructors of the signatures at hand. Fibring of two models is a multi-graph (m-graph) where the nodes and the m-edges are the values and the operations in the models, respectively. Fibring of two deductive systems is an m-graph whose nodes are language expressions and whose m-edges represent the inference rules of the two original systems. The sobriety of the approach is confirmed by proving that all the fibring notions are universal constructions. This graph-theoretic view is general enough to accommodate very different fibrings of propositional-based logics, encompassing logics with non-deterministic semantics, logics with an algebraic semantics, logics with partial semantics and substructural logics, among others. Soundness and weak completeness are proved to be preserved under very general conditions. Strong completeness is also shown to be preserved under tighter conditions. In this setting, the collapsing problem that appears in several combinations of logic systems can be avoided.
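To give the flavour of the data structure at work here, the following is a minimal sketch of an m-graph, under the assumption that an m-edge runs from a finite sequence of source nodes to a single target node; the class and field names are illustrative and not taken from the paper.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class MEdge:
    """An m-edge: from a finite tuple of source nodes to a single target node."""
    label: str
    sources: tuple
    target: str

@dataclass
class MGraph:
    """A multi-graph whose edges may have several ordered source nodes."""
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)

    def add_edge(self, label, sources, target):
        self.nodes.update(sources)
        self.nodes.add(target)
        self.edges.append(MEdge(label, tuple(sources), target))

# A toy propositional signature as an m-graph: constructors over the sort 'prop'.
sig = MGraph()
sig.add_edge("and", ["prop", "prop"], "prop")
sig.add_edge("not", ["prop"], "prop")

Read this way, fibring two signatures amounts, roughly, to forming an m-graph that collects the sorts and constructors of both, as described above.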
The Turing machine is one of the simplest abstract computational devices that can be used to investigate the limits of computability. In this paper, Turing machines are considered from several points of view that emphasize the importance and the relativity of the mathematical languages used to describe them. A deep investigation is performed on the interrelations between mechanical computations and their mathematical descriptions emerging when a human (the researcher) starts to describe a Turing machine (the object of the study) by different mathematical languages (the instruments of investigation). Together with traditional mathematical languages using such concepts as ‘enumerable sets’ and ‘continuum’, a new computational methodology allowing one to measure the number of elements of different infinite sets is used in this paper. It is shown how the mathematical languages used to describe the machines limit our possibilities to observe them. In particular, notions of observable deterministic and non-deterministic Turing machines are introduced, and conditions ensuring that the latter can be simulated by the former are established.
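As a concrete reference point for the kind of device under discussion, here is a minimal simulator for a deterministic Turing machine; it is a generic textbook sketch and does not reproduce the constructions or the computational methodology used in the paper.

def run_tm(transitions, tape, state="q0", blank="_", max_steps=10_000):
    """Simulate a deterministic Turing machine.

    transitions maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). Returns the written portion of
    the tape and the final state, or None if no halt within max_steps.
    """
    tape = dict(enumerate(tape))   # sparse tape indexed by integer positions
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:   # no applicable rule: halt
            return "".join(tape[i] for i in sorted(tape)), state
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    return None   # did not halt within the step budget

# Example: flip every bit of the input and halt on the first blank.
flip = {("q0", "0"): ("q0", "1", +1),
        ("q0", "1"): ("q0", "0", +1)}
print(run_tm(flip, "0110"))   # -> ('1001', 'q0')

A non-deterministic machine would instead map each (state, symbol) pair to a set of candidate transitions; the paper's question of when such machines are observably simulable by deterministic ones is, of course, not something a sketch like this settles.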
This paper proposes an extensionalist analysis of computer simulations (CSs). It puts the emphasis neither on languages nor on models, but on symbols, on their extensions, and on their various ways of referring. It shows that the chains of reference of symbols in CSs are multiple and of different kinds. Because they are distinct and diverse, these chains enable different kinds of remoteness of reference and different kinds of validation for CSs. Although some methodological papers have already underlined the role of these various relationships of reference in CSs and of cross-validations, this diversity is still overlooked in the epistemological literature on CSs. As a consequence, a particular outcome of this analytical view is the ability to classify existing epistemological theses on CSs according to what their authors choose to select and put at the forefront: either the extensions of symbols, or the symbol-types, or the symbol-tokens, or the internal denotational hierarchies of the CS, or the reference of these hierarchies to external denotational hierarchies. Through the adoption of this extensionalist view, it also becomes possible to explain more precisely why any complete reduction of CSs to classical epistemic paradigms such as “experiment” or “theoretical argument” remains doubtful. On this last point in particular, this paper agrees with what many epistemologists have already acknowledged.
John Searle once said: "The Chinese room shows what we knew all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)." I say: "Yes". Stuart C. Shapiro has said: "Does that make any sense? Yes: Everything makes sense. The question is: What sense does it make?" This essay explores what sense it makes to say that syntax by itself is sufficient for semantics.