We present a preliminary high-level formal theory, grounded in knowledge representation techniques and foundational ontologies, for the uniform and integrated representation of the different kinds of (qualitative and quantitative) knowledge involved in the design process. We discuss the conceptual nature of engineering design by individuating and analyzing the notions involved. These notions are then formally characterized by extending the DOLCE foundational ontology. Our ultimate purpose is twofold: (i) to contribute to foundational issues of design; and (ii) to support the development of advanced modelling systems for the (qualitative and quantitative) representation of design knowledge.
Logic has been a (disputed) ingredient in the emergence and development of the now very large field known as knowledge representation and reasoning. In this book (in progress), I select some central topics in this highly fruitful, albeit controversial, association (e.g., non-monotonic reasoning, implicit belief, logical omniscience, the closed-world assumption), identifying their sources and analyzing/explaining their elaboration in highly influential published work.
We discuss the role of perceptron (or threshold) connectives in the context of Description Logic, and in particular their possible use as a bridge between statistical learning of models from data and logical reasoning over knowledge bases. We prove that such connectives can be added to the language of most forms of Description Logic without increasing the complexity of the corresponding inference problem. We show, with a practical example over the Gene Ontology, how even simple instances of perceptron connectives are expressive enough to represent learned, complex concepts derived from real use cases. This opens up the possibility of importing concepts learned from data into existing ontologies.
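The abstract describes perceptron connectives only at a high level. As a minimal illustrative sketch (not the paper's formalism), a threshold concept of the form ⟨w₁·C₁ + … + wₙ·Cₙ ≥ t⟩ holds of an individual when the weighted sum of its memberships in atomic concepts meets the threshold; all concept names and weights below are invented for illustration:

```python
# Hypothetical sketch of a perceptron (threshold) connective:
# an individual satisfies the compound concept when the weighted
# sum of its atomic-concept memberships meets the threshold.
# Concept names and weights are invented, not from the paper.

def perceptron_concept(weights, threshold):
    """Build a classifier for the threshold concept <sum_i w_i * C_i >= t>."""
    def holds(individual_concepts):
        score = sum(w for concept, w in weights.items()
                    if concept in individual_concepts)
        return score >= threshold
    return holds

# A toy "learned" concept over invented Gene Ontology-style labels.
likely_kinase = perceptron_concept(
    {"CatalyticActivity": 0.6, "ATPBinding": 0.5, "MembranePart": -0.3},
    threshold=1.0,
)

print(likely_kinase({"CatalyticActivity", "ATPBinding"}))    # 1.1 >= 1.0
print(likely_kinase({"CatalyticActivity", "MembranePart"}))  # 0.3 < 1.0
```

The point of such connectives is that a linear model learned from data can be read back as a concept expression and reasoned over alongside ordinary ontology axioms.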
This paper briefly outlines some advancements in paraconsistent logics for modelling knowledge representation and reasoning. Emphasis is given to the so-called Logics of Formal Inconsistency (LFIs), a class of paraconsistent logics that formally internalize the very concept(s) of consistency and inconsistency. A couple of specialized systems based on the LFIs will be reviewed, including belief revision and probabilistic reasoning. Potential applications of those systems in the AI area of KRR are tackled by illustrating some examples that emphasize the importance of a fine-tuned treatment of consistency in modelling reputation systems, preferences, argumentation, and evidence.
According to the PubMed resource from the U.S. National Library of Medicine, over 750,000 scientific articles were published in the ~5000 biomedical journals worldwide in the year 2007 alone. The vast majority of these publications include results from hypothesis-driven experimentation in overlapping biomedical research domains. Unfortunately, the sheer volume of information being generated by the biomedical research enterprise has made it virtually impossible for investigators to stay aware of the latest findings in their domain of interest, let alone to be able to assimilate and mine data from related investigations for purposes of meta-analysis. While computers have the potential for assisting investigators in the extraction, management and analysis of these data, information contained in the traditional journal publication is still largely unstructured, free-text descriptions of study design, experimental application and results interpretation, making it difficult for computers to gain access to the content of what is being conveyed without significant manual intervention. In order to circumvent these roadblocks and make the most of the output from the biomedical research enterprise, a variety of related standards in knowledge representation are being developed, proposed and adopted in the biomedical community. In this chapter, we will explore the current status of efforts to develop minimum information standards for the representation of a biomedical experiment, ontologies composed of shared vocabularies assembled into subsumption hierarchical structures, and extensible relational data models that link the information components together in a machine-readable and human-usable framework for data mining purposes.
Concept representation is still an open problem in the field of ontology engineering and, more generally, of knowledge representation. In particular, the issue of representing "non-classical" concepts, i.e. concepts that cannot be defined in terms of necessary and sufficient conditions, remains unresolved. In this paper we review empirical evidence from cognitive psychology, according to which concept representation is not a unitary phenomenon. On this basis, we sketch some proposals for concept representation, taking into account suggestions from psychological research. In particular, it seems that human beings employ both prototype-based and exemplar-based representations in order to represent non-classical concepts. We suggest that a similar hybrid prototype-exemplar approach could also prove useful in the field of knowledge representation technology. Finally, we propose conceptual spaces as a suitable framework for developing some aspects of this proposal.
In this inaugural lecture I offer, against the background of a discussion of knowledge representation and its tools, an overview of my research in the philosophy of science. I defend a relational model-theoretic realism as being the appropriate meta-stance most congruent with the model-theoretic view of science as a form of human engagement with the world. Making use of logics with preferential semantics within a model-theoretic paradigm, I give an account of science as process and product. I demonstrate the power of the full-blown employment of this paradigm in the philosophy of science by discussing the main applications of model-theoretic realism to traditional problems in the philosophy of science. I discuss my views of the nature of logic and of its role in the philosophy of science today. I also specifically offer a brief discussion on the future of cognitive philosophy in South Africa. My conclusion is a general look at the nature of philosophical inquiry and its significance for philosophers today. South African Journal of Philosophy Vol. 25 (4) 2006: pp. 275-289.
I argue for three points: First, evidence of the primacy of knowledge representation is not evidence of the primacy of knowledge. Second, knowledge-oriented mindreading research should also focus on misrepresentations and biased representations of knowledge. Third, knowledge-oriented mindreading research must confront the problem of the gold standard that arises when disagreement about knowledge complicates the interpretation of empirical findings.
The idea that pictorial art can have cognitive value, that it can enhance our understanding of the world and of our own selves, has had many advocates in art theory and philosophical aesthetics alike. It has also been argued, however, that the power of pictorial representation to convey or enhance knowledge, in particular knowledge with moral content, is not generalized across the medium.
(*This paper was awarded the Elisabeth and Werner Leinfellner Award 2017 for outstanding contributions.) This paper provides an explanation of the skeptical puzzle. I argue that we can take two distinct points of view towards representations, mental representations like perceptual experiences and artificial representations like symbols. When focusing on what the representation represents, we take an attached point of view. When focusing on the representational character of the representation, we take a detached point of view. From an attached point of view, we have the intuition that we can know that p simply by using the representation and without having prior knowledge about the reliability of the source that delivers the representation. When taking a detached point of view, we tend to think that we must have this kind of prior knowledge. These two conflicting intuitions about knowledge and representations provide the basis for our intuition of immediate perceptual knowledge on the one hand and for the skeptical intuition of underdetermination on the other hand.
In this paper we identify and analyze two problematic aspects affecting the representational level of cognitive architectures (CAs), namely the limited size and the homogeneous typology of the encoded and processed knowledge. We argue that such aspects may constitute not only a technological problem that, in our opinion, should be addressed in order to build artificial agents able to exhibit intelligent behaviours in general scenarios, but also an epistemological one, since they limit the plausibility of the comparison of the CAs' knowledge representation and processing mechanisms with those executed by humans in their everyday activities. In the final part of the paper further directions of research will be explored, trying to address current limitations and future challenges.
A speaker's use of a declarative sentence in a context has two effects: it expresses a proposition and represents the speaker as knowing that proposition. This essay is about how to explain the second effect. The standard explanation is act-based. A speaker is represented as knowing because their use of the declarative in a context tokens the act-type of assertion and assertions represent knowledge in what's asserted. I propose a semantic explanation on which declaratives covertly host a "know"-parenthetical. A speaker is thereby represented as knowing the proposition expressed because that is the semantic contribution of the parenthetical. I call this view parentheticalism and defend that it better explains knowledge representation than alternatives. As a consequence of outperforming assertoric explanations, parentheticalism opens the door to eliminating the act-type of assertion from linguistic theorizing.
In this article we present an advanced version of Dual-PECCS, a cognitively-inspired knowledge representation and reasoning system aimed at extending the capabilities of artificial systems in conceptual categorization tasks. It combines different sorts of common-sense categorization (prototypical and exemplar-based categorization) with standard monotonic categorization procedures. These different types of inferential procedures are reconciled according to the tenets coming from the dual process theory of reasoning. On the other hand, from a representational perspective, the system relies on the hypothesis of conceptual structures represented as heterogeneous proxytypes. Dual-PECCS has been experimentally assessed in a task of conceptual categorization where a target concept illustrated by a simple common-sense linguistic description had to be identified by resorting to a mix of categorization strategies, and its output has been compared to human responses. The obtained results suggest that our approach can be beneficial to improve the representational and reasoning conceptual capabilities of standard cognitive artificial systems, and, in addition, that it may be plausibly applied to different general computational models of cognition. The current version of the system, in fact, extends our previous work, in that Dual-PECCS is now integrated and tested in two cognitive architectures, ACT-R and CLARION, implementing different assumptions on the underlying invariant structures governing human cognition. Such integration allowed us to extend our previous evaluation.
Research on the capacity to understand others' minds has tended to focus on representations of beliefs, which are widely taken to be among the most central and basic theory of mind representations. Representations of knowledge, by contrast, have received comparatively little attention and have often been understood as depending on prior representations of belief. After all, how could one represent someone as knowing something if one does not even represent them as believing it? Drawing on a wide range of methods across cognitive science, we ask whether belief or knowledge is the more basic kind of representation. The evidence indicates that nonhuman primates attribute knowledge but not belief, that knowledge representations arise earlier in human development than belief representations, that the capacity to represent knowledge may remain intact in patient populations even when belief representation is disrupted, that knowledge (but not belief) attributions are likely automatic, and that explicit knowledge attributions are made more quickly than equivalent belief attributions. Critically, the theory of mind representations uncovered by these various methods exhibit a set of signature features clearly indicative of knowledge: they are not modality-specific, they are factive, they are not just true belief, and they allow for representations of egocentric ignorance. We argue that these signature features elucidate the primary function of knowledge representation: facilitating learning from others about the external world. This suggests a new way of understanding theory of mind, one that is focused on understanding others' minds in relation to the actual world, rather than independent from it.
Although other animals can make simple tools, the expanded and complex material culture of humans is unprecedented in the animal kingdom. Tool making is a slow and late-developing ability in humans, and preschool children find making tools to solve problems very challenging. This difficulty in tool making might be related to a lack of familiarity with the tools and may be overcome by children's long-term perceptual-motor knowledge. Thus, in this study, the effect of tool familiarity on tool making was investigated with a task in which 5-to-6-year-old children (n = 75) were asked to remove a small bucket from a vertical tube. The results show that children are better at tool making if the tool and its relation to the task are familiar to them (e.g., soda straw). Moreover, we also replicated the finding that hierarchical complexity and tool making were significantly related. Results are discussed in light of the ideomotor approach.
We describe and try to motivate our project to build systems using both a knowledge based and a neural network approach. These two approaches are used at different stages in the solution of a problem, instead of using knowledge bases exclusively on some problems, and neural nets exclusively on others. The knowledge base (KB) is defined first in a declarative, symbolic language that is easy to use. It is then compiled into an efficient neural network (NN) representation, run, and the results from run time and (eventually) from learning are decompiled to a symbolic description of the knowledge contained in the network. After inspecting this recovered knowledge, a designer would be able to modify the KB and go through the whole cycle of compiling, running, and decompiling again. The central question with which this project is concerned is, therefore: how do we go from a KB to an NN, and back again? We are investigating this question by building tools consisting of a repertoire of language/translation/network types, and trying them on problems in a variety of domains.
The standard representation theorem for expected utility theory tells us that if a subject's preferences conform to certain axioms, then she can be represented as maximising her expected utility given a particular set of credences and utilities—and, moreover, that having those credences and utilities is the only way that she could be maximising her expected utility. However, the kinds of agents these theorems seem apt to tell us anything about are highly idealised, being always probabilistically coherent with infinitely precise degrees of belief and full knowledge of all a priori truths. Ordinary subjects do not look very rational when compared to the kinds of agents usually talked about in decision theory. In this paper, I will develop an expected utility representation theorem aimed at the representation of those who are neither probabilistically coherent, logically omniscient, nor expected utility maximisers across the board—that is, agents who are frequently irrational. The agents in question may be deductively fallible, have incoherent credences, limited representational capacities, and fail to maximise expected utility for all but a limited class of gambles.
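The target notion here, representation as an expected-utility maximiser, can be illustrated with a toy computation (this is a textbook sketch, not the paper's theorem; the states, acts, and numbers are invented):

```python
# Toy illustration of expected-utility maximization: an agent is
# "represented" by credences over states and utilities over outcomes,
# and prefers the act with the highest expected utility.
# States, acts, and values below are invented for illustration.

def expected_utility(credences, utilities, act):
    """EU(act) = sum over states of credence(state) * utility(act, state)."""
    return sum(p * utilities[act][state] for state, p in credences.items())

credences = {"rain": 0.3, "dry": 0.7}
utilities = {
    "take_umbrella":  {"rain": 5,   "dry": 3},
    "leave_umbrella": {"rain": -10, "dry": 6},
}

for act in utilities:
    print(act, round(expected_utility(credences, utilities, act), 2))
# take_umbrella:  0.3*5  + 0.7*3 = 3.6
# leave_umbrella: 0.3*-10 + 0.7*6 = 1.2
```

The idealisation the paper pushes back on is visible even here: the sketch presupposes a single coherent probability function and a complete utility table, exactly what frequently irrational agents lack.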
We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target a model-agnostic distillation approach exemplified with these two frameworks, secondly, to study how these two frameworks interact on a theoretical level, and, thirdly, to investigate use-cases in ML and AI in a comparative manner. Specifically, we envision that user studies will help determine the human understandability of explanations generated using these two frameworks.
We present a computational analysis of de re, de dicto, and de se belief and knowledge reports. Our analysis solves a problem first observed by Hector-Neri Castañeda, namely, that the simple rule '(A knows that P) implies P' apparently does not hold if P contains a quasi-indexical. We present a single rule, in the context of a knowledge-representation and reasoning system, that holds for all P, including those containing quasi-indexicals. In so doing, we explore the difference between reasoning in a public communication language and in a knowledge-representation language, we demonstrate the importance of representing proper names explicitly, and we provide support for the necessity of considering sentences in the context of extended discourse (for example, written narrative) in order to fully capture certain features of their semantics.
Much recent cognitive neuroscientific work on body knowledge is representationalist: "body schemas" and "body images", for example, are cerebral representations of the body (de Vignemont 2009). A framework assumption is that representation of the body plays an important role in cognition. The question is whether this representationalist assumption is compatible with the variety of broadly situated or embodied approaches recently popular in the cognitive neurosciences: approaches in which cognition is taken to have a 'direct' relation to the body and to the environment. A "direct" relation is one where the boundaries between the body and the head, or between the environment and the animal, are not theoretically important in the understanding of cognition. These boundaries do not play a theoretically privileged role in cognitive explanations of behavior. But representationalism appears to put a representational veil between the locus of cognition and that which is represented, making cognitive relations to the body and to the environment indirect, with a high associated computational load. For this reason, direct approaches have tried to minimize the use of internal representations (Suchman 1987; Barwise 1987; Agre and Chapman 1987; Brooks 1992; Thelen and Smith 1994; van Gelder 1995; Port and van Gelder 1995; Clark 1997, 1999; Rupert 2009, p. 180). Does a cognitive neuroscience committed to direct relations rule out a representationalist approach to body knowledge? Or is direct representationalism possible?
In philosophy of music, formalists argue that pure instrumental music is unable to represent any content without the help of lyrics, titles, or dramatic context. In particular, they deny that music's use of convention counts as a genuine case of representation because only intrinsic means of representing counts and conventions are extrinsic to the sound structures making up music. In this paper, I argue that convention should count as a way for music to genuinely represent content for two reasons. First, the view that only intrinsic ways of representing count is too stringent. If use can ground meaning in language, then use might also ground meaning (and representation) in music, too. Second, even if we were to insist on intrinsic features, convention should count as a way for music to genuinely represent because convention is an intrinsic feature of music. Without knowledge of musical systems and encultured listening, music wouldn't even be recognized as music, let alone be seen as possessing the kinds of structural qualities that formalists care about. Convention is already baked into our listening practices.
Many have urged that the biggest obstacles to a physicalistic understanding of consciousness are the problems raised in connection with the subjectivity of consciousness. These problems are most acutely expressed in consideration of the knowledge argument against physicalism. I develop a novel account of the subjectivity of consciousness by explicating the ways in which mental representations may be perspectival. Crucial features of my account involve analogies between the representations involved in sensory experience and the ways in which pictorial representations exhibit perspectives or points of view. I argue that the resultant account of subjectivity provides a basis for the strongest response physicalists can give to the knowledge argument.
During the last decades, many cognitive architectures (CAs) have been realized adopting different assumptions about the organization and the representation of their knowledge level. Some of them (e.g. SOAR [35]) adopt a classical symbolic approach, some (e.g. LEABRA [48]) are based on a purely connectionist model, while others (e.g. CLARION [59]) adopt a hybrid approach combining connectionist and symbolic representational levels. Additionally, some attempts (e.g. biSOAR) trying to extend the representational capacities of CAs by integrating diagrammatical representations and reasoning are also available [34]. In this paper we propose a reflection on the role that Conceptual Spaces, a framework developed by Peter Gärdenfors [24] more than fifteen years ago, can play in the current development of the Knowledge Level in Cognitive Systems and Architectures. In particular, we claim that Conceptual Spaces offer a lingua franca that allows us to unify and generalize many aspects of the symbolic, sub-symbolic and diagrammatic approaches (by overcoming some of their typical problems) and to integrate them on a common ground. In doing so we extend and detail some of the arguments explored by Gärdenfors [23] in defending the need for a conceptual, intermediate representation level between the symbolic and the sub-symbolic one. In particular we focus on the advantages offered by Conceptual Spaces (w.r.t. symbolic and sub-symbolic approaches) in dealing with the problem of compositionality of representations based on typicality traits. Additionally, we argue that Conceptual Spaces could offer a unifying framework for interpreting many kinds of diagrammatic and analogical representations. As a consequence, their adoption could also favor the integration of diagrammatical representation and reasoning in CAs.
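The core geometric idea behind Conceptual Spaces can be sketched briefly: concepts correspond to regions around prototype points along quality dimensions, and an instance is categorized by its nearest prototype, which partitions the space into convex (Voronoi) regions. The dimensions and values below are invented for illustration, not taken from Gärdenfors' examples:

```python
# Minimal sketch of categorization in a conceptual space:
# concepts are identified with prototype points along quality
# dimensions, and an instance is assigned to the nearest prototype,
# yielding a Voronoi tessellation into convex concept regions.
# Dimensions (hue, size) and coordinates are invented for illustration.
import math

prototypes = {
    "cherry": (0.95, 0.1),
    "apple":  (0.80, 0.4),
    "melon":  (0.30, 0.9),
}

def categorize(point):
    """Return the concept whose prototype is closest to the point."""
    return min(prototypes, key=lambda c: math.dist(prototypes[c], point))

print(categorize((0.85, 0.35)))  # nearest prototype is "apple"
```

Distance to a prototype also gives a graded notion of typicality, which is one reason this intermediate level handles prototype effects more naturally than classical symbolic definitions.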
Formal ontologies are nowadays widely considered a standard tool for knowledge representation and reasoning in the Semantic Web. In this context, they are expected to play an important role in helping automated processes to access information. Namely: they are expected to provide a formal structure able to explicate the relationships between different concepts/terms, thus allowing intelligent agents to interpret, correctly, the semantics of the web resources, improving the performance of the search technologies. Here we take into account a problem regarding Knowledge Representation in general, and ontology-based representations in particular; namely: the fact that knowledge modeling seems to be constrained between conflicting requirements, such as compositionality, on the one hand, and the need to represent prototypical information on the other. In particular, most common-sense concepts seem not to be captured by the stringent semantics expressed by such formalisms as, for example, Description Logics (which are the formalisms on which the ontology languages have been built). The aim of this work is to analyse this problem, suggesting a possible solution suitable for formal ontologies and Semantic Web representations. The questions guiding this research, in fact, have been: is it possible to provide a formal representational framework which, for the same concept, combines both the classical modelling view (accounting for compositional information) and defeasible, prototypical knowledge? Is it possible to propose a modelling architecture able to provide different types of reasoning (e.g. classical deductive reasoning for the compositional component and non-monotonic reasoning for the prototypical one)? We suggest a possible answer to these questions, proposing a modelling framework able to represent, within the Semantic Web languages, a multilevel representation of conceptual information, integrating both classical and non-classical (typicality-based) information. Within this framework we hypothesise, at least in principle, the coexistence of multiple reasoning processes involving the different levels of representation.
Orthodox epistemological disjunctivism involves the idea that paradigm cases of visual perceptual knowledge are based on visual perceptual states which are propositional, and hence representational. Given this, the orthodox version of epistemological disjunctivism takes on controversial representational commitments in the philosophy of perception. Must epistemological disjunctivism involve these commitments? I don't think so. Here I argue that we can take epistemological disjunctivism in a new direction and develop a version of the view free of these representational commitments. The basic idea is that instead of conceiving of knowledge-grounding perceptions as states in which one sees that such-and-such is the case, we should instead conceive of them as states or episodes in which one sees a thing; we should conceive of them as thing-seeings. I'll suggest that we can cast such seeings in a knowledge-grounding role without conceiving of them as representational. But this is because we can put thing-seeings to epistemological work, in the framework of epistemological disjunctivism, whilst remaining neutral on whether or not they are propositional, or representational at all. The point, then, is not to replace epistemological disjunctivism's controversial representational commitments with controversial non-representational commitments. The point is, rather, that epistemological disjunctivism can be developed with fewer commitments in the philosophy of perception than is usually appreciated.
This essay presents a philosophical and computational theory of the representation of de re, de dicto, nested, and quasi-indexical belief reports expressed in natural language. The propositional Semantic Network Processing System (SNePS) is used for representing and reasoning about these reports. In particular, quasi-indicators (indexical expressions occurring in intentional contexts and representing uses of indicators by another speaker) pose problems for natural-language representation and reasoning systems, because, unlike pure indicators, they cannot be replaced by coreferential NPs without changing the meaning of the embedding sentence. Therefore, the referent of the quasi-indicator must be represented in such a way that no invalid coreferential claims are entailed. The importance of quasi-indicators is discussed, and it is shown that all four of the above categories of belief reports can be handled by a single representational technique using belief spaces containing intensional entities. Inference rules and belief-revision techniques for the system are also examined.
In this paper a possible general framework for the representation of concepts in cognitive artificial systems and cognitive architectures is proposed. The framework is inspired by the so-called proxytype theory of concepts and combines it with the heterogeneity approach to concept representations, according to which concepts do not constitute a unitary phenomenon. The contribution of the paper is twofold: on one hand, it aims at providing a novel theoretical hypothesis for the debate about concepts in cognitive sciences by providing unexplored connections between different theories; on the other hand, it is aimed at sketching a computational characterization of the problem of concept representation in cognitively inspired artificial systems and in cognitive architectures.
Approximation involves representing things in ways that might be close to the truth but are nevertheless false. Given the widespread reliance on approximations in science and everyday life, here we ask whether it is conceptually possible for false approximations to qualify as knowledge. According to the factivity account, it is impossible to know false approximations, because knowledge requires truth. According to the representational adequacy account, it is possible to know false approximations, if they are close enough to the truth for present purposes. In this paper, we adopt an experimental methodology to begin testing these two theories. When an agent provides a false and practically inadequate answer, both theories predict that people will deny knowledge. But the theories disagree about an agent who provides a false but practically adequate answer: the factivity hypothesis again predicts knowledge denial, whereas the representational adequacy hypothesis predicts knowledge attribution. Across two experiments, our principal finding was that people tended to attribute knowledge for false but practically adequate answers, which supports the representational adequacy account. We propose an interpretation of existing findings that preserves a conceptual link between knowledge and truth. According to this proposal, truth is not necessary for knowledge, but it is a feature of prototypical knowledge.
In this chapter, I first turn to Spinoza's obscure "ideas of ideas" doctrine and his claim that "as soon as one knows something, one knows that one knows it, and simultaneously knows that one knows that one knows, and so on, to infinity" (E2p21s). On my view, Spinoza, like Descartes, holds that a given idea can be conceived either in terms of what it represents or as an act of thinking: E2p7 (where Spinoza presents his doctrine of the "parallelism" of minds and bodies) primarily concerns the former way of conceiving of an idea while E2p21 primarily concerns the latter. I propose that in E2p21, Spinoza makes a few crucial points about an adequate idea conceived as the "idea of the idea," or as the activity of thinking: 1) when one has an adequate representation of p, one automatically knows that one is thinking an adequate representation of p, and 2) this reflective knowledge cannot be improved. I then turn to E2p43, "he who has a true idea at the same time knows that he has a true idea, and cannot doubt the truth of the thing." This is a denial of skepticism, but I think we need to be careful. E2p21 and E2p43 rule out the most hyperbolic doubts like those we see in Descartes's Third Meditation (AT VII 36), so it is the case that thinkers need no additional validation for the "adequate ideas of properties of things" and "common notions" employed in "cognition of the second kind," or reason. However, a reasoning thinker might nevertheless be troubled by doubts about the extramental world. As I have argued in other work, we can take Spinoza as following Descartes at least this far: once our reasoning thinker comes to adequate ideas of God and God's relation to things, their ideas cannot be rendered doubtful. Here I concede that because one can reason to these adequate ideas of God and God's relation to things, scientia intuitiva is not unique in removing doubt. However, scientia intuitiva may still be distinctive in the way it removes doubt.
It is widely accepted in epistemology that knowledge is factive, meaning that only truths can be known. We argue that this theory creates a skeptical challenge: because many of our beliefs are only approximately true, and therefore false, they do not count as knowledge. We consider several responses to this challenge and propose a new one. We propose easing the truth requirement on knowledge to allow approximately true, practically adequate representations to count as knowledge. In addition to addressing the skeptical challenge, this view also coheres with several previous theoretical proposals in epistemology.
The sensorimotor theory of perceptual experience claims that perception is constituted by bodily interaction with the environment, drawing on practical knowledge of the systematic ways that sensory inputs are disposed to change as a result of movement. Despite the theory’s associations with enactivism, it is sometimes claimed that the appeal to ‘knowledge’ means that the theory is committed to giving an essential theoretical role to internal representation, and therefore to a form of orthodox cognitive science. This paper defends the role ascribed to knowledge by the theory, but argues that this knowledge can and should be identified with bodily skill rather than representation. Making the further argument that the notion of ‘representation hunger’ can be replaced with ‘prima facie representation hunger’, it concludes that although the theory could optionally be developed scientifically in part by reference to internal representation, it makes a strong and natural fit with anti-representationalist embodied or enactive cognitive science.
A. Newell and H. A. Simon were two of the most influential scientists in the emerging field of artificial intelligence (AI) in the late 1950s through to the early 1990s. This paper reviews their crucial contribution to this field, namely to symbolic AI. This contribution was constituted mostly by their quest for the implementation of general intelligence and (commonsense) knowledge in artificial thinking or reasoning artifacts, a project they shared with many other scientists but that in their case was theoretically based on the idiosyncratic notions of symbol systems and the representational abilities they give rise to, in particular with respect to knowledge. While focusing on the period 1956-1982, this review cites both earlier and later literature and it attempts to make visible their potential relevance to today's greatest unifying AI challenge, to wit, the design of wholly autonomous artificial agents (a.k.a. robots) that are not only rational and ethical, but also self-conscious.
The paper introduces an extension of the proposal according to which conceptual representations in cognitive agents should be intended as heterogeneous proxytypes. The main contribution of this paper is that it details how to reconcile, under a heterogeneous representational perspective, different theories of typicality about conceptual representation and reasoning. In particular, it provides a novel theoretical hypothesis - as well as a novel categorization algorithm called DELTA - showing how to integrate the representational and reasoning assumptions of the theory-theory of concepts with those ascribed to the prototype and exemplar-based theories.
To move beyond vague platitudes about the importance of context in legal reasoning or natural language understanding, one must take account of ideas from artificial intelligence on how to represent context formally. Work on topics like prior probabilities, the theory-ladenness of observation, encyclopedic knowledge for disambiguation in language translation and pathology test diagnosis has produced a body of knowledge on how to represent context in artificial intelligence applications.
The degree to which information is encoded explicitly in a representation is related to the computational cost of recovering or using the information. Knowledge that is implicit in a system need not be represented at all, even implicitly, if the cost of recovering it is prohibitive.
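As a toy illustration of the trade-off this abstract describes (the example is ours, not the text's), compare recovering a quantity left implicit in a data structure with reading it off an explicit encoding:

```python
# Toy sketch: information left implicit must be recomputed each time it
# is needed, while explicitly encoded information is a direct lookup.
# The data and the "total" quantity are invented for illustration.

data = list(range(1_000))

def total_implicit(xs):
    """Implicit: the total is recoverable only by a full pass over the
    representation, at linear cost each time."""
    return sum(xs)

# Explicit: the same information is stored alongside the data,
# so recovery is a constant-time lookup.
record = {"values": data, "total": sum(data)}

def total_explicit(rec):
    return rec["total"]
```

The explicit encoding trades storage (and the cost of keeping the cached value consistent) for cheap recovery; when recomputation is prohibitive, leaving the information implicit amounts to not having it available at all.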
Many have found it plausible that knowledge is a constitutively normative state, i.e. a state that is grounded in the possession of reasons. Many have also found it plausible that certain cases of proprioceptive knowledge, memorial knowledge, and self-evident knowledge are cases of knowledge that are not grounded in the possession of reasons. I refer to these as cases of basic knowledge. The existence of basic knowledge forms a primary objection to the idea that knowledge is a constitutively normative state. In what follows I offer a way through the apparent dilemma of having to choose between either basic knowledge or the normativity of knowledge. The solution involves homing in on a state of awareness (≈non-accidental true representation) that is distinct from knowledge and which in turn grounds the normativity of knowledge in a way that is fully consistent with the existence of basic knowledge. An upshot of this is that externalist theories of knowledge turn out to be fully compatible with the thesis that knowledgeable beliefs are always beliefs that are justified by the reasons one possesses.
This article examines the effect of material evidence upon historiographic hypotheses. Through a series of successive Bayesian conditionalizations, I analyze the extended competition among several hypotheses that offered different accounts of the transition between the Bronze Age and the Iron Age in Palestine, and in particular of the “emergence of Israel”. The model reconstructs, with low sensitivity to initial assumptions, the actual outcomes, including a complete alteration of the scientific consensus. Several known issues of Bayesian confirmation, including the problem of old evidence, the introduction and confirmation of novel theories, and the sensitivity of convergence to uncertain and disputed evidence, are discussed in relation to the model’s results and the actual historical process. The most important result is that convergence of probabilities and of scientific opinion is indeed possible when advocates of rival hypotheses hold similar judgment about the factual content of evidence, even if they differ sharply in their historiographic interpretation. This speaks against the contention that understanding of present remains is so irrevocably biased by theoretical and cultural presumptions as to make an objective assessment impossible.
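The successive conditionalizations this abstract describes can be sketched in a few lines; the hypotheses, priors, and likelihoods below are invented for illustration and are not the paper's actual figures.

```python
# Minimal sketch of sequential Bayesian conditionalization over rival
# hypotheses. All numbers here are hypothetical placeholders, not the
# paper's data: they only show how repeated updating can shift the
# probability mass (and hence the consensus) toward one hypothesis.

def conditionalize(priors, likelihoods):
    """Return the posterior P(H|E) for each hypothesis H, given the
    likelihood P(E|H) of a new piece of evidence E under each H."""
    total = sum(p * likelihoods[h] for h, p in priors.items())
    return {h: p * likelihoods[h] / total for h, p in priors.items()}

# Two rival hypotheses, initially held with equal credence.
beliefs = {"H1": 0.5, "H2": 0.5}

# A stream of evidence items, each more probable under H1 than H2.
evidence_stream = [
    {"H1": 0.8, "H2": 0.3},
    {"H1": 0.7, "H2": 0.4},
    {"H1": 0.9, "H2": 0.2},
]

for likelihoods in evidence_stream:
    beliefs = conditionalize(beliefs, likelihoods)

# After the updates, most of the probability mass sits on H1.
```

The sketch assumes the point the abstract stresses: rival advocates agree on the likelihoods (the factual content of evidence), which is what makes the posteriors, and the opinions they model, converge.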
The aim of this paper is to show that a comprehensive account of the role of representations in science should reconsider some neglected theses of the classical philosophy of science proposed in the first decades of the 20th century. More precisely, it is argued that the accounts of Helmholtz and Hertz may be taken as prototypes of representational accounts in which structure preservation plays an essential role. Following Reichenbach, structure-preserving representations provide a useful device for formulating an up-to-date version of a (relativized) Kantian a priori. An essential feature of modern scientific representations is their mathematical character. That is, representations can be conceived as (partially) structure-preserving maps or functions. This observation suggests an interesting but neglected perspective on the history and philosophy of this concept, namely, that structure-preserving representations are closely related to a priori elements of scientific knowledge. Reichenbach’s early theory of a relativized constitutive but non-apodictic a priori component of scientific knowledge provides a further elaboration of Kantian aspects of scientific representation. To cope with the dynamic aspects of the evolution of scientific knowledge, Cassirer proposed a re-interpretation of the concept of representation that conceived of a particular representation as only one phase in a continuous process determined by pragmatic considerations. Pragmatic aspects of representations are further elaborated in the classical account of C.I. Lewis and the more modern one of Hasok Chang.
Representations are not only used in our folk-psychological explanations of behaviour, but are also fruitfully postulated, for example, in cognitive science. The mainstream view in cognitive science maintains that our mind is a representational system. This popular view requires an understanding of the nature of the entities it postulates. Teleosemantic theories face this challenge, unpacking the normativity in the relation of representation by appealing to the teleological function of the representing state. It has been argued that, if intentionality is to be explained in teleological terms, then the function of a state cannot depend on its phylogenetic history, given the metaphysical possibility of a duplicate of an intentional being that lacks an evolutionary history. In this paper, I present a method to produce, according to our current knowledge in genetic engineering, human-like individuals who are not the product of natural selection in the required sense. This variation will be used to shed light on the main replies that have been offered in the literature to the Swampman thought experiment. I argue that these replies are not satisfactory: representations had better not depend on natural selection. I conclude that a non-etiological notion of function is to be preferred for characterizing the relation of representation.
Amodal completion is the representation of occluded parts of perceived objects. We argue for the following three claims: First, at least some amodal completion-involved experiences can ground knowledge about the occluded portions of perceived objects. Second, at least some instances of amodal completion-grounded knowledge are not sensitive, that is, it is not the case that in the nearest worlds in which the relevant claim is false, that claim is not believed true. Third, at least some instances of amodal completion-grounded knowledge are not safe, that is, it is not the case that in all or nearly all near worlds where the relevant claim is believed true, that claim is in fact true. Thus, certain instances of amodal completion-grounded knowledge refute both the view that knowledge is necessarily sensitive and the view that knowledge is necessarily safe.
A 3rd-person Knowledge Level analysis of cognitive architectures. Abstract: I provide a knowledge-level analysis of the main representational and reasoning problems affecting cognitive architectures. In providing this analysis I show, by considering some of the main cognitive architectures currently available (e.g. SOAR, ACT-R, CLARION), that one of the main problems of such architectures is that their knowledge representation and processing mechanisms are not sufficiently constrained by “structural insights” (Lieto 2021) coming from cognitive science for dealing with commonsense knowledge and reasoning (Lebiere, Oltramari, 2018). As a possible way out of such knowledge-processing issues, I present the main assumptions that led to the development of the Dual PECCS categorization system (Lieto, Radicioni, Rho 2017) and discuss some of the lessons learned and their possible implications for the design of the knowledge modules and knowledge-processing mechanisms of integrated cognitive architectures.
In this essay I take up Plato’s critique of poetry, which has little to do with epistemology and representational imitation, but rather the powerful effects that poetic performances can have on audiences, enthralling them with vivid image-worlds and blocking the powers of critical reflection. By focusing on the perceived psychological dangers of poetry in performance and reception, I want to suggest that Plato’s critique was caught up in the larger story of momentous shifts in the Greek world, turning on the rise of literacy and its far-reaching effects in modifying the original and persisting oral character of Greek culture. The story of Plato’s Republic in certain ways suggests something essential for comprehending the development of philosophy in Greece: that philosophy, as we understand it, would not have been possible apart from the skills and mental transformations stemming from education in reading and writing; and that primary features of oral language and practice were a significant barrier to the development of philosophical rationality. Accordingly, I go on to argue that the critique of writing in the Phaedrus is neither a defense of orality per se, nor a dismissal of writing, but rather a defense of a literate soul over against orality and the indiscriminate exposure of written texts to unworthy readers.
The engineering knowledge research program is part of the larger effort to articulate a philosophy of engineering and an engineering worldview. Engineering knowledge requires a more comprehensive conceptual framework than scientific knowledge. Engineering is not ‘merely’ applied science. Kuhn and Popper established the limits of scientific knowledge. In parallel, the embrace of complementarity and uncertainty in the new physics undermined the scientific concept of observer-independent knowledge. The paradigm shift from the scientific framework to the broader participant engineering framework entails a problem shift. The detached scientific spectator seeks the ‘facts’ of ‘objective’ reality – out there. The participant, embodied in reality, seeks ‘methods’, about how to work in the world. The engineering knowledge research program is recursively enabling. Advances in engineering knowledge are involved in the unfolding of the nature of reality. Newly understood, quantum uncertainty entails that the participant is a natural inquirer. ‘Practical reason’ is concerned with ‘how we should live’ – the defining question of morality. The engineering knowledge research program is selective, seeking ‘important truths’, ‘important knowledge’, and ‘important methods’ that manifest value, and serve the engineering agenda of ‘the construction of the good.’ The importance of the engineering knowledge research program is clear in the new STEM curriculum, where educators have been challenged to rethink the relation between science and engineering. A 2015 higher education initiative to integrate engineering colleges into liberal arts and sciences colleges has stalled due to the confusion and conflict between the engineering and scientific representations of knowledge.
In his latest book, Michael Devitt rejects Chomsky’s mentalist conception of linguistics. The case against Chomsky is based on two principal claims. First, that we can separate the study of linguistic competence from the study of its outputs: only the latter belongs to linguistic inquiry. Second, Chomsky’s account of a speaker’s competence as consisting in the mental representation of rules of a grammar for his language is mistaken. I shall argue, first, that Devitt fails to make a case for separating the study of outputs from the study of competence, and second, that Devitt mis-characterises Chomsky’s account of competence, and so his objections miss their target. Chomsky’s own views come close to a denial that speakers have knowledge of their language. But a satisfactory account of what speakers are able to do will need to ascribe them linguistic knowledge that they use to speak and understand. I shall explore a conception of speakers’ knowledge of language that confirms Chomsky’s mentalist view of linguistics but which is immune to Devitt’s criticism.
This chapter focuses on the relationship between consciousness and knowledge, and in particular on the role perceptual consciousness might play in justifying beliefs about the external world. We outline a version of phenomenal dogmatism according to which perceptual experiences immediately, prima facie justify certain select parts of their content, and do so in virtue of their having a distinctive phenomenology with respect to those contents. Along the way we take up various issues in connection with this core theme, including the possibility of immediate justification, the dispute between representational and relational views of perception, the epistemic significance of cognitive penetration, the question of whether perceptual experiences are composed of more basic sensations and seemings, and questions about the existence and epistemic significance of high-level content. In a concluding section we briefly consider how some of the topics pursued here might generalize beyond perception.
The new Chomskian orthodoxy denies that our linguistic competence gives us knowledge *of* a language, and that the representations in the language faculty are representations *of* anything. In reply, I have argued that through their intuitions speaker/hearers (but not their language faculties) have knowledge of language, though not of any externally existing language. In order to count as knowledge, these intuitions must track linguistic facts represented in the language faculty. I defend this idea against the objections Collins has raised to such an account.
Memory is not a unitary phenomenon. Even among the group of long-term individual memory representations (known in the literature as declarative memory) there seems to be a distinction between two kinds of memory: memory of personally experienced events (episodic memory) and memory of facts or knowledge about the world (semantic memory). Although this distinction seems very intuitive, it is not so clear in which characteristic or set of interrelated characteristics lies the difference. In this article, I present the different criteria proposed in the philosophical and scientific literature in order to account for this distinction: (1) the vehicle of representation; (2) the grammar of the verb “to remember”; (3) the cause of the memory; (4) the memory content; and (5) the phenomenology of memory representations. Whereas some criteria seem more plausible than others, I show that all of them are problematic and none of them really fulfill their aim. I then briefly outline a different criterion, the affective criterion, which seems a promising line of research to try to understand the grounds of this distinction.
Epistemologists have long believed that epistemic luck undermines propositional knowledge. Action theorists have long believed that agentive luck undermines intentional action. But is there a relationship between agentive luck and epistemic luck? While agentive luck and epistemic luck have been widely thought to be independent phenomena, we argue that agentive luck has an epistemic dimension. We present several thought experiments where epistemic luck seems to undermine both knowledge-how and intentional action and we report experimental results that corroborate these judgments. We argue that these findings have implications for the role of knowledge in a theory of intentional action and for debates about the nature of knowledge-how and the significance of knowledge representation in folk psychology.
In Geneva, since secondary teacher training moved to the university, two types of students in initial teacher education have coexisted: some hold part-time classroom teaching responsibilities, while others have no contact with the field. In an introductory unit on the teaching profession, students from different disciplines and of both statuses follow the same training programme on the sources of professional knowledge constitutive of the profession. By analyzing the students' representations of the teaching profession, we find that the construction of their professional identity evolves differently according to whether or not they have a student teaching placement.