Jennifer Lackey ('Testimonial Knowledge and Transmission', The Philosophical Quarterly 1999) and Peter Graham ('Conveying Information', Synthese 2000; 'Transferring Knowledge', Nous 2000) offered counterexamples to show that a hearer can acquire knowledge that P from a speaker who asserts that P but does not know that P. These examples suggest testimony can generate knowledge. The showpiece of Lackey's examples is the Schoolteacher case. This paper shows that Lackey's case does not undermine the orthodox view that testimony cannot generate knowledge, and explains why Lackey's arguments to the contrary are ineffective: they misunderstand the intuitive rationale for the view that testimony cannot generate knowledge. The paper then elaborates on a version of a case from Graham's 'Conveying Information' (the Fossil case) that effectively shows that testimony can generate knowledge, and finally provides a deeper, informative explanation of how testimony transfers knowledge and of why there should be cases where testimony generates knowledge.
In this paper we propose a computational framework aimed at extending the problem-solving capabilities of cognitive artificial agents through the introduction of a novel, goal-directed, dynamic knowledge-generation mechanism obtained via a non-monotonic reasoning procedure. In particular, the proposed framework relies on the assumption that certain classes of problems cannot be solved by simply learning or injecting new external knowledge into the declarative memory of a cognitive artificial agent but, rather, require a mechanism for the automatic and creative re-framing, or re-formulation, of the available knowledge. We show how such a mechanism can be obtained through a framework of dynamic knowledge generation that is able to tackle the problem of commonsense concept combination. In addition, we show how such a framework can be employed in the field of cognitive architectures in order to overcome situations such as the impasse in SOAR, by extending the possible options of its subgoaling procedures.
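As a rough illustration of the control flow just described, the following minimal Python sketch shows a solver that, on reaching an impasse, re-frames its existing knowledge and retries instead of importing new external knowledge. All names, the property-set goal representation, and the pairwise-merge re-framing step are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: impasse-driven re-framing of existing knowledge.
def solve(goal, knowledge, attempt, reframe, max_reframings=3):
    """attempt(goal, kb) -> solution or None; reframe(goal, kb) -> enlarged kb."""
    for _ in range(max_reframings + 1):
        solution = attempt(goal, knowledge)   # ordinary problem solving
        if solution is not None:
            return solution
        # Impasse (cf. SOAR subgoaling): re-frame what is already known.
        knowledge = reframe(goal, knowledge)
    return None

# Toy usage: goals are property sets; knowledge maps concepts to properties;
# re-framing naively merges pairs of known concepts into new candidate concepts.
attempt = lambda g, kb: next((c for c, p in kb.items() if g <= p), None)
reframe = lambda g, kb: {**kb, **{f"{a}+{b}": kb[a] | kb[b]
                                  for a in kb for b in kb if a < b}}
print(solve({"domestic", "aquatic"}, {"pet": {"domestic"}, "fish": {"aquatic"}},
            attempt, reframe))  # -> "fish+pet"
```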
For this study, we have considered Facebook Marketplace (FBM) to understand how knowledge from the social networking world affects consumer choice and behavior, i.e., users' economic decisions. [...] The FBM could be considered a digital socioeconomic system in which the availability of digital trace data from user interactions would enable studies of population-level human interactions (Sundararajan et al., 2013).
The study aimed to identify knowledge management processes and their role in achieving competitive advantage at Al-Quds Open University. The study was based on the descriptive analytical method, and the study population consisted of the academic and administrative staff in the university's branches in Tulkarm, Nablus, and Jenin. The researchers selected a sample of 70 employees from the study population by the intentional non-probability method. A questionnaire was prepared and supervised by a number of specialists in order to obtain the results of the study. The study concluded that there is a positive direct relationship: the higher the degree of application of knowledge management processes, the greater the degree of competitive advantage. Knowledge technology came first with a score of 80.02% across all items. Competitive advantage came second with 81.74%. Knowledge generation came third, with a total score of 78.24% across all items in this area, followed by knowledge transfer (77.21%), developing and storing knowledge (77.13%), and acquisition of knowledge (76.45%), with knowledge organization ranked seventh (74.26%). The study recommended that the university enable employees to benefit from the available experience and expertise in order to help generate knowledge; encourage the creation of knowledge through a system of incentives, opening the way for creators to apply their creations and to spread and invest in excellence and creativity; and design work performance levels based on the integration of knowledge, organized according to policies that support freedom of research. It also recommended that Palestinian universities adopt a knowledge management approach and a system of incentives that rewards cognitive effort, giving workers enough freedom to enable them to apply their knowledge.
This report describes a questionnaire-based study of 309 adult patients attending the Dental Outpatients Department of Bangladesh Medical College and Hospital, Dhaka, from December 2000 to March 2001. The aim of the study was to determine the oral health knowledge of the patients in relation to their age, gender, and economic and educational status. Almost two-thirds (63.1%) of the subjects correctly said that pan chewing was bad for teeth, and just over three-fourths (78.3%) gave a correct answer to the question of how to prevent tooth decay. When asked about the cause of tooth decay, bleeding gums, and the action of fluoride on teeth, however, only 38.2%, 41.4%, and 32.3% respectively could give a correct answer. With a few exceptions, knowledge of oral health was comparatively poor among the older generation, females, and the less educated and less privileged groups.
In this paper, I argue that the method of transparency -- determining whether I believe that p by considering whether p -- does not explain our privileged access to our own beliefs. Looking outward to determine whether one believes that p leads to the formation of a judgment about whether p, which one can then self-attribute. But use of this process does not constitute genuine privileged access to whether one judges that p. And looking outward will not provide for access to dispositional beliefs, which are arguably more central examples of belief than occurrent judgments. First, one's dispositional beliefs as to whether p may diverge from the occurrent judgments generated by the method of transparency. Second, even in cases where these are reliably linked — e.g., in which one's judgment that p derives from one's dispositional belief that p — using the judgment to self-attribute the dispositional belief requires an 'inward' gaze.
It is important for the theory of knowledge to understand the factors involved in the generation of the capacities of knowledge. In the history of modern philosophy, knowledge is generally held to originate in one or two sources, and the debates between philosophers have concerned the existence or legitimacy of these sources. Furthermore, some philosophers have advocated scepticism about the human capacity to understand the origins of knowledge altogether. However, the developmental aspects of knowledge have received relatively little attention, both from past philosophers and in current philosophical discussions. This dissertation provides a historical approach to this developmental problem of knowledge by interpreting the developmental theories of knowledge of Maine de Biran (1766–1824) and Henri Bergson (1859–1941) from the perspective of a theory of the 'generative factors of knowledge.' It first studies the philosophies of Maine de Biran and Bergson separately and then brings together and compares the metaphilosophical aims drawn from these philosophers. The dissertation's novel analysis, provided by its theory and structure, has far-reaching consequences. From a wide point of view, it fills considerable scholarly gaps and provides great opportunities for future research in the study of the history of philosophy. From more specific points of view, it provides its most decisive contributions on such metaphysical and epistemological topics as the nature of causality, self-generated activity, the role of effort in knowing and learning, the complementary relationship between philosophy and science, and the non-conceptual basis of knowledge.
The debate about the nature of knowledge-how is standardly thought to be divided between intellectualist views, which take knowledge-how to be a kind of propositional knowledge, and anti-intellectualist views, which take knowledge-how to be a kind of ability. In this paper, I explore a compromise position—the interrogative capacity view—which claims that knowing how to do something is a certain kind of ability to generate answers to the question of how to do it. This view combines the intellectualist thesis that knowledge-how is a relation to a set of propositions with the anti-intellectualist thesis that knowledge-how is a kind of ability. I argue that this view combines the positive features of both intellectualism and anti-intellectualism.
Here I advance a unified account of the structure of the epistemic normativity of assertion, action, and belief. According to my Teleological Account, all of these are epistemically successful just in case they fulfill the primary aim of knowledgeability, an aim which in turn generates a host of secondary epistemic norms. The central features of the Teleological Account are these: it is compact in its reliance on a single central explanatory posit, knowledge-centered in its insistence that knowledge sets the fundamental epistemic norm, and yet fiercely pluralistic in its acknowledgment of the legitimacy and value of a rich range of epistemic norms distinct from knowledge. Largely in virtue of this pluralist character, I argue, the Teleological Account is far superior to extant knowledge-centered accounts.
It seems like experience plays a positive—even essential—role in generating some knowledge. The problem is, it's not clear what that role is. To see this, suppose that when your visual system takes in information about the world around you it skips the experience step and just automatically and immediately generates beliefs in you about your surroundings. A lot of philosophers think that, in such a case, you would (or at least could) still know, via perception, about the world around you. But then that raises the question: What epistemic role was the experience playing? How did it contribute to your knowledge of your surroundings? Philosophers have given many different answers to these questions. But, for various reasons, none of them has really stuck. In this paper I offer and defend a different answer to these questions—a solution to the problem—which avoids the pitfalls of other answers. I argue that experience is, all by itself, a kind of knowledge—it's what Bertrand Russell (1912) calls "knowledge of things". So I argue that experience helps generate knowledge simply by being knowledge.
This is a review essay of Quassim Cassam, Self-Knowledge for Humans (Oxford, 2014) and John Doris, Talking to Our Selves (Oxford, 2015). In it I question whether Cassam succeeds in his challenge to Richard Moran's account of first-personal authority, and whether Doris is right that experimental evidence for unconscious influences on behavior generates skeptical worries on accounts that regard accurate self-knowledge as a precondition of agency.
The internet has considerably changed epistemic practices in science as well as in everyday life. Apparently, this technology allows more and more people to get access to a huge amount of information. Some people even claim that the internet leads to a democratization of knowledge. In the following text, we will analyze this statement. In particular, we will focus on a potential change in epistemic structure. Does the internet change our common epistemic practice to rely on expert opinions? Does it alter or even undermine the division of epistemic labor? The epistemological framework of our investigation is a naturalist-pragmatist approach to knowledge. We take it that the internet generates a new environment to which people seeking information must adapt. How can they, and how should they, expand their repertory of social markers to continue the venture of filtering, and so make use of the possibilities the internet apparently provides? To find answers to these questions we will take a closer look at two case studies. The first example is about the internet platform WikiLeaks that allows so-called whistle-blowers to anonymously distribute their information. The second case study is about the search engine Google and the problem of personalized searches. Both instances confront a knowledge-seeking individual with particular difficulties which are based on the apparent anonymity of the information distributor. Are there ways for the individual to cope with this problem and to make use of her social markers in this context nonetheless?
In this paper, a Knowledge-Based System (KBS) for determining the appropriate student's major according to his/her preferences, for sophomore students enrolled in the Faculty of Engineering and Information Technology at Al-Azhar University of Gaza, was developed and tested. A set of predefined criteria that are taken into consideration before a sophomore student can select a major is outlined. Criteria such as high school score, scores in subjects such as Math I, Math II, Electrical Circuit I, and Electronics I taken during the student's freshman year, number of credits passed, and cumulative grade point average for the freshman year, among others, were then used as input data to the KBS. The KBS was designed and developed using the Simpler Level Five (SL5) Object expert system language, and was tested on three generations of sophomore students from the Faculty of Engineering and Information Technology of Al-Azhar University, Gaza. The results of the evaluation show that the KBS is able to correctly determine the appropriate student's major without errors.
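To make the kind of input-to-recommendation mapping concrete, here is a minimal rule-based sketch in Python. The thresholds, rule order, and major names are hypothetical placeholders, not the rules encoded in the SL5 Object knowledge base described in the paper.

```python
# Hypothetical rule-based sketch of a major-recommendation KBS.
# Inputs mirror the criteria listed in the abstract; all thresholds and
# major names are illustrative assumptions, not the paper's actual rules.
def recommend_major(high_school, math1, math2, circuits1, electronics1,
                    credits_passed, gpa):
    if credits_passed < 24 or gpa < 2.0:
        return "Not yet eligible to select a major"
    if circuits1 >= 80 and electronics1 >= 80:
        return "Electrical Engineering"
    if math1 >= 85 and math2 >= 85 and high_school >= 90:
        return "Computer Engineering"
    if math1 >= 70 and math2 >= 70:
        return "Information Technology"
    return "Advising session required"

print(recommend_major(high_school=92, math1=88, math2=86,
                      circuits1=75, electronics1=70,
                      credits_passed=30, gpa=3.1))
# -> "Computer Engineering" (under these made-up thresholds)
```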
The sciences occasionally generate discoveries that undermine their own assumptions. Two such discoveries are characterized here: the discovery of apophenia by cognitive psychology and the discovery that physical systems cannot be locally bounded within quantum theory. It is shown that such discoveries have a common structure and that this common structure is an instance of Priest's well-known Inclosure Schema. This demonstrates that science itself is dialetheic: it generates limit paradoxes. How science proceeds despite this fact is briefly discussed, as is the connection between our results and the realism-antirealism debate. We conclude by suggesting a position of epistemic modesty.
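For reference, Priest's Inclosure Schema is standardly presented along the following lines (a common textbook rendering; the paper's own notation may differ):

```latex
% Priest's Inclosure Schema (standard rendering; notation varies by source)
\begin{align*}
&(1)\quad \Omega = \{\, y : \varphi(y) \,\} \text{ exists and } \psi(\Omega);\\
&(2)\quad \text{for every } x \subseteq \Omega \text{ with } \psi(x):\\
&\qquad (a)\ \delta(x) \notin x \quad \text{(Transcendence)}\\
&\qquad (b)\ \delta(x) \in \Omega \quad \text{(Closure)}.\\
&\text{Taking } x = \Omega \text{ yields the limit contradiction: }
  \delta(\Omega) \notin \Omega \text{ and } \delta(\Omega) \in \Omega.
\end{align*}
```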
Internalists have criticised reliabilism for overlooking the importance of the subject's point of view in the generation of knowledge. This paper argues that there is a troubling ambiguity in the intuitive examples that internalists have used to make their case, and on either way of resolving this ambiguity, reliabilism is untouched. However, the argument used to defend reliabilism against the internalist cases could also be used to defend a more radical form of externalism in epistemology.
Imagination seems to play an epistemic role in philosophical and scientific thought experiments, mindreading, and ordinary practical deliberations insofar as it generates new knowledge of contingent facts about the world. However, it also seems that imagination is limited to creative generation of ideas. Sometimes we imagine fanciful ideas that depart freely from reality. The conjunction of these claims is what I call the puzzle of knowledge through imagination. This chapter aims to resolve this puzzle. I argue that imagination has an epistemic role to play, but it is limited to the context of discovery. Imagination generates ideas, but other cognitive capacities must be employed to evaluate these ideas in order for them to count as knowledge. Consideration of the Simulation Theory's so-called "threat of collapse" provides further evidence that imagination does not, on its own, yield new knowledge of contingent facts, and it suggests a way to supplement imagination in order to get such knowledge.
We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target a model-agnostic distillation approach exemplified with these two frameworks, secondly, to study how these two frameworks interact on a theoretical level, and, thirdly, to investigate use-cases in ML and AI in a comparative manner. Specifically, we envision that user-studies will help determine human understandability of explanations generated using these two frameworks.
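As a rough illustration of what a perceptron (threshold) connective adds to a Description Logic, one common way such an operator is interpreted is as a weighted linear threshold over concept memberships; this is an illustrative formulation, not necessarily the exact syntax used in the paper:

```latex
% Illustrative semantics of a weighted threshold (perceptron) concept constructor
\[
  \big(\nabla^{t}(C_1\!:\!w_1,\ \dots,\ C_n\!:\!w_n)\big)^{\mathcal{I}}
  \;=\;
  \Big\{\, d \in \Delta^{\mathcal{I}} \ \Big|\
      \sum_{i \,:\, d \,\in\, C_i^{\mathcal{I}}} w_i \;\geq\; t \,\Big\}
\]
```

An individual falls under the complex concept exactly when the weighted sum of the listed concepts it satisfies reaches the threshold t, which is what makes the operator behave like a fixed-weight perceptron and hence a bridge between statistical and logical representations.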
There are two different kinds of enkratic principles for belief: evidential enkratic principles and normative enkratic principles. It's frequently taken for granted that there's not an important difference between them. But evidential enkratic principles are undermined by considerations that gain no traction at all against their normative counterparts. The idea that such an asymmetry exists between evidential and normative enkratic principles is surprising all on its own. It is also something that calls out for explanation. Similarly, the considerations that undermine evidential enkratic principles also undermine certain narrow-scope evidential principles. This too generates explanatory questions. I show how a knowledge-first view of rationality can easily address these explanatory questions. Thus we have one more reason to put knowledge first in epistemology.
The Knowledge Norm of Assertion claims that it is proper to assert that p only if one knows that p. Though supported by a wide range of evidence, it appears to generate incorrect verdicts when applied to utterances of "I don't know." Instead of being an objection to KNA, I argue that this linguistic data shows that "I don't know" does not standardly function as a literal assertion about one's epistemic status; rather, it is an indirect speech act that has the primary illocutionary force of opting out of the speaker's conversational responsibilities. This explanation both reveals that the opt-out is an under-appreciated type of illocutionary act with a wide range of applications, and shows that the initial data in fact supports KNA over its rivals.
I develop an epistemic focal bias account of certain patterns of judgments about knowledge ascriptions by integrating it with a general dual process framework of human cognition. According to the focal bias account, judgments about knowledge ascriptions are generally reliable but systematically fallible because the cognitive processes that generate them are affected by what is in focus. I begin by considering some puzzling patterns of judgments about knowledge ascriptions and sketch how a basic focal bias account seeks to account for them. In doing so, I argue that the basic focal bias account should be integrated in a more general framework of human cognition. Consequently, I present some central aspects of a prominent general dual process theory of human cognition and discuss how focal bias may figure at various levels of processing. On the basis of this discussion, I attempt to categorize the relevant judgments about knowledge ascriptions. Given this categorization, I argue that the basic epistemic focal bias account of certain contrast effects and salient alternatives effects can be plausibly integrated with the dual process framework. Likewise, I try to explain the absence of strong intuitions in cases of far-fetched salient alternatives. As a manner of conclusion, I consider some methodological issues concerning the relationship between cognitive psychology, experimental data and epistemological theorizing.
Philosophical investigation in synthetic biology has focused on the knowledge-seeking questions pursued, the kind of engineering techniques used, and the ethical impact of the products produced. However, little work has been done to investigate the processes by which these epistemological, metaphysical, and ethical forms of inquiry arise in the course of synthetic biology research. An attempt at this work, relying on a particular area of synthetic biology, is the aim of this chapter. I focus on the reengineering of metabolic pathways through the manipulation and construction of small DNA-based devices and systems in synthetic biology. Rather than focusing on the engineered products or ethical principles that result, I investigate the processes by which these arise. As such, attention is directed to the activities of practitioners, their manipulation of tools, and the use they make of techniques to construct new metabolic devices. Using a science-in-practice approach, I investigate problems at the intersection of science, philosophy of science, and sociology of science. I consider how practitioners within this area of synthetic biology reconfigure biological understanding and ethical categories through active modelling and manipulation of known functional parts and biological pathways for use in the design of microbial machines to solve problems in medicine, technology, and the environment. We might describe this kind of problem-solving as relying on what Helen Longino referred to as "social cognition" or the type of scientific work done within what Hasok Chang calls "systems of practice". My aim in this chapter is to investigate the relationship that holds between systems of practice within metabolic engineering research and social cognition. I attempt to show how knowledge and normative valuation are generated from this particular network of practitioners. In doing so, I suggest that the social nature of scientific inquiry is ineliminable from both knowledge acquisition and ethical evaluation.
Dynamic conceptual reframing represents a crucial mechanism employed by humans, and partially by other animal species, to generate novel knowledge used to solve complex goals. In this talk, I will present a reasoning framework for knowledge invention and creative problem solving exploiting TCL: a non-monotonic extension of a Description Logic (DL) of typicality able to combine prototypical (commonsense) descriptions of concepts in a human-like fashion [1]. The proposed approach has been tested in the task of goal-driven concept invention [2,3] and has additionally been applied within the context of serendipity-based recommendation systems [4]. I will present the obtained results, the lessons learned, and the road ahead of this research path.
This paper is an attempt to understand how social knowledge affects human economic decision making. The paper discusses the nature of social knowledge in today's context, with special reference to how social knowledge influences consumers' sentiments and their economic decisions. Social networks are being continuously flooded with various kinds of information and disinformation. Some of this information becomes knowledge for social network users who browse various kinds of content that are either entertaining or related to products and marketing. Although reliability remains a critical issue regarding online information from social networks, it nevertheless provides some knowledge about the diverse things, goods, and products in use in different societies around the world. Some products advertised on social networks like Facebook may affect consumer decisions or provide information about new products. This paper presents a formal discussion of what social knowledge is, how it originates, and what use it might have for such consumers. The paper develops a simple mathematical model for a theoretical understanding of this assumption, which has practical implications regarding its utility in society. It is found that social networks generate enough social information to influence user choice and preferences. The study has implications for both the users and the developers of social networking sites.
Helmholtz's public reflection about the nature of the experiment and its role in the sciences is a historically important description, which also helps to analyze his own works. It is a part of his conception of science and nature, which can be seen as an ideal type of science and its goals. But its historical reach seems to be limited in an important respect. Helmholtz's understanding of experiments is based on the idea that their planning, realization and evaluation lie in the hands of a person or group acting according to decisions of free will. In my opinion this idea is characteristic of the foundation of the experimental method in early modern times, not however of several forms of its present structures. Above all, the increasing technization of producing knowledge reduces the role of the subject in conducting experiments. My lecture consists of three parts. In the first part I present a summary of Helmholtz's own theory of experiment and the change of his conception of science and nature. In the second part I discuss three examples of his experimental practice, taken in chronological order from three different periods of his work; in the third part I compare the examples with the change of his conception of science and nature.
Austin's Sense and Sensibilia (1962) generates wildly different reactions among philosophers. Interpreting Austin on perception starts with a reading of this text, and this in turn requires reading into the lectures key ideas from Austin's work on natural language and the theory of knowledge. The lectures paint a methodological agenda, and a sketch of some first-order philosophy, done the way Austin thinks it should be done. Crucially, Austin calls for philosophers to bring a deeper understanding of natural language meaning to bear as they do their tasks. In consequence Austin's lectures provide a fascinating start—but only a start—on a number of key questions in the philosophy of perception.
The purpose of this chapter is to outline some of the thinking behind new e-learning technology, including e-portfolios and personal learning environments. Part of this thinking is centered around the theory of connectivism, which asserts that knowledge - and therefore the learning of knowledge - is distributive, that is, not located in any given place (and therefore not 'transferred' or 'transacted' per se) but rather consists of the network of connections formed from experience and interactions with a knowing community. And another part of this thinking is centered around the new, and the newly empowered, learner, the member of the net generation, who is thinking and interacting in new ways. These trends combine to form what is sometimes called 'e-learning 2.0' - an approach to learning that is based on conversation and interaction, on sharing, creation and participation, on learning not as a separate activity, but rather, as embedded in meaningful activities such as games or workflows.
To demarcate the limits of experimental knowledge, we probe the limits of what might be called an experiment. By appeal to examples of scientific practice from astrophysics and analogue gravity, we demonstrate that the reliability of knowledge regarding certain phenomena gained from an experiment is not circumscribed by the manipulability or accessibility of the target phenomena. Rather, the limits of experimental knowledge are set by the extent to which strategies for what we call 'inductive triangulation' are available: that is, the validation of the mode of inductive reasoning involved in the source-target inference via appeal to one or more distinct and independent modes of inductive reasoning. When such strategies are able to partially mitigate reasonable doubt, we can take a theory regarding the phenomena to be well supported by experiment. When such strategies are able to fully mitigate reasonable doubt, we can take a theory regarding the phenomena to be established by experiment. There are good reasons to expect the next generation of analogue experiments to provide genuine knowledge of unmanipulable and inaccessible phenomena such that the relevant theories can be understood as well supported. This article is part of a discussion meeting issue 'The next generation of analogue gravity experiments'.
In order for a reason to justify an action or attitude it must be one that is possessed by an agent. Knowledge-centric views of possession ground our possession of reasons, at least partially, either in our knowledge of them or in our being in a position to know them. On virtually all accounts, knowing P is some kind of non-accidental true belief that P. This entails that knowing P is a kind of non-accidental true representation that P. I outline a novel theory of the epistemic requirement on possession in terms of this more general state of non-accidental true representation. It is just as well placed to explain the motivations behind knowledge-centric views of possession, and it is also better placed to explain the extent of the reasons we possess in certain cases of deductive belief-updates and cases involving environmental luck. I conclude with three reflections. First, I indicate how my arguments generate a dilemma for Errol Lord's view that possessing reasons is just a matter of being in a position to manifest one's knowledge how to use them. Second, I explain how my view can simultaneously manage cases of environmental luck without falling prey to lottery cases. Finally, I sketch the direction for a further range of counterexamples to knowledge-centric theories of possession.
Synthetic biologists aim to generate biological organisms according to rational design principles. Their work may have many beneficial applications, but it also raises potentially serious ethical concerns. In this article, we consider what attention the discipline demands from bioethicists. We argue that the most important issue for ethicists to examine is the risk that knowledge from synthetic biology will be misused, for example, in biological terrorism or warfare. To adequately address this concern, bioethics will need to broaden its scope, contemplating not just the means by which scientific knowledge is produced, but also what kinds of knowledge should be sought and disseminated.
Historians occasionally use timelines, but many seem to regard such signs merely as ways of visually summarizing results that are presumably better expressed in prose. Challenging this language-centered view, I suggest that timelines might assist the generation of novel historical insights. To show this, I begin by looking at studies confirming the cognitive benefits of diagrams like timelines. I then try to survey the remarkable diversity of timelines by analyzing actual examples. Finally, having conveyed this (mostly untapped) potential, I argue that neglecting timelines might mean neglecting significant aspects of reality that are revealed only by those signs. My overall message is that once we accept that relations are as important for the mind as what they relate, we have to pay closer attention to any semiotic device that enables or facilitates the discernment of new relations.
Human creativity generates novel ideas to solve real-world problems. This thereby grants us the power to transform the surrounding world and extend our human attributes beyond what is currently possible. Creative ideas are not just new and unexpected, but are also successful in providing solutions that are useful, efficient and valuable. Thus, creativity optimizes the use of available resources and increases wealth. The origin of human creativity, however, is poorly understood, and semantic measures that could predict the success of generated ideas are currently unknown. Here, we analyze a dataset of design problem-solving conversations in real-world settings by using 49 semantic measures based on WordNet 3.1 and demonstrate that a divergence of semantic similarity, an increased information content, and a decreased polysemy predict the success of generated ideas. The first feedback from clients also enhances information content and leads to a divergence of successful ideas in creative problem solving. These results advance cognitive science by identifying real-world processes in human problem solving that are relevant to the success of produced solutions and provide tools for real-time monitoring of problem solving, student training and skill acquisition. A selected subset of information content (IC Sánchez–Batet) and semantic similarity (Lin/Sánchez–Batet) measures, which are both statistically powerful and computationally fast, could support the development of technologies for computer-assisted enhancements of human creativity or for the implementation of creativity in machines endowed with general artificial intelligence.
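As a small, concrete illustration of the kind of measures involved, the sketch below computes toy versions of the three predictors (semantic similarity, information content, polysemy) with NLTK's WordNet interface. It uses the Brown-corpus information-content file as a stand-in for the WordNet 3.1 / Sánchez–Batet measures actually used in the study, so the numbers are only indicative.

```python
# Minimal sketch (assumptions): NLTK's WordNet with the Brown-corpus
# information-content file stands in for the Sanchez-Batet IC measure.
# Requires: nltk.download('wordnet'); nltk.download('wordnet_ic')
from nltk.corpus import wordnet as wn, wordnet_ic
from nltk.corpus.reader.wordnet import information_content

brown_ic = wordnet_ic.ic('ic-brown.dat')

def idea_features(word_a: str, word_b: str):
    """Toy versions of the three predictors discussed in the abstract."""
    s1, s2 = wn.synsets(word_a, pos='n')[0], wn.synsets(word_b, pos='n')[0]
    return {
        # semantic similarity between two concepts in an idea (Lin measure)
        'lin_similarity': s1.lin_similarity(s2, brown_ic),
        # information content of each concept (higher = more specific)
        'information_content': (information_content(s1, brown_ic),
                                information_content(s2, brown_ic)),
        # polysemy: number of noun senses per word (lower predicted better)
        'polysemy': (len(wn.synsets(word_a, pos='n')),
                     len(wn.synsets(word_b, pos='n'))),
    }

print(idea_features('bridge', 'membrane'))
```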
According to the PubMed resource from the U.S. National Library of Medicine, over 750,000 scientific articles have been published in the ~5000 biomedical journals worldwide in the year 2007 alone. The vast majority of these publications include results from hypothesis-driven experimentation in overlapping biomedical research domains. Unfortunately, the sheer volume of information being generated by the biomedical research enterprise has made it virtually impossible for investigators to stay aware of the latest findings in their domain of interest, let alone to be able to assimilate and mine data from related investigations for purposes of meta-analysis. While computers have the potential for assisting investigators in the extraction, management and analysis of these data, information contained in the traditional journal publication is still largely unstructured, free-text descriptions of study design, experimental application and results interpretation, making it difficult for computers to gain access to the content of what is being conveyed without significant manual intervention. In order to circumvent these roadblocks and make the most of the output from the biomedical research enterprise, a variety of related standards in knowledge representation are being developed, proposed and adopted in the biomedical community. In this chapter, we will explore the current status of efforts to develop minimum information standards for the representation of a biomedical experiment, ontologies composed of shared vocabularies assembled into subsumption hierarchical structures, and extensible relational data models that link the information components together in a machine-readable and human-useable framework for data mining purposes.
The National Center for Biomedical Ontology is a consortium that comprises leading informaticians, biologists, clinicians, and ontologists, funded by the National Institutes of Health (NIH) Roadmap, to develop innovative technology and methods that allow scientists to record, manage, and disseminate biomedical information and knowledge in machine-processable form. The goals of the Center are (1) to help unify the divergent and isolated efforts in ontology development by promoting high-quality open-source, standards-based tools to create, manage, and use ontologies, (2) to create new software tools so that scientists can use ontologies to annotate and analyze biomedical data, (3) to provide a national resource for the ongoing evaluation, integration, and evolution of biomedical ontologies and associated tools and theories in the context of driving biomedical projects (DBPs), and (4) to disseminate the tools and resources of the Center and to identify, evaluate, and communicate best practices of ontology development to the biomedical community. Through the research activities within the Center, collaborations with the DBPs, and interactions with the biomedical community, our goal is to help scientists to work more effectively in the e-science paradigm, enhancing experiment design, experiment execution, data analysis, information synthesis, hypothesis generation and testing, and the understanding of human disease.
Hegel's Science of Logic makes the far from modest claim to be absolute, ultimately grounded knowledge. This project, which could not be more ambitious, has no good press in our post-metaphysical age. However, that absolute knowledge absolutely cannot exist cannot itself be claimed without self-contradiction. On the other hand, there can be no doubt about the fundamental finiteness of knowledge. But can absolute knowledge be finite knowledge? This leads to the problem of a self-explication of logic (in the sense of Hegel) and further, as will be shown, to a new definition of the dialectical procedure. Its stringency results from the fact that exactly that implicit content is explicated which was generated by the preceding explication step itself and is thus concretely comprehensible. At the same time, a new implicit content is generated by this act of explication, which requires a new explication step, and so forth. In the dialectical procedure reinterpreted in this way, dialectical arguments are not beheld, guessed at or even surreptitiously obtained, but are methodically accountable. Thereby dialectics is understood as a self-explication of logic by logical means and thus as a proof of the possibility of ultimate grounding in the form of absolute and nevertheless finite – and thus also fallible – knowledge.
Commonsense reasoning is one of the main open problems in the field of Artificial Intelligence (AI) while, on the other hand, it seems to be a very intuitive and default reasoning mode in humans and other animals. In this talk, we discuss the different paradigms that have been developed in AI and Computational Cognitive Science to deal with this problem (ranging from logic-based methods to diagrammatic ones). In particular, we discuss - via two different case studies concerning commonsense categorization and knowledge invention tasks - how cognitively inspired heuristics can help (both in terms of efficiency and efficacy) in the realization of intelligent artificial systems able to reason in a human-like fashion, with results comparable to human-level performance.
This is an essay in the form of a concept-map plus explanatory notes. It outlines the path followed by Information from Noumena to Knowledge. The viewpoint is strictly Physicalist. Thus, for example, a priori concepts reach us via DNA, which chemically embodies critical a posteriori concepts developed as a result of the experience and survival of previous generations. In a similar vein, Intelligence, also inherited via DNA, consists of strategies of data reduction which maximize the data-handling abilities of the brain.
An essential analysis of the changing ideas of Space and Time for the period from the beginning of "Archimedes' Second Revolution" is carried out in order to overcome the ontological groundlessness of Knowledge and to expand its borders. A synthetic model of Triune (absolute) 12-dimensional Space-Time is built on the basis of the Ontological construction method, the Superaxiom and the Superprinciple; the nature of Time is determined as the memory of material structure at a certain level of its holistic being.
Fundamental knowledge endures a deep conceptual crisis manifested in a total crisis of understanding, a crisis of interpretation and representation, loss of certainty, troubles with physics, and a crisis of methodology. The crisis of understanding in fundamental science generates a deep crisis of understanding in global society. What way should we choose for overcoming the total crisis of understanding in fundamental science? It should be the way of metaphysical construction of a new comprehensive model of ideality on the basis of the "modified ontology". The result of a quarter-century of wanderings: a sum of ideas, concepts and eidoses, and a new understanding of space, time, and consciousness.
The wealthiest nations in the world have a knowledge-based economy that depends on continued innovation based on research and development, sustained by a pool of problem-solvers able to tackle the most diverse challenges. The Research University is the current gold standard for higher education, and the research professors working in such an environment are the key figures responsible for fostering the new generations of problem-solvers.
Hilary Putnam has famously argued that we can know that we are not brains in a vat because the hypothesis that we are is self-refuting. While Putnam's argument has generated interest primarily as a novel response to skepticism, his original use of the brain in a vat scenario was meant to illustrate a point about the "mind/world relationship." In particular, he intended it to be part of an argument against the coherence of metaphysical realism, and thus to be part of a defense of his conception of truth as idealized rational acceptability. Putnam's conclusions about the scenario are, however, actually out of line with central and plausible aspects of his own account of the relationship between our minds and the world. Reflections on semantics give us no compelling reason to suppose that claims like "I am a brain in a vat" could not turn out to be true.
The new phase of science evolution is characterized by the totality of the subject and object of cognition and technology (high-hume). As a result, a network structure forms in the disciplinary matrix of modern «human-dimensional» natural sciences: the structure of the disciplinary matrix becomes more complex and several paradigmal «nuclei» (attractors) form. In the process of social verification, the integration of scientific theories into the existing system of mental and value options of spiritual culture takes place. The attribute of classical science – the ethical neutrality of scientific knowledge – becomes an unattainable ideal. One of the trends in the evolution of theoretical epistemology is the study of the mechanisms by which the generation of scientific knowledge migrates from the sphere of the logic and methodology of science proper into the field of sociology, that is, the consideration of this process as the result of a system of interactions of social structures and institutions. The ensuing ideas and settings become the dominant worldview of philosophical and technological civilization.
There is an ongoing debate on whether, or to what degree, computer simulations can be likened to experiments. Many philosophers are sceptical about whether a strict separation between the two categories is possible and deny that the materiality of experiments makes a difference (Morrison 2009, Parker 2009, Winsberg 2010). Some also like to describe computer simulations as a "third way" between experimental and theoretical research (Rohrlich 1990, Axelrod 2003, Kueppers/Lenhard 2005). In this article I defend the view that computer simulations are not experiments but that they are tools for evaluating the consequences of theories and theoretical assumptions. In order to do so, the (alleged) similarities and differences between simulations and experiments are examined. It is found that three fundamental differences between simulations and experiments remain: 1) only experiments can generate new empirical data; 2) only experiments can operate directly on the target system; 3) experiments alone can be employed for testing fundamental hypotheses. As a consequence, experiments enjoy a distinct epistemic role in science that cannot completely be superseded by computer simulations. This finding, in connection with a discussion of border cases such as hybrid methods that combine measurement with simulation, shows that computer simulations can clearly be distinguished from empirical methods. It is important to understand that computer simulations are not experiments, because otherwise there is a danger of systematically underestimating the need for empirical validation of simulations.
In this paper we present a framework for the dynamic and automatic generation of novel knowledge obtained through a process of commonsense reasoning based on typicality-based concept combination. We exploit a recently introduced extension of a Description Logic of typicality able to combine prototypical descriptions of concepts in order to generate new prototypical concepts and deal with problems like the PET FISH problem (Osherson and Smith, 1981; Lieto & Pozzato, 2019). Intuitively, in the context of our application of this logic, the overall pipeline of our system works as follows: given a goal expressed as a set of properties, if the knowledge base does not contain a concept able to fulfill all these properties, then our system looks for two concepts to recombine in order to extend the original knowledge base and satisfy the goal.
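Here is a minimal Python sketch of that pipeline, under simplifying assumptions: properties are plain strings, concepts are property sets, and the recombination step is a naive set union rather than the typicality-based combination the underlying logic performs. Names and the toy knowledge base are illustrative, not the paper's implementation.

```python
# Hypothetical sketch of the goal-driven pipeline described in the abstract.
from itertools import combinations

knowledge_base = {
    "fish":   {"lives_in_water", "has_scales", "is_grey"},
    "pet":    {"lives_at_home", "is_affectionate"},
    "canary": {"can_fly", "is_yellow", "sings"},
}

def find_concept(goal: set):
    # 1) Does a single known concept already fulfil every goal property?
    for name, props in knowledge_base.items():
        if goal <= props:
            return name, props
    # 2) Otherwise, look for two concepts to recombine (naive union here,
    #    standing in for typicality-based combination).
    for (n1, p1), (n2, p2) in combinations(knowledge_base.items(), 2):
        merged = p1 | p2
        if goal <= merged:
            return f"{n2} {n1}", merged
    return None

print(find_concept({"lives_in_water", "lives_at_home"}))  # e.g. a "pet fish"
```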
Fundamental Science is undergoing an acute conceptual-paradigmatic crisis of philosophical foundations, manifested as a crisis of understanding, a crisis of interpretation and representation, a "loss of certainty", "trouble with physics", and a methodological crisis. Fundamental Science rested on the "first-beginning", the "first-structure", on "cogito ergo sum". The modern crisis is not only a crisis of the philosophical foundations of Fundamental Science; it is a comprehensive crisis of knowledge, transformed by the beginning of the 21st century into a planetary existential crisis, which has exacerbated the question of the existence of Humanity and life on Earth. Due to the unsolved problem of the justification of Mathematics, paradigm problems in Computational mathematics have arisen. It's time to return ↔ Into Dialectics. The solution to the problem of the foundations of Mathematics, and therefore of knowledge in general, is the solution to the problem of modeling (constructing) the ontological basis of knowledge - the ontological model of the primordial generating process. The idea and model of the primordial generating process and its ontological structure direct thinking to the need for the introduction of the superconcept → ontological (cosmic, structural) memory, a concept-attractor, supercategory, and the substantial semantic core of the scientific picture of the world of the nuclear-ecological-information age. Model of basic Ideality → "Space-Matter-Memory-Time" [S-MM-T].
Biosemiotics deals with the study of signs and meanings in living entities. Constructivism considers human knowledge as internally constructed by sense making rather than as passively reflecting a pre-existing reality. Consequently, a constructivist perspective on biosemiotics leads one to look at an internal, active construction of meaning in living entities, from basic life to humans. That subject is addressed with an existing tool: the Meaning Generator System (MGS), which is a system submitted to an internal constraint related to the nature of the agent containing it (biological or artificial). Simple organisms generate meanings to satisfy a "stay alive" constraint. More complex living entities manage meaningful representations with more elaborate constraints. The generated meanings are used by the agents to implement actions aimed at satisfying the constraints. The actions can be physical, biological or mental and take place in the agent or in its environment. The case of human agency is introduced with meaningful representations that may have allowed our ancestors to become self-conscious by representing themselves as existing entities. This paper proposes to use the MGS as a thread to address the above items, linking biosemiotics to constructivism with relations to normativity, agency and autonomy. Possible continuations are introduced.
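The following is a minimal sketch of the MGS idea under the assumption that it can be caricatured as a mapping from received information to a constraint-satisfying action: incoming information generates a meaning only insofar as it connects to the agent's internal constraint, and that meaning selects an action. The class names, rule table, and example are illustrative, not the paper's formal specification.

```python
# Hypothetical sketch of a Meaning Generator System (MGS)-style agent.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Meaning:
    information: str   # the received information
    constraint: str    # the internal constraint it connects to
    action: str        # the action the meaning triggers

class Agent:
    def __init__(self, constraint: str, rules: dict):
        self.constraint = constraint   # e.g. "stay alive"
        self.rules = rules             # information -> constraint-satisfying action

    def generate_meaning(self, info: str) -> Meaning | None:
        # Information unrelated to the constraint generates no meaning.
        action = self.rules.get(info)
        return Meaning(info, self.constraint, action) if action else None

organism = Agent("stay alive", {"acid detected": "move away"})
print(organism.generate_meaning("acid detected"))
print(organism.generate_meaning("neutral signal"))   # -> None (no meaning)
```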
The economy is based on the prevailing legal system; however, the economy could go into a tailspin if the laws lose their impartiality. A perfect worker creates infinite high value with limited cost, and the result is a perfect product, usually eternal knowledge. However, free access to their products discourages workers, causing a substantial deviation from optimal resource allocation, and thereby making the supply of perfect products seriously inadequate. This significantly hurts the interests of future society. To maximize the overall interests of humankind, the best policy would be to produce perfect products expeditiously, which in turn requires correcting the value society places on perfect products to respect the interests of perfect workers and future generations. Future society should essentially buy licenses from perfect workers instead of lending money to modern society for consumption. Then, the one-way trade between the present and the future will greatly increase. New companies and services will emerge around perfect products, and long-term economic growth rates will increase significantly.
In this paper, we assess the possibility of a critical knowledge of technology. In the case of facial recognition systems (FRS), we argue that behaviorism underlies this technology, and we analyze the debate about behaviorism to show the lack of consensus about its theoretical foundations. In particular, we analyze the structure of knowledge generated by FRS as affected by a technological behaviorism. Our last point is a suggestion to use the concept of 'critical knowledge', which we borrow from Ladriere, to question and challenge the technological finalities and the foundational scientific theory underlying FRS.