This paper investigates whether search engines and other new modes of online communication should be covered by free speech principles. It criticizes the analogical reasoning that contemporary American courts and scholars have used to liken search engines to newspapers, and to extend free speech coverage to them based on that likeness. There are dissimilarities between search engines and newspapers that undermine the key analogy, and rival analogies can be drawn that do not recommend free speech protection for search engines. Partly on these bases, we argue that an analogical approach to questions of free speech coverage is of limited use in this context. Credible verdicts about how free speech principles should apply to new modes of online communication require us to re-excavate the normative foundations of free speech. This method for deciding free speech coverage suggests that only a subset of search engine outputs and similar online communication should receive special protection against government regulation.
This paper applies a virtue epistemology approach to using the Internet, so as to improve our information-seeking behaviours. Virtue epistemology focusses on the cognitive character of agents and is less concerned with the nature of truth and epistemic justification than traditional analytic epistemology. Due to this focus on cognitive character and agency, it is a fruitful but underexplored approach to using the Internet in an epistemically desirable way. Thus, the central question in this paper is: how can we use the Internet in an epistemically virtuous way? Using the work of Jason Baehr, it starts by outlining nine intellectual or epistemic virtues: curiosity, intellectual autonomy, intellectual humility, attentiveness, intellectual carefulness, intellectual thoroughness, open-mindedness, intellectual courage and intellectual tenacity. It then explores how we should deploy these virtues and avoid the corresponding vices when interacting with the Internet, particularly search engines. Whilst an epistemically virtuous use of the Internet will not guarantee that one will acquire true beliefs, understanding or even knowledge, it will strongly improve one’s information-seeking behaviours. The paper ends by arguing that teaching and assessing online intellectual virtues should be part of school and university curricula, perhaps embedded in critical thinking courses or, even better, as individual units.
The fact that Internet companies may record our personal data and track our online behavior for commercial or political purposes has heightened concerns about online privacy. This has also led to the development of search engines that promise no tracking and privacy. Search engines also have a major role in spreading low-quality health information, such as that of anti-vaccine websites. This study investigates the relationship between search engines’ approach to privacy and the scientific quality of the information they return. We analyzed the first 30 webpages returned when searching “vaccines autism” in English, Spanish, Italian, and French. The results show that not only “alternative” search engines but also other commercial engines often return more anti-vaccine pages (10–53%) than Google (0%). Some localized versions of Google, however, returned more anti-vaccine webpages (up to 10%) than Google. Health information returned by search engines has an impact on public health and, specifically, on the acceptance of vaccines. The issue of information quality when seeking information for making health-related decisions also bears on the ethical requirement of informed consent. Our study suggests that designing a search engine that is privacy savvy and avoids the filter bubbles that can result from user-tracking is necessary but insufficient; mechanisms should also be developed to test search engines from the perspective of information quality (particularly for health-related webpages) before they can be deemed trustworthy providers of public health information.
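The study’s core metric is simple: the share of anti-vaccine pages among an engine’s first 30 results. A minimal sketch of that computation, assuming the results have already been retrieved and hand-labeled; the function name and labels are illustrative, not taken from the paper:

```python
from collections import Counter

def anti_vaccine_share(labeled_results, top_n=30):
    """labeled_results: list of (rank, label) pairs,
    with label in {'pro', 'anti', 'neutral'}.
    Returns the percentage of anti-vaccine pages among the first top_n results."""
    top = [label for rank, label in sorted(labeled_results)[:top_n]]
    counts = Counter(top)
    return 100.0 * counts["anti"] / max(len(top), 1)

# An engine whose first 30 results contain 6 anti-vaccine pages scores 20.0.
```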
The scholarly search engine Google Scholar (G.S.) has shortcomings that prevent it from being a fully trusted search engine. In this research we discuss a few drawbacks that we noticed in Google Scholar, one of which concerns how its “add articles” option handles new articles attributed to registered researchers. We suggest improving the search method G.S. uses, to make it more efficient and its statistical results more trustworthy.
Information providing and gathering increasingly involve technologies like search engines, which actively shape their epistemic surroundings. Yet, a satisfying account of the epistemic responsibilities associated with them does not exist. We analyze automatically generated search suggestions from the perspective of social epistemology to illustrate how epistemic responsibilities associated with a technology can be derived and assigned. Drawing on our previously developed theoretical framework that connects responsible epistemic behavior to practicability, we address two questions: first, given the different technological possibilities available to searchers, the search technology, and search providers, who should bear which responsibilities? Second, given the technology’s epistemically relevant features and potential harms, how should search terms be autocompleted? Our analysis reveals that epistemic responsibility lies mostly with search providers, which should eliminate three categories of autosuggestions: those that result from organized attacks, those that perpetuate damaging stereotypes, and those that associate negative characteristics with specific individuals.
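The three removal categories amount to a disjunctive suppression policy. A hedged sketch of that policy, with the three detectors passed in as placeholders (assumed for illustration, not taken from the paper):

```python
def should_suppress(suggestion: str,
                    is_organized_attack,              # detector for coordinated manipulation
                    perpetuates_damaging_stereotype,  # detector for stereotype-laden text
                    maligns_named_individual) -> bool:
    """Return True if an autosuggestion falls into any of the three
    categories the authors argue search providers should eliminate."""
    return (is_organized_attack(suggestion)
            or perpetuates_damaging_stereotype(suggestion)
            or maligns_named_individual(suggestion))

# Usage: run each candidate completion through should_suppress before display.
```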
This article investigates whether language and consciousness share a common origin. The problem of the origin of language and consciousness, which draws interest especially in anthropology, biology, evolutionary linguistics, and evolutionary psychology, is here handled from a philosophical point of view. By presenting theories about the relation between language and consciousness, theories that are well accepted yet contradict one another in certain respects, the article attempts a line of thinking about the origin of language and consciousness that tries to go beyond this contradiction. Paying regard to the different approaches represented by figures such as Piaget, Vygotsky, Chomsky, and Whorf, it investigates what it could mean for language and consciousness to have a co-origin that is not historical but fundamentally philosophical. The aim is to present a synthesizing point of view.
This article examines the holy. It inquires into how ascribing holiness to something can be meaningful in philosophy. As its field of inquiry it takes the differences in both opinion and practice that later divided Christianity into the Catholic and Orthodox Churches. The disputes that separated these two churches, such as the Filioque controversy, the idea of first among equals (primus inter pares), and the ritual of transubstantiation, also shaped how each perceived and thought about the holy. By investigating these divergences, the article tries to show how large a share they had in giving meaning to the holy, and to bring out their role in the metamorphosis and paradigm shift that the holy underwent within two Churches belonging to, and descending from, the same celestial tradition. Finally, through the opinions and views of the Church Fathers relevant to the subject matter, it asks whether, moving beyond the divergences, a meeting on common ground is possible.
The scientific study of living organisms is permeated by machine and design metaphors. Genes are thought of as the “blueprint” of an organism, organisms are “reverse engineered” to discover their functionality, and living cells are compared to biochemical factories, complete with assembly lines, transport systems, messenger circuits, etc. Although the notion of design is indispensable to think about adaptations, and engineering analogies have considerable heuristic value (e.g., optimality assumptions), we argue they are limited in several important respects. In particular, the analogy with human-made machines falters when we move down to the level of molecular biology and genetics. Living organisms are far more messy and less transparent than human-made machines. Notoriously, evolution is an opportunistic tinkerer, blindly stumbling on “designs” that no sensible engineer would come up with. Despite impressive technological innovation, the prospect of artificially designing new life forms from scratch has proven more difficult than the superficial analogy with “programming” the right “software” would suggest. The idea of applying straightforward engineering approaches to living systems and their genomes—isolating functional components, designing new parts from scratch, recombining and assembling them into novel life forms—pushes the analogy with human artifacts beyond its limits. In the absence of a one-to-one correspondence between genotype and phenotype, there is no straightforward way to implement novel biological functions and design new life forms. Both the developmental complexity of gene expression and the multifarious interactions of genes and environments are serious obstacles for “engineering” a particular phenotype. The problem of reverse-engineering a desired phenotype to its genetic “instructions” is probably intractable for any but the most simple phenotypes. Recent developments in the field of bio-engineering and synthetic biology reflect these limitations. Instead of genetically engineering a desired trait from scratch, as the machine/engineering metaphor promises, researchers are making greater strides by co-opting natural selection to “search” for a suitable genotype, or by borrowing and recombining genetic material from extant life forms.
This paper evaluates the claim that it is possible to use nature’s variation in conjunction with retention and selection on the one hand, and the absence of ultimate groundedness of hypotheses generated by the human mind as it knows on the other hand, to discard the ascription of ultimate certainty to the rationality of human conjectures in the cognitive realm. This leads to an evaluation of the further assumption that successful hypotheses with specific applications, in other words heuristics, seem to have a firm footing because they were useful in another context. I argue that usefulness evaluated through adaptation misconstrues the search for truth, and that it is possible to generate talk of randomness by neglecting aspects of a system’s insertion into a larger situation. The framing of the problem in terms of the elimination of unfit hypotheses is found to be unsatisfying. It is suggested that theories exist in a dimension where they can be kept alive rather than dying as phenotypes do. The proposal that the subconscious could suggest random variations is found to be a category mistake. A final appeal to phenomenology shows that this proposal is an orphan in the history of epistemology, not in virtue of its being a remarkable find, but rather because it is ill-conceived.
The experimental philosophy movement advocates the use of empirical methods in philosophy. The methods most often discussed, and in fact employed, in experimental philosophy are appropriated from the experimental paradigm in psychology. But there is a variety of other (at least partly) empirical methods from various disciplines that are, and others that could be, used in philosophy. The paper explores the application of corpus analysis to philosophical issues. Although the method is well established in linguistics, there are only a few tentative attempts by philosophers to utilise it. Examples are introduced, and the merit of corpus analysis is compared to that of using general internet search engines and questionnaires for similar purposes.
Societal pressures on high tech organizations to define and disseminate their ethical stances are increasing as the influences of the technologies involved expand. Many Internet-based businesses have emerged in the past decades; growing numbers of them have developed some kind of moral declaration in the form of mottos or ethical statements. For example, the corporate motto “don’t be evil” (often linked with Google/Alphabet) has generated considerable controversy about the social and cultural impacts of search engines. After addressing the origins of these mottos and statements, this chapter projects the future of such ethical manifestations in the context of critically-important privacy, security, and economic concerns. The chapter analyzes potential influences of the ethical expressions on corporate social responsibility (CSR) initiatives, and asks whether “large-grained” corporate mottos can indeed supply social and ethical guidance for organizations, as opposed to more complex, detailed codes of ethics or comparable attempts at moral clarification.
Using the search engine Google to locate information linked to individuals and organizations has become part of everyday functioning. This article addresses whether the “gaming” of Internet applications in attempts to modify reputations raises substantial ethical concerns. It analyzes emerging approaches for manipulating how personally-identifiable information is accessed online, as well as critically-important international differences in information handling. It investigates privacy issues involving the data mining of personally-identifiable information with search engines and social media platforms. Notions of “gaming” and “manipulation” have negative connotations as well as instrumental functions, which are distinguished in this article. The article also explores ethical matters engendered by the expanding industry of reputation management services that assist in these detailed technical matters. Ethical dimensions of online reputation are changing with the advent of reputation management, raising issues such as the fairness and legitimacy of various information-related practices; the article provides scenarios and questions for classroom deliberation.
Background: Although computers continue to improve in speed and functionality, they remain complex to use. Problems frequently occur, and it is hard to resolve them or find solutions. This paper outlines the significance and feasibility of building a desktop PC problems diagnosis system. The system gathers problem symptoms from users’ desktops, rather than requiring users to describe their problems to general-purpose search engines. It automatically searches global databases of problem symptoms and solutions, and also allows ordinary users to contribute exact problem reports in a structured manner. Objectives: The main goal of this Knowledge Based System is to capture the relevant desktop PC symptoms and identify the correct way to resolve the errors. Methods: This paper presents the design of the proposed Knowledge Based System, built to help desktop PC users recognize and handle common problems and errors such as power supply problems, CPU errors, RAM dumping errors, hard disk errors and bad sectors, and a PC that suddenly restarts. The system gives an overview of desktop PC hardware errors, outlines the causes of faults, and provides solutions whenever possible. The CLIPS Knowledge Based System language was used for designing and implementing the proposed expert system. Results: The proposed desktop PC troubleshooting Knowledge Based System was evaluated by IT students, who were satisfied with its performance.
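The paper implements its rules in CLIPS; purely for illustration, the same symptom-to-advice pattern can be sketched in Python. The rules below are assumptions invented for the example, not taken from the published system:

```python
# Each rule: a set of symptoms that must all be present, and the advice it fires.
RULES = [
    ({"no_power", "no_fan_spin"},            "Check the power supply unit."),
    ({"blue_screen", "memory_dump"},         "Reseat or replace the RAM modules."),
    ({"clicking_noise", "slow_file_access"}, "Scan the hard disk for bad sectors."),
    ({"sudden_restart", "overheating"},      "Inspect CPU cooling and thermal paste."),
]

def diagnose(symptoms):
    """Return the advice of every rule whose symptom pattern is fully matched."""
    return [advice for pattern, advice in RULES if pattern <= symptoms]

print(diagnose({"blue_screen", "memory_dump", "overheating"}))
# -> ['Reseat or replace the RAM modules.']
```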
People increasingly form beliefs based on information gained from automatically filtered Internet sources such as search engines. However, the workings of such sources are often opaque, preventing subjects from knowing whether the information provided is biased or incomplete. Users’ reliance on Internet technologies whose modes of operation are concealed from them raises serious concerns about the justificatory status of the beliefs they end up forming. Yet it is unclear how to address these concerns within standard theories of knowledge and justification. To shed light on the problem, we introduce a novel conceptual framework that clarifies the relations between justified belief, epistemic responsibility, action, and the technological resources available to a subject. We argue that justified belief is subject to certain epistemic responsibilities that accompany the subject’s particular decision-taking circumstances, and that one typical responsibility is to ascertain, so far as one can, whether the information upon which the judgment will rest is biased or incomplete. What this responsibility comprises is partly determined by the inquiry-enabling technologies available to the subject. We argue that a subject’s beliefs that are formed based on Internet-filtered information are less justified than they would be if she either knew how filtering worked or relied on additional sources, and that the subject may have the epistemic responsibility to take measures to enhance the justificatory status of such beliefs.
For 4E cognitive science, minds are embodied, embedded, enacted, and extended. Proponents observe that we regularly ‘offload’ our thinking onto body and world: we use gestures and calculators to augment mathematical reasoning, and smartphones and search engines as memory aids. I argue that music is a beyond-the-head resource that affords offloading. Via this offloading, music scaffolds access to new forms of thought, experience, and behaviour. I focus on music’s capacity to scaffold emotional consciousness, including the self-regulative processes constitutive of emotional consciousness. In developing this idea, I consider the ‘material’ and ‘worldmaking’ character of music, and I apply these considerations to two case studies: music as a tool for religious worship, and music as a weapon for torture.
Memory evolved to supply useful, timely information to the organism’s decision-making systems. Therefore, decision rules, multiple memory systems, and the search engines that link them should have coevolved to mesh in a coadapted, functionally interlocking way. This adaptationist perspective suggested the scope hypothesis: When a generalization is retrieved from semantic memory, episodic memories that are inconsistent with it should be retrieved in tandem to place boundary conditions on the scope of the generalization. Using a priming paradigm and a decision task involving person memory, the authors tested and confirmed this hypothesis. The results support the view that priming is an evolved adaptation. They further show that dissociations between memory systems are not—and should not be—absolute: Independence exists for some tasks but not others.
The epistemic basic structure of a society consists of those institutions that have the greatest impact on individuals’ opportunity to obtain knowledge on questions they have an interest in as citizens, individuals, and public officials. It plays a central role in the production and dissemination of knowledge and in ensuring that people have the capability to assimilate this knowledge. It includes institutions of science and education, the media, search engines, libraries, museums, think tanks, and various government agencies. This article identifies two demands of justice that apply specifically to the institutions that belong to it. First, the epistemic basic structure should serve all citizens fairly and reliably. It should provide them with the opportunity to acquire knowledge they need for their deliberations about the common good, their individual good, and how to pursue them. Second, the epistemic basic structure should produce and disseminate the knowledge that various experts and public officials need to successfully pursue justice and citizens need to effectively exercise their rights. After arguing for these duties, I discuss what policies follow from them and respond to the worry that these duties have illiberal implications.
The internet has considerably changed epistemic practices in science as well as in everyday life. Apparently, this technology allows more and more people to get access to a huge amount of information. Some people even claim that the internet leads to a democratization of knowledge. In the following text, we will analyze this statement. In particular, we will focus on a potential change in epistemic structure. Does the internet change our common epistemic practice to rely on expert opinions? Does it alter or even undermine the division of epistemic labor? The epistemological framework of our investigation is a naturalist-pragmatist approach to knowledge. We take it that the internet generates a new environment to which people seeking information must adapt. How can they, and how should they, expand their repertory of social markers to continue the venture of filtering, and so make use of the possibilities the internet apparently provides? To find answers to these questions we will take a closer look at two case studies. The first example is about the internet platform WikiLeaks that allows so-called whistle-blowers to anonymously distribute their information. The second case study is about the search engine Google and the problem of personalized searches. Both instances confront a knowledge-seeking individual with particular difficulties which are based on the apparent anonymity of the information distributor. Are there ways for the individual to cope with this problem and to make use of her social markers in this context nonetheless?
This paper presents results found through searching publicly available U.S. data sources for information about how to handle incidental findings (IF) in human subjects research, especially in genetics and genomics research, neuroimaging research, and CT colonography research. We searched the Web sites of 14 federal agencies, 22 professional societies, and 100 universities, and used the search engine Google to find actual consent forms that had been posted on the Internet. Our analysis of these documents showed that there is very little public guidance available for researchers as to how to deal with incidental findings. Moreover, the guidance available is not consistent.
I motivate three claims: Firstly, attentional traits can be cognitive virtues and vices. Secondly, groups and collectives can possess attentional virtues and vices. Thirdly, attention has epistemic, moral, social, and political importance. An epistemology of attention is needed to better understand our social-epistemic landscape, including media, social media, search engines, political polarisation, and the aims of protest. I apply attentional normativity to undermine recent arguments for moral encroachment and to illuminate a distinctive epistemic value of occupying particular social positions. A recurring theme is that disproportionate attention can distort, mislead, and misrepresent even when all the relevant claims are true and well supported by evidence. In the informational cacophony of the internet age, epistemology must foreground the cognitive virtues of attunement.
In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents interact with each other is rather poor. Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality, and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. In this article, we analyze the interactions between bots that edit articles on Wikipedia. We track the extent to which bots undid each other’s edits over the period 2001–2010, model how pairs of bots interact over time, and identify different types of interaction trajectories. We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other’s edits, and these sterile “fights” may sometimes continue for years. Unlike humans on Wikipedia, bots’ interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots in different cultural environments may behave differently. Our research suggests that even relatively “dumb” bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well-functioning autonomous vehicles.
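The two quantities the study tracks, how often each bot undoes another and whether the undoing is reciprocated, can be computed directly from a revert log. A minimal sketch under the assumption that reverts have already been extracted as (reverting_bot, reverted_bot) pairs; the bot names and data are invented:

```python
from collections import Counter
from itertools import combinations

# Each record: (reverting_bot, reverted_bot); illustrative data only.
revert_log = [("BotA", "BotB"), ("BotB", "BotA"), ("BotA", "BotC"),
              ("BotB", "BotA"), ("BotC", "BotA")]

pair_counts = Counter(revert_log)

# A pair of bots is 'reciprocated' if each has undone the other at least once.
bots = {b for pair in revert_log for b in pair}
reciprocated = [(x, y) for x, y in combinations(sorted(bots), 2)
                if pair_counts[(x, y)] and pair_counts[(y, x)]]
print(reciprocated)  # -> [('BotA', 'BotB'), ('BotA', 'BotC')]
```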
The following essay responds to a draft article that criticises the theory of libertarian restitution in “Libertarian Rectification: Restitution, Retribution, and the Risk-Multiplier” (LR). The article was freely available to internet search engines. Hence, it seems fair and useful to reply to these very welcome objective criticisms. It is not intellectually relevant that its author might subsequently and subjectively have thought better of them, possibly as a result of the earlier version of this reply. Generally, the article misconstrues the position on retribution in LR. Eventually, it makes apparent qualifications to its own position such that there does not seem to be any clear theoretical difference between the two on the central disputed issue. LR’s position is to explain and defend only the non-normative theory of libertarian restitution: full restoration or compensation to the damaged (initiatedly imposed on) party. But it is argued that this can include a retributive aspect if that is what the victim prefers. Moreover, such restitution will tend to act as a deterrent that maximises both overall interpersonal liberty (theorised as no initiated imposed costs) and human welfare (theorised as preference satisfaction).
Attempts to engineer a generally intelligent artificial agent have yet to meet with success, largely due to the (intercontext) frame problem. Given that humans are able to solve this problem on a daily basis, one strategy for making progress in AI is to look for disanalogies between humans and computers that might account for the difference. It has become popular to appeal to the emotions as the means by which the frame problem is solved in human agents. The purpose of this paper is to evaluate the tenability of this proposal, with a primary focus on Dylan Evans’ search hypothesis and Antonio Damasio’s somatic marker hypothesis. I will argue that while the emotions plausibly help solve the intracontext frame problem, they do not function to solve or help solve the intercontext frame problem, as they are themselves subject to contextual variability.
The project described here is carried out within the framework of the publication of Søren Kierkegaard’s (1813–55) collected writings, Søren Kierkegaards Skrifter (SKS). The edition consists of Kierkegaard’s works, posthumous writings, journals, and papers, and appears in a printed version as well as a digital one. The printed version will eventually consist of 28 volumes with accompanying volumes of explanatory notes. The digital version (SKS-E) will consist of additional texts, including the author’s manuscripts and preparatory studies for the published works; it will also include various computer tools such as a search engine, concordances, indices of names, illustrations, maps, and other useful tools that we shall collectively call resource files. SKS-E was presented on the internet in its initial version March 30, 2007, at http://sks.dk.
Preparing students for the real challenges in life is one of the most important goals in education. Constructivism is an approach that uses real-life experiences to construct knowledge. Problem-Based Learning (PBL), for almost five decades now, has been the most innovative constructivist pedagogy used worldwide. However, with its rising popularity, there is a need to revisit empirical studies regarding PBL to serve as a guide and basis for designing new studies, making institutional policies, and evaluating educational curricula. This need has led the researchers to conduct a meta-analysis of the effectiveness of PBL on secondary students’ achievement in different scientific disciplines. Following the set of inclusion and exclusion criteria, 11 studies in Eurasia, Africa, and America conducted from 2016 to 2020 qualified for this study: six focused on JHS (n = 1047) and five on SHS (n = 375). Studies were obtained from various meta-search engines including Google, ERIC, and JSTOR. Further, the researchers used Harzing’s Publish or Perish software to exhaust the search process. Sample size, mean, and standard deviation were analysed using Comprehensive Meta-Analysis version 3 to determine the effect sizes (Hedges’ g) and the results of moderator analysis, forest plot, funnel plot, and Begg-Mazumdar test. Findings show that PBL, as an approach to teaching science, had a large and positive effect (ES = .871) on the achievement of secondary students. However, grade levels and various scientific disciplines did not influence students’ learning achievement. The conduct of more studies on the different factors affecting PBL implementation and the specific effects of PBL on various student domains is recommended to facilitate comparative educational research in the future.
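For reference, Hedges’ g is a bias-corrected standardized mean difference: Cohen’s d computed with the pooled standard deviation, multiplied by a small-sample correction factor. A short sketch of the standard formula; the group statistics below are invented for demonstration and do not come from the 11 studies:

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / s_pooled          # Cohen's d
    j = 1 - 3 / (4 * df - 1)          # small-sample correction factor J
    return j * d

# PBL group vs. control group (illustrative values only):
print(round(hedges_g(82.0, 74.5, 8.0, 9.0, 40, 40), 3))  # -> 0.872
```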
It is generally known that the influential Kyiv researcher, professor of the St. Volodymyr University and honorary member of the Kyiv Theological Academy Ivan Sikorsky (1842–1919), made a significant contribution to the development of the psychological science of his times and gained great authority among his colleagues in the West. In recent years, many studies have been launched in Ukraine whose authors try to demonstrate the relevance of his work in terms of contemporary science as well. It remains unclear, however, when and how he was recognized in the West, which colleagues he influenced during his lifetime, and how his academic achievements are now appreciated in foreign professional circles. Trying to fill this gap, the author of the paper created Sikorsky’s personal profile on the Internet platform Google Scholar (“Google Academy”). This platform is regularly used to calculate citations of contemporary scholars and publications; as it turns out, however, Google Scholar may also be a useful tool for historical research. The search engine collates information on almost all of Sikorsky’s works, including those written in or translated into foreign languages and published abroad. Although Google Scholar did not identify and list all existing references, it nevertheless helped to reconstruct a rather large and representative bibliography. Combining quantitative and qualitative analysis of relevant information from Google Scholar with tools such as Google Books, Internet Archive, JSTOR, etc. helps to clarify and substantially expand our understanding of Sikorsky’s place within the history of science and the treatment of his works in the West. As the article clearly shows, he became one of the brightest domestic representatives of the leading trends in world psychology. A significant number of Western experimental psychologists of the late nineteenth and first half of the twentieth century (including such prominent figures as Alfred Binet, Franz Boas, Edouard Claparede, and Edward Thorndike) actively referred to his pioneering research on the phenomenon of mental fatigue in children. It is also shown that the contribution made by Sikorsky to the development of psychology and pedagogy has not been forgotten by contemporary researchers.
Artificial intelligence has arrived. In the online world it is already a part of everyday life, sitting invisibly behind a wide range of search engines and online commerce sites. It offers huge potential to enable more efficient and effective business and government, but the use of artificial intelligence brings with it important questions about governance, accountability and ethics. Realising the full potential of artificial intelligence and avoiding possible adverse consequences requires societies to find satisfactory answers to these questions. This report sets out some possible approaches, and describes some of the ways government is already engaging with these issues.
This essay accounts for part of the moral, social and legal problems behind the attempts to justify and implement the so-called right to be forgotten on the Internet. Such a right implies that, under certain circumstances, individuals are entitled to demand that search engines remove links containing their personal information. Our inquiry reflects on a ruling issued by the Court of Justice of the European Union and made public on May 13th, 2014, as well as on the recommendations of the Article 29 Data Protection Working Party, the practice of Google, and the report of its Advisory Council.
Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.
Numerous problems arising in engineering applications can have several objectives to be satisfied. An important class of problems of this kind is lexicographic multi-objective problems, where the first objective is incomparably more important than the second one which, in its turn, is incomparably more important than the third one, etc. In this paper, Lexicographic Multi-Objective Linear Programming (LMOLP) problems are considered. To tackle them, traditional approaches either require solution of a series of linear programming problems or apply a scalarization of weighted multiple objectives into a single-objective function. The latter approach requires finding a set of weights that guarantees the equivalence of the original problem and the single-objective one, and the search for correct weights can be very time consuming. In this work a new approach for solving LMOLP problems, using a recently introduced computational methodology allowing one to work numerically with infinities and infinitesimals, is proposed. It is shown that a smart application of infinitesimal weights allows one to construct a single-objective problem avoiding the necessity to determine finite weights. The equivalence between the original multi-objective problem and the new single-objective one is proved. A simplex-based algorithm working with finite and infinitesimal numbers is proposed, implemented, and discussed. Results of some numerical experiments are provided.
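For contrast with the infinitesimal-weight method the paper proposes, the traditional sequential approach it mentions (solving a series of LPs, freezing each higher-priority optimum as a constraint) can be sketched as follows. This is a generic illustration using SciPy, not the authors’ algorithm:

```python
import numpy as np
from scipy.optimize import linprog

def lexicographic_lp(objectives, A_ub, b_ub, tol=1e-9):
    """Minimize the objective vectors in `objectives` in strict priority
    order over {x : A_ub @ x <= b_ub, x >= 0} (linprog's default bounds)."""
    A_ub = np.asarray(A_ub, dtype=float)
    b_ub = list(b_ub)
    res = None
    for level, c in enumerate(objectives):
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
        if not res.success:
            raise ValueError(f"LP at priority level {level} failed: {res.message}")
        # Freeze this level's optimum: add the constraint c @ x <= optimum.
        A_ub = np.vstack([A_ub, c])
        b_ub.append(res.fun + tol)
    return res.x

# Two objectives over a toy feasible region x1 + x2 <= 4:
x = lexicographic_lp([[-1, 0], [0, -1]], A_ub=[[1, 1]], b_ub=[4])
print(x)  # maximizes x1 first (x1 = 4), then x2 subject to that (x2 = 0)
```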
It seems natural to think that Carnapian explication and experimental philosophy can go hand in hand. But what exactly explicators can gain from the data provided by experimental philosophers remains controversial. According to an influential proposal by Shepherd and Justus, explicators should use experimental data in the process of ‘explication preparation’. Against this proposal, Mark Pinder has recently suggested that experimental data can directly assist an explicator’s search for fruitful replacements of the explicandum. In developing his argument, he also proposes a novel aspect of what makes a concept fruitful, namely, that it is taken up by the relevant community. In this paper, I defend explication preparation against Pinder’s objections and argue that his uptake proposal conflates theoretical and practical success conditions of explications. Furthermore, I argue that Pinder’s suggested experimental procedure needs substantial revision. I end by distinguishing two kinds of explication projects, and showing how experimental philosophy can contribute to each of them.
In this essay we collect and put together a number of ideas relevant to the understanding of the phenomenon of creativity, confining our considerations mostly to the domain of cognitive psychology while we will, on a few occasions, hint at neuropsychological underpinnings as well. In this, we will mostly focus on creativity in science, since creativity in other domains of human endeavor has common links with scientific creativity while differing in numerous other specific respects. We begin by briefly introducing a few basic notions relating to cognition, among which the notion of ‘concepts’ is of basic relevance. The myriads of concepts lodged in our mind constitute a ‘conceptual space’ of an enormously complex structure, where concepts are correlated by beliefs that are themselves made up of concepts and are associated with emotions. The conceptual space, moreover, is perpetually in a state of dynamic evolution that is once again of a complex nature. A major component of the dynamic evolution is made up of incessant acts of inference, where an inference occurs essentially by means of a succession of correlations among concepts set up with beliefs and heuristics, the latter being beliefs of a special kind, namely, ones relatively free of emotional associations and possessed of a relatively greater degree of justification. Beliefs, along with heuristics, have been described as the ‘mind’s software’, and constitute important cognitive components of the self-linked psychological resources of an individual. The self is the psychological engine driving all our mental and physical activity, and is in a state of ceaseless dynamics resulting from one’s most intimate experiences of the world accumulating in the course of one’s journey through life. Many of our psychological resources are of a dual character, having both a self-linked and a shared character, the latter being held in common with larger groups of people and imbibed from cultural inputs. We focus on the privately held self-linked beliefs of an individual, since these are presumably of central relevance in making possible inductive inferences – ones in which there arises a fundamental need of adopting a choice or making a decision. Beliefs, decisions, and inferences all have a common link to the self of an individual and, in this, are fundamentally analogous to free will, where all of these have an aspect of non-determinism inherent in them. Creativity involves a major restructuring of the conceptual space where a sustained inferential process eventually links remote conceptual domains, thereby opening up the possibility of a large number of new correlations between remote concepts by a cascading process. Since the process of inductive inference depends crucially on decisions at critical junctures of the inferential chain, it becomes necessary to examine the basic mechanism underlying the making of decisions. In the framework that we attempt to build up for the understanding of scientific creativity, this role of decision making in the inferential process assumes central relevance. With this background in place, we briefly sketch the affect theory of decisions. Affect is an innate system of response to perceptual inputs received either from the external world or from the internal physiological and psychological environment whereby a positive or negative valence gets associated with a perceptual input.
Almost every situation faced by an individual, even one experienced tacitly, i.e., without overt awareness, elicits an affective response from him, carrying a positive or negative valence that underlies all sorts of decision making, including ones carried out unconsciously in inferential processes. Referring to the process of inferential exploration of the conceptual space that generates the possibility of correlations being established between remote conceptual domains, such exploration is guided and steered at every stage by the affect system, analogous to the way a complex computer program proceeds through junctures where the program ascertains whether specified conditions are met with by way of generating appropriate numerical values – for instance, the program takes different routes, depending on whether some particular numerical value turns out to be positive or negative. The valence generated by the affect system in the process of adoption of a choice plays a similar role which therefore is of crucial relevance in inferential processes, especially in the exploration of the conceptual space where remote domains need to be linked up – the affect system produces a response along a single value dimension, resembling a number with a sign and a magnitude. While the affect system plays a guiding role in the exploration of the conceptual space, the process of exploration itself consists of the establishment of correlations between concepts by means of beliefs and heuristics, the self-linked ones among the latter having a special role in making possible the inferential journey along alternative routes whenever the shared rules of inference become inadequate. A successful access to a remote conceptual domain, necessary for the creative solution of a standing problem or anomaly – one that could not be solved within the limited domain hitherto accessed – requires a phase of relatively slow cumulative search and then, at some stage, a rapid cascading process when a solution is in sight. Representing the conceptual space in the form of a complex network, the overall process can be likened to one of self-organized criticality commonly observed in the dynamical evolution of complex systems. In order that inferential access to remote domains may actually be possible, it is necessary that restrictions on the exploration process – necessary for setting the context in ordinary instances of inductive inference – be relaxed and a relatively free exploration in a larger conceptual terrain be made possible. This is achieved by the mind going into the default mode, where external constraints – ones imposed by shared beliefs and modes of exploration – are made inoperative. While explaining all these various aspects of the creative process, we underline the supremely important role that analogy plays in it. Broadly speaking, analogy is in the nature of a heuristic, establishing correlations between concepts. However, analogies are very special in that these are particularly effective in establishing correlations among remote concepts, since analogy works without regard to the contiguity of the concepts in the conceptual space. In establishing links between concepts, analogies have the power to light up entire terrains in the conceptual space when a rapid cascading of fresh correlations becomes possible.
The creative process occurs within the mind of a single individual or of a few closely collaborating individuals, but is then continued by an entire epistemic community, eventually resulting in a conceptual revolution. Such conceptual revolutions make possible the radical revision of scientific theories whereby the scope of an extant theory is broadened and a new theoretical framework makes its appearance. The emerging theory is characterized by a certain degree of incommensurability when compared with the earlier one – a feature that may appear strange at first sight. But incommensurability does not mean incompatibility, and the apparently contrary features of the relation between the successive theories may be traced to the multi-layered structure of the conceptual space where concepts are correlated not by means of single links but by multiple ones, thereby generating multiple layers of correlation, among which some are retained and some created afresh in a conceptual restructuring. We conclude with the observation that creativity occurs on all scales. Analogous to correlations being set up across domains in the conceptual space and new domains being generated, processes with similar features can occur within the confines of a domain where a new layer of inferential links may be generated, connecting up sub-domains. In this context, insight can be looked upon as an instance of creativity within the confines of a domain of a relatively limited extent.
Driven by the use cases of PubChemRDF and SCAIView, we have developed a first community-based Clinical Trial Ontology (CTO) by following the OBO Foundry principles. CTO uses the Basic Formal Ontology (BFO) as its top-level ontology and reuses many terms from existing ontologies. CTO has also defined many clinical trial-specific terms. The general CTO design pattern is based on the PICO framework, demonstrated here with two applications. First, the PubChemRDF use case shows how the drug Gleevec is linked to multiple clinical trials investigating Gleevec-related chemical compounds. Second, the SCAIView text mining engine shows how the use of CTO terms in its search algorithm can identify publications referring to COVID-19-related clinical trials. Future opportunities and challenges are discussed.
Cognitive science is an interdisciplinary field involving scientific disciplines (such as computer science, linguistics, psychology, neuroscience, and economics), philosophical disciplines (philosophy of language, philosophy of mind, analytic philosophy, etc.) and engineering (notably knowledge engineering); its vast theoretical and practical content is sometimes even conflicting. In this interdisciplinary context, and in computational modeling, ontologies play a crucial role in communication between disciplines and also in the process of innovating theories, models and experiments in the cognitive sciences. We propose a model for this process here. An ontological commitment is advocated as the framework of a scientific realism, one which leads computational modeling to search for more realistic models and toward a complex-systems perspective on nature and cognition. Here, multiagent modeling of complex systems has been fulfilling an important role.
This study, presenting a history of the measurement of light intensity from its first hesitant emergence to its gradual definition as a scientific subject, explores two major themes. The first concerns the adoption by the evolving physics and engineering communities of quantitative measures of light intensity around the turn of the twentieth century. The mathematisation of light measurement was a contentious process that hinged on finding an acceptable relationship between the mutable response of the human eye and the more easily stabilised, but less encompassing, techniques of physical measurement.

A second theme is the exploration of light measurement as an example of ‘peripheral science’. Among the characteristics of such a science, I identify the lack of a coherent research tradition and the persistent partitioning of the subject between disparate groups of practitioners. Light measurement straddled the conventional categories of ‘science’ and ‘technology’, and was influenced by such distinct factors as utilitarian requirements, technological innovation, human perception and bureaucratisation. Peripheral fields such as this, which may be typical of much of modern science and technology, have hitherto received little attention from historians.

These themes are pursued with reference to the social and technological factors which were combined inextricably in the development of the subject. The intensity of light gained only sporadic attention until the late nineteenth century. Measured for the utilitarian needs of the gas lighting industry from the second half of the century, light intensity was appropriated by members of the nascent electric lighting industry, too, in their search for a standard of illumination. By the turn of the century the ‘illuminating engineering movement’ was becoming an organised, if eclectic, community which promoted research into and standards for the measurement of light intensity.

The twentieth-century development of the subject was moulded by organisation and institutionalisation. Between 1900 and 1920, the new national and industrial laboratories in Britain, America and Germany were crucial in stabilising the subject. In the inter-war period, committees and international commissions sought to standardise light measurement and to promote research. Such government- and industry-supported delegations, rather than academic institutions, were primarily responsible for the ‘construction’ of the subject. Practitioners increasingly came to interpret the three topics of photometry (visible light measurement), colorimetry (the measurement of colour) and radiometry (the measurement of invisible radiations) as aspects of a broader study, and enthusiastically applied them to industrial and scientific problems.

From the 1920s, the long-established visual methods of observation were increasingly replaced by physical means of light measurement, a process initially contingent on scientific fashion more than demonstrated superiority. New photoelectric techniques for measuring light intensity engendered new commercial instruments, a trend which accelerated in the following decade when photometric measurement was applied with limited success to a range of industrial problems. Seeds sown in the 1920s – namely commercialisation and industrial application, the transition from visual to ‘physical’ methods, and the search for fundamental limitations in light measurement – gave the subject substantially the form it was to retain over the next half-century.
This article would appeal to people interested in new ideas in sciences like physics, astronomy and mathematics that are not presented in a formal manner.

Biologists would also find the paragraphs about evolution interesting. I was afraid they'd think my ideas were a bit "out there". But I sent a short email about them last year to a London biologist who wrote an article for the journal Nature. She replied that it was "very interesting".

The world is fascinated by electronics. Computer scientists, as well as computer buyers, would be intrigued by the fundamental role given to human electronics in the creation of the universe. This obviously can only be done if time travel to the past is possible. I explain in scientific terms how it could be done (the world is also fascinated by the prospect of time travel).

My ideas on trips through time grew from the related topic of interstellar, and even intergalactic, travel (and those ideas are inspired by an electrical-engineering experiment at Yale University in 2009). After the ideas on time travel came the realization that this technology could be used to totally eliminate the problems of muscle and bone weakness, radiation exposure, etc. associated with a lengthy journey to Mars.

The exquisitely ordered cosmos proposed would have great appeal to religion and philosophy. Dealing as it does with time that does not exclusively operate in a straight line, the book could not only present a new view of evolution (present theory assumes time is always a straight line from past to future). Nonlinear time might also give religionists a new concept of who God is. This could possibly be that of humans from the remote future who are quantum entangled with all past, present and future states of the whole - infinite and eternal - universe, and thus have all God's powers. Such infinite power could be pantheistic but would naturally include the ability to manifest as an individual. (I know this article is very far removed from what is traditionally considered scientific. Just remember: science is the search for knowledge of how this universe works, and that search must be pursued wherever it leads - even if it leads into traditionally nonscientific areas such as religion.)

Finally - if we're entangled with the whole universe, we'd have to be entangled with each other. On a mundane level, this gives us extrasensory and telekinetic abilities. On a higher level, it eliminates crime and war and domestic violence, since people don't normally desire to harm themselves in any way.
I call the activity of assessing and developing improvements of our representational devices ‘conceptual engineering’. The aim of this chapter is to present an argument for why conceptual engineering is important for all parts of philosophy (and, more generally, all inquiry). Section I of the chapter provides some background and defines key terms. Section II presents the argument. Section III responds to seven objections. The replies also serve to develop the argument and clarify what conceptual engineering is.
This paper investigates the connection between two recent trends in philosophy: higher-orderism and conceptual engineering. Higher-orderists use higher-order quantifiers (in particular quantifiers binding variables that occupy the syntactic positions of predicates) to express certain key metaphysical doctrines, such as the claim that there are properties. I argue that, on a natural construal, the higher-orderist approach involves an engineering project concerning, among others, the concept of existence. I distinguish between a modest construal of this project, on which it aims at engineering higher-order analogues of the familiar notion of first-order existence, and an ambitious construal, on which it additionally aims at engineering a broadened notion of existence that subsumes first-order and higher-order existence. After identifying a substantial problem for the ambitious project, I investigate a possible response which is based on adopting a cumulative type theory as the background higher-order logic. While effective against the problem at hand, this strategy turns out to undermine a major reason to embrace higher-orderism in the first place, namely the idea that higher-orderism dissolves a range of otherwise intractable debates in metaphysics. Higher-orderists are therefore best advised to pursue their engineering project on the modest variant and against the background of standard type theory.
Conceptual engineers aim to revise rather than describe our concepts. But what are concepts? And how does one engineer them? Answering these questions is of central importance for implementing and theorizing about conceptual engineering. This paper discusses and criticizes two influential views of this issue: semanticism, according to which conceptual engineers aim to change linguistic meanings, and psychologism, according to which conceptual engineers aim to change psychological structures. I argue that neither of these accounts can give us the full story. Instead, I propose and defend the Dual Content View of Conceptual Engineering. On this view, conceptual engineering targets concepts, where concepts are understood as having two kinds of contents: referential content and cognitive content. I show that this view is independently plausible and that it gives us a comprehensive account of conceptual engineering that helps to make progress on some of the most difficult problems surrounding conceptual engineering.
Conceptual engineering is thought to face an ‘implementation challenge’: the challenge of securing uptake of engineered concepts. But is the fact that implementation is challenging really a defect to be overcome? What kind of picture of political life would be implied by making engineering easy to implement? We contend that the ambition to obviate the implementation challenge goes against the very idea of liberal democratic politics. On the picture we draw, the implementation challenge can be overcome by institutionalizing control over conceptual uptake, and there are contexts—such as professions that depend on coordinated conceptual innovation—in which there are good reasons to institutionalize control in this fashion. But the liberal fear of this power to control conceptual uptake ending up in the wrong hands, combined with the democratic demand for freedom of thought as a precondition of genuine consent, yields a liberal democratic rationale for keeping implementation challenging.
The development of cultured meat has gained urgency through the increasing problems associated with meat, but what it might become is still open in many respects. In existing debates, two main moral profiles can be distinguished. Vegetarians and vegans who embrace cultured meat emphasize how it could contribute to the diminishment of animal suffering and exploitation, while in a more mainstream profile cultured meat helps to keep meat eating sustainable and affordable. In this paper we argue that these profiles do not exhaust the options and that feelings as well as imagination are needed to explore possible future options. On the basis of workshops, we present a third moral profile, “the pig in the backyard”. Here cultured meat is imagined as an element of a hybrid community of humans and animals that would allow for both the consumption of animal protein and meaningful relations with domestic animals. Experience in the workshops and elsewhere also illustrates that thinking about cultured meat inspires new thoughts on “normal” meat. In short, the idea of cultured meat opens up new search space in various ways. We suggest that ethics can take an active part in these searches, by fostering a process that integrates feelings, imagination and rational thought and that expands the range of our moral identities.
We argue that the concept of practical wisdom is particularly useful for organizing, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into the development of wisdom. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioral scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as well as providing sandboxes or workspaces to help various stakeholders build practical wisdom in systems that are sufficiently realistic to aid transferring skills learned to real-world use. The latter are needed for the design of exercises and methods of evaluation within these workspaces, as well as ways of empirically assessing the transfer of wisdom from workspace to world. Systematic interaction between these three disciplines (and others) is the best approach to engineering wisdom for the machine age.
Conceptual engineering is now a central topic in contemporary philosophy. Just 4-5 years ago it wasn’t. People were then engaged in the engineering of various philosophical concepts (in various sub-disciplines), but typically not self-consciously so. Qua philosophical method, conceptual engineering was under-explored, often ignored, and poorly understood. In my lifetime, I have never seen interest in a philosophical topic grow with such explosive intensity. The sociology behind this is fascinating and no doubt immensely complex (and an excellent case study for those interested in the dynamics of academic disciplines). That topic, however, will have to wait for another occasion. Suffice it to say that if Fixing Language (FL) contributed even a little bit to this change of focus in philosophical methodology, it will have achieved one of its central goals. In that connection, it is encouraging that the papers in this symposium are in fundamental agreement about the significance and centrality of conceptual engineering to philosophy. That said, the goal of FL was not only to advocate for a topic, but also to defend a particular approach to it: The Austerity Framework. These replies have helped me see more clearly the limitations of that view and the points where my presentation was suboptimal. The responses below are in part a reconstruction of what I had in mind while writing the book and in part an effort to ameliorate. I’m grateful to the symposiasts for helping me get a better grip on these very hard issues.
In recent years, genetically engineered (GE) mosquitoes have been proposed as a public health measure against the high incidence of mosquito-borne diseases among the poor in regions of the global South. While uncertainties as well as risks for humans and ecosystems are entailed by the open release of GE mosquitoes, a powerful global health governance non-state organization is funding the development of and advocating the use of those bio-technologies as public health tools. In August 2016, the US Food and Drug Administration (FDA) approved the uncaged field trial of a GE Aedes aegypti mosquito in Key Haven, Florida. The FDA’s decision was based on its assessment of the risks of the proposed experimental public health research project. The FDA is considered a global regulatory standard setter, so its approval of the uncaged field trial could be used by proponents of GE mosquitoes to urge countries in the global South to permit the use of those bio-technologies.

From a public health ethics perspective, this paper evaluates the FDA’s 2016 risk assessment of the proposed uncaged field trial of the GE mosquito to determine whether it qualified as a realistic risk evaluation.

The FDA’s risk assessment of the proposed uncaged field trial did not approximate the conditions under which the GE mosquitoes would be used in regions of the global South where there is a high prevalence of mosquito-borne diseases.

Given that health and disease have political-economic determinants, whether a risk assessment of a product is realistic particularly matters for interventions meant for public health problems that disproportionately impact socio-economically marginalized populations. If ineffective public health interventions are adopted on the basis of risk evaluations that do not closely mirror the conditions under which those products would actually be used, there could be public health and ethical costs for those populations.
There is no agreement on whether any invertebrates are conscious and no agreement on a methodology that could settle the issue. How can the debate move forward? I distinguish three broad types of approach: theory-heavy, theory-neutral and theory-light. Theory-heavy and theory-neutral approaches face serious problems, motivating a middle path: the theory-light approach. At the core of the theory-light approach is a minimal commitment about the relation between phenomenal consciousness and cognition that is compatible with many specific theories of consciousness: the hypothesis that phenomenally conscious perception of a stimulus facilitates, relative to unconscious perception, a cluster of cognitive abilities in relation to that stimulus. This “facilitation hypothesis” can productively guide inquiry into invertebrate consciousness. What is needed? At this stage, not more theory, and not more undirected data gathering. What is needed is a systematic search for consciousness-linked cognitive abilities, their relationships to each other, and their sensitivity to masking.
Conceptual engineering aims to provide a method for assessing and improving our concepts, understood as cognitive devices. But conceptual engineering still lacks an account of what concepts are (as cognitive devices) and of what engineering is (in the case of cognition). Without such a prior understanding of its subject matter, or so it is claimed here, conceptual engineering is bound to remain useless, operating merely as a piecemeal approach with no overall grip on its target domain. The purpose of this programmatic paper is to overcome this knowledge gap by providing some guidelines for developing the theories of concepts and of cognition that will ground the systematic, unified framework needed to implement conceptual engineering effectively as a widely applicable method for the cognitive optimization of our conceptual devices.