‘There is no place in the phenomenology of fully absorbed coping’, writes Hubert Dreyfus, ‘for mindfulness. In flow, as Sartre sees, there are only attractive and repulsive forces drawing appropriate activity out of an active body’. Among the many ways in which history animates dynamical systems at a range of distinctive timescales, the phenomena of embodied human habit, skilful movement, and absorbed coping are among the most pervasive and mundane, and the most philosophically puzzling. In this essay we examine both habitual and skilled movement, sketching the outlines of a multidimensional framework within which the many differences across distinctive cases and domains might be fruitfully understood. Both the range of movement phenomena which can plausibly be seen as instances of habit or skill, and the space of possible theories of such phenomena, are richer and more disparate than philosophy easily encompasses. We seek to bring phenomenology into contact with relevant movements in psychological theories of skilful action, in the belief that phenomenological philosophy and cognitive science can be allies rather than antagonists.
This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, that reason-responsive people can be persuaded by. This proposal can play a normative role and it is also a more promising avenue towards moral enhancement. It is more promising because such a system can be designed to take advantage of the sometimes undue trust that people put in automated technologies. We could therefore expect a well-designed moral reasoner system to be able to persuade people who may not be persuaded by similar arguments from other people. So, all things considered, there is hope in artificial intelligence for moral enhancement, but not in artificial intelligence that relies solely on ambient intelligence technologies.
Mandevillian intelligence is a specific form of collective intelligence in which individual cognitive shortcomings, limitations and biases play a positive functional role in yielding various forms of collective cognitive success. When this idea is transposed to the epistemological domain, mandevillian intelligence emerges as the idea that individual forms of intellectual vice may, on occasion, support the epistemic performance of some form of multi-agent ensemble, such as a socio-epistemic system, a collective doxastic agent, or an epistemic group agent. As a specific form of collective intelligence, mandevillian intelligence is relevant to a number of debates in social epistemology, especially those that seek to understand how group (or collective) knowledge arises from the interactions between a collection of individual epistemic agents. Beyond this, however, mandevillian intelligence raises issues that are relevant to the research agendas of both virtue epistemology and applied epistemology. From a virtue epistemological perspective, mandevillian intelligence encourages us to adopt a relativistic conception of intellectual vice/virtue, enabling us to see how individual forms of intellectual vice may (sometimes) be relevant to collective forms of intellectual virtue. In addition, mandevillian intelligence is relevant to the nascent sub-discipline of applied epistemology. In particular, mandevillian intelligence forces us to see the potential epistemic value of (e.g., technological) interventions that create, maintain or promote individual forms of intellectual vice.
Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we want to argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim, black-box medicine is not conducive to supporting informed decision making based on shared information, shared deliberation, and shared mind between practitioner and patient.
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure the numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
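To make the election metaphor concrete, here is a minimal Python sketch, not the paper's formal construction: two hypothetical agents are compared by letting each environment vote, via the aggregate reward each agent earned in it, for whichever agent did better. The reward values below are invented for illustration.

```python
# Illustrative sketch of an election-style comparator between two agents.
# The reward values are hypothetical; the paper's comparators are defined
# abstractly over Legg-Hutter reward-giving environments.

def compare_agents(rewards_a, rewards_b):
    """Each environment votes for the agent that earned the higher reward in it.
    Returns 'A', 'B', or 'tie' according to the majority of votes."""
    votes_a = sum(1 for ra, rb in zip(rewards_a, rewards_b) if ra > rb)
    votes_b = sum(1 for ra, rb in zip(rewards_a, rewards_b) if rb > ra)
    if votes_a > votes_b:
        return "A"
    if votes_b > votes_a:
        return "B"
    return "tie"

# Aggregate rewards each agent obtained in five hypothetical environments.
rewards_agent_a = [0.9, 0.4, 0.7, 0.2, 0.8]
rewards_agent_b = [0.5, 0.6, 0.3, 0.1, 0.7]

print(compare_agents(rewards_agent_a, rewards_agent_b))  # -> "A"
```

Varying how the environment votes are weighted or aggregated yields different members of the abstract family of comparators the abstract describes.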
The monograph’s twofold purpose is to recognize epistemological intelligence as a distinguishable variety of human intelligence, one that is especially important to philosophers, and to understand the challenges posed by the psychological profile of philosophers that can impede the development and cultivation of the skills associated with epistemological intelligence.
This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups that are implicated in AI design and use share some common values that can be used to further strengthen design coordination efforts. VSD is shown both to distill these common values and to provide a framework for stakeholder coordination.
Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, which include issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9); and finally the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how these play out with current technologies, and finally consider what policy consequences may be drawn.
The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field’s original goals and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, recent cases of unexpected effects of AI are the consequences of those very choices that enabled the field to succeed, and this is why it will be difficult to solve them. In this chapter we review three of these choices, investigating their connection to some of today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability. We introduce the notion of “ethical debt” to describe the necessity of undertaking expensive rework in the future in order to address ethical problems created by a technical system.
A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development, namely, before it starts self-improvement, during its takeoff, when it uses various instruments to escape its initial confinement, or after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned or feature flawed friendliness. AI could also halt at later stages of its development either due to technical glitches or ontological problems. Overall, we identified several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no one simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.
We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes of computable ordinals. We prove that if one agent knows certain things about another agent, then the former necessarily has a higher intelligence level than the latter. This allows our intelligence notion to serve as a stepping stone to obtain results which, by themselves, are not stated in terms of our intelligence notion (results of potential interest even to readers totally skeptical that our notion correctly captures intelligence). As an application, we argue that these results comprise evidence against the possibility of intelligence explosion (that is, the notion that sufficiently intelligent machines will eventually be capable of designing even more intelligent machines, which can then design even more intelligent machines, and so on).
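The definition stated in this abstract can be put in notation as follows; the symbols are shorthand of my own, not the authors'. Writing $\mathcal{O}$ for the set of codes of computable ordinals and $|n|$ for the ordinal coded by $n$, the intelligence level of an agent $A$ would be

\[
\mathrm{Int}(A) \;=\; \sup \,\{\, |n| \;:\; A \text{ knows that } n \in \mathcal{O} \,\},
\]

so that, roughly, one agent outranks another when it knowingly recognizes codes for larger computable ordinals.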
Invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such powerful technology it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments as well as supporting evidence from multiple domains indicating that advanced AI cannot be fully controlled. Consequences of the uncontrollability of AI are discussed with respect to the future of humanity, research on AI, and AI safety and security. This paper can serve as a comprehensive reference for the topic of uncontrollability.
We describe a strategy for integration of data that is based on the idea of semantic enhancement. The strategy promises a number of benefits: it can be applied incrementally; it creates minimal barriers to the incorporation of new data into the semantically enhanced system; it preserves the existing data (including any existing data-semantics) in their original form (thus all provenance information is retained, and no heavy preprocessing is required); and it embraces the full spectrum of data sources, types, models, and modalities (including text, images, audio, and signals). The result of applying this strategy to a given body of data is an evolving Dataspace that allows the application of a variety of integration and analytic processes to diverse data contents. We conceive semantic enhancement (SE) as a lightweight and flexible process that leverages the richness of the structured contents of the Dataspace without adding storage and processing burdens to what, in the intelligence domain, will be an already storage- and processing-heavy starting point. SE works not by changing the data to which it is applied, but rather by adding an extra semantic layer to this data. We sketch how the semantic enhancement approach can be applied consistently and in cumulative fashion to new data and data-models that enter the Dataspace.
As available intelligence data and information expand in both quantity and variety, new techniques must be deployed for search and analytics. One technique involves the semantic enhancement of data through the creation of what are called ‘ontologies’ or ‘controlled vocabularies.’ When multiple different bodies of heterogeneous data are tagged by means of terms from common ontologies, then these data become linked together in ways which allow more effective retrieval and integration. We describe a simple case study to show how these benefits are being achieved, and we describe our strategy for developing a suite of ontologies to serve the needs of the war-fighter in the ever more complex battlespace environments of the future.
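As a toy illustration of the tagging idea described in the two abstracts above, the Python sketch below shows how records from heterogeneous sources, once tagged with a shared ontology term, become jointly retrievable by that term; the ontology term and the records are hypothetical and are not drawn from the authors' actual ontologies.

```python
# Toy illustration: heterogeneous records tagged with terms from a shared
# ontology become jointly retrievable by ontology term. The term
# "VehicleMovement" and the records below are hypothetical.

records = [
    {"source": "field_report.txt", "text": "Convoy observed near bridge", "tags": ["VehicleMovement"]},
    {"source": "sensor_log.csv", "text": "12 heavy vehicles, 06:40Z", "tags": ["VehicleMovement"]},
    {"source": "image_42.jpg", "text": "Aerial photo of river crossing", "tags": ["Infrastructure"]},
]

def retrieve(ontology_term, records):
    """Return all records, whatever their source or modality, tagged with the given term."""
    return [r for r in records if ontology_term in r["tags"]]

for r in retrieve("VehicleMovement", records):
    print(r["source"], "->", r["text"])
```

The point of the sketch is only that the common tag, not the native format of each source, is what links the records at retrieval time.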
For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, AI/ML makes it possible for those options and their framings—the choice architectures—to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies we pay them little notice. They are, as philosophers of technology put it, transparent to us—effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.
We describe a strategy that is being used for the horizontal integration of warfighter intelligence data within the framework of the US Army’s Distributed Common Ground System Standard Cloud (DSC) initiative. The strategy rests on the development of a set of ontologies that are being incrementally applied to bring about what we call the ‘semantic enhancement’ of data models used within each intelligence discipline. We show how the strategy can help to overcome familiar tendencies to stovepiping of intelligence data, and describe how it can be applied in an agile fashion to new data resources in ways that address immediate needs of intelligence analysts.
This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance, given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognise that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.
Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us with a universal basic income, would be able to do all our jobs. Also, we would over-rely upon AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue, firstly, that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all our jobs. If instead they are intelligent enough to replace us, the risk that they become unfriendly increases, given that they would not need us and humans would just compete for valuable resources. Their hostility will not promote our moral passivity. Secondly, the use of AI assistants in our personal lives will become a problem only if we rely on them for almost all our decision-making and motivation. But such a (maximally) pervasive level of dependence raises the question of whether humans would accept it, and consequently whether the crisis of passivity will arise.
[Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence” held in Oxford, the volume includes prominent researchers within the field from around the world.
This article reviews the reasons scholars hold that driverless cars and many other AI-equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines can be addressed by the kind of ethical choices made by human beings for millennia. Ergo, there is little need to teach machines ethics even if this could be done in the first place. Finally, the article points out that it is a grievous error to draw on extreme outlier scenarios—such as the Trolley narratives—as a basis for conceptualizing the ethical issues at hand.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents:
- Risks of general artificial intelligence, Vincent C. Müller, pages 297-301
- Autonomous technology and the greater human good, Steve Omohundro, pages 303-315
- The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342
- The path to more general artificial intelligence, Ted Goertzel, pages 343-354
- Limitations and risks of machine ethics, Miles Brundage, pages 355-372
- Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389
- GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403
- Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416
- Bounding the impact of AGI, András Kornai, pages 417-438
- Ethics of brain emulations, Anders Sandberg, pages 439-457
This essay describes computational semantic networks for a philosophical audience and surveys several approaches to semantic-network semantics. In particular, propositional semantic networks are discussed; it is argued that only a fully intensional, Meinongian semantics is appropriate for them; and several Meinongian systems are presented.
What characterizes most technical or theoretical accounts of memory is their reliance upon an internal storage model. Psychologists and neurophysiologists have suggested neural traces (either dynamic or static) as the mechanism for this storage, and designers of artificial intelligence have relied upon the same general model, instantiated magnetically or electronically instead of neurally, to do the same job. Both psychology and artificial intelligence design have heretofore relied, without much question, upon the idea that memory is to be understood as a matter of internal storage. In what follows, I shall first sketch the most important reasons for skepticism about this model, and I shall then propose an outline of an alternative way of talking about memory. This will provide an appropriate framework for suggesting a few implications for future work in artificial intelligence.
Artificial Intelligence is part of Industrial Revolution 4.0 and already exists today. This shows that the future has arrived and everyone must prepare for the implementation of Artificial Intelligence to face the transformation of the digital era, especially in education. The community service workshop was attended by 66 participants, namely students, teachers, and structural officials of SMK Negeri 2 Singkawang. The workshop was held using demonstrations, lectures, discussions, and question-and-answer sessions. This workshop provides information to teachers and students about the importance of Artificial Intelligence (AI) in the digital transformation process. Teachers and students at SMKN 2 Singkawang were shown that algorithms and artificial intelligence methods can be introduced simply by representing problems as simple solutions, with several examples of implementing artificial intelligence using Microsoft Excel and VBA macros.
The intelligence cycle is a set of processes used to provide useful information for decision-making; the related field of counterintelligence is tasked with thwarting the intelligence efforts of others. This basic model of the process of collecting and analyzing information can be applied broadly, but, like all basic models, it does not reflect the fullness of real-world operations. Through intelligence cycle activities, information is collected and assembled, raw information is transformed into processed information, analyzed, and made available to users. DOI: 10.13140/RG.2.2.25665.81760.
Intelligence services are currently focused on the fight against terrorism, leaving relatively few resources to monitor other security threats. For this reason, they often ignore external information activities that do not pose immediate threats to their government's interests. Very few external services operate globally; almost all other services focus on their immediate neighbors or regions. These services usually depend on relationships with the global services for information on areas beyond their immediate neighborhoods, and often sell their regional expertise in exchange for what they need globally. A feature of both internal and external services is that they behave like a caste. DOI: 10.13140/RG.2.2.25847.68006.
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
The importance of unconscious cognition is seeping into popular consciousness. A number of recent books bridging the academic world and the reading public stress that at least a portion of decision-making depends not on conscious reasoning, but instead on cognition that occurs below awareness. However, these books provide a limited perspective on how the unconscious mind works and the potential power of intuition. This essay is an effort to expand the picture. It is structured around the book that has garnered the most attention, Malcolm Gladwell’s Blink, but it also considers Gut Feelings by Gerd Gigerenzer and How Doctors Think by Jerome Groopman. These books help deepen the…
The relationship between emotional intelligence and personality has been taken into account in several models of emotional intelligence, such as the mixed models of Bar-On and Goleman. In these models, the components of emotional intelligence are similar to those of personality theory. This overlap is evident in empirical comparisons of the constructs. Even in the Mayer and Salovey model, significant empirical correlations with personality can be demonstrated. For most specialists, knowledge or cognitive intelligence cannot be the sole predictors of success; they have shown that the ability to predict a leader's performance depends on a range of competencies. DOI: 10.13140/RG.2.2.33674.18888.
Methodology, in intelligence, consists of the methods used to make decisions about threats, especially in the intelligence analysis discipline. The enormous amount of information collected by intelligence agencies often leaves them unable to analyze all of it; the US intelligence community collects over one billion pieces of information daily. The nature and characteristics of the information gathered, as well as its credibility, also have an impact on intelligence analysis. Clark proposed a methodology for analyzing information built around the target-centric intelligence cycle as an alternative to the traditional intelligence cycle. DOI: 10.13140/RG.2.2.23471.89769.
The model of human intelligence that is most widely adopted derives from psychometrics and behavioral genetics. This standard approach conceives intelligence as a general cognitive ability that is genetically highly heritable and describable using quantitative traits analysis. The paper analyzes intelligence within the debate on natural kinds and contends that the general intelligence conceptualization does not carve psychological nature at its joints. Moreover, I argue that this model assumes an essentialist perspective. As an alternative, I consider an HPC theory of intelligence and evaluate how it deals with essentialism and with intuitions coming from cognitive science. Finally, I highlight some concerns about the HPC model as well, and conclude by suggesting that it is unnecessary to treat intelligence as a kind in any sense.
Some prominent scientists and philosophers have stated openly that moral and political considerations should influence whether we accept or promulgate scientific theories. This widespread view has significantly influenced the development, and public perception, of intelligence research. Theories related to group differences in intelligence are often rejected a priori on explicitly moral grounds. Thus the idea, frequently expressed by commentators on science, that science is “self-correcting”—that hypotheses are simply abandoned when they are undermined by empirical evidence—may not be correct in all contexts. In this paper, documentation spanning from the early 1970s to the present is collected, which reveals the influence of scientists’ moral and political commitments on the study of intelligence. It is suggested that misrepresenting findings in science to achieve desirable social goals will ultimately harm both science and society.
Today, artificial intelligence, especially machine learning, is structurally dependent on human participation. Technologies such as Deep Learning (DL) leverage networked media infrastructures and human-machine interaction designs to harness users to provide training and verification data. The emergence of DL is therefore based on a fundamental socio-technological transformation of the relationship between humans and machines. Rather than simulating human intelligence, DL-based AIs capture human cognitive abilities, so they are hybrid human-machine apparatuses. From a perspective of media philosophy and social-theoretical critique, I differentiate five types of “media technologies of capture” in AI apparatuses and analyze them as forms of power relations between humans and machines. Finally, I argue that the current hype about AI implies a relational and distributed understanding of (human/artificial) intelligence, which I categorize under the term “cybernetic AI”. This form of AI manifests in socio-technological apparatuses that involve new modes of subjectivation, social control and discrimination of users.
Epistemic communities are informal networks of knowledge-based experts who influence decision-makers in defining the issues they face, identifying different solutions, and evaluating results. Epistemic communities have the greatest influence in conditions of political uncertainty and visibility, usually following a crisis or triggering event. Counterintelligence is primarily considered an analytical discipline, focusing on the study of intelligence services. The basis of all counterintelligence activities is the study of individual intelligence services, an analytical process aimed at understanding the behavior of foreign entities (formal mission, internal and external policy, history and myths within the entity, the people who compose it). DOI: 10.13140/RG.2.2.22837.52962.
Analysts work in the field of "knowledge". Intelligence refers to knowledge, and the types of problems addressed are knowledge problems; so we need a concept of work based on knowledge. We need a basic understanding of what we know and how we know it, what we do not know, and even what can be known and what cannot be known. The analysis should provide a useful basis for conceptualizing intelligence functions, of which the most important are "estimation" and "prediction". Intelligence itself, in its basic form, has a decision-making function. Intelligence analysis applies individual and collective cognitive methods to assess data and test assumptions in a secret socio-cultural context. DOI: 10.13140/RG.2.2.25298.40646.
The future rests under the sign of technology. Given the prevalence of technological neutrality and inevitabilism, most conceptualizations of the future tend to ignore moral problems. In this paper we argue that every choice about future technologies is a moral choice, and even the most technology-dominated scenarios of the future are, in fact, moral provocations we have to imagine solutions to. We begin by explaining the intricate connection between morality and the future. After a short excursion into the history of Artificial Intelligence, we analyse two possible scenarios, which show that building the future with technology is, first and foremost, a moral endeavor.
We describe on-going work on IAO-Intel, an information artifact ontology developed as part of a suite of ontologies designed to support the needs of the US Army intelligence community within the framework of the Distributed Common Ground System (DCGS-A). IAO-Intel provides a controlled, structured vocabulary for the consistent formulation of metadata about documents, images, emails and other carriers of information. It will provide a resource for uniform explication of the terms used in multiple existing military dictionaries, thesauri and metadata registries, thereby enhancing the degree to which the content formulated with their aid will be available to computational reasoning.
Marino & Merskin (2019) demonstrate that sheep are more cognitively complex than typically thought. We should be cautious in interpreting the implications of these results for welfare considerations, to avoid perpetuating mistaken beliefs about the moral value of intelligence as opposed to sentience. There are, however, still important ways in which this work can help improve sheep’s lives.
This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
This paper will address the moral implications of non-coercive interrogations in intelligence contexts. U.S. Army and CIA interrogation manuals define non-coercive interrogation as interrogation which avoids the use of physical pressure, relying instead on oral gambits. These methods, including some that involve deceit and emotional manipulation, would be mostly familiar to viewers of TV police dramas. As I see it, there are two questions relevant to this subject that need to be answered. First, under what circumstances, if any, may a state agent use deception or manipulation in the course of his or her duties? Second, if there are classes of persons who, by their activities, lose a legitimate expectation of honest dealing, how are state agents to proceed when the identity of such persons is unclear?
In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a prior grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post hoc interpretability that seems to be predominant in most recent literature.
The use of machine translation as artificial intelligence (AI) keeps increasing, and the world’s most popular translation tool is Google Translate (GT). This tool is not merely used for learning and obtaining information from foreign languages through translation but has also been used as a medium of interaction and communication in hospitals, airports and shopping centres. This paper aims to explore machine translation accuracy in translating French-Indonesian culinary texts (recipes). The samples of culinary text were taken from the internet. The research results show that the semiotic model of machine translation in GT is translation from the signifier (forms) of the source language to the signifier (forms) of the target language by emphasizing the equivalence of the concept (signified) of the source language and the target language. GT helps translate the existing French-Indonesian culinary text concepts through words, phrases and sentences. A problem encountered in machine translation for culinary texts is cultural equivalence: GT cannot accurately identify the cultural context of the source language and the target language, so the results take the form of literal translation. However, the accuracy of GT can be improved by refining the translation of cultural equivalents through words, phrases and sentences from one language to another.
In the Fall of 1983, I offered a junior/senior-level course in Philosophy of Artificial Intelligence, in the Department of Philosophy at SUNY Fredonia, after returning there from a year’s leave to study and do research in computer science and artificial intelligence (AI) at SUNY Buffalo. Of the 30 students enrolled, most were computer science majors, about a third had no computer background, and only a handful had studied any philosophy. (I might note that enrollments have subsequently increased in the Philosophy Department’s AI-related courses, such as logic, philosophy of mind, and epistemology, and that several computer science students have added philosophy as a second major.) This article describes that course, provides material for use in such a course, and offers a bibliography of relevant articles in the AI, cognitive science, and philosophical literature.
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals. Contents:
1 Vincent C. Müller, Editorial: Risks of Artificial Intelligence
2 Steve Omohundro, Autonomous Technology and the Greater Human Good
3 Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future
4 Ted Goertzel, The Path to More General Artificial Intelligence
5 Miles Brundage, Limitations and Risks of Machine Ethics
6 Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents
7 Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement
8 Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence
9 András Kornai, Bounding the Impact of AGI
10 Anders Sandberg, Ethics and Impact of Brain Emulations
11 Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff
12 Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
Philosophical discussion of Alan Turing’s writings on intelligence has mostly revolved around a single point made in a paper published in the journal Mind in 1950. This is unfortunate, for Turing’s reflections on machine (artificial) intelligence, human intelligence, and the relation between them were more extensive and sophisticated. They are seen to be extremely well-considered and sound in retrospect. Recently, IBM developed a question-answering computer (Watson) that could compete against humans on the game show Jeopardy! There are hopes it can be adapted to other contexts besides that game show, in the role of a collaborator of, rather than a competitor to, humans. Another, different, research project, an artificial intelligence program put into operation in 2010, is the machine learning program NELL (Never Ending Language Learning), which continuously ‘learns’ by ‘reading’ massive amounts of material on millions of web pages. Both of these recent endeavors in artificial intelligence rely to some extent on the integration of human guidance and feedback at various points in the machine’s learning process. In this paper, I examine Turing’s remarks on the development of intelligence used in various kinds of search, in light of the experience gained to date on these projects.
The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in the book (Hutter, 2005), an exciting, sound, and complete mathematical model for a superintelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in a JAIR paper (Veness et al., 2011). This practical breakthrough has resulted in some impressive applications, finally muting earlier critique that UAI is only a theory. For the first time, without providing any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even being given the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI.