This volume brings together the advanced research results obtained by the European COST Action 2102: “Cross Modal Analysis of Verbal and Nonverbal Communication”. The research published in this book was discussed at the 3rd joint EUCOGII-COST 2102 International Training School entitled “Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces: Theoretical and Practical Issues”, held in Caserta, Italy, on March 15-19, 2010.
The European Association for Cognitive Systems is the association resulting from the EUCog network, which has been active since 2006. It has ca. 1000 members and is currently chaired by Vincent C. Müller. We ran our annual conference on December 8-9, 2016, kindly hosted by the Technical University of Vienna with Markus Vincze as local chair. The invited speakers were David Vernon and Paul F.M.J. Verschure. Out of the 49 submissions for the meeting, we accepted 18 as papers and 25 as posters (after double-blind reviewing). Papers are published here as “full papers” or “short papers” while posters are published here as “short papers” or “abstracts”. Some of the papers presented at the conference will be published in a separate special volume on ‘Cognitive Robot Architectures’ with the journal Cognitive Systems Research. - RC, VCM, YS, MV.
This book constitutes the refereed proceedings of the COST 2102 International Training School on Cognitive Behavioural Systems held in Dresden, Germany, in February 2011. The 39 revised full papers presented were carefully reviewed and selected from numerous submissions. The volume presents new and original research results in the field of human-machine interaction inspired by cognitive behavioural features of human-human interaction. The themes covered include cognitive and computational social information processing; emotionally and socially believable Human-Computer Interaction (HCI) systems; behavioural and contextual analysis of interaction; embodiment, perception, linguistics, semantics and sentiment analysis in dialogues and interactions; and algorithmic and computational issues in the automatic recognition and synthesis of emotional states.
Proceedings of the papers presented at the Symposium on “Revisiting Turing and his Test: Comprehensiveness, Qualia, and the Real World” at the AISB/IACAP World Congress 2012, held in the Turing year, 2-6 July 2012 at the University of Birmingham, UK. Ten papers. - http://www.pt-ai.org/turing-test - Daniel Devatman Hromada: From Taxonomy of Turing Test-Consistent Scenarios Towards Attribution of Legal Status to Meta-modular Artificial Autonomous Agents - Michael Zillich: My Robot is Smarter than Your Robot: On the Need for a Total Turing Test for Robots - Adam Linson, Chris Dobbyn and Robin Laney: Interactive Intelligence: Behaviour-based AI, Musical HCI and the Turing Test - Javier Insa, Jose Hernandez-Orallo, Sergio España, David Dowe and M. Victoria Hernandez-Lloreda: The anYnt Project Intelligence Test (Demo) - Jose Hernandez-Orallo, Javier Insa, David Dowe and Bill Hibbard: Turing Machines and Recursive Turing Tests - Francesco Bianchini and Domenica Bruni: What Language for Turing Test in the Age of Qualia? - Paul Schweizer: Could there be a Turing Test for Qualia? - Antonio Chella and Riccardo Manzotti: Jazz and Machine Consciousness: Towards a New Turing Test - William York and Jerry Swan: Taking Turing Seriously (But Not Literally) - Hajo Greif: Laws of Form and the Force of Function: Variations on the Turing Test.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good, Steve Omohundro, pages 303-315 - The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342 - The path to more general artificial intelligence, Ted Goertzel, pages 343-354 - Limitations and risks of machine ethics, Miles Brundage, pages 355-372 - Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389 - GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403 - Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416 - Bounding the impact of AGI, András Kornai, pages 417-438 - Ethics of brain emulations, Anders Sandberg, pages 439-457.
Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals. - 1 Vincent C. Müller, Editorial: Risks of Artificial Intelligence - 2 Steve Omohundro, Autonomous Technology and the Greater Human Good - 3 Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future - 4 Ted Goertzel, The Path to More General Artificial Intelligence - 5 Miles Brundage, Limitations and Risks of Machine Ethics - 6 Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents - 7 Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement - 8 Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence - 9 András Kornai, Bounding the Impact of AGI - 10 Anders Sandberg, Ethics and Impact of Brain Emulations - 11 Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff - 12 Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI.
Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used by humans. This includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). - For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies and finally, what policy consequences may be drawn.
There is, in some quarters, concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
This volume offers very selected papers from the 2014 conference of the “International Association for Computing and Philosophy” (IACAP) - a conference tradition of 28 years. - Table of Contents - 0 Vincent C. Müller: Editorial - 1) Philosophy of computing - 1 Çem Bozsahin: What is a computational constraint? - 2 Joe Dewhurst: Computing Mechanisms and Autopoietic Systems - 3 Vincenzo Fano, Pierluigi Graziani, Roberto Macrelli and Gino Tarozzi: Are Gandy Machines really local? - 4 Doukas Kapantais: A refutation of the Church-Turing thesis according to some interpretation of what the thesis says - 5 Paul Schweizer: In What Sense Does the Brain Compute? - 2) Philosophy of computer science & discovery - 6 Mark Addis, Peter Sozou, Peter C. R. Lane and Fernand Gobet: Computational Scientific Discovery and Cognitive Science Theories - 7 Nicola Angius and Petros Stefaneas: Discovering Empirical Theories of Modular Software Systems. An Algebraic Approach - 8 Selmer Bringsjord, John Licato, Daniel Arista, Naveen Sundar Govindarajulu and Paul Bello: Introducing the Doxastically Centered Approach to Formalizing Relevance Bonds in Conditionals - 9 Orly Stettiner: From Silico to Vitro: Computational Models of Complex Biological Systems Reveal Real-world Emergent Phenomena - 3) Philosophy of cognition & intelligence - 10 Douglas Campbell: Why We Shouldn’t Reason Classically, and the Implications for Artificial Intelligence - 11 Stefano Franchi: Cognition as Higher Order Regulation - 12 Marcello Guarini: Eliminativisms, Languages of Thought, & the Philosophy of Computational Cognitive Modeling - 13 Marcin Miłkowski: A Mechanistic Account of Computational Explanation in Cognitive Science and Computational Neuroscience - 14 Alex Tillas: Internal supervision & clustering: A new lesson from ‘old’ findings? - 4) Computing & society - 15 Vasileios Galanos: Floridi/Flusser: Parallel Lives in Hyper/Posthistory - 16 Paul Bello: Machine Ethics and Modal Psychology - 17 Marty J. Wolf and Nir Fresco: My Liver Is Broken, Can You Print Me a New One? - 18 Marty J. Wolf, Frances Grodzinsky and Keith W. Miller: Robots, Ethics and Software – FOSS vs. Proprietary Licenses.
[Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence” held in Oxford, the volume includes prominent researchers within the field from around the world.
Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, they do not take responsibility away from humans; in fact they increase the ability to hold humans accountable for war crimes. Using LAWS in war would probably reduce human suffering overall. Finally, the availability of LAWS would probably not increase the probability of war or other lethal conflict—especially as compared to extant remote-controlled weapons. The widespread fear of killer robots is unfounded: they are probably good news.
Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find the suggestions ultimately unmotivated, the discussion shows that our epistemic condition with respect to the moral status of others does raise problems, and that the human tendency to empathise with things that do not have moral status should be taken seriously—we suggest that it produces a “derived moral status”. Finally, it turns out that there is typically no individual in real AI that could even be said to be the bearer of moral status. Overall, there is no reason to think that robot rights are an issue now.
The central concept of this edited volume is “blended cognition”, the natural skill of human beings for constantly combining different heuristics during their various task-solving activities. Something that was sometimes regarded as a problem, as “bad reasoning”, is now the central key to understanding the richness, adaptability and creativity of human cognition. The topic of this book connects in a significant way with the disciplines of psychology, neurology, anthropology, philosophy, logic, engineering, and AI. In a nutshell: understanding humans better in order to design better machines. It contains a Preface by the editors and 12 chapters.
Introduction - 1 Explanation and Reference (1973) - 2 Language and Reality (1975) - 3 What is ‘Realism’? (1975) - 4 Models and Reality (1980) - 5 Reference and Truth (1980) - 6 How to Be an Internal Realist and a Transcendental Idealist at the Same Time (1980) - 7 Why There Isn’t a Ready-Made World (1982) - 8 What Are Philosophers For? (1986) - 9 Realism with a Human Face (1988/90) - 10 Irrealism and Deconstruction (1992) - Bibliography of Hilary Putnam’s Writings - References - Index.
The philosophy of AI has seen some changes, in particular: 1) AI is moving away from cognitive science, and 2) the long-term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
The contribution of the body to cognition and control in natural and artificial agents is increasingly described as “off-loading computation from the brain to the body”, where the body is said to perform “morphological computation”. Our investigation of four characteristic cases of morphological computation in animals and robots shows that the ‘off-loading’ perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and the rare cases of (3) morphological computation proper, such as ‘reservoir computing’, where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: The question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control – how it contributes to the overall ‘orchestration’ of intelligent behaviour.
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - In some quarters, there is intense concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
In October 2011, the “2nd European Network for Cognitive Systems, Robotics and Interaction”, EUCogII, held its meeting in Groningen on “Autonomous activity in real-world environments”, organized by Tjeerd Andringa and myself. This is a brief personal report on why we thought autonomy in real-world environments is central for cognitive systems research and what I think I learned about it. --- The theses that crystallized are that a) autonomy is a relative property and a matter of degree, b) increasing autonomy of an artificial system from its makers and users is a necessary feature of increasingly intelligent systems that can deal with the real world and c) more such autonomy means less control but at the same time improved interaction with the system.
The theory that all processes in the universe are computational is attractive in its promise to provide an understandable theory of everything. I want to suggest here that this pancomputationalism is not sufficiently clear on which problem it is trying to solve, and how. I propose two interpretations of pancomputationalism as a theory: I) the world is a computer and II) the world can be described as a computer. The first implies a thesis of supervenience of the physical over computation and is thus reduced ad absurdum. The second is underdetermined by the world, and thus equally unsuccessful as theory. Finally, I suggest that pancomputationalism as metaphor can be useful. – At the Paderborn workshop in 2008, this paper was presented as a commentary to the relevant paper by Gordana Dodig-Crnkovic, "Info-Computationalism and Philosophical Aspects of Research in Information Sciences".
This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “new AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents—which, however, turns out to be crucial. Some more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks’ proposal for cognition without representation appears promising for full-blown intelligent agents—though not for conscious agents.
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
Cognition is commonly taken to be computational manipulation of representations. These representations are assumed to be digital, but it is not usually specified what that means and what relevance it has for the theory. I propose a specification for being a digital state in a digital system, especially a digital computational system. The specification shows that identification of digital states requires functional directedness, either for someone or for the system of which it is a part. In the case of digital representations, the state must be a token of a representational type, where the function of the type is to represent. [An earlier version of this paper was discussed in the web-conference "Interdisciplines" https://web.archive.org/web/20100221125700/http://www.interdisciplines.org/adaptation/papers/7 ].
Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6) and the singularity (2.7). Then we look at AI systems as subjects, i.e. when ethics is for the AI systems themselves in machine ethics (2.8) and artificial moral agency (2.9). Finally we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, we outline existing positions and arguments, then we analyse how this plays out with current technologies and finally what policy consequences may be drawn.
Digital ethics, also known as computer ethics or information ethics, is now a lively field that draws a lot of attention, but how did it come about and what were the developments that led to its existence? What are the traditions, the concerns, the technological and social developments that pushed digital ethics? How did ethical issues change with the digitalisation of human life? How did the traditional discipline of philosophy respond? The article provides an overview, proposing historical epochs: ‘pre-modernity’ prior to digital computation over data, via the ‘modernity’ of digital data processing to our present ‘post-modernity’ when not only the data is digital, but our lives themselves are largely digital. In each section, the situation in technology and society is sketched, and then the developments in digital ethics are explained. Finally, a brief outlook is provided.
Developments in artificial intelligence (AI) are exciting. But where is this journey heading? I present an analysis according to which exponential growth in computing speed and in data has been the decisive factor in progress so far. I then explain under which assumptions this growth will continue to enable progress: 1) intelligence is one-dimensional and measurable, 2) cognitive science is not needed for AI, 3) computation is sufficient for cognition, 4) current techniques and architectures are sufficiently scalable, 5) Technological Readiness Levels (TRL) prove feasible. These assumptions will turn out to be dubious.
If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically evaluating such proposals.
The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be joined as premises and the argument for the existential risk of AI turns out invalid. If the interpretation is incorrect and both premises use the same notion of intelligence, then at least one of the premises is false and the orthogonality thesis remains itself orthogonal to the argument to existential risk from AI. In either case, the standard argument for existential risk from AI is not sound.—Having said that, there remains a risk of instrumental AI to cause very significant damage if designed or used badly, though this is not due to superintelligence or a singularity.
The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is a digital computer, as computationalism claims, then it can be computing either over meaningful symbols or over meaningless symbols. If it is computing over meaningful symbols its functioning presupposes the existence of meaningful symbols in the system, i.e. it implies semantic nativism. If the mind is computing over meaningless symbols, no intentional cognitive processes are available prior to symbol grounding. In this case, no symbol grounding could take place since any grounding presupposes intentional cognitive processes. So, whether computing in the mind is over meaningless or over meaningful symbols, computationalism implies semantic nativism.
This paper investigates the view that digital hypercomputing is a good reason for rejection or re-interpretation of the Church-Turing thesis. After suggesting that such re-interpretation is historically problematic and often involves attacking a straw man (the ‘maximality thesis’), it discusses proposals for digital hypercomputing with Zeno-machines, i.e. computing machines that compute an infinite number of computing steps in finite time, thus performing supertasks. It argues that effective computing with Zeno-machines falls into a dilemma: either they are specified such that they do not have output states, or they are specified such that they do have output states, but involve contradiction. Repairs through non-effective methods or special rules for semi-decidable problems are sought, but not found. The paper concludes that hypercomputing supertasks are impossible in the actual world and thus no reason for rejection of the Church-Turing thesis in its traditional interpretation.
I see four symbol grounding problems: 1) How can a purely computational mind acquire meaningful symbols? 2) How can we get a computational robot to show the right linguistic behavior? These two are misleading. I suggest an 'easy' and a 'hard' problem: 3) How can we explain and re-produce the behavioral ability and function of meaning in artificial computational agents? 4) How does physics give rise to meaning?
Floridi and Taddeo propose a condition of “zero semantic commitment” for solutions to the grounding problem, and a solution to it. I argue briefly that their condition cannot be fulfilled, not even by their own solution. After a look at Luc Steels' very different competing suggestion, I suggest that we need to re-think what the problem is and what role the ‘goals’ in a system play in formulating the problem. On the basis of a proper understanding of computing, I come to the conclusion that the only sensible grounding problem is how we can explain and re-produce the behavioral ability and function of meaning in artificial computational agents.
Report for "The Reasoner" on the conference "Philosophy and Theory of Artificial Intelligence", 3 & 4 October 2011, Thessaloniki, Anatolia College/ACT, http://www.pt-ai.org. --- Organization: Vincent C. Müller, Professor of Philosophy at ACT & James Martin Fellow, Oxford http://www.sophia.de --- Sponsors: EUCogII, Oxford-FutureTech, AAAI, ACM-SIGART, IACAP, ECCAI.
May lethal autonomous weapons systems—‘killer robots’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the crucial moral question is not one of responsibility. Rather, it is whether the technology can satisfy the requirements of fairness in the re-distribution of risk. Not only is this possible in principle, but some killer robots will actually satisfy these requirements. An implication of our argument is that there is a public responsibility to regulate killer robots’ design and manufacture.
This is the short version, in French translation by Anne Querrien, of the originally jointly authored paper: Müller, Vincent C., ‘Autonomous killer robots are probably good news’, in Ezio Di Nucci and Filippo Santoni de Sio, Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons. - - - The article presents a new weapons system based on robots that may soon be deployed. Unlike drones, which are operated at a distance but involve a significant share of human judgment, these are machines programmed to defend, attack, or kill autonomously. The authors, philosophers, prefer to warn of their imminent proliferation and to obtain their regulation by the United Nations. An international campaign instead proposes their prohibition.
There is much discussion about whether the human mind is a computer, whether the human brain could be emulated on a computer, and whether all physical entities are computers (pancomputationalism). These discussions, and others, require criteria for what is digital. I propose that a state is digital if and only if it is a token of a type that serves a particular function - typically a representational function for the system. This proposal is made on a syntactic level, assuming three levels of description (physical, syntactic, semantic). It suggests that being digital is a matter of discovery, or rather a matter of how we wish to describe the world, if a functional description can be assumed. Given the criterion provided and the necessary empirical research, we should be in a position to decide of a given system (e.g. the human brain) whether it is a digital system and can thus be reproduced in a different digital system (since digital systems allow multiple realization).
Concordance of the poetic works of Giorgos Seferis that presents all the principal “words” of the texts in an alphabetical list, stating how often each word occurs and giving a precise location and a relevant piece of text for each occurrence. We found ca. 9,500 different Greek words in 39,000 different occurrences, so our concordance has ca. 50,000 lines of text. The technical procedure required four main steps: text entry and tagging, production of the concordance, correction of the contexts, formatting for print.
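The concordance-production step described above can be sketched in a few lines. The following Python function is only an illustration of the general technique (a keyword-in-context index); the function name, context width, and sample text are not from the original project.

```python
import re
from collections import defaultdict

def build_concordance(text, context=30):
    """For each word in the text, record every occurrence together with
    a short surrounding context snippet and its character position."""
    entries = defaultdict(list)
    for match in re.finditer(r"\w+", text):
        word = match.group().lower()
        start = max(0, match.start() - context)
        end = min(len(text), match.end() + context)
        # Store (position, context snippet) for this occurrence.
        entries[word].append((match.start(), text[start:end].replace("\n", " ")))
    return entries

# Usage: an alphabetical listing with frequency and contexts per word.
sample = "In the sea caves there is a thirst, there is a love."
conc = build_concordance(sample)
for word in sorted(conc):
    print(word, len(conc[word]), [snippet for _, snippet in conc[word]])
```

In the real project the input would be the tagged text from step one, and the locations would be poem and line references rather than character offsets.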
The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, consciousness, humankind, life, etc. – and at the same time it has contributed substantially to answering these questions. There is thus a substantial tradition of work, both on AI by philosophers and of theory within AI itself. - The volume contains papers by Bostrom, Dreyfus, Gomila, O'Regan and Shagrir.
The paper discusses the extended mind thesis with a view to the notions of “agent” and of “mind”, while helping to clarify the relation between “embodiment” and the “extended mind”. I will suggest that the extended mind thesis constitutes a reductio ad absurdum of the notion of ‘mind’; the consequence of the extended mind debate should be to drop the notion of the mind altogether – rather than entering the discussion of how extended it is.
We discuss at some length evidence from cognitive science suggesting that the representations of objects based on spatiotemporal information and featural information retrieved bottom-up from a visual scene precede representations of objects that include conceptual information. We argue that a distinction can be drawn between representations with conceptual and nonconceptual content. The distinction is based on perceptual mechanisms that retrieve information in conceptually unmediated ways. The representational contents of the states induced by these mechanisms that are available to a type of awareness called phenomenal awareness constitute the phenomenal content of experience. The phenomenal content of perception contains the existence of objects as separate things that persist in time, spatiotemporal information, and information regarding relative spatial relations, motion, surface properties, shape, size, orientation, color, and their functional properties.
In this paper I want to propose an argument to support Jerry Fodor’s thesis (Fodor 1983) that input systems are modular and thus informationally encapsulated. The argument starts with the suggestion that there is a “grounding problem” in perception, i.e. that there is a problem in explaining how perception that can yield a visual experience is possible, how sensation can become meaningful perception of something for the subject. Given that visual experience is actually possible, this invites a transcendental argument that explains the conditions of its possibility. I propose that one of these conditions is the existence of a visual module in Fodor’s sense that allows the step from sensation to object-identifying perception, thus enabling visual experience. It seems to follow that there is informationally encapsulated nonconceptual content in visual perception.
Epistemic theories of truth, such as those presumed to be typical for anti-realism, can be characterised as saying that what is true can be known in principle: p → ◊Kp. However, with statements of the form “p & ¬Kp”, a contradiction arises if they are both true and known. Analysis of the nature of the paradox shows that such statements refute epistemic theories of truth only if the anti-realist motivation for epistemic theories of truth is not taken into account. The motivation, a link between understandability and meaningfulness, suggests changing the above principle and restricting the theory to logically simple sentences, in which case the paradox does not arise. This suggestion also allows us to see the deep philosophical problems for anti-realism at which those counterexamples are pointing.
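The paradox in question is Fitch's knowability paradox; a standard sketch of the derivation, using the usual distribution and factivity principles for K (this reconstruction is the textbook version, not taken from the paper itself):

```latex
\begin{align*}
&\text{Knowability principle:} && p \rightarrow \Diamond K p \\
&\text{Instantiate with } p \wedge \neg K p: && (p \wedge \neg K p) \rightarrow \Diamond K (p \wedge \neg K p) \\
&\text{Distribution of } K: && K(p \wedge \neg K p) \rightarrow (K p \wedge K \neg K p) \\
&\text{Factivity of } K: && K \neg K p \rightarrow \neg K p \\
&\text{Hence:} && K(p \wedge \neg K p) \rightarrow (K p \wedge \neg K p), \text{ a contradiction} \\
&\text{Therefore:} && \neg \Diamond K (p \wedge \neg K p)
\end{align*}
```

So if any truth is in fact unknown (p & ¬Kp holds for some p), that conjunction is a truth that cannot be known, contradicting the knowability principle.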
From a European perspective the US debate about gun control is puzzling because we have no such debate: It seems obvious to us that dangerous weapons need tight control and that ‘guns’ fall under that category. I suggest that this difference occurs due to different habits that generate different attitudes and support this explanation with an analogy to the habits about knives. I conclude that it is plausible that individual knife-people or gun-people do not want tight regulatory legislation—but tight knife and gun legislation is morally obligatory anyway. We need to give up our habits for the greater good.
The paper argues that the reference of perceptual demonstratives is fixed in a causal nondescriptive way through the nonconceptual content of perception. That content consists first in spatiotemporal information establishing the existence of a separate persistent object retrieved from a visual scene by the perceptual object segmentation processes that open an object-file for that object. Nonconceptual content also consists in other transducible information, that is, information that is retrieved directly in a bottom-up way from the scene (motion, shape, etc.). The nonconceptual content of the mental states induced when one uses a perceptual demonstrative constitutes the mode of presentation of the perceptual demonstrative that individuates but does not identify the object of perceptual awareness and allows reference to it. On that account, perceptual demonstratives put us in a de re relationship with objects in the world through the nonconceptual information retrieved directly from the objects in the environment.
"Data mining is not an invasion of privacy because access to data is only by machines, not by people": this is the argument that is investigated here. The current importance of this problem is developed in a case study of data mining in the USA for counterterrorism and other surveillance purposes. After a clarification of the relevant nature of privacy, it is argued that access by machines cannot warrant the access to further information, since the analysis will have to be (...) made either by humans or by machines that understand. It concludes that the current data mining violates the right to privacy and should be subject to the standard legal constraints for access to private information by people. (shrink)
While the 2010 EPSRC principles for robotics state a set of five rules about what ‘should’ be done, I argue that they should differentiate between legal obligations and ethical demands. Only if we make this difference can we state clearly what the legal obligations already are, and what additional ethical demands we want to make. I provide suggestions on how to revise the rules in this light and how to make them more structured.