Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call "bibliotechnism", is that LLMs often do generate entirely novel text. We begin by defending bibliotechnism against this challenge, showing how novel text may be meaningful only in a derivative sense, so that the content of this generated text depends in an important sense on the content of original human text. We go on to present a different, novel challenge for bibliotechnism, stemming from examples in which LLMs generate “novel reference”, using novel names to refer to novel entities. Such examples could be smoothly explained if LLMs were not cultural technologies but possessed a limited form of agency (beliefs, desires, and intentions). According to interpretationism in the philosophy of mind, a system has beliefs, desires, and intentions if and only if its behavior is well explained by the hypothesis that it has such states. In line with this view, we argue that cases of novel reference provide evidence that LLMs do in fact have beliefs, desires, and intentions, and thus have a limited form of agency.
The rise of Artificial Intelligence (AI) has produced prophets and prophecies announcing that the age of artificial consciousness is near. Not only does the mere idea that any machine could ever possess the full potential of human consciousness suggest that AI could replace the role of God in the future, it also puts into question the fundamental human right to freedom and dignity. This position paper takes the stand that, in the light of all we currently know about brain evolution and the never-ending formation of adaptive neural circuitry for learning, memory, decision making and, ultimately, fully conscious reasoning and creativity in the human species, the idea of an artificial consciousness appears misconceived. The paper highlights some of the major reasons why. While awareness of external stimuli for processes such as perception, recognition, and operational problem solving is under the direct control of functionally specific brain networks associated with sensory and cognitive functions across animal species, consciousness is a unique property of the human mind.
While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for robot blame, namely the folk's willingness to ascribe inculpating mental states or "mens rea" to robots. In a vignette-based experiment (N=513), we presented participants with a situation in which an agent knowingly runs the risk of bringing about substantial harm. We manipulated agent type (human v. group agent v. AI-driven robot) and outcome (neutral v. bad), and measured both moral judgment (wrongness of the action and blameworthiness of the agent) and mental states attributed to the agent (recklessness and the desire to inflict harm). We found that (i) judgments of wrongness and blame were relatively similar across agent types, possibly because (ii) attributions of mental states were, as suspected, similar across agent types. This raised the question, also explored in the experiment, of whether people attribute knowledge and desire to robots in a merely metaphorical way (e.g., the robot "knew" rather than really knew). However, (iii), according to our data people were unwilling to downgrade their mens rea attributions to a merely metaphorical sense when given the chance. Finally, (iv), we report a surprising and novel finding, which we call the inverse outcome effect on robot blame: People were less willing to blame artificial agents for bad outcomes than for neutral outcomes. This suggests that they are implicitly aware of the dangers of overattributing blame to robots when harm comes to pass, such as inappropriately letting the responsible human agent off the moral hook.
Does the capacity to think require the capacity to sense? A lively debate on this topic runs throughout the history of philosophy and now animates discussions of artificial intelligence. Many have argued that AI systems such as large language models cannot think and understand if they lack sensory grounding. I argue that thought does not require sensory grounding: there can be pure thinkers who can think without any sensory capacities. As a result, the absence of sensory grounding does not entail that large language models cannot think or understand. I also consider to what extent quasi-sensory grounding can at least boost the performance of a language model.
This short article is a “conversation” in which an android, Mort, replies to Richard Marc Rubin’s android named Sol in “The Robot Sol Explains Laughter to His Android Brethren” (The Philosophy of Humor Yearbook, 2022). There Sol offers an explanation for how androids can laugh, largely as a reaction to frustration and unmet expectations: “my account says that laughter is one of four ways of dealing with frustration, difficulties, and insults. It is a way of getting by. If you need to label my conception of humor, you might call it an adjustment theory, or perhaps an accommodation theory” (Rubin 256). Mort is intrigued but has his doubts. Mort contends that synesthesia likely has something to do with how humans develop humor. Perhaps, he considers, human designers crossed the androids' sensory wires, allowing them to smash up concepts that do not otherwise seem to fit together, reconstruct incongruities in a manner that is not nonsensical, express the aha-eureka moment of a serendipitous discovery, and even feel blue when they fail at a given task that they were expected to complete.
This paper investigates the nature of dispositional properties in the context of artificial intelligence systems. We start by examining the distinctive features of natural dispositions according to criteria introduced by McGeer (2018) for distinguishing between object-centered dispositions (i.e., properties like ‘fragility’) and agent-based abilities, including both ‘habits’ and ‘skills’ (a.k.a. ‘intelligent capacities’, Ryle 1949). We then explore to what extent the distinction applies to artificial dispositions in the context of two very different kinds of artificial systems, one based on rule-based classical logic and the other on reinforcement learning. Here we defend three substantive claims. First, we argue that artificial systems are not equal in the kinds of dispositional properties they instantiate. In particular, we show that logical systems instantiate merely object-centered dispositions whereas reinforcement learning systems allow for the instantiation of agent-based abilities. Second, we explore the similarities and differences between the agent-centered abilities of artificial systems and those of humans, especially as they relate to the important distinction made in the human case between habits and skills/intelligent capacities. The upshot is that the agent-centered abilities of truly intelligent artificial systems are distinctive enough to constitute a third type of agent-based ability — blended agent-based ability — raising substantial questions as to how we understand the nature of their agency. Third, we explore one aspect of this problem, focusing on whether systems of this type are properly considered ‘responsible agents’, at least in some contexts and for some purposes. The ramifications of our analysis will turn out to be directly relevant to various ethical concerns of artificial intelligence.
We are residents of the internet, sketching through it the contours of the world we desire, and playing characters as far removed from ourselves as can be; we falsely realize dreams that may be out of reach, and we believe one another's lies and idealized self-portraits; we enjoy words without deeds, hearts without emotions, paradises without bliss, tongues in the darkness of closed mouths that speak through the movements of fingers, and a freedom fenced in by illusion. Without the internet, most people would certainly appear at their natural size, a size we do not know, or rather know and choose to ignore! There is no doubt that the emergence of the internet and the widening scope of its uses represent a unique and growing event in humanity's civilizational march, changing the way human beings live their lives. And yet no definition of the internet to date includes any reference to virtual reality, even though we already live with it and in it; we interact and exchange information, buy and sell, play, laugh and cry, and conduct the finest details of our lives online; everything we once did through bodily movement in physical space we have now delegated to our minds! Perhaps this is what the word "metaverse" captures, a term first used by the American science-fiction writer Neal Stephenson in his novel Snow Crash (1992) to denote humans interacting with one another and with software in a three-dimensional virtual space resembling the actual world.
We no longer need a magic lamp to rub with our fingers so that a genie emerges, able to serve us and meet some of our most important worldly demands, nor do we need incantations to enter the world of magic and fantasy; the genie has already emerged from its computational bottle, from the depths of the programming and artificial-intelligence laboratories, through symbolic mathematical incantations (code) that it quickly managed to devour and digest, becoming capable of producing other, similar incantations, and perhaps better ones! The "Generative Pre-trained Transformer", known for short as "GPT", has emerged brandishing enormous potential for research, services, and production, while threatening to wipe out entire sectors of professions and jobs, and to erode the scientific-research skills of school and university students!
The computational theory of mind (or computationalism) tells us that our minds work like computers: they receive inputs from the external world and then, by means of algorithms, produce outputs in the form of mental states or actions. In other words, the theory holds that the brain is nothing more than an information processor, with the mind as "software" running on the "hardware" of the brain. And if the mind is merely software physically computed by brains, is it not then logically possible to transfer it to any computer, just as we transfer any other software?
"Emotional artificial intelligence", also known as "affective computing", "human-centered artificial intelligence", and "social artificial intelligence", is a relatively new concept (its technologies are still under development). It is a field of computer science that aims to develop machines capable of understanding human emotions. The concept simply refers to detecting and programming human emotions in order to improve artificial intelligence and expand the scope of its use, so that robots are not limited to analyzing and interacting with the cognitive (logical) aspects of human communication, but extend their analysis and interaction to its emotional aspects as well.
Machine ethics is the part of the ethics of artificial intelligence concerned with adding or ensuring moral behavior in human-made machines that use artificial intelligence, and it differs from other ethical fields related to engineering and technology. Machine ethics should not be confused with computer ethics, for example, since the latter focuses on the moral issues associated with human use of computers; it must also be distinguished from the philosophy of technology, which is concerned with epistemological, ontological, and ethical approaches, and with the major social, economic, and political effects of technological practices in all their variety. Machine ethics, by contrast, is concerned with ensuring that the behavior of machines toward their human users, and perhaps toward other machines as well, is morally acceptable. The ethics we mean here, then, is an ethics that machines themselves must possess as things, not one for humans as manufacturers and users of these machines!
This paper investigates the responses of GPT-4, a state-of-the-art AI language model, to ten prominent philosophical paradoxes, and evaluates its capacity to reason and make decisions in complex and uncertain situations. In addition to analyzing GPT-4's solutions to the paradoxes, this paper assesses the model's Theory of Mind (ToM) capabilities by testing its understanding of mental states, intentions, and beliefs in scenarios ranging from classic ToM tests to complex, real-world simulations. Through these tests, we gain insight into AI's potential for social reasoning and its capacity for more sophisticated forms of human-AI interaction. The paper also explores the limitations and biases of AI-generated reasoning and its implications for our comprehension of complex philosophical problems.
It is indubitable that machines with artificial intelligence (AI) will be an essential component in humans’ quest to become a spacefaring civilization. Most would agree that long-distance space travel and the colonization of Mars will not be possible without adequately developed AI. Machines with AI have a normative function, but some argue that their behavior can also be evaluated from the perspective of ethical norms. This essay is based on the assumption that machine ethics is an essential philosophical perspective in realizing the aim of humanity becoming a spacefaring civilization. In this essay, I explore two questions in the field of machine ethics that I believe to be relevant to the role AI will play in long-distance space travel. The first is whether moral theory should be extended to include machines with AI; the second is whether machines can be fully ethical agents. In this essay, I define AI and then discuss the difference between implicit, explicit and full ethical agents in relation to machines with AI. I then present the argument that the inclusion of moral theory is essential in the development of machines with AI. Without adequate inclusion of moral theory in the design of AI, it may pose an existential threat to humanity, especially in the development of super-intelligent machines. I also highlight that conceptual clarity is essential in the field of machine ethics and that the choice of the conceptual foundation that informs AI research and development has ethical implications, especially in the case of super-intelligent machines. This essay is an exploratory and speculative philosophical analysis of certain aspects of machine ethics relevant to long-distance space travel; it does not attempt to provide definitive answers to the questions posed, but instead aims to bring attention to what I deem important considerations.
As testing of ChatGPT has shown, this form of artificial intelligence has the potential to develop, which requires improving the software and other technical infrastructure that allows it to learn, i.e., to acquire and use new knowledge, to contact its developers with suggestions for improvement, or to reprogram itself without their participation.
This is a review of Mind Design II: Philosophy, Psychology, and Artificial Intelligence, edited by John Haugeland and published by The MIT Press in 1997.
Should we fear a future in which the already tricky world of academic publishing is increasingly crowded out by super-intelligent artificial general intelligence (AGI) systems writing papers on phenomenology and ethics? What are the chances that AGI advances to a stage where a human philosophy instructor is similarly removed from the equation? If Jobst Landgrebe and Barry Smith are correct, we have nothing to fear.
Originally published in PhilosophyNews, July 19, 2022. This new series, What’s Happening in Philosophy (WHiP)-The Philosophers, aims to provide a monthly snapshot of various trends and discussions happening across the discipline. In this inaugural post, we begin with a harrowing tale from David Edmonds involving the murder of the German philosopher Moritz Schlick, a guiding spirit of the Vienna Circle and a logical positivist thinker. Next up is Steven Nadler’s take on several biographies of the ‘father of modern philosophy’ in his new paper, The Many Lives of René Descartes. Lastly, questions around AI in academia come up in an article from Scientific American.
The objective of this paper is to provide a critical analysis of the Kantian notion of freedom; its significance in the contemporary debate on free will and determinism; and the possibility of autonomy of artificial agency within the Kantian paradigm of autonomy. Kant's resolution of the third antinomy by positing the ground in the noumenal self resolves the problem of the antinomies; however, it invites an explanatory gap between phenomenality and the noumenal self, even if he has successfully established the compatibility of natural causality and non-natural causality through his transcendental argument. This paper is also devoted to establishing the plausibility of the claim that the Kantian reduction of phenomenality has served half of the purpose of AI scientists regarding the possibility of artificial autonomous agency.
A. Newell and H. A. Simon were two of the most influential scientists in the emerging field of artificial intelligence (AI) from the late 1950s through to the early 1990s. This paper reviews their crucial contribution to this field, namely to symbolic AI. This contribution was constituted mostly by their quest for the implementation of general intelligence and (commonsense) knowledge in artificial thinking or reasoning artifacts, a project they shared with many other scientists but that in their case was theoretically based on the idiosyncratic notions of symbol systems and the representational abilities they give rise to, in particular with respect to knowledge. While focusing on the period 1956-1982, this review cites both earlier and later literature and it attempts to make visible their potential relevance to today's greatest unifying AI challenge, to wit, the design of wholly autonomous artificial agents (a.k.a. robots) that are not only rational and ethical, but also self-conscious.
In light of the pervasive development of new technologies such as NBIC (nanotechnology, biotechnology, information technology, and cognitive science), it is imperative to produce a coherent and deep reflection on human nature, on human intelligence, and on the limits of both, in order to respond successfully to certain technical arguments that strive to depict humanity as a purely mechanical system. For this purpose, it is interesting to refer to the epistemology and metaphysics of Thomas Aquinas as a stable philosophical reference on human nature. Indeed, we find in the works of Aquinas some of the most productive elements that could form a basis for our deeper understanding of, and possibly even solutions to, some of the most perplexing questions raised in our times by the existence of AI.
In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human paintings as art to roughly the same extent. However, people are much less willing to consider robots as artists than humans, which is partially explained by the fact that they are less disposed to attribute artistic intentions to robots.
Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system runs a risk of poisoning people by using a novel type of fertilizer. Manipulating the computational (or quasi-cognitive) abilities of the AI system in a between-subjects design, we tested people’s willingness to ascribe knowledge of a substantial risk of harm (i.e., recklessness) and blame to the AI system. Furthermore, we investigated whether the ascription of recklessness and blame to the AI system would influence the perceived blameworthiness of the system’s user (or owner). In an experiment with 347 participants, we found (i) that people are willing to ascribe blame to AI systems in contexts of recklessness, (ii) that blame ascriptions depend strongly on the willingness to attribute recklessness and (iii) that the latter, in turn, depends on the perceived “cognitive” capacities of the system. Furthermore, our results suggest (iv) that the higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.
The Frame Problem is the problem of how one can design a machine to use information so as to behave competently, with respect to the kinds of tasks a genuinely intelligent agent can reliably, effectively perform. I will argue that the way the Frame Problem is standardly interpreted, and so the strategies considered for attempting to solve it, must be updated. We must replace overly simplistic and reductionist assumptions with more sophisticated and plausible ones. In particular, the standard interpretation assumes that mental processes are identical to certain kinds of computational processes, and so solving the Frame Problem is a matter of finding a computational architecture that can effectively represent relations of semantic relevance. Instead, we must take seriously the possibility that the way in which intelligent agents use information is inherently different. Whereas intelligent agents are plausibly genuinely causally sensitive to semantic properties as such (to what they perceive, desire, believe, intend, etc.), computational systems can only be causally sensitive to the formal features that represent these properties. Indeed, it is this very substitution of formal generalizations for genuinely semantic ones that is responsible for the way current AI systems are brittle, inflexible, and highly specialized. What we need is a more sophisticated way of investigating the relationship between computational information processing and genuinely semantic information use, so that these two senses of using information are not conflated, but instead the question of how they are related to one another can be studied directly. I apply the generative methodology I have developed elsewhere for cognitive science and AI research (Miracchi, 2017, 2019a) to show how the Frame Problem can be appropriately updated.
The central concept of this edited volume is "blended cognition", the natural human skill of constantly combining different heuristics across various task-solving activities. Something that was sometimes regarded as a problem, as “bad reasoning”, is now the central key to understanding the richness, adaptability and creativity of human cognition. The topic of this book connects in a significant way with the disciplines of psychology, neurology, anthropology, philosophy, logic, engineering, and AI. In a nutshell: understanding humans better in order to design better machines. It contains a Preface by the editors and 12 chapters.
Developments in Artificial Intelligence (AI) are exciting. But where is the journey headed? I present an analysis according to which exponential growth in computing speed and data has been the decisive factor in progress so far. I then explain under which assumptions this growth will continue to enable progress: 1) intelligence is one-dimensional and measurable, 2) cognitive science is not needed for AI, 3) computation is sufficient for cognition, 4) current techniques and architectures are sufficiently scalable, 5) Technological Readiness Levels (TRLs) prove feasible. These assumptions will turn out to be dubious.
This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), projecting human conscious-level notions into the operations of computers creates confusion and makes it harder to identify the nature and location of that threshold. There is confusion, in particular, about how—and even whether—various capabilities deemed intelligent relate to human consciousness. This suggests that insufficient thought has been given to very fundamental concepts—a dangerous state of affairs, given the intrinsic power of the technology. It also suggests that research in the area of artificial general intelligence may unwittingly be (mis)guided by unconscious motivations and assumptions. While it might be inconsequential if philosophers get it wrong (or fail to agree on what is right), it could be devastating if AI developers, corporations, and governments follow suit. It therefore seems worthwhile to try to clarify some fundamental notions.
This paper concerns “human symbolic output,” or strings of characters produced by humans in our various symbolic systems; e.g., sentences in a natural language, mathematical propositions, and so on. One can form a set that consists of all of the strings of characters that have been produced by at least one human up to any given moment in human history. We argue that at any particular moment in human history, even at moments in the distant future, this set is finite. But then, given fundamental results in recursion theory, the set will also be recursive, recursively enumerable, axiomatizable, and could be the output of a Turing machine. We then argue that it is impossible to produce a string of symbols that humans could possibly produce but no Turing machine could. Moreover, we show that any given string of symbols that we could produce could also be the output of a Turing machine. Our arguments have implications for Hilbert’s sixth problem and the possibility of axiomatizing particular sciences, they undermine at least two distinct arguments against the possibility of Artificial Intelligence, and they entail that expert systems that are the equals of human experts are possible, and so at least one of the goals of Artificial Intelligence can be realized, at least in principle.
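The key recursion-theoretic step here, that any finite set of strings is recursive (its characteristic function is computable) and recursively enumerable, can be illustrated with a minimal Python sketch; the set contents below are invented placeholders, not examples from the paper:

```python
# Any finite set of strings has a trivially computable characteristic
# function: hard-code the elements and decide membership by lookup.
# This is the sense in which a finite "human symbolic output" set is
# recursive, and listing its elements shows it is recursively enumerable.

FINITE_OUTPUT = {"Snow is white.", "2 + 2 = 4", "E = mc^2"}  # placeholder elements

def is_member(s: str) -> bool:
    """Computable characteristic function of the finite set: always halts."""
    return s in FINITE_OUTPUT

def enumerate_members():
    """A procedure that lists every element, witnessing recursive enumerability."""
    for s in sorted(FINITE_OUTPUT):
        yield s

print(is_member("2 + 2 = 4"))        # True
print(is_member("Colorless ideas"))  # False
print(list(enumerate_members()))
```

Both procedures halt on every input precisely because the set is finite, which is why no appeal to the structure of the strings themselves is needed.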
Many believe that, in addition to cognitive capacities, autonomous robots need something similar to affect. As in humans, affect, including specific emotions, would filter robot experience based on a set of goals, values, and interests. This narrows behavioral options and avoids combinatorial explosion or regress problems that challenge purely cognitive assessments in a continuously changing experiential field. Adding human-like affect to robots is not straightforward, however. Affect in organisms is an aspect of evolved biological systems, from the taxes of single-cell organisms to the instincts, drives, feelings, moods, and emotions that focus human behavior through the mediation of hormones, pheromones, neurotransmitters, the autonomic nervous system, and key brain structures. We argue that human intelligence is intimately linked to biological affective systems and to the unique repertoire of potential behaviors, sometimes conflicting, they facilitate. Artificial affect is affect in name only, and without genes and biological bodies, autonomous robots will lack the goals, interests, and value systems associated with human intelligence. We will take advantage of their general intelligence and expertise, but robots will not enter our intellectual world or apply for legal status in the community.
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
The declared goal of this paper is to fill this gap: “... cognitive systems research needs questions or challenges that define progress. The challenges are not (yet more) predictions of the future, but a guideline to what are the aims and what would constitute progress.” – the quotation being from the project description of EUCogII, the project for the European Network for Cognitive Systems within which this formulation of the ‘challenges’ was originally developed (http://www.eucognition.org). So, we stick our neck out and formulate the challenges for artificial cognitive systems. These challenges are articulated in terms of a definition of what a cognitive system is: a system that learns from experience and uses its acquired knowledge (both declarative and practical) in a flexible manner to achieve its own goals.
Report for "The Reasoner" on the conference "Philosophy and Theory of Artificial Intelligence", 3 & 4 October 2011, Thessaloniki, Anatolia College/ACT, http://www.pt-ai.org. --- Organization: Vincent C. Müller, Professor of Philosophy at ACT & James Martin Fellow, Oxford http://www.sophia.de --- Sponsors: EUCogII, Oxford-FutureTech, AAAI, ACM-SIGART, IACAP, ECCAI.
Two large lexicological projects for the Center for the Greek Language, Thessaloniki, were to be published in print and on the WWW, which meant that two conversions were needed: a near-database file had to be converted to a fully formatted file for printing, and a fully formatted file had to be converted to a database for WWW access. As it turned out, both conversions could make use of existing clues that indicated the kinds of information contained in each particular piece of text, thus separating fields from each other and ordering them into a tree-like structure. This indicates that both forms of the dictionaries, print and database, stem from the same cognitive need: to categorize information into kinds before further understanding – be this for a human reader or for a machine.
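The conversion strategy described in this abstract (exploiting formatting clues to separate fields and order them into a tree-like structure) can be illustrated with a minimal sketch. The field markers used here (`@hw`, `@pos`, `@def`) are hypothetical, invented purely for illustration; the actual formatting conventions of the two projects are not specified in the abstract.

```python
import re

# Hypothetical field markers standing in for the real formatting clues
# (typeface, punctuation, position) that the actual conversion exploited.
FIELD_RE = re.compile(r"@(\w+)\{([^}]*)\}")

def entry_to_tree(line: str) -> dict:
    """Convert one near-database dictionary line into a nested record."""
    record = {}
    for name, value in FIELD_RE.findall(line):
        # A repeated marker (e.g. several senses) becomes a list subtree.
        if name in record:
            if not isinstance(record[name], list):
                record[name] = [record[name]]
            record[name].append(value)
        else:
            record[name] = value
    return record

entry = "@hw{logos} @pos{noun} @def{word, reason} @def{account}"
tree = entry_to_tree(entry)
```

A record in this shape can then be serialized either way: flattened back out with formatting applied for print, or loaded field-by-field into database columns for WWW access.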
[Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence” held in Oxford, the volume includes prominent researchers within the field from around the world.
[Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp. ] --- Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and Cognitive Science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that define where we stand and where we should go from here.
This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “new AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents, which, however, turns out to be crucial. Some more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks’ proposal for cognition without representation appears promising for full-blown intelligent agents, though not for conscious agents.
Review of: Margaret A. Boden, Mind as Machine: A History of Cognitive Science, 2 vols, Oxford: Oxford University Press, 2006, xlvii+1631, cloth $225, ISBN 0-19-924144-9. - Mind as Machine is Margaret Boden’s opus magnum. For one thing, it comes in two massive volumes of nearly 1700 pages, ... But it is not just the opus magnum in simple terms of size, but also a truly crowning achievement of half a century’s career in cognitive science.
John Searle once said: "The Chinese room shows what we knew all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)." I say: "Yes". Stuart C. Shapiro has said: "Does that make any sense? Yes: Everything makes sense. The question is: What sense does it make?" This essay explores what sense it makes to say that syntax by itself is sufficient for semantics.
In light of the recent breakneck pace of progress in machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that as matters stand these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI. Section 1 of the paper briefly lays out the current state of the science of consciousness and its limitations insofar as these pertain to machine consciousness, and claims that there are no obvious consensus frameworks to inform public opinion on AI consciousness. Section 2 examines the rise of conversational chatbots or Social AI, and argues that in many cases, these elicit strong and sincere attributions of consciousness, mentality, and moral status from users, a trend likely to become more widespread. Section 3 presents an inconsistent triad for theories that attempt to link consciousness, behaviour, and moral status, noting that the trends in Social AI systems will likely make the inconsistency of these three premises more pressing. Finally, Section 4 presents some limited suggestions for how consciousness and AI research communities should respond to the gap between expert opinion and folk judgment.
Turing’s much debated test has turned 70 and is still fairly controversial. His 1950 paper is seen as a complex and multilayered text, and key questions about it remain largely unanswered. Why did Turing select learning from experience as the best approach to achieve machine intelligence? Why did he spend several years working with chess playing as a task to illustrate and test for machine intelligence only to trade it out for conversational question-answering in 1950? Why did Turing refer to gender imitation in a test for machine intelligence? In this article, I shall address these questions by unveiling social, historical and epistemological roots of the so-called Turing test. I will draw attention to a historical fact that has been only scarcely observed in the secondary literature thus far, namely that Turing’s 1950 test emerged out of a controversy over the cognitive capabilities of digital computers, most notably out of debates with physicist and computer pioneer Douglas Hartree, chemist and philosopher Michael Polanyi, and neurosurgeon Geoffrey Jefferson. Seen in its historical context, Turing’s 1950 paper can be understood as essentially a reply to a series of challenges posed to him by these thinkers arguing against his view that machines can think. Turing did propose gender learning and imitation as one of his various imitation tests for machine intelligence, and I argue here that this was done in response to Jefferson's suggestion that gendered behavior is causally related to the physiology of sex hormones.
ChatGPT: Not Intelligent. Barry Smith - 2023 - AI: From Robotics to Philosophy: The Intelligent Robots of the Future – or Human Evolutionary Development Based on AI Foundations.
In our book, Why Machines Will Never Rule the World, Jobst Landgrebe and I argue that we can engineer machines that can emulate the behaviours only of simple systems, which means: only of those systems whose behaviour we can predict mathematically. The human brain is an example of a complex system, and thus its behaviour cannot be emulated by a machine. We use this argument to debunk the claims of those who believe that large language models are poised to achieve a level of intelligence that will equal or even surpass that of the human brain. The essay was published under the title "Why Machines Will Never Rule the World".
The logical problem of artificial intelligence—the question of whether the notion sometimes referred to as ‘strong’ AI is self-contradictory—is, essentially, the question of whether an artificial form of life is possible. This question has an immediately paradoxical character, which can be made explicit if we recast it (in terms that would ordinarily seem to be implied by it) as the question of whether an unnatural form of nature is possible. The present paper seeks to explain this paradoxical kind of possibility by arguing that machines can share the human form of life and thus acquire human mindedness, which is to say they can be intelligent, conscious, sentient, etc. in precisely the way that a human being typically is.
This paper sums up the fundamental features of intelligence through the common features found in various definitions of "intelligence": intelligence is the ability of the brain and nervous system to achieve systematic goals (functions) through selection, and artificial intelligence or machine intelligence is an imitation of life intelligence, a replication of its features and functions. Based on this definition, the paper discusses and summarizes the development of ideas on computable intelligence, including Gödel's "universal recursive function", the computational activities of "selection", recursive functions and the Turing machine, the mathematical expression of computable intelligence, the core nature of computable intelligence, and computability and strong artificial intelligence. The paper ends with the authors' conclusions.
Considerable evidence indicates that causal learning and causal understanding greatly enhance our ability to manipulate the physical world and are major factors that distinguish humans from other primates. How do we enable unintelligent robots to think causally, answer questions raised with "why", and even understand the meaning of such questions? The solution is one of the keys to realizing artificial intelligence. Judea Pearl believes that to achieve human-like intelligence, researchers must start by imitating the intelligence of children, so he proposed a "causal inference engine" to help future artificial intelligence make causal inferences, pass the Minimal Turing Test, and even become a moral subject who can discern good from evil. This study attempts to provide some insights into the development of children's education from the basic assumptions and construction goals of artificial intelligence, and to reflect on the causal model of artificial intelligence through children's education.
This article will focus on the mechanistic origins of the computer metaphor, which forms the conceptual framework for the methodology of the cognitive sciences, some areas of artificial intelligence, and the philosophy of mind. The connection between the history of computing technology, epistemology and the philosophy of mind is expressed through the metaphorical dictionaries of the philosophical discourse of a particular era. The conceptual clarification of this connection and the substantiation of the mechanistic components of the computer metaphor is the main goal of this article. The article substantiates the claim that the invention of mechanical computing devices, which has a long history in the European engineering tradition, formed the prerequisites for the emergence of machine functionalism in the modern philosophy of mind. The idea of multiple realization stems from the principle that a formal symbol system prescribes rules for the use of rational abstractions through the physical architecture of a computational engine. The article considers the reasons for the conceptual shift and reveals the semantic foundations for the metaphorical transfer of the properties of abstract objects from the theory of automata to the field of modern philosophy of mind. Criticisms of the philosophical program of machine functionalism, and ways of defending it by changing the content of the metaphor "mind as machine", are analyzed. The reasons for the stability of the information-computer approach in the cognitive sciences are also disclosed and explained.