In this paper, I focus on AIs as very different, or at least potentially very different, kinds of language users from humans. Is the metasemantics for AI language use different, in the way Cappelen and Dever argue? Is it reasonable to think that AIs will come to use languages importantly different from human languages, what I call alien languages?
In light of the recent breakneck pace of progress in machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that, as matters stand, these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI systems. Section 1 of the paper briefly lays out the current state of the science of consciousness and its limitations insofar as these pertain to machine consciousness, and claims that there are no obvious consensus frameworks to inform public opinion on AI consciousness. Section 2 examines the rise of conversational chatbots or Social AI, and argues that in many cases these elicit strong and sincere attributions of consciousness, mentality, and moral status from users, a trend likely to become more widespread. Section 3 presents an inconsistent triad for theories that attempt to link consciousness, behaviour, and moral status, noting that the trends in Social AI systems will likely make the inconsistency of these three premises more pressing. Finally, Section 4 presents some limited suggestions for how the consciousness and AI research communities should respond to the gap between expert opinion and folk judgment.
The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’, is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in order to understand what they represent and the algorithms they implement. In my view, a crucial mistake in Black-box Interpretability is the failure to appreciate that how processes are carried out matters when it comes to intelligence and understanding. I can’t pretend to have a full story that provides both necessary and sufficient conditions for being intelligent, but I do think that Inner Interpretability dovetails nicely with plausible philosophical views of what intelligence requires. So the conclusion is modest, but the important point in my view is seeing how to get the research on the right track. Towards the end of the paper, I will show how some of the philosophical concepts can be used to further refine how Inner Interpretability is approached, so the paper helps draw out a profitable future two-way exchange between Philosophers and Computer Scientists.
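The kind of access Inner Interpretability depends on, reading a network's internal activations rather than only its input-output behavior, can be shown with a minimal sketch. The toy PyTorch model below is a hypothetical stand-in, not one of the GPT models under discussion; it simply illustrates how a forward hook exposes a hidden state that a purely black-box evaluation never sees.

```python
import torch
import torch.nn as nn

# Toy stand-in network: layer sizes and structure are hypothetical.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

captured = {}

def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()  # record the internal state
    return hook

# Hook the hidden (ReLU) layer: the "inner" view.
model[1].register_forward_hook(save_activation("hidden"))

x = torch.randn(1, 4)
logits = model(x)            # black-box view: inputs and outputs only
print(logits)
print(captured["hidden"])    # inner view: what the hidden layer actually did
```

The design point is simply that the hook gives the researcher something the Turing-test-style evaluation cannot: a record of how the output was produced, which is exactly what the abstract claims matters for intelligence and understanding.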
The view that phenomenally conscious robots are on the horizon often rests on a certain philosophical view about consciousness, one we call “nomological behaviorism.” The view entails that, as a matter of nomological necessity, if a robot had exactly the same patterns of dispositions to peripheral behavior as a phenomenally conscious being, then the robot would be phenomenally conscious; indeed it would have all and only the states of phenomenal consciousness that the phenomenally conscious being in question has. We experimentally investigate whether the folk think that certain (hypothetical) robots made of silicon and steel would have the same conscious states as certain familiar biological beings with the same patterns of dispositions to peripheral behavior as the robots. Our findings provide evidence that the folk largely reject the view that silicon-based robots would have the sensations that they, the folk, attribute to the biological beings in question.
AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional lives are no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive features. We argue that, unlike comparatively low-tech affective artifacts, affective artificial agents display a specific form of agency, which prevents them from being perceived by their users as extensions of their selves. In addition to this, we claim that their functioning crucially depends on the simulation of human-like emotion-driven behavior and requires a distinctive form of transparency—we call it emotional transparency—that might give rise to ethical and normative tensions.
David Chalmers has recently developed a novel strategy of refuting external world skepticism, one he dubs the structuralist solution. In this paper, I make three primary claims: First, structuralism does not vindicate knowledge of other minds, even if it is combined with a functionalist approach to the metaphysics of minds. Second, because structuralism does not vindicate knowledge of other minds, the structuralist solution vindicates far less worldly knowledge than we would hope for from a solution to skepticism. Third, these results suggest that the problem of external world skepticism should perhaps be construed as two different problems, since the problem might turn out to require two substantively different solutions, one for knowledge of the kind that is not dependent on other minds and one for knowledge that is.
Is it sensical to ascribe psychological predicates to AI systems like chatbots based on large language models (LLMs)? People have intuitively started ascribing emotions or consciousness to social AI (‘affective artificial agents’), with consequences that range from love to suicide. The philosophical question of whether such ascriptions are warranted is thus very relevant. This paper advances the argument that LLMs instantiate language users in Ludwig Wittgenstein’s sense but that ascribing psychological predicates to these systems remains a functionalist temptation. Social AIs are not full-blown language users, but rather more like Italo Calvino’s literature machines. The ideas of LLMs as Wittgensteinian language users and Calvino’s literature-producing writing machine are combined. This sheds light on the misguided functionalist temptation inherent in moving from equating the two to the ascription of psychological predicates to social AI. Finally, the framework of mortal computation is used to show that social AIs lack the basic autopoiesis needed for narrative façons de parler and their role in the sensemaking of human (inter)action. Such psychological predicate ascriptions could make sense: the transition ‘from quantity to quality’ can take place, but its route lies somewhere between life and death, not between affective artifacts and emotion approximation by literature machines.
The parallel history of the evolution of human intelligence and artificial intelligence is a fascinating journey, highlighting the distinct but interconnected paths of biological evolution and technological innovation. This history can be seen as a series of interconnected developments, each advance in human intelligence paving the way for the next leap in artificial intelligence. Human intelligence and artificial intelligence have long been intertwined, evolving along parallel trajectories throughout history. As humans have sought to understand and reproduce intelligence, AI has emerged as a field dedicated to creating systems capable of tasks that traditionally require human intellect. This book analyzes the evolutionary roots of intelligence, explores the emergence of artificial intelligence, examines the parallel history of human intelligence and artificial intelligence, tracing their development, their interactions, and the profound impact they have had on each other, and imagines future landscapes in which human and artificial intelligence converge. Let us explore this history, comparing key milestones and developments in both realms.
The parallel history of the evolution of human intelligence and artificial intelligence is a fascinating journey, highlighting the distinct but interconnected paths of biological evolution and technological innovation. This history can be seen as a series of interconnected developments, each advance in human intelligence paving the way for the next leap in artificial intelligence. Human intelligence and artificial intelligence have long been intertwined, evolving in parallel trajectories throughout history. As humans have sought to understand and reproduce intelligence, AI has emerged as a field dedicated to creating systems capable of tasks that traditionally require human intellect. This book examines the evolutionary roots of intelligence, explores the emergence of artificial intelligence, examines the parallel history of human intelligence and artificial intelligence, tracing their development, interactions, and the profound impact they have had on each other, and envisions future landscapes where human and artificial intelligence converge. Let's explore this history, comparing key milestones and developments in both realms.
Communicating interdisciplinary information is difficult, even when two fields are ostensibly discussing the same topic. In this work, I’ll discuss the capacity for analogical reasoning to provide a framework for developing novel judgments utilizing similarities in separate domains. I argue that analogies are best modeled after Paul Bartha’s By Parallel Reasoning, and that they can be used to create a Toulmin-style warrant that expresses a generalization. I argue that these comparisons provide insights into interdisciplinary research. To demonstrate this concept, I will show that fruitful comparisons can be made between Buddhism and Artificial Intelligence research.
Although even very advanced artificial systems do not meet the demanding conditions required for humans to count as proper participants in a social interaction, we argue that not all human-machine interactions (HMIs) can appropriately be reduced to mere tool-use. By criticizing the far too demanding conditions of standard construals of intentional agency, we suggest a minimal approach that ascribes minimal agency to some artificial systems, resulting in the proposal of taking minimal joint actions as a case of a social HMI. Analyzing such HMIs, we utilize Dennett’s stance epistemology, and argue that taking either an intentional stance or a design stance can be misleading for several reasons; we instead propose to introduce a new stance that is able to capture social HMIs—the AI-stance.
As we await the increasingly likely advent of genuinely intelligent artificial systems, a fair amount of consideration has been given to how we humans will interact with them. Less consideration has been given to how—indeed if—we humans will love them. What would human-AI romantic relationships look like? What do such relationships tell us about the nature of love? This chapter explores these questions via consideration of several works of science fiction, focusing especially on the Black Mirror episode “Be Right Back” and Spike Jonze’s movie *Her*. As I suggest, there may well be cases where it is both possible and appropriate for a human to fall in love with a machine.
The open-domain Frame Problem is the problem of determining what features of an open task environment need to be updated following an action. Here we prove that the open-domain Frame Problem is equivalent to the Halting Problem and is therefore undecidable. We discuss two other open-domain problems closely related to the Frame Problem, the system identification problem and the symbol-grounding problem, and show that they are similarly undecidable. We then reformulate the Frame Problem as a quantum decision problem, and show that it is undecidable by any finite quantum computer.
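As background for the undecidability claim, here is the classic diagonalization behind the Halting Problem, sketched in Python. This is not the paper's own reduction from the Frame Problem; it is only the standard argument that such equivalence proofs bottom out in, and the function names are hypothetical.

```python
# Classic diagonal argument: assume a total halting decider exists,
# then construct a program that contradicts it.

def halts(program, arg):
    """Hypothetical total decider: True iff program(arg) terminates."""
    raise NotImplementedError("No such decider can exist.")

def diagonal(program):
    # Do the opposite of whatever `halts` predicts about running
    # `program` on itself.
    if halts(program, program):
        while True:          # loop forever if predicted to halt
            pass
    return "halted"          # halt if predicted to loop

# Feeding `diagonal` to itself is contradictory either way:
# if halts(diagonal, diagonal) is True, diagonal(diagonal) loops;
# if False, it halts. So `halts` cannot exist. Any problem whose
# solution would yield such a decider, as the paper argues for the
# open-domain Frame Problem, inherits this undecidability.
```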
The purpose of the article is to identify the religious factor in the teaching of transhumanism, to determine its role in the ideology of this current of thought, and to identify the possible limits of technology's interference in human nature. Theoretical basis. The methodological basis of the article is the idea of transhumanism. Originality. In the foreseeable future, robots will be able to pass the Turing test, become “electronic personalities” and gain political rights, although the question of the possibility of machine consciousness and self-awareness remains open. In robots, people are creating assistants against whom, given the initial conditions, they will almost certainly lose any evolutionary competition. For successful competition with robots, people will have to change, ceasing to be people in the classical sense. Changing the nature of man will require the emergence of a new – posthuman – anthropology. Conclusions. Against the background of the scientific discoveries, technical breakthroughs and everyday improvements of recent decades, an anthropological revolution has taken shape, which has made it possible to set the task of creating inhumanly intelligent creatures, as well as changing human nature, up to discussing options for artificial immortality. The history of man ends and the history of the posthuman begins. We can no longer turn off this path; it is, however, in our power to preserve our human qualities in the posthuman future. The theme of the soul has reasserted itself, but from a different perspective: as the theme of consciousness and self-awareness. It has become relevant again in connection with the development of computer and cloud technologies, artificial intelligence technologies, and the like. If a machine ever becomes a "man", then can a man become a "machine"? However, even if such a hypothetical possibility were to become reality, we could not speak of any form of individual immortality or of the continuation of existence in a different physical form. A digital copy of the soul will still remain a copy, and I see no fundamental possibility of isolating a substrate-independent mind from the human body. Immortality itself is needed not so much to quiet anyone's fears or encourage anyone's hopes as to finally settle a religious question. However, the gods hold the keys to heaven tightly and are unlikely to admit our modified descendants there.
Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they are presumptuous. After elaborating this moral concern, I explore the possibility that carefully procuring the training data for image recognition systems could ensure that the systems avoid the problem. The lesson of this paper extends beyond just the particular case of image recognition systems and the challenge of responsibly identifying a person’s intentions. Reflection on this particular case demonstrates the importance (as well as the difficulty) of evaluating machine learning systems and their training data from the standpoint of moral considerations that are not encompassed by ordinary assessments of predictive accuracy.
This paper investigates the concept of behavioral autonomy in Artificial Life by drawing a parallel to the use of teleological notions in the study of biological life. Contrary to one of the leading assumptions in Artificial Life research, I argue that there is a significant difference in how autonomous behavior is understood in artificial and biological life forms: the former is underlain by human goals in a way that the latter is not. While behavioral traits can be explained in relation to evolutionary history in biological organisms, in synthetic life forms behavior depends on a design driven by a research agenda, further shaped by broader human goals. This point will be illustrated with a case study on a synthetic life form. Consequently, the putative epistemic benefit of reaching a better understanding of behavioral autonomy in biological organisms by synthesizing artificial life forms is subject to doubt: the autonomy observed in such artificial organisms may be a mere projection of human agency. Further questions arise in relation to the need to spell out the relevant human aims when addressing potential social or ethical implications of synthesizing artificial life forms.
Latest Sermon from the Church of Fundamentalist Naturalism by Pastor Hofstadter. Like his much more famous (or infamous for its relentless philosophical errors) work Gödel, Escher, Bach, it has a superficial plausibility, but if one understands that this is rampant scientism which mixes real scientific issues with philosophical ones (i.e., the only real issues are what language games we ought to play), then almost all its interest disappears. I provide a framework for analysis based in evolutionary psychology and the work of Wittgenstein (since updated in my more recent writings). Those wishing a comprehensive up-to-date framework for human behavior from the modern two systems view may consult my book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle’ 2nd ed (2019). Those interested in more of my writings may see ‘Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019’ 3rd ed (2019), ‘The Logical Structure of Human Behavior’ (2019), and ‘Suicidal Utopian Delusions in the 21st Century’ 4th ed (2019).
Computers can mimic human intelligence, sometimes quite impressively. This has led some to claim that (a) computers can actually acquire intelligence, and/or (b) the human mind may be thought of as a very sophisticated computer. In this paper I argue that neither of these inferences is sound. The human mind and computers, I argue, operate on radically different principles.
We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. On the basis of Alfred R. Mele’s history-sensitive account of autonomy and responsibility it can be argued that even if robots were to have all the capacities usually required of moral agency, their history as products of engineering would undermine their autonomy and thus responsibility.
With their “bottom-up” approach, Holk Cruse and Malte Schilling present a highly intriguing perspective on those mental phenomena that have fascinated humankind since ancient times. Among them are those aspects of our inner lives that are at the same time most salient and yet most elusive: we are conscious beings with complex emotions, thinking and acting in pursuit of various goals. Starting with abilities that are, from a biological point of view, very basic, such as the ability to move and navigate in an unpredictable environment, Cruse & Schilling have developed, step by step, a robotic system with the ability to plan future actions and, to a limited extent, to verbally report on its own internal states. The authors then offer a compelling argument that their system exhibits aspects of various higher-level mental phenomena such as emotion, attention, intention, volition, and even consciousness. The scientific investigation of the mind is faced with intricate problems at a very fundamental, methodological level. Not only is there a good deal of conceptual vagueness and uncertainty as to what the explananda precisely are, but it is also unclear what the best strategy might be for addressing the phenomena of interest. Cruse & Schilling’s bio-robotic “bottom-up” approach is designed to provide answers to such questions. In this commentary, I begin, in the first section, by presenting the main ideas behind this approach as I understand them. In the second section, I turn to an examination of its scope and limits. Specifically, I will suggest a set of constraints on good explanations based on the bottom-up approach. What criteria do such explanations have to meet in order to be of real scientific value? I maintain that there are essentially three such criteria: biological plausibility, adequate matching criteria, and transparency. Finally, in the third section, I offer directions for future research, as Cruse & Schilling’s bottom-up approach is well suited to provide new insights in the domain of social cognition and to explain its relation to phenomena such as language, emotion, and self.
Much has been written about the possibility of human trust in robots. In this article we consider a more specific relationship: that of a human follower’s obedience to a social robot who leads through the exercise of referent power and what Weber described as ‘charismatic authority.’ By studying robotic design efforts and literary depictions of robots, we suggest that human beings are striving to create charismatic robot leaders that will either (1) inspire us through their display of superior morality; (2) enthrall us through their possession of superhuman knowledge; or (3) seduce us with their romantic allure. Rejecting a contractarian-individualist approach which presumes that human beings will be able to consciously ‘choose’ particular robot leaders, we build on the phenomenological-social approach to trust in robots to argue that charismatic robot leaders will emerge naturally from our world’s social fabric, without any rational decision on our part. Finally, we argue that the stability of these leader-follower relations will hinge on a fundamental, unresolved question of robotic intelligence: is it possible for synthetic intelligences to exist that are morally, intellectually, and emotionally sophisticated enough to exercise charismatic authority over human beings—but not so sophisticated that they lose the desire to do so?
Functionalism of robot pain claims that what is definitive of robot pain is functional role, defined as the causal relations pain has to noxious stimuli, behavior and other subjective states. Here, I propose that the only way to theorize role-functionalism of robot pain is in terms of type-identity theory. I argue that what makes a state pain for a neuro-robot at a time is the functional role it has in the robot at the time, and this state is type identical to a specific circuit state. Support from an experimental study shows that if the neural network that controls a robot includes a specific 'emotion circuit', physical damage to the robot will cause the disposition to avoid movement, thereby enhancing fitness, compared to robots without the circuit. Thus, pain for a robot at a time is type identical to a specific circuit state.
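The reported effect lends itself to a toy illustration. The sketch below is hypothetical and far simpler than the cited study's neural network: an 'emotion circuit' maps damage onto an inhibitory pain signal that suppresses movement, while an otherwise identical controller without the circuit keeps moving despite damage.

```python
# Toy controller, not the cited study's network: the "emotion circuit"
# turns damage into an inhibitory pain signal that damps motor output.

def motor_output(drive: float, damage: float, has_emotion_circuit: bool) -> float:
    pain = damage if has_emotion_circuit else 0.0  # the circuit state
    return max(0.0, drive - pain)                  # movement suppressed by pain

for damage in (0.0, 0.8):
    with_circuit = motor_output(drive=1.0, damage=damage, has_emotion_circuit=True)
    without = motor_output(drive=1.0, damage=damage, has_emotion_circuit=False)
    # The damaged robot with the circuit moves less, avoiding further harm.
    print(f"damage={damage}: with circuit -> {with_circuit}, without -> {without}")
```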
In this reply to James H. Fetzer’s “Minds and Machines: Limits to Simulations of Thought and Action”, I argue that computationalism should not be the view that (human) cognition is computation, but that it should be the view that cognition (simpliciter) is computable. It follows that computationalism can be true even if (human) cognition is not the result of computations in the brain. I also argue that, if semiotic systems are systems that interpret signs, then both humans and computers are semiotic systems. Finally, I suggest that minds can be considered as virtual machines implemented in certain semiotic systems, primarily the brain, but also AI computers. In doing so, I take issue with Fetzer’s arguments to the contrary.
Detractors of Searle’s Chinese Room Argument have arrived at a virtual consensus that the mental properties of the Man performing the computations stipulated by the argument are irrelevant to whether computational cognitive science is true. This paper challenges this virtual consensus to argue for the first of the two main theses of the persons reply, namely, that the mental properties of the Man are what matter. It does this by challenging many of the arguments and conceptions put forth by the systems and logical replies to the Chinese Room, either reducing them to absurdity or showing how they lead, on the contrary, to conclusions the persons reply endorses. The paper bases its position on the Chinese Room Argument on additional philosophical considerations, the foundations of the theory of computation, and theoretical and experimental psychology. The paper purports to show how all these dimensions tend to support the proposed thesis of the persons reply.
This paper is a follow-up to the first part of the persons reply to the Chinese Room Argument. The first part claims that the mental properties of the person appearing in that argument are what matter to whether computational cognitive science is true. This paper tries to discern what those mental properties are by applying a series of hypothetical psychological and strengthened Turing tests to the person, and argues that the results support the thesis that the Man performing the computations characteristic of understanding Chinese actually understands Chinese. The supposition that the Man does not understand Chinese has gone virtually unquestioned in this foundational debate. The persons reply acknowledges the intuitive power behind that supposition, but holds that brute intuitions are not epistemically sacrosanct. Like many intuitions humans have had, and later deposed, this intuition does not withstand experimental scrutiny. The second part of the persons reply consequently holds that computational cognitive science is confirmed by the Chinese Room thought experiment.
It is possible to survey humankind and be proud, even to smile, for we accomplish great things. Art and science are two notable worthy human accomplishments. Consonant with art and science are some of the ways we treat each other. Sacrifice and heroism are two admirable human qualities that pervade human interaction. But, as everyone knows, all this goodness is more than balanced by human depravity. Moral corruption infests our being. Why?
In 1949, the Department of Philosophy at the University of Manchester organized a symposium “Mind and Machine” with Michael Polanyi, the mathematicians Alan Turing and Max Newman, the neurologists Geoffrey Jefferson and J. Z. Young, and others as participants. This event is known among Turing scholars, because it laid the seed for Turing’s famous paper on “Computing Machinery and Intelligence”, but it is scarcely documented. Here, the transcript of this event, together with Polanyi’s original statement and his notes taken at a lecture by Jefferson, are edited and commented for the first time. The originals are in the Regenstein Library of the University of Chicago. The introduction highlights elements of the debate that included neurophysiology, mathematics, the mind-body-machine problem, and consciousness and shows that Turing’s approach, as documented here, does not lend itself to reductionism.
While the recent special issue of JCS on machine consciousness (Volume 14, Issue 7) was in preparation, a collection of papers on the same topic, entitled Artificial Consciousness and edited by Antonio Chella and Riccardo Manzotti, was published. The editors of the JCS special issue, Ron Chrisley, Robert Clowes and Steve Torrance, thought it would be a timely and productive move to have authors of papers in their collection review the papers in the Chella and Manzotti book, and include these reviews in the special issue of the journal. Eight of the JCS authors (plus Uziel Awret) volunteered to review one or more of the fifteen papers in Artificial Consciousness; these individual reviews were then collected together with a minimal amount of editing to produce a seamless chapter-by-chapter review of the entire book. Because the number and length of contributions to the JCS issue was greater than expected, the collective review of Artificial Consciousness had to be omitted, but here at last it is. Each paper’s review is written by a single author, so any comments made may not reflect the opinions of all nine of the joint authors!
We explicate representational content by addressing how representations that explain intelligent behavior might be acquired through processes of Darwinian evolution. We present the results of computer simulations of evolved neural network controllers and discuss the similarity of the simulations to real-world examples of neural network control of animal behavior. We argue that focusing on the simplest cases of evolved intelligent behavior, in both simulated and real organisms, reveals that evolved representations must carry information about the creature’s environments and further can do so only if their neural states are appropriately isomorphic to environmental states. Further, these informational and isomorphism relations are what are tracked by content attributions in folk-psychological and cognitive scientific explanations of these intelligent behaviors.
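To make the acquisition story concrete, here is a deliberately tiny sketch of the kind of simulation the abstract describes. Everything in it (the one-weight 'network', the food-detection task, the selection scheme) is a hypothetical stand-in for the authors' controllers, chosen so that the evolved inner state visibly comes to covary with an environmental state.

```python
import random

def fitness(weight: float) -> int:
    # Reward controllers whose single "neural state" (weight * env)
    # tracks the environment: act (output > 0) only when food is present.
    score = 0
    for _ in range(100):
        env = random.choice([-1.0, 1.0])   # food absent / food present
        neural_state = weight * env        # the evolved inner state
        acted = neural_state > 0
        score += 1 if acted == (env > 0) else 0
    return score

# Evolve by truncation selection plus Gaussian mutation.
population = [random.uniform(-1, 1) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    population = [p + random.gauss(0, 0.1) for p in parents for _ in range(4)]

best = max(population, key=fitness)
# Selection reliably drives the weight positive: the inner state ends up
# (trivially) isomorphic to the environmental state, illustrating the
# informational relation the authors argue content attributions track.
print(f"best weight: {best:.2f}, fitness: {fitness(best)}/100")
```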
One of the most influential arguments against the claim that computers can think is that while our intentionality is intrinsic, that of computers is derived: it is parasitic on the intentionality of the programmer who designed the computer program. Daniel Dennett chose a surprising strategy for arguing against this asymmetry: instead of denying that the intentionality of computers is derived, he endeavours to argue that human intentionality is derived too. I intend to examine the biological plausibility of Dennett’s suggestion and show that Dennett’s argument for the claim that human intentionality is derived because it was designed by natural selection is based on a misunderstanding of how natural selection works.
Ask most any cognitive scientist working today if a digital computational system could develop aesthetic sensibility and you will likely receive the optimistic reply that this remains an open empirical question. However, I attempt to show, while drawing upon the later Wittgenstein, that the correct answer is in fact available. And it is a negative a priori. It would seem, for example, that recent computational successes in generative AI and textual attribution, most notably those of Donald Foster (famed finder of Ted Kaczynski, a.k.a. “the Unabomber”), speak favorably of the connectionist model's capacity to overcome the “aspect blindness” handicap in this domain. I argue however that such results are only achievable when rigid input-to-output parameters are given, and that this element is precisely what is absent in standard examples of aesthetic judgment. I thus conclude that while the connectionist model anticipated by Turing may provide the best approach for the AI project, its capacity for meeting its own sufficiency requirements is necessarily crippled by its inability to share in what can be generally referred to as the collective engagements of human solidarity.
In mid-May of 2001, I attended a fascinating workshop at Cold Spring Harbor Labs. The conference was held at the lab's Banbury Center, an elegant mansion and its beautiful surrounding estate, located on Banbury Lane, in the outskirts of Lloyd Harbor, overlooking the north shore of Long Island in New York. The estate was formerly owned by Charles Sammis Robertson. In 1976, Robertson donated his estate, and an endowment for its upkeep, to the Lab. The donation included the Robertsons' mansion, now called Robertson House, and a large, seven-car garage that would become the actual conference center. The Center was opened on Sunday, June 14, 1977, by Francis Crick, who gave a talk entitled "How Scientists Work." For us, Banbury was an idyllic location with great food where we could talk about the most difficult problem in all of science: what is the nature and cause of consciousness?
Good sciences have good metaphors. Indeed, good sciences are good because they have good metaphors. AI could use more good metaphors. In this editorial, I would like to propose a new metaphor to help us understand intelligence. Of course, whether the metaphor is any good or not depends on whether it actually does help us. (What I am going to propose is not something opposed to computationalism -- the hypothesis that cognition is computation. Noncomputational metaphors are in vogue these days, and to date they have all been equally plausible and equally successful. And, just to be explicit, I do not mean “IQ” by “intelligence.” I am using “intelligence” in the way AI uses it: as a semi-technical term referring to a general property of all intelligent systems, animal (including human) or machine, alike.)
Under the Superstition Mountains in central Arizona toil those who would rob humankind of its humanity. These gray, soulless monsters methodically tear away at our meaning, our subjectivity, our essence as transcendent beings. With each advance, they steal our freedom and dignity. Who are these denizens of darkness, these usurpers of all that is good and holy? None other than humanity’s arch-foe: The Cognitive Scientists -- AI researchers, fallen philosophers, psychologists, and other benighted lovers of computers. Unless they are stopped, humanity -- you and I -- will soon be nothing but numbers and algorithms locked away on magnetic tape.
Over three decades ago, in a brief but provocative essay, Paul Ziff argued for the thesis that robots cannot have feelings because they are "mechanisms, not organisms, not living creatures. There could be a broken-down robot but not a dead one. Only living creatures can literally have feelings." Since machines are not living things they cannot have feelings. In the first half of my paper I review Ziff's arguments against the idea that robots could be conscious, especially his appeal to our linguistic usage. In the second half of the essay I try to provide a deeper ontological understanding of why we ought not attribute minds to nonliving artifacts. I argue that inanimate mechanisms are incapable of genuinely active and purposive behavior. They are importantly different in kind from living human beings and animals.
Hauser argues that his pocket calculator (Cal) has certain arithmetical abilities: it seems Cal calculates. That calculating is thinking seems equally untendentious. Yet these two claims together provide premises for a seemingly valid syllogism whose conclusion - Cal thinks - most would deny. He considers several ways to avoid this conclusion, and finds them mostly wanting. Either we ourselves can't be said to think or calculate if our calculation-like performances are judged by the standards proposed to rule out Cal; or the standards (e.g., autonomy and self-consciousness) make it impossible to verify whether anything or anyone (save oneself) meets them. While appeals to the intentionality of thought or the unity of minds provide more credible lines of resistance, available accounts of intentionality and mental unity are insufficiently clear and warranted to provide very substantial arguments against Cal's title to be called a thinking thing. Indeed, considerations favoring granting that title are more formidable than is generally appreciated. Rapaport's comments suggest that, on a strong view of thinking, mere calculating is not thinking (and pocket calculators don't think), but in a weak, though unexciting, sense of thinking, pocket calculators do think. He closes with some observations on the implications of this conclusion.
Today, software tools support all parts of engineering work, from design to production. Many engineering processes involve tedious routine tasks and pain points from manual handoffs and data silos. AI engineers train deep neural networks and integrate them into software systems.
In this document I present my theories, ideas and explanations of how I believe consciousness works, what it is, and why we have it. In my view, all living beings are simply biological machines, sculpted by evolution to become as well adapted as possible to the environment in which they find themselves. Everything works in a particular way; it can be explained and understood; there is no magic. Consciousness is no exception: it is there for a reason.
I conducted an experiment using four different artificial intelligence models developed by OpenAI to estimate the persuasiveness and rational justification of various philosophical stances. The AI models used were text-davinci-003, text-ada-001, text-curie-001, and text-babbage-001, which differed in complexity and the size of their training data sets. For the philosophical stances, the list of 30 questions created by Bourget & Chalmers (2014) was used. The results suggest that each model has its own plausible ‘cognitive’ style. The outcomes of the ‘strongest’ model correlate with the average philosophers' stance.
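As a rough illustration of how such an experiment could be scripted, here is a minimal sketch against the legacy (pre-1.0) OpenAI completions API, through which these four models were served before their retirement. The prompt wording, the 0-10 scoring scheme, and the example stances and survey percentages are hypothetical placeholders, not the study's materials.

```python
import re
import statistics
import openai  # legacy (<1.0) client; these completion models are now retired

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical subset of stances; the study used all 30 Bourget & Chalmers items.
STANCES = ["physicalism", "compatibilism", "moral realism"]
SURVEY_ENDORSEMENT = [51.9, 59.2, 62.1]  # placeholder percentages, not survey data
MODELS = ["text-davinci-003", "text-babbage-001"]

def rate(model: str, stance: str) -> float:
    """Ask a model to rate how rationally justified a stance is, 0-10."""
    prompt = (f"On a scale of 0 to 10, how rationally justified is "
              f"{stance} as a philosophical position? Answer with a number.")
    resp = openai.Completion.create(model=model, prompt=prompt,
                                    max_tokens=5, temperature=0)
    match = re.search(r"\d+(\.\d+)?", resp["choices"][0]["text"])
    return float(match.group()) if match else 0.0

for model in MODELS:
    ratings = [rate(model, s) for s in STANCES]
    # Correlate each model's ratings with endorsement rates, analogous to
    # the paper's comparison with the average philosophers' stance.
    r = statistics.correlation(ratings, SURVEY_ENDORSEMENT)
    print(f"{model}: ratings={ratings}, r={r:.2f}")
```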
Recent advances in stem cell-derived human brain organoids and microelectrode array (MEA) technology raise profound questions about the potential for these systems to give rise to sentience. Brain organoids are 3D tissue constructs that recapitulate key aspects of brain development and function, while MEAs enable bidirectional communication with neuronal cultures. As brain organoids become more sophisticated and integrated with MEAs, the question arises: Could such a system support not only intelligent computation, but subjective experience? This paper explores the philosophical implications of this thought experiment, considering scenarios in which brain organoids exhibit signs of sensory awareness, distress, preference, and other hallmarks of sentience. It examines the ethical quandaries that would arise if compelling evidence of sentience were found in brain organoids, such as the moral status of these entities and the permissibility of different types of research. The paper also explores how the phenomenon of organoid sentience might shed light on the nature of consciousness and the plausibility of artificial sentience. While acknowledging the speculative nature of these reflections, the paper argues that the possibility of sentient brain organoids deserves serious consideration given the rapid pace of advances in this field. Grappling with these questions proactively could help set important ethical boundaries for future research and highlight critical avenues of scientific and philosophical inquiry. The thought experiment of sentient brain organoids thus serves as a valuable lens for examining deep issues at the intersection of neuroscience, ethics, and the philosophy of mind.