Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we want to argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim, black-box medicine is not conducive to supporting informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.
This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups implicated in AI design and use share some common values that can be used to further strengthen design coordination efforts. VSD is shown to be able both to distill these common values and to provide a framework for stakeholder coordination.
This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, that reason-responsive people can be persuaded by. This proposal can play a normative role, and it is also a more promising avenue towards moral enhancement, because such a system can be designed to take advantage of the sometimes undue trust that people put in automated technologies. We could therefore expect a well-designed moral reasoner system to be able to persuade people who may not be persuaded by similar arguments from other people. So, all things considered, there is hope in artificial intelligence for moral enhancement, but not in artificial intelligence that relies solely on ambient intelligence technologies.
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used by humans. This includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). - For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies, and finally consider what policy consequences may be drawn.
The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field’s original goals, and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, recent cases of unexpected effects of AI are the consequences of those very choices that enabled the field to succeed, and this is why it will be difficult to solve them. In this chapter we review three of these choices, investigating their connection to some of today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability. We introduce the notion of “ethical debt” to describe the need to undertake expensive rework in the future in order to address ethical problems created by a technical system.
Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us a universal basic income, would be able to do all our jobs. Moreover, we would over-rely upon AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue, firstly, that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all our jobs. If instead they are intelligent enough to replace us, the risk that they become unfriendly increases, given that they would not need us and humans would just compete for valuable resources. Their hostility will not promote our moral passivity. Secondly, the use of AI assistants in our personal lives will become a problem only if we rely on them for almost all our decision-making and motivation. But such a (maximally) pervasive level of dependence raises the question of whether humans would accept it, and consequently whether the crisis of passivity will arise.
[Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. It examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, and the ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence” held in Oxford, the volume includes contributions from prominent researchers within the field from around the world.
The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in a book (Hutter, 2005), an exciting, sound, and complete mathematical model for a superintelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in a JAIR paper (Veness et al., 2011). This practical breakthrough has resulted in some impressive applications, finally muting earlier critique that UAI is only a theory. For the first time, without being provided any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even being told the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI.
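For orientation, the AIXI agent mentioned in this abstract can be stated in a single formula. The following is the standard action-selection equation as given in Hutter (2005), reproduced here as a reference point; U is a universal (monotone) Turing machine, ℓ(q) is the length of program q, the o and r symbols are observations and rewards, and m is the horizon:

```latex
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[\, r_t + \cdots + r_m \,\bigr]
       \sum_{q \,:\; U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Informally: the agent takes the action that maximizes expected future reward under Solomonoff's universal prior, which weights every program q consistent with the interaction history by 2^(-ℓ(q)).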
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such a powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments, as well as supporting evidence from multiple domains, indicating that advanced AI cannot be fully controlled. The consequences of the uncontrollability of AI are discussed with respect to the future of humanity and to research on AI, AI safety, and security. This paper can serve as a comprehensive reference for the topic of uncontrollability.
This article reviews the reasons scholars hold that driverless cars and many other AI-equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines can be addressed by the kind of ethical choices made by human beings for millennia. Ergo, there is little need to teach machines ethics, even if this could be done in the first place. Finally, the article points out that it is a grievous error to draw on extreme outlier scenarios—such as the Trolley narratives—as a basis for conceptualizing the ethical issues at hand.
This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, and dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, AI/ML makes it possible for those options and their framings—the choice architectures—to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies, we pay them little notice. They are, as philosophers of technology put it, transparent to us—effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.
The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive pursuit, a superintelligence could also easily surpass humans in the quality of its moral thinking. However, it would be up to the designers of the superintelligence to specify its original motivations. Since the superintelligence may become unstoppably powerful because of its intellectual superiority and the technologies it could develop, it is crucial that it be provided with human-friendly motivations. This paper surveys some of the unique ethical issues in creating superintelligence, discusses what motivations we ought to give a superintelligence, and introduces some cost-benefit considerations relating to whether the development of superintelligent machines ought to be accelerated or retarded.
A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development, namely, before it starts self-improvement, during its takeoff, when it uses various instruments to escape its initial confinement, or after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned or feature flawed friendliness. AI could also halt at later stages of its development either due to technical glitches or ontological problems. Overall, we identified several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no single simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Today, artificial intelligence, especially machine learning, is structurally dependent on human participation. Technologies such as Deep Learning (DL) leverage networked media infrastructures and human-machine interaction designs to harness users to provide training and verification data. The emergence of DL is therefore based on a fundamental socio-technological transformation of the relationship between humans and machines. Rather than simulating human intelligence, DL-based AIs capture human cognitive abilities; they are thus hybrid human-machine apparatuses. From the perspective of media philosophy and social-theoretical critique, I differentiate five types of “media technologies of capture” in AI apparatuses and analyze them as forms of power relations between humans and machines. Finally, I argue that the current hype about AI implies a relational and distributed understanding of (human/artificial) intelligence, which I categorize under the term “cybernetic AI”. This form of AI manifests in socio-technological apparatuses that involve new modes of subjectivation, social control and discrimination of users.
[Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp.] --- Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and Cognitive Science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that define where we stand and where we should go from here.
This paper investigates how the simulation of intelligence, an activity that has been considered the notional task of Artificial Intelligence, does not comprise its duplication. Briefly touching on the distinction between conceivability and possibility, and commenting on Ryan’s approach to fiction in terms of the interplay between possible worlds and her principle of minimal departure, we specify verisimilitude in Artificial Intelligence as the accurate resemblance of intelligence by its simulation and, from this characterization, claim the metaphysical impossibility of duplicating intelligence, as neither verisimilarly nor convincingly simulating intelligence involves its duplication. To this end, we argue, by a representative case of simulation, that, albeit conceivable, Turing’s test for machine intelligence wrongly equates the occurrence of indistinguishable intelligence performance with intelligence duplication, which is grounded in a prima facie conceivable but metaphysically impossible view that separates intelligence from its origin. Finally, we establish the following criterion for evaluating simulation in Artificial Intelligence: simulations succeed in AI if and only if they are able to epistemically persuade human beings that intelligence has been duplicated, that is, if and only if verisimilar simulations can convincingly minimally depart from actual intelligence.
This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance, given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognise that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.
This essay describes computational semantic networks for a philosophical audience and surveys several approaches to semantic-network semantics. In particular, propositional semantic networks are discussed; it is argued that only a fully intensional, Meinongian semantics is appropriate for them; and several Meinongian systems are presented.
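Since this essay addresses a philosophical audience, a concrete fragment may help fix ideas. In a propositional semantic network, propositions are themselves nodes, with labelled arcs to their constituents, so a proposition can in turn serve as a term of another proposition (e.g., as the intensional object of a belief). The following minimal sketch uses invented node and relation names, not the notation of any particular system discussed in the essay:

```python
# Minimal sketch of a propositional semantic network. Propositions are
# nodes in their own right, so they can be constituents of other
# propositions (beliefs about propositions, etc.).

class Node:
    _counter = 0

    def __init__(self, label=None):
        Node._counter += 1
        self.id = f"n{Node._counter}"
        self.label = label      # None for proposition nodes
        self.arcs = {}          # relation name -> target Node

    def __repr__(self):
        return self.label or self.id

def assert_prop(network, **arcs):
    """Create a proposition node with labelled arcs to its constituents."""
    prop = Node()
    prop.arcs.update(arcs)
    network.append(prop)
    return prop

network = []
john, mary = Node("John"), Node("Mary")
# m1: John loves Mary.
m1 = assert_prop(network, agent=john, relation=Node("loves"), patient=mary)
# m2: Mary believes m1 -- the proposition node m1 is itself a term here,
# which is what makes an intensional reading of the network natural.
m2 = assert_prop(network, agent=mary, relation=Node("believes"), patient=m1)

for prop in network:
    print(prop.id, {rel: repr(tgt) for rel, tgt in prop.arcs.items()})
```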
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals. 1 Vincent C. Müller, Editorial: Risks of Artificial Intelligence - 2 Steve Omohundro, Autonomous Technology and the Greater Human Good - 3 Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future - 4 Ted Goertzel, The Path to More General Artificial Intelligence - 5 Miles Brundage, Limitations and Risks of Machine Ethics - 6 Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents - 7 Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement - 8 Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence - 9 András Kornai, Bounding the Impact of AGI - 10 Anders Sandberg, Ethics and Impact of Brain Emulations - 11 Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff - 12 Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI.
In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a prior grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post hoc interpretability that seems to be predominant in most recent literature.
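One common concrete instance of the "interpretative or approximation models" this abstract refers to is a global surrogate: a simple, inspectable model fitted to reproduce a black-box model's predictions rather than the true labels. The sketch below is illustrative only (the paper does not prescribe any implementation) and uses scikit-learn with standard toy data:

```python
# Post hoc interpretation via a global surrogate model: approximate an
# opaque model with a shallow, inspectable one.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# The "black box": accurate, but its internals are hard to follow.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's *predictions*, not the true
# labels: it approximates the model, not the world. This is why factivity
# can be relaxed -- the surrogate need only be faithful, not true.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate tracks the black box. High fidelity
# at low depth is what makes the approximation useful for understanding.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to black box: {fidelity:.3f}")
print(export_text(surrogate))
```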
Defining Artificial Intelligence and Artificial General Intelligence remains controversial and disputed. The problem stems from a longer-standing controversy over the definition of consciousness, which, if solved, could possibly offer a solution to defining AI and AGI. Central to these problems is the paradox that appraising AI and consciousness requires epistemological objectivity about domains that are ontologically subjective. I propose that applying the philosophy of art, which also aims to define art through a lens of epistemological objectivity where the domains are ontologically subjective, can further elucidate this unsolved question. In this sense, art, AI, and ultimately consciousness are multifaceted domains where conventional complexity theory and current philosophical approaches may be augmented by aesthetic principles ranging from classical Aristotelian essentialism to Wittgensteinian anti-essentialism. This approach of AI as art may offer novel solutions to characterising and elucidating the ciphers of consciousness and AI.
Artificial intelligence has arrived. In the online world it is already a part of everyday life, sitting invisibly behind a wide range of search engines and online commerce sites. It offers huge potential to enable more efficient and effective business and government, but the use of artificial intelligence brings with it important questions about governance, accountability and ethics. Realising the full potential of artificial intelligence and avoiding possible adverse consequences requires societies to find satisfactory answers to these questions. This report sets out some possible approaches, and describes some of the ways government is already engaging with these issues.
Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims: (i) to defend the need for a novel conceptual model for understanding the systemic legal disruption caused by new technologies such as AI; (ii) to situate this model in relation to preceding debates about the interaction of regulation with new technologies (particularly the ‘cyberlaw’ and ‘robolaw’ debates); and (iii) to set out a detailed model for understanding the legal disruption precipitated by AI, examining both the pathways stemming from new affordances that can give rise to a regulatory ‘disruptive moment’, as well as the Legal Development, Displacement or Destruction that can ensue. The article proposes that this model of legal disruption can be broadly generalised to understanding the legal effects and challenges of other emerging technologies.
The argument presented in this paper is not a direct attack or defence of the Chinese Room Argument (CRA), but relates to the premise at its heart, that syntax is not sufficient for semantics, via the closely associated propositions that semantics is not intrinsic to syntax and that syntax is not intrinsic to physics. However, in contrast to the CRA’s critique of the link between syntax and semantics, this paper will explore the associated link between syntax and physics. The main argument presented here is not significantly original: it is a simple reflection upon that originally given by Hilary Putnam (Putnam 1988) and criticised by David Chalmers and others. Instead of seeking to justify Putnam’s claim that “every open system implements every Finite State Automaton (FSA)”, and hence that psychological states of the brain cannot be functional states of a computer, I will seek to establish the weaker result that, over a finite time window, every open system implements the trace of a particular FSA Q, as it executes program (p) on input (x). That this result leads to panpsychism is clear: equating Q(p, x) to a specific Strong AI program that is claimed to instantiate phenomenal states as it executes, and following Putnam’s procedure, identical computational (and ex hypothesi phenomenal) states (ubiquitous little ‘pixies’) can be found in every open physical system.
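The weaker claim at issue (that over a finite window every open system implements the *trace* of a particular FSA Q on program p and input x) is easier to see with the object itself in view: a trace is just the finite sequence of states the automaton visits. A minimal sketch with an invented toy automaton, not any example from the paper:

```python
# The trace of a finite state automaton on a finite input is the sequence
# of states it visits. Putnam-style arguments map such a finite sequence
# onto the successive physical states of any open system over the same
# time window, which is what makes the "implementation" claim so cheap.

def run_fsa(transitions, start, inputs):
    """Return the trace (state sequence) of an FSA on a finite input string."""
    state, trace = start, [start]
    for symbol in inputs:
        state = transitions[(state, symbol)]
        trace.append(state)
    return trace

# Toy automaton: tracks the parity of the number of 1s seen so far.
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(run_fsa(transitions, "even", "1101"))
# ['even', 'odd', 'even', 'even', 'odd']
```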
On the one hand, people have witnessed a lot of amazing technological inventions and innovations in the multifaceted performances of artificial intelligence systems ever since the earliest stages of their development. Activities previously done with a lot of manual and muscular effort are now accomplished with no sweat, just at the tip of one’s finger. I would venture to say that artificial intelligence is among the highest scientific and technological achievements of humanity in post-modern civilization. Yet on the other hand, should we be worried that humanity will soon be threatened by the dark side of AI systems, when the truth of the matter is that, long before the advent of AI, humanity had always been threatened by the evil forces of totalitarian powers well entrenched in governments and big capitalist empires in control of nations’ economies? Future AI systems employed and mobilized in the service of these political and economic powers will certainly heighten the degree of their oppressive domination and intensify the common people’s oppression.
The future rests under the sign of technology. Given the prevalence of technological neutrality and inevitabilism, most conceptualizations of the future tend to ignore moral problems. In this paper we argue that every choice about future technologies is a moral choice, and that even the most technology-dominated scenarios of the future are, in fact, moral provocations we have to imagine solutions to. We begin by explaining the intricate connection between morality and the future. After a short excursion into the history of Artificial Intelligence, we analyse two possible scenarios, which show that building the future with technology is, first and foremost, a moral endeavor.
In the Fall of 1983, I offered a junior/senior-level course in Philosophy of Artificial Intelligence, in the Department of Philosophy at SUNY Fredonia, after returning there from a year’s leave to study and do research in computer science and artificial intelligence (AI) at SUNY Buffalo. Of the 30 students enrolled, most were computer science majors, about a third had no computer background, and only a handful had studied any philosophy. (I might note that enrollments have subsequently increased in the Philosophy Department’s AI-related courses, such as logic, philosophy of mind, and epistemology, and that several computer science students have added philosophy as a second major.) This article describes that course, provides material for use in such a course, and offers a bibliography of relevant articles in the AI, cognitive science, and philosophical literature.
The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, consciousness, humankind, life, etc., and at the same time it has contributed substantially to answering these questions. There is thus a substantial tradition of work, both on AI by philosophers and of theory within AI itself. - The volume contains papers by Bostrom, Dreyfus, Gomila, O'Regan and Shagrir.
Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind. Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show how a coherent theoretical case can be constructed for AI punishment. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and it would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.
As robots slip into more domains of human life, from the operating room to the bedroom, they take on our morally important tasks and decisions, as well as create new risks, from the psychological to the physical. This book answers the urgent call to study their ethical, legal, and policy impacts.
In this paper, we focus on the most efficacious AI applications for life extension and anti-aging at three expected stages of AI development: narrow AI, AGI, and superintelligence. First, we overview the existing research and commercial work performed by a select number of startups and academic projects. We find that at the current stage of “narrow” AI, the most promising areas for life extension are geroprotector-combination discovery, detection of aging biomarkers, and personalized anti-aging therapy. These advances could help currently living people reach longevity escape velocity and survive until more advanced AI appears. When AI comes close to the human level, the main contribution to life extension will come from AI integration with humans through brain-computer interfaces, integrated AI assistants capable of autonomously diagnosing and treating health issues, and cyber systems embedded into human bodies. Lastly, we speculate about the more remote future, when AI reaches the level of superintelligence and such life-extension methods as uploading human minds and creating nanotechnological bodies may become possible, thus lowering the probability of human death close to zero. We suggest that a medical-AI-based superintelligence could be safer than, say, a military AI, as it may help humans to evolve into part of the future superintelligence via brain augmentation, uploading, and a network of self-improving humans. Medical AI’s value system is focused on human benefit.
Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be assimilated and regarded as “normal,” and (3) the tendency of human beings to anthropomorphize. This list is not intended as exhaustive, nor is it seen to preclude entirely a clear ontology; these challenges are, however, a necessary set of topics for consideration. Each of these factors is seen to present a ‘moving target’ for discussion, which poses a challenge for both technical specialists and non-practitioners of AI systems development (e.g., philosophers and theologians) to speak meaningfully, given that the corpus of AI structures and capabilities evolves at a rapid pace. Finally, we present avenues for moving forward, including opportunities for collaborative synthesis for scholars in philosophy and science.
The Upanishads are traditionally commented upon as texts of theology and religion. But the contents of the Upanishads can also be viewed and commented upon from a modern science point of view. Elements of modern science present in the Upanishads and Advaita Siddhanta will be listed. The nature of maya will be discussed with modern scientific awareness. This awareness will be further used in understanding human mental processes and the ways to model them, contributing to the natural language comprehension field of artificial intelligence. The nature of the Dvaita and Advaita conscious states and their implications for human cognitive processes will be discussed. The use and significance of the ‘absence of mind’ conscious state will also be presented. The use of the above understanding in modeling the human understanding process, and its comparison to physiological psychology (the physics and chemistry of psychology), will be proposed.
Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6) and the singularity (2.7). Then we look at AI systems as subjects, i.e. when ethics is for the AI systems themselves in machine ethics (2.8) and artificial moral agency (2.9). Finally we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, we outline existing positions and arguments, then we analyse how this plays out with current technologies and finally what policy consequences may be drawn.
With the advancement of artificial intelligence and humanoid robotics, and an ongoing debate between human rights and the rule of law, moral philosophers and legal and political scientists are finding it difficult to answer questions like: “Do humanoid robots have the same rights as humans, and are these rights superior to human rights or not, and why?” This paper argues that the sustainability of human rights will be in question because, in the near future, scientists (arguably the most rational people) will be the biggest critics of human rights. To make artificial intelligence sustainable, however, it is very important to reconcile it with human rights. Above all, there is a need to find a consensus between human rights and robot rights within the framework of our established legal systems.
My paper analyses the analogy between computers and the Thomistic separate substances, and argues that Aquinas' account of angels as cognitively intuitive and non-discursive makes the analogical gap between these impossible to bridge. From there, I point the direction away from computers as the way for us to move up the order of cognitive excellence. Instead, the gifts of the Holy Spirit are the way to go, since by them we participate in this intuitivity. I then lay out the ascetical presuppositions for the successful participation in these gifts, in particular the necessity of the passive purgations, according to the division of the ascetical life into three stages by Garrigou-Lagrange OP.
The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of humanity causing extreme suffering to an AI is important enough to warrant serious consideration. This paper starts from the observation that both concerns rely on problematic philosophical assumptions. Rather than tackling these assumptions directly, it proceeds to present an argument that if one takes these assumptions seriously, then one has a moral obligation to advocate for a ban on the development of a conscious AI.
Technology and Artificial Intelligence, both today and in the near future, are dominated by automated algorithms that combine optimization with models based on the human brain to learn, predict, and even influence the large-scale behavior of human users. Such applications can be understood as outgrowths of historical trends in industry and academia, yet they have far-reaching and even unintended consequences for social and political life around the world. Countries in different parts of the world take different regulatory views on the role and protection of user data, and this will in turn determine the course of development of technology and AI in the near future.
Cybernetic Revelation explores the dual philosophical histories of deconstruction and artificial intelligence, tracing the development of concepts like “logos” and the notion of modeling the mind technologically from pre-history to contemporary thinkers such as Slavoj Zizek and Steven Pinker. The writing is clear and accessible throughout, yet the text probes deeply into major philosophers seen by JD Casten as “conceptual engineers.” Philosophers covered include: Anaximander, Heraclitus, Parmenides, Plato, Aristotle, Philo, Augustine, Shakespeare, Descartes, Spinoza, Leibniz, Locke, Berkeley, Hume, Kant, Hegel, Nietzsche, Freud, Jung, Joyce, Dewey, Wittgenstein, Heidegger, Adorno, Benjamin, Derrida, Chomsky, Zizek, Pinker, Dennett, Hofstadter, Stiegler + more; with special chapters on: AI's history, Complexity, Deconstructing AI, Aesthetics, Consciousness + more.
Bringing together literary scholars, computer scientists, ethicists, philosophers of mind, and scholars from affiliated disciplines, this collection of essays offers important and timely insights into the pasts, presents, and, above all, possible futures of Artificial Intelligence. This book covers topics such as ethics and morality, identity and selfhood, and broader issues about AI, addressing questions about the individual, social, and existential impacts of such technologies. Through the works of science fiction authors such as Isaac Asimov, Stanislaw Lem, Ann Leckie, Iain M. Banks, and Martha Wells, alongside key visual productions such as Ex Machina, Westworld, and Her, the contributions illustrate how science fiction might inform potential futures, as well as act as a springboard to bring disciplinary knowledge to bear on significant developments of Artificial Intelligence. Addressing a broad, interdisciplinary audience, both expert and non-expert readers gain an in-depth understanding of the wide range of pressing issues to which Artificial Intelligence gives rise, and the ways in which science fiction narratives have been used to represent them. Using science fiction in this manner enables readers to see how even fictional worlds and imagined futures have very real impacts on how we understand these technologies. As such, readers are introduced to theoretical positions on Artificial Intelligence through fictional works, as well as encouraged to reflect on the diverse aspects of Artificial Intelligence through its many philosophical, social, legal, scientific, and cultural ramifications.
Report for "The Reasoner" on the conference "Philosophy and Theory of ArtificialIntelligence", 3 & 4 October 2011, Thessaloniki, Anatolia College/ACT, http://www.pt-ai.org. --- Organization: Vincent C. Müller, Professor of Philosophy at ACT & James Martin Fellow, Oxford http://www.sophia.de --- Sponsors: EUCogII, Oxford-FutureTech, AAAI, ACM-SIGART, IACAP, ECCAI.
In this paper, I explore how an embodied perspective on cognition might inform research on artificial intelligence. Many embodied cognition theorists object to the central role that representations play in the traditional view of cognition. Based on these objections, it may seem that the lesson from embodied cognition is that AI should abandon representation as a central component of intelligence. However, I argue that the lesson from embodied cognition is actually that AI research should shift its focus from how to utilize explicit representations to how to create and use tacit representations. To develop this suggestion, I provide an overview of the commitments of the classical view and distinguish three critiques of the role that representations play in that view. I provide further exploration and defense of Daniel Dennett’s distinction between explicit and tacit representations. I argue that we should understand the embodied cognition approach using a framework that includes tacit representations. Given this perspective, I will explore some AI research areas that may be recommended by an embodied perspective on cognition.
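The explicit/tacit distinction invoked here can be made concrete: an explicit representation is a discrete, inspectable symbol structure, while a tacit representation is a competence carried in a system's dispositions, with no separately identifiable vehicle. A toy illustration of my own (not an example from the paper), contrasting a stored rule with the same input-output behavior encoded in learned weights:

```python
import numpy as np

# Explicit representation: the rule "output = 2 * input" is stored as an
# inspectable symbol structure that the system consults.
explicit_rule = {"operation": "multiply", "factor": 2}

def apply_explicit(x):
    if explicit_rule["operation"] == "multiply":
        return explicit_rule["factor"] * x

# Tacit representation: the same competence is carried in fitted weights.
# No single component "says" multiply-by-two; the rule is only implicit
# in how the system as a whole behaves.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2 * X[:, 0]
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # one-weight linear "network"

print(apply_explicit(3.0))    # 6.0, read off the explicit rule
print(w @ np.array([3.0]))    # ~6.0, enacted by the weights
```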
The environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion on the environmental impacts of ML/AI has lacked a perspective reaching beyond quantitative measurements of the energy-related research costs. Building on the foundations laid down by Schwartz et al. (2019) in the Green AI initiative, our argument considers two interlinked phenomena: the gratuitous generalisation capability, and a future where ML/AI performs the majority of quantifiable inductive inferences. The gratuitous generalisation capability refers to a discrepancy between the cognitive demands of a task to be accomplished and the performance (accuracy) of the ML/AI model used. If the latter exceeds the former because the model was optimised to achieve the best possible accuracy, it becomes inefficient and its operation harmful to the environment. The future dominated by non-anthropic induction describes a use of ML/AI so all-pervasive that most inductive inferences come to be furnished by ML/AI generalisations. The paper argues that the present debate deserves an expansion connecting the environmental costs of research and ineffective ML/AI uses (the issue of gratuitous generalisation capability) with the (near) future marked by the all-pervasive Human-Artificial Intelligence Nexus.
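The mismatch this abstract calls "gratuitous generalisation capability" can be operationalised in the spirit of the Green AI proposal (Schwartz et al., 2019) by reporting accuracy against computational cost. The sketch below uses invented numbers and a hypothetical accuracy-per-compute metric; the paper itself prescribes no specific formula:

```python
# Sketch: compare candidate models by accuracy surplus over what the task
# demands, and by accuracy per unit of compute. All figures are invented
# for illustration.

models = {
    # name: (task accuracy, training cost in FLOPs)
    "small_baseline": (0.91, 1.0e15),
    "mid_size":       (0.93, 4.0e16),
    "huge_model":     (0.94, 8.0e18),
}

task_requirement = 0.90  # accuracy the task actually demands

for name, (acc, flops) in models.items():
    # Gratuitous capability: accuracy bought beyond the task's demands,
    # typically at super-linear computational (and energy) cost.
    surplus = acc - task_requirement
    print(f"{name:15s} acc={acc:.2f} FLOPs={flops:.1e} "
          f"surplus={surplus:+.2f} acc/exaFLOP={acc / (flops / 1e18):.2f}")
```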