
Citations of:

The Singularity: A Philosophical Analysis

In Uzi Awret (ed.), The Singularity: Could Artificial Intelligence Really Out-Think Us? Imprint Academic. pp. 12-88 (2016)

  • Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
    34 citations
  • In Defense of Fanaticism.Hayden Wilkinson - 2022 - Ethics 132 (2):445-477.
    Which is better: a guarantee of a modest amount of moral value, or a tiny probability of arbitrarily large value? To prefer the latter seems fanatical. But, as I argue, avoiding such fanaticism brings severe problems. To do so, we must decline intuitively attractive trade-offs; rank structurally identical pairs of lotteries inconsistently, or else admit absurd sensitivity to tiny probability differences; have rankings depend on remote, unaffected events; and often neglect to rank lotteries as we already know we would (...)
    40 citations
  • Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
    34 citations
  • Against the singularity hypothesis.David Thorstad - forthcoming - Philosophical Studies:1-25.
    The singularity hypothesis is a radical hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on scientifically implausible growth assumptions. I show how leading philosophical defenses of the singularity hypothesis (Chalmers 2010, Bostrom 2014) fail (...)
    4 citations
  • Engineered Wisdom for Learning Machines.Brett Karlan & Colin Allen - 2024 - Journal of Experimental and Theoretical Artificial Intelligence 36 (2):257-272.
    We argue that the concept of practical wisdom is particularly useful for organizing, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into the development of wisdom. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioral scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as (...)
    2 citations
  • Prudential Longtermism.Johan E. Gustafsson & Petra Kosonen - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    According to Longtermism, our acts’ expected influence on the expected value of the world is mainly determined by their effects in the far future. There is, given total utilitarianism, a straightforward argument for Longtermism due to the enormous number of people that might exist in the future, but this argument does not work on person-affecting views. In this paper, we will argue that these views might also lead to Longtermism if Prudential Longtermism is true. Prudential Longtermism holds for a person (...)
    3 citations
  • (1 other version)What Matters in Survival: Self-determination and The Continuity of Life Trajectories.Heidi Savage - 2024 - Acta Analytica 39 (1):37-56.
    In this paper, I argue that standard psychological continuity theory does not account for an important feature of what is important in survival – having the property of personhood. I offer a theory that can account for this, and I explain how it avoids the implausible consequences of standard psychological continuity theory, as well as having certain other advantages over that theory.
    1 citation
  • Why Machines Will Never Rule the World: Artificial Intelligence without Fear.Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence (...)
    4 citations
  • Digital suffering: why it's a problem and how to prevent it.Bradford Saad & Adam Bradley - 2022 - Inquiry: An Interdisciplinary Journal of Philosophy.
    As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access (...)
    4 citations
  • Racing to the precipice: a model of artificial intelligence development.Stuart Armstrong, Nick Bostrom & Carl Shulman - 2016 - AI and Society 31 (2):201-206.
    16 citations
  • Thinking Inside the Box: Controlling and Using an Oracle AI.Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
    18 citations
  • Superintelligence as superethical.Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not (...)
    6 citations
  • The race for an artificial general intelligence: implications for public policy.Wim Naudé & Nicola Dimitri - 2020 - AI and Society 35 (2):367-379.
    An arms race for an artificial general intelligence would be detrimental for and even pose an existential threat to humanity if it results in an unfriendly AGI. In this paper, an all-pay contest model is developed to derive implications for public policy to avoid such an outcome. It is established that, in a winner-takes-all race, where players must invest in R&D, only the most competitive teams will participate. Thus, given the difficulty of AGI, the number of competing teams is unlikely (...)
    8 citations
  • Echoes of myth and magic in the language of Artificial Intelligence.Roberto Musa Giuliano - 2020 - AI and Society 35 (4):1009-1024.
    To a greater extent than in other technical domains, research and progress in Artificial Intelligence has always been entwined with the fictional. Its language echoes strongly with other forms of cultural narratives, such as fairytales, myth and religion. In this essay we present varied examples that illustrate how these analogies have guided not only readings of the AI enterprise by commentators outside the community but also inspired AI researchers themselves. Owing to their influence, we pay particular attention to the similarities (...)
    8 citations
  • The Moral Case for Long-Term Thinking.Hilary Greaves, William MacAskill & Elliott Thornley - 2021 - In Natalie Cargill & Tyler M. John (eds.), The Long View: Essays on Policy, Philanthropy, and the Long-term Future. London: FIRST. pp. 19-28.
    This chapter makes the case for strong longtermism: the claim that, in many situations, impact on the long-run future is the most important feature of our actions. Our case begins with the observation that an astronomical number of people could exist in the aeons to come. Even on conservative estimates, the expected future population is enormous. We then add a moral claim: all the consequences of our actions matter. In particular, the moral importance of what happens does not depend on (...)
    2 citations
  • Language Agents and Malevolent Design.Inchul Yum - 2024 - Philosophy and Technology 37 (104):1-19.
    Language agents are AI systems capable of understanding and responding to natural language, potentially facilitating the process of encoding human goals into AI systems. However, this paper argues that if language agents can achieve easy alignment, they also increase the risk of malevolent agents building harmful AI systems aligned with destructive intentions. The paper contends that if training AI becomes sufficiently easy or is perceived as such, it enables malicious actors, including rogue states, terrorists, and criminal organizations, to create powerful (...)
    1 citation
  • Transhumanist immortality: Understanding the dream as a nightmare.Pablo García-Barranquero - 2021 - Scientia et Fides 9 (1):177-196.
    This paper offers new arguments to reject the alleged dream of immortality. In order to do this, I firstly introduce an amendment to Michael Hauskeller’s approach of the “immortalist fallacy”. I argue that the conclusion “we do not want to live forever” does not follow from the premise “we do not want to die”. Next, I propose the philosophical turn from “normally” to “under these circumstances” to resolve this logical error. Then, I review strong philosophical critiques of this transhumanist purpose (...)
    5 citations
  • The problem of superintelligence: political, not technological.Wolfhart Totschnig - 2019 - AI and Society 34 (4):907-920.
    The thinkers who have reflected on the problem of a coming superintelligence have generally seen the issue as a technological problem, a problem of how to control what the superintelligence will do. I argue that this approach is probably mistaken because it is based on questionable assumptions about the behavior of intelligent agents and, moreover, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem posed by (...)
    7 citations
  • (1 other version)The philosophy of computer science.Raymond Turner - 2013 - Stanford Encyclopedia of Philosophy.
    16 citations
  • The Philosophy of Online Manipulation.Michael Klenk & Fleur Jongepier (eds.) - 2022 - Routledge.
    Are we being manipulated online? If so, is being manipulated by online technologies and algorithmic systems notably different from human forms of manipulation? And what is under threat exactly when people are manipulated online? This volume provides philosophical and conceptual depth to debates in digital ethics about online manipulation. The contributions explore the ramifications of our increasingly consequential interactions with online technologies such as online recommender systems, social media, user-friendly design, micro-targeting, default-settings, gamification, and real-time profiling. The authors in this (...)
    2 citations
  • The story of humanity and the challenge of posthumanity.Zoltán Boldizsár Simon - 2019 - History of the Human Sciences 32 (2).
    Today’s technological-scientific prospect of posthumanity simultaneously evokes and defies historical understanding. On the one hand, it implies a historical claim of an epochal transformation concerning posthumanity as a new era. On the other, by postulating the birth of a novel, better-than-human subject for this new era, it eliminates the human subject of modern Western historical understanding. In this article, I attempt to understand posthumanity as measured against the story of humanity as the story of history itself. I examine the fate (...)
    3 citations
  • Could You Merge With AI? Reflections on the Singularity and Radical Brain Enhancement.Cody Turner & Susan Schneider - 2020 - In Markus Dirk Dubber, Frank Pasquale & Sunit Das (eds.), The Oxford Handbook of Ethics of Ai. Oxford Handbooks. pp. 307-325.
    This chapter focuses on AI-based cognitive and perceptual enhancements. AI-based brain enhancements are already under development, and they may become commonplace over the next 30–50 years. We raise doubts concerning whether the radical AI-based enhancements transhumanists advocate will accomplish the transhumanist goals of longevity, human flourishing, and intelligence enhancement. We urge that even if the technologies are medically safe and are not used as tools by surveillance capitalism or an authoritarian dictatorship, these enhancements may still fail to do their job for (...)
    2 citations
  • How does Artificial Intelligence Pose an Existential Risk?Karina Vold & Daniel R. Harris - 2021 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
    1 citation
  • AAAI: an Argument Against Artificial Intelligence.Sander Beckers - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that (...)
    3 citations
  • Uploading and Branching Identity.Michael A. Cerullo - 2015 - Minds and Machines 25 (1):17-36.
    If a brain is uploaded into a computer, will consciousness continue in digital form or will it end forever when the brain is destroyed? Philosophers have long debated such dilemmas and classify them as questions about personal identity. There are currently three main theories of personal identity: biological, psychological, and closest continuer theories. None of these theories can successfully address the questions posed by the possibility of uploading. I will argue that uploading requires us to adopt a new theory of (...)
    6 citations
  • Safety Engineering for Artificial General Intelligence.Roman Yampolskiy & Joshua Fox - 2012 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
    6 citations
  • Ethics of Artificial Intelligence.Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of Ai. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
    1 citation
  • Big Historical Foundations for Deep Future Speculations: Cosmic Evolution, Atechnogenesis, and Technocultural Civilization.Cadell Last - 2017 - Foundations of Science 22 (1):39-124.
    Big historians are attempting to construct a general holistic narrative of human origins enabling an approach to studying the emergence of complexity, the relation between evolutionary processes, and the modern context of human experience and actions. In this paper I attempt to explore the past and future of cosmic evolution within a big historical foundation characterized by physical, biological, and cultural eras of change. From this analysis I offer a model of the human future that includes an addition and/or reinterpretation (...)
    5 citations
  • The disconnection thesis.David Roden - 2012 - In Amnon H. Eden & James H. Moor (eds.), Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer.
    In his 1993 article ‘The Coming Technological Singularity: How to survive in the posthuman era’ the computer scientist Vernor Vinge speculated that developments in artificial intelligence might reach a point where improvements in machine intelligence result in smart AIs producing ever-smarter AIs. According to Vinge the ‘singularity’, as he called this threshold of recursive self-improvement, would be a ‘transcendental event’ transforming life on Earth in ways that unaugmented humans are not equipped to envisage. In this paper I argue Vinge’s idea (...)
    4 citations
  • What’s Wrong with Designing People to Serve?Bartek Chomanski - 2019 - Ethical Theory and Moral Practice 22 (4):993-1015.
    In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, (...)
    3 citations
  • An argument for the impossibility of machine intelligence (preprint).Jobst Landgrebe & Barry Smith - 2021 - Arxiv.
    Since the noun phrase `artificial intelligence' (AI) was coined, it has been debated whether humans are able to create intelligence using technology. We shed new light on this question from the point of view of thermodynamics and mathematics. First, we define what it is to be an agent (device) that could be the bearer of AI. Then we show that the mainstream definitions of `intelligence' proposed by Hutter and others and still accepted by the AI community are too weak even (...)
    1 citation
  • A History of First Step Fallacies.Hubert L. Dreyfus - 2012 - Minds and Machines 22 (2):87-99.
    In the 1960s, without realizing it, AI researchers were hard at work finding the features, rules, and representations needed for turning rationalist philosophy into a research program, and by so doing AI researchers condemned their enterprise to failure. About the same time, a logician, Yehoshua Bar-Hillel, pointed out that AI optimism was based on what he called the “first step fallacy”. First step thinking has the idea of a successful last step built in. Limited early success, however, is not a (...)
    6 citations
  • How Philosophy of Mind Can Shape the Future.Susan Schneider & Pete Mandik - 2017 - In Amy Kind (ed.), Philosophy of Mind in the Twentieth and Twenty-First Centuries: The History of the Philosophy of Mind, Volume 6. New York: Routledge. pp. 303-319.
    2 citations
  • AI-Completeness: Using Deep Learning to Eliminate the Human Factor.Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
    1 citation
  • COVID-19 and Singularity: Can the Philippines Survive Another Existential Threat?Robert James M. Boyles, Mark Anthony Dacela, Tyrone Renzo Evangelista & Jon Carlos Rodriguez - 2022 - Asia-Pacific Social Science Review 22 (2):181–195.
    In general, existential threats are those that may potentially result in the extinction of the entire human species, if not significantly endanger its living population. These threats include, but are not limited to, pandemics and the impacts of a technological singularity. As regards pandemics, significant work has already been done on how to mitigate, if not prevent, the aftereffects of this type of disaster. For one, certain problem areas on how to properly manage pandemic responses have already been identified, (...)
    1 citation
  • Corporate Agency and Possible Futures.Tim Mulgan - 2018 - Journal of Business Ethics 154 (4):901-916.
    We need an account of corporate agency that is temporally robust – one that will help future people to cope with challenges posed by corporate groups in a range of credible futures. In particular, we need to bequeath moral resources that enable future people to avoid futures dominated by corporate groups that have no regard for human beings. This paper asks how future philosophers living in broken or digital futures might re-imagine contemporary debates about corporate agency. It argues that the (...)
    3 citations
  • Philosophers & futurists, catch up.Jürgen Schmidhuber - 2012 - Journal of Consciousness Studies 19 (1-2):173-182.
    Responding to Chalmers' The Singularity, I argue that progress towards self-improving AIs is already substantially beyond what many futurists and philosophers are aware of. Instead of rehashing well-trodden topics of the previous millennium, let us start focusing on relevant new millennium results.
    5 citations
  • The methodological rigor of anticipatory bioethics.Bert Gordijn & Henk ten Have - 2014 - Medicine, Health Care and Philosophy 17 (3):323-324.
    4 citations
  • Can the predictive processing model of the mind ameliorate the value-alignment problem?William Ratoff - 2021 - Ethics and Information Technology 23 (4):739-750.
    How do we ensure that future generally intelligent AI share our values? This is the value-alignment problem. It is a weighty matter. After all, if AI are neutral with respect to our wellbeing, or worse, actively hostile toward us, then they pose an existential threat to humanity. Some philosophers have argued that one important way in which we can mitigate this threat is to develop only AI that shares our values or that has values that ‘align with’ ours. However, there (...)
    1 citation
  • Editorial: Risks of artificial intelligence.Vincent C. Müller - 2015 - In Vincent C. Müller (ed.), Risks of Artificial Intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. Time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically (...)
    1 citation
  • Eight Kinds of Critters: A Moral Taxonomy for the Twenty-Second Century.Michael Bess - 2018 - Journal of Medicine and Philosophy 43 (5):585-612.
    Over the coming century, the accelerating advance of bioenhancement technologies, robotics, and artificial intelligence (AI) may significantly broaden the qualitative range of sentient and intelligent beings. This article proposes a taxonomy of such beings, ranging from modified animals to bioenhanced humans to advanced forms of robots and AI. It divides these diverse beings into three moral and legal categories—animals, persons, and presumed persons—describing the moral attributes and legal rights of each category. In so doing, the article sets forth a framework (...)
    2 citations
  • Is Species Integrity a Human Right? A Rights Issue Emerging from Individual Liberties with New Technologies.Lantz Fleming Miller - 2014 - Human Rights Review 15 (2):177-199.
    Currently, some philosophers and technicians propose to change the fundamental constitution of Homo sapiens, as by significantly altering the genome, implanting microchips in the brain, and pursuing related techniques. Among these proposals are aspirations to guide humanity’s evolution into new species. Some philosophers have countered that such species alteration is unethical and have proposed international policies to protect species integrity; yet, it remains unclear on what basis such right to species integrity would rest. An answer may come from an unexpected (...)
    3 citations
  • A brain in a vat cannot break out: why the singularity must be extended, embedded and embodied.Francis Heylighen - 2012 - Journal of Consciousness Studies 19 (1-2):126-142.
    The present paper criticizes Chalmers's discussion of the Singularity, viewed as the emergence of a superhuman intelligence via the self-amplifying development of artificial intelligence. The situated and embodied view of cognition rejects the notion that intelligence could arise in a closed 'brain-in-a-vat' system, because intelligence is rooted in a high-bandwidth, sensory-motor interaction with the outside world. Instead, it is proposed that superhuman intelligence can emerge only in a distributed fashion, in the form of a self-organizing network of humans, computers, and (...)
    3 citations
  • A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI software driven entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  • Belief in the singularity is logically brittle.Selmer Bringsjord - 2012 - Journal of Consciousness Studies 19 (7-8):14.
    3 citations
  • “We Now Control Our Evolution”: Circumventing Ethical and Logical Cul-de-Sacs of an Anticipated Engineering Revolution.Lantz Fleming Miller - 2014 - Science and Engineering Ethics 20 (4):1011-1025.
    Philosophers, scientists, and other researchers have increasingly characterized humanity as having reached an epistemic and technical stage at which “we can control our own evolution.” Moral–philosophical analysis of this outlook reveals some problems, beginning with the vagueness of “we.” At least four glosses on “we” in the proposition “we, humanity, control our evolution” can be made: “we” is the bundle of all living humans, a leader guiding the combined species, each individual acting severally, or some mixture of these three involving (...)
    2 citations
  • Of Animals, Robots and Men.Christine Tiefensee & Johannes Marx - 2015 - Historical Social Research 40 (4):70-91.
    Domesticated animals need to be treated as fellow citizens: only if we conceive of domesticated animals as full members of our political communities can we do justice to their moral standing—or so Sue Donaldson and Will Kymlicka argue in their widely discussed book Zoopolis. In this contribution, we pursue two objectives. Firstly, we reject Donaldson and Kymlicka’s appeal for animal citizenship. We do so by submitting that instead of paying due heed to their moral status, regarding animals as citizens misinterprets (...)
    1 citation
  • Why AI shall emerge in the one of possible worlds?Ignacy Sitnicki - 2019 - AI and Society 34 (2):365-371.
    The aim of this paper is to present some philosophical considerations about the supposed AI emergence in the future. However, the predicted timeline of this process is uncertain. To avoid any kind of speculations on the proposed analysis from a scientific point of view, a metaphysical approach is undertaken as a modal context of the discussion. I argue that modal claim about possible AI emergence at a certain point of time in the future is justified from a temporal perspective. Therefore, (...)
    3 citations
  • The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. [REVIEW]Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having (...)
    45 citations