References
  • Artificial Intelligence: Arguments for Catastrophic Risk. Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)
  • Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
  • Against the singularity hypothesis. David Thorstad - forthcoming - Philosophical Studies:1-25.
    The singularity hypothesis is a radical hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on scientifically implausible growth assumptions. I show how leading philosophical defenses of the singularity hypothesis (Chalmers 2010, Bostrom 2014) fail (...)
  • The argument for near-term human disempowerment through AI. Leonard Dung - 2024 - AI and Society:1-14.
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically come without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
  • Extension and Replacement. Michal Masny - forthcoming - Philosophical Studies.
    Many people believe that it is better to extend the length of a happy life than to create a new happy life, even if the total welfare is the same in both cases. Despite the popularity of this view, one would be hard-pressed to find a fully compelling justification for it in the literature. This paper develops a novel account of why and when extension is better than replacement that applies not just to persons but also to non-human animals and (...)
  • Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  • Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  • Engineered Wisdom for Learning Machines. Brett Karlan & Colin Allen - 2024 - Journal of Experimental and Theoretical Artificial Intelligence 36 (2):257-272.
    We argue that the concept of practical wisdom is particularly useful for organizing, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into the development of wisdom. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioral scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as (...)
  • Digital suffering: why it's a problem and how to prevent it. Bradford Saad & Adam Bradley - 2022 - Inquiry: An Interdisciplinary Journal of Philosophy.
    As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access (...)
  • What Matters in Survival: Self-determination and The Continuity of Life Trajectories. Heidi Savage - 2024 - Acta Analytica 39 (1):37-56.
    In this paper, I argue that standard psychological continuity theory does not account for an important feature of what is important in survival – having the property of personhood. I offer a theory that can account for this, and I explain how it avoids the implausible consequences of standard psychological continuity theory, as well as having certain other advantages over that theory.
  • Kant Meets Cyberpunk. Eric Schwitzgebel - 2019 - Disputatio 11 (55).
    I defend a how-possibly argument for Kantian (or Kant*-ian) transcendental idealism, drawing on concepts from David Chalmers, Nick Bostrom, and the cyberpunk subgenre of science fiction. If we are artificial intelligences living in a virtual reality instantiated on a giant computer, then the fundamental structure of reality might be very different than we suppose. Indeed, since computation does not require spatial properties, spatiality might not be a feature of things as they are in themselves but instead only the way that (...)
  • The Ethics of Creating Artificial Consciousness. John Basl - 2013 - APA Newsletter on Philosophy and Computers 13 (1):23-29.
  • Transhumanist immortality: Understanding the dream as a nightmare. Pablo García-Barranquero - 2021 - Scientia et Fides 9 (1):177-196.
    This paper offers new arguments to reject the alleged dream of immortality. In order to do this, I firstly introduce an amendment to Michael Hauskeller’s approach of the “immortalist fallacy”. I argue that the conclusion “we do not want to live forever” does not follow from the premise “we do not want to die”. Next, I propose the philosophical turn from “normally” to “under these circumstances” to resolve this logical error. Then, I review strong philosophical critiques of this transhumanist purpose (...)
  • Existentialist risk and value misalignment. Ariela Tubert & Justin Tiehen - forthcoming - Philosophical Studies.
    We argue that two long-term goals of AI research stand in tension with one another. The first involves creating AI that is safe, where this is understood as solving the problem of value alignment. The second involves creating artificial general intelligence, meaning AI that operates at or beyond human capacity across all or many intellectual domains. Our argument focuses on the human capacity to make what we call “existential choices”, choices that transform who we are as persons, including transforming what (...)
  • Racing to the precipice: a model of artificial intelligence development. Stuart Armstrong, Nick Bostrom & Carl Shulman - 2016 - AI and Society 31 (2):201-206.
  • Thinking Inside the Box: Controlling and Using an Oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  • The race for an artificial general intelligence: implications for public policy. Wim Naudé & Nicola Dimitri - 2020 - AI and Society 35 (2):367-379.
    An arms race for an artificial general intelligence would be detrimental for and even pose an existential threat to humanity if it results in an unfriendly AGI. In this paper, an all-pay contest model is developed to derive implications for public policy to avoid such an outcome. It is established that, in a winner-takes-all race, where players must invest in R&D, only the most competitive teams will participate. Thus, given the difficulty of AGI, the number of competing teams is unlikely (...)
  • Superintelligence as superethical. Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not (...)
  • Echoes of myth and magic in the language of Artificial Intelligence. Roberto Musa Giuliano - 2020 - AI and Society 35 (4):1009-1024.
    To a greater extent than in other technical domains, research and progress in Artificial Intelligence has always been entwined with the fictional. Its language echoes strongly with other forms of cultural narratives, such as fairytales, myth and religion. In this essay we present varied examples that illustrate how these analogies have guided not only readings of the AI enterprise by commentators outside the community but also inspired AI researchers themselves. Owing to their influence, we pay particular attention to the similarities (...)
  • Language Agents and Malevolent Design. Inchul Yum - 2024 - Philosophy and Technology 37 (104):1-19.
    Language agents are AI systems capable of understanding and responding to natural language, potentially facilitating the process of encoding human goals into AI systems. However, this paper argues that if language agents can achieve easy alignment, they also increase the risk of malevolent agents building harmful AI systems aligned with destructive intentions. The paper contends that if training AI becomes sufficiently easy or is perceived as such, it enables malicious actors, including rogue states, terrorists, and criminal organizations, to create powerful (...)
  • Energy Requirements Undermine Substrate Independence and Mind-Body Functionalism. Paul Thagard - 2022 - Philosophy of Science 89 (1):70-88.
    Substrate independence and mind-body functionalism claim that thinking does not depend on any particular kind of physical implementation. But real-world information processing depends on energy, and energy depends on material substrates. Biological evidence for these claims comes from ecology and neuroscience, while computational evidence comes from neuromorphic computing and deep learning. Attention to energy requirements undermines the use of substrate independence to support claims about the feasibility of artificial intelligence, the moral standing of robots, the possibility that we may be (...)
  • The problem of superintelligence: political, not technological. Wolfhart Totschnig - 2019 - AI and Society 34 (4):907-920.
    The thinkers who have reflected on the problem of a coming superintelligence have generally seen the issue as a technological problem, a problem of how to control what the superintelligence will do. I argue that this approach is probably mistaken because it is based on questionable assumptions about the behavior of intelligent agents and, moreover, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem posed by (...)
  • Aggregation in an infinite, relativistic universe. Hayden Wilkinson - forthcoming - Erkenntnis:1-29.
    Aggregative moral theories face a series of devastating problems when we apply them in a physically realistic setting. According to current physics, our universe is likely _infinitely large_, and will contain infinitely many morally valuable events. But standard aggregative theories are ill-equipped to compare outcomes containing infinite total value so, applied in a realistic setting, they cannot compare any outcomes a real-world agent must ever choose between. This problem has been discussed extensively, and non-standard aggregative theories proposed to overcome it. (...)
  • The philosophy of computer science. Raymond Turner - 2013 - Stanford Encyclopedia of Philosophy.
  • Mind uploading: a philosophical counter-analysis. Massimo Pigliucci - 2014 - In Russell Blackford & Damien Broderick, Intelligence Unbound: The Future of Uploaded and Machine Minds. Wiley-Blackwell. pp. 119-130.
    A counter analysis of David Chalmers' claims about the possibility of mind uploading within the context of the Singularity event.
  • Uploads, Faxes, and You: Can Personal Identity Be Transmitted? Jonah Goldwater - 2021 - American Philosophical Quarterly 58 (3):233–250.
    Could a person or mind be uploaded—transmitted to a computer or network—and thereby survive bodily death? I argue ‘mind uploading’ is possible only if a mind is an abstract object rather than a concrete particular. Two implications are notable. One, if someone can be uploaded someone can be multiply-instantiated, such that there could be as many instances of a person as copies of a book. Second, mind uploading’s possibility is incompatible with the leading theories of personal identity, insofar as (...)
  • The story of humanity and the challenge of posthumanity. Zoltán Boldizsár Simon - 2019 - History of the Human Sciences 32 (2).
    Today’s technological-scientific prospect of posthumanity simultaneously evokes and defies historical understanding. On the one hand, it implies a historical claim of an epochal transformation concerning posthumanity as a new era. On the other, by postulating the birth of a novel, better-than-human subject for this new era, it eliminates the human subject of modern Western historical understanding. In this article, I attempt to understand posthumanity as measured against the story of humanity as the story of history itself. I examine the fate (...)
  • Advantages of artificial intelligences, uploads, and digital minds. Kaj Sotala - 2012 - International Journal of Machine Consciousness 4 (01):275-291.
    I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. Hardware advantages include greater serial speeds and greater parallel speeds. Self-improvement advantages include improvement of algorithms, design of new mental modules, and modification of motivational system. Co-operative advantages include copyability, perfect co-operation, improved communication, and transfer of skills. Human handicaps include computational limitations and faulty heuristics, human-centric biases, and socially motivated cognition. The shape of hardware (...)
  • Could You Merge With AI? Reflections on the Singularity and Radical Brain Enhancement. Cody Turner & Susan Schneider - 2020 - In Markus Dirk Dubber, Frank Pasquale & Sunit Das, The Oxford Handbook of Ethics of AI. Oxford Handbooks. pp. 307-325.
    This chapter focuses on AI-based cognitive and perceptual enhancements. AI-based brain enhancements are already under development, and they may become commonplace over the next 30–50 years. We raise doubts concerning whether the radical AI-based enhancements transhumanists advocate will accomplish the transhumanist goals of longevity, human flourishing, and intelligence enhancement. We urge that even if the technologies are medically safe and are not used as tools by surveillance capitalism or an authoritarian dictatorship, these enhancements may still fail to do their job for (...)
  • The new AI spring: a deflationary view. Jocelyn Maclure - 2020 - AI and Society 35 (3):747-750.
  • How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - 2021 - In Carissa Véliz, The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
  • Uploading and Branching Identity. Michael A. Cerullo - 2015 - Minds and Machines 25 (1):17-36.
    If a brain is uploaded into a computer, will consciousness continue in digital form or will it end forever when the brain is destroyed? Philosophers have long debated such dilemmas and classify them as questions about personal identity. There are currently three main theories of personal identity: biological, psychological, and closest continuer theories. None of these theories can successfully address the questions posed by the possibility of uploading. I will argue that uploading requires us to adopt a new theory of (...)
  • AI-Completeness: Using Deep Learning to Eliminate the Human Factor. Kristina Šekrst - 2020 - In Sandro Skansi, Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
  • The Multiplicity Objection against Uploading Optimism. Clas Weber - forthcoming - Synthese.
    Could we transfer you from your biological substrate to an electronic hardware by simulating your brain on a computer? The answer to this question divides optimists and pessimists about mind uploading. Optimists believe that you can genuinely survive the transition; pessimists think that surviving mind uploading is impossible. An influential argument against uploading optimism is the multiplicity objection. In a nutshell, the objection is as follows: If uploading optimism were true, it should be possible to create not only one, but (...)
  • AAAI: an Argument Against Artificial Intelligence. Sander Beckers - 2017 - In Vincent C. Müller, Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that (...)
  • What’s Wrong with Designing People to Serve? Bartek Chomanski - 2019 - Ethical Theory and Moral Practice 22 (4):993-1015.
    In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, (...)
  • The psychopathology of metaphysics. Alexandre Billon - 2024 - Metaphilosophy 1 (01):1-28.
    According to a common philosophical intuition, the deep nature of things is hidden from us, and the world as we know it through perception and science is somehow shallow and lacking in reality. For all we know, the intuition goes, we could be living in a cave facing shadows, in a dream, or even in a computer simulation. This “intuition of unreality” clashes with a strong, but perhaps more naive, intuition to the effect that the world as we know it (...)
  • Safety Engineering for Artificial General Intelligence. Roman Yampolskiy & Joshua Fox - 2012 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
  • Mind Uploading: A Philosophical Counter‐Analysis. Massimo Pigliucci - 2014 - In Russell Blackford & Damien Broderick, Intelligence Unbound. Wiley. pp. 119–130.
    This chapter sets aside the question of whether a Singularity will occur, to focus on the closely related issue of MU, specifically as presented by one of its most articulate proponents, David Chalmers. The fundamental premise of Chalmers' arguments about MU is some strong version of the Computational Theory of Mind (CTM). The chapter proceeds in the following fashion: first, it recalls Chalmers' main arguments; second, it argues that the ideas of MU and CTM do not take seriously enough the (...)
  • A Case for Machine Ethics in Modeling Human-Level Intelligent Agents. Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. (...)
  • Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott, The Routledge Social Science Handbook of AI. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  • Artificial agents and the expanding ethical circle. Steve Torrance - 2013 - AI and Society 28 (4):399-414.
    I discuss the realizability and the ethical ramifications of Machine Ethics, from a number of different perspectives: I label these the anthropocentric, infocentric, biocentric and ecocentric perspectives. Each of these approaches takes a characteristic view of the position of humanity relative to other aspects of the designed and the natural worlds—or relative to the possibilities of ‘extra-human’ extensions to the ethical community. In the course of the discussion, a number of key issues emerge concerning the relation between technology and ethics, (...)
  • Big Historical Foundations for Deep Future Speculations: Cosmic Evolution, Atechnogenesis, and Technocultural Civilization. Cadell Last - 2017 - Foundations of Science 22 (1):39-124.
    Big historians are attempting to construct a general holistic narrative of human origins enabling an approach to studying the emergence of complexity, the relation between evolutionary processes, and the modern context of human experience and actions. In this paper I attempt to explore the past and future of cosmic evolution within a big historical foundation characterized by physical, biological, and cultural eras of change. From this analysis I offer a model of the human future that includes an addition and/or reinterpretation (...)
  • The disconnection thesis. David Roden - 2012 - In Amnon H. Eden & James H. Moor, Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer.
    In his 1993 article ‘The Coming Technological Singularity: How to Survive in the Posthuman Era’ the computer scientist Vernor Vinge speculated that developments in artificial intelligence might reach a point where improvements in machine intelligence result in smart AIs producing ever-smarter AIs. According to Vinge the ‘singularity’, as he called this threshold of recursive self-improvement, would be a ‘transcendental event’ transforming life on Earth in ways that unaugmented humans are not equipped to envisage. In this paper I argue Vinge’s idea (...)
  • Metaphysical Daring as a Posthuman Survival Strategy. Pete Mandik - 2015 - Midwest Studies in Philosophy 39 (1):144-157.
    I develop an argument that believing in the survivability of a mind uploading procedure conveys value to its believers that is assessable independently of assessing the truth of the belief. Regardless of whether the first-order metaphysical belief is true, believing it conveys a kind of Darwinian fitness to the believer. Of course, a further question remains of whether having that Darwinian property can be a basis—in a rational sense of being a basis—for one’s holding the belief. I’ll also make some (...)
  • A History of First Step Fallacies. Hubert L. Dreyfus - 2012 - Minds and Machines 22 (2):87-99.
    In the 1960s, without realizing it, AI researchers were hard at work finding the features, rules, and representations needed for turning rationalist philosophy into a research program, and by so doing AI researchers condemned their enterprise to failure. About the same time, a logician, Yehoshua Bar-Hillel, pointed out that AI optimism was based on what he called the “first step fallacy”. First step thinking has the idea of a successful last step built in. Limited early success, however, is not a (...)
  • An argument for the impossibility of machine intelligence (preprint). Jobst Landgrebe & Barry Smith - 2021 - arXiv.
    Since the noun phrase ‘artificial intelligence’ (AI) was coined, it has been debated whether humans are able to create intelligence using technology. We shed new light on this question from the point of view of thermodynamics and mathematics. First, we define what it is to be an agent (device) that could be the bearer of AI. Then we show that the mainstream definitions of ‘intelligence’ proposed by Hutter and others and still accepted by the AI community are too weak even (...)
  • How Philosophy of Mind Can Shape the Future. Susan Schneider & Pete Mandik - 2017 - In Amy Kind, Philosophy of Mind in the Twentieth and Twenty-First Centuries: The History of the Philosophy of Mind, Volume 6. New York: Routledge. pp. 303-319.
  • How Competitive Can Virtuous Envy Be? Rosalind Chaplin - 2024 - APA Studies 23 (2):30-33.