References
  • The problem of AI identity. Soenke Ziesche & Roman V. Yampolskiy - manuscript
    The problem of personal identity is a longstanding philosophical topic albeit without final consensus. In this article the somewhat similar problem of AI identity is discussed, which has not gained much traction yet, although this investigation is increasingly relevant for different fields, such as ownership issues, personhood of AI, AI welfare, brain–machine interfaces, the distinction between singletons and multi-agent systems as well as to potentially support finding a solution to the problem of personal identity. The AI identity problem analyses the (...)
  • Algorithms and Posthuman Governance. James Hughes - 2017 - Journal of Posthuman Studies.
    Since the Enlightenment, there have been advocates for the rationalizing efficiency of enlightened sovereigns, bureaucrats, and technocrats. Today these enthusiasms are joined by calls for replacing or augmenting government with algorithms and artificial intelligence, a process already substantially under way. Bureaucracies are in effect algorithms created by technocrats that systematize governance, and their automation simply removes bureaucrats and paper. The growth of algorithmic governance can already be seen in the automation of social services, regulatory oversight, policing, the justice system, and (...)
  • How feasible is the rapid development of artificial superintelligence? Kaj Sotala - 2017 - Physica Scripta 92 (11).
    What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and (...)
  • The Vulnerable World Hypothesis. Nick Bostrom - 2018
    Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: (...)
  • Post-postbiological evolution? Milan M. Cirkovic - unknown
    It has already become commonplace to discuss postbiological evolution in various contexts of futures studies, bioethics, cognitive sciences, philosophical anthropology, or even economics and SETI studies. The assumption is that technological/cultural evolution will soon entirely substitute for the biological processes which underlie human existence – and, by analogy, the existence of other independently evolved intelligent beings, if any. Various modes of postbiological evolution of humans have been envisioned in both fictional and discursive contexts. Little thought has been devoted so (...)
  • Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
  • Evaluating Future Nanotechnology: The Net Societal Impacts of Atomically Precise Manufacturing. Steven Umbrello & Seth D. Baum - 2018 - Futures 100:63-73.
    Atomically precise manufacturing (APM) is the assembly of materials with atomic precision. APM does not currently exist, and may not be feasible, but if it is feasible, then the societal impacts could be dramatic. This paper assesses the net societal impacts of APM across the full range of important APM sectors: general material wealth, environmental issues, military affairs, surveillance, artificial intelligence, and space travel. Positive effects were found for material wealth, the environment, military affairs (specifically nuclear disarmament), and space travel. (...)
  • (1 other version) Existential risks: analyzing human extinction scenarios and related hazards. Nick Bostrom - 2002 - Journal of Evolution and Technology 9 (1).
    Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from (...)
  • Thinking inside the box: Using and controlling an oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - forthcoming - Minds and Machines.
  • The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. [REVIEW] Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having (...)
  • The Future of Human Evolution. Nick Bostrom - unknown
    Evolutionary development is sometimes thought of as exhibiting an inexorable trend towards higher, more complex, and normatively worthwhile forms of life. This paper explores some dystopian scenarios where freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being that we care about. We then consider how such catastrophic outcomes could be avoided and argue that under certain conditions the only possible remedy would be a globally coordinated policy (...)
  • Artificial Intelligence and Mind-reading Machines: Towards a Future Techno-Panoptic Singularity. Aura Elena Schussler - 2020 - Postmodern Openings 11 (4):334-346.
    The present study focuses on the situation in which mind-reading machines will be connected, initially through the incorporation of weak AI, and then in conjunction to strong AI, an aspect that, ongoing, will no longer have a simple medical role, as is the case at present, but one of surveillance and monitoring of individuals—an aspect that is heading us towards a future techno-panoptic singularity. Thus, the general objective of this paper raises the problem of the ontological stability of human nature (...)
  • Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence. Elias G. Carayannis & John Draper - 2023 - AI and Society 38 (6):2679-2692.
    This article argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation-state to wage war for global domination, i.e., ASI-enabled warfare, or because the ASI wages war on behalf of itself to establish global domination, i.e., ASI-directed warfare. Presently, few states declare war or even wage war on each other, in part due to the 1945 UN Charter, which states Member States should “refrain (...)
  • Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History. Phil Torres - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press.
    This chapter argues that dual-use emerging technologies are distributing unprecedented offensive capabilities to nonstate actors. To counteract this trend, some scholars have proposed that states become a little “less liberal” by implementing large-scale surveillance policies to monitor the actions of citizens. This is problematic, though, because the distribution of offensive capabilities is also undermining states’ capacity to enforce the rule of law. I will suggest that the only plausible escape from this conundrum, at least from our present vantage point, is (...)