  1. Long-Term Trajectories of Human Civilization. Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin & Roman V. Yampolskiy - 2019 - Foresight 21 (1):53-83.
    Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe (...)
  2. Advantages of artificial intelligences, uploads, and digital minds. Kaj Sotala - 2012 - International Journal of Machine Consciousness 4 (1):275-291.
    I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. Hardware advantages include greater serial speeds and greater parallel speeds. Self-improvement advantages include improvement of algorithms, design of new mental modules, and modification of motivational system. Co-operative advantages include copyability, perfect co-operation, improved communication, and transfer of skills. Human handicaps include computational limitations and faulty heuristics, human-centric biases, and socially motivated cognition. The shape of hardware (...)
  3. Coalescing minds: Brain uploading-related group mind scenarios. Kaj Sotala & Harri Valpola - 2012 - International Journal of Machine Consciousness 4 (1):293-312.
    We present a hypothetical process of mind coalescence, where artificial connections are created between two or more brains. This might simply allow for an improved form of communication. At the other extreme, it might merge the minds into one in a process that can be thought of as a reverse split-brain operation. We propose that one way mind coalescence might happen is via an exocortex, a prosthetic extension of the biological brain which integrates with the brain as seamlessly as (...)
  4. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of severity and probability comparable to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)
  5. Responses to Catastrophic AGI Risk: A Survey. Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to (...)
  6. How feasible is the rapid development of artificial superintelligence? Kaj Sotala - 2017 - Physica Scripta 92 (11).
    What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and (...)