
Citations of:

An Overview of Models of Technological Singularity

In Max More & Natasha Vita-More (eds.), The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future. Chichester, West Sussex, UK: Wiley-Blackwell. pp. 376–394 (2013)

  • GOLEMA XIV prognoza rozwoju ludzkiej cywilizacji a typologia osobliwości technologicznych.Rachel Palm - 2023 - Argument: Biannual Philosophical Journal 13 (1):75–89.
    The GOLEM XIV’s forecast for the development of the human civilisation and a typology of technological singularities: In the paper, a conceptual analysis of technological singularity is conducted and results in the concept differentiated into convergent singularity, existential singularity, and forecasting singularity, based on selected works of Ray Kurzweil, Nick Bostrom, and Vernor Vinge respectively. A comparison is made between the variants and the forecast of GOLEM XIV (a quasi-alter ego and character by Stanisław Lem) for the possible development of (...)
  • Technological singularity and transhumanism.Piero Gayozzo - 2021 - Teknokultura. Revista de Cultura Digital y Movimientos Sociales 18 (2):195-200.
    The technological innovations of the Fourth Industrial Revolution have facilitated the formulation of strategies to transcend human limitations; strategies that are widely supported by the transhumanist philosophy. The purpose of this article is to explain the relationship between ‘transhumanism’ and ‘technological singularity’, to which end the Fourth Industrial Revolution and transhumanism are also briefly covered. Subsequently, the three main models of technological singularity are evaluated and a definition of this futuristic concept is offered. Finally, the author provides a reflection on (...)
  • Responses to Catastrophic AGI Risk: A Survey.Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
    12 citations