  1. Can We Make Wise Decisions to Modify Ourselves? Rhonda Martens - 2019 - Journal of Ethics and Emerging Technologies 29 (1):1-18.
    Much of the human enhancement literature focuses on the ethical, social, and political challenges we are likely to face in the future. I will focus instead on whether we can make decisions to modify ourselves that are known to be likely to satisfy our preferences. It seems plausible to suppose that, if a subject is deciding whether to select a reasonably safe and morally unproblematic enhancement, the decision will be an easy one. The subject will simply figure out her preferences (...)
  2. Aquatic Refuges for Surviving a Global Catastrophe. Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we discuss the prospect of using military nuclear submarines, or their derivatives, to ensure the survival of a small portion of humanity who would be able to rebuild human civilization after a large catastrophe. We show that this is a very cost-effective way to build refuges, and that viable solutions exist for various budgets and timeframes. Nuclear submarines are (...)
  3. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of comparable severity and probability to risks of extinction. Preventing them is in the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)