  • Global Catastrophic Risk and the Drivers of Scientist Attitudes Towards Policy. Christopher Nathan & Keith Hyams - 2022 - Science and Engineering Ethics 28 (6):1-18.
    An anthropogenic global catastrophic risk is a human-induced risk that threatens sustained and wide-scale loss of life and damage to civilisation across the globe. In order to understand how new research on governance mechanisms for emerging technologies might assuage such risks, it is important to ask how perceptions, beliefs, and attitudes towards the governance of global catastrophic risk within the research community shape the conduct of potentially risky research. The aim of this study is to deepen our understanding of emerging (...)
  • Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence. Elias G. Carayannis & John Draper - 2023 - AI and Society 38 (6):2679-2692.
    This article argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation–state to war for global domination, i.e., ASI-enabled warfare, or because the ASI wars on behalf of itself to establish global domination, i.e., ASI-directed warfare. Presently, few states declare war or even war on each other, in part due to the 1945 UN Charter, which states Member States should “refrain (...)
  • Racing to the precipice: a model of artificial intelligence development. Stuart Armstrong, Nick Bostrom & Carl Shulman - 2016 - AI and Society 31 (2):201-206.
  • How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role. Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
  • Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
  • Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History. Phil Torres - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press.
    This chapter argues that dual-use emerging technologies are distributing unprecedented offensive capabilities to nonstate actors. To counteract this trend, some scholars have proposed that states become a little “less liberal” by implementing large-scale surveillance policies to monitor the actions of citizens. This is problematic, though, because the distribution of offensive capabilities is also undermining states’ capacity to enforce the rule of law. I will suggest that the only plausible escape from this conundrum, at least from our present vantage point, is (...)