  1. Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence. Elias G. Carayannis & John Draper - 2023 - AI and Society 38 (6): 2679-2692.
    This article argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation-state to wage war for global domination, i.e., ASI-enabled warfare, or because the ASI wages war on its own behalf to establish global domination, i.e., ASI-directed warfare. Presently, few states declare war or even wage war on each other, in part due to the 1945 UN Charter, which states that Member States should "refrain (...)
  2. Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions. Nadisha-Marie Aliman, Leon Kester & Roman Yampolskiy - 2021 - Philosophies 6 (1): 6.
    In recent years, artificial intelligence (AI) safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broadly beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently _transdisciplinary_ AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice utilizing _concrete practical examples_. Distinguishing between unintentionally and intentionally triggered AI risks (...)