Abstract
One of the strands of the Transhumanist movement, Singularitarianism, studies the possibility that high-level artificial intelligence may be created in the future, debating ways to ensure that the interaction between human society and advanced artificial intelligence occurs safely and beneficially. But how can we guarantee this safe interaction? Are there any indications that a Singularity may be on the horizon? In trying to answer these questions, we offer a brief introduction to the field of AI safety research. We review some of the current paradigms in the development of autonomous intelligent systems, along with evidence that may help us anticipate the arrival of a possible technological Singularity. Finally, we present a reflection on the COVID-19 pandemic, an event that showed that our biggest problem in managing existential risks is our lack of coordination as a global society.