Abstract
In AI safety research, the median timing of AGI arrival is often taken as a reference point; various polls place it around the middle of the 21st century. For maximum safety, however, we should determine the earliest plausible time of Dangerous AI arrival. Such Dangerous AI could be AGI capable of acting completely autonomously in the real world and of winning most real-world conflicts with humans, an AI helping humans to build weapons of mass destruction, or a nation state coupled with an AI-based governance system. In this article, I demonstrate that the earliest timing of Dangerous AI, corresponding to a 10 per cent probability of its arrival, is before 2030. Several partly independent sources of information point to this conclusion:
1. The growth of the hardware available for AI research will make human-brain-equivalents of compute available to AI researchers in the 2020s. This growth is fuelled by specialized AI chips, the combination of many chips into a single processing unit, and larger research budgets, among other factors (an illustrative compute extrapolation is sketched after this list).
2. Neural network performance and other characteristics, such as the number of parameters, are increasing rapidly every year, and extrapolating this trend suggests roughly human-level performance within a few years, around 2025.
3. Expert polls assign around a 10 per cent probability to the appearance of artificial general intelligence (AGI) within the next decade, that is, before 2030.
4. Hyperbolic growth trends in different big-history models converge on a singularity around 2025-2030 (the technological singularity); a generic form of such a growth law is given after this list.
5. Anthropic arguments (similar to the Doomsday argument) suggest that qualified observers are more likely to find themselves near the end of the AI research epoch, because the number of such observers has grown exponentially, doubling every 5-10 years. We are therefore likely to be within roughly a decade of the end of AI research, which would consequently occur around 2030 (a worked version of this calculation follows the list).
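As an illustration of argument 1, the following minimal sketch extrapolates an exponential training-compute trend until it crosses a hypothetical human-brain-equivalent threshold. The starting compute, the doubling time, and the threshold are assumed values chosen only to show the shape of the calculation; they are not data reported in this article.

```python
# Illustrative extrapolation of training-compute growth (argument 1).
# All numeric values are assumptions for this sketch, not data from the article.
import math

start_year = 2020
start_flop = 1e23           # assumed compute of a large training run in 2020 (FLOP)
doubling_time_years = 1.0   # assumed doubling time of the largest training runs
brain_equiv_flop = 1e25     # assumed "human-brain-equivalent" training-run threshold (FLOP)

# Solve start_flop * 2**(t / doubling_time_years) = brain_equiv_flop for t.
years_needed = doubling_time_years * math.log2(brain_equiv_flop / start_flop)
print(f"Threshold crossed around {start_year + years_needed:.0f}")
# With these assumed values the crossing happens around 2027, i.e. within the 2020s.
```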
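For argument 4, a common form of the hyperbolic growth law used in big-history models is shown below; the derivation is generic, and the 2025-2030 range is the convergence point cited above, not a new fit.

\[
\frac{dx}{dt} = k\,x^2 \quad\Longrightarrow\quad x(t) = \frac{1}{k\,(t_0 - t)}, \qquad t_0 = t_{\text{now}} + \frac{1}{k\,x(t_{\text{now}})},
\]

so the trajectory diverges at the finite time \(t_0\); big-history fits of population, economic, or technological growth place \(t_0\) around 2025-2030.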
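The arithmetic behind argument 5 can be made explicit. If the cumulative number of qualified observers grows exponentially with doubling time \(T\), then

\[
N(t) = N_0\,2^{t/T} \quad\Longrightarrow\quad \frac{N(t_{\text{end}}) - N(t_{\text{end}} - T)}{N(t_{\text{end}})} = 1 - \tfrac{1}{2} = \tfrac{1}{2},
\]

so half of all observers in the epoch appear during its final doubling period. Treating ourselves as typical observers therefore suggests that the end of the AI research epoch lies within about one doubling time from now, i.e. with \(T \approx 10\) years, around 2030.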