Facing Janus: An Explanation of the Motivations and Dangers of AI Development


This paper serves as an intuition-building mechanism for understanding the basics of AI, misalignment, and the reasons strong AI is being pursued. The approach is to engage with both pro- and anti-AI-development arguments to gain a deeper understanding of each view, and hopefully of the issue as a whole. We investigate the basics of misalignment, common misconceptions, and the arguments for pursuing strong AI anyway. The paper delves into various aspects of the problem, including existential risk, deception, hedonic adaptation, and the potential for the complete extinction of humanity. By integrating multiple elements of philosophy, this paper aims to provide a holistic understanding of the alignment problem and its significance for the future of humanity.

Author's Profile

Aaron Graifman
Arizona State University

