Abstract
This paper serves as an intuition-building introduction to the basics of AI, misalignment, and the reasons strong AI is being pursued. The approach is to engage with arguments both for and against AI development in order to gain a deeper understanding of each position, and of the issue as a whole. We examine the basics of misalignment, common misconceptions, and the arguments for pursuing strong AI despite the risks. The paper then turns to several aspects of the problem, including existential risk, deception, hedonic adaptation, and the potential extinction of humanity. By drawing on multiple strands of philosophy, the paper aims to provide a holistic understanding of the alignment problem and its significance for the future of humanity.