Abstract
This paper examines how metaphors shape our thinking about and conceptualization of artificial intelligence (AI), noting that their inherent imprecision leads to discrepancies in our understanding of and objectives for AI. By exploring "bad metaphors" that equate artificial intelligence with human intelligence, the paper argues that these metaphors often carry additional, unintended meanings that distort our understanding and expectations of AI. The terms "artificial" and "intelligence" are themselves ambiguous and ideologically loaded, adding to this complexity. The paper critiques anthropocentric and mechanistic metaphors, such as "AI as human" and "the brain as a computer," which perpetuate unrealistic expectations. By deconstructing these metaphors within broader cultural and historical contexts, the paper calls for a more nuanced and precise understanding of AI, one that moves beyond simplistic and potentially misleading analogies.