Abstract
While much has been written about whether AI systems could function as moral agents or acquire sentience, comparatively little attention has been paid to whether AI systems could have free will. In this article, I sketch a framework for thinking about this question. I argue that, to determine whether an AI system has free will, we should not look for some mysterious property, expect its underlying algorithms to be indeterministic, or ask whether the system is unpredictable. Rather, we should simply ask whether it is explanatorily indispensable to view the system as an intentional agent, with the capacity for choice between alternative possibilities and control over the resulting actions. If the answer is “yes”, then the system counts as having free will in a pragmatic and diagnostically useful sense.