Abstract
This paper aims, first, to argue against using opaque AI technologies in decision-making processes and, second, to suggest that we need a qualitative form of understanding of them. It first argues that opaque artificially intelligent technologies are suitable only for users who remain indifferent to understanding the decisions these technologies produce. From the standpoint of virtue ethics, this implies that such technologies are ill-suited for those who care about realizing their moral capacity. The paper then draws on discussions of scientific understanding to suggest that an AI technology becomes understandable to its users when they are provided with a qualitative account of the consequences of using it. It follows that explainable AI methods can render an AI technology understandable to its users by presenting the qualitative implications that employing it has for their lives.