Abstract
Concerns about epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used in reference either to the computational elements of an AI model or to the models to which those elements pertain. Second, opacity and transparency might be understood to refer either to properties of AI systems or to the epistemic situation of human agents with respect to these systems. While these diagnoses are discussed independently in the literature, juxtaposing them and exploring their possible interrelations helps bring into view the relevant distinctions between conceptions of opacity and their empirical bearing. In pursuit of this aim, two pertinent conditions affecting computer models in general, and contemporary AI in particular, are outlined and discussed: opacity as a problem of computational tractability and opacity as a problem of the universality of the computational method.