The Concept of Accountability in AI Ethics and Governance

In Justin Bullock, Y. C. Chen, Johannes Himmelreich, V. Hudson, M. Korinek, M. Young & B. Zhang (eds.), Oxford Handbook of AI Governance. Oxford: Oxford University Press (2022)
Calls to hold artificial intelligence to account are intensifying. Activists and researchers alike warn of an “accountability gap” or even a “crisis of accountability” in AI. Meanwhile, several prominent scholars maintain that accountability holds the key to governing AI. But usage of the term varies widely in discussions of AI ethics and governance. This chapter begins by disambiguating some different senses and dimensions of accountability, distinguishing it from neighboring concepts, and identifying sources of confusion. It proceeds to explore the idea that AI operates within an accountability gap arising from technical features of AI as well as the social context in which it is deployed. The chapter also evaluates various proposals for closing this gap. I conclude that the role of accountability in AI ethics and governance is vital but also more limited than some suggest. Accountability’s primary job description is to verify compliance with substantive normative principles—once those principles are settled. Theories of accountability cannot ultimately tell us what substantive standards to account for, especially when norms are contested or still emerging. Nonetheless, formal mechanisms of accountability provide a way of diagnosing and discouraging egregious wrongdoing even in the absence of normative agreement. Providing accounts can also be an important first step toward the development of more comprehensive regulatory standards for AI.
