Abstract
We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism which could be used to generate sufficiently accurate as well as graspable rationalizing explanations for CI behavior.