Abstract
Modern AI systems based on deep learning are neither traditional tools nor full-blown
agents. Rather, they are characterised by idiosyncratic agential profiles, i.e.,
combinations of agency-relevant properties. These systems lack the superficial
features that enable people to recognise agents, yet they possess sophisticated
information-processing capabilities that can undermine human goals. I argue that
systems fitting this description, when they are adversarial with respect to human users,
pose particular risks to those users. To explicate my argument, I provide conditions
under which an agential profile is explanatorily relevant to the harms a system causes. I then contend
that the role of recommender systems in producing harmful outcomes like digital
addiction satisfies these conditions.