Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy, as well as its abilities for directly impacting the surrounding world, long-term planning, and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and is more informative than alternatives. More speculatively, it may help to illuminate two important emerging questions in AI ethics: (1) Can agency contribute to the moral status of non-human beings, and if so, how? (2) When and why might AI systems exhibit power-seeking behaviour, and does this pose an existential risk to humanity?