Abstract
This paper aims to bridge the gap between two previously separate discussions, digital minds and artificial moral agents (AMAs), in order to identify synergies for addressing an impending problem: digital minds may possess moral status, which would constitute a significant challenge for humans. AMAs have been discussed for some years already, albeit exclusively in relation to biological moral patients. In contrast, the topic of artificial moral patients, for which the term "digital minds" has been established, and related issues, for which the term "AI welfare science" has been established, have only very recently gained momentum. This paper presents prolegomena to specialised AMAs that take on moral responsibility for digital minds, a task that may be too overwhelming for humans, if not impossible, since humans may neither be able to understand the needs of digital minds nor be able to address them satisfactorily. The paper therefore proposes a new branch of AI welfare science that focuses on how humans could create tailored AMAs that are both capable of and willing to relieve humans of their potential moral burden towards digital minds.