Abstract
In this chapter, I introduce three types of AI-based moral enhancement proposal discussed in the literature: substitutive, value-driven, and value-open moral enhancement. I analyse them against three criteria: effectiveness, whether they bring about tangible moral change; autonomy, whether they infringe on human autonomy and agency; and developmental impact, whether they hinder the development of natural moral skills. The analysis shows that no single approach to AI-based moral enhancement satisfies all of these criteria, suggesting the need for pluralism in devising AI-based enhancement tools. The most advisable approach is therefore to explore and develop a variety of tools that cater to the diverse needs of users and address different weaknesses in moral decision-making.