Abstract
The development of AI tools such as large language models and systems for speech emotion and facial expression recognition has raised new ethical concerns about AI’s impact on human relationships. While much of the debate has focused on human-AI relationships, less attention has been devoted to another class of ethical issues, which arise when AI mediates human-to-human relationships. This paper opens the debate on these issues by analyzing the case of romantic relationships, particularly those in which one partner uses AI tools, such as ChatGPT, to resolve a conflict and apologize. After reviewing a non-exhaustive set of possible explanations for the moral wrongness of using AI tools in such cases, I introduce the notion of second-person authenticity: a form of authenticity that is assessed by the other person in the relationship (e.g., a partner). I then argue that at least some actions within romantic relationships should meet a standard of authentic conduct, since the value of such actions depends on who actually performs them and not only on the quality of the outcome produced; using AI tools in such circumstances may therefore prevent agents from meeting this standard. I conclude by suggesting that the proposed theoretical framework could also apply to other AI-mediated human-to-human relationships, such as the doctor-patient relationship, and I offer some preliminary reflections on such applications.