Abstract
Artificial Social Agents (ASAs), AI-driven software entities programmed with rules and preferences to act autonomously and socially with humans, are playing increasingly prominent roles in society. As their sophistication grows, humans will share more personal information, thoughts, and feelings with ASAs, with significant ethical implications. We conducted a study to investigate the relative importance of ethical principles when people engage with ASAs, and whether there is a relationship between people’s values and the ethical principles they prioritise. The study uses scenarios embedded with the five AI4People ethical principles (Beneficence, Non-maleficence, Autonomy, Justice, and Explicability), in which ASAs take on roles traditionally played by humans, to understand whether these roles and behaviours (including dialogues) are seen as acceptable or unacceptable. Results from 268 participants reveal the greatest sensitivity to ASA behaviours relating to Autonomy, Justice, Explicability, and the privacy of personal data. We created models using Schwartz’s Refined Values as a possible indicator of how stakeholders discern and prioritise the different AI4People ethical principles when interacting with ASAs. Our findings raise issues around the ethical acceptability of using ASAs for nudging and behaviour change, given participants’ desire for autonomy and their concerns over deceptive ASA behaviours such as pretending to have memories and emotions.