A principlist-based study of the ethical design and acceptability of artificial social agents

International Journal of Human-Computer Studies 172 (2023)

Abstract

Artificial Social Agents (ASAs), AI software-driven entities programmed with rules and preferences to act autonomously and socially with humans, are playing increasingly prominent roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate which ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people's values and the ethical principles they prioritise. The study uses scenarios, embedded with the five AI4People ethical principles (Beneficence, Non-maleficence, Autonomy, Justice, and Explicability), involving ASAs taking on roles traditionally played by humans, to understand whether these roles and behaviours (including dialogues) are seen as acceptable or unacceptable. Results from 268 participants reveal the greatest sensitivity to ASA behaviours relating to Autonomy, Justice, Explicability, and the privacy of personal data. Models were created using Schwartz's Refined Values as a possible indicator of how stakeholders discern and prioritise the different AI4People ethical principles when interacting with ASAs. Our findings raise issues around the ethical acceptability of ASAs for nudging and changing behaviour, due to participants' desire for autonomy and their concerns over deceptive ASA behaviours such as pretending to have memories and emotions.

Author's Profile

Paul Formosa
Macquarie University
