Social Robotics as Moral Education? Fighting Discrimination Through the Design of Social Robots

In Pekka Mäkelä, Raul Hakli & Joanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy’22. Amsterdam: IOS Press. pp. 184-193 (2022)

Abstract

Recent research in the field of social robotics has shed light on the considerable role played by biases in the design of social robots. Cues that trigger widespread biased expectations are implemented in the design of social robots to increase their familiarity and boost interaction quality. Ethical discussion has focused on whether it is permissible to leverage social biases to meet the design goals of social robotics. As a result, integrating ethically problematic social biases, such as discriminatory gender stereotypes, into the design of robots has been opposed as morally unacceptable. Building on this debate, the present paper explores a related but different question: would it be permissible to design social robots in ways that intentionally challenge widespread discriminatory social biases, thus fostering moral education? The analysis shows that, while the potential benefits of such a design strategy could be significant, its practical endorsement raises important ethical issues. Hence, caution and further discussion are advised.

Author's Profile

Fabio Fossa
Politecnico di Milano
