Authenticity and co-design: On responsibly creating relational robots for children

In Mizuko Ito, Remy Cross, Karthik Dinakar & Candice Odgers (eds.), Algorithmic Rights and Protections for Children. MIT Press. pp. 85-121 (2023)

Abstract

Meet Tega. Blue, fluffy, and AI-enabled, Tega is a relational robot: a robot designed to form relationships with humans. Created to aid in early childhood education, Tega talks with children, plays educational games with them, solves puzzles, and helps in creative activities like making up stories and drawing. Children are drawn to Tega, describing him as a friend, and attributing thoughts and feelings to him ("he's kind," "if you just left him here and nobody came to play with him, he might be sad"). Scholars and members of the public alike have raised the alarm about relational robots, worrying that the relationships that people, and especially children, form with such robots are objectionably inauthentic. We, members of an interdisciplinary team that developed and studies Tega, make this inauthenticity worry precise and offer practical recommendations for addressing it. We distinguish two kinds of (in)authenticity -- inauthenticity as unreality, and inauthenticity as deception -- arguing that neither is just what others have thought it is. With our distinction in hand, we argue that the authenticity concern can be met only through co-design methods -- methods that give stakeholders of all kinds (e.g. parents, educators, and children themselves) a genuine say in how relational robots are built.

Author Profiles

Milo Phillips-Brown
University of Edinburgh
Marion Boulicault
Massachusetts Institute of Technology
