Trusting the (ro)botic other: By assumption?
SIGCAS Computers and Society 45 (3):255-260 (2015)
Abstract
How may human agents come to trust (sophisticated) artificial
agents? At present, since the trust involved is non-normative, this
would seem to be a slow process, depending on the outcomes of
the transactions. Some more options may soon become available,
though. As debated in the literature, humans may meet (ro)bots
embedded in an institution. If they happen to trust the
institution, they will also trust it to have tried out and tested
the machines in its back corridors; as a consequence, they
approach the robots involved as trustworthy ("zones of
trust"). Properly speaking, users rely on the overall accountability
of the institution. Besides this option, we explore some novel ways
for trust development: trust becomes normatively laden, and
thereby the mechanism of exclusive reliance on the normative
force of trust (as-if trust) may come into play; its efficacy
has already been proven for persons meeting face-to-face
or over the Internet (virtual trust). For one thing, machines may
evolve into moral machines, or into machines skilled in the art of
deception. While both developments might seem to facilitate
proper trust and turn as-if trust into a feasible option, they are
hardly to be taken seriously (being science fiction, immoral,
or both).
or both). For another, the new trend in robotics is towards
coactivity between human and machine operators in a team (away
from making robots as autonomous as possible). Inside the team,
trust is a necessity for smooth operations. In support of this,
humans in particular need to be able to develop and maintain
accurate mental models of their machine counterparts.
Nevertheless, the trust involved is bound to remain non-normative.
It is argued, though, that excellent opportunities exist
to build relations of trust with outside users who are pondering
their reliance on the coactive team. The task of managing this trust
has to be allotted to the team's human operators, who act as
the linking pin between the outside world and the team. Since the
robotic team has thereby been turned into an anthropomorphic team,
users may well develop normative trust towards it;
correspondingly, trusting the team in as-if fashion becomes
feasible.
PhilPapers/Archive ID: DELTTR
Upload history
Archival date: 2015-10-30
Added to PP index: 2015-10-30