Can Artificial Entities Assert?

In Sanford C. Goldberg (ed.), The Oxford Handbook of Assertion. Oxford University Press. pp. 415-436 (2018)

Abstract

There is an ongoing debate over whether technological instruments, devices, or machines can assert or testify. A standard view in epistemology is that only humans can testify. However, the notion of quasi-testimony acknowledges that technological devices can assert or testify under some conditions, without denying that humans and machines differ. Indeed, there are four relevant differences between humans and instruments. First, unlike human assertion, machine assertion is not imaginative or playful. Second, machine assertion is pre-scripted and context-restricted. As such, computers currently cannot easily switch contexts or make meaningful, relevant assertions in contexts for which they were not programmed. Third, while both humans and computers make errors, they do so in different ways. Computers are highly sensitive to small errors in input, which may cause large errors in output. Moreover, automatic error control is based on finding irregularities in data without trying to establish whether they make sense. Fourth, testimony is produced by a human with moral worth, while quasi-testimony is not. Ultimately, the notion of quasi-testimony can serve as a bridge between different philosophical fields that deal with instruments and testimony as sources of knowledge, allowing them to converse and agree on a shared description of reality, while maintaining their distinct conceptions and ontological commitments about knowledge, humans, and nonhumans.

Author Profiles

Boaz Miller
Zefat Academic College
Ori Freiman
McMaster University
