Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence

Abstract

Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), projecting human conscious-level notions into the operations of computers creates confusion and makes it harder to identify the nature and location of that threshold. There is confusion, in particular, about how—and even whether—various capabilities deemed intelligent relate to human consciousness. This suggests that insufficient thought has been given to very fundamental concepts—a dangerous state of affairs, given the intrinsic power of the technology. It also suggests that research in the area of artificial general intelligence may unwittingly be (mis)guided by unconscious motivations and assumptions. While it might be inconsequential if philosophers get it wrong (or fail to agree on what is right), it could be devastating if AI developers, corporations, and governments follow suit. It therefore seems worthwhile to try to clarify some fundamental notions.

Added to PP
2018-06-23