Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence

Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience and applied to a theoretical notion of "mind-in-general" based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), projecting human conscious-level notions onto the operations of computers creates confusion and makes it harder to identify the nature and location of that threshold. There is confusion, in particular, about how—and even whether—various capabilities deemed intelligent relate to human consciousness. This suggests that insufficient thought has been given to very fundamental concepts—a dangerous state of affairs, given the intrinsic power of the technology. It also suggests that research in the area of artificial general intelligence may unwittingly be (mis)guided by unconscious motivations and assumptions. While it might be inconsequential if philosophers get it wrong (or fail to agree on what is right), it could be devastating if AI developers, corporations, and governments follow suit. It therefore seems worthwhile to try to clarify some fundamental notions.
Archival date: 2018-06-23