Abstract
Elon Musk famously predicted that an artificial intelligence superior to the smartest individual human would arrive by the year 2025. In response, Gary Marcus offered Musk a $1 million bet that he would be proved wrong. In specifying the conditions of this bet (which Musk did not take), Marcus listed the following ‘tasks that ordinary people can perform’ which, he claimed, AI would not be able to perform by the end of 2025:
• Reliably drive a car in a novel location that they haven’t previously encountered, even in the face of unusual circumstances like hand-lettered signs, without the assistance of other humans.
• Learn to ride a mountain bike off-road through forest trails.
• Babysit children in an unfamiliar home and keep them safe.
• Tend to the physical and psychological needs of an elderly or infirm person.
Each of these tasks involves the application of practical or tacit knowledge, or what is also called ‘knowing how’: knowledge of a sort that is captured not in sentences, propositions, or explicit rules, but rather in the expertise demonstrated in human action. The talk addresses the puzzle that nowhere in the AI literature do we find a discussion of knowing how as a feature of human intelligence.