Abstract
There has been considerable optimistic speculation about how well ChatGPT-4 would perform in a Turing Test. However, no minimally serious implementation of the test has been reported. This brief note documents the results of subjecting ChatGPT-4 to 10 Turing Tests, with different interrogators and participants. The outcome is tremendously disappointing for the optimists. Despite ChatGPT-4 reportedly outperforming 99.9% of humans on a Verbal IQ test, it falls short of passing the Turing Test. In 9 of the 10 tests conducted, the interrogators correctly identified ChatGPT-4 and the human participant. The probability of obtaining this result from a process in which the interrogator is really no better than chance at correct identification is calculated to be less than 1%. An additional question was posed to the interrogators at the end of each test: what led them to distinguish the human from the machine? The interrogators, who effectively prevented ChatGPT-4 from passing the Turing Test for intelligence, stated that they could identify the machine because it, in effect, responded more intelligently than the human. Subsequently, ChatGPT-4 was tasked with differentiating syntax from semantics and self-corrected when it fell for the fallacy of equivocation. This leads to the curious situation in which passing the Turing Test for intelligence remains a challenge that ChatGPT-4 has yet to overcome precisely because, according to the interrogators, its intellectual abilities surpass those of individual humans.