

Have we passed the Turing Test, and should we really be trying?

The 70th anniversary of Turing’s death invites us to ponder: can we imagine AI models that will do well on the Turing test?

Published August 22, 2024

By Nitin Verma, PhD
AI & Society Fellow

Alan Turing (1912-1954) in 1936 at Princeton University.
Image courtesy of Wikimedia Commons.

Alan Turing is perhaps best remembered as the cryptography genius who led the British effort to break the German Enigma codes during WWII. His work yielded crucial intelligence about German troop movements and helped bring the war to an end.

2024 is a noteworthy year in the story of Turing’s life: June 7 marked 70 years since his tragic death in 1954. But four years before that, in 1950, he kickstarted a revolution in digital computing by posing the question “can machines think?” and proposing an “imitation game” to answer it.

While this quest has been a holy grail for theoretical computer scientists since the publication of Turing’s 1950 paper, the public launch of ChatGPT in November 2022 brought the question to center stage in the global conversation.

In his landmark 1950 paper, Turing predicted: “[by about the year 2000] it will be possible to programme computers… [that] play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning.” (p. 442). By “right identification”, Turing meant accurately distinguishing between human-generated and computer-generated text responses.

This “imitation game” eventually came to be known as the Turing test of machine intelligence. It is designed to determine whether a computer can successfully imitate a human to the point that a human interacting with it would be unable to tell the difference.
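To make the setup concrete, here is a minimal sketch of the game’s three-party structure in Python. The names respond_human, respond_machine, and naive_guess are hypothetical placeholders of my own, not a standard benchmark implementation; the sketch only outlines the protocol Turing described.

```python
# A minimal, illustrative sketch of the imitation game's structure.
# respond_human, respond_machine, and naive_guess are hypothetical
# placeholders, not part of any standard benchmark.
import random

def respond_human(question: str) -> str:
    # Placeholder: in a real test, a hidden human would answer here.
    return f"(a human's answer to: {question!r})"

def respond_machine(question: str) -> str:
    # Placeholder: in a real test, this would query the machine under test.
    return f"(a machine's answer to: {question!r})"

def imitation_game(questions, interrogator_guess) -> bool:
    """Play one round: the interrogator questions two hidden players,
    A and B, then names the one it believes is the machine.
    Returns True on a "right identification" in Turing's sense."""
    machine_seat = random.choice(["A", "B"])  # hide the machine at random
    responders = {
        "A": respond_machine if machine_seat == "A" else respond_human,
        "B": respond_machine if machine_seat == "B" else respond_human,
    }
    transcript = [
        {seat: responders[seat](q) for seat in ("A", "B")} for q in questions
    ]
    return interrogator_guess(transcript) == machine_seat

if __name__ == "__main__":
    questions = ["What did you do last summer?", "Compose a short poem about rain."]
    # A naive interrogator that guesses at random: a 50% baseline.
    naive_guess = lambda transcript: random.choice(["A", "B"])
    trials = [imitation_game(questions, naive_guess) for _ in range(1000)]
    print("Right-identification rate:", sum(trials) / len(trials))
```

In Turing’s terms, a machine “does well” when the interrogator’s right-identification rate after five minutes of questioning is no better than 70 per cent.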

We’re well past the year 2000: Are we there yet?

In 2022, Google let go of Blake Lemoine, a software engineer who had publicly claimed that the company’s LaMDA (Language Model for Dialogue Applications) program had attained sentience. Since then, the closest we’ve come to seeing Turing’s prediction come true is, perhaps, GPT-4, deepfakes, and OpenAI’s “Sora” text-to-video model that can churn out highly realistic video clips from mere text prompts.

Some researchers argue that large language models (LLMs) such as GPT-4 do not yet pass the Turing test. Others have flipped the script, arguing that LLMs offer a way to assess human intelligence by positing a reverse Turing test: what do our conversational interactions with LLMs reveal about our own intelligence?

Turing himself made a noteworthy remark about the imitation game in the same 1950 paper: “… we are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well.” (Emphasis mine; p. 436).

Would Turing have imagined the current crop of generative AI models such as GPT-4 as ‘machines’ capable of “doing well” on the Turing test? I believe so, though we’re not quite there yet. As an information scientist, my view is that in 2024 AI has come closer than ever to passing the test.

If we’re not there yet, then should we strive to get there?

As with any technology ever invented, however much Turing may have been thinking only of the public good, there is always the potential for unforeseen consequences.

Technologies such as deepfake apps and conversational agents such as ChatGPT still need human creativity to be useful and usable. Even so, the advanced AI that powers them carries the potential to pass the Turing test, and that potential portends a range of consequences for society that deserve our serious attention.

Leading scholars have already warned about the capacity of “fake” information to fuel distrust in public institutions, with consequences for the judicial system and national security. The upheaval in the public imagination caused by ChatGPT even prompted US President Biden to issue an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI in the fall of 2023.

We’ll never know what Turing would have made of the recent spurt of AI advances, given his own foundational work in theoretical computer science and artificial intelligence. His untimely death at the age of 41 deprived the world of one of the greatest minds of the 20th century, and of the still more extraordinary achievements he might have gone on to make.

But it’s clear that the advances and use of AI technology have brought society to a turning point that he anticipated in his seminal works.

It remains difficult to say when—or whether—machines will truly surpass human-level intelligence. But more than 70 years after Turing’s death we are at a point where we can imagine AI agents that will do well on the Turing test. And if we can imagine it, we can someday build it too.

Passing a challenging test can be seen as a marker of progress. But would we truly rejoice in having our AI pass the Turing test, or some other benchmark of human–machine indistinguishability?


Author

Nitin Verma, PhD
AI & Society Fellow
Nitin is a Postdoctoral Research Scholar in the area of AI & Society jointly at ASU's School for the Future of Innovation in Society (SFIS) and the New York Academy of Sciences. His research focuses on the notions of trust and belief-formation and on the broad implications of generative AI for trust in public institutions and democratic processes. His overarching research interests include how information technologies and societies co-shape each other, the role of the photographic record in shaping history, and the deep connection between human curiosity and the continuing evolution of the scientific method.