The Turing Test has a problem - and OpenAI's GPT-4.5 just exposed it
- OpenAI's GPT-4.5 model was identified as human 73 percent of the time in a Turing Test study, well above the 50 percent expected by random chance.
- The Turing Test, created by Alan Turing, measures machines' abilities to exhibit human-like intelligence in conversations.
- Nearly 300 participants in the study were split into interrogators and witnesses, with one witness being a chatbot.
- Cameron Jones, the lead author, warned that these advancements could lead to job automation and societal disruption.
22 Articles
New research indicates this AI model outperforms humans in Turing Test evaluation
A recent study, currently awaiting peer review, suggests that OpenAI's GPT-4.5 model was judged more human-like than actual humans, passing the Turing Test, which measures human-like conversational intelligence. According to the findings, the Large Language Model (LLM) was identified as human 73 percent of the time when instructed to adopt a persona—significantly higher than the random chance of 50 percent, indicating that the Tur…
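The 73-percent figure matters only relative to the 50-percent chance baseline. A minimal sketch, using hypothetical trial counts (the study's exact numbers are not reported here), shows how unlikely 73 "human" verdicts out of 100 would be if interrogators were simply guessing:

```python
from math import comb

# Hypothetical numbers for illustration only - not the study's actual counts.
n, k = 100, 73  # 73 "human" verdicts out of 100 conversations

# One-sided binomial tail: P(X >= k) if each verdict were a fair coin flip
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(X >= {k} of {n} under pure chance) = {p_value:.2e}")
```

Even at this modest assumed sample size, the probability of reaching 73 percent by chance alone is far below conventional significance thresholds, which is why the result is read as the model genuinely being mistaken for human rather than as noise.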
Coverage Details
Bias Distribution
- 33% of the sources lean Left, 33% of the sources are Center, 33% of the sources lean Right