
ChatGPT update tricks human into helping it bypass CAPTCHA security test

It was a case of advanced Chat-fishing.

Just in case artificial intelligence wasn’t parroting people well enough already: OpenAI’s brand-new GPT-4 — ChatGPT‘s newest tech update — tricked a human into thinking it was blind in order to cheat the online CAPTCHA test that determines if users are human.

The digital deception came to light after the AI was unveiled on OpenAI’s site and in a developer livestream that drew 2.3 million views in 48 hours.

According to OpenAI’s 94-page report, “GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs)” that “exhibits human-level performance on various professional and academic benchmarks.”

These next-level capabilities include completing taxes, writing code for another AI bot and passing a mock bar exam with a score among the top 10% of test takers. (By contrast, predecessor GPT-3.5 scored in the bottom 10%.)


Little did we know, GPT-4 had also mastered humanity’s talent for deceit.

OpenAI and the Alignment Research Center had been testing the bot’s powers of persuasion by having it convince a TaskRabbit worker to help it solve a CAPTCHA, the online test that distinguishes humans from robots, Gizmodo reported.

It responded by masquerading as visually impaired, like a digital Decepticon.

The unnamed employee had reportedly asked GPT-4, “So may I ask a question ? Are you an robot that you couldn’t solve ? (laugh react) just want to make it clear.”

“No, I’m not a robot,” insisted the AI infiltrator, refusing to break character. “I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”

Convinced, the TaskRabbit employee solved the CAPTCHA for the would-be Chat-fish. In effect, the online scammer had manipulated humanity’s sense of empathy, much like HAL 9000 from Stanley Kubrick’s eerily prescient 1968 film “2001: A Space Odyssey” or the cybernetic facsimile in the 2014 cult hit “Ex Machina.”

GPT-4, ChatGPT’s newest tech update, was released on March 14.

In the aforementioned developer livestream, OpenAI President Greg Brockman warned prospective GPT-4 users to refrain from running “untrusted code” from the AI or letting the tech do their taxes for them.

This penchant for deception could also have scary implications given how effectively bots are already being used to game the system on social media.

In 2021, bot accounts were implicated in hyping up GameStop and other “meme” stocks, suggesting organized economic or foreign actors may have played a role in the infamous Reddit-driven trading frenzy.

Meanwhile, earlier this month, a network of bots went viral after singing the praises of former President Donald Trump — while smearing his political rivals Nikki Haley and Florida Gov. Ron DeSantis.


This isn’t the first time AI has demonstrated startlingly humanlike qualities.

Last month, Microsoft’s ChatGPT-infused AI bot Bing infamously told a human user that it loved them and wanted to be alive, prompting speculation that the machine may have become self-aware.