
Scientists found over 1,000 AI bots on X stealing selfies to create fake accounts

Scientists revealed in a study last month that X, formerly known as Twitter, has a real bot problem: 1,140 artificial intelligence-powered accounts that “post machine-generated content and steal selfies to create fake personas.”

The research, conducted by a student-teacher team at the Observatory on Social Media at Indiana University, uncovered a network of fake accounts on X dubbed the “Fox8” botnet, which reportedly uses ChatGPT to generate content that “aims to promote suspicious website[s] and spread harmful content.”

The bot accounts try to convince people to invest in fake cryptocurrencies and may even steal from existing crypto wallets, researchers Kai-Cheng Yang and Filippo Menczer found.

Their posts often include hashtags such as #bitcoin, #crypto, and #web3, and frequently interact with human-run accounts like Forbes’ crypto-centered X account (@ForbesCrypto) and blockchain-centered news site Watcher Guru (@WatcherGuru), the study found.

Beyond looting crypto, Fox8 accounts “were found to distort online conversations and spread misinformation in various contexts, from elections to public health crises,” Yang and Menczer said.

A student-teacher team at the Observatory on Social Media at Indiana University found a network of 1,140 fake accounts on X that reportedly utilize ChatGPT to generate “suspicious and harmful content.” Getty Images/iStockphoto

The goal of a botnet is to spam X users with a slew of AI-generated posts. The more often the bots tweet, the more legitimate users see their posts, and the higher the probability that a human clicks on a fraudulent URL.

To appear more human, the accounts in this botnet, a network of more than 1,000 harmful spam accounts, not only nab photos from real users but also “frequently interact with each other through retweets and replies,” boast profile descriptions, and even “have 74 followers, 140 friends and 149.6 tweets on average.”

These elements suggest that “Fox8 bots are actively participating in activities on Twitter [now known as X],” making them more believable to human users.

The Fox8 profiles — most of which “were created over seven years ago, with some being created in 2023” — “commonly mention cryptocurrencies and blockchains,” Indiana University researchers found.

The study noted that botnets like Fox8 have historically been easy to spot because they posted unconvincing content in stilted, unnatural language.

However, advancements in language models — specifically ChatGPT — have made accounts within Fox8 increasingly difficult to detect by “significantly enhancing the capabilities of bots across all dimensions.”

These accounts spam human users with AI-generated posts in an effort to convince people to invest in fake cryptocurrencies. They have even been thought to steal from existing crypto wallets. SOPA Images/LightRocket via Getty Images

“With the advent and availability of free AI APIs [application programming interfaces] like ChatGPT, we wanted to see if these tools are already being exploited to fool people. And it turns out they are, sadly but not surprisingly,” Menczer told The Post.

These counterfeit accounts are now so convincing that even when Yang and Menczer applied a large language model (LLM) content detector, the “state-of-the-art” tech couldn’t “effectively distinguish between human and LLM-powered bots in the wild.”
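The article doesn’t say which detector the researchers used. One common baseline for this kind of screening is perplexity scoring, where text a language model finds too predictable is flagged as machine-generated. The sketch below illustrates that idea with GPT-2 from Hugging Face’s transformers library; the model choice and the 50.0 cutoff are illustrative assumptions, not the study’s actual setup.

```python
# A minimal sketch of perplexity-based AI-text screening, a common baseline
# for LLM-content detectors. Assumption: the study does not specify which
# detector was used; GPT-2 and the threshold below are illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score text by GPT-2 perplexity; lower often suggests machine generation."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing the input ids as labels yields the average token-level loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

tweet = "Bitcoin is poised for a breakout this week. #crypto #web3"  # made-up example
score = perplexity(tweet)
print(f"perplexity={score:.1f}",
      "-> flag as possible LLM output" if score < 50.0 else "-> likely human")
```

The weakness Yang and Menczer observed follows from this design: once bot tweets are produced by a model as fluent as ChatGPT, their scores overlap with human writing and the cutoff stops separating the two.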

The researchers didn’t reveal the handles associated with these accounts.

However, they disclosed that they identified the accounts in the botnet after discovering “self-revealing tweets posted by these accounts accidentally.”

“Based on this clue, we searched Twitter [X] for the phrase ‘as an ai language model,’ between Oct. 1, 2022, and April 23, 2023,” the researchers explained, which “led to 12,226 tweets by 9,112 unique accounts,” though “there is no guarantee that all these accounts are LLM-powered bots.”
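As an illustration of how such a phrase search could be reproduced, the sketch below queries the X API v2 full-archive search through the Tweepy library. The article doesn’t describe the researchers’ actual tooling; the bearer token is a placeholder, and full-archive search requires elevated (academic or enterprise) API access.

```python
# A minimal sketch of the researchers' phrase search, reimplemented against
# the X API v2 full-archive search via Tweepy. Assumptions: BEARER_TOKEN is a
# hypothetical credential, and the study's exact tooling is not described.
import tweepy

client = tweepy.Client(bearer_token="BEARER_TOKEN")

authors = set()
tweet_count = 0
# Exact-phrase query; matching is case-insensitive, as in the study's search.
for page in tweepy.Paginator(
    client.search_all_tweets,
    query='"as an ai language model"',
    start_time="2022-10-01T00:00:00Z",
    end_time="2023-04-23T00:00:00Z",
    tweet_fields=["author_id"],
    max_results=500,
):
    for tweet in page.data or []:
        tweet_count += 1
        authors.add(tweet.author_id)

# The study reported 12,226 matching tweets from 9,112 unique accounts.
print(f"{tweet_count} tweets from {len(authors)} unique accounts")
```

Counting distinct author IDs mirrors the study’s step from 12,226 matching tweets down to 9,112 unique accounts, which the researchers then had to sort into humans quoting ChatGPT versus actual bots.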

The researchers concluded that 76% of those tweets likely reflect “humans posting or retweeting ChatGPT outputs, while the remaining accounts are likely bots using LLMs for content generation.”

Menczer told The Post that his main conclusions from the study are that “this is just the tip of the iceberg, [that] malicious bots developed by slightly more careful bad actors would not be detectable, and [that] significant resources should be devoted to developing appropriate countermeasures and regulation.”

“Currently, there are no effective methods to detect AI-generated content,” he added.

Because of the advancements in ChatGPT, it’s become increasingly difficult to differentiate accounts in a botnet from legitimate accounts run by humans, the study said. AP

The Post has sought comment from OpenAI, the company behind ChatGPT.

After Menczer and Yang published the study in July, X took down the 1,140 bot accounts, according to Wired.

Menczer told the outlet that he would normally notify X of the university’s findings, but he didn’t with this study because “they are not really responsive.”

When The Post reached out to X for comment, its press line responded with an automated message that said: “We’ll get back to you soon.”