
AI models have favorite numbers, because they think they’re people


[Image: colorful numbers on a blue, red, and white background. Image Credits: Frank Ramspott / Getty Images]

AI models are always surprising us, not just in what they can do, but also in what they can’t, and why. An interesting new behavior is both superficial and revealing about these systems: They pick random numbers as if they’re human beings, which is to say, badly.

But first, what does that even mean? Can’t people pick numbers randomly? And how can you tell if someone is doing so successfully or not? This is actually a very old and well-known limitation that we humans have: We overthink and misunderstand randomness.

Tell a person to predict 100 coin flips, and compare that to 100 actual coin flips — you can almost always tell them apart because, counterintuitively, the real coin flips look less random. There will often be, for example, six or seven heads or tails in a row, something almost no human predictor includes in their 100.
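That streakiness is easy to check numerically. Here's a short standard-library Python sketch that simulates many sequences of 100 fair coin flips and counts how often a run of six or more identical outcomes appears; in genuinely random sequences, it shows up most of the time.

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

random.seed(0)
trials = 10_000
# Each trial: 100 fair flips (True = heads), check for a streak of 6+.
hits = sum(
    longest_run([random.random() < 0.5 for _ in range(100)]) >= 6
    for _ in range(trials)
)
print(f"{hits / trials:.0%} of sequences contain a run of 6 or more")
```

Run it and you'll see such streaks appear in roughly four out of five random sequences, which is exactly what human guessers leave out.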

It’s the same when you ask someone to pick a number between 0 and 100. People almost never pick 1 or 100. Multiples of 5 are rare, as are numbers with repeating digits like 66 and 99. These don’t seem like “random” choices to us, because they embody some quality: small, big, distinctive. Instead, we often pick numbers ending in 7, generally from the middle somewhere.

There are countless examples of this kind of predictability in psychology. But that doesn’t make it any less weird when AIs do the same thing.

Yes, some curious engineers over at Gramener performed an informal but nevertheless fascinating experiment where they simply asked several major LLM chatbots to pick a random number between 0 and 100.

Reader, the results were not random.

[Chart of the models' number picks. Image Credits: Gramener]

All three models tested had a "favorite" number that was always their answer in the most deterministic mode, and that still appeared most often even at higher "temperatures," a setting many models expose that increases the variability of their output.
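Temperature works by rescaling the model's scores before sampling. A minimal illustration, using made-up logits (the numbers below are assumptions for the sketch, not anything measured from these models), shows how a low temperature locks in the favorite while a high one spreads the picks out:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token from softmax(logits / temperature)."""
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(list(logits.keys()), weights=weights, k=1)[0]

# Hypothetical scores for answers to "pick a random number":
# the model strongly prefers "47".
logits = {"42": 2.0, "47": 4.0, "72": 1.0, "100": -3.0}
rng = random.Random(0)

shares = {}
for t in (0.1, 1.0, 2.0):
    picks = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
    shares[t] = picks.count("47") / len(picks)
    print(f"temperature={t}: '47' chosen {shares[t]:.0%} of the time")
```

At temperature 0.1 the favorite wins essentially every time; as the temperature rises, the other numbers get a larger share, but the favorite still leads, matching what the Gramener experiment observed.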

OpenAI’s GPT-3.5 Turbo really likes 47. Previously, it liked 42 — a number made famous, of course, by Douglas Adams in “The Hitchhiker’s Guide to the Galaxy” as the answer to life, the universe, and everything.

Anthropic’s Claude 3 Haiku went with 42. And Gemini likes 72.

More interestingly, all three models demonstrated human-like bias in the other numbers they selected, even at high temperature.

All tended to avoid low and high numbers; Claude never went above 87 or below 27, and even those were outliers. Repeated digits were scrupulously avoided: no 33s, 55s, or 66s, though 77 showed up (it ends in 7). Round numbers were almost entirely absent, although Gemini once, at the highest temperature, went wild and picked 0.

Why should this be? AIs aren't human! Why would they care what "seems" random? Have they finally achieved consciousness, and is this how they show it?!

No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far. These models don’t care about what is and isn’t random. They don’t know what “randomness” is! They answer this question the same way they answer all the rest: by looking at their training data and repeating what was most often written after a question that looked like “pick a random number.” The more often it appears, the more often the model repeats it.
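You can build a stochastic parrot in miniature: if a model simply reproduces answers in proportion to how often they followed similar prompts in its training data, a skewed training distribution yields skewed "random" numbers. The toy data below is invented for illustration:

```python
import random
from collections import Counter

# Toy "training data": answers people gave to "pick a random number
# between 0 and 100" (skewed toward middling, 7-ish numbers, as
# human answers tend to be).
observed_answers = ["37", "47", "37", "73", "47", "47", "17", "63", "47", "37"]
counts = Counter(observed_answers)

def parrot_answer(rng):
    """Answer by sampling in proportion to how often each reply
    appeared in the training data. No notion of randomness at all."""
    answers, freqs = zip(*counts.items())
    return rng.choices(answers, weights=freqs, k=1)[0]

rng = random.Random(1)
replies = [parrot_answer(rng) for _ in range(1000)]
print(Counter(replies).most_common(3))
```

The parrot's "favorite" is simply whatever was most common in its data, and an answer it never saw, like 100, never comes out at all.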

Where in their training data would they see 100, if almost no one ever responds that way? For all the AI model knows, 100 is not an acceptable answer to that question. With no actual reasoning capability, and no understanding of numbers whatsoever, it can only answer like the stochastic parrot it is. (Similarly, they have tended to fail at simple arithmetic, like multiplying a few numbers together; after all, how likely is it that the phrase “112*894*32=3,204,096” would appear somewhere in their training data? Though newer models will recognize that a math problem is present and kick it to a subroutine.)

It’s an object lesson in large language model (LLM) habits and the humanity they can appear to show. In every interaction with these systems, one must bear in mind that they have been trained to act the way people do, even if that was not the intent. That’s why pseudanthropy is so difficult to avoid or prevent.

I wrote in the headline that these models “think they’re people,” but that’s a bit misleading. As we often have occasion to point out, they don’t think at all. But in their responses, at all times, they are imitating people, without any need to know or think at all. Whether you’re asking it for a chickpea salad recipe, investment advice, or a random number, the process is the same. The results feel human because they are human, drawn directly from human-produced content and remixed — for your convenience and, of course, for big AI’s bottom line.
