AI training data has a price tag that only Big Tech can afford

[Image: binary code in blue interspersed with yellow padlocks, illustrating data protection. Image Credits: Peresmeh / Getty Images]

Data is at the heart of today’s advanced AI systems, but it’s costing more and more — making it out of reach for all but the wealthiest tech companies.

Last year, James Betker, a researcher at OpenAI, penned a post on his personal blog about the nature of generative AI models and the datasets on which they’re trained. In it, Betker claimed that training data — not a model’s design, architecture or any other characteristic — was the key to increasingly sophisticated, capable AI systems.

“Trained on the same data set for long enough, pretty much every model converges to the same point,” Betker wrote.

Is Betker right? Is training data the biggest determiner of what a model can do, whether that's answering a question, drawing human hands or generating a realistic cityscape?

It’s certainly plausible.

Statistical machines

Generative AI systems are basically probabilistic models: a huge pile of statistics. Based on vast numbers of examples, they guess which data makes the most “sense” to place where (e.g., the word “go” before “to the market” in the sentence “I go to the market”). It seems intuitive, then, that the more examples a model has to go on, the better its performance.
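The “statistical machine” idea can be seen in miniature with a bigram model, the simplest possible next-word guesser. This is a toy sketch, not how production LLMs work (they learn with neural networks rather than lookup tables), but it shows the same principle: predictions come from tallied examples, so more examples mean sharper guesses.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "vast numbers of examples" a model sees.
corpus = [
    "i go to the market",
    "i go to the park",
    "we go to the market",
]

# "Training": count which word follows which.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def predict_next(word):
    """Guess the most likely next word, based purely on observed examples."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("go"))   # "to": the only word ever seen after "go"
print(predict_next("the"))  # "market": seen twice, vs. "park" once
```

Every capability here comes from the data; feed it a different corpus and the same code “converges” to different predictions.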

“It does seem like the performance gains are coming from data,” Kyle Lo, a senior applied research scientist at the Allen Institute for AI (AI2), an AI research nonprofit, told TechCrunch, “at least once you have a stable training setup.”

Lo gave the example of Meta’s Llama 3, a text-generating model released earlier this year, which outperforms AI2’s own OLMo model despite being architecturally very similar. Llama 3 was trained on significantly more data than OLMo, which Lo believes explains its superiority on many popular AI benchmarks.

(I’ll point out here that the benchmarks in wide use in the AI industry today aren’t necessarily the best gauge of a model’s performance, but outside of qualitative tests like our own, they’re one of the few measures we have to go on.)

That’s not to suggest that training on exponentially larger datasets is a sure-fire path to exponentially better models. Models operate on a “garbage in, garbage out” paradigm, Lo notes, and so data curation and quality matter a great deal, perhaps more than sheer quantity.

“It is possible that a small model with carefully designed data outperforms a large model,” he added. “For example, Falcon 180B, a large model, is ranked 63rd on the LMSYS benchmark, while Llama 2 13B, a much smaller model, is ranked 56th.”

In an interview with TechCrunch last October, OpenAI researcher Gabriel Goh said that higher-quality annotations contributed enormously to the enhanced image quality in DALL-E 3, OpenAI’s text-to-image model, over its predecessor DALL-E 2. “I think this is the main source of the improvements,” he said. “The text annotations are a lot better than they were [with DALL-E 2] — it’s not even comparable.”

Many AI models, including DALL-E 3 and DALL-E 2, are trained by having human annotators label data so that a model can learn to associate those labels with other, observed characteristics of that data. For example, a model that’s fed lots of cat pictures with annotations for each breed will eventually “learn” to associate terms like bobtail and shorthair with their distinctive visual traits.
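That label-association process can be sketched in a few lines. The data below is invented for illustration, and real models learn continuous representations rather than counting traits, but the core idea of linking human annotations to observed characteristics is the same:

```python
from collections import Counter, defaultdict

# Hypothetical annotated training data: each example pairs observed
# traits with a human-supplied breed label.
annotated = [
    ({"tail": "short", "coat": "short"}, "bobtail"),
    ({"tail": "short", "coat": "long"},  "bobtail"),
    ({"tail": "long",  "coat": "short"}, "shorthair"),
    ({"tail": "long",  "coat": "short"}, "shorthair"),
]

# "Training": tally how often each trait value co-occurs with each label.
assoc = defaultdict(Counter)
for traits, label in annotated:
    for key, value in traits.items():
        assoc[(key, value)][label] += 1

def classify(traits):
    """Label a new example by the learned trait-to-label associations."""
    votes = Counter()
    for key, value in traits.items():
        votes.update(assoc[(key, value)])
    return votes.most_common(1)[0][0]

print(classify({"tail": "short", "coat": "short"}))  # "bobtail"
```

Notice that the quality of the output depends entirely on the quality of the annotations, which is exactly Goh's point about DALL-E 3.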

Bad behavior

Experts like Lo worry that the growing emphasis on large, high-quality training datasets will centralize AI development among the few players with billion-dollar budgets that can afford to acquire those sets. Major innovation in synthetic data or fundamental architecture could disrupt the status quo, but neither appears to be on the near horizon.

“Overall, entities governing content that’s potentially useful for AI development are incentivized to lock up their materials,” Lo said. “And as access to data closes up, we’re basically blessing a few early movers on data acquisition and pulling up the ladder so nobody else can get access to data to catch up.”

Indeed, where the race to scoop up more training data hasn’t led to unethical (and perhaps even illegal) behavior like secretly aggregating copyrighted content, it has rewarded tech giants with deep pockets to spend on data licensing.

Generative AI models such as OpenAI’s are trained mostly on images, text, audio, videos and other data — some copyrighted — sourced from public web pages (including, problematically, AI-generated ones). The OpenAIs of the world assert that fair use shields them from legal reprisal. Many rights holders disagree — but, at least for now, they can’t do much to prevent this practice.

There are many, many examples of generative AI vendors acquiring massive datasets through questionable means in order to train their models. OpenAI reportedly transcribed more than a million hours of YouTube videos without YouTube’s blessing — or the blessing of creators — to feed to its flagship model GPT-4. Google recently broadened its terms of service in part to be able to tap public Google Docs, restaurant reviews on Google Maps and other online material for its AI products. And Meta is said to have considered risking lawsuits to train its models on IP-protected content.

Meanwhile, companies large and small are relying on workers in third-world countries paid only a few dollars per hour to create annotations for training sets. Some of these annotators — employed by mammoth startups like Scale AI — work literal days on end to complete tasks that expose them to graphic depictions of violence and bloodshed without any benefits or guarantees of future gigs.

Growing cost

In other words, even the more aboveboard data deals aren’t exactly fostering an open and equitable generative AI ecosystem.

OpenAI has spent hundreds of millions of dollars licensing content from news publishers, stock media libraries and more to train its AI models — a budget far beyond that of most academic research groups, nonprofits and startups. Meta has gone so far as to weigh acquiring the publisher Simon & Schuster for the rights to e-book excerpts (ultimately, Simon & Schuster sold to private equity firm KKR for $1.62 billion in 2023).

With the market for AI training data expected to grow from roughly $2.5 billion now to close to $30 billion within a decade, data brokers and platforms are rushing to charge top dollar — in some cases over the objections of their user bases.

Stock media library Shutterstock has inked deals with AI vendors ranging from $25 million to $50 million, while Reddit claims to have made hundreds of millions from licensing data to orgs such as Google and OpenAI. Few platforms with abundant data accumulated organically over the years haven’t signed agreements with generative AI developers, it seems — from Photobucket to Tumblr to Q&A site Stack Overflow.

It’s the platforms’ data to sell — at least depending on which legal arguments you believe. But in most cases, users aren’t seeing a dime of the profits. And it’s harming the wider AI research community.

“Smaller players won’t be able to afford these data licenses, and therefore won’t be able to develop or study AI models,” Lo said. “I worry this could lead to a lack of independent scrutiny of AI development practices.”

Independent efforts

If there’s a ray of sunshine through the gloom, it’s the few independent, not-for-profit efforts to create massive datasets anyone can use to train a generative AI model.

EleutherAI, a grassroots nonprofit research group that began as a loose-knit Discord collective in 2020, is working with the University of Toronto, AI2 and independent researchers to create The Pile v2, a set of billions of text passages primarily sourced from the public domain.

In April, AI startup Hugging Face released FineWeb, a filtered version of the Common Crawl — the eponymous dataset maintained by the nonprofit Common Crawl, composed of billions upon billions of web pages — that Hugging Face claims improves model performance on many benchmarks.
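Turning a raw web crawl into usable training text generally means heuristic quality filtering plus deduplication. The sketch below is illustrative only — it is not FineWeb's actual pipeline, and the thresholds and rules are invented for the example — but it shows the kind of cheap checks such curation efforts rely on:

```python
import hashlib

def looks_like_quality_text(doc: str) -> bool:
    """Crude heuristics of the kind web-scale filters apply: drop very
    short pages, shouting, and highly repetitive boilerplate."""
    words = doc.split()
    if len(words) < 5:                                  # too short to be useful
        return False
    if sum(w.isupper() for w in words) / len(words) > 0.3:
        return False                                    # mostly ALL-CAPS noise
    if len(set(words)) / len(words) < 0.3:              # highly repetitive
        return False
    return True

def dedupe(docs):
    """Exact deduplication via content hashing, another standard cleanup step."""
    seen, kept = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.strip().lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

crawl = [
    "Cats are small carnivorous mammals often kept as pets.",
    "BUY NOW BUY NOW BUY NOW BUY NOW BUY NOW BUY NOW",
    "Cats are small carnivorous mammals often kept as pets.",
    "click here",
]
cleaned = [d for d in dedupe(crawl) if looks_like_quality_text(d)]
print(cleaned)  # only the cat sentence survives
```

Doing this well at the scale of billions of pages is precisely the resource-intensive curation work the article describes.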

A few efforts to release open training datasets, like the group LAION’s image sets, have run up against copyright, data privacy and other, equally serious ethical and legal challenges. But some of the more dedicated data curators have pledged to do better. The Pile v2, for example, removes problematic copyrighted material found in its progenitor dataset, The Pile.

The question is whether any of these open efforts can hope to maintain pace with Big Tech. As long as data collection and curation remains a matter of resources, the answer is likely no — at least not until some research breakthrough levels the playing field.
