This Week in AI: OpenAI moves away from safety


Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we’re upping the cadence of our semiregular AI column, which was previously twice a month (or so), to weekly — so be on the lookout for more editions.

This week in AI, OpenAI once again dominated the news cycle (despite Google’s best efforts) with not only a product launch, but also with some palace intrigue. The company unveiled GPT-4o, its most capable generative model yet, and just days later effectively disbanded a team working on the problem of developing controls to prevent “superintelligent” AI systems from going rogue.

The dismantling of the team generated a lot of headlines, predictably. Reporting — including ours — suggests that OpenAI deprioritized the team’s safety research in favor of launching new products like the aforementioned GPT-4o, ultimately leading to the resignation of the team’s two co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever.

Superintelligent AI is more theoretical than real at this point; it’s not clear when, or whether, the tech industry will achieve the breakthroughs necessary to create AI capable of accomplishing any task a human can. But the coverage from this week would seem to confirm one thing: that OpenAI’s leadership, in particular CEO Sam Altman, has increasingly chosen to prioritize products over safeguards.

Altman reportedly “infuriated” Sutskever by rushing the launch of AI-powered features at OpenAI’s first dev conference last November. And he’s said to have been critical of Helen Toner, director at Georgetown’s Center for Security and Emerging Technology and a former member of OpenAI’s board, over a paper she co-authored that cast OpenAI’s approach to safety in a critical light — to the point where he attempted to push her off the board.

Over the past year or so, OpenAI has let its chatbot store fill up with spam and (allegedly) scraped data from YouTube against the platform’s terms of service while voicing ambitions to let its AI generate depictions of porn and gore. Certainly, safety seems to have taken a back seat at the company — and a growing number of OpenAI safety researchers have come to the conclusion that their work would be better supported elsewhere.

Here are some other AI stories of note from the past few days:

  • OpenAI + Reddit: In more OpenAI news, the company reached an agreement with Reddit to use the social site’s data for AI model training. Wall Street welcomed the deal with open arms — but Reddit users may not be so pleased.
  • Google’s AI: Google hosted its annual I/O developer conference this week, during which it debuted a ton of AI products. We rounded them up here, from the video-generating Veo to AI-organized results in Google Search to upgrades to Google’s Gemini chatbot apps.
  • Anthropic hires Krieger: Mike Krieger, one of the co-founders of Instagram and, more recently, the co-founder of personalized news app Artifact (which TechCrunch corporate parent Yahoo recently acquired), is joining Anthropic as the company’s first chief product officer. He’ll oversee both the company’s consumer and enterprise efforts.
  • AI for kids: Anthropic announced last week that it would begin allowing developers to create kid-focused apps and tools built on its AI models — so long as they follow certain rules. Notably, rivals like Google disallow their AI from being built into apps aimed at younger ages.
  • AI film festival: AI startup Runway held its second-ever AI film festival earlier this month. The takeaway? Some of the more powerful moments in the showcase came not from AI but from the more human elements.

More machine learnings

AI safety is obviously top of mind this week with the OpenAI departures, but Google DeepMind is plowing onward with a new “Frontier Safety Framework.” Basically, it’s the organization’s strategy for identifying, and hopefully preventing, any runaway capabilities: it doesn’t have to be AGI; it could be a malware generator gone mad or the like.

Image Credits: Google DeepMind

The framework has three steps: (1) Identify potentially harmful capabilities in a model by simulating its paths of development; (2) evaluate models regularly to detect when they have reached known “critical capability levels”; and (3) apply a mitigation plan to prevent exfiltration (by another or itself) or problematic deployment. There’s more detail here. It may sound kind of like an obvious series of actions, but it’s important to formalize them or everyone is just kind of winging it. That’s how you get the bad AI.
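As a thought experiment, the evaluate-and-mitigate loop at the heart of steps 2 and 3 might look something like the sketch below. To be clear, the capability names, thresholds, and `run_eval` function are all invented for illustration; none of this is DeepMind tooling.

```python
# Hypothetical sketch of a "critical capability level" check.
# All names and thresholds here are illustrative assumptions.

CRITICAL_CAPABILITY_LEVELS = {
    "autonomy": 0.8,       # e.g., self-directed replication
    "cyber_offense": 0.7,  # e.g., generating working malware
    "persuasion": 0.9,
}

def run_eval(model, capability: str) -> float:
    """Placeholder: score the model on a capability benchmark (0..1)."""
    return model.benchmark_scores.get(capability, 0.0)

def frontier_safety_check(model) -> list:
    """Return the mitigations triggered by the model's current scores."""
    triggered = []
    for capability, threshold in CRITICAL_CAPABILITY_LEVELS.items():
        if run_eval(model, capability) >= threshold:
            # Step 3: restrict deployment and harden against exfiltration.
            triggered.append(f"mitigate:{capability}")
    return triggered
```

The point of formalizing it this way is exactly what the framework argues: the thresholds are written down in advance, so nobody is deciding what counts as "too capable" in the heat of a launch.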

A rather different risk has been identified by Cambridge researchers, who are rightly concerned at the proliferation of chatbots that one trains on a dead person’s data in order to provide a superficial simulacrum of that person. You may (as I do) find the whole concept somewhat abhorrent, but it could be used in grief management and other scenarios if we are careful. The problem is we are not being careful.

Image Credits: Cambridge University / T. Hollanek

“This area of AI is an ethical minefield,” said lead researcher Katarzyna Nowaczyk-Basińska. “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.” The team identifies numerous scams and potential bad (and good) outcomes, and discusses the concept generally (including fake services) in a paper published in Philosophy & Technology. Black Mirror predicts the future once again!

In less creepy applications of AI, physicists at MIT are looking at a useful (to them) tool for predicting a physical system’s phase or state, normally a statistical task that can grow onerous with more complex systems. But train up a machine learning model on the right data, ground it with some known material characteristics of a system, and you have yourself a considerably more efficient way to go about it. Just another example of how ML is finding niches even in advanced science.
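To give a flavor of the idea (this is a toy, not the MIT group’s actual method), here’s what “grounding a model with a known material characteristic” can look like: a one-feature classifier that predicts whether an Ising-like spin system is in its ordered or disordered phase, using the mean magnetization as that known characteristic.

```python
# Toy sketch: predict a system's phase from one physically meaningful
# feature (mean magnetization). Purely illustrative.
import math
import random

random.seed(0)

def sample_config(ordered: bool, n: int = 100) -> list:
    """Fake spin configuration: mostly aligned if ordered, random otherwise."""
    p = 0.95 if ordered else 0.5
    return [1 if random.random() < p else -1 for _ in range(n)]

def magnetization(spins: list) -> float:
    return abs(sum(spins)) / len(spins)

# Training data: (magnetization, label) pairs.
data = [(magnetization(sample_config(o)), 1.0 if o else 0.0)
        for o in [True, False] * 200]

# One-feature logistic regression fit by stochastic gradient descent.
w, b = 0.0, 0.0
for _ in range(500):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        w += 0.1 * (y - p) * x
        b += 0.1 * (y - p)

def predict_ordered(spins: list) -> bool:
    x = magnetization(spins)
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5
```

The efficiency win the researchers are after is the same in spirit: once trained, the model answers in microseconds what a full statistical treatment would grind through.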

Over at CU Boulder, they’re talking about how AI can be used in disaster management. The tech may be useful for quickly predicting where resources will be needed, mapping damage, even helping train responders, but people are (understandably) hesitant to apply it in life-and-death scenarios.

Attendees at the workshop.
Image Credits: CU Boulder

Professor Amir Behzadan is trying to move the ball forward on that, saying, “Human-centered AI leads to more effective disaster response and recovery practices by promoting collaboration, understanding and inclusivity among team members, survivors and stakeholders.” They’re still at the workshop phase, but it’s important to think deeply about this stuff before trying to, say, automate aid distribution after a hurricane.

Lastly, some interesting work out of Disney Research, which was looking at how to diversify the output of diffusion image generation models, which can produce similar results over and over for some prompts. Their solution? “Our sampling strategy anneals the conditioning signal by adding scheduled, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition alignment.” I simply could not put it better myself.
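For the curious, the quoted strategy can be sketched in a few lines. The linear schedule shape and the function names below are assumptions for illustration, not Disney’s actual code:

```python
# Sketch of annealed conditioning: noise the conditioning vector a lot
# early in sampling (diversity), barely at all late (prompt alignment).
import random

random.seed(0)

def noise_schedule(step: int, total_steps: int, sigma_max: float = 1.0) -> float:
    """Monotonically decreasing noise scale: sigma_max at step 0, zero at the end."""
    return sigma_max * (1 - step / total_steps)

def anneal_conditioning(cond: list, step: int, total_steps: int) -> list:
    """Perturb the conditioning vector with scheduled Gaussian noise."""
    sigma = noise_schedule(step, total_steps)
    return [c + random.gauss(0.0, sigma) for c in cond]
```

A diffusion sampler would call `anneal_conditioning` on the prompt embedding at every denoising step instead of reusing the clean embedding throughout, which is where the extra variety comes from.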

Image Credits: Disney Research

The result is a much wider diversity in angles, settings, and general look in the image outputs. Sometimes you want this, sometimes you don’t, but it’s nice to have the option.
