
Meta Changes 'Made With AI' Policy After Mislabeling Images

Meta is tweaking its AI labels after Instagram and Threads users found them confusing and inaccurate.

July 2, 2024
Phone with Meta AI search tool on it with Meta name behind on a screen. (Credit: Photosince/Shutterstock.com)

Meta has released an update to its "Made With AI" labeling policy after some user images were incorrectly flagged as AI-made when they were, in fact, real photos edited with basic software.

In many cases, Meta would automatically flag content as made with AI simply because it had been lightly edited in Adobe Photoshop, even though it wasn't fully generated by a tool like Stable Diffusion or DALL-E. Photoshop exports include metadata that essentially functions as a disclaimer: some Photoshop features use generative AI, so the image itself could be AI-altered. Once Meta's automated systems decided a piece of content was "Made With AI," the creator couldn't remove the label.

But a fully AI-generated image is not the same as a photo where a speck of dust in one corner was edited out. It's also not all that helpful to label a real photo as "Made With AI" just because the background was swapped out for something plain and not misleading, as YouTuber Justine "iJustine" Ezarik pointed out in a Threads post last month.

"I changed the color of my background and added a lens flare and I'm getting tagged with this being AI," Ezarik said with a laughing emoji.

Now, Meta says it's going to stop conflating AI edits and fully AI-generated images to make things clearer. "We’ve found that our labels based on these indicators weren’t always aligned with people’s expectations and didn’t always provide enough context," Meta explained in its Monday update. Meta's own Oversight Board helped guide this change, according to the company.

"While we work with companies across the industry to improve the process so our labeling approach better matches our intent, we’re updating the “Made with AI” label to “AI info” across our apps, which people can click for more information," Meta's post explains. Like many big tech firms, Meta has been pushing for more AI tools across its platforms, launching "Meta AI" in Instagram and WhatsApp and giving Instagram users the ability to alter their photos with AI in-app.

Meta first announced its plans to start automatically detecting and labeling AI content back in February. It also started asking users on its apps like Instagram to proactively disclose when an image is "Made With AI." But Meta's detection system relies almost entirely on the assumption that AI-generated or AI-altered images will carry metadata following the C2PA standard, which self-labels AI content under the hood.
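To give a rough sense of what that self-labeling looks like in practice, here is a minimal, hypothetical Python sketch that scans an image file for C2PA-style markers. It does not properly parse a Content Credentials manifest; the marker strings and filename are assumptions for illustration only.

```python
# Heuristic sketch: check whether an image file appears to carry C2PA-style
# provenance metadata. This only scans the raw bytes for marker strings; it
# does not parse or verify a Content Credentials manifest.

from pathlib import Path

# Assumed marker strings often associated with C2PA/Content Credentials data.
C2PA_MARKERS = [b"c2pa", b"contentauth", b"jumbf"]

def looks_like_c2pa(path: str) -> bool:
    data = Path(path).read_bytes().lower()
    return any(marker in data for marker in C2PA_MARKERS)

if __name__ == "__main__":
    # "photo.jpg" is a placeholder filename.
    print(looks_like_c2pa("photo.jpg"))
```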

This has never been a perfect solution, though. While it's easy for Adobe to attach AI metadata to exported content, even if it has only been lightly retouched, it has always been just as easy to remove that metadata. Taking a screenshot of an image strips the metadata in seconds, and reformatting or re-exporting the image through a different program erases it as well.
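As a rough illustration of how fragile that signal is, simply re-saving an image through a typical image library drops the embedded metadata by default. This is a minimal sketch using Pillow; the filenames are placeholders.

```python
# Minimal sketch: re-exporting an image with Pillow. By default the save call
# does not carry over XMP/C2PA metadata, so the copy loses the self-labeling
# signal that systems like Meta's rely on.
from PIL import Image

original = Image.open("labeled_photo.jpg")      # placeholder filename
original.save("reexported_photo.jpg", "JPEG")   # saved without the original metadata
```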

For now, there's no one-size-fits-all solution for detecting AI images online. Instead, it's still ultimately up to internet users to be wary of suspicious images that could be deepfakes—and learn to spot certain clues that suggest an image could actually be "Made With AI."


About Kate Irwin

Reporter

I’m a reporter covering early morning news. Prior to joining PCMag in 2024, I was a reporter and producer at Decrypt and launched its gaming vertical, GG. I have previous bylines with Input, Game Rant, and Dot Esports. I’ve been a PC gamer since The Sims (yes, the original). In 2020, I finally built my first PC with a 3090 graphics card, but also regularly use Mac and iOS devices as well. As a reporter, I’m passionate about uncovering scoops and documenting the wide world of tech and how it affects our daily lives.
