
Adobe Swears It’s Not Training Its A.I. on Your Photoshops

But Creative Cloud needs “some degree” of omniscience.

Illustration by Natalie Matthews-Ramo/Slate

In the age of artificial intelligence, every internet user is reduced to their lowest form. In the eyes of the most powerful machine learning companies, we're little more than training data. It's a morbid existence, knowing that whenever you post to Reddit, review a restaurant online, or upload a photo of yourself, your words and images could be scraped and used to train generative A.I. models to make the next great (or not-so-great) chatbot.

So, when Adobe informed its customers of changes to its terms of use this week, many of its creative-minded loyalists read the update and promptly freaked out. A pop-up notification informed them that the company "may access your content through both automated and manual methods, such as for content review." Users took to X (formerly Twitter) to complain about clauses elsewhere in the terms of service under which Adobe might analyze their content using machine learning.

Naturally, this conjured up nightmarish visions of Big Brother Adobe watching every time you doctor an image in Photoshop, design a brochure in InDesign, or cut video in Premiere. Fears of client confidentiality breaches abounded: "If you are a professional, if you are under NDA with your clients, if you are a creative, a lawyer, a doctor or anyone who works with proprietary files—it is time to cancel Adobe, delete all the apps and programs," one user posted. "Adobe can not be trusted."

The rebuttal from Adobe was just as swift. Scott Belsky, the company’s chief strategy officer, responded to a viral post on X and tried to explain what was going on. “I can clearly state that Adobe does NOT train any GenAI models on customer’s content, and we obviously have tight security around any form of access to customer’s content,” he wrote. “As a company that stores cloud documents and assets for customers, there are probably circumstances (like indexing to help you search your documents, updating components used from CC libraries across your documents, among others) where the company’s terms of service allow for some degree of access.”

According to a post on its blog, the company is not training its A.I. models on user projects: "Adobe does not train Firefly Gen AI models on customer content. Firefly generative AI models are trained on a dataset of licensed content, such as Adobe Stock, and public domain content where copyright has expired." The post claims that the company uses machine learning only to review user projects for signs of illegal or abusive content, such as child pornography, spam, and phishing material.

Although an outside spokesperson for Adobe simply pointed me to the blog post, Belsky offered a view into the consternation inside the company, admitting on X that the wording of the terms of use was confusing. “Trust and transparency couldn’t be more crucial these days, and we need to be clear when it comes to summarizing terms of service in these pop-ups,” he wrote.

Despite the cleanup efforts, this episode demonstrates how gun-shy everyone is about generative A.I. And perhaps no population has been more wronged here than creative professionals, many of whom feel that generative A.I. companies have illicitly trained their image-, video-, and sound-generation models on copyrighted works. Big Tech is splitting its loyalties between serving its existing audiences and chasing the self-propagating hype around generative A.I. In doing so, it risks alienating loyal customers. No one wants to be treated like training data, even if that's what we all are.