Adobe Is Changing Its Terms of Use Again After Backlash

Adobe is revising its legal language to clear the air after customers accused the company of surveilling their work and using it to train AI tools. Here's a look at what is—and isn't—going on.

(Credit: IB Photography/Shutterstock.com)

Adobe is reworking its Terms of Use to try to clear up confusion about how it does and doesn't access its customers' creative work.

"We recently rolled out a re-acceptance of our Terms of Use which has led to concerns about what these terms are and what they mean to our customers," Adobe's Chief Strategy Officer Scott Belsky and Dana Rao, who leads Adobe's legal policy, write in a blog post Monday. "Over the next few days, we will speak to our customers with a plan to roll out updated changes by June 18."

Adobe reaffirmed Monday that it won't use customer creations or data to train any generative AI tools. "We’ve never trained generative AI on customer content, taken ownership of a customer’s work, or allowed access to customer content beyond legal requirements. Nor were we considering any of those practices as part of the recent Terms of Use update," Belsky and Rao added.

Adobe admitted it was time to make its legal language more readable for customers without law degrees. "We should have modernized our Terms of Use sooner," Belsky and Rao continued. "As technology evolves, we must evolve the legal language that explains our policies and practices not just in our daily operations, but also in ways that proactively narrow and explain our legal requirements in easy-to-understand language."

Last week, Adobe sparked backlash online after filmmakers and artists raised concerns that the new language gave the tech giant sweeping license to "access" and "view" user content and "analyze" it with AI tools. The terms didn't make clear whether user art would be fed into a massive dataset for AI training. The language also raised questions about how, when, and to what extent Adobe views its customers' work in apps like Photoshop.

In response, Adobe said last week that the new terms were rolled out to give it a clearer legal basis to identify and combat the spread of child sexual abuse material and to stop such content from being processed through or hosted on its apps.

"The focus of this update was to be clearer about the improvements to our moderation processes that we have in place," Adobe previously said in response to the issue. "Given the explosion of Generative AI and our commitment to responsible innovation, we have added more human moderation to our content submissions review processes."

What does this mean? Anything uploaded to Adobe's cloud will continue to be "scanned" by its automated systems. Only local content stored on a user's computer won't be examined in this manner. Adobe explained Monday that it "automatically scans" all content uploaded through its Creative Cloud to ensure it isn't hosting any child sexual abuse material. A human review of user content only occurs if potentially illegal material is flagged by this automated process.

In the next iteration of its Terms of Use, Adobe will specifically state that it won't use customer data to train its AI products, like Firefly. Instead, Adobe uses licensed content from places like Adobe Stock. But Adobe Stock isn't perfect, either: it has recently hosted AI-generated images "inspired" by famous artists, such as Ansel Adams, that were sold under their names without consent.

By Kate Irwin