Concerned about AI Risk? – Check for this clause in your contracts
Artwork by Generative Steve - ChatGPT assured me that DALL·E was ok with that nickname.

We covered this in “Demystifying AI Risks”, but it bears repeating: AI is not a monolith. It is many different things, adopted in many different ways, each carrying different risks.

What if I told you that there’s a good chance your largest AI risk has nothing to do with YOUR usage of AI and everything to do with how your vendors may be using your data to train their models?

As with all things, the devil is in the details: almost every SaaS contract you have signed contains an innocuous clause known as a Product Enhancement Clause. It reads something like this:

The customer agrees that [Vendor Name] may collect, store, and use data derived from the customer's use of [Product/Service Name] for the purposes of enhancing and improving [Product/Service Name].

Ten years ago, “enhancing and improving” meant fixing bugs and using production data to test new features.

The problem is that the definition of “enhancements and improvements” is almost never stated, and in practice it has expanded to include vendors using your data to train the machine learning models that will then “enhance and improve” the product.

What can you do about this?

  • Get the details: Ask your vendors whether they are using your data to train their models. You can dig deeper and ask them to list the activities undertaken with your data under the product enhancement clause.
  • Assess your risks: Vendors will typically curate, anonymize, and cluster the data before using it to train their models; this preprocessing may render the risk acceptable.
  • Understand your options: Do your vendors allow you to limit how your data is used, or to mitigate the risks, under the product enhancement clause?


What if I must agree to those terms if I want to continue using this product?

This outcome is not as rare as you might think. In that case, you’ll have to make a risk-based decision: stay with this vendor, or start transitioning to an alternative offering.

For organizations based in Europe, there are two additional considerations:

  • The GDPR: A data processor that re-uses personal data for its own purposes (such as product improvement) is re-classified as a data controller, regardless of what the contract says, and may be subject to sanctions for failing to act on the instructions of the controller. For more detail, review these guidelines published by the French data protection authority.
  • The AI Act: Developers of AI capabilities will have to provide details to their clients under the transparency requirements, together with options for risk mitigation, or face serious fines.

Paula Fontana

VP of Global Marketing at iluminr | Venture Partner | Growth Advisor

There is also the question of the value of the data. What is the big-data deal underlying your agreement with this vendor? What does your organization put in, and what are the secondary uses of the data you disclose? Every conversation, every activity, represents a whole host of metadata to your third party. This data is valuable; in some cases, we can cure literal diseases by looking at this historic data. Applying existing policies is a good place to start, but AI introduces novel aspects that require specific assessment, so it’s not “anything goes” with these tools. “We’re just experimenting with it, we don’t care if they use it for personal use.” Often the legal and compliance folks are the last to know it’s already being used or going live next week. Make sure your business understands the risks.

