Demystifying AI Risks
Artwork by Generative Steve - ChatGPT assured me that DALL·E was ok with that nickname.


Since the good people at OpenAI launched ChatGPT, AI has been the juggernaut of the tech industry. This has resulted in “rapid adoption” often unencumbered by trivial things like return-on-investment calculations or the occasional risk assessment.

The latter (risk assessment) is made harder by that knot in your stomach when something brand new is placed in front of you and now YOU have to deal with it.

So, let’s make it simple.

From a risk perspective you can break AI adoption into four classes. Each one comes with its own risks, many of which you are already familiar with, and its own mitigating controls, many of which you already have.

AI in the wild: This category includes AI tools (generative or otherwise) available in the public domain that employees are using for work-related purposes, sanctioned or otherwise (such as ChatGPT and Gemini). When you break it down, this is the age-old problem of employee awareness and may involve refreshing some of the old policies to put them into context.
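If you want to go beyond awareness posters, discovery is a sensible first step. Below is a minimal sketch, in Python, of tallying visits to public AI tools from a web proxy log; the CSV layout, column names and domain watchlist are all assumptions you would adapt to your own gateway.

```python
# A minimal sketch, assuming a hypothetical proxy log exported as CSV with
# "user" and "domain" columns; adjust field names to match your own gateway.
import csv
from collections import Counter

# Hypothetical watchlist of public AI tool domains; extend for your environment.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def tally_ai_usage(log_path: str) -> Counter:
    """Count visits to known public AI tools, per user and domain."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_DOMAINS:
                usage[(row["user"], row["domain"])] += 1
    return usage

if __name__ == "__main__":
    # "proxy_log.csv" is a placeholder path for your exported gateway log.
    for (user, domain), hits in tally_ai_usage("proxy_log.csv").most_common(10):
        print(f"{user} -> {domain}: {hits} visits")
```

Even a crude tally like this tells you which tools employees are actually reaching for, which is a far better starting point for a policy refresh than guessing.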

Embedded AI: This category includes AI capabilities built into standard solutions and SaaS offerings used within the enterprise. Service providers have been complementing their offerings with AI capabilities for the better part of the past decade, many of which are completely invisible to the organization. Once again this is nothing new. You will need to ask vendors new questions about the AI capabilities they packaged into the products they sold you. Fun questions like whether they are using your data to train their models.
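To keep the answers to those fun questions from disappearing into an email thread, it helps to catalog them per product. Here is a minimal sketch of what such a record might look like; the schema and field names are illustrative assumptions, not any standard.

```python
# A minimal sketch of a vendor AI questionnaire record, so answers land in a
# catalog instead of an inbox. All field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class EmbeddedAIRecord:
    vendor: str
    product: str
    ai_features: list[str] = field(default_factory=list)
    trains_on_customer_data: bool | None = None  # None = vendor has not answered yet
    opt_out_available: bool | None = None
    data_residency: str | None = None
    attestation_url: str | None = None  # e.g. a vendor trust-center page

# Hypothetical vendor and product, purely for illustration.
crm = EmbeddedAIRecord(
    vendor="ExampleSoft",
    product="ExampleCRM",
    ai_features=["lead scoring", "email drafting"],
    trains_on_customer_data=False,
    opt_out_available=True,
)
print(crm)
```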

Hybrid AI: This category includes enterprise offerings that come with a pre-trained foundation model (generative or otherwise) which is then augmented (or further trained) using enterprise data to achieve a desired outcome. A popular example is Microsoft 365 Copilot, which combines a static, pre-trained GPT foundation model with corporate data from the enterprise Office 365 tenant so that employees can ask questions and get answers in context. This adoption class is rapidly emerging. It is also the most complex from a risk perspective because it combines a pre-trained model with enterprise data. Most of the risk management for this class is done by monitoring and tuning outcomes, which is heavily platform specific.
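As a deliberately simplified illustration of outcome monitoring, the sketch below logs every prompt/response pair to an audit file and flags responses containing watchlist terms for human review. The watchlist, file layout and example interaction are all assumptions; real platforms expose their own monitoring and tuning hooks.

```python
# A minimal sketch of outcome monitoring for a hybrid AI assistant:
# every prompt/response pair is logged, and simple rules flag answers
# that may warrant human review. Terms and paths are illustrative.
import json
import time

SENSITIVE_TERMS = {"salary", "ssn", "credit card"}  # illustrative watchlist

def flag_response(response: str) -> list[str]:
    """Return the watchlist terms that appear in a model response."""
    lowered = response.lower()
    return [term for term in SENSITIVE_TERMS if term in lowered]

def log_interaction(user: str, prompt: str, response: str,
                    path: str = "ai_audit.jsonl") -> None:
    """Append an auditable record of one assistant interaction."""
    record = {
        "ts": time.time(),
        "user": user,
        "prompt": prompt,
        "response": response,
        "flags": flag_response(response),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical interaction, purely for illustration.
log_interaction("jdoe", "Summarize the Q3 budget deck", "Here is a summary ...")
```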

AI in-house: This category includes AI capabilities trained, tested and developed internally where the organization has full visibility into the data, the technologies, and the subsequent tuning made to the AI models, as well as the purpose for which they are used. If this sounds familiar, then chances are you have a software engineering team or a data science team who are doing this work and who should take on the associated governance and risk management responsibilities.


Finally, it is worth noting that most of the AI governance frameworks out there (such as the NIST AI RMF or one of the many ISO standards) focus almost exclusively on AI in-house, which is arguably the least likely adoption class for most organizations because it requires that you have the data, the expertise and the investment in place.

Conversely, Embedded AI is going to have the highest impact from a risk perspective, because it will touch every SaaS platform, many of which will have dozens of models in each product. Vendors will have to provide transparency and attestations to support clients having to make risk assessments as part of their governance program or due to regulations such as the EU’s AI Act.

For more on this subject see the latest research from Gartner.

Getting Ready for the EU AI Act, Phase 1: Discover & Catalog

The AI Act outlines a set of rules for organizations operating in the EU, with enforcement starting in late 2024 and expanding through 2027. Security and risk management leaders should immediately start discovering and cataloging AI-enabled capabilities ahead of the mandatory risk assessment.

AI Needs “A Moral Compass”: Q&A With the Co-Architect of the EU AI Act

Executive leaders can hear from MEP Dragos Tudorache on why organizations should introduce safeguards now to make sure their AI-generated content isn’t “harmful and against the law.” Plus, learn how, unlike privacy regulation, global jurisdictions are aligned on the challenges AI poses.


Note: please don’t leave a comment lecturing me about my use of the term “AI” and pointing out that what we have now is not true artificial intelligence. Thank you for your attention to detail, and kindly leave me alone.


Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management.
