
Shadow AI is a growing threat. Here are 3 ways organizations can safeguard their data.


By Clint Boulton

The public cloud is often the first stop for organizations deploying new workloads. Its agile approach to building, testing, and scaling applications makes it a no-brainer platform for time-crunched staff.

Yet the public cloud can also become the bane of IT leaders' existence when employees use it, along with SaaS tools, for do-it-yourself application delivery and consumption — the practice commonly known as shadow IT.

This may be truer than ever for generative AI, an increasingly popular workload. Shadow AI, or the unsanctioned use of technologies such as GenAI, has emerged as a critical threat for organizations trying to secure corporate IP and data.

Proper guardrails and training, in conjunction with deploying GenAI in your data center, can help mitigate some of these risks. For organizations getting started with GenAI, it's important to understand why shadow AI is dangerous to best identify how to address it.

Why shadow AI presents a credible threat

Microsoft and LinkedIn report1 that 78% of employees are "bringing their own AI technologies to work" (BYOAI), a softer way of describing shadow AI. The research also acknowledges that BYOAI puts corporate data at risk.

Shadow IT and shadow AI share the same low barrier to entry and platform dynamics. Just as employees easily access public cloud and SaaS solutions, they can simply log into a public digital assistant and prompt it to begin creating content. The learning curve for basic prompting isn't much different than querying Google and other search engines.

This is all well and good until employees input privileged information, such as personally identifiable data, financial information, or critical strategy documents.

At best, the employee is sharing sensitive data with a third-party vendor. At worst, the vendor may use that information to continuously train its model, which may use it in answers to other users' prompts. Regurgitation in the consumer domain is one thing; it's something else entirely in a corporate context.

Accordingly, the security risks associated with employees consuming public LLMs are very real, particularly when IT departments aren't aware of what data their employees are using for their prompting.

As organizations launch GenAI initiatives, they can take steps to reduce the risks of adopting this nascent technology. The following tips can help:

On-premises can be kinder to the budget

Beyond the control that comes from keeping data in-house, on-premises deployments may also be more cost-effective.

An Enterprise Strategy Group survey found that deploying an open-source LLM with retrieval-augmented generation (RAG) on-premises was two to eight times more cost-effective for inferencing than the public cloud or API-based services.2

ESG found that running Mistral 7B (7 billion parameters) with RAG was 38% to 48% more cost-effective than Amazon Web Services. A bigger model yielded bigger savings, as running Llama 2 (70 billion parameters) with RAG was 69% to 75% more cost-effective than AWS.

ESG ran the same Llama 2 model vs. OpenAI's ChatGPT 4 Turbo API and found the on-premises deployment to be 81% to 88% more cost-effective.
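One way to read those percentages: "X% more cost-effective" means the on-premises deployment costs X% less for equivalent inferencing. A minimal sketch with hypothetical monthly figures (not ESG's actual dollar amounts, which the survey does not disclose here):

```python
def savings_pct(cloud_cost: float, onprem_cost: float) -> float:
    """Percent saved by running inferencing on-premises instead of in the cloud."""
    return (cloud_cost - onprem_cost) / cloud_cost * 100

# Hypothetical example: if a cloud deployment costs $100,000/month and the
# equivalent on-premises deployment costs $31,000/month, on-premises is
# 69% more cost-effective -- the low end of ESG's Llama 2 vs. AWS range.
print(round(savings_pct(100_000, 31_000)))  # 69
```

The same arithmetic maps the 81% to 88% ChatGPT 4 Turbo API comparison to an on-premises cost of roughly one-fifth to one-eighth of the API bill.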

The bottom line

Deploying GenAI services on-premises won't eliminate shadow AI but it may help organizations maintain control over their corporate IP by bringing the AI to their data rather than entrusting it to a third party.

No matter which models IT leaders pick or where they choose to run them, deploying GenAI workloads remains a challenge for organizations that may lack the equipment, let alone the expertise, to do so.

That's where trusted partners can help. Dell Technologies is a leader in the growing open ecosystem that is helping organizations build, test, and deploy GenAI services. Dell's AI-enabled infrastructure, client devices, and professional services can help companies along their GenAI journey.

Learn more about Dell AI solutions.

This post was created by Dell with Insider Studios.

 


1 AI at Work is Here. Now Comes the Hard Part, Microsoft and LinkedIn, May 2024

2 Understanding the Total Cost of Inferencing Large Language Models, Enterprise Strategy Group, April 2024
