EXPERT TIPS

Tread carefully letting your staff loose with AI

Making sure your team are aware of the risks — as well as the benefits — is essential to keeping your organisation safe

The Times

Q: Some of my staff are using ChatGPT and other AI platforms. What do we need to be aware of from a security perspective?

A: Artificial intelligence is everywhere, but most people don’t understand it well. What it can and can’t do is shrouded in myth and mystery, but that hasn’t stopped people from using the raft of new tools available.

Take, for example, large language models, such as ChatGPT, which have exploded in popularity in the past two years. They work by taking the words you put into them, breaking them down into numbers and then using the patterns they've learnt to predict what comes next. Basically, it's the use of algorithms to mimic human language.

There are plenty of risks to be aware of when using these tools, the main one being that you absolutely must not enter private or confidential information into any public system. This rule is by no means exclusive to the use of AI, yet the excitement of using these new tools seems to have made many forget it.

ChatGPT has grown hugely in popularity amid the rise in companies’ use of artificial intelligence

The potential benefit of using AI — accelerating administrative tasks and freeing people to carry out more meaningful work — will naturally be appealing to individual staff members and business leaders, but for criminals there is major scope for exploitation. In March last year a bug was found to have leaked ChatGPT user payment data, while earlier this year another leak from the platform saw conversations, personal data and login credentials exposed.


It might seem like a good idea to take a substantial Excel document and paste it into an AI system to speed up a research task, but freely handing over swathes of information is naive, at best. Certain prompt structures can even be used to extract data in its raw format.

For me, there are three straightforward steps to abide by when using AI at work:

• Educate your staff about the risks.
• Have a firm policy on what AI can be used for, and what it must not be used to do.
• Consider blocking access to AI if the first two steps aren’t followed. It’s a tough line to take, but giving your IT team the ability to prevent staff from using AI may make the difference between being breached and keeping the organisation secure.

It is also worth noting that getting the balance right with education is tricky. Security fatigue is a real problem, so don't overburden your staff with messaging on how to stay safe. Keep it clear, simple and actionable.

Internal AI systems such as Microsoft's Copilot are useful tools and a happy medium for dipping your toe into this technology. Of course, you still have to put your faith in a third party and trust that what they're doing with your information is what they say they're doing, but used properly they can transform your approach to simple tasks such as content generation, extraction, summarisation, rewriting, question answering and note-taking.


And your staff may be using more than just ChatGPT. There’s Sora, which can make realistic-looking videos from text instructions; Fireflies.AI, which can analyse voice conversations; and even structuredprompt.com, which helps to make a prompt for these systems by walking you through the process of drafting a thorough instruction.

And that’s just the tip of the iceberg. More and more AI tools are being created every day, so educating your staff is the crucial first step to getting ahead of the risks.

Shaun Reardon is a former detective at Scotland Yard, where he worked on digital forensic cyberinvestigations, and is now head of industrial systems cybersecurity at DNV, a quality assurance and risk management company