Generative AI and Considerations for Business & Insurance

The available evidence discussed in the previous sections allows us to describe the likely impact of Artificial Intelligence and Large Language Models on the frequency and severity of cyber-related losses, giving businesses and the insurance industry a strong basis for carefully assessing the potential impacts, according to the Generative AI in Cyber Insurance Report.

Responding to the new risk landscape

  • Insurance: The potential for a broader set of businesses to be subjected to attacks places a greater emphasis on closing protection gaps for currently underserved audiences, such as SMEs

  • Business: It will be increasingly important for businesses to invest in mapping their critical functions and to open up conversations around cyber defence, restoration capabilities, and business continuity planning beyond the risk and information security functions. Organisations using Gen AI should also be able to detail the procedures around how they use and rely upon it, to evidence any potential operational risks with transparency
  • Government: Cross-industry working groups around cyber security could offer a platform to share information and learnings in a trusted way. Likewise, collaborating on supply chain failovers could help to reduce the disruption to the economy if a business is impacted

  • Society: Educating society on cyber hygiene practices, such as zero trust or multi-factor authentication, can reduce susceptibility to social engineering. In addition, education programmes for young people could help to instil good practice and foster the right mindset in the next generation

A new cyber threat landscape

Overall, AI has the potential to augment threat actor capability, enhancing the effectiveness of skilled actors, improving the unit-cost economics of attacks, and lowering the barrier to entry.

It is likely to mean that there will be more vulnerabilities available for threat actors to exploit, and that it will be easier for them to scout targets, construct campaigns, fine-tune elements of their attacks, obscure their methods and fingerprints, exfiltrate funds or data, and avoid attribution.

All these factors point to an increase in lower-level cyber losses, mitigated only by the degree to which the security industry can act as a counterbalance.

  • Initial access vectors which rely on human targets making errors of judgement (spear phishing, executive impersonation, poisoned watering holes, etc.) are likely to become significantly more effective as attacks become more targeted and fine-tuned for their recipients

  • Attacks are likely to reach broader audiences due to the lower cost of target selection and campaign design, meaning the absolute number of losses, and the potential severity of each loss, could grow

  • Industrial or operational technology attacks are likely to become more common as automation uncovers vulnerabilities

  • Embedding AI into software could create entirely new initial access vectors for threat actors to exploit, resulting in a larger attack surface, and consequently more claims

  • The industrialised production of synthetic media content (deepfakes) poses significant challenges for executive impersonation, extortion, and liability risks

Though more companies will be vulnerable to cyber attacks, and there will be more security flaws for threat actors to exploit, it is uncertain whether this will lead to an increase in highly targeted attacks on specific companies, an increase in broad attacks aimed at many companies, or some mixed outcome.

The increased number of potential targets and vulnerabilities creates the potential for growth in both focused and widespread cyber campaigns.

Overall, it is likely that the frequency, severity, and diversity of smaller scale cyber losses will grow over the next 12-24 months, followed by a plateauing as security and defensive technologies catch up to counterbalance.

Cyber catastrophes

Cyber campaigns tend to be designed with specific objectives and aim to maximise returns for the perpetrators, so most threat actors have a strong incentive to keep their actions concealed and their attacks contained.

Catastrophes in cyber occur, for the most part, because the mechanisms put in place by the perpetrators to keep the campaign under control have failed.

The exception to this is state-backed, hostile cyber activity, which includes campaigns designed to cause indiscriminate harm and destruction. It is therefore important to distinguish between manageable cyber catastrophes and state-backed, hostile cyber activity, and to consider the two types of event separately.

There is evidence to suggest that the AI-enhancement of threat actor capabilities detailed in the previous section could increase the frequency of manageable cyber catastrophes. However, as the mechanism of action is indirect, the magnitude of any increase is likely to be small.

Several factors drive the occurrence of manageable cyber catastrophes when considering AI augmentation of threat actor capabilities:

  • The frequency of manageable cyber catastrophes may increase as campaigns are designed to target a broader set of businesses, coupled with some automation of attacks

  • AI-enhancements are also likely to result in more effective control mechanisms for cyber campaigns. This would allow threat actors to develop more targeted campaigns, meaning that the overall increase in frequency for catastrophes is likely to be lower than the increase for smaller scale losses

  • There is evidence of concentration in LLM services, creating a new tier of cloud provider. This new breed of cloud providers would itself be vulnerable to failures, thereby increasing the frequency of catastrophes associated with single points of failure

The last point deserves some more context. The emergence of LLM services creates an opportunity for threat actors to monetise their attacks in novel ways.

A concentration of LLM services in turn creates fertile ground for large accumulations of loss, in other words catastrophes, akin to existing service provider failure scenarios, but with potentially different and more severe effects than is possible today.
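The accumulation point above can be illustrated with a toy Monte Carlo sketch. All numbers here are illustrative assumptions, not figures from the report: each insured depends on one service provider, and a provider outage causes a loss for every insured assigned to it. The sketch compares a diversified market of providers with a market dominated by a single provider.

```python
import random

def simulate_portfolio(n_insureds, n_providers, p_outage, trials, seed=0):
    """Return the simulated probability that at least half the portfolio
    suffers a loss in the same period (a crude proxy for an accumulation,
    or catastrophe, event)."""
    rng = random.Random(seed)
    catastrophes = 0
    for _ in range(trials):
        # Each insured depends on one provider, assigned uniformly at random.
        assignment = [rng.randrange(n_providers) for _ in range(n_insureds)]
        # Each provider independently suffers an outage with probability p_outage.
        down = {p for p in range(n_providers) if rng.random() < p_outage}
        hit = sum(1 for provider in assignment if provider in down)
        if hit >= n_insureds / 2:
            catastrophes += 1
    return catastrophes / trials

# Diversified market: 20 providers, widespread simultaneous loss is rare.
diversified = simulate_portfolio(100, 20, 0.05, 5000)
# Concentrated market: 1 provider, every outage is an accumulation event.
concentrated = simulate_portfolio(100, 1, 0.05, 5000)
print(diversified, concentrated)
```

Under these assumed parameters, the concentrated portfolio turns roughly every outage (about 5% of periods) into a portfolio-wide event, while the diversified portfolio almost never sees half its insureds hit at once; concentration converts many small independent losses into occasional correlated catastrophes.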

In conclusion, it is highly probable that the frequency of manageable cyber catastrophes will moderately increase. The risk is very unlikely to sharply escalate without massive improvements in AI effectiveness, which current industry oversight and governance make improbable; this is an area where an increased focus from regulators may be helpful.

The increases in catastrophe risk will more likely be gradual based on the steady but incremental progress in AI capabilities that can reasonably be anticipated.

State-backed, hostile cyber activity

State-backed, hostile cyber activity, which includes campaigns designed to cause indiscriminate harm and destruction, gives rise to systemic risks which require a different pricing and aggregation approach.

The effects of Gen AI on this type of systemic risk will surface in the augmentation of tooling and the automation of vulnerability discovery, both of which could enhance existing means to intentionally cause harm and destruction.

It is conceivable that the efforts to discover new exploits could concentrate on high impact targets, particularly industrial technology.

The conclusion is that cyber weapons are likely to become more effective, in both destructive power and espionage capabilities.

However, it is unclear to what extent the proliferation of advanced capabilities will increase the risk of a major catastrophe happening.

The trend is clearly upwards, but once again the human factor will come into play, and the mere existence of these capabilities might not directly translate into deployment, let alone indiscriminate deployment.
