EIOPA survey on Generative AI shows swift but cautious adoption among Europe’s insurers
Generative AI (Gen AI) has introduced new and specific cybersecurity vulnerabilities to the insurance sector. Cybersecurity risks are ranked as the second‑highest concern for insurance undertakings, surpassed only by “hallucinations”.
The report highlights the following key points regarding cybersecurity vulnerabilities:
𝗦𝗽𝗲𝗰𝗶𝗳𝗶𝗰 𝗖𝘆𝗯𝗲𝗿 𝗧𝗵𝗿𝗲𝗮𝘁𝘀
The use of Gen AI increases an organization's exposure to several unique types of cyberattacks:
• Prompt Injection: This involves users providing malicious instructions designed to bypass a model's safety guardrails or trick it into unintended actions, such as revealing sensitive data.
• Adversarial Inputs and Jailbreaks: These are techniques used to manipulate AI systems into producing prohibited content or circumventing security controls.
• Data Leakage: There is a primary concern regarding the potential for sensitive customer information to be leaked or accessed without authorization.
• Fraudulent Exploitation: Fraudsters may use Gen AI to create convincing deepfakes or falsified documents to commit scams against insurers.
𝗩𝗮𝗹𝘂𝗲 𝗖𝗵𝗮𝗶𝗻 𝗮𝗻𝗱 𝗧𝗵𝗶𝗿𝗱‑𝗣𝗮𝗿𝘁𝘆 𝗥𝗶𝘀𝗸𝘀
The report identifies a significant value chain vulnerability due to the sector's high reliance on a small number of third‑party providers. Because many Gen AI tools are provided by the same entities that offer cloud computing services, a cyberattack on a single major provider could impact the operations of many insurance undertakings simultaneously.
Mitigation and Risk Management
To address these vulnerabilities, insurance undertakings are implementing several technical and organizational measures:
• 𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗧𝗲𝘀𝘁𝗶𝗻𝗴: Using red team prompts and adversarial testing to simulate attacks and evaluate the system's defenses.
• 𝗧𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝗦𝗮𝗳𝗲𝗴𝘂𝗮𝗿𝗱𝘀: Implementing stricter access controls, sandboxing (strict testing environments), and the encryption of prompts and outputs to prevent data leakage.
• 𝗢𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗚𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹𝘀: Establishing built‑in limitations such as output controls to prevent the generation of inappropriate or biased information.
• 𝗣𝗼𝗹𝗶𝗰𝘆 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁: Nearly half (49%) of insurers have developed dedicated AI policies, and many have established internal guidelines for staff to prevent the use of sensitive data in public Gen AI tools.
• 𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁: Insurers are looking toward the Digital Operational Resilience Act (DORA) for ICT risk management frameworks and the AI Act to provide reassurances regarding the security and reliability of third‑party systems.