The European Insurance and Occupational Pensions Authority (EIOPA) has finalised its opinion on AI Governance and Risk Management. EIOPA’s guidance focuses on AI applications in the insurance sector that are not classified as high-risk or prohibited by the EU AI Act. Although the opinion is primarily relevant to EU insurance firms, firms based elsewhere and in other sectors can draw parallels as to how supervisors may use existing regulatory requirements to oversee AI risks.
Key principles in EIOPA’s opinion on AI governance and risk management
In its opinion, EIOPA suggests key principles for managing AI. In doing so, it draws on existing regulation, such as the Solvency II Directive, the Insurance Distribution Directive and the Digital Operational Resilience Act (DORA).
According to EIOPA, firms should:
- Be proportionate: Assess the risks of each AI system and apply adequate governance and risk management measures for its responsible use.
- Use risk management systems: Develop risk-based and proportionate governance and risk management systems and integrate them into broader governance and risk management frameworks, including for AI systems developed by third-party service providers.
- Consider fairness and ethics: Adopt a customer-centric approach to the use of AI systems throughout their entire lifecycle and across the value chain. Make reasonable efforts to remove biases in data and regularly monitor the outcomes of AI systems.
- Apply data governance: Implement a data governance policy for AI systems in compliance with applicable insurance and data protection legislation and ensure that data used in AI systems is complete, accurate and appropriate.
- Keep records: Maintain appropriate records of training and testing data and of modelling methodologies to enable their reproducibility and traceability.
- Be able to explain outcomes: Meaningfully explain the outcomes of AI systems, and apply stronger guardrails and increased human oversight where the complexity of an AI system hinders full explainability. Adapt explanations to the needs of different recipients.
- Be transparent: Inform customers when they are interacting with an AI system.
- Use human oversight: Put in place effective internal controls during the entire lifecycle of an AI system, define roles and responsibilities in policy documents and provide sufficient training to ensure that human oversight is effective.
- Promote accuracy, robustness and cybersecurity: Define the levels of accuracy, robustness and cybersecurity that AI systems should meet, use metrics to measure performance and have systems to ensure ICT business continuity.
Next steps
In two years, EIOPA will look into the supervisory practices of competent authorities with a view to evaluating supervisory convergence. EIOPA also envisages developing more detailed analysis of specific AI systems or of issues arising from the use of AI systems in insurance, and will provide further guidance as appropriate.