Artificial Intelligence (AI) is transforming the insurance value chain by enabling more personalised services, generating valuable insights for risk management and lowering costs. As AI technologies mature and become widely adopted, the need for regulation and oversight grows.
In response to these challenges, the European Union has adopted the AI-Act, which aims to create a harmonised regulatory framework for AI in the EU. The objective of the AI-Act is to ensure that AI is trustworthy, human-centric and aligned with the fundamental rights and values of the EU. The AI-Act introduces a risk-based approach to AI regulation, where different levels of obligations apply to different categories of AI systems, depending on their potential impact on human safety, rights and freedoms. The AI-Act also establishes a coordinated governance system, where national authorities, the European AI Board and the Commission work together to ensure the effective implementation and enforcement of the AI rules. The EU AI-Act entered into force across all 27 EU Member States on 1 August 2024, and the enforcement of the majority of its provisions will commence on 2 August 2026.
Risk-based approach
The risk-based approach of the AI-Act distinguishes between four risk classes of AI systems: prohibited, high-risk, limited-risk and minimal-risk (a simple mapping of these classes to insurance use cases is sketched after this list):
- Prohibited AI systems are those considered contrary to EU values and principles, such as those that manipulate human behaviour, exploit vulnerabilities or enable social scoring.
- High-risk AI systems are those that pose significant risks to the health, safety or fundamental rights of people or the environment, such as those used for critical infrastructure, education, employment, law enforcement or biometric identification.
- Limited-risk AI systems are those subject to specific transparency obligations because they may affect the rights or interests of users or other persons, such as systems that generate or manipulate content, provide chatbot services or perform emotion recognition.
- Minimal-risk AI systems are those that are unlikely to cause any adverse impact on people or the environment, such as those used for video games, spam filters or personal assistants.
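To make this taxonomy concrete, the following minimal Python sketch maps a few hypothetical insurance use-case labels to the four risk classes. The `RiskClass` enum, the `USE_CASE_RISK` mapping and the `classify` helper are illustrative names, not part of the AI-Act; the actual classification of any given system requires a legal assessment.

```python
from enum import Enum

class RiskClass(Enum):
    """The four risk classes defined by the AI-Act."""
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical mapping of insurance AI use cases to risk classes,
# following the classification logic described above. The real class
# of a system depends on a legal assessment, not a lookup table.
USE_CASE_RISK = {
    "life_health_pricing_underwriting": RiskClass.HIGH,
    "customer_service_chatbot": RiskClass.LIMITED,
    "spam_filter": RiskClass.MINIMAL,
    "social_scoring": RiskClass.PROHIBITED,
}

def classify(use_case: str) -> RiskClass:
    """Look up the risk class of a known use case; unknown cases need review."""
    try:
        return USE_CASE_RISK[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case '{use_case}': requires legal review")

print(classify("customer_service_chatbot"))  # RiskClass.LIMITED
```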
Impact on insurance
The AI-Act may have significant implications for the insurance industry, because AI systems related to pricing and underwriting in health and life insurance are considered high-risk AI applications. This implies that the providers and users of these systems need to comply with new obligations and responsibilities, such as strict requirements on data quality and fairness, technical documentation, human oversight, transparency and accuracy, as well as the need to register the system in the EU database for high-risk AI systems.
Insurers that use limited-risk AI systems, e.g., for customer service, marketing or product development, or in product lines other than life and health, will have to provide clear information and opt-out options to their users.
Apart from the AI-Act, it is worth mentioning that some insurance markets, such as the Netherlands (through the Association of Insurers), have ethical frameworks based on self-regulation with which insurers must comply. These frameworks are complementary to the AI-Act.
AI definition and its impact on actuarial models
According to the AI-Act, an “AI system” is defined as: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The AI-Act mentions techniques and approaches such as machine learning and logic- and knowledge-based approaches to illustrate the key characteristic of AI systems: the capability to derive models and/or algorithms from inputs and data using inference. Consequently, the definition of AI is rather broad and may encompass already existing techniques, as long as these techniques are embedded in a system that has some degree of independence from human involvement (level of autonomy) and/or self-learning capabilities allowing the system to change while in use (adaptiveness).
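As an illustration of how these definitional elements could be screened in practice, the following sketch encodes them as a hypothetical checklist. `ModelProfile` and `may_be_ai_system` are assumed helper names, not terms from the regulation; a positive result only flags a model for proper legal assessment.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    """Characteristics relevant to the AI-Act definition of an AI system."""
    machine_based: bool   # runs as software rather than a manual calculation
    has_autonomy: bool    # operates with some independence from human involvement
    adaptive: bool        # may change behaviour after deployment (self-learning)
    infers_outputs: bool  # derives predictions/decisions from input data

def may_be_ai_system(m: ModelProfile) -> bool:
    """Rough screen against the definition quoted above.

    Autonomy and inference are core elements; adaptiveness 'may' be
    present, so it is not treated as a hard criterion here. A True
    result is a flag for legal review, not a legal conclusion.
    """
    return m.machine_based and m.has_autonomy and m.infers_outputs

# Example: a traditional GLM embedded in an automated pricing pipeline
glm_in_pipeline = ModelProfile(machine_based=True, has_autonomy=True,
                               adaptive=False, infers_outputs=True)
print(may_be_ai_system(glm_in_pipeline))  # True: flag for assessment
```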
The implication of this broad definition is that more traditional actuarial models may fall within the definition of an AI system and, depending on their use and impact, within the definition of a high-risk AI system.
Actuaries who develop, use or audit such models should be aware of the potential legal and ethical implications of the AI-Act and ensure that their models comply with this new regulation.
In addition, AI is becoming an increasingly important tool for actuaries, as it can offer new insights, improve efficiency and enhance decision-making. AI can help actuaries to analyse large and complex (unstructured) data sets, identify patterns and trends, automate tasks and generate predictions and scenarios. Therefore, not only data scientists but also actuarial departments need to keep up with the evolving regulatory and professional frameworks around AI.
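For instance, the kind of machine learning workflow actuaries increasingly use might resemble the following sketch, which fits a gradient-boosting model on synthetic policy data with scikit-learn. The data, features and target are invented purely for illustration; if such a model were used for pricing or underwriting in life or health insurance, it could fall into the high-risk class described above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic policy data: age, sum insured, region code -> claim frequency.
# Purely illustrative; real actuarial work would use governed, validated data.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(18, 80, 5000),          # policyholder age
    rng.uniform(10_000, 500_000, 5000),  # sum insured
    rng.integers(0, 10, 5000),           # region code
])
y = 0.02 + 0.0005 * X[:, 0] + rng.normal(0, 0.01, 5000)  # claim frequency

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"Hold-out R^2: {model.score(X_test, y_test):.2f}")
```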
Time to act
To comply in time with the AI-Act, insurers will need to start assessing the risk level of their existing and planned AI applications, implement appropriate measures to ensure compliance and monitor and review their AI performance and impact on a regular basis. Noncompliance can lead to substantial fines and sanctions, up to EUR 35 million or 7% of worldwide annual turnover for the most serious infringements.
For this reason, it is important that both data scientists and actuaries are aware of the requirements of this new regulation and that a process of model governance, risk management and model validation is in place.
Potential first steps are (see the inventory sketch after this list):
- Listing all AI applications that are currently in use (both built in-house and externally bought) within your company and classifying them according to the different risk levels.
- Setting up a multidisciplinary AI governance board to oversee AI strategy, policy and compliance.
- Implementing risk management standards to:
- Ensure AI systems undergo conformity assessments and are well documented
- Develop policies and procedures across the AI life cycle for the use and development of AI systems
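As a starting point for the first step, the AI inventory could be kept as simple structured records, as in this hypothetical sketch. `AIInventoryEntry` and the example entries are assumptions for illustration, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIInventoryEntry:
    """One row in a hypothetical company-wide AI application register."""
    name: str
    owner: str
    source: str                  # "in-house" or "external vendor"
    risk_class: str              # per the AI-Act risk-based approach
    conformity_assessed: bool = False
    last_reviewed: date | None = None

register = [
    AIInventoryEntry("health_pricing_gbm", "Actuarial", "in-house", "high-risk"),
    AIInventoryEntry("service_chatbot", "Customer Ops", "external vendor", "limited-risk"),
]

# Flag high-risk systems still awaiting a conformity assessment.
backlog = [e.name for e in register
           if e.risk_class == "high-risk" and not e.conformity_assessed]
print(backlog)  # ['health_pricing_gbm']
```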