AI Regulatory Developments - Understanding the Obligations and Risks


Thomas Singleton
Legal Trainee
LMA

AI/large language model (LLM) technology is perhaps the greatest operational change in recent industry memory. As such, it is both an opportunity to be grasped with both hands and a risk to be mitigated.

The regulatory perspective

AI already features prominently in the considerations of regulators worldwide, with approaches varying across a spectrum from those drafting new legislation to those seeking to rely on and strengthen existing legislation. 2024 is set to be the year in which significant AI legislation and/or regulation is passed and implemented.

Defining artificial intelligence systems is in itself a regulatory challenge: some jurisdictions, such as the EU, are adopting a general definition, while others, such as the UK, refer to the functional characteristics of such systems.

Governments are highly engaged in these discussions: the UK recently hosted the first AI Safety Summit at Bletchley Park, for example. The summit led to a declaration, signed by more than 25 countries and the EU, which addressed the risks of AI and emphasised the need for international collaboration on AI development and safety.

This blog provides a brief overview, covering the legal aspects only, of what to expect this year in three core jurisdictions for Lloyd’s market participants: the EU, the US and the UK.

EU

In the EU, several pieces of legislation and regulation are currently being debated, including the AI Liability Directive and an updated Product Liability Directive. 

The AI Liability Directive, which relates to fault-based claims, includes a rebuttable presumption of a causal link between a breached duty of care and harm caused by an AI system. Victims are required to show that someone was at fault for not complying with an obligation relevant to the harm, and that a causal link with the AI system’s performance is reasonably likely.

The court can then presume that this non-compliance caused the damage. While this presumption is rebuttable (for example, by proving that a different cause provoked the damage), it effectively reverses the standard burden of proof of causation.

The Product Liability Directive is being updated to bring AI systems into scope. Under this directive, a claimant has to prove that damage was suffered (which will now include loss of data), that the product was defective and that there is a causal link between the two. However, the update proposes to alleviate the burden of proving defectiveness and causation where the claimant faces excessive difficulty in proving the existence of a defect and/or the causal link between the defect and the damage, owing to the technical complexity of the product. The EU Commission has expressly identified AI systems as being covered by this presumption.

The fundamental piece of legislation is, however, the EU AI Act, which will apply to any user or provider of AI systems active in the EU, including those acting via an authorised representative. This means it will apply to insurers using AI in any form as part of providing coverage in the EU. The AI Act is vast in scope, similar to the GDPR, and its implementation will have a similarly broad impact on a wide range of services and products.

There is also a significant overlap between the AI Act and GDPR with regard to personal data. A natural consequence of this is that many companies’ data processing policies may need to be updated if they use AI to analyse any personal data. 

The ICO makes several recommendations on how best to navigate this, such as recording decision-making so that decisions made by AI systems can be explained. This is key to providing users with transparency and potential redress should they wish to challenge the basis of any outcome of an AI system.

Recording decision-making may nonetheless prove a technical challenge, as some AI developers (including the developers of ChatGPT) are not always able to explain why an AI system arrives at a particular conclusion. These technical challenges will need to be addressed when implementing AI systems in insurance processes.
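Even where a model’s internal reasoning cannot be explained, the surrounding decision can still be recorded. As a purely illustrative sketch (the function and field names below are hypothetical, not drawn from ICO guidance), an insurer might append an audit record for each AI-assisted decision:

import json
import uuid
from datetime import datetime, timezone

def record_ai_decision(model_name, model_version, inputs, output,
                       log_path="ai_decision_log.jsonl"):
    """Append an audit record for a single AI-assisted decision.

    Captures the model identity, the inputs it saw, the output it produced
    and a timestamp, so the decision can later be explained or challenged.
    """
    record = {
        "decision_id": str(uuid.uuid4()),  # unique reference for any redress request
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,
        "model_version": model_version,    # pin the exact model version that decided
        "inputs": inputs,                  # the data the model actually received
        "output": output,                  # the decision or score it returned
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Example: logging a hypothetical underwriting triage decision.
decision_id = record_ai_decision(
    model_name="underwriting-triage",
    model_version="2024.1",
    inputs={"policy_type": "property", "sum_insured": 250000},
    output={"referral": True, "reason_codes": ["HIGH_VALUE"]},
)

A record of exactly what the system was given and what it returned gives the customer a concrete basis for challenge, and the insurer a concrete basis for review, even when the model itself is a black box.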

The cornerstone of the AI Act, however, is its classification of AI systems into four risk categories: unacceptable, high risk, limited risk, and minimal or no risk. This is a risk-based approach: AI systems are categorised by the level of harm they may cause, rather than by what they are actually doing or designed to do.
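To make the tiering concrete, the sketch below models the four categories in Python. It is an illustration only, not the Act’s legal test; the example mappings reflect the insurance use cases discussed in the sections that follow, and any unknown use case deliberately defaults to high risk so it is flagged for legal review rather than under-classified:

from enum import Enum

class AIActRiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH_RISK = "permitted subject to mandatory requirements"
    LIMITED_RISK = "transparency obligations apply"
    MINIMAL_RISK = "no obligations; voluntary codes of conduct"

# Illustrative mapping of insurance-related use cases to tiers, drawn from
# the examples in this post; real classification needs legal analysis.
USE_CASE_TIERS = {
    "social scoring reused for a secondary purpose": AIActRiskTier.UNACCEPTABLE,
    "pricing and access to health and life insurance": AIActRiskTier.HIGH_RISK,
    "credit scoring of natural persons": AIActRiskTier.HIGH_RISK,
    "customer-facing chatbot": AIActRiskTier.LIMITED_RISK,
    "internal document search": AIActRiskTier.MINIMAL_RISK,
}

def tier_for(use_case: str) -> AIActRiskTier:
    # Conservative default: unknown use cases are treated as high risk.
    return USE_CASE_TIERS.get(use_case, AIActRiskTier.HIGH_RISK)

print(tier_for("customer-facing chatbot").value)  # transparency obligations apply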

Unacceptable

Unacceptable AI systems are to be prohibited from use in the EU. These include, for example, AI systems used for biometric identification in public places or, of particular relevance to insurers, those that deal with social scoring.

The maximum penalty for using AI for a prohibited practice is a fine of €30 million or 6% of global annual turnover.

Definition of Social Scoring

The Act currently defines social scoring as evaluating or classifying natural persons based on their social behaviour, socio-economic status or known or predicted personal or personality characteristics. Social scoring is prohibited in two cases:
  • any data generated by social scoring cannot lead to detrimental treatment of anyone outside of the context for which it was collected; insurers cannot use any social scoring data as an input into AI systems for secondary purposes
  • any social scoring that does lead to detrimental treatment must not be unjustified or disproportionate to the social behaviour in question; insurers may be called upon to justify that their use of social scoring is proportionate to that behaviour.
AI systems used to counter fraud are perhaps most at risk of coming under this definition and should be considered carefully.

High-risk

High-risk AI systems are those that may have a significant impact on people’s lives or rights, such as those used in health, education, law enforcement or to determine access to essential public or private services. These are permitted on the European market subject to compliance with certain mandatory requirements. 

From an insurance perspective, there are for now two types of relevant AI system in the high-risk category:
  • AI used to determine access to and price of health and life insurance
  • AI used to evaluate the credit score or creditworthiness of natural persons

Users of high-risk AI systems will have to comply with strict requirements (a simple illustrative tracking sketch appears after the list), such as:

  • conducting a risk assessment before deployment of the AI system
  • ensuring the quality and accuracy of the data used to train the AI system
  • providing clear and transparent information and explanations to users and customers about how the AI system works and what its outcomes mean
  • implementing effective human oversight and control mechanisms for the AI system
  • ensuring the security and resilience of the AI system
  • establishing a monitoring and evaluation system
  • registration.
The maximum penalty for breach of compliance obligations for high-risk systems is a fine of €20 million or 4% of global annual turnover.
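
Purely as an illustration of how these obligations might be tracked internally (the structure and field names below are hypothetical and are not taken from the Act), a simple checklist could look like this:

from dataclasses import dataclass, fields

@dataclass
class HighRiskComplianceChecklist:
    """One flag per obligation listed above; all field names are illustrative."""
    risk_assessment_done: bool = False           # pre-deployment risk assessment
    training_data_quality_checked: bool = False  # quality and accuracy of training data
    user_information_provided: bool = False      # clear, transparent explanations
    human_oversight_in_place: bool = False       # effective oversight and control
    security_resilience_tested: bool = False     # security and resilience of the system
    monitoring_established: bool = False         # monitoring and evaluation system
    system_registered: bool = False              # registration

    def outstanding(self) -> list[str]:
        """Return the obligations not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = HighRiskComplianceChecklist(risk_assessment_done=True)
print(checklist.outstanding())  # everything still to address before deployment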

As a number of insurance companies are likely to use, or in some cases already use, AI systems for the two aforementioned purposes, they will fall under the high-risk designation and therefore need to be thinking about their compliance obligations now.

Other insurance uses of AI may be added to the high-risk category if they are deemed to pose a risk of harm to health and safety, or a risk of adverse impact on fundamental rights, equivalent to or greater than those already in scope. Monitoring additions to this category will therefore be vital, and insurers should be aware of the risk.

Limited risk

Any AI systems that interact directly with humans, such as chatbots, are likely to fall into the limited-risk category, and the relevant transparency procedures must be implemented.

Minimal or no risk

For minimal or no-risk systems, there will be no obligations. The EU Commission will encourage voluntary commitment to codes of conduct.

As a result, insurers that conduct any business in the EU, whether directly or via third parties and service providers, should start thinking about what actions they need to take and about the consequences of implementing any AI system.

US

In the US, the National Association of Insurance Commissioners (NAIC) released a model bulletin in July 2023 to all US insurance commissioners, providing regulatory guidance, expectations and oversight considerations for the use of AI systems by insurers.

Rather than developing standalone AI legislation as the EU has done, the US approach is to highlight the need to use AI “in a manner that complies with and is designed to assure that the decisions made using those systems meet the requirements of all applicable federal and state laws.”

These include: 

  • The Unfair Trade Practices Act and the Unfair Claims Settlement Practices Act:
    Insurers are expected to adopt practices, including governance frameworks and risk management protocols, that are designed to assure that the use of AI systems does not result in either unfair trade practices or unfair claims settlement practices.
  • The Corporate Governance Annual Disclosure Model Act:
    The requirements of CGAD apply to elements of insurers’ corporate governance framework that address insurers’ use of AI Systems to support decisions that impact consumers.
  • The Property and Casualty Model Rating Law:
    The requirements of this law apply regardless of the methodology insurers use to develop rates, rating rules and rating plans. Developing these using AI systems must not result in excessive, inadequate or unfairly discriminatory insurance rates with respect to all forms of property or casualty insurance.
  • The Market Conduct Surveillance Model Law:
    Insurers’ conduct in the state, including any use of AI systems to make or support decisions that impact consumers, is subject to investigation, including market conduct actions.
While the recent Biden Executive Order attempts to spur innovation and address concerns that AI could exacerbate bias, displace workers or undermine national security, there are limits to what it can achieve without an act of Congress. Congress has been unable to pass any major tech legislation for several years, and this deadlock is not expected to ease any time soon. It is broadly acknowledged that further AI regulation will be debated at the federal level for the foreseeable future, though passage may be unlikely.

Other developments include the 2022 Blueprint for an AI Bill of Rights. At state level, 26 states introduced AI bills in 2023, and more are expected.

As such, AI regulation in the US is currently a patchwork of federal and state law and requires careful navigation.

UK

The UK is proposing a ‘contextual, sector-based regulatory framework’, anchored in its existing, diffuse network of regulations and laws. This is part of an effort to differentiate itself, particularly from the EU, and to position itself as a leader in the development of innovative AI technology.

In place of any specific legislation on AI, the UK has published a white paper outlining the government’s approach to AI regulation. It establishes a framework of five cross-sector principles to guide and inform the responsible development and use of AI:

  • safety, security and robustness
  • appropriate transparency and explainability
  • fairness
  • accountability and governance
  • contestability and redress.

The UK Government is putting the onus on existing regulators to determine the scope of AI within the context of its application in an organisation’s provision of services or products. The white paper calls for any AI regulatory framework in the UK to adhere to a common set of principles: 

  • pro-innovation
  • proportionate 
  • trustworthy 
  • adaptable 
  • clear 
  • collaborative.

For insurers, the key point is that the UK AI framework is in development and likely to result in more regulatory oversight, albeit to a less prescriptive degree than the EU AI Act. 

Other pieces of legislation that may impact the application of AI in insurance services include, but are not limited to:

  • Equality Act 2010 – Insurers will already need to be able to demonstrate that the information used in AI-driven pricing is reliable, that it is reasonable to use it and that its use falls within the limited available exceptions. In particular, organisations will need to take care in how any protected characteristics are applied.
  • GDPR – Elements of the GDPR that could apply to AI systems implemented in insurance include rules on consent to use and process data, the lawful basis of processing, data retention and minimisation, and rights relating to automated decision-making and behavioural profiling, particularly when so-called “big data” analytical methods are used.

In conclusion

AI is a technology to be managed: it presents important ethical dilemmas in data usage, potential biases in AI algorithms that may impact underwriting decisions, and an imperative for transparency in AI-driven processes to sustain customer trust.

Crucially for insurers, governments, legislators and regulators are taking an interest in the technology and cornerstone legislation is expected this year in core jurisdictions. Other economies, such as China and Brazil, are expected to follow suit, particularly in protecting individuals from misuse of their data and the unsafe application of machine learning systems.

GDPR created a global trend to implement data regulation and the EU AI Act has been similarly billed as a potentially trendsetting piece of legislation. While the Act has yet to be passed, it seems this process has already begun, with Brazil’s AI legislation expected to be broadly modelled on the Act.

AI represents a massive opportunity for the insurance industry, but it is not without its risks. As with many of the products they are designed to regulate, AI regulations are still in their early stages. Combined with the various approaches to regulation taken in different jurisdictions, insurers will need to keep a close eye on these developments should they wish to utilise AI to its full potential. For any questions, please contact thomas.singleton@lmalloyds.com or arabella.ramage@lmalloyds.com.