Regulators have recently shown increasing interest in the area of artificial intelligence (AI). In this blog post, I summarise regulatory papers published in the US and the Netherlands last year. In the US, the Casualty Actuarial and Statistical Task Force (CASTF) published a paper in May 2019 aimed at identifying best practices for reviewing predictive models and analytics filed by insurers with regulators to justify rates, and at providing state guidance for the review of rate filings based on predictive models. In the Netherlands, the Dutch supervisors, the Authority for the Financial Markets (AFM) and De Nederlandsche Bank (DNB), published two articles in July 2019 discussing the use of AI in the Dutch financial sector and specifically among Dutch insurers.
Regulatory review of predictive models in the US
The CASTF paper begins by defining what a best practice is and discusses whether regulators need best practices to review predictive models. It concludes that best practices will aid regulatory reviewers by raising their level of model understanding. With regard to scorecard models and the model algorithm, for example, filings often lack sufficient support for the relative weights, parameter values or scores of each variable; best practices can potentially help address this problem. The paper notes that best practices are not intended to create standards for filings that include predictive models. Rather, they will assist regulators in identifying the model elements they should be looking for in a filing. This should aid the regulator in understanding why the company believes that the filed predictive model improves the company’s rating plan, making that rating plan fairer to all consumers in the marketplace.
The focus of the paper is on generalised linear models (GLMs) used to create private passenger automobile and home insurance rating plans. It is noted, however, that the knowledge needed to review predictive models and the guidance provided may be transferable when the review involves GLMs applied to other lines of business. The guidance might also be useful when starting to review different types of predictive models.
The paper goes on to provide best practices (or “guidance”) for the regulatory review of predictive models. It advises that the regulator’s review of predictive models should:
- Ensure that the factors developed based on the model produce rates that are not excessive, inadequate or unfairly discriminatory.
- Thoroughly review all aspects of the model including the source data, assumptions, adjustments, variables and resulting output.
- Evaluate how the model interacts with and improves the rating plan.
- Enable competition and innovation to promote the growth, financial stability and efficiency of the insurance marketplace.
Additional details are provided to give guidance on how to ensure each of these points is met.
The paper identifies the information a regulator might need to review a predictive model used by an insurer to support a filed insurance rating plan. It is a lengthy list, though it is noted that it is not meant to be exhaustive. The information required is rated by level of importance to the regulator’s review. It includes information on:
- Model input (available data sources; adjustments to data; data organisation; data in the sub-models).
- Building the model (narratives on how the model was built; information on predictor variables; adjustments to the data; model validation and goodness-of-fit measures; modeller software; and an analysis comparing the old model and new model).
- The filed rating plan output from the model (general impact of the model on the rating algorithm; relevance of the rating variables and their relationship to the risk of loss; comparison of model outputs to current and selected rating factors; issues with data, credibility and granularity; definitions of the rating variables; data supporting the model output; impacts on consumers; and information on how the model is translated to a rating plan).
Dutch regulator’s view on AI in the insurance sector
The first article published by the Dutch supervisors is a discussion paper written by DNB, containing a preliminary view on the regulation of AI in the financial sector and specifically among Dutch insurers. In this paper, DNB invites market participants to comment on the following possible principles for the use of AI in the financial sector: soundness, accountability, fairness, ethics, skills and transparency (SAFEST).
The second article, written jointly by the AFM and DNB, is an exploration of AI usage among Dutch insurers. The supervisors state that the most significant current applications of AI at insurance companies are machine learning methods applied to fraud detection and underwriting processes. Points of attention for using AI are divided into three categories:
- Embedding AI in an organisation—how AI can be embedded in the governance structure and policy of insurers.
- Technical aspects of AI—what the key considerations are regarding the development and application of AI.
- AI and the consumer—what the key considerations are regarding the duty of care when applying AI.
The article also discusses the effect of AI on solidarity—how AI can increase solidarity and trust in insurance companies as a result of fairer pricing. On the other hand, personal risk assessments can result in uninsurability and exclusion of high-risk individuals, especially in the segments of disability insurance and term life insurance.
Further details on these articles by Dutch supervisors can be found in briefing notes published by our colleagues.
Future regulatory publications?
We expect that these papers published in the US and the Netherlands are just the beginning of regulatory publications on this topic. As interest in data science, and in particular in AI and predictive modelling, continues to grow, we expect to see publications released by regulatory bodies in many different regions in the near future.