Tag Archives: AI

Regulatory interest in AI: A summary of papers published in the US and the Netherlands

Regulators are showing increased interest in artificial intelligence (AI). In this blog post, I provide a summary of regulatory papers published in the US and the Netherlands last year. In the US, the Casualty Actuarial and Statistical Task Force (CASTF) published a paper in May 2019 aimed at identifying best practices for reviewing the predictive models and analytics that insurers file with regulators to justify rates, and at providing state guidance for the review of rate filings based on predictive models. In the Netherlands, the Dutch supervisors, the Dutch Authority for the Financial Markets (AFM) and De Nederlandsche Bank (DNB), published two articles in July 2019 discussing the use of AI in the Dutch financial sector and specifically among Dutch insurers.

Regulatory review of predictive models in the US

The CASTF paper begins by defining what a best practice is and discussing whether regulators need best practices to review predictive models. It concludes that best practices will aid regulatory reviewers by raising their level of model understanding. For example, with scorecard models and the model algorithm, filings often do not sufficiently support the relative weight, parameter values or scores of each variable; best practices can help address this problem. The paper notes that best practices are not intended to create standards for filings that include predictive models. Rather, they will assist regulators in identifying the model elements they should look for in a filing. This should help the regulator understand why the company believes that the filed predictive model improves its rating plan, making that plan fairer to all consumers in the marketplace.

The focus of the paper is on generalised linear models (GLMs) used to create private passenger automobile and home insurance rating plans. It notes, however, that the knowledge needed to review predictive models, and the guidance provided, may be transferable when the review involves GLMs applied to other lines of business. The guidance might also be useful as a starting point when reviewing other types of predictive models.
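For readers less familiar with this model class, the sketch below shows a minimal claim-frequency GLM of the kind the paper discusses, using Python's statsmodels library. It is purely illustrative, not taken from the paper: the data, variable names and factor levels are invented, and real rating plans are built on far richer data.

```python
# A minimal, hypothetical claim-frequency GLM for auto insurance rating.
# All data, variable names and factor levels here are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical policy-level data: claim counts, exposure in car-years, and
# two rating variables (driver age band and territory).
policies = pd.DataFrame({
    "claims":    [0, 1, 0, 2, 0, 1, 0, 0, 1, 0],
    "exposure":  [1.0, 0.5, 1.0, 1.0, 0.8, 1.0, 0.3, 1.0, 1.0, 0.6],
    "age_band":  ["18-25", "26-40", "41-60", "18-25", "26-40",
                  "41-60", "18-25", "26-40", "41-60", "18-25"],
    "territory": ["urban", "rural", "urban", "urban", "rural",
                  "urban", "rural", "urban", "rural", "urban"],
})

# Poisson GLM with a log link; log(exposure) enters as an offset so the
# model predicts claim frequency per car-year.
freq_model = smf.glm(
    "claims ~ age_band + territory",
    data=policies,
    family=sm.families.Poisson(),
    offset=np.log(policies["exposure"]),
).fit()

# Exponentiated coefficients are the multiplicative relativities (rating
# factors) that a reviewer would compare against the filed rating plan.
print(np.exp(freq_model.params))
```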

The paper goes on to provide best practices (or “guidance”) for the regulatory review of predictive models. It advises that the regulator’s review of predictive models should:

  • Ensure that the factors developed based on the model produce rates that are not excessive, inadequate or unfairly discriminatory.
  • Thoroughly review all aspects of the model including the source data, assumptions, adjustments, variables and resulting output.
  • Evaluate how the model interacts with and improves the rating plan.
  • Enable competition and innovation to promote the growth, financial stability and efficiency of the insurance marketplace.

Additional details are provided to give guidance on how to ensure each of these points is met.

The paper identifies the information a regulator might need to review a predictive model used by an insurer to support a filed insurance rating plan. It is a lengthy list, though it is noted that it is not meant to be exhaustive. The information required is rated by level of importance to the regulator’s review. It includes information on:

  • Model input (available data sources; adjustments to data; data organisation; data in the sub-models).
  • Building the model (narratives on how the model was built; information on predictor variables; adjustments to the data; model validation and goodness-of-fit measures; modelling software; and an analysis comparing the old and new models, of the kind sketched after this list).
  • The filed rating plan output from the model (general impact of the model on the rating algorithm; relevance of the rating variables and their relationship to the risk of loss; comparison of model outputs to current and selected rating factors; issues with data, credibility and granularity; definitions of the rating variables; data supporting the model output; impacts on consumers; and information on how the model is translated to a rating plan).
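To make the goodness-of-fit and old-versus-new comparison items concrete, here is a hedged sketch, continuing from the hypothetical policies data in the earlier GLM example. The split between the "old" and "new" variable sets is invented; in practice a filer would support such a comparison with many more measures.

```python
# Hypothetical old-vs-new model comparison, reusing the 'policies' DataFrame
# and imports from the earlier sketch. Lower AIC and deviance suggest the
# added rating variable improves fit: one piece of the evidence a reviewer
# would weigh when a filer claims a new model improves the rating plan.
old_model = smf.glm("claims ~ age_band", data=policies,
                    family=sm.families.Poisson(),
                    offset=np.log(policies["exposure"])).fit()
new_model = smf.glm("claims ~ age_band + territory", data=policies,
                    family=sm.families.Poisson(),
                    offset=np.log(policies["exposure"])).fit()

for name, m in [("old", old_model), ("new", new_model)]:
    print(f"{name} model: AIC = {m.aic:.1f}, deviance = {m.deviance:.1f}")
```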

How is automation changing the way insurance companies work?

Job automation has shaped cultures and economies since before the agricultural revolution, through successive industrial revolutions and into the current digital age. The insurance industry is not immune: automation and innovation continue to drive significant change.

Traditional automation has been transformative in handling simple, repeatable tasks in back-end processes. Robotic process automation (RPA), combined with artificial intelligence and machine learning capabilities, can be, and is being, used to automate high-volume, high-frequency tasks that have traditionally required human intervention.

In this paper, Milliman’s George Barrett, Claire Booth and Tanya Hayward discuss how automation has affected—and will affect—the nature of insurance companies’ processes and the nature of their clients’ needs. They cover numerous examples of how automation and RPA are transforming the way insurance companies operate and discuss the impact of job automation on insurers.

AI risks: Model explainability

What is the issue?

As the use of artificial intelligence (AI) within financial institutions becomes more widespread, new challenges arise in understanding the output of AI analysis. A recently published Risk.net article discusses the challenges of interpreting these new types of models.

What does the article say?

The article “No silver bullet for AI explainability” points to the increased use of neural networks to automate tasks in areas such as options hedging and credit card lending within the banking sector. In such models the interactions between layers often cannot be traced, making explainability a real issue. Similar techniques are increasingly being used within the insurance industry; for example, insurers are applying machine learning to pricing and underwriting, and to obtain in-depth insights from their data sets.

The ability to explain AI output is important for a number of reasons: it ensures that models are reliable, transparent and understood; that any biases can be identified; and that regulators are confident the business has the capability to understand the approach.

A paper by Caenazzo and Ponomarev, “Interpretability of neural networks: a credit card default model example,” finds that the popular approaches to explaining neural networks each have their own strengths and offer different insights. No single technique is therefore superior, and a combination of techniques can yield the most information.
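As a concrete illustration of one such technique, the sketch below applies permutation feature importance to a small neural network using scikit-learn. It is not drawn from the paper itself: the network, data and parameters are invented stand-ins for a credit-default setting, and, consistent with the paper's finding, in practice one would combine several techniques rather than rely on this one alone.

```python
# Hypothetical sketch of one post-hoc interpretability technique, permutation
# feature importance, applied to a small neural network on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic binary "default" data: 5 features, only some of them informative.
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500,
                    random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the drop
# in held-out accuracy. A large drop means the network relies on that feature,
# even though its internal layer interactions cannot be traced directly.
result = permutation_importance(net, X_test, y_test, n_repeats=20,
                                random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```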

Why is this relevant to the risk team?

Risk teams need to develop new expertise to ensure that the use of such models does not introduce excessive risk into the business model. They are likely to be asked to give risk opinions on whether an AI approach should be adopted, and to validate on an ongoing basis whether the risk posed by the use of such tools remains within risk appetite. Staying abreast of the interpretation techniques used by the business, and more widely by researchers, will help ensure that the risk team can assess the extent to which model results are robust, traceable and defensible. In turn, this should help avoid undue risks of reputational damage, errors, mispricing or inappropriate business decisions.

Even for firms not currently employing AI techniques, the issues described here could rapidly become relevant once adopting AI-based tools becomes a business and strategic focus. It is therefore key that risk teams start considering now how to identify, manage and mitigate AI-related risks, particularly given that assessing AI risks typically requires a different skill set from the knowledge base traditionally found within risk teams.

Relevant links

Emerging risks and opportunities in insurance: Technology and innovation

Use of external data in insurance

Emerging risks in insurance: Job automation

The new ABCs: AI, Blockchain, and the Cloud

Insurance customers now expect personalized, agile, and on-demand delivery from carriers. Insurers must keep pace with technological advances and implement them in solutions that address these expectations. In her Best’s Review article “Mind your ABCs,” Milliman’s Pat Renzi explores why insurance companies must center their strategic initiatives on emerging technologies like artificial intelligence (AI), blockchain, and the cloud. She also explains how partnerships that bring together diverse experts will achieve faster, smarter, and more successful disruption.

Obtaining mortgage loan information with artificial intelligence

Applying for a mortgage loan requires a lot of information to make an informed decision, and even in this digital age the process remains complex. Can artificial intelligence (AI) technology that makes recommendations based on research from consumer organizations and federal agencies help? Milliman consultant Madeline Johnson looks at the question in her article “Couch surfing for mortgage loans.”