Tag Archives: artificial intelligence

Making better use of available internal data

Data science is the term for the broad array of activities used to gain insight and extract value from existing data sources, encompassing techniques such as data analytics, predictive analytics, machine learning, data mining and artificial intelligence. These techniques enable the extraction of value from increasingly diverse sources of data.

The cost of storing data has fallen exponentially for decades, and many companies have begun storing large volumes of potentially valuable internal data. Computing speeds have increased exponentially over the same period, and data analysis software has steadily improved, making it feasible for companies to analyse and gain insight from these larger data sets.

In this briefing note, Milliman’s Donal McGinley, Bridget MacDonnell and Eamon Comerford provide a high-level overview of how insurers can make better use of internal data to gain insight and drive competitive advantage.

Regulatory interest in AI: A summary of papers published in the US and the Netherlands

Regulators have recently shown increased interest in artificial intelligence (AI). In this blog post, I will provide a summary of regulatory papers published in the US and the Netherlands last year. In the US, the Casualty Actuarial and Statistical Task Force (CASTF) published a paper in May 2019 aimed at identifying best practices for reviewing the predictive models and analytics that insurers file with regulators to justify rates, and at providing state guidance for the review of rate filings based on predictive models. In the Netherlands, the Dutch supervisors, the Authority for the Financial Markets (AFM) and De Nederlandsche Bank (DNB), published two articles in July 2019 discussing the use of AI in the Dutch financial sector and specifically among Dutch insurers.

Regulatory review of predictive models in the US

The CASTF paper begins by defining what a best practice is and discussing whether regulators need best practices to review predictive models. It concludes that best practices will aid regulatory reviewers by raising their level of model understanding. For example, with scorecard models and model algorithms, filings often lack sufficient support for the relative weights, parameter values or scores of each variable; best practices can help address this problem. The paper notes that best practices are not intended to create standards for filings that include predictive models. Rather, they will assist regulators in identifying the model elements they should look for in a filing. This should help the regulator understand why the company believes that the filed predictive model improves the company's rating plan, making that rating plan fairer to all consumers in the marketplace.

The paper focuses on generalised linear models (GLMs) used to create private passenger automobile and home insurance rating plans. It notes, however, that the knowledge needed to review predictive models, and the guidance provided, may be transferable to reviews involving GLMs applied to other lines of business. The guidance might also be useful as a starting point when reviewing other types of predictive models.
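
For readers unfamiliar with the models under discussion, below is a minimal sketch of a claim-frequency GLM of the kind used in personal lines rating, written in Python with the statsmodels library. All data, variable names and factor levels are hypothetical; this is not the paper's example, only an illustration of the model form.

```python
# Minimal sketch of a private passenger frequency GLM of the kind the
# paper discusses. All data and rating variables here are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

policies = pd.DataFrame({
    "claim_count": [0, 1, 1, 2, 0, 1, 0, 1],
    "exposure": [1.0, 0.5, 1.0, 1.0, 0.8, 1.0, 0.5, 1.0],
    "driver_age": ["18-25", "26-40", "41-60", "18-25",
                   "41-60", "26-40", "18-25", "41-60"],
    "territory": ["urban", "urban", "rural", "urban",
                  "rural", "rural", "urban", "rural"],
})

# Poisson GLM with a log link; log(exposure) enters as an offset so the
# model estimates claim frequency per unit of exposure.
model = smf.glm(
    "claim_count ~ C(driver_age) + C(territory)",
    data=policies,
    family=sm.families.Poisson(),
    offset=np.log(policies["exposure"]),
).fit()

# Exponentiated coefficients are multiplicative relativities, i.e. the
# "factors developed based on the model" that a regulator would review.
print(np.exp(model.params))
```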

The paper goes on to provide best practices (or “guidance”) for the regulatory review of predictive models. It advises that the regulator’s review of predictive models should:

  • Ensure that the factors developed based on the model produce rates that are not excessive, inadequate or unfairly discriminatory.
  • Thoroughly review all aspects of the model including the source data, assumptions, adjustments, variables and resulting output.
  • Evaluate how the model interacts with and improves the rating plan.
  • Enable competition and innovation to promote the growth, financial stability and efficiency of the insurance marketplace.

Additional details are provided to give guidance on how to ensure each of these points is met.

The paper identifies the information a regulator might need to review a predictive model used by an insurer to support a filed insurance rating plan. The list is lengthy, though the paper notes it is not meant to be exhaustive. The information required is rated by its level of importance to the regulator's review. It includes information on:

  • Model input (available data sources; adjustments to data; data organisation; data in the sub-models).
  • Building the model (narratives on how the model was built; information on predictor variables; adjustments to the data; model validation and goodness-of-fit measures; modelling software; and an analysis comparing the old model and the new model; a minimal sketch of such a comparison follows this list).
  • The filed rating plan output from the model (general impact of the model on the rating algorithm; relevance of the rating variables and their relationship to the risk of loss; comparison of model outputs to current and selected rating factors; issues with data, credibility and granularity; definitions of the rating variables; data supporting the model output; impacts on consumers; and information on how the model is translated to a rating plan).
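
Purely as an illustration of the old-versus-new comparison and goodness-of-fit checks mentioned in the list above, the sketch below fits two candidate frequency GLMs on simulated data and compares them on AIC and holdout deviance. Every name and number is invented for the example.

```python
# Illustrative old-versus-new model comparison on simulated data.
# All variable names, data and models are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.special import xlogy

rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "driver_age": rng.choice(["18-25", "26-40", "41-60"], n),
    "territory": rng.choice(["urban", "rural"], n),
})
# Simulate claim counts with a known structure for the example.
lam = 0.1 * (1 + (df["driver_age"] == "18-25")
             + 0.5 * (df["territory"] == "urban"))
df["claim_count"] = rng.poisson(lam)

train, test = df.iloc[:800], df.iloc[800:]

# The "old" model uses age only; the "new" filing adds territory.
old = smf.glm("claim_count ~ C(driver_age)", data=train,
              family=sm.families.Poisson()).fit()
new = smf.glm("claim_count ~ C(driver_age) + C(territory)", data=train,
              family=sm.families.Poisson()).fit()

# In-sample goodness of fit: lower AIC suggests a better trade-off.
print("AIC old:", round(old.aic, 1), " new:", round(new.aic, 1))

def poisson_deviance(y, mu):
    """Poisson deviance; xlogy handles the y == 0 terms cleanly."""
    return 2.0 * np.sum(xlogy(y, y / mu) - (y - mu))

# Out-of-sample check on the holdout sample.
for name, model in [("old", old), ("new", new)]:
    mu = model.predict(test).to_numpy()
    y = test["claim_count"].to_numpy()
    print(name, "holdout deviance:", round(poisson_deviance(y, mu), 1))
```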

How is automation changing the way insurance companies work?

Job automation has shaped cultures and economies since before the agricultural revolution, through successive industrial revolutions and into the current digital age. The insurance industry is not immune: automation and innovation continue to create scope for significant change.

Traditional automation has been transformative in handling simple, repeatable tasks in back-end processes. Robotic process automation (RPA), combined with artificial intelligence and machine learning capabilities, can be, and is being, used to automate high-volume, high-frequency tasks that have traditionally required human intervention.
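
As a toy illustration of that pattern, the sketch below pairs a deterministic rule (the kind of step traditional RPA handles) with a small machine learning classifier that routes claim descriptions. The data, categories and threshold are invented; a real deployment would sit on dedicated RPA tooling rather than a short script.

```python
# Toy illustration: a rules step plus an ML classifier triaging claim
# descriptions. All data, categories and thresholds are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled claim history used to train the classifier.
descriptions = [
    "rear-end collision at traffic lights",
    "burst pipe damaged kitchen floor",
    "windscreen chipped by stone on motorway",
    "storm blew tiles off the roof",
]
labels = ["motor", "home", "motor", "home"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(descriptions, labels)

def triage(claim_text: str, claim_amount: float) -> str:
    """Route a claim: a deterministic rule first, then the model."""
    # Rule-based step: large claims always go to a human handler.
    if claim_amount > 10_000:
        return "manual_review"
    # ML step: route routine claims to the predicted team's queue.
    return classifier.predict([claim_text])[0]

print(triage("hail dented the car bonnet", 1_200))   # e.g. "motor"
print(triage("fire damage to living room", 50_000))  # "manual_review"
```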

In this paper, Milliman’s George Barrett, Claire Booth and Tanya Hayward discuss how automation has affected, and will continue to affect, the nature of insurance companies’ processes and their clients’ needs. They cover numerous examples of how automation and RPA are transforming the way insurance companies operate and discuss the impact of job automation on insurers.

How can data science extract value from external data sources?

Traditionally, insurers have relied heavily on data they have collected as well as industry-specific data to inform their business decisions and strategy. However, data science techniques have become more sophisticated, allowing insurers to better understand the relationship between internal and external data sources. Predictive analytics, machine learning, data mining, and artificial intelligence are helping companies extract value from both sources.
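
As a minimal sketch of what combining the two sources can look like in practice, the example below enriches hypothetical internal policy records with an external regional data set before any modelling takes place. All column names and values are assumptions made for illustration.

```python
# Minimal sketch: enriching internal policy data with an external source.
# All column names and values here are hypothetical.
import pandas as pd

# Internal data: the insurer's own policy records.
policies = pd.DataFrame({
    "policy_id": [1, 2, 3],
    "region": ["north", "south", "north"],
    "premium": [450.0, 520.0, 610.0],
})

# External data: e.g. regional weather statistics from a public source.
weather = pd.DataFrame({
    "region": ["north", "south"],
    "storm_days_per_yr": [12, 4],
})

# Join on a shared key so external features sit alongside internal ones;
# the enriched table can then feed the usual pricing or analytics models.
enriched = policies.merge(weather, on="region", how="left")
print(enriched)
```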

In this article, Milliman’s Cormac Gleeson and Eamon Comerford discuss how the use of external data can complement a company’s wider data science initiatives. They also explore some of the challenges posed by working with external data.

The new ABCs: AI, Blockchain, and the Cloud

Insurance customers now expect personalized, agile, and on-demand delivery from carriers. Insurers must keep pace with technological advances and implement them to provide solutions that meet these expectations. In her Best’s Review article “Mind your ABCs,” Milliman’s Pat Renzi explores why insurance companies must center their strategic initiatives on emerging technologies like artificial intelligence (AI), blockchain, and the cloud. She also explains how partnerships that bring together diverse experts can drive faster, smarter, and more successful disruption.

Milliman’s gradient A.I. platform brings first A.I. predictive analytics solution to professional employer organization (PEO) market

Milliman has announced that gradient A.I., a Milliman predictive analytics platform, now offers a professional employer organization (PEO)-specific solution for managing workers’ compensation risk. gradient A.I. is an advanced analytics and A.I. platform that uncovers hidden patterns in big data to deliver a daily decision support system (DSS) for insurers, self-insurers, and PEOs. It’s the first solution of its kind to be applied to PEO underwriting and claims management.

“Obtaining workers’ compensation insurance capacity has been historically difficult because of the lack of credible data to understand a PEO’s expected loss outcomes. Additionally, there were no formal pricing tools specific to the PEO community for use with any level of credibility—until gradient A.I. Pricing within a loss-sensitive environment can now be done with the science of Milliman combined with the instinct and intuition of the PEO,” says Paul Hughes, CEO of Libertate/RiskMD, an insurance agency/data analytics firm that specializes in providing coverage and consulting services to PEOs. “Within a policy term we can understand things like claims frequency and profitability, and we can get very good real-time month-to-month directional insight, in terms of here’s what you should have expected, here’s what happened, and as a result did we win or lose?”

gradient A.I., a transformational insurtech solution, aggregates client data from multiple sources, deposits it into a data warehouse, and normalizes the data into comprehensive, consistent data sets. “The uniqueness for PEOs and their service providers—and the power of gradient A.I.—emerges from the application of machine-learning capabilities on the PEOs’ data normalization,” says Stan Smith, a predictive analytics consultant and Milliman’s gradient A.I. practice leader. “With the gradient A.I. data warehouse, companies can reduce time, costs, and resources.”
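
The announcement does not describe gradient A.I.’s internals, but the aggregate-then-normalize pattern it refers to can be illustrated in a few lines. The sketch below, with entirely invented schemas, maps two differently formatted claim feeds to one common layout before loading; it is not gradient A.I.’s actual pipeline.

```python
# Purely illustrative aggregation/normalization step; this is NOT
# gradient A.I.'s actual pipeline, only the general pattern it names.
import pandas as pd

# Two hypothetical source systems with inconsistent schemas.
source_a = pd.DataFrame({"ClaimNo": ["A1"], "PaidAmt": [1500], "LOB": ["WC"]})
source_b = pd.DataFrame({"claim_id": ["B7"], "paid": [980.0], "line": ["wc"]})

# Map each source to one common schema before loading to the warehouse.
common_a = source_a.rename(columns={"ClaimNo": "claim_id",
                                    "PaidAmt": "paid",
                                    "LOB": "line"})
normalized = pd.concat([common_a, source_b], ignore_index=True)
normalized["line"] = normalized["line"].str.upper()  # consistent coding
print(normalized)
```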

For more on how gradient A.I. and Libertate brought predictive analytics solutions to PEOs, click here.