Interpreting actuarial models for enhanced results

Insurers use models to predict the future, whether for future reported claims, ultimate claim costs or policy sales. Businesses are sometimes reluctant to move to more sophisticated models because those models can be difficult to interpret and trust.

Predictions from models are not always intuitive. Validation exercises can be undertaken to demonstrate that predictions are reliable, but additional work is needed to ensure that models are doing what is expected of them.
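The briefing note itself contains no code, but a minimal sketch of one common validation exercise, k-fold cross-validation, illustrates the idea. The model and synthetic dataset below are illustrative assumptions, not taken from the note:

```python
# A minimal sketch of a validation exercise using k-fold cross-validation.
# The synthetic data stands in for, e.g., claims data with rating factors.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Illustrative placeholder: 1,000 policies, 10 rating factors.
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)

model = GradientBoostingRegressor(random_state=0)

# Score the model on five held-out folds; consistent scores across folds
# are one piece of evidence that the model's predictions generalise.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"R^2 per fold: {scores.round(3)}")
print(f"Mean R^2: {scores.mean():.3f}")
```

Stable fold-to-fold scores support reliability, but, as noted above, validation alone does not explain why the model behaves as it does.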

There have been public examples where models have not performed as expected. Gender or racial bias, for example, can occur unintentionally where input data is not representative, and changes in data collection over time can also invalidate results. Unlike a human being, however, a well-coded and well-explained algorithm may be able to display its own biases clearly, so that they can be detected and addressed.

Model interpretation means understanding how a model works and why it is accurate. If a model is interpretable, its predictions should be intuitive to the modeller. Model validation and interpretation are distinct, key steps in the data science workflow, and both are essential before deployment.
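As a hypothetical illustration of what an interpretation step can look like in practice, the sketch below applies permutation feature importance, one widely used model-agnostic technique. It is not a method prescribed by the note; the model and data are the same kind of illustrative placeholders as above:

```python
# A minimal sketch of model interpretation via permutation feature importance.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative placeholder data: 5 candidate rating factors.
X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score degrades:
# large drops flag the features the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, mean_drop in enumerate(result.importances_mean):
    print(f"feature_{i}: score drop {mean_drop:.3f}")
```

If the features the model relies on match the modeller's domain intuition, that is evidence of an interpretable, trustworthy model; if an implausible feature dominates, that is a prompt to investigate the data and the model before deployment.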

In this briefing note, Milliman’s Eoin O’Baoighill and Eamonn Phelan discuss the key steps in model interpretation, what is required of model interpreters and model validation methods.
