Tag Archives: benchmarking

Milliman updates Claim Variability Benchmarks with valuable industry data for P&C insurers

Milliman announced today that it has released version 2.0 of its Claim Variability Benchmarks™ (CVB), an insurtech solution that helps property and casualty (P&C) insurers increase efficiency and provides richer analysis in the face of regulatory and economic change, such as reserve range and cash flow requirements, Solvency II, and International Financial Reporting Standard (IFRS) 17.

As part of the firm’s family of state-of-the-art actuarial reserve analysis systems, this release of CVB adds new industry benchmarks for claim frequency, severity, and loss development patterns for all major P&C insurance coverages, helping actuaries better model and understand their claim costs. Additional benchmarks are provided to help measure the correlations of experience among various lines of business. The new system also adds both Mack and Merz-Wüthrich distributions to aid insurers working with Solvency II and IFRS 17 reporting.
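For readers less familiar with the Mack method, the minimal sketch below is a rough illustration (not Milliman's implementation) of the chain-ladder development factors and Mack variance parameters that a Mack-style distribution builds on. The cumulative loss triangle is hypothetical, and the full Mack method goes on to combine these quantities into standard-error formulas that are not shown here.

# Minimal sketch of the chain-ladder calculations underlying a Mack-style
# analysis; the cumulative paid triangle below is hypothetical.
import numpy as np

# Cumulative paid losses by accident year (rows) and development age (cols);
# np.nan marks cells not yet observed.
tri = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 2000., 2300., np.nan],
    [1200., 2150., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])
n = tri.shape[1]

# Volume-weighted development factors f_k and Mack's variance estimates sigma_k^2.
f, sigma2 = [], []
for k in range(n - 1):
    rows = ~np.isnan(tri[:, k + 1])
    c_k, c_k1 = tri[rows, k], tri[rows, k + 1]
    f_k = c_k1.sum() / c_k.sum()
    f.append(f_k)
    if rows.sum() > 1:
        sigma2.append(float((c_k * (c_k1 / c_k - f_k) ** 2).sum() / (rows.sum() - 1)))
    else:
        # Mack's suggested approximation for the final variance parameter.
        sigma2.append(min(sigma2[-1] ** 2 / sigma2[-2], sigma2[-1], sigma2[-2]))

# Project each accident year from its latest observed value to ultimate.
ultimates = []
for i in range(tri.shape[0]):
    last = int(np.where(~np.isnan(tri[i]))[0].max())
    c = tri[i, last]
    for k in range(last, n - 1):
        c *= f[k]
    ultimates.append(c)

print("development factors:", np.round(f, 3))
print("variance parameters:", np.round(sigma2, 3))
print("projected ultimates:", np.round(ultimates, 0))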

In addition, the new release provides a free version so that all actuaries can easily evaluate these important benchmarking tools.

Our CVB solution is specifically designed to help our clients, and insurers of all sizes, better understand their data and compare their trends and results to industry benchmarks. This release provides a number of new and sophisticated calculations, so actuaries can gain more confidence in their estimates and focus on the areas where their substantial expertise can provide the most value to their organizations, which is especially important in this time of pandemic-driven industry disruption.

To learn more about Milliman’s Claim Variability Benchmarks, click here.

Reevaluating P&C benchmarks in the COVID-19 era

The arrival of COVID-19 had an almost immediate influence on insurance variables, from exposure levels, claim development, and litigation rates to trends, yet the full effects may take a long time to emerge. As a result, the typical benchmarks and benchmarking processes that have worked well in the past may no longer be applicable.

With so many users and applications of benchmarks, how can benchmarks remain useful when consistency has been significantly disrupted? How has COVID-19 affected the common metrics and statistics used in benchmarking? What strategies can keep benchmarks valuable now and after COVID-19?

In this paper, Milliman consultant Richard Frese examines property and casualty (P&C) benchmarking in a new era affected by COVID-19.

Benchmarking P&C risk margins under Solvency II

Since the advent of Solvency II, insurers have faced a number of challenges that can affect the economic value placed on their liabilities. These challenges start with an insurer’s modeled uncertainty with respect to the timing and amount of future cash flows, which sets the stage for nearly every other element of the risk margin. Milliman actuary Mark Shapland offers some perspective in this paper.
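For context, the Solvency II risk margin is a cost-of-capital calculation, broadly of the form

RM = \mathrm{CoC} \cdot \sum_{t \ge 0} \frac{\mathrm{SCR}(t)}{(1 + r_{t+1})^{t+1}}

where CoC is the prescribed cost-of-capital rate (6% under Solvency II), SCR(t) is the projected Solvency Capital Requirement at future time t, and r_{t+1} is the relevant risk-free rate. Because the projected SCRs depend directly on the modeled uncertainty in the timing and amount of future cash flows, that uncertainty flows through to nearly every element of the risk margin.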

A quantum leap in benchmarking P&C reserve ranges

Traditional development pattern benchmarks have provided some support in estimating fundamental liabilities, but the process has long been a one-dimensional exercise, at least until now. A recently developed benchmarking tool, which includes percentiles at all stages of development, allows for the calibration of a benchmark that better resembles your portfolio. As such, this rigorously back-tested tool can give actuaries an added level of confidence in the reasonableness of an entity’s reserve ranges. The next-generation benchmarking tool, known as the claim variability guidelines, is derived from extensive testing involving all long-tail Schedule P lines of business and more than 30,000 data triangle sets. Milliman’s Mark Shapland provides perspective in this article.
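As a rough illustration of the idea (using hypothetical figures, not values from the claim variability guidelines), an actuary might check where each of an entity's age-to-age development factors falls within benchmark percentile bands:

# Illustrative check of an entity's age-to-age factors against hypothetical
# benchmark percentiles; none of these numbers come from the guidelines.
import numpy as np

ages = ["12-24", "24-36", "36-48"]
entity_factors = np.array([1.82, 1.16, 1.05])

# Hypothetical benchmark percentiles (25th, 50th, 75th) by development age.
benchmark = {
    "12-24": (1.60, 1.75, 1.95),
    "24-36": (1.10, 1.15, 1.22),
    "36-48": (1.02, 1.04, 1.07),
}

for age, factor in zip(ages, entity_factors):
    p25, p50, p75 = benchmark[age]
    position = ("below the 25th percentile" if factor < p25
                else "between the 25th and 75th percentiles" if factor <= p75
                else "above the 75th percentile")
    print(f"{age}: entity {factor:.2f} vs. benchmark median {p50:.2f} ({position})")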

A quantum leap in benchmarking P&C unpaid claim distributions

Much like the determination of a central estimate, quantifying the uncertainty (i.e., determining a loss distribution) is prone to many of the same vulnerabilities of subjectivity and method/model error. The introduction of the claim variability guidelines is part of an evolutionary process that began with deterministic and statistical models aimed at understanding an insurance entity’s risk. The advent of substantial computing power allowed actuaries to move closer to a reasonable depiction of an entity’s risk with the development of sophisticated models that simulate millions of possible outcomes. From there, distributions of the possible outcomes can be used to identify a central estimate and to quantify worst-case scenarios. Milliman’s Mark Shapland offers some perspective in this article.
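The sketch below illustrates that simulation idea in miniature, assuming a lognormal unpaid-claim distribution with hypothetical parameters; it is a stand-in for the far richer models the article describes.

# Minimal sketch of simulating possible unpaid-claim outcomes and reading a
# central estimate and adverse percentiles off the resulting distribution.
# The lognormal assumption and parameters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(seed=2024)
n_sims = 1_000_000

mean_unpaid = 50_000_000     # hypothetical central estimate of unpaid claims
cv = 0.15                    # hypothetical coefficient of variation

# Convert the mean/CV to lognormal parameters and simulate outcomes.
sigma = np.sqrt(np.log(1 + cv ** 2))
mu = np.log(mean_unpaid) - sigma ** 2 / 2
outcomes = rng.lognormal(mu, sigma, n_sims)

print(f"central estimate (mean): {outcomes.mean():,.0f}")
print(f"75th percentile:         {np.percentile(outcomes, 75):,.0f}")
print(f"99.5th percentile:       {np.percentile(outcomes, 99.5):,.0f}")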

A quantum leap in benchmarking P&C aggregate unpaid distributions

Confined by limited data, the aggregation process is typically riddled with volatility that can skew the view of an entity’s risk and capital needs. What has long been missing, at least until now, is a reliable benchmark for identifying and quantifying the risk dependencies between segments that underlie the loss aggregation process. Understanding these dependencies is a fundamental part of forming conclusions about the interaction of loss distributions. With the introduction of the new claim variability guidelines, actuaries can gauge the reasonableness of their correlations against benchmark correlations. Milliman’s Mark Shapland offers some perspective in this article.
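As a simplified illustration (with hypothetical parameters, not benchmark values), two segments' simulated unpaid distributions can be aggregated under an assumed dependency via a Gaussian copula, and the implied correlation and aggregate percentiles can then be compared against a benchmark or against the independent case:

# Sketch of aggregating two segments' simulated unpaid-claim distributions
# under an assumed dependency (Gaussian copula). All parameters are
# hypothetical illustrations, not benchmark values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
n_sims = 500_000
rho = 0.35  # assumed dependency between the two segments

# Correlated standard normals via a Gaussian copula.
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n_sims)
u = stats.norm.cdf(z)  # uniform marginals that preserve the dependency

# Map the uniforms to each segment's (hypothetical) lognormal unpaid distribution.
seg_a = stats.lognorm.ppf(u[:, 0], s=0.20, scale=30_000_000)
seg_b = stats.lognorm.ppf(u[:, 1], s=0.30, scale=20_000_000)
aggregate = seg_a + seg_b

print("implied rank correlation:", round(float(stats.spearmanr(seg_a, seg_b)[0]), 3))
print(f"aggregate 99.5th percentile:   {np.percentile(aggregate, 99.5):,.0f}")

# For contrast, the same marginals treated as independent (pairing shuffled).
indep = seg_a[rng.permutation(n_sims)] + seg_b
print(f"independent 99.5th percentile: {np.percentile(indep, 99.5):,.0f}")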