Much like the decisions behind a central estimate, quantifying the uncertainty (i.e., determining a loss distribution) is prone to many of the same vulnerabilities: subjectivity and method/model error. The introduction of the claims variability guidelines is part of an evolutionary process that began with deterministic and statistical models aimed at understanding an insurance entity’s risk. The advent of substantial computing power allowed actuaries to move closer to a reasonable depiction of an entity’s risk through sophisticated models that simulate millions of possible outcomes. From those simulated outcomes, distributions can be built to identify a central estimate and to quantify worst-case scenarios. Milliman’s Mark Shapland offers some perspective in this article.
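To make the simulation idea concrete, here is a minimal sketch of the kind of exercise described above: simulating many possible annual loss outcomes and reading a central estimate and a tail ("worst-case") measure off the resulting distribution. The compound Poisson–lognormal model, its parameters, and the 99.5th-percentile tail measure are illustrative assumptions, not anything specified in the article, and the simulation count is kept small for speed.

```python
import numpy as np

# Assumed toy model (not from the article): claim counts are Poisson,
# individual claim sizes are lognormal. Parameters are arbitrary.
rng = np.random.default_rng(42)

n_sims = 20_000  # in practice this could be millions of outcomes

# Simulate an annual number of claims for each scenario.
claim_counts = rng.poisson(lam=100, size=n_sims)

# Aggregate loss per scenario: sum of simulated individual claim sizes.
total_losses = np.array([
    rng.lognormal(mean=8.0, sigma=1.0, size=n).sum() for n in claim_counts
])

# The distribution of outcomes yields both a central estimate ...
central_estimate = total_losses.mean()

# ... and a tail measure for "worst-case" quantification
# (here, the 99.5th percentile of aggregate losses).
var_99_5 = np.quantile(total_losses, 0.995)

print(f"Central estimate: {central_estimate:,.0f}")
print(f"99.5th percentile: {var_99_5:,.0f}")
```

The same simulated distribution supports other risk measures as well (e.g., other percentiles or tail averages); the 99.5th percentile is shown only as one common choice.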