Technical Papers
- GI ROC reserving study - Effectiveness of Reserving Methods Working Party
- Guide to the paper: "Best Estimates for Reserves"
- Best Estimates for Reserves
- Variability and Uncertainty
- Correlations and Linearity
- A common misconception with correlated lognormals
- Economic Inflation Scenario Generators and calendar year trends
- Loss Reserve Upgrades - A Pervasive Myth
- Outward Reinsurance of Long Tail Liabilities - Some Surprising Findings
- Sarbanes-Oxley and Actuarial Methods
- When can accident years be regarded as development years?
- The need for diagnostic assessment of bootstrap predictive models
- About the bootstrap
- ICRFS-Plus™ algorithms for model estimation, forecasts, and simulations: why are they so fast?
- Notes on Introduction to Credibility Theory
For other papers and publications click here.
Guide to the paper: "Best Estimates for Reserves"
An additional set of study notes (PowerPoint slides in zip format), designed to aid understanding of the paper "Best Estimates for Reserves", is available here. The guide was written to make the paper easier to read.
Best Estimates for Reserves
Insureware is pleased to announce that “Best Estimates for Reserves”, by Dr. Glen Barnett and Professor Ben Zehnwirth, is included in the list of readings for the 2005 CAS Syllabus of Examinations. “Best Estimates for Reserves” is published in the Proceedings of the CAS, Volume LXXXVII, 2000, pp. 245-303 and is available at:
www.casact.org/pubs/proceed/proceed00/00245.pdf
A list of errata is available here (in zip format).
A spreadsheet that illustrates the calculations for the regression equivalent of chain ladder link ratios can be found here (in zip format).
Answers to FAQs will be added below:
- What are weighted standardised residuals? Answer (in zip format)
Abstract of “Best Estimates for Reserves”
Link-ratio techniques can be regarded as weighted regressions. We extend these regression models to handle different exposure bases and modeling of trends in the incremental data, and develop a variety of diagnostic tools for testing the assumptions of these models.
This ‘extended link-ratio family’ (ELRF) of regression models is used to test the assumptions made by standard link-ratio techniques and compare their predictive power with modeling trends in the incremental data. Most loss arrays don't satisfy the assumptions of standard link-ratio techniques. The ELRF modeling structure creates a bridge to a statistical modeling framework where the assumptions are more consistent with actual data. There is a paradigm shift from standard link-ratio techniques to the statistical modeling framework - the ELRF models form a bridge from the ‘old’ paradigm to the ‘new’.
There are three critical stages involved in arriving at a reserve figure: extracting information from the data in terms of trends, their stability, and the distributions about those trends; formulating assumptions about the future that lead to forecasts of distributions of paid losses; and allowing for correlations between lines and the security level sought.
Other benefits of the new statistical paradigm are discussed, including segmentation, credibility and reserves or distributions for different layers.
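The regression formulation mentioned in the abstract is easy to verify numerically. Below is a minimal sketch (an illustrative example with made-up figures, not code from the paper) showing that the volume-weighted chain ladder link ratio is exactly the slope of a zero-intercept weighted least squares regression with weights 1/x:

```python
# Minimal sketch (illustrative figures): the volume-weighted chain ladder
# link ratio equals the slope of the zero-intercept weighted regression
# y = b*x with weights w = 1/x, one member of the ELRF family.
import numpy as np

x = np.array([1000.0, 1200.0, 900.0])   # cumulative losses, development period j
y = np.array([1500.0, 1740.0, 1400.0])  # cumulative losses, development period j+1

# Standard chain ladder factor: ratio of column sums.
chain_ladder = y.sum() / x.sum()

# Weighted least squares slope: b = sum(w*x*y) / sum(w*x^2), with w = 1/x.
w = 1.0 / x
wls_slope = (w * x * y).sum() / (w * x * x).sum()

print(chain_ladder, wls_slope)  # identical: both reduce to sum(y) / sum(x)
```

Varying the weights (for example, w = 1/x² or w = 1) recovers the other familiar link ratio averages as regression estimators.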
A review of the paper entitled: “Barnett and Zehnwirth Provide Road Map for Probabilistic Reserve Models”, by Frederick F. Cripe, Chairperson, CAS Research Policy and Management Committee, is available at www.casact.org/pubs/actrev/may01/latest.htm.
The following are excerpts from Cripe's review:
“Although the actuarial literature is replete with articles explaining the shortcomings of the traditional link ratio methods, most actuarial reserve analyses mainly rely on link ratio methods.”
“[In] ‘Best Estimates for Reserves’ Glen Barnett and Ben Zehnwirth provide an outstanding discussion of the relationship between traditional link ratio methods, broader regression models, and probabilistic reserve estimation models.”
“Barnett and Zehnwirth have written an excellent paper that provides a flexible and powerful set of methods for estimating reserves.”
Variability and Uncertainty
There is an important distinction between variability and uncertainty, and the two should not be used interchangeably. Variability is the effect of chance and is a function of the system. Uncertainty is the assessor's lack of knowledge (level of ignorance) about the parameters that characterize the physical system being modelled.
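The distinction can be illustrated with a small simulation; the sketch below is illustrative only (the Poisson claim-count setting is an assumed example, not taken from the paper):

```python
# Illustrative sketch (assumed Poisson setting): variability is the spread of
# outcomes when the system is known; uncertainty is the assessor's imperfect
# knowledge of the system's parameters.
import numpy as np

rng = np.random.default_rng(0)
true_mean = 50.0  # the parameter characterising the (hypothetical) system

# Variability: outcomes differ by chance even with the mean known exactly.
outcomes = rng.poisson(true_mean, size=10_000)
print("process standard deviation:", outcomes.std())  # about sqrt(50)

# Uncertainty: with only ten observations, the assessor's estimate of the
# mean is itself imprecise -- a statement about knowledge, not the system.
sample = rng.poisson(true_mean, size=10)
std_error = sample.std(ddof=1) / np.sqrt(sample.size)
print("estimated mean:", sample.mean(), "+/-", std_error)
```

More data reduces the uncertainty about the mean, but leaves the variability of individual outcomes unchanged.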
See this paper on Variability and Uncertainty for more details.
Correlations and Linearity
The multivariate normal distribution plays a central role in linear regression and linear correlation.
See this paper on Correlations and Linearity for more details.
A common misconception with correlated lognormals
There is a common misconception amongst actuaries and insurance executives about correlations.
Click here for more information.
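One well-known property in this area (an illustration of ours; see the linked note for the actual discussion) is that the linear correlation between two lognormals is not the correlation of the normals underlying them:

```python
# Hedged illustration (our example): correlating the logs at 0.8 does not
# produce a correlation of 0.8 between the lognormals themselves.
import numpy as np

rng = np.random.default_rng(1)
rho, sigma = 0.8, 2.0
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
xy = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
u, v = np.exp(xy[:, 0]), np.exp(xy[:, 1])

print(np.corrcoef(xy[:, 0], xy[:, 1])[0, 1])  # about 0.8 on the log scale
print(np.corrcoef(u, v)[0, 1])                # far lower (and noisy: heavy tails)

# Closed form for equal sigmas: (exp(rho*sigma^2) - 1) / (exp(sigma^2) - 1)
print((np.exp(rho * sigma**2) - 1) / (np.exp(sigma**2) - 1))  # about 0.44
```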
Economic Inflation Scenario Generators and calendar year trends
There is a common misconception in the insurance industry that economic inflation is the driving component of calendar year trends. As a result, much attention is paid to the selection of future inflation rates and the analysis of alternative future inflation scenarios.
Click here to view some counter-examples and how to set up economic inflation scenarios.
Loss Reserve Upgrades - A Pervasive Myth
Major misconceptions about the loss reserving process are widespread. We debunk a critical loss reserving myth involving loss reserve upgrades that is pervasive in the insurance industry.
Click here for more information.
Outward Reinsurance of Long Tail Liabilities - Some Surprising Findings
This paper, which includes some counter-intuitive findings, was presented by Professor Ben Zehnwirth and Dr. Glen Barnett at a research conference at the University of New South Wales. The paper has important implications for outward reinsurance of long tail liabilities.
See this paper for details.
Also this presentation covers similar results.
Sarbanes-Oxley and Actuarial Methods
For a discussion of the influence of Sarbanes-Oxley on Actuarial Methods click here.
When can accident years be regarded as development years?
"When can accident years be regarded as development years?", written by Professor Ben Zehnwirth, Dr. Glen Barnett, and Dr. Eugene Dubossarsky. 18th January 2006 - The Casualty Actuarial Society Committee on Review of Papers has released its quarterly update of recently accepted papers. The CAS Editorial Committee will be editing these papers for inclusion in the Proceedings of the Casualty Actuarial Society.
Click here to view this paper.
The need for diagnostic assessment of bootstrap predictive models
The bootstrap is, at heart, a way to obtain an approximate sampling distribution for a statistic (and hence, if required, produce a confidence interval). The bootstrap can be modified in order to produce a predictive distribution (and hence, if required, prediction intervals). It is predictive distributions that are generally of prime interest to insurers (because they pay the outcome of the process, not its mean).
The bootstrap has become quite popular in reserving in recent years, but it's necessary to use the bootstrap with caution.
The bootstrap does not require the user to assume a distribution for the data. Instead, sampling distributions are obtained by resampling the data. However, the bootstrap certainly does not avoid the need for assumptions, nor for checking those assumptions. The bootstrap is far from a cure-all. It suffers from essentially the same problems as finding predictive distributions and sampling distributions of statistics by any other means. These problems are exacerbated by the time-series nature of the forecasting problem - because reserving requires prediction into never-before-observed calendar periods, model inadequacy in the calendar year direction becomes a critical problem. In particular, the most popular actuarial techniques (see this paper) - those most often used with the bootstrap - don't have any parameters in that direction, and are frequently mis-specified with respect to the behaviour against calendar time.
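As a minimal sketch of the two uses described above (the data and the resampling scheme are illustrative assumptions, not the paper's method):

```python
# Illustrative sketch: resampling approximates the sampling distribution of a
# statistic; adding a resampled "process" draw on top yields a predictive
# distribution for a new observation, which is what the insurer pays.
import numpy as np

rng = np.random.default_rng(2)
data = rng.lognormal(mean=8.0, sigma=0.5, size=30)  # stand-in for incremental losses

n_boot = 5000
means = np.empty(n_boot)        # sampling distribution of the mean
predictions = np.empty(n_boot)  # predictive distribution for a new value
for i in range(n_boot):
    resample = rng.choice(data, size=data.size, replace=True)
    means[i] = resample.mean()
    predictions[i] = rng.choice(resample)  # parameter uncertainty + process variation

print("95% confidence interval for the mean:", np.percentile(means, [2.5, 97.5]))
print("95% prediction interval:", np.percentile(predictions, [2.5, 97.5]))
```

The prediction interval is much wider than the confidence interval, since it must cover the outcome of the process rather than its mean.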
In this paper (html), we describe these common problems in using the bootstrap and how to spot them. The complete paper can be downloaded as a PDF document.
About the bootstrap
In this article, David Munroe and Dr. David Odell illustrate the bootstrap technique with a series of examples, beginning from basic principles. The argument commences with a deliberately flawed example to emphasise the importance of verifying that the sample being (directly) bootstrapped (re-sampled) is structureless - that is, that it is a random sample from a single distribution. It is then shown that if structure remains in the weighted standardised residuals of a fitted model, the bootstrap samples (pseudo-data loss development arrays) can easily be distinguished from the real data, as they do not replicate its features.
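A toy version of that opening point (our own example, not the article's data) is sketched below:

```python
# Hedged sketch: resampling a sample that still contains structure (here, a
# trend) produces pseudo-data that are easily distinguished from the real data.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(20.0)
data = 100 + 5 * t + rng.normal(0, 5, size=20)  # real data with a strong trend

pseudo = rng.choice(data, size=20, replace=True)  # direct (flawed) resampling
print(np.corrcoef(t, data)[0, 1])    # near 1: the real data trend upwards
print(np.corrcoef(t, pseudo)[0, 1])  # near 0: the pseudo-data do not

# Resampling must instead be applied to the residuals of a fitted model that
# has removed the structure, leaving a structureless (exchangeable) sample.
residuals = data - (100 + 5 * t)
print(np.corrcoef(t, residuals)[0, 1])  # near 0: safe to resample
```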
The bootstrap technique is an excellent tool for testing a model - that is, for asking whether the model captures the volatility in the data parsimoniously.
It is necessary to design a model that separates the trend structure from the process variability with the precision of a sushi chef. Cutting incorrectly results in inferior sushi.
The complete article can be downloaded here. "About the Bootstrap" was published in the December 2009 issue of The Actuary (UK) magazine, pages 30-35.
ICRFS-Plus™ algorithms for model estimation, forecasts, and simulations: why are they so fast?
Brief explanations of why ICRFS-Plus™ algorithms are so fast for model estimation, forecasts, and simulations are presented in this document. The Kalman filter is referenced by way of analogy, since the principle behind that method (fast, incremental updates) is also the foundational principle of the internal algorithms developed.
As mentioned, the critical steps in developing any fast algorithm are to consider the structure and class of the models.
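By way of the same analogy, here is a minimal sketch of the incremental-update principle (the principle only; this is not ICRFS-Plus™'s actual algorithms):

```python
# Illustrative sketch: a running mean updated in O(1) work per observation,
# rather than recomputed from scratch over all n observations each time --
# the same fast-incremental-update principle used by the Kalman filter.
import numpy as np

def incremental_mean(xs):
    mean, n = 0.0, 0
    for x in xs:
        n += 1
        mean += (x - mean) / n  # constant work per new data point
    return mean

xs = np.random.default_rng(4).normal(size=1000)
print(incremental_mean(xs), xs.mean())  # agree to floating-point precision
```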
Notes on Introduction to Credibility Theory
The following pdf contains a set of lecture notes on credibility written by Ben Zehnwirth.
The study guide forms background material for a lecture on credibility given at the 1991 CAS Seminar on Ratemaking held on March 14-15 in Chicago, Illinois.
The notes have grown out of a number of sets of lecture notes prepared for statistical and actuarial courses at Macquarie University and the University of Copenhagen. They are intended to provide an introductory set of lectures on the subject of credibility and its intimate connections with linear regression, Bayes estimation, and recursive estimation.
Average link ratio methods can be formulated as regression estimators. A Bühlmann-Straub credibility model is formulated for each link ratio regression model, which facilitates the development of credibility formulae for link ratios.
It is also shown that there are better formulae for adjusting link ratios than those based on Bühlmann-Straub.
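For readers new to the subject, the generic Bühlmann-Straub blend can be sketched as follows (the generic textbook formula, not the link ratio derivation in the notes):

```python
# Hedged sketch (generic Buhlmann-Straub formula): the credibility estimate
# blends an individual mean with the collective mean using Z = m / (m + k),
# where m is the exposure and k is the ratio of the process variance to the
# variance between risks.
def credibility_estimate(individual_mean, collective_mean, exposure,
                         process_var, between_var):
    k = process_var / between_var  # credibility constant
    z = exposure / (exposure + k)  # credibility factor, 0 <= z < 1
    return z * individual_mean + (1 - z) * collective_mean

# Example: a small line's own link ratio receives limited weight.
print(credibility_estimate(individual_mean=1.6, collective_mean=1.4,
                           exposure=5, process_var=4.0, between_var=0.5))
# k = 8, z = 5/13, estimate ~ 1.48
```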