Chain Ladder Method of reserving

Chain Ladder Method (CLM): the most common reserving method and the steps to apply it

Understanding the Chain Ladder Method

The chain ladder or development method (CLM) is a prominent actuarial loss reserving technique. The chain ladder method is used in both the property and casualty and health insurance fields. Its intent is to estimate incurred but not reported claims and project ultimate loss amounts. The primary underlying assumption of the chain ladder method is that historical loss development patterns are indicative of future loss development patterns.

Breaking Down the Chain Ladder Method (CLM)

It is important to understand the terms used below. If you are unfamiliar with run-off triangles, development factors, or paid and reported claims, this reserving learning portal will help you understand the basic terms used.

The chain ladder method (CLM) calculates incurred but not reported (IBNR) loss estimates using run-off triangles of paid losses and incurred losses, where incurred losses are the sum of paid losses and case reserves. Insurance companies must set aside a portion of the premiums they receive from their underwriting activities to pay for claims that may be filed in the future. The amount of claims forecast and the amount actually paid determine how much profit the insurer reports in its financial statements.

Reserve triangles are two-dimensional matrices generated by accumulating claims data over a period of time. The claims data is run through a stochastic process to create the run-off triangles, allowing for many degrees of freedom.

At its core, the chain ladder method operates under the assumption that patterns in claims activities in the past will continue to be seen in the future. In order for this assumption to hold, data from past loss experiences must be accurate. Several factors can impact accuracy, including changes to the product offerings, regulatory and legal changes, periods of high-severity claims, and changes in the claims settlement process. If the assumptions built into the model differ from observed claims, insurers may have to make adjustments to the model.

Creating estimates can be difficult because random fluctuations in claims data and small data sets can result in forecasting errors. To smooth over these problems, insurers combine company claims data with data from the industry in general.

Methodology

According to Jacqueline Friedland’s “Estimating Unpaid Claims Using Basic Techniques,” there are seven steps to apply the chain ladder technique:

  1. Compile claims data in a development triangle
  2. Calculate age-to-age factors
  3. Calculate averages of the age-to-age factors
  4. Select claim development factors
  5. Select tail factor
  6. Calculate cumulative claim development factors
  7. Project ultimate claims

Age-to-age factors, also called loss development factors (LDFs) or link ratios, represent the ratio of loss amounts from one valuation date to another, and they are intended to capture growth patterns of losses over time. These factors are used to project where the ultimate amount of losses will settle.
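
To see how these steps fit together in practice, here is a minimal Python sketch of steps 2, 6 and 7 on a made-up three-year triangle. The triangle values, the chain_ladder_ultimates name and the use of all-year volume-weighted factors are illustrative assumptions rather than part of Friedland's text; steps 3 to 5 involve judgment, as the example below shows.

```python
# Minimal chain-ladder sketch (illustrative assumptions only).
# The cumulative claims triangle is stored as a list of rows, one per
# accident year, holding cumulative amounts at ages 12, 24, 36, ... months.

def chain_ladder_ultimates(triangle, tail_factor=1.0):
    n_dev = max(len(row) for row in triangle)

    # Step 2: all-year volume-weighted age-to-age factors.
    factors = []
    for j in range(n_dev - 1):
        rows = [row for row in triangle if len(row) > j + 1]
        factors.append(sum(r[j + 1] for r in rows) / sum(r[j] for r in rows))

    # Steps 3-5 are judgmental in practice; here we simply keep the
    # volume-weighted factors and append the selected tail factor.
    factors.append(tail_factor)

    # Step 6: cumulative development factors, from each age to ultimate.
    cdfs = [1.0] * len(factors)
    running = 1.0
    for j in reversed(range(len(factors))):
        running *= factors[j]
        cdfs[j] = running

    # Step 7: project ultimates from the latest diagonal of each accident year.
    ultimates = [row[-1] * cdfs[len(row) - 1] for row in triangle]
    return factors, cdfs, ultimates


# Made-up triangle: three accident years at ages 12/24/36 months.
triangle = [
    [1000, 1500, 1600],
    [1100, 1650],
    [1200],
]
factors, cdfs, ultimates = chain_ladder_ultimates(triangle)
print([round(u) for u in ultimates])  # [1600, 1760, 1920]
```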

Example of Chain Ladder Method (CLM)

First, losses (either reported or paid) are compiled into a triangle, where the rows represent accident years and the columns represent valuation ages in months. For example, 43,169,009 represents loss amounts related to claims occurring in 1998, valued as of 24 months.

Cumulative claims by accident year and valuation age (months):

| Accident year | 12 | 24 | 36 | 48 | 60 | 72 | 84 | 96 | 108 | 120 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1998 | 37,017,487 | 43,169,009 | 45,568,919 | 46,784,558 | 47,337,318 | 47,533,264 | 47,634,419 | 47,689,655 | 47,724,678 | 47,742,304 |
| 1999 | 38,954,484 | 46,045,718 | 48,882,924 | 50,219,672 | 50,729,292 | 50,926,779 | 51,069,285 | 51,163,540 | 51,185,767 | |
| 2000 | 41,155,776 | 49,371,478 | 52,358,476 | 53,780,322 | 54,303,086 | 54,582,950 | 54,742,188 | 54,837,929 | | |
| 2001 | 42,394,069 | 50,584,112 | 53,704,296 | 55,150,118 | 55,895,583 | 56,156,727 | 56,299,562 | | | |
| 2002 | 44,755,243 | 52,971,643 | 56,102,312 | 57,703,851 | 58,363,564 | 58,592,712 | | | | |
| 2003 | 45,163,102 | 52,497,731 | 55,468,551 | 57,015,411 | 57,565,344 | | | | | |
| 2004 | 45,417,309 | 52,640,322 | 55,553,673 | 56,976,657 | | | | | | |
| 2005 | 46,360,869 | 53,790,061 | 56,786,410 | | | | | | | |
| 2006 | 46,582,684 | 54,641,339 | | | | | | | | |
| 2007 | 48,853,563 | | | | | | | | | |

Next, age-to-age factors are determined by calculating the ratio of losses at successive valuation dates. From 24 months to 36 months, accident year 1998 losses increased from 43,169,009 to 45,568,919, so the corresponding age-to-age factor is 45,568,919 / 43,169,009 = 1.056. A “tail factor” is selected (in this case, 1.000) to project from the latest valuation age to ultimate.

| Accident year | 12-24 | 24-36 | 36-48 | 48-60 | 60-72 | 72-84 | 84-96 | 96-108 | 108-120 | To ult |
|---|---|---|---|---|---|---|---|---|---|---|
| 1998 | 1.166 | 1.056 | 1.027 | 1.012 | 1.004 | 1.002 | 1.001 | 1.001 | 1.000 | |
| 1999 | 1.182 | 1.062 | 1.027 | 1.010 | 1.004 | 1.003 | 1.002 | 1.000 | | |
| 2000 | 1.200 | 1.061 | 1.027 | 1.010 | 1.005 | 1.003 | 1.002 | | | |
| 2001 | 1.193 | 1.062 | 1.027 | 1.014 | 1.005 | 1.003 | | | | |
| 2002 | 1.184 | 1.059 | 1.029 | 1.011 | 1.004 | | | | | |
| 2003 | 1.162 | 1.057 | 1.028 | 1.010 | | | | | | |
| 2004 | 1.159 | 1.055 | 1.026 | | | | | | | |
| 2005 | 1.160 | 1.056 | | | | | | | | |
| 2006 | 1.173 | | | | | | | | | |
| 2007 | | | | | | | | | | |
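
These link ratios can be reproduced directly from the triangle. As a quick check, here is a small Python sketch for the 1998 accident year, with the values copied from the triangle above:

```python
# Age-to-age factors for accident year 1998, reproducing the first row
# of the table above (values copied from the claims triangle).
row_1998 = [37_017_487, 43_169_009, 45_568_919, 46_784_558, 47_337_318,
            47_533_264, 47_634_419, 47_689_655, 47_724_678, 47_742_304]

age_to_age = [row_1998[j + 1] / row_1998[j] for j in range(len(row_1998) - 1)]
print([round(f, 3) for f in age_to_age])
# [1.166, 1.056, 1.027, 1.012, 1.004, 1.002, 1.001, 1.001, 1.0]
```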

Finally, averages of the age-to-age factors are calculated. Judgmental selections are made after observing several averages. The age-to-age factors are then multiplied together to obtain cumulative development factors.

Sometimes, where the development in a particular period is clearly an outlier, that age-to-age factor is excluded from the averages so that anomalies in the data do not distort the selections.

| Averaging method | 12-24 | 24-36 | 36-48 | 48-60 | 60-72 | 72-84 | 84-96 | 96-108 | 108-120 | To ult |
|---|---|---|---|---|---|---|---|---|---|---|
| Simple average, last 5 years | 1.168 | 1.058 | 1.027 | 1.011 | 1.004 | 1.003 | 1.002 | 1.001 | 1.000 | |
| Simple average, last 3 years | 1.164 | 1.056 | 1.027 | 1.012 | 1.005 | 1.003 | 1.002 | 1.001 | 1.000 | |
| Volume weighted, last 5 years | 1.168 | 1.058 | 1.027 | 1.011 | 1.004 | 1.003 | 1.002 | 1.001 | 1.000 | |
| Volume weighted, last 3 years | 1.164 | 1.056 | 1.027 | 1.012 | 1.005 | 1.003 | 1.002 | 1.001 | 1.000 | |
| Selected | 1.164 | 1.056 | 1.027 | 1.012 | 1.005 | 1.003 | 1.002 | 1.001 | 1.000 | 1.000 |
| Cumulative to ultimate | 1.292 | 1.110 | 1.051 | 1.023 | 1.011 | 1.006 | 1.003 | 1.001 | 1.000 | 1.000 |
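
The “Selected” and “Cumulative to ultimate” rows follow mechanically once the judgmental picks are made: each cumulative factor is the running product of the selected factors from that age out through the tail. A short sketch that reproduces two of the figures above:

```python
# Simple average of the last three 12-24 factors (2004-2006 accident years):
print(round((1.159 + 1.160 + 1.173) / 3, 3))  # 1.164, the selected 12-24 factor

# Cumulative-to-ultimate factors: running product of the selected
# age-to-age factors (including the 1.000 tail), taken from the right.
selected = [1.164, 1.056, 1.027, 1.012, 1.005, 1.003, 1.002, 1.001, 1.000, 1.000]
cdfs, running = [], 1.0
for factor in reversed(selected):
    running *= factor
    cdfs.append(round(running, 3))
cdfs.reverse()
print(cdfs)  # [1.292, 1.11, 1.051, 1.023, 1.011, 1.006, 1.003, 1.001, 1.0, 1.0]
```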

The cumulative development factors are multiplied by the reported (or paid) losses to project ultimate losses.

| Accident year | Reported claims | Development factor to ultimate | Projected ultimate claims |
|---|---|---|---|
| 1998 | 47,742,304 | 1.000 | 47,742,304 |
| 1999 | 51,185,767 | 1.000 | 51,185,767 |
| 2000 | 54,837,929 | 1.001 | 54,892,767 |
| 2001 | 56,299,562 | 1.003 | 56,468,461 |
| 2002 | 58,592,712 | 1.006 | 58,944,268 |
| 2003 | 57,565,344 | 1.011 | 58,198,563 |
| 2004 | 56,976,657 | 1.023 | 58,287,120 |
| 2005 | 56,786,410 | 1.051 | 59,682,517 |
| 2006 | 54,641,339 | 1.110 | 60,651,886 |
| 2007 | 48,853,563 | 1.292 | 63,118,803 |
| Total | 543,481,587 | | 569,172,456 |

Incurred but not reported (IBNR) claims are obtained by subtracting reported losses from projected ultimate losses, in this case 569,172,456 – 543,481,587 = 25,690,869. This IBNR estimate is discussed further with management and is shown as a liability on the company’s balance sheet.
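
The projection and the IBNR figure can be reproduced in a few lines of Python from the latest diagonal and the cumulative development factors (values transcribed from the tables above):

```python
# Latest diagonal of reported claims (accident years 1998-2007) and the
# selected cumulative development factors to ultimate.
reported = [47_742_304, 51_185_767, 54_837_929, 56_299_562, 58_592_712,
            57_565_344, 56_976_657, 56_786_410, 54_641_339, 48_853_563]
cdfs = [1.000, 1.000, 1.001, 1.003, 1.006, 1.011, 1.023, 1.051, 1.110, 1.292]

ultimates = [claims * cdf for claims, cdf in zip(reported, cdfs)]
ibnr = sum(ultimates) - sum(reported)
print(round(sum(ultimates)))  # 569172456
print(round(ibnr))            # 25690869
```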

Chain Ladder Method vs. Bornhuetter-Ferguson Method

The Chain Ladder Method is often the go-to for actuaries when they have a lot of consistent, reliable claims data. It basically assumes that how claims developed in the past is a good guide for how they’ll develop in the future. But that’s also its biggest limitation—if the past isn’t a good predictor (say, for a new line of business or a year with unusual events), the method can lead to misleading results. That’s where the Bornhuetter-Ferguson Method comes in handy. Instead of relying solely on past claims patterns, it combines those patterns with a separate, expected view of what the total losses should be. In other words, it balances what has actually happened with what was expected to happen. This makes it especially useful when you don’t fully trust the early data or when the loss patterns are too erratic to rely on alone.
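
In formula terms, Bornhuetter-Ferguson takes reported claims as given and adds only the expected unreported portion: ultimate = reported + a-priori ultimate × (1 − 1/CDF). A minimal sketch, where the 60,000,000 a-priori ultimate for accident year 2007 is a hypothetical input (the reported amount and CDF come from the example above):

```python
# Bornhuetter-Ferguson: actual reported claims plus the expected
# still-unreported share of an independent a-priori ultimate.
def bf_ultimate(reported, apriori_ultimate, cdf_to_ultimate):
    pct_unreported = 1.0 - 1.0 / cdf_to_ultimate
    return reported + apriori_ultimate * pct_unreported

# Accident year 2007: reported claims and CDF from the chain-ladder example;
# the 60,000,000 a-priori ultimate is a hypothetical assumption.
print(round(bf_ultimate(48_853_563, 60_000_000, 1.292)))  # roughly 62.4 million
```

Compare this with the pure chain-ladder ultimate of 63,118,803 for 2007: the Bornhuetter-Ferguson estimate leans partly on the a-priori figure rather than grossing up the immature reported amount in full.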

Chain Ladder Method vs. Expected Loss Ratio Method

The Expected Loss Ratio Method works a bit differently. Instead of looking at past claims to make future predictions, it starts with a forward-looking view—usually based on pricing or underwriting expectations—and sticks with that, regardless of what’s happened so far. It’s a useful approach when there’s not much claims data available, like early in a policy year or with brand-new products. The Chain Ladder Method, by contrast, only works well when you’ve got a good amount of credible historical data and the development patterns have been fairly stable. It updates estimates as more claims come in, which is great for mature books of business. But if you’re working with fresh or unpredictable data, the ELR method might be the safer bet because it’s less reactive and more controlled.
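
For comparison, here is a minimal sketch of the expected loss ratio approach; the premium and loss-ratio figures are hypothetical and not taken from the example above:

```python
# Expected loss ratio (ELR) method: ignore observed development entirely
# and derive expected unpaid claims from a prior loss-ratio assumption.
def elr_unpaid(earned_premium, expected_loss_ratio, paid_to_date):
    expected_ultimate = earned_premium * expected_loss_ratio
    return expected_ultimate - paid_to_date

# Hypothetical figures: 80m earned premium, 65% expected loss ratio,
# 10m paid to date.
print(round(elr_unpaid(80_000_000, 0.65, 10_000_000)))  # 42000000 expected unpaid
```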


Source: Wikipedia

