Predicting the unpredictable: Considerations for rate filing support when implementing predictive models

By Eric P. Krafcheck | 30 June 2016

It has been a long road developing your company’s newest pricing model: you have prepped the data, explored relationships among rating variables, and worked through multiple model iterations before arriving at one you believe is highly predictive. You have selected rating factors that you believe are actuarially reasonable and that address management’s business objectives, demonstrated that the model corrects the current misalignment between rate and risk in the portfolio, and received management’s buy-in and approval to move forward with the next phase of the project: implementing the new pricing model.

Even if you have a small army of staff whose primary responsibilities revolve around the submission and approval of rate filings, you will be the primary owner of the predictive model. As such, you will be responsible for providing support and responding to regulators’ objections concerning the model. Determining what support should be included in a filing can be difficult because most states give little direction or have no explicit instructions as to what support is required. Further, employee turnover and the lack of proper documentation may make it difficult to remember what support has been successful in the past for your company. And if this is your company’s first implementation of a predictive model, there may be little to no expertise available in the company on which to rely. So what types of predictive modeling support should be included in a filing to minimize the objections and amount of time needed to receive an approval from a regulator?

One size does not fit all

Unfortunately, the answer to that question is not as simple as it seems because the level of predictive modeling support required varies widely from state to state. Some departments of insurance may require nothing more than the rating factors indicated by the predictive model. In contrast, others may require extensive support, including explanations for any proposed factors that deviate from the indicated, a description of the modeling process and the modeler’s qualifications, and the submission of various goodness-of-fit measures.

The support needed varies widely by state for a few reasons. Mainly, regulators have varying interpretations of the Casualty Actuarial Society’s Statement of Principles Regarding Property and Casualty Insurance Ratemaking, which in part states that an actuarially sound rate should not be excessive or inadequate. But what exactly constitutes an excessive rate? If the predictive model indicates a rating factor of 1.24 and a rating factor of 1.25 is selected, is the proposed rate excessive? Whereas some regulators may see the rounded 25% surcharge as reasonable, others may say it results in an excessive rate and require additional support for the selected 1.25 rating factor.

Regulators may also require a variety of support because of the black-box nature of predictive models. Unlike with traditional one-way analyses, regulators cannot readily trace the derivation of the rating factors produced by a predictive model. Further, why is the proposed model better than other models? Does it fit the data well and also predict well? What adjustments were made to the data before modeling even began? Are there large losses exerting undue influence on the modeled results? Experienced modelers will consider all of these questions and more during the modeling process, but if regulators are provided only the indicated relativities from the predictive model, it is hard for them to assess whether the modeler performed the appropriate due diligence. As a result, regulators may ask for a variety of support in order to confirm that the modeling process was reasonable and carried out by a competent modeler.

Finally, the amount of support required also depends on the regulators’ familiarity with predictive modeling concepts. Naturally, regulators who are more familiar with these concepts will ask for more support in order to assess reasonability. Regulators who lack this experience may have to hire third parties to assess reasonability, or they may not require much support because they may not know the proper questions to ask. It is important to note that regulators face an uphill battle in this regard: the complexity of rating plans, especially for personal lines of business, is growing at a pace that is difficult for regulators to keep up with. This means that the introduction of new modeling techniques, or of third-party data derived using another predictive model, may require longer review times as regulators familiarize themselves with the new approach or data.

Types of support for predictive models

While there are a variety of predictive modeling support documents that a filer can choose to include in a rate filing, some are more standard than others. At the very least, in addition to the indicated rating factors, an actuarial memo should be included with the filing that describes the data used, the adjustments that were made to the data, and the overall modeling process, including a description of the model validation techniques. Additionally, the memo should include a discussion of the predictive modeling method used and give any necessary specifications of the models. For instance, if a generalized linear model (GLM) was used, include model specifications such as the target variable being modeled (e.g., pure premium, loss ratio, frequency, severity, etc.), the error distribution used (e.g., Poisson, Tweedie, etc.), the link function used, and the predictor variables included in the final model. As always, if you are an actuary, consult the Actuarial Standard of Practice No. 41, Actuarial Communications, for other items that you should consider adding to the memo.
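To make the memo’s model specifications concrete, the following is a minimal sketch of fitting a claim-frequency GLM with Python’s statsmodels package. The data, column names, and the choice of a Poisson frequency model with an exposure offset are purely illustrative assumptions, not a prescription for your filing.

```python
# Minimal sketch of a documented GLM specification (hypothetical data and column names).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder policy-level data: claim counts, exposures, and two rating variables.
policies = pd.DataFrame({
    "claim_count":      [0, 1, 0, 2, 0, 1],
    "exposure":         [1.0, 0.5, 1.0, 1.0, 0.8, 1.0],
    "driver_age_group": ["16-24", "25-29", "30-39", "16-24", "30-39", "25-29"],
    "territory":        ["A", "A", "B", "B", "A", "B"],
})

# Frequency model: Poisson error distribution (canonical log link by default),
# earned exposure entering as an offset, and two categorical predictors.
model = smf.glm(
    "claim_count ~ C(driver_age_group) + C(territory)",
    data=policies,
    family=sm.families.Poisson(),
    offset=np.log(policies["exposure"]),
).fit()

# The fitted coefficients exponentiate to indicated multiplicative relativities.
print(model.summary())
print(np.exp(model.params))
```

Documenting these same choices in the memo, in plain language, answers most of the specification questions a regulator is likely to raise.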

Other types of support are useful to have on hand but might not be necessary to provide in every state. For instance, some states require the submission of various goodness-of-fit measures and other model validation statistics, whereas others may not. If you prefer not to provide this information in every state, you should at least have it readily available in an exhibit format. This will save a lot of time down the road if goodness-of-fit information is requested in response to a filing.
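As one example of what such an exhibit might contain, the sketch below computes a decile lift table and a simple Gini index on hypothetical holdout predictions. The data and the specific statistics are assumptions for illustration; your validation exhibits should reflect whatever measures your modeling process actually produced.

```python
# Sketch of a model-validation exhibit: decile lift table and a simple Gini index,
# computed on hypothetical holdout data (everything below is a placeholder).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
holdout = pd.DataFrame({"predicted_pure_premium": rng.gamma(2.0, 100.0, size=1000)})
# Placeholder "actual" losses, loosely correlated with the predictions.
holdout["actual_loss"] = holdout["predicted_pure_premium"] * rng.lognormal(0.0, 0.5, size=1000)

# Decile lift table: sort by prediction and compare average actual loss by decile.
holdout["decile"] = pd.qcut(holdout["predicted_pure_premium"], 10, labels=False) + 1
lift_table = holdout.groupby("decile").agg(
    avg_predicted=("predicted_pure_premium", "mean"),
    avg_actual=("actual_loss", "mean"),
    policy_count=("actual_loss", "size"),
)
print(lift_table)

# Simple (unweighted) Gini index: area between the Lorenz curve of actual losses,
# ordered by prediction, and the line of equality.
order = np.argsort(holdout["predicted_pure_premium"].to_numpy())
cum_loss = np.cumsum(holdout["actual_loss"].to_numpy()[order]) / holdout["actual_loss"].sum()
cum_pol = np.arange(1, len(cum_loss) + 1) / len(cum_loss)
cum_loss = np.insert(cum_loss, 0, 0.0)
cum_pol = np.insert(cum_pol, 0, 0.0)
area_under_lorenz = np.sum((cum_loss[1:] + cum_loss[:-1]) / 2 * np.diff(cum_pol))
print(f"Gini index: {1 - 2 * area_under_lorenz:.3f}")
```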

When providing support, do not always assume that regulators will “connect the dots” on their own if there were several steps involved to derive the proposed rating factors. For instance, if you are proposing rating factors for every possible driver age (e.g., 16 through 100), but the ages of drivers were grouped to achieve more stability in the model (e.g., 16-24, 25-29, etc.), it may be useful for some regulators to see how the final, more granular relativities were determined. In this particular example, the modeler could demonstrate how the volatility in the modeled results was smoothed and then show how the smoothed relativities were interpolated to derive the indicated rating factor for each individual driver age. If extrapolation was used, provide an explanation of the method and show how the extrapolated factors were derived.
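The interpolation step in particular is easy to demonstrate. The sketch below assumes hypothetical smoothed group relativities anchored at each group’s midpoint age and linearly interpolates between them, holding the factors flat beyond the first and last midpoints; the groupings, midpoints, and relativities are placeholders.

```python
# Sketch of deriving per-age factors from grouped model output (hypothetical values):
# anchor each group's smoothed relativity at the group's midpoint age, then linearly
# interpolate between midpoints and hold the endpoints flat.
import numpy as np

group_midpoint_age = np.array([20, 27, 34.5, 44.5, 57, 75])       # e.g., 16-24, 25-29, 30-39, ...
indicated_relativity = np.array([1.85, 1.30, 1.00, 0.95, 0.90, 1.05])

ages = np.arange(16, 101)
interpolated = np.interp(ages, group_midpoint_age, indicated_relativity)

for age in (16, 22, 30, 45, 70, 100):
    print(f"Age {age}: indicated relativity {interpolated[age - 16]:.3f}")
```

Presenting a table like this alongside the grouped model output lets the regulator follow each step from the raw model coefficients to the proposed page of rating factors.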

Additionally, it is often useful for regulators to see that the proposed relativities fall within some range, such as a confidence interval. The range most commonly accepted by regulators is the one bounded by the current and indicated relativities. Selecting factors that fall between the current and indicated rating factors acts as an informal credibility-weighting approach, which many regulators find acceptable. The approach allows the company to move in the direction of the modeled results without making large changes to rating factors. This also reduces policyholder premium impacts, which can be a significant concern for regulators when a filing includes extensive changes to rating factors or to the structure of rating variables.
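A minimal sketch of such an informal credibility weighting is shown below; the current and indicated factors and the 40% weight toward the indicated are hypothetical selections, not standard values.

```python
# Sketch of an informal credibility weighting between current and indicated factors
# (the 40% weight toward the indicated is a hypothetical selection, not a standard).
current_factor = 1.10
indicated_factor = 1.25
weight_to_indicated = 0.40

selected_factor = (weight_to_indicated * indicated_factor
                   + (1 - weight_to_indicated) * current_factor)
print(f"Selected factor: {selected_factor:.3f}")   # 1.160, between current and indicated
```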

Lastly, one-way analyses can sometimes be used for support as a last resort. One-way analyses (in which pure premiums, loss ratios, or other measures are summarized by rating variable) undo the benefit of multivariate analyses because they do not account for correlations between variables. However, regulators with less experience reviewing predictive models or complex actuarial analyses may favor more traditional types of actuarial support. These analyses may be easier for some regulators to understand, which may make them more comfortable with the filing.
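If a one-way exhibit is needed, it can be produced directly from policy-level data. The sketch below summarizes loss ratios by a hypothetical territory variable using pandas; the data and column names are placeholders.

```python
# Sketch of a traditional one-way loss ratio summary by rating variable
# (hypothetical policy-level data; column names are placeholders).
import pandas as pd

policies = pd.DataFrame({
    "territory":      ["A", "A", "B", "B", "C", "C"],
    "earned_premium": [1200, 900, 1500, 1100, 800, 950],
    "incurred_loss":  [700, 400, 1200, 600, 300, 500],
})

one_way = policies.groupby("territory").agg(
    earned_premium=("earned_premium", "sum"),
    incurred_loss=("incurred_loss", "sum"),
)
one_way["loss_ratio"] = one_way["incurred_loss"] / one_way["earned_premium"]
one_way["relativity_to_total"] = one_way["loss_ratio"] / (
    policies["incurred_loss"].sum() / policies["earned_premium"].sum()
)
print(one_way)
```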

Other considerations

As with any project, a little bit of planning can go a long way. When selecting proposed rating factors based on the results of a predictive model, anticipate areas that might cause concerns for regulators. If you were the regulator, would you have enough information to assess whether or not the proposed rates are reasonable? If not, develop additional support. Consider setting up support exhibits that include the current, indicated, and proposed relativities as well as indicated confidence intervals and exposure distributions. While all this information may not be necessary in all states, if the exhibits are already set up to include them, you will save time down the road if and when a regulator asks for them.
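One possible layout for such an exhibit is sketched below; all of the values, the 95% interval, and the column names are hypothetical and would be replaced with your model’s actual output.

```python
# Sketch of a reusable support exhibit (all values hypothetical): one row per level
# of the rating variable, with exposures, current, indicated (plus a confidence
# interval from the model output), and proposed relativities side by side.
import pandas as pd

exhibit = pd.DataFrame({
    "driver_age_group":  ["16-24", "25-29", "30-39"],
    "earned_exposures":  [4_200, 6_800, 12_500],
    "current":           [1.75, 1.25, 1.00],
    "indicated":         [1.85, 1.30, 1.00],
    "indicated_low_95":  [1.70, 1.22, 0.97],
    "indicated_high_95": [2.01, 1.39, 1.03],
    "proposed":          [1.80, 1.28, 1.00],
})
exhibit["proposed_vs_indicated"] = exhibit["proposed"] / exhibit["indicated"] - 1
print(exhibit.to_string(index=False))
```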

Along with planning, proper documentation can also prevent headaches during the filing and objection process. Because it can take up to a year to receive approval for a rate filing in some states, it is important to document the logic used for selections when it is not obvious based on the indicated rating factors. For instance, if business considerations were used in selecting a larger discount than indicated, document the reasoning and any support you may be able to use in addition to the indicated rating factors. Otherwise, you may be scratching your head a year later trying to remember why the selected factor was lower than the indicated. This will save time responding to questions down the road and hopefully reduce the number of days it takes to receive an approval.

When the objections do start rolling in, learn to pick your battles. It is easy to lose sight of the materiality of each item in question when you are in defense mode responding to objections. For instance, if your proposed rating factor is 1.50 but the regulator thinks anything over 1.45 is excessive, consider the cost and effort required to assemble a cogent response in conjunction with the number of policyholders that this particular rating factor affects. Also consider the risk that the response may not be sufficient and may lead to further objections, which could delay the model’s implementation. If there are very few policies affected such that reducing the factor to 1.45 would be immaterial to the overall bottom line, consider complying with the regulator’s request to avoid additional delays in the filing’s approval.
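A rough, back-of-the-envelope calculation can help frame that decision. The sketch below assumes hypothetical policy counts and premiums and approximates the annual premium given up by conceding to the lower factor.

```python
# Back-of-the-envelope materiality check (all inputs hypothetical): estimate the premium
# at stake before deciding whether to defend the 1.50 factor or concede to 1.45.
affected_policies = 350            # policies receiving this rating factor
avg_affected_premium = 1_100.00    # average annual premium for those policies
proposed_factor = 1.50
regulator_cap = 1.45

# Approximation: assumes the factor applies proportionally to the full premium.
premium_change = affected_policies * avg_affected_premium * (
    (regulator_cap - proposed_factor) / proposed_factor
)
print(f"Approximate annual premium impact of conceding: ${premium_change:,.0f}")
```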

Finally, hold internal weekly status meetings with the key stakeholders involved with the filings to discuss strategies regarding how to approach a filing, how to address objections, and how to communicate with regulators. This can also be useful to make sure everyone involved in the filing process is making progress on their assigned tasks, which is especially critical when juggling multiple states at once.

The implementation of a predictive model used for rating purposes can be cumbersome and challenging. However, with the proper planning and preparation, the process will become much more efficient, which can save your company time and money.
