Introduction

Continuing from the 0410 program, we will now vary our assumption by additional variables. Instead of starting with the existing assumption from the “smoothed_assumptions_20180301.RData” file, we will use the adjusted assumption from the “smoothed_assump_adj.RData” file, as it is now more representative of our experience. We will explore adding ClaimType and a calendar year variable.

Prerequisite programs:

0150 - Separate data into train val test.R
0410 - Update existing assumption with glmnet.R

Load packages and datasets

We’ll start by loading and filtering the datasets from the programs listed above.

# 4.2.0 - Load the packages and explore the data
library(dplyr)
library(glmnet)
library(stringr)
library(Matrix)
library(doParallel)
library(ggplot2)
library(plotly)
library(scales)
library(lubridate)
library(kableExtra)
library(knitr)

# Initialize file paths for working directories
data_stored = "C:\\ILTCI_Workshop\\Data"
data_output = "C:\\ILTCI_Workshop\\Output"

# Load in the smoothed assumption we adjusted by gender and duration
load(file = paste0(data_output, "\\", "smoothed_assump_adj.RData"))

# Load prior processed data from the "0150 - Separate data into train val test.R" program 
load(paste0(data_output, "\\", "ctr_data.RData"))

# Apply filters used earlier
ctr_data <- ctr_data %>%
  filter(GroupIndicator == "Ind", 
         ClaimType != "Unk",
         Cov_Type_Bucket == "Comprehensive"
         )

# Convert Gender back to a character format so we can join it onto the data
smoothed_assump_adj$Gender <- as.character(smoothed_assump_adj$Gender)

# Join on the smoothed adjusted assumption from the 
# "0410 - Update existing assumption with glmnet.R" program to the data.
ctr_data_bench <- ctr_data %>% 
  left_join(smoothed_assump_adj,
            by = c("ClaimDuration", "Gender")
            ) %>% 
  mutate(exp_terms_orig = smoothed*Exposure,
         exp_terms = smooth_adj*Exposure)

rm(ctr_data)

head(ctr_data_bench) %>%
  kable("html") %>%
  kable_styling() %>%
  scroll_box(width = "900px", height = "250px")
fake_claim_id GroupIndicator Gender IncurredAgeBucket Incurred_Year ClaimType Region StateAbbr Diagnosis_Category TQ_Status Cov_Type_Bucket Infl_Rider_Bucket EP_Bucket Max_Ben_Bucket ClaimDuration Exposure Terminations start_duration end_duration incurred_date study_start_date current_date Sample V1 smoothed smooth_adj exp_terms_orig exp_terms
1 Ind Female 70 to 74 1991 NH South VA Injury Q Comprehensive GPO 90/100 5 + 103 1 0 102 108 1991-07-01 2000-01-01 2000-02-01 training 353 0.025871 0.0228027 0.025871 0.0228027
1 Ind Female 70 to 74 1991 NH South VA Injury Q Comprehensive GPO 90/100 5 + 104 1 0 102 108 1991-07-01 2000-01-01 2000-03-01 training 354 0.025871 0.0227586 0.025871 0.0227586
1 Ind Female 70 to 74 1991 NH South VA Injury Q Comprehensive GPO 90/100 5 + 106 1 0 102 108 1991-07-01 2000-01-01 2000-05-01 training 356 0.025871 0.0227856 0.025871 0.0227856
1 Ind Female 70 to 74 1991 NH South VA Injury Q Comprehensive GPO 90/100 5 + 107 1 0 102 108 1991-07-01 2000-01-01 2000-06-01 training 357 0.025871 0.0228256 0.025871 0.0228256
4 Ind Male 60 to 64 1991 HHC Northeast CT Stroke Q Comprehensive None 90/100 3 to 4 104 1 0 101 138 1991-07-01 2000-01-01 2000-04-01 validation 104 0.040733 0.0310022 0.040733 0.0310022
4 Ind Male 60 to 64 1991 HHC Northeast CT Stroke Q Comprehensive None 90/100 3 to 4 109 1 0 101 138 1991-07-01 2000-01-01 2000-09-01 validation 109 0.040733 0.0310794 0.040733 0.0310794

Now that we have the expected terminations tagged onto the data, let’s explore the fit of the assumption. We’ve already explored the fit by gender, so let’s look at the impact of adding ‘Claim Type’ and ‘Calendar Year’.

Explore Adding Variables

# 4.2.1.1 - Add Calendar Year column
ctr_data_bench <- ctr_data_bench %>%
  mutate(CY_Yr = year(current_date),  # calendar year variable
         LnCY_Yr = log(year(current_date) - 1999)
         )

# 4.2.1.2 - View fit by ClaimType, which looks like it could use some adjusting
AtoE_train_type <- ctr_data_bench %>%
  filter(Sample == "training") %>%
  group_by(ClaimType) %>%
  summarise(Termination = comma(sum(Terminations)),
            Rate = percent(sum(Terminations)/sum(Exposure)),
            AtoE = round(sum(Terminations)/sum(exp_terms), 2))

kable(AtoE_train_type,"html") %>%
  kable_styling() 
ClaimType Termination Rate AtoE
ALF 2,561 1.88% 0.74
HHC 5,728 2.81% 1.02
NH 4,160 3.11% 1.16

When using a variable such as calendar year in the model, it is important to view the trend and determine how far you should let it extrapolate into the future. You will have to use judgment to determine whether the trend should continue or be held constant.
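For example, one simple way to hold the trend constant is to cap the calendar-year variable at the last observed training year before scoring future periods. A minimal sketch, assuming a hypothetical future-period data set proj_data with a CY_Yr column:

# Hypothetical sketch: freeze the calendar-year trend beyond the training window
max_train_yr <- max(ctr_data_bench$CY_Yr[ctr_data_bench$Sample == "training"])

proj_data <- proj_data %>%  # proj_data is a hypothetical projection data set
  mutate(CY_Yr_capped = pmin(CY_Yr, max_train_yr))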

# 4.2.1.3 - View fit by calendar year 
AtoE_train_CY_Yr <- ctr_data_bench %>%
  filter(Sample == "training") %>%
  group_by(CY_Yr) %>%
  summarise(Termination = comma(sum(Terminations)),
            Rate = percent(sum(Terminations)/sum(Exposure)),
            AtoE = round(sum(Terminations)/sum(exp_terms), 2))

ggplotly(ggplot() + 
           geom_point(data = AtoE_train_CY_Yr, 
                      aes(x = CY_Yr, y = AtoE))+ 
           geom_smooth(data = AtoE_train_CY_Yr, 
                       aes(x = CY_Yr, y = AtoE),
                       method='lm', se = FALSE)+
           scale_y_continuous(limits = c(0.5, 1.75))+
           ggtitle("AtoE by Calendar Year"))

Preprocessing Data

Standardizing Variables

Before we include calendar year, we must first standardize the variable so that the penalization is applied evenly across the coefficients. Typically you should leave binary (0/1) variables alone and standardize the continuous variables. One method of standardization is to normalize a variable by subtracting its mean and dividing by its standard deviation; this centers the variable and gives it a variance of one. For count variables, taking the square root is usually a good method, as there tend to be diminishing returns as the count increases. Transforming a variable to be bounded between 0 and 1 is also a reasonable standardization technique. In the end, the goal is to standardize the variables such that it is plausible that the resulting coefficients are drawn from the same underlying distribution.

For example, small coefficients should occur with high frequency, while large coefficients should occur with low frequency. Non-penalized GLMs are scale invariant, meaning that standardizing the variables will not change your predictions. With a penalized GLM, however, the scale of the variables is important because the penalization puts an extra constraint on the sum of the coefficients. Therefore, if the scales of your variables are all different, the penalization will not be applied evenly across the variables.
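As a small illustration of these options, here is a sketch on made-up vectors (not columns from our data):

# Hypothetical sketch of the standardization options described above
set.seed(1)
x <- rnorm(100, mean = 50, sd = 10)       # a continuous variable
n <- rpois(100, lambda = 4)               # a count variable
z <- runif(100, min = 0, max = 500)       # a variable we want bounded in [0, 1]

x_st <- (x - mean(x)) / sd(x)             # center and scale: mean 0, variance 1
n_st <- sqrt(n)                           # square root damps large counts
z_st <- (z - min(z)) / (max(z) - min(z))  # rescale to the [0, 1] interval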

# 4.2.1.4 - Standardize Calendar Year
LnCY_Yr_avg <- mean(ctr_data_bench[ctr_data_bench$Sample == "training",]$LnCY_Yr)
LnCY_Yr_st_dev <- sd(ctr_data_bench[ctr_data_bench$Sample == "training",]$LnCY_Yr)

ctr_data_bench$LnCY_Yr_st <- (ctr_data_bench$LnCY_Yr - LnCY_Yr_avg)/LnCY_Yr_st_dev

# check that we get a mean=0 and var=1
mean(ctr_data_bench[ctr_data_bench$Sample == "training", ]$LnCY_Yr_st)
## [1] 4.177221e-16
var(ctr_data_bench[ctr_data_bench$Sample == "training", ]$LnCY_Yr_st)
## [1] 1

After standardizing Calendar Year, let’s take a quick visual look at whether there are any trends within cohorts.

# 4.2.1.5 - View fit by calendar year
AtoE_train_g_CY <- ctr_data_bench %>%
  filter(Sample == "training") %>%
  group_by(Gender,
           CY_Yr) %>%
  summarise(Termination = comma(sum(Terminations)),
            Rate = percent(sum(Terminations)/sum(Exposure)),
            AtoE = round(sum(Terminations)/sum(exp_terms), 2))


ggplotly(ggplot() + 
           geom_point(data = AtoE_train_g_CY, 
                      aes(x = CY_Yr, y = AtoE, col = Gender)) + 
           geom_smooth(data = AtoE_train_g_CY, method = 'lm', se = FALSE,
                       aes(x = CY_Yr, y = AtoE, col = Gender)) +
           scale_y_continuous(limits = c(0.5, 1.75)) +
           ggtitle("AtoE by Calendar Year and gender"))
# View ClaimType and calendar year. We can see that ALF trends slightly upward 
# while HHC and NH trend downward, with HHC having a large downward trend.
AtoE_train_type_CY <- ctr_data_bench %>%
  filter(Sample == "training") %>%
  group_by(ClaimType,
           CY_Yr) %>%
  summarise(Termination = comma(sum(Terminations)),
            Rate = percent(sum(Terminations)/sum(Exposure)),
            AtoE = round(sum(Terminations)/sum(exp_terms), 2))


ggplotly(ggplot() + 
           geom_point(data = AtoE_train_type_CY, 
                      aes(x = CY_Yr, y = AtoE, col = ClaimType)) + 
           geom_smooth(data = AtoE_train_type_CY, 
                       aes(x = CY_Yr, y = AtoE, col = ClaimType),
                       method = 'lm', se = FALSE) +
           scale_y_continuous(limits = c(0.5, 1.75)) +
           ggtitle("AtoE by Calendar Year and ClaimType"))

Just on an eye test, it appears that the AtoEs for ALF diverge from those for HHC and NH as Calendar Year increases.

Creating K-folds

Similar to the 0410 program, we’ll update the fit based on the training data using a penalized GLM. However, now we will explore developing a model with the following variables: ClaimDuration, Gender, ClaimType, and CY_Yr. We will select the optimal penalty value using 10-fold cross-validation (CV). To do this, we first need to create random fold IDs that assign observations to the 10 different folds.

# 4.2.2.1 - Creating K-Fold Buckets

# Use a variable as a place holder for the number of folds
nfolds <- 10

# Grab list of distinct claim IDs since we will create the fold by claim ID 
# in order to avoid splitting one claim into two or more folds
distinct_claim_ids <- ctr_data_bench %>%  
  distinct(fake_claim_id)

# Use seed to make results reproducible and sample 10 folds
set.seed(28)
folds <- data.frame(distinct_claim_ids,
                    fold = sample(nfolds, 
                                  size = nrow(distinct_claim_ids), 
                                  replace = TRUE
                                  )
                    )

# Join folds assignments back to the data
ctr_data_bench <- ctr_data_bench %>%
  left_join(folds, by="fake_claim_id")

rm(distinct_claim_ids,
   folds)

# Verify that each fold has about the same amount of data
round(table(ctr_data_bench$fold)/(nrow(ctr_data_bench)), digits = 2)
## 
##   1   2   3   4   5   6   7   8   9  10 
## 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1

We have successfully attached k-fold identifiers to the data evenly. Before we begin fitting, we should address an issue with our training data.

Editing duration variable

We must bucket durations of 160+ months together because our test data contains later durations that are not in our training data. If we didn’t do this, we wouldn’t have predictions for the later durations in the test data, which would cause the code to break when we score the test data at the end. Later we can try incorporating additional variables and claim duration bucketing.

# 4.2.2.2 - Change values above 160 to 160 exactly
ctr_data_bench <- ctr_data_bench %>%
  mutate(ClaimDuration_v2_ = ifelse(ClaimDuration > 160, 160, ClaimDuration))

# select the variables needed to train and test the model and then aggregate the data 
# to speed up training.
agg_data <- ctr_data_bench %>%
  group_by(fold,
           Gender,
           ClaimDuration_v2_,
           ClaimType,
           LnCY_Yr_st,
           CY_Yr,
           ClaimDuration,
           Sample
           ) %>%
  summarise(exp_terms = sum(exp_terms),
            exp_terms_orig = sum(exp_terms_orig),
            Terminations = sum(Terminations),
            Exposure = sum(Exposure)
            ) %>% data.frame()

rm(ctr_data_bench)

kable(head(agg_data),"html") %>%
  kable_styling() %>%
  scroll_box(width = "900px", height = "250px")
fold Gender ClaimDuration_v2_ ClaimType LnCY_Yr_st CY_Yr ClaimDuration Sample exp_terms exp_terms_orig Terminations Exposure
1 Female 1 ALF -3.5711800 2000 1 training 0.1847716 0.2095773 0 1.642700
1 Female 1 ALF -3.5711800 2000 1 validation 0.0886904 0.1005971 0 0.788496
1 Female 1 ALF -2.2455566 2001 1 training 0.1552082 0.1760449 0 1.379868
1 Female 1 ALF -2.2455566 2001 1 validation 0.0406498 0.0461070 0 0.361394
1 Female 1 ALF -1.4701166 2002 1 validation 0.2233434 0.2533274 0 1.985620
1 Female 1 ALF -0.9199331 2003 1 training 0.3141118 0.3562814 0 2.792590

Now that our data is ready, let’s prepare the inputs for the regression function in the glmnet package. It’s a bit different from what we’ve seen so far: glmnet() takes x, y, and offset inputs instead of a formula like glm() does. As a result, we will have to create a few different data sets; a toy sketch of the two calling styles follows.
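Here is a hedged toy comparison on made-up data (not our claims experience) of the same Poisson model with an exposure offset, fit with glm()'s formula interface and with glmnet()'s matrix interface:

# Hypothetical toy data for illustration only
set.seed(2)
toy <- data.frame(y     = rpois(50, 2),
                  g     = factor(sample(c("F", "M"), 50, replace = TRUE)),
                  expos = runif(50, 0.5, 2))

# glm(): the offset is part of the formula interface
fit_glm <- glm(y ~ g, offset = log(expos), family = poisson, data = toy)

# glmnet(): we pass a design matrix, a response vector, and the offset separately
x_toy <- model.matrix(~ g, data = toy)
fit_net <- glmnet(x = x_toy, y = toy$y, offset = log(toy$expos), family = "poisson")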

# 4.2.3 - Getting Data Structures for glmnet

# x: Select our independent variables 
ind_vars <- agg_data %>% 
  filter(Sample == "training") %>%
  select(Gender,
         ClaimType,
         ClaimDuration_v2_,
         LnCY_Yr_st
         ) 

# y: Grab our terminations
terminations <- agg_data %>% 
  filter(Sample == "training") %>%
  select(Terminations)

# offset: create expected termination offset
offset_cal <- agg_data %>% 
  filter(Sample == "training") %>%
  select(exp_terms) %>% 
  log() %>% # take the log since we are using a log-link function in the GLM
  as.matrix() 

Glmnet Model

One-Hot Encoding

Next we need to prep our independent variables for modeling. For all categorical variables we will build indicators for every level, which is known as one-hot encoding (building a full suite of dummy variables). This is different from what is typically done with a non-penalized GLM, where one would code one of the levels as a reference level. One-hot encoding lets the penalization determine which variables should be squeezed into the reference level. You can still choose a reference level yourself if you want, but this is not necessary with a penalized regression; unlike an OLS regression, it can actually do the math when there is perfect multicollinearity. One-hot encoding is also applicable to other machine learning methods such as tree-based models. We will also store the data in a sparse matrix via sparse.model.matrix() from the Matrix package, which is an efficient way to store data: it records only the non-zero elements, saving a lot of space. Not every package works with the sparse matrix format, but glmnet does.
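Here is a hedged toy example (made-up factors, not our data) of what the contrasts.arg trick does: by default, factors after the first lose a reference level, while contrasts = FALSE keeps an indicator for every level.

# Hypothetical toy factors to illustrate one-hot encoding
toy_f <- data.frame(g  = factor(c("F", "M", "F")),
                    ct = factor(c("ALF", "HHC", "NH")))

# Default contrasts: the second factor loses its reference level (no ctALF column)
colnames(sparse.model.matrix(~ -1 + g + ct, data = toy_f))

# contrasts = FALSE: an indicator for every level of every factor (one-hot)
colnames(sparse.model.matrix(~ -1 + g + ct, data = toy_f,
                             contrasts.arg = lapply(toy_f, contrasts, contrasts = FALSE)))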

First we need to format all the categorical variables as factors to help the sparse.model.matrix() function determine which variables to one-hot encode. We do this by listing out the variables we want to be factors.

factor_vars <- c('Gender',
                 'ClaimType',
                 'ClaimDuration_v2_')

# Now update the variables in our list to be factors
ind_vars[, factor_vars] <- lapply(ind_vars[, factor_vars], as.factor)

Let’s now specify the model as a formula.

# 4.2.4 - Create model
glm_formula <- formula(~ -1 # drop the intercept since glmnet has an argument for this 
                       + Gender
                       + ClaimType
                       + LnCY_Yr_st
                       + Gender:ClaimType
                       + Gender:LnCY_Yr_st
                       + ClaimType:LnCY_Yr_st
                       + ClaimDuration_v2_:Gender
                       + ClaimDuration_v2_:ClaimType
                       )

In this next step, we one-hot encode the factor variables stored in the factor_vars list, build out the model design matrix specified by glm_formula above, and store that design matrix in a sparse format that only records the rows and columns of the non-zero elements.

x_train_sp <- sparse.model.matrix(glm_formula,
                                  data = ind_vars,
                                  # This contrast call allows us to create indicators 
                                  # for every level of variables that have a factor format
                                  contrasts.arg = lapply(ind_vars[, sapply(ind_vars, is.factor)], 
                                                         contrasts, 
                                                         contrasts=FALSE)
                                  ) 

# store y as sparse matrix
term_sp <- terminations %>% 
  as.matrix() %>% 
  Matrix(., sparse=TRUE)



# Grab fold IDs and store them as a vector since glmnet needs them in this format for 
# the k-fold CV
fold_ID <- agg_data %>%
  filter(Sample == "training") %>%
  select(., fold) %>% 
  t() %>% as.vector()

Hyperparameter Selection

Now that we have added some more variables and interactions, we’ll see whether any of them are unnecessary. We’ll use an alpha value between 0 and 1. This is known as an elastic net regression, which provides automatic feature selection by shrinking non-significant coefficients to zero. When alpha is close to 0 it sets few variables to zero, and as alpha moves closer to 1 it sets more variables to zero. Since the lasso penalty (alpha = 1) tends to have trouble when faced with multicollinearity, it is good practice to use an alpha value less than 1.0, which guards against multicollinearity by blending in the ridge penalty. The downfall of the lasso is that when there is multicollinearity within a group of variables, it tends to randomly select one variable in the group and set the rest to zero, which can produce wild coefficients in your model.
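For reference, the objective glmnet minimizes (per its documentation) is

$$\min_{\beta}\; -\frac{1}{N}\,\ell(\beta) \;+\; \lambda\left[\frac{1-\alpha}{2}\,\lVert\beta\rVert_2^2 \;+\; \alpha\,\lVert\beta\rVert_1\right],$$

where $\ell(\beta)$ is the log-likelihood (Poisson in our case), $\alpha = 0$ gives the pure ridge penalty, and $\alpha = 1$ the pure lasso.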

Typically the results aren’t too sensitive to the alpha value when it is varied between 0 and 1, so let’s use a value of 0.5, which gives us a 50/50 blend of ridge and lasso.

# 4.2.5 - Check how our coefficients change as we vary the lambda penalty

my_alpha = 0.5

glmnet_model1 <- glmnet(x = x_train_sp,
                        y = term_sp,
                        family = "poisson",
                        offset = offset_cal,
                        alpha = my_alpha,
                        standardize = FALSE, # set to false since binary variables shouldn't be standardized, 
                                            # only continuous ones should be
                        intercept = FALSE # set intercept to false since the intercept term is not penalized in the model and 
                                          # we only want to adjust the assumption if there is credible enough data to do so
                        )

# plot coefficient path
plot.glmnet(glmnet_model1, xvar = "lambda")
title(main = "Coefficients Across the Penalty Path\n\n")

Unfortunately, most of the time the default lambda sequence doesn’t show the full lambda path. Let’s grab the lambda sequence that was used above and store it for use with the k-fold CV. We’ll stretch the lambda sequence a bit, since the auto-generated sequence usually doesn’t cover the full range of penalties (from no penalty to full penalty).

my_lambda <- exp(
  seq(from=log(glmnet_model1$lambda[1]),
      to=log(glmnet_model1$lambda[1]*1e-8),
      length.out=100
  )
)



# Now let's view how our coefficients change with the full lambda sequence
glmnet_model2 <- glmnet(x = x_train_sp,
                        y = term_sp,
                        family = "poisson",
                        offset = offset_cal,
                        alpha = my_alpha,
                        lambda = my_lambda,
                        standardize = FALSE,
                        intercept = FALSE 
)

# plot coefficient path across the lambda penalties
# Note that this function plots the coefficients in log scale since
# they were fit using a log-link function
plot.glmnet(glmnet_model2, xvar = "lambda", ylab = "Coefficients (Log Scale)")
title(main = "Coefficients Across the Penalty Path\n\n")

Right now the coefficients are on a log scale. Let’s convert them back.

# We can use this hack to convert them into factor adjustments
factor_coef_data <- glmnet_model2
factor_coef_data$beta <- exp(factor_coef_data$beta)
plot.glmnet(factor_coef_data, xvar = "lambda", ylab = "Coefficients (Factor Scale)")
title(main = "Coefficients Across the Penalty Path\n\n")

# Save the coefficient path in a CSV for easier viewing
coeff_seq <- coef(glmnet_model2, s = my_lambda) %>% 
  exp() %>% as.matrix() %>% data.frame() 

# Save as a csv file 
write.csv(coeff_seq, file = paste0(data_output, "\\", "0420_Full_coefficient_path.csv"))

Utilizing Parallel Computing

Create new instances of R for parallel processing of the cross-validation via the registerDoParallel() function in the doParallel package. Note: do not use more processes than there are folds, and do not use more processes than the number of cores on your computer; otherwise you create a traffic jam and slow things down.
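Rather than hard-coding a worker count, one defensive sketch (using the base parallel package, which ships with R) caps it at both the fold count and the available cores; below we simply register 6.

# Hypothetical sketch: pick a safe number of workers, leaving one core for the OS
n_workers <- min(nfolds, parallel::detectCores() - 1)
registerDoParallel(n_workers)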

# 4.2.7.1 - 10-fold cross-validation to select the optimal lambda

# Will utilize 6 cores
registerDoParallel(6)

glmnet_model2_cv <- cv.glmnet(x = x_train_sp,
                              y = term_sp,
                              family = "poisson",
                              offset = offset_cal,
                              alpha = my_alpha,
                              foldid = fold_ID,
                              lambda = my_lambda,
                              parallel = TRUE,
                              standardize = FALSE,
                              intercept = FALSE
                              )

# plot cross-validation results
plot(glmnet_model2_cv)
title(main = "K-fold Cross-validation Performance Across the Penalty Path\n\n")

# 4.2.7.2 - plot coefficient path across the lambda penalties and add
# a red line that shows the optimal penalty chosen by the 10-fold CV
plot.glmnet(factor_coef_data, xvar = "lambda", ylab = "Coefficients (Factor Scale)")
abline(v = log(glmnet_model2_cv$lambda.min), col = "red")
title(main = "Coefficients Across the Penalty Path\n\n")

Let’s zoom in closer to find the minimum log lambda.

plot.glmnet(factor_coef_data, xlim = c(-8, log(max(glmnet_model2_cv$lambda))),
            ylim = c(0.3, 1.5),
            xvar = "lambda", ylab = "Coefficients (Factor Scale)")
abline(v = log(glmnet_model2_cv$lambda.min), col = "red")
title(main = "Coefficients Across the Penalty Path\n\n")

Let’s extract the coefficients and predictions for the lambda that produces the lowest cross-validation error, as well as for the lambda that is one standard error from the minimum.

# 4.2.8 - Now let's extract our Coefficients and predictions

# This one corresponds to the lambda that produces the min cross-validation error
Coeff_lambda_min <- coef(glmnet_model2_cv, s="lambda.min") %>% 
  as.matrix() %>% data.frame()

# This one corresponds to the lambda that is one standard error from the min cross-validation error
Coeff_lambda_1SE <- coef(glmnet_model2_cv, s="lambda.1se") %>% 
  as.matrix() %>% data.frame()

compare <- data.frame(Coeff_lambda_min[,-1],
                      lambda_min = exp(Coeff_lambda_min$X1),
                      lambda_1se = exp(Coeff_lambda_1SE$X1))

kable(compare,"html") %>%
  kable_styling() %>%
  scroll_box(width = "900px", height = "250px")
lambda_min lambda_1se
(Intercept) 1.0000000 1.0000000
GenderFemale 1.0000000 1.0000000
GenderMale 0.9731154 0.9834309
ClaimTypeALF 0.9307383 0.8189183
ClaimTypeHHC 0.9660393 1.0000000
ClaimTypeNH 1.1501766 1.1470541
LnCY_Yr_st 0.9996235 1.0000000
GenderFemale:ClaimTypeALF 0.9101371 0.9342677
GenderMale:ClaimTypeALF 1.0000000 1.0000000
GenderFemale:ClaimTypeHHC 1.0000000 1.0306285
GenderMale:ClaimTypeHHC 0.9299237 0.9826148
GenderFemale:ClaimTypeNH 1.0088562 1.0002806
GenderMale:ClaimTypeNH 1.0000000 1.0000000
GenderFemale:LnCY_Yr_st 0.9728690 0.9791129
GenderMale:LnCY_Yr_st 1.0000000 1.0000000
ClaimTypeALF:LnCY_Yr_st 1.0285531 1.0272428
ClaimTypeHHC:LnCY_Yr_st 0.9489899 0.9414168
ClaimTypeNH:LnCY_Yr_st 0.9992327 1.0000000
GenderFemale:ClaimDuration_v2_1 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_1 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_2 0.9817422 1.0000000
GenderMale:ClaimDuration_v2_2 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_3 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_3 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_4 0.9745765 1.0000000
GenderMale:ClaimDuration_v2_4 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_5 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_5 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_6 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_6 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_7 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_7 1.0063991 1.0000000
GenderFemale:ClaimDuration_v2_8 1.0219032 1.0000000
GenderMale:ClaimDuration_v2_8 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_9 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_9 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_10 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_10 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_11 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_11 0.9755189 1.0000000
GenderFemale:ClaimDuration_v2_12 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_12 1.0950452 1.0000000
GenderFemale:ClaimDuration_v2_13 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_13 1.0071254 1.0000000
GenderFemale:ClaimDuration_v2_14 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_14 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_15 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_15 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_16 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_16 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_17 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_17 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_18 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_18 1.0001250 1.0000000
GenderFemale:ClaimDuration_v2_19 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_19 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_20 1.0056601 1.0000000
GenderMale:ClaimDuration_v2_20 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_21 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_21 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_22 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_22 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_23 0.9698099 1.0000000
GenderMale:ClaimDuration_v2_23 1.1488285 1.0000000
GenderFemale:ClaimDuration_v2_24 1.0488280 1.0000000
GenderMale:ClaimDuration_v2_24 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_25 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_25 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_26 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_26 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_27 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_27 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_28 0.9963084 1.0000000
GenderMale:ClaimDuration_v2_28 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_29 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_29 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_30 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_30 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_31 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_31 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_32 0.9934116 1.0000000
GenderMale:ClaimDuration_v2_32 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_33 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_33 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_34 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_34 1.1239583 1.0000000
GenderFemale:ClaimDuration_v2_35 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_35 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_36 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_36 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_37 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_37 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_38 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_38 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_39 1.0619364 1.0000000
GenderMale:ClaimDuration_v2_39 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_40 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_40 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_41 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_41 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_42 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_42 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_43 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_43 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_44 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_44 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_45 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_45 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_46 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_46 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_47 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_47 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_48 0.9991533 1.0000000
GenderMale:ClaimDuration_v2_48 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_49 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_49 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_50 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_50 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_51 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_51 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_52 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_52 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_53 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_53 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_54 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_54 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_55 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_55 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_56 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_56 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_57 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_57 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_58 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_58 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_59 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_59 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_60 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_60 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_61 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_61 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_62 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_62 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_63 1.0320170 1.0000000
GenderMale:ClaimDuration_v2_63 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_64 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_64 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_65 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_65 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_66 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_66 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_67 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_67 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_68 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_68 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_69 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_69 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_70 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_70 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_71 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_71 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_72 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_72 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_73 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_73 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_74 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_74 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_75 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_75 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_76 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_76 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_77 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_77 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_78 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_78 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_79 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_79 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_80 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_80 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_81 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_81 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_82 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_82 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_83 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_83 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_84 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_84 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_85 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_85 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_86 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_86 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_87 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_87 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_88 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_88 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_89 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_89 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_90 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_90 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_91 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_91 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_92 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_92 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_93 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_93 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_94 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_94 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_95 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_95 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_96 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_96 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_97 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_97 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_98 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_98 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_99 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_99 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_100 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_100 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_101 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_101 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_102 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_102 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_103 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_103 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_104 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_104 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_105 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_105 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_106 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_106 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_107 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_107 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_108 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_108 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_109 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_109 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_110 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_110 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_111 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_111 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_112 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_112 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_113 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_113 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_114 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_114 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_115 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_115 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_116 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_116 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_117 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_117 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_118 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_118 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_119 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_119 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_120 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_120 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_121 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_121 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_122 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_122 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_123 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_123 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_124 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_124 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_125 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_125 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_126 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_126 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_127 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_127 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_128 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_128 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_129 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_129 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_130 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_130 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_131 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_131 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_132 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_132 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_133 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_133 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_134 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_134 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_135 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_135 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_136 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_136 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_137 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_137 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_138 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_138 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_139 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_139 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_140 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_140 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_141 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_141 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_142 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_142 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_143 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_143 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_144 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_144 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_145 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_145 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_146 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_146 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_147 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_147 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_148 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_148 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_149 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_149 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_150 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_150 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_151 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_151 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_152 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_152 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_153 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_153 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_154 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_154 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_155 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_155 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_156 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_156 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_157 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_157 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_158 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_158 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_159 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_159 1.0000000 1.0000000
GenderFemale:ClaimDuration_v2_160 1.0000000 1.0000000
GenderMale:ClaimDuration_v2_160 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_1 0.5775477 1.0000000
ClaimTypeHHC:ClaimDuration_v2_1 1.2200077 1.0000000
ClaimTypeNH:ClaimDuration_v2_1 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_2 0.7204061 1.0000000
ClaimTypeHHC:ClaimDuration_v2_2 1.1544503 1.0000000
ClaimTypeNH:ClaimDuration_v2_2 0.9904640 1.0000000
ClaimTypeALF:ClaimDuration_v2_3 0.4822451 0.9152148
ClaimTypeHHC:ClaimDuration_v2_3 1.3074751 1.0703500
ClaimTypeNH:ClaimDuration_v2_3 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_4 0.3888119 0.7003168
ClaimTypeHHC:ClaimDuration_v2_4 1.4574384 1.2171058
ClaimTypeNH:ClaimDuration_v2_4 0.9307032 1.0000000
ClaimTypeALF:ClaimDuration_v2_5 0.5377169 0.8698059
ClaimTypeHHC:ClaimDuration_v2_5 1.2014169 1.0000000
ClaimTypeNH:ClaimDuration_v2_5 1.0431206 1.0000000
ClaimTypeALF:ClaimDuration_v2_6 0.6166669 0.9656811
ClaimTypeHHC:ClaimDuration_v2_6 1.2586082 1.0253191
ClaimTypeNH:ClaimDuration_v2_6 1.0078748 1.0000000
ClaimTypeALF:ClaimDuration_v2_7 0.7422965 1.0000000
ClaimTypeHHC:ClaimDuration_v2_7 1.1114513 1.0000000
ClaimTypeNH:ClaimDuration_v2_7 1.1055824 1.0000000
ClaimTypeALF:ClaimDuration_v2_8 0.7091063 1.0000000
ClaimTypeHHC:ClaimDuration_v2_8 1.1593676 1.0000000
ClaimTypeNH:ClaimDuration_v2_8 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_9 0.7629176 1.0000000
ClaimTypeHHC:ClaimDuration_v2_9 1.0038607 1.0000000
ClaimTypeNH:ClaimDuration_v2_9 1.0495181 1.0000000
ClaimTypeALF:ClaimDuration_v2_10 0.8690047 1.0000000
ClaimTypeHHC:ClaimDuration_v2_10 1.2294474 1.0000000
ClaimTypeNH:ClaimDuration_v2_10 1.0382551 1.0000000
ClaimTypeALF:ClaimDuration_v2_11 0.8210501 1.0000000
ClaimTypeHHC:ClaimDuration_v2_11 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_11 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_12 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_12 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_12 1.0409026 1.0000000
ClaimTypeALF:ClaimDuration_v2_13 1.0940847 1.0000000
ClaimTypeHHC:ClaimDuration_v2_13 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_13 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_14 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_14 1.1521542 1.0000000
ClaimTypeNH:ClaimDuration_v2_14 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_15 0.9060523 1.0000000
ClaimTypeHHC:ClaimDuration_v2_15 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_15 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_16 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_16 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_16 0.9807524 1.0000000
ClaimTypeALF:ClaimDuration_v2_17 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_17 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_17 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_18 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_18 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_18 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_19 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_19 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_19 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_20 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_20 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_20 1.0064222 1.0000000
ClaimTypeALF:ClaimDuration_v2_21 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_21 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_21 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_22 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_22 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_22 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_23 0.9697182 1.0000000
ClaimTypeHHC:ClaimDuration_v2_23 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_23 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_24 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_24 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_24 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_25 1.1186242 1.0000000
ClaimTypeHHC:ClaimDuration_v2_25 1.0944619 1.0000000
ClaimTypeNH:ClaimDuration_v2_25 0.9136160 1.0000000
ClaimTypeALF:ClaimDuration_v2_26 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_26 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_26 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_27 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_27 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_27 1.0099601 1.0000000
ClaimTypeALF:ClaimDuration_v2_28 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_28 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_28 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_29 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_29 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_29 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_30 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_30 1.0455999 1.0000000
ClaimTypeNH:ClaimDuration_v2_30 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_31 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_31 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_31 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_32 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_32 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_32 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_33 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_33 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_33 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_34 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_34 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_34 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_35 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_35 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_35 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_36 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_36 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_36 0.9694436 1.0000000
ClaimTypeALF:ClaimDuration_v2_37 1.1387050 1.0000000
ClaimTypeHHC:ClaimDuration_v2_37 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_37 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_38 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_38 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_38 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_39 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_39 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_39 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_40 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_40 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_40 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_41 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_41 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_41 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_42 1.0740473 1.0000000
ClaimTypeHHC:ClaimDuration_v2_42 0.9876656 1.0000000
ClaimTypeNH:ClaimDuration_v2_42 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_43 1.1058551 1.0000000
ClaimTypeHHC:ClaimDuration_v2_43 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_43 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_44 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_44 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_44 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_45 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_45 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_45 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_46 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_46 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_46 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_47 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_47 1.1209407 1.0000000
ClaimTypeNH:ClaimDuration_v2_47 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_48 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_48 0.9829928 1.0000000
ClaimTypeNH:ClaimDuration_v2_48 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_49 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_49 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_49 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_50 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_50 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_50 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_51 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_51 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_51 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_52 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_52 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_52 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_53 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_53 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_53 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_54 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_54 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_54 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_55 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_55 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_55 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_56 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_56 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_56 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_57 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_57 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_57 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_58 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_58 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_58 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_59 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_59 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_59 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_60 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_60 0.9224033 1.0000000
ClaimTypeNH:ClaimDuration_v2_60 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_61 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_61 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_61 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_62 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_62 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_62 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_63 1.0093096 1.0000000
ClaimTypeHHC:ClaimDuration_v2_63 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_63 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_64 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_64 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_64 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_65 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_65 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_65 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_66 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_66 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_66 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_67 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_67 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_67 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_68 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_68 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_68 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_69 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_69 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_69 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_70 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_70 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_70 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_71 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_71 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_71 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_72 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_72 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_72 1.0039611 1.0000000
ClaimTypeALF:ClaimDuration_v2_73 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_73 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_73 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_74 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_74 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_74 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_75 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_75 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_75 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_76 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_76 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_76 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_77 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_77 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_77 1.0000000 1.0000000
ClaimTypeALF:ClaimDuration_v2_78 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_78 1.0000000 1.0000000
...
ClaimTypeALF:ClaimDuration_v2_160 1.0000000 1.0000000
ClaimTypeHHC:ClaimDuration_v2_160 1.0000000 1.0000000
ClaimTypeNH:ClaimDuration_v2_160 1.0000000 1.0000000

(The remaining ClaimType:ClaimDuration_v2 interaction rows are omitted: every coefficient from duration 77 through 160 is 1.0000000 under both the lambda.min and lambda.1se fits, i.e. no adjustment at these late durations.)

We can save the coefficients to a CSV file so we can view them outside of R.

write.csv(compare, file = paste0(data_output, "\\", "0420_Coeff_lambda_min_and_1se.csv"))

Looking at the file, we can see that the model made scalar adjustments similar to the raw AtoE factors we looked at at the beginning of this program. As models grow more complex with additional variables and interactions, reviewing the relationships among the coefficients becomes challenging. One approach for reviewing the final model is to build pseudo observations for every combination of variables in the model and export a prediction for each combination; a sketch of this follows below. From these predicted values you can create base tables of the assumption and review the relationships across cells. Another approach is to review the predictions against the historical experience for reasonableness, which we will do later in this program.
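As a minimal sketch of that pseudo-observation idea (not part of the original program): the names glm_formula, factor_vars, agg_data, and glmnet_model2_cv are assumed from earlier in this program, and the expand.grid() call would need to list every variable referenced in glm_formula and factor_vars, with the same factor levels used to fit the model.

# Build one pseudo observation per model cell (variables shown are illustrative)
pseudo_data <- expand.grid(ClaimType = unique(agg_data$ClaimType),
                           CY_Yr = unique(agg_data$CY_Yr),
                           ClaimDuration_v2 = unique(agg_data$ClaimDuration_v2))
pseudo_data$Terminations <- 0  # dummy response so the model formula resolves
pseudo_data[, factor_vars] <- lapply(pseudo_data[, factor_vars], as.factor)

# Same design-matrix construction as the model fit
x_pseudo <- sparse.model.matrix(glm_formula,
                                data = pseudo_data,
                                contrasts.arg = lapply(pseudo_data[, sapply(pseudo_data, is.factor)],
                                                       contrasts, contrasts = FALSE))

# With a unit offset (log(1) = 0) each prediction is the model's multiplicative
# adjustment to the benchmark for that cell
pseudo_data$adj_factor <- predict(glmnet_model2_cv,
                                  newx = x_pseudo,
                                  newoffset = rep(0, nrow(pseudo_data)),
                                  type = "response",
                                  s = "lambda.1se")[, 1]

The adj_factor column can then be pivoted into base-table form to review the relationships across cells.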

Comparing Models with Different Lambda Values

Now let’s calculate the fit on our various data sets across the full lambda path. Typically we would not run such a test on all of the data sets, since tuning the penalty against every data set would bias our model, and most of the time you will not have enough data to afford both a validation and a test holdout. We provide this illustration to show that k-fold cross-validation is a good indicator of how your model will perform on an out-of-sample data set that was not used to train it. It also illustrates that the choice between lambda.min and lambda.1se can involve some judgment, but usually both get you in the same ballpark.
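For reference, both candidate penalties are stored on the cross-validation object we fit earlier, so they can be inspected directly:

# The two conventional penalty choices live on the cv.glmnet fit
glmnet_model2_cv$lambda.min  # lambda with the lowest mean cross-validated error
glmnet_model2_cv$lambda.1se  # largest lambda within one standard error of that minimum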

# 4.2.9.1 - Function to calculate the deviance on a given sample across the lambda path

devi = function(type) {
  # Subset the data to the requested sample
  agg_vars <- agg_data %>% 
    filter(Sample == type) 
  
  # Now update the variables in our list to be factors
  agg_vars[, factor_vars] <- lapply(agg_vars[, factor_vars], as.factor)
  
  x_vars <- sparse.model.matrix(glm_formula,
                                data = agg_vars,
                                contrasts.arg = lapply(agg_vars[, sapply(agg_vars, 
                                                                         is.factor)], 
                                                       contrasts, contrasts=FALSE)
                                ) 
  
  # Grab our terminations
  y <- agg_data %>% 
    filter(Sample == type) %>%
    select(Terminations)
  
  # create exposure offset
  offset <- agg_data %>% 
    filter(Sample == type) %>%
    select(exp_terms) %>% 
    log() %>% # take the log since we are using a log-link function in the GLM
    as.matrix()
  
  # predict at every lambda level
  eta <- predict(glmnet_model2, newx=x_vars, newoffset=offset, type='link')
  
  # Poisson deviance formula taken from the glmnet source code (on GitHub)
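  # Note that devy - deveta = y*log(y/mu) - (y - mu), so the column means of
  # 2*(devy - deveta) below are the average unit Poisson deviances at each lambda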
  deveta = y$Terminations * eta - exp(eta)
  devy = y$Terminations * log(y$Terminations) - y$Terminations
  devy[y$Terminations == 0] = 0
  
  apply(2*(devy - deveta), 2, mean, na.rm = TRUE)
}
agg_data %>% distinct(Sample)
##       Sample
## 1   training
## 2 validation
## 3    testing
deviance_cal = devi("training")
deviance_val = devi("validation")
deviance_test = devi("testing")

# Find the index of the lambda with the lowest error on each data set
min_lambdas <- data.frame(train.min = which.min(deviance_cal),
                          cv.min = which.min(glmnet_model2_cv$cvm),
                          val.min = which.min(deviance_val),
                          test.min = which.min(deviance_test))

row.names(min_lambdas) <- "Lowest Error Lambda"

We’ll now plot the lambda values associated with the lowest error for the training, validation, testing, and 10-fold cross-validation sets.

# 4.2.9.2 - First look at the training and 10-fold CV error
# we will drop the values on the y-axis since we only care 
# about the location of the lowest error on these curves when 
# trying to select the lambda values
plot(glmnet_model2_cv, 
     ylab='Error', 
     ylim = c(min(deviance_cal), max(glmnet_model2_cv$cvm)), 
     yaxt='n')
title(main = "Performance Across the Penalty Path for Various Data Sets\n\n")

# the "best" lambda corresponds to the lowest point on this curve, which is now shown by the
# vertical red line
abline(v = log(glmnet_model2$lambda[min_lambdas$cv.min]), col = "red")

# 4.2.9.3 - Add on the training error curve
points(log(glmnet_model2$lambda), deviance_cal, col="blue", pch="*")  
legend("topleft", c("10 fold CV", "Train"), pch="*", 
       col=c("red", "blue"), lty=1:3, cex=0.8) 

# As shown by the blue vertical line, the best fit based on the training
# error suggests using no penalty at all, which will always be the case
# when judging fit on the training data.
abline(v = log(glmnet_model2$lambda[min_lambdas$train.min]), col = "blue")


# Add the lambda with the lowest validation error
legend("topleft", c("10 fold CV", "Train", "Validation"), pch="*", 
       col=c("red", "blue", "green"), lty=1:3, cex=0.8) 
abline(v = log(glmnet_model2$lambda[min_lambdas$val.min]), col = "green")


# Add the lambda with the lowest test error
legend("topleft", c("10 fold CV", "Train", "Validation", "Test"), pch="*", 
       col=c("red", "blue", "green", "orange"), lty=1:3, cex=0.8)
abline(v = log(glmnet_model2$lambda[min_lambdas$test.min]), col = "orange")

min_lambdas
##                     train.min cv.min val.min test.min
## Lowest Error Lambda       100     26      27       25

Looking at the lambda index chosen for each, we can see that the 10-fold CV did a good job of picking a lambda value that worked well on the validation and test sets. If we had based our decision on the penalty that produced the lowest training error, we would have used no penalty at all and overfit the data. This highlights the penalized GLM’s ability to automate the bias-variance trade-off.

Using Glmnet to Predict

After reviewing our results, let’s test the fit further by making predictions with the glmnet models at no penalty, at the lambda associated with the minimum loss, and at the lambda one standard error from the minimum loss.

# 4.2.10.1 - Further Exploration

# First we'll re-score all the data 

# Create exposure offset (log of expected terminations, matching the log link)
offset <- agg_data %>%
  mutate(LnExp_Term = log(exp_terms)) %>%
  select(LnExp_Term) %>% 
  as.matrix() 

# Convert our variables in our factor_vars list to factors
agg_data[, factor_vars] <- lapply(agg_data[, factor_vars], as.factor)


# Create our sparse matrix of indicator variables; contrasts = FALSE keeps a
# column for every factor level (no reference level is dropped), which lets
# the penalty treat all levels symmetrically
x_all <- sparse.model.matrix(glm_formula,
                             data = agg_data,
                             contrasts.arg = lapply(agg_data[, sapply(agg_data, is.factor)], 
                                                    contrasts,
                                                    contrasts=FALSE)
                             ) 



# Make some predictions at lambda.1se, lambda.min, and no penalty (s = 0;
# without exact = TRUE, glmnet interpolates along the fitted lambda path,
# which is close enough for this comparison)
lambda_1se <- predict(glmnet_model2_cv,
                      newx = x_all,
                      newoffset = offset,
                      type = 'response',
                      s = "lambda.1se")

lambda_min <- predict(glmnet_model2_cv,
                      newx = x_all,
                      newoffset = offset,
                      type = 'response',
                      s = "lambda.min")

no_penalty <- predict(glmnet_model2_cv,
                      newx = x_all,
                      newoffset = offset,
                      type = 'response',
                      s = 0)


predictions <- data.frame(agg_data,
                          lambda_1se = lambda_1se[,1],
                          lambda_min = lambda_min[,1],
                          no_penalty = no_penalty[,1])
rm(lambda_min,
   no_penalty,
   lambda_1se)

Now that we have some predictions, let’s view the overall AtoE for our three data sets.

# 4.2.10.2 - First view overall AtoEs on our three data sets.
AtoE_sample <- predictions %>%
  group_by(Sample) %>%
  summarise(Termination = comma(sum(Terminations)),
            Rate = percent(sum(Terminations)/sum(Exposure)),
            AtoE_bench = round(sum(Terminations)/sum(exp_terms_orig), 2),
            AtoE_bench_adj = round(sum(Terminations)/sum(exp_terms), 2),
            AtoE_lambda_1se = round(sum(Terminations)/sum(lambda_1se), 2),
            AtoE_lambda_min = round(sum(Terminations)/sum(lambda_min), 2),
            AtoE_no_penalty = round(sum(Terminations)/sum(no_penalty), 2))

kable(AtoE_sample, "html") %>%
  kable_styling() 
Sample Termination Rate AtoE_bench AtoE_bench_adj AtoE_lambda_1se AtoE_lambda_min AtoE_no_penalty
testing 4,344 2.5% 0.82 0.99 1.06 1.06 1.06
training 12,449 2.63% 0.81 0.98 1.00 1.00 1.00
validation 4,158 2.64% 0.82 0.99 1.01 1.01 1.01

Next, let’s look at the AtoEs by sample for individual variables.

# Create a function to test AtoEs for a given variable by sample
# (group_by_() takes the grouping columns as strings)
AtoE <- function(test){
  predictions %>%
    group_by_('Sample',
              test) %>%
    summarise(Termination = comma(sum(Terminations)),
              Rate = percent(sum(Terminations)/sum(Exposure)),
              AtoE_bench = round(sum(Terminations)/sum(exp_terms_orig), 2),
              AtoE_bench_adj = round(sum(Terminations)/sum(exp_terms), 2),
              AtoE_lambda_1se = round(sum(Terminations)/sum(lambda_1se), 2),
              AtoE_lambda_min = round(sum(Terminations)/sum(lambda_min), 2),
              AtoE_no_penalty = round(sum(Terminations)/sum(no_penalty), 2))
}

kable(AtoE('Gender'),"html") %>%
  kable_styling() %>%
  scroll_box(width = "900px", height = "250px")
Sample Gender Termination Rate AtoE_bench AtoE_bench_adj AtoE_lambda_1se AtoE_lambda_min AtoE_no_penalty
testing Female 2,648 2.19% 0.88 1.00 1.07 1.07 1.07
testing Male 1,696 3.2% 0.75 0.99 1.05 1.05 1.05
training Female 7,603 2.33% 0.87 0.99 1.01 1.00 1.00
training Male 4,846 3.29% 0.74 0.97 0.99 1.00 1.00
validation Female 2,582 2.38% 0.89 1.01 1.02 1.02 1.02
validation Male 1,576 3.23% 0.73 0.96 0.98 0.99 0.99
kable(AtoE('ClaimType'),"html") %>%
  kable_styling() %>%
  scroll_box(width = "900px", height = "250px")
Sample ClaimType Termination Rate AtoE_bench AtoE_bench_adj AtoE_lambda_1se AtoE_lambda_min AtoE_no_penalty
testing ALF 1,074 2.15% 0.74 0.89 1.15 1.13 1.12
testing HHC 2,058 2.41% 0.78 0.94 1.00 1.02 1.03
testing NH 1,212 3.15% 1.01 1.23 1.09 1.09 1.08
training ALF 2,561 1.88% 0.62 0.74 0.98 1.00 1.00
training HHC 5,728 2.81% 0.84 1.02 1.00 1.00 1.00
training NH 4,160 3.11% 0.95 1.16 1.01 1.00 1.00
validation ALF 869 1.91% 0.62 0.75 0.99 1.01 1.01
validation HHC 1,874 2.85% 0.86 1.04 1.01 1.01 1.01
validation NH 1,415 3.07% 0.95 1.15 1.01 1.00 0.99
kable(AtoE('CY_Yr'),"html") %>%
  kable_styling() %>%
  scroll_box(width = "900px", height = "250px")
Sample CY_Yr Termination Rate AtoE_bench AtoE_bench_adj AtoE_lambda_1se AtoE_lambda_min AtoE_no_penalty
testing 2010 2,583 2.52% 0.79 0.95 1.01 1.01 1.00
testing 2011 1,761 2.47% 0.88 1.07 1.15 1.15 1.16
training 2000 467 3.77% 1.02 1.23 1.09 1.08 1.08
training 2001 554 3.06% 0.88 1.07 1.00 0.99 0.99
training 2002 651 2.7% 0.80 0.96 0.92 0.92 0.92
training 2003 857 2.75% 0.83 1.00 0.98 0.98 0.98
training 2004 1,030 2.68% 0.82 0.99 0.99 0.98 0.98
training 2005 1,211 2.63% 0.81 0.98 1.00 1.00 1.00
training 2006 1,407 2.61% 0.81 0.98 1.00 1.00 1.00
training 2007 1,701 2.69% 0.84 1.01 1.05 1.05 1.05
training 2008 1,952 2.63% 0.83 1.00 1.04 1.04 1.04
training 2009 2,073 2.39% 0.75 0.90 0.95 0.95 0.96
training 2010 546 2.18% 0.76 0.92 0.98 0.99 1.01
validation 2000 134 3.21% 0.90 1.08 0.96 0.95 0.96
validation 2001 150 2.39% 0.70 0.84 0.78 0.78 0.78
validation 2002 233 2.79% 0.84 1.02 0.98 0.98 0.98
validation 2003 287 2.74% 0.84 1.01 0.99 0.99 0.99
validation 2004 350 2.79% 0.85 1.03 1.02 1.02 1.02
validation 2005 408 2.73% 0.84 1.02 1.03 1.02 1.02
validation 2006 474 2.67% 0.82 0.99 1.02 1.02 1.02
validation 2007 594 2.84% 0.89 1.08 1.11 1.11 1.11
validation 2008 635 2.58% 0.81 0.98 1.02 1.02 1.02
validation 2009 708 2.45% 0.77 0.92 0.98 0.98 0.98
validation 2010 185 2.21% 0.77 0.93 0.99 1.00 1.02
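Finally, let’s cut the AtoEs by ClaimType and calendar year together: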
AtoE_sample <- predictions %>%
  group_by(Sample,
           ClaimType,
           CY_Yr) %>%
  summarise(Termination = comma(sum(Terminations)),
            Rate = percent(sum(Terminations)/sum(Exposure)),
            AtoE_bench = round(sum(Terminations)/sum(exp_terms_orig), 2),
            AtoE_bench_adj = round(sum(Terminations)/sum(exp_terms), 2),
            AtoE_lambda_1se = round(sum(Terminations)/sum(lambda_1se), 2),
            AtoE_lambda_min = round(sum(Terminations)/sum(lambda_min), 2),
            AtoE_no_penalty = round(sum(Terminations)/sum(no_penalty), 2))

kable(AtoE_sample,"html") %>%
  kable_styling() %>%
  scroll_box(width = "900px", height = "250px")
Sample ClaimType CY_Yr Termination Rate AtoE_bench AtoE_bench_adj AtoE_lambda_1se AtoE_lambda_min AtoE_no_penalty
testing ALF 2010 624 2.11% 0.70 0.83 1.09 1.09 1.07
testing ALF 2011 450 2.21% 0.82 0.98 1.24 1.19 1.20
testing HHC 2010 1,229 2.45% 0.75 0.91 0.95 0.95 0.96
testing HHC 2011 829 2.35% 0.83 1.00 1.09 1.13 1.14
testing NH 2010 730 3.17% 0.97 1.17 1.04 1.04 1.03
testing NH 2011 482 3.11% 1.10 1.33 1.19 1.16 1.16
training ALF 2000 65 2.2% 0.62 0.75 1.07 1.18 1.22
training ALF 2001 88 1.81% 0.55 0.66 0.91 0.97 0.99
training ALF 2002 108 1.59% 0.51 0.61 0.82 0.85 0.86
training ALF 2003 164 1.83% 0.59 0.70 0.94 0.97 0.97
training ALF 2004 203 1.79% 0.59 0.70 0.94 0.95 0.95
training ALF 2005 242 1.79% 0.58 0.70 0.92 0.94 0.94
training ALF 2006 328 2.07% 0.69 0.82 1.08 1.09 1.09
training ALF 2007 348 1.89% 0.62 0.74 0.98 0.99 0.99
training ALF 2008 449 2.09% 0.70 0.84 1.10 1.11 1.11
training ALF 2009 452 1.81% 0.60 0.72 0.94 0.95 0.95
training ALF 2010 114 1.59% 0.58 0.69 0.88 0.89 0.94
training HHC 2000 202 4.76% 1.19 1.44 1.07 1.04 1.04
training HHC 2001 222 3.49% 0.94 1.14 0.94 0.92 0.92
training HHC 2002 288 3.25% 0.90 1.09 0.96 0.94 0.94
training HHC 2003 394 3.28% 0.95 1.15 1.05 1.04 1.04
training HHC 2004 460 3.04% 0.89 1.08 1.02 1.02 1.02
training HHC 2005 547 2.92% 0.88 1.06 1.03 1.03 1.03
training HHC 2006 630 2.72% 0.81 0.98 0.97 0.97 0.97
training HHC 2007 811 2.88% 0.88 1.06 1.07 1.07 1.07
training HHC 2008 909 2.69% 0.82 0.99 1.02 1.02 1.02
training HHC 2009 1,002 2.44% 0.75 0.90 0.94 0.94 0.94
training HHC 2010 263 2.19% 0.75 0.91 0.98 1.01 1.01
training NH 2000 200 3.85% 1.09 1.32 1.11 1.08 1.08
training NH 2001 244 3.54% 1.06 1.28 1.10 1.07 1.07
training NH 2002 255 3.03% 0.90 1.09 0.94 0.93 0.92
training NH 2003 299 2.95% 0.88 1.07 0.93 0.92 0.91
training NH 2004 367 3.05% 0.92 1.12 0.98 0.96 0.96
training NH 2005 422 3.07% 0.95 1.15 1.00 0.99 0.99
training NH 2006 449 3% 0.93 1.13 1.00 0.99 0.98
training NH 2007 542 3.26% 0.99 1.21 1.06 1.06 1.05
training NH 2008 594 3.12% 0.97 1.18 1.04 1.03 1.03
training NH 2009 619 2.97% 0.92 1.11 0.99 0.98 0.98
training NH 2010 169 2.9% 1.00 1.21 1.08 1.05 1.07
validation ALF 2000 15 1.5% 0.44 0.53 0.75 0.81 0.82
validation ALF 2001 34 2.02% 0.61 0.73 1.01 1.06 1.07
validation ALF 2002 48 2% 0.61 0.74 1.00 1.05 1.07
validation ALF 2003 61 2.02% 0.66 0.79 1.06 1.07 1.07
validation ALF 2004 66 1.81% 0.59 0.71 0.94 0.96 0.95
validation ALF 2005 86 2% 0.64 0.77 1.02 1.04 1.03
validation ALF 2006 98 1.85% 0.59 0.71 0.94 0.97 0.97
validation ALF 2007 120 1.95% 0.64 0.77 1.01 1.02 1.02
validation ALF 2008 124 1.73% 0.57 0.69 0.90 0.91 0.90
validation ALF 2009 167 1.99% 0.66 0.79 1.03 1.04 1.05
validation ALF 2010 50 2.08% 0.75 0.91 1.15 1.16 1.24
validation HHC 2000 55 4.23% 1.10 1.34 1.00 0.98 0.98
validation HHC 2001 61 2.89% 0.79 0.96 0.79 0.78 0.77
validation HHC 2002 110 3.61% 1.03 1.25 1.09 1.08 1.08
validation HHC 2003 123 3.04% 0.90 1.09 0.99 0.99 0.99
validation HHC 2004 148 3.05% 0.90 1.09 1.02 1.02 1.02
validation HHC 2005 178 3% 0.89 1.08 1.04 1.04 1.04
validation HHC 2006 216 3.01% 0.90 1.09 1.07 1.07 1.07
validation HHC 2007 276 3.13% 0.95 1.15 1.16 1.15 1.16
validation HHC 2008 303 2.75% 0.84 1.01 1.04 1.04 1.04
validation HHC 2009 329 2.44% 0.74 0.90 0.93 0.93 0.93
validation HHC 2010 75 1.87% 0.64 0.78 0.84 0.86 0.86
validation NH 2000 64 3.4% 0.97 1.18 0.99 0.97 0.97
validation NH 2001 55 2.21% 0.66 0.80 0.68 0.67 0.67
validation NH 2002 75 2.59% 0.82 0.98 0.85 0.83 0.83
validation NH 2003 103 3.02% 0.92 1.11 0.96 0.95 0.94
validation NH 2004 136 3.35% 1.01 1.22 1.07 1.05 1.05
validation NH 2005 144 3.08% 0.95 1.15 1.01 1.00 0.99
validation NH 2006 160 3.01% 0.94 1.14 1.00 0.99 0.98
validation NH 2007 198 3.34% 1.05 1.27 1.12 1.11 1.11
validation NH 2008 208 3.22% 1.01 1.22 1.08 1.07 1.07
validation NH 2009 212 3.02% 0.94 1.13 1.00 1.00 1.00
validation NH 2010 60 3.05% 1.05 1.27 1.13 1.10 1.11

Assessing Model Accuracy

Now that we have the AtoE values for the samples and variables, let’s calculate the mean squared error (here, the sum of squared errors divided by total exposure) to assess how well each model fits.

# 4.2.11 - Check out the mean squared error (MSE)
# The no-penalty model performs best on the training data (lowest MSE),
# but lambda.min performs better on both the validation and testing sets
MSE_sample <- predictions %>%
  group_by(Sample) %>%
  summarise(Termination = comma(sum(Terminations)),
            Rate = percent(sum(Terminations)/sum(Exposure)),
            mse_bench = round(sum((Terminations-exp_terms_orig)^2)/sum(Exposure), 5),
            mse_bench_adj = round(sum((Terminations-exp_terms)^2)/sum(Exposure), 5),
            mse_lambda_1se = round(sum((Terminations-lambda_1se)^2)/sum(Exposure), 5),
            mse_lambda_min = round(sum((Terminations-lambda_min)^2)/sum(Exposure), 5),
            mse_no_penalty = round(sum((Terminations-no_penalty)^2)/sum(Exposure), 5))

kable(MSE_sample, "html") %>%
  kable_styling()
Sample Termination Rate mse_bench mse_bench_adj mse_lambda_1se mse_lambda_min mse_no_penalty
testing 4,344 2.5% 0.02837 0.02630 0.02555 0.02548 0.02615
training 12,449 2.63% 0.02924 0.02813 0.02694 0.02651 0.02638
validation 4,158 2.64% 0.02785 0.02733 0.02693 0.02682 0.02686

And looking at the MSE for various variables by sample:

# Create a function to test MSE for a given variable by sample
MSE <- function(test){
  predictions %>%
    group_by_('Sample',
              test) %>%
    summarise(Termination = comma(sum(Terminations)),
              Rate = percent(sum(Terminations)/sum(Exposure)),
              mse_bench = round(sum((Terminations-exp_terms_orig)^2)/sum(Exposure), 5),
              mse_bench_adj = round(sum((Terminations-exp_terms)^2)/sum(Exposure), 5),
              mse_lambda_1se = round(sum((Terminations-lambda_1se)^2)/sum(Exposure), 5),
              mse_lambda_min = round(sum((Terminations-lambda_min)^2)/sum(Exposure), 5),
              mse_no_penalty = round(sum((Terminations-no_penalty)^2)/sum(Exposure), 5))
}
kable(MSE('Gender'), "html") %>%
  kable_styling() %>%
  scroll_box(width = "900px", height = "250px")
Sample Gender Termination Rate mse_bench mse_bench_adj mse_lambda_1se mse_lambda_min mse_no_penalty
testing Female 2,648 2.19% 0.02427 0.02325 0.02231 0.02233 0.02287
testing Male 1,696 3.2% 0.03768 0.03324 0.03292 0.03265 0.03360
training Female 7,603 2.33% 0.02602 0.02553 0.02407 0.02356 0.02341
training Male 4,846 3.29% 0.03636 0.03391 0.03329 0.03305 0.03296
validation Female 2,582 2.38% 0.02502 0.02486 0.02441 0.02425 0.02426
validation Male 1,576 3.23% 0.03417 0.03283 0.03255 0.03253 0.03264
kable(MSE('ClaimType'), "html") %>%
  kable_styling() %>%
  scroll_box(width = "900px", height = "250px")
Sample ClaimType Termination Rate mse_bench mse_bench_adj mse_lambda_1se mse_lambda_min mse_no_penalty
testing ALF 1,074 2.15% 0.02779 0.02466 0.02282 0.02218 0.02299
testing HHC 2,058 2.41% 0.02622 0.02347 0.02349 0.02382 0.02465
testing NH 1,212 3.15% 0.03387 0.03470 0.03367 0.03344 0.03357
training ALF 2,561 1.88% 0.02528 0.02247 0.01967 0.01876 0.01855
training HHC 5,728 2.81% 0.03085 0.03022 0.02947 0.02916 0.02910
training NH 4,160 3.11% 0.03081 0.03074 0.03050 0.03036 0.03022
validation ALF 869 1.91% 0.02138 0.02026 0.01919 0.01888 0.01888
validation HHC 1,874 2.85% 0.03014 0.02972 0.02954 0.02948 0.02954
validation NH 1,415 3.07% 0.03097 0.03089 0.03085 0.03085 0.03090
kable(MSE('CY_Yr'), "html") %>%
  kable_styling() %>%
  scroll_box(width = "900px", height = "250px")
Sample CY_Yr Termination Rate mse_bench mse_bench_adj mse_lambda_1se mse_lambda_min mse_no_penalty
testing 2010 2,583 2.52% 0.02982 0.02698 0.02583 0.02581 0.02642
testing 2011 1,761 2.47% 0.02626 0.02532 0.02515 0.02500 0.02575
training 2000 467 3.77% 0.04084 0.04128 0.03830 0.03788 0.03792
training 2001 554 3.06% 0.03208 0.03197 0.03066 0.03049 0.03053
training 2002 651 2.7% 0.02960 0.02922 0.02798 0.02759 0.02753
training 2003 857 2.75% 0.03008 0.02966 0.02825 0.02767 0.02760
training 2004 1,030 2.68% 0.02787 0.02722 0.02606 0.02561 0.02544
training 2005 1,211 2.63% 0.03126 0.03046 0.02914 0.02847 0.02825
training 2006 1,407 2.61% 0.02808 0.02713 0.02618 0.02580 0.02566
training 2007 1,701 2.69% 0.02992 0.02882 0.02770 0.02700 0.02676
training 2008 1,952 2.63% 0.02828 0.02705 0.02623 0.02595 0.02584
training 2009 2,073 2.39% 0.02832 0.02614 0.02471 0.02437 0.02432
training 2010 546 2.18% 0.02519 0.02362 0.02283 0.02258 0.02240
validation 2000 134 3.21% 0.03178 0.03164 0.03106 0.03100 0.03102
validation 2001 150 2.39% 0.02313 0.02258 0.02260 0.02262 0.02266
validation 2002 233 2.79% 0.02983 0.02968 0.02944 0.02938 0.02942
validation 2003 287 2.74% 0.02881 0.02837 0.02825 0.02815 0.02820
validation 2004 350 2.79% 0.02793 0.02755 0.02723 0.02719 0.02723
validation 2005 408 2.73% 0.02876 0.02842 0.02809 0.02791 0.02793
validation 2006 474 2.67% 0.02803 0.02779 0.02709 0.02687 0.02687
validation 2007 594 2.84% 0.03017 0.03004 0.02952 0.02919 0.02916
validation 2008 635 2.58% 0.02792 0.02740 0.02691 0.02680 0.02682
validation 2009 708 2.45% 0.02655 0.02534 0.02497 0.02504 0.02518
validation 2010 185 2.21% 0.02267 0.02175 0.02147 0.02135 0.02136

Plot model predictions

Now let’s view some graphs of the model predictions. We can begin by viewing the fit by claim duration.

# 4.2.12.1 - View actual vs predicted hazard rates by claim duration for each sample
plot_test <- function(val){
  agg_results <- predictions %>%
    filter(Sample == val) %>%
    group_by(ClaimDuration) %>%
    summarise(actual_hz_rate = sum(Terminations)/sum(Exposure),
              bench = sum(exp_terms_orig)/sum(Exposure),
              bench_adj = sum(exp_terms)/sum(Exposure),
              no_penalty = sum(no_penalty)/sum(Exposure),
              lambda_min = sum(lambda_min)/sum(Exposure),
              lambda_1se = sum(lambda_1se)/sum(Exposure)
              )
  
  ggplotly(ggplot() + 
             geom_line(data = agg_results, 
                       linetype = "dotted", 
                       aes(x = ClaimDuration, y = actual_hz_rate, col="actual")
                       )+ 
             geom_line(data = agg_results, 
                       linetype = "dashed", 
                       aes(x = ClaimDuration, y = bench, col="bench")
                       )+
             
             geom_line(data = agg_results, 
                       linetype = "dashed", 
                       aes(x = ClaimDuration, y = bench_adj, col="bench_adj")
                       )+
             geom_line(data = agg_results, 
                       aes(x = ClaimDuration, y = no_penalty, col="no_penalty")
                       )+
             geom_line(data = agg_results, 
                       aes(x = ClaimDuration, y = lambda_min, col="lambda_min")
                       )+
             geom_line(data = agg_results, 
                       aes(x = ClaimDuration, y = lambda_1se, col="lambda_1se")
                       )+
             ggtitle(paste0("Actual vs Predicted Hazard Rates on the ", val, " Set"))
           )
}
plot_test('training')
plot_test('validation')
plot_test('testing')

Now let’s make some graphs using gender as our variable.

# 4.2.12.2 - View results on testing set for Gender
plot_test <- function(val, set){
  agg_results <- predictions %>%
    filter(Gender == val,
           Sample == set) %>%
    group_by(ClaimDuration) %>%
    summarise(actual_hz_rate = sum(Terminations)/sum(Exposure),
              bench = sum(exp_terms_orig)/sum(Exposure),
              bench_adj = sum(exp_terms)/sum(Exposure),
              no_penalty = sum(no_penalty)/sum(Exposure),
              lambda_min = sum(lambda_min)/sum(Exposure),
              lambda_1se = sum(lambda_1se)/sum(Exposure))
  
  ggplotly(ggplot() + 
             geom_line(data = agg_results, 
                       linetype = "dotted", 
                       aes(x = ClaimDuration, y = actual_hz_rate, col="actual")
                       )+ 
             geom_line(data = agg_results, 
                       linetype = "dashed", 
                       aes(x = ClaimDuration, y = bench, col="bench")
                       )+
             geom_line(data = agg_results, 
                       linetype = "dashed", 
                       aes(x = ClaimDuration, y = bench_adj, col="bench_adj")
                       )+
             geom_line(data = agg_results, 
                       aes(x = ClaimDuration, y = no_penalty, col="no_penalty")
                       )+
             geom_line(data = agg_results, 
                       aes(x = ClaimDuration, y = lambda_min, col="lambda_min")
                       )+
             geom_line(data = agg_results, 
                       aes(x = ClaimDuration, y = lambda_1se, col="lambda_1se")
                       )+
             ggtitle(paste0("Actual vs Predicted Hazard Rates for ", 
                            val, " on the ", set, " Set"))
           )
}


plot_test('Female', "testing")
plot_test('Male', "testing")

Now let’s move on to using ClaimType as our variable.

# 4.2.12.3 - View results on testing set for ClaimType
plot_test <- function(val, set){
  agg_results <- predictions %>%
    filter(ClaimType == val,
           Sample == set) %>%
    group_by(ClaimDuration) %>%
    summarise(actual_hz_rate = sum(Terminations)/sum(Exposure),
              bench = sum(exp_terms_orig)/sum(Exposure),
              bench_adj = sum(exp_terms)/sum(Exposure),
              no_penalty = sum(no_penalty)/sum(Exposure),
              lambda_min = sum(lambda_min)/sum(Exposure),
              lambda_1se = sum(lambda_1se)/sum(Exposure))
  
  ggplotly(ggplot() + 
             geom_line(data = agg_results, 
                       linetype = "dotted", 
                       aes(x = ClaimDuration, y = actual_hz_rate, col="actual")
                       )+ 
             geom_line(data = agg_results, 
                       linetype = "dashed", 
                       aes(x = ClaimDuration, y = bench, col="bench")
                       )+
             geom_line(data = agg_results, 
                       linetype = "dashed", 
                       aes(x = ClaimDuration, y = bench_adj, col="bench_adj")
                       )+
             geom_line(data = agg_results, 
                       aes(x = ClaimDuration, y = no_penalty, col="no_penalty")
                       )+
             geom_line(data = agg_results, 
                       aes(x = ClaimDuration, y = lambda_min, col="lambda_min")
                       )+
             geom_line(data = agg_results, 
                       aes(x = ClaimDuration, y = lambda_1se, col="lambda_1se")
                       )+
             ggtitle(paste0("Actual vs Predicted Hazard Rates for ", 
                            val, " on the ", set, " Set"))
           )
}
plot_test('HHC', "testing")
plot_test('NH', "testing")
plot_test('ALF', "testing")

Lastly, let’s look at Calendar Year.

# 4.2.12.4 - View results on testing set for CY_Yr
plot_test <- function(val, set){
  agg_results <- predictions %>%
    filter(CY_Yr == val,
           Sample == set) %>%
    group_by(ClaimDuration) %>%
    summarise(actual_hz_rate = sum(Terminations)/sum(Exposure),
              bench = sum(exp_terms_orig)/sum(Exposure),
              bench_adj = sum(exp_terms)/sum(Exposure),
              no_penalty = sum(no_penalty)/sum(Exposure),
              lambda_min = sum(lambda_min)/sum(Exposure),
              lambda_1se = sum(lambda_1se)/sum(Exposure))
  
  ggplotly(ggplot() + 
             geom_line(data = agg_results, 
                       linetype = "dotted", 
                       aes(x = ClaimDuration, y = actual_hz_rate, col="actual")
                       )+ 
             geom_line(data = agg_results, 
                       linetype = "dashed", 
                       aes(x = ClaimDuration, y = bench, col="bench")
                       )+
             geom_line(data = agg_results, 
                       linetype = "dashed", 
                       aes(x = ClaimDuration, y = bench_adj, col="bench_adj")
                       )+
             geom_line(data = agg_results, 
                       aes(x = ClaimDuration, y = no_penalty, col="no_penalty")
                       )+
             geom_line(data = agg_results, 
                       aes(x = ClaimDuration, y = lambda_min, col="lambda_min")
                       )+
             geom_line(data = agg_results, 
                       aes(x = ClaimDuration, y = lambda_1se, col="lambda_1se")
                       )+
             ggtitle(paste0("Actual vs Predicted Hazard Rates for ", 
                            val, " on the ", set, " Set"))
           )
}
plot_test(2010, "testing")
plot_test(2011, "testing")

Conclusion

After you have selected a strategy for developing a model on the training data, confirmed that the approach holds up on the validation and testing sets, and reviewed the model for reasonableness, retrain the model on all of the data so the final assumption incorporates the most recent experience. Then review that final model, trained on all of the data, for reasonableness once more. At this point you may also want to make manual smoothing adjustments to the final assumption produced by the model if necessary.
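As a hedged sketch of that final retraining step (final_cv and final_pred are illustrative names; the Poisson family matches the deviance used earlier, and x_all and offset are reused from the prediction section above):

# Refit on all of the data (training, validation, and testing combined)
final_cv <- cv.glmnet(x = x_all,
                      y = agg_data$Terminations,
                      offset = as.vector(offset),  # log expected terminations
                      family = "poisson",
                      nfolds = 10)

# Score the final assumption at lambda.1se and review it for reasonableness
final_pred <- predict(final_cv,
                      newx = x_all,
                      newoffset = offset,
                      type = "response",
                      s = "lambda.1se")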