Insurance Claim Fraud Modeling - Step by step Approach

Insurance Claim Fraud

Insurance fraud is committed to obtain monetary benefits that the policyholder is not entitled to under the insurance policy. Fraud can be committed by insurance company employees, brokers, intermediaries providing services to claimants (e.g. hospitals, auto service garages), and policyholders.

It can happen at the time of taking out a policy, for example by hiding facts or information to reduce the policy premium.

The claim process is another important area for detecting fraud. A policyholder may try to get benefits for losses never incurred, or claim a higher amount than the actual loss.

In the US alone, the FBI estimates that the total cost of insurance fraud (excluding health insurance) is more than $40 billion per year.

In insurance claims, many interesting problems are being solved using Machine Learning, Deep Learning, and AI.

Some Machine Learning modeling scenarios in insurance (by no means an exhaustive list):

  • Predicting Potential Claim by a Policy Holder: What are the chances a policyholder will file a claim
  • Claim Volume Forecasting
  • Predicting Claim Fraud at First Report
  • Refreshing Claim Fraud Model at various stages of the Claim Process
  • Predicting Litigation Probability on a Claim
  • Predicting Subrogation Potential for Claim
  • Based on Auto Accident or Property Damage Images - Estimating Claim Amount

In this blog, we use a sample dataset to model insurance claim fraud. The objective is to predict whether a claim is fraudulent or not.

Modeling Methodology:

* Understanding Objective
* Data and Context
* Exploratory Data Analysis (EDA)
* Feature Engineering and Variable Transformation
* Model Variable Selection
* Model Selection 
* Validation: Training and Testing Samples

Data and Context

In a claim process, typically three sets of information are available - Customer, Policy, and Claim. The claims adjudication process involves assessing this information before approving a claim. In this data sample, we have features related to these three sections, and we aim to predict whether the claim is fraudulent or not.


# read data 
import pandas as pd

claim_insurance = pd.read_csv("http://ramgopalprajapat.com/static/files/2020/09/02/insurance_claims.csv")

claim_insurance.head()

# Remove last column 
claim_insurance=claim_insurance.drop('_c39', axis=1)

# Row and Column Counts
claim_insurance.shape

# Column Names
claim_insurance.columns.to_list()

We need to find the date columns so that we can create features from them and then drop the original columns.

The date-related columns in the data are: policy_bind_date, incident_date, and auto_year.

We can create the following interval features from them:

  • Months between incident_date and auto_year
  • Months between policy_bind_date and incident_date
  • Months between policy_bind_date and auto_year

import numpy as np
import pandas as pd
# Months 
# 2014-10-17
claim_insurance['bind_months'] = ((pd.to_datetime(claim_insurance.incident_date, format="%Y-%m-%d")-pd.to_datetime(claim_insurance.policy_bind_date, format="%Y-%m-%d"))/np.timedelta64(1, 'M'))
claim_insurance['auto_incident_months'] = ((pd.to_datetime(claim_insurance.incident_date, format="%Y-%m-%d")-pd.to_datetime(claim_insurance.auto_year, format="%Y"))/np.timedelta64(1, 'M'))
claim_insurance['bind_auto_months'] = ((pd.to_datetime(claim_insurance.policy_bind_date, format="%Y-%m-%d")-pd.to_datetime(claim_insurance.auto_year, format="%Y"))/np.timedelta64(1, 'M'))

 

# Plot Histogram
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.hist(claim_insurance['bind_months'],bins=15, cumulative=False)
ax.set_xlabel('Bind Months')
ax.set_ylabel('Frequency')
plt.show()

import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.hist(claim_insurance['auto_incident_months'],bins=15, cumulative=False)
ax.set_xlabel('Months Between Auto Year and Incident')
ax.set_ylabel('Frequency')
plt.show()

import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.hist(claim_insurance['bind_auto_months'],bins=15, cumulative=False)
ax.set_xlabel('Months Between Auto Year and Policy Bind Date')
ax.set_ylabel('Frequency')
plt.show()

 

 

 

# Drop Date Columns
claim_insurance=claim_insurance.drop(['policy_bind_date' , 'incident_date','auto_year'], axis=1)

Summary Statistics

We need to check the summary statistics to see if there is anything unusual. The describe() method gives the summary statistics, and transpose() rotates the output so that all the columns/features can be seen clearly.

claim_insurance.describe().transpose()

Observations

policy_number and insured_zip are not directly relevant for the analysis as model inputs. The ZIP code could be used indirectly to see whether certain ZIP codes show increased fraud; we are going to check this.

policy_deductable looks like an ordinal variable. We need to check the levels of this variable.

umbrella_limit is an interesting variable, with a very low minimum value and a very high maximum value. The values at the 25th, 50th, and 75th percentiles are all 0.

number_of_vehicles_involved, bodily_injuries, and witnesses are ordinal variables.

 

Policy Deductible

claim_insurance.policy_deductable.value_counts()

1000    351
500     342
2000    307
Name: policy_deductable, dtype: int64

We can do one-hot encoding for this variable instead of using the raw values. The pandas package has a get_dummies method to create dummy variables.

# Merge dummy variables using merge
claim_insurance = claim_insurance.merge(pd.get_dummies(claim_insurance['policy_deductable'],prefix='policy_deductable'),
                                        left_index=True,
                                        right_index=True)
# drop original variable
claim_insurance=claim_insurance.drop(['policy_deductable'], axis=1)
claim_insurance.head()

Umbrella Limit

An umbrella insurance policy is extra liability coverage above the limits of home and auto insurance. We need to explore the distribution and get some useful insights.

claim_insurance.umbrella_limit.value_counts()

It is evident that a value of 0 indicates no umbrella insurance policy associated with the motor insurance policy. The value -1000000 may be a data issue with the minus sign.

Now, we may want to create an indicator variable that flags whether there is associated umbrella coverage, instead of using the amount itself, as sketched below.
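A minimal sketch of one way to build such an indicator (the exact construction is not shown in the original write-up; the resulting column is referred to later as umbrella_insurance, and the negative value is treated as a sign error):

# Sketch (assumed construction): flag any umbrella coverage and drop the raw amount
# abs() treats the -1000000 row as positive coverage, assuming the minus sign is a data error
claim_insurance['umbrella_insurance'] = np.where(claim_insurance['umbrella_limit'].abs() > 0, 1, 0)
claim_insurance = claim_insurance.drop(['umbrella_limit'], axis=1)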

Zip Code

The ZIP code can be very important for checking whether location is linked to insurance fraud. We can do frequency encoding for ZIP codes, but first we need to check the number of unique ZIP codes.

fq = claim_insurance.groupby('insured_zip').size()/len(claim_insurance)
len(fq)

claim_insurance=claim_insurance.drop(['insured_zip'],axis=1)

There are 1000 rows in the data and 995 unique ZIP codes, so almost every row has its own ZIP code. It behaves like an ID variable and may not be useful for the analysis.

So, it is better to drop this variable.
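For reference, if the ZIP codes had repeated more often, the frequency table fq could have been mapped back onto the data as a feature. A sketch, kept commented out because insured_zip has already been dropped:

# Sketch only: frequency encoding would map each ZIP code to its share of rows
# claim_insurance['insured_zip_freq'] = claim_insurance['insured_zip'].map(fq)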

Number of vehicles involved

In an incident, the number of vehicles involved could be an important indicator of a fraudulent claim. If we look at the distribution, we can see that in almost 94% of the cases 1 or 3 vehicles were involved. Instead of creating a separate dummy variable for each level, we can group the rows with 2 and 4 vehicles with the rest.

claim_insurance.number_of_vehicles_involved.value_counts()

We can create a bivariate analysis table.

import warnings
warnings.filterwarnings("ignore")

number_of_vehicles = claim_insurance.groupby(['number_of_vehicles_involved','fraud_reported']).size().reset_index(name='counts')

# Restructure
number_of_vehicles = number_of_vehicles.pivot(index='number_of_vehicles_involved', columns='fraud_reported', values='counts')
# Fill with 0
number_of_vehicles=number_of_vehicles.fillna(0).reset_index()
# Rename the columns
number_of_vehicles.columns=['number_of_vehicles_involved','fraud_reported_N','fraud_reported_Y']

# Calculate % of observations and fraud rate for each vehicle count
number_of_vehicles['pct_obs'] = (number_of_vehicles['fraud_reported_N']+number_of_vehicles['fraud_reported_Y'])/(sum(number_of_vehicles['fraud_reported_N'])+sum(number_of_vehicles['fraud_reported_Y']))
number_of_vehicles['pct_fraud_reported_N']= number_of_vehicles['fraud_reported_N']/(number_of_vehicles['fraud_reported_N']+number_of_vehicles['fraud_reported_Y'])
number_of_vehicles['pct_fraud_reported_Y']= number_of_vehicles['fraud_reported_Y']/(number_of_vehicles['fraud_reported_N']+number_of_vehicles['fraud_reported_Y'])
print(number_of_vehicles)

 

Interesting observation: when the number of vehicles involved is odd, a lower percentage of fraud is reported. However, only about 6% of the incidents involved an even number of vehicles. Still, we can create an indicator variable to capture this association with fraud.

claim_insurance['odd_vehicles']=np.where( (claim_insurance.number_of_vehicles_involved==1) | (claim_insurance.number_of_vehicles_involved==3) , 1, 0)
claim_insurance['odd_vehicles'].value_counts()

# Drop 'number_of_vehicles_involved'
claim_insurance= claim_insurance.drop(['number_of_vehicles_involved'], axis=1)

 

Bodily Injuries

In motor insurance, bodily injury can also be claimed. This variable takes the values 0, 1, and 2. Probably 0 indicates no bodily injury claim and the other values indicate the number of injuries claimed.

We can create dummy variables for each of these levels using one-hot encoding.

print(claim_insurance.bodily_injuries.value_counts())

# Merge dummy variables using merge
claim_insurance = claim_insurance.merge(pd.get_dummies(claim_insurance['bodily_injuries'],prefix='bodily_injuries'),
                                        left_index=True,
                                        right_index=True)
# drop original variable
claim_insurance=claim_insurance.drop(['bodily_injuries'], axis=1)
print(claim_insurance.head())

Witnesses

If there are witnesses to the incident, it gives some confidence that the case is not fraudulent. Again, we can create dummy variables from the levels of this variable.

print(claim_insurance.witnesses.value_counts())

claim_insurance = claim_insurance.merge(pd.get_dummies(claim_insurance['witnesses'],prefix='witnesses'),
                                        left_index=True,
                                        right_index=True)
# drop original variable
claim_insurance=claim_insurance.drop(['witnesses'], axis=1)
print(claim_insurance.head())

EDA for Categorical Variables

This data contains a number of categorical variables. We can assess which of them are relevant and what type of grouping/encoding should be used for each.

The function below identifies each variable's type based on its scale of measurement.

Variable Type- Measurement Scale

# Find Continuous and Categorical Features
def featureType(df):
    import numpy as np 
    from pandas.api.types import is_numeric_dtype

    columns = df.columns
    rows= len(df)
    colTypeBase=[]
    colType=[]
    for col in columns:
        try:
            try:
                uniq=len(np.unique(df[col]))
            except:
                 uniq=len(df.groupby(col)[col].count())
            if rows>10:
                if is_numeric_dtype(df[col]):
                    
                    if uniq==1:
                        colType.append('Unary')
                        colTypeBase.append('Unary')
                    elif uniq==2:
                        colType.append('Binary')
                        colTypeBase.append('Binary')
                    elif rows/uniq>3 and uniq>5:
                        colType.append('Continuous')
                        colTypeBase.append('Continuous')
                    else:
                        colType.append('Continuous-Ordinal')
                        colTypeBase.append('Ordinal')
                else:
                    if uniq==1:
                        colType.append('Unary')
                        colTypeBase.append('Category-Unary')
                    elif uniq==2:
                        colType.append('Binary')
                        colTypeBase.append('Category-Binary')
                    else:
                        colType.append('Categorical-Nominal')
                        colTypeBase.append('Nominal')
            else:
                if is_numeric_dtype(df[col]):
                    colType.append('Numeric')
                    colTypeBase.append('Numeric')
                else:
                    colType.append('Non-numeric')
                    colTypeBase.append('Non-numeric')
        except:
            # Keep both lists the same length so the output DataFrame can be built
            colType.append('Issue')
            colTypeBase.append('Issue')
                
    # Create dataframe    
    df_out =pd.DataFrame({'Feature':columns,
                          'BaseFeatureType':colTypeBase,
                        'AnalysisFeatureType':colType})
    # Sort by BaseFeatureType
    df_out = df_out.sort_values('BaseFeatureType')
    return df_out

featureType(claim_insurance)  

 

We have two variables that are categorical but binary, and we can convert them to numeric binary variables: fraud_reported (the label/target variable) and insured_sex.

print('Reported Fraud \n',claim_insurance.fraud_reported.value_counts())

print('Gender of Insured Person \n',claim_insurance.insured_sex.value_counts())

 

claim_insurance['fraud']= np.where(claim_insurance['fraud_reported']=='Y', 1, 0)

claim_insurance['insured_male']= np.where(claim_insurance['insured_sex']=='MALE', 1, 0)

 

# Distribution of new column

print('Fraud: \n', claim_insurance.fraud.value_counts())

print('Insured Male: \n', claim_insurance.insured_male.value_counts())

 

# drop original columns

claim_insurance=claim_insurance.drop(['fraud_reported','insured_sex'],axis=1)

Next, we get a level count report for each of the categorical columns.

# Find Feature Type
colType = featureType(claim_insurance)
# Select Nominal Features
catFeatures=colType.loc[colType.BaseFeatureType=='Nominal','Feature']

for feature in catFeatures:
    print(claim_insurance[feature].value_counts())

Observations

For the first iteration, we may want to drop the variables below.

  • incident_location: Too many values, almost none repeating, so drop the feature.
  • insured_hobbies: Again, for 1000 rows of observations this variable has too many levels, so we drop this column.
  • insured_occupation: Too many levels, so drop it.
  • auto_model: Too many levels, so drop it.
  • auto_make: This is not an obvious drop, but we will drop it for this iteration.

The remaining variables will be considered for weight of evidence (WOE) based transformation. WOE can be really useful here, considering the small dataset and the fairly long list of categorical variables: if we one-hot encoded every level, there would be too many features.

# drop variables with too many levels
claim_insurance = claim_insurance.drop(['incident_location','insured_hobbies','insured_occupation','auto_model','auto_make'], axis=1)

Weight of Evidence (WOE) based Transformations
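For each level of a categorical feature, the weight of evidence is computed as WOE(level) = ln( (share of fraud cases in the level) / (share of non-fraud cases in the level) ), with a small constant added to both shares to avoid division by zero. The function below computes this value for every level and plots it as a bar chart.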

def woe_cat(df, var, label):
    import numpy as np
    grp = df.groupby([var,label]).size().reset_index(name='counts')
    # Restructure
    pivot = grp.pivot(index=var, columns=label, values='counts')
    # Fill with 0
    pivot=pivot.fillna(0).reset_index()
    # Rename the columns
    pivot.columns=[var,'label_0','label_1']
    # Find total 0 and 1
    sum_0 = pivot.label_0.sum()
    sum_1 = pivot.label_1.sum()
    pivot['woe']= np.log( (pivot['label_1']/sum_1 +0.00001)/(pivot['label_0']/sum_0 +0.00001))
    # Create a bar chart for WOE
    import matplotlib.pyplot as plt
    plt.style.use('ggplot')
    plt.bar(pivot[var], pivot['woe'], color='green')
    plt.xlabel(var)
    plt.ylabel("WOE Value")
    plt.title("WOE Across Categories")
    plt.xticks(rotation=90) # change orientation of X axis tick label
    plt.show()
    plt.close()
    return pivot
# property_damage_woe=woe_cat(claim_insurance, 'property_damage', 'fraud')

def woe_cat_features(df,woeDF, var):
    new_feature = var+"_woe"
    for r in range(len(woeDF)):
        df.loc[(df[var]== woeDF[var][r]),new_feature] = woeDF['woe'][r]        
    df[new_feature] = df[new_feature].astype(float)
    
# woe_cat_features(claim_insurance, property_damage_woe,'property_damage')

# Transform all the features
def cat_woe_trans(df,LabelVar,catFeature=[]):
    for f in catFeature:
        woe = woe_cat(df, f, LabelVar)
        woe_cat_features(df,woe,f)

cat_woe_trans(claim_insurance,'fraud',catFeature=['property_damage', 'police_report_available', 'collision_type', 'incident_city',
                                          'incident_state', 'policy_state', 'policy_csl', 'authorities_contacted', 'incident_severity',
                                          'insured_education_level', 'incident_type', 'insured_relationship'])

 

# Summary statistics

claim_insurance.describe().transpose()

We have completed the required feature engineering and transformations. We can now drop the original categorical variables, since we already have the WOE-transformed versions.

claim_insurance = claim_insurance.drop(['property_damage', 'police_report_available', 'collision_type', 'incident_city',
                                          'incident_state', 'policy_state', 'policy_csl', 'authorities_contacted', 'incident_severity',
                                          'insured_education_level', 'incident_type', 'insured_relationship'], axis=1)

Now, we can review the columns/features. We expect all of them to be numeric - binary, continuous, and ordinal variables. We are ready to proceed to model development.
 

# drop policy_number
claim_insurance = claim_insurance.drop(['policy_number'], axis=1)
featureType(claim_insurance)

The scale of measurement may play an important role in some algorithms, but we are using tree-based methods and logistic regression, so we should be fine for now; an optional scaling sketch is shown below for reference.
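If a scale-sensitive algorithm (for example, k-nearest neighbours or an SVM) were used instead, the continuous features could be standardized first. A minimal sketch with sklearn's StandardScaler, not applied in this workflow; the column list is an illustrative subset:

# Sketch only: standardization is not needed for the tree-based models and logistic regression used here
from sklearn.preprocessing import StandardScaler

cont_cols = ['bind_months', 'auto_incident_months', 'bind_auto_months']  # illustrative subset of continuous features
scaler = StandardScaler()
claim_insurance_scaled = claim_insurance.copy()
claim_insurance_scaled[cont_cols] = scaler.fit_transform(claim_insurance_scaled[cont_cols])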

Split Data to Training and Validation Samples

The sample is not very big, so we create only two samples - Training and Validation - with 70% and 30% of the observations respectively.

label = claim_insurance['fraud']
features = claim_insurance.drop(['fraud'],axis=1)
from sklearn.model_selection import train_test_split
claim_train,claim_test,fraud_train,fraud_test=train_test_split(features,label,test_size=0.3)

Insurance Claim Fraud: Logistic Regression

We have a high number of features compared to the number of data points, so we should select a good subset of features to build the model on. This can be done in a number of ways, each with its own limitations.

Information Value (IV) is one of them. Here, we are going to use the Recursive Feature Elimination (RFE) method available in scikit-learn.
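For reference, the Information Value of a categorical feature aggregates the same WOE quantities used earlier: IV = sum over levels of (share of fraud cases - share of non-fraud cases) x WOE(level); a higher IV generally indicates a more predictive feature. We do not compute IV here and rely on RFE instead.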

from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
logitreg = LogisticRegression()
# We want to select top 20 features
rfe = RFE(logitreg, n_features_to_select=20)
rfe = rfe.fit(claim_train, fraud_train.values.ravel())
print(rfe.support_)
print(rfe.ranking_)

Select the variables, then exclude those that are not significant based on their p-values.

select_vars =claim_train.columns[rfe.support_].to_list()

exclude = ['witnesses_0','bodily_injuries_2','insured_male','bodily_injuries_1','policy_csl_woe','policy_state_woe',
          'umbrella_insurance','collision_type_woe','incident_type_woe','witnesses_3','witnesses_1','witnesses_2',
          'property_damage_woe']
select_vars_updated = np.setdiff1d(select_vars,exclude)
print(select_vars_updated)

 

import statsmodels.api as sm
logit_model=sm.Logit(fraud_train,claim_train[select_vars_updated])
logit_model_output=logit_model.fit()
print(logit_model_output.summary2())

 

 

Multi-collinearity among the set of features plays a role in the stability of the model. We can check whether the final model features have a high level of multi-collinearity.

Variance Inflation Factor (VIF) = 1/(1-R^2) is one of the commonly used metrics to assess multi-collinearity.

from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif(X):
    # Create an empty DataFrame
    vif = pd.DataFrame()
    # Add Feature Names
    vif["Features"] = X.columns
    # Calculate VIF for each Feature as dependent and rest all as independent
    vif["VIF"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
    return(vif)

# Final Model Selected Feature DataFrame
model_features = claim_train[select_vars_updated]
vif_values = vif(model_features)
print(vif_values)

VIF values are less than 2, so we conclude that there is no significant collinearity among these selected features.

Model Prediction

Now we can take this model forward for performance evaluation. We can predict the label using the model and compare it with the actual label values. For the selected variables, fit the model using sklearn's LogisticRegression classifier.

# Import logistic Regression classifier
from sklearn.linear_model import LogisticRegression
# Call function with default parameters
logreg = LogisticRegression()
# Model Development: fit the model with data
logreg.fit(claim_train[select_vars_updated],fraud_train)

# Predict The Fraud Class
pred_class = logreg.predict(claim_train[select_vars_updated]) 

Model Performance: Training

Various classification metrics are calculated to see the performance on the training sample.

from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
# Model Accuracy
print('Model Accuracy(Training) {:.2f}'.format(logreg.score(claim_train[select_vars_updated], fraud_train)))
# Confusion Matrix
cm = confusion_matrix(fraud_train, pred_class)  # store in a new variable so the confusion_matrix function is not shadowed
print("Confusion Matrix (Training): \n",cm)
print("Model Classification Report (Training): \n",classification_report(fraud_train, pred_class))

Model Performance on Test Sample

# Score the validation/test sample: predict the fraud class
test_pred_class = logreg.predict(claim_test[select_vars_updated])

from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report

# Model Accuracy
print('Model Accuracy(Test) {:.2f}'.format(logreg.score(claim_test[select_vars_updated], fraud_test)))

# Confusion Matrix
cm_test = confusion_matrix(fraud_test, test_pred_class)
print("Confusion Matrix (Test): \n",cm_test)
print("Model Classification Report (Test): \n",classification_report(fraud_test, test_pred_class))

 

Model performance is very stable across the training and testing samples. In my view, this is one of the great advantages of logistic regression: performance is typically stable across samples if the model is developed rigorously.

Some areas of improvement in this model development process:

  • Feature Transformation: some of the continuous variables could also be explored for WOE-based or mathematical transformations
  • Feature Selection: using IV, Decision Tree, or Random Forest-based feature selection to find combinations that improve the model's predictive power (see the sketch below)
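As a sketch of the feature selection point above, a Random Forest could be used to rank features by impurity-based importance (illustrative only; hyperparameters are not tuned):

# Sketch: Random Forest-based feature ranking as an alternative selection step
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(claim_train, fraud_train)
importances = pd.Series(rf.feature_importances_, index=claim_train.columns)
print(importances.sort_values(ascending=False).head(20))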

 

In the next blog, we will use the same data sample and build Random Forest and XGBoost models.

Tags

Claim Fraud Fraud Analytics Insurance Analytics Logistic Regression Python
