Caret Package – A Practical Guide to Machine Learning in R

Source: www.machinelearningplus.com
hk.saowen.com/a/e3de9c193942feb012101c3b3d0de210c7c6e5c68398f29ca66887e84a4c9501

Caret Package is a comprehensive framework for building machine learning models in R. In this tutorial, I explain
nearly all the core features of the caret package and walk you through the step-by-step process of building
predictive models. Be it a decision tree or xgboost, caret helps to find the optimal model in the shortest possible
time.

Contents
1. Introduction
2. Initial Setup – load the package and dataset
3. Data Preparation and Preprocessing
3.1. How to split the dataset into training and validation?
3.2. Descriptive statistics
3.3. How to impute missing values using preProcess()?
3.4. How to create One-Hot Encoding (dummy variables)?
3.5. How to preprocess to transform the data?
4. How to visualize the importance of variables using featurePlot()
5. How to do feature selection using recursive feature elimination (rfe)?
6. Training and Tuning the model
6.1. How to train() the model and interpret the results?
6.2. How to compute variable importance?
6.3. Prepare the test dataset and predict
6.4. Predict on test data
6.5. Confusion Matrix
7. How to do hyperparameter tuning to optimize the model for better performance?
7.1. Setting up the trainControl()
7.2. Hyperparameter Tuning using tuneLength
7.3. Hyperparameter Tuning using tuneGrid
8. How to evaluate the performance of multiple machine learning algorithms?
8.1. Training Adaboost
8.2. Training Random Forest
8.3. Training xgBoost Dart
8.4. Training SVM
8.5. Run resamples() to compare the models
9. Ensembling the predictions
9.1. How to ensemble predictions from multiple models using caretEnsemble?
9.2. How to combine the predictions of multiple models to form a final prediction
10. Conclusion
Caret nicely integrates all the activities associated with model development into a streamlined workflow, for nearly every major ML algorithm available in R.

We will not just stop with the caret package; we will go a step further and see how to smartly ensemble predictions from multiple best models, possibly producing an even better prediction, using caretEnsemble.
A lot of exciting stuff ahead, so to make it simpler, this tutorial is structured to cover the following 5 topics:

1. Data Preparation and Preprocessing
2. Visualize the importance of variables
3. Feature Selection using RFE
4. Training and Tuning the model
5. Ensembling the predictions

Let’s begin!

1. Introduction
Caret is short for Classification And REgression Training.
With R having so many implementations of machine learning algorithms spread across packages, it can be challenging to keep track of which algorithm resides in which package.

On top of that, the syntax and implementation details often differ across packages. Combine that with the preprocessing steps and digging through help pages for the hyperparameters (parameters that define how the algorithm learns), and building predictive models becomes an involved task.

Well, thanks to caret, it no longer matters which package an algorithm resides in: caret remembers that for you and may just prompt you to run install.packages() for that particular algorithm's package, which, by the way, I am quite happy to do.
Later in this tutorial I will show how to see all the available ML algorithms supported by caret (it’s a long list!) and
what hyperparameters can be tuned.
Now that you have a fair idea of what caret is about, let’s get started with the basics.

2. Initial Setup – load the package and dataset
For this tutorial, I am going to use a modified version of the Orange Juice Data, originally made available in the ISLR package.

The objective with this dataset is to predict which of the two brands of orange juice the customers purchased. The predictor variables are characteristics of the customer and the product itself.

It contains 1070 rows with 18 columns. The response variable is 'Purchase', which takes either the value 'CH' (Citrus Hill) or 'MM' (Minute Maid).

I have chosen a lightweight dataset so the focus stays on getting familiar with the usage of the caret package rather than on spending much time training the models.

Let's import the dataset and look at its structure and first few rows.


# install.packages(c('caret', 'skimr', 'RANN', 'randomForest', 'fastAdaboost', 'gbm', 'xgboost', 'caretEnsemble', 'C50', 'earth'))
# Load the caret package
library(caret)
# Import dataset (substitute the path to your copy of the modified Orange Juice data)
orange <- read.csv('orange_juice_withmissing.csv')

3.5. How to preprocess to transform the data?

Besides missing value imputation and dummy encoding, preProcess() supports a range of transformation methods, set via its method argument:

range: Normalize values so they range between 0 and 1
center: Subtract the mean
scale: Divide by the standard deviation
BoxCox: Remove skewness leading to normality. Values must be > 0
YeoJohnson: Like BoxCox, but works for negative values.
expoTrans: Exponential transformation, works for negative values.
pca: Replace with principal components
ica: Replace with independent components
spatialSign: Project the data to a unit circle

For our problem, let’s convert all the numeric variables to range between 0 and 1, by setting method=range in
preProcess() .
preProcess_range_model <- preProcess(trainData, method='range')
trainData <- predict(preProcess_range_model, newdata = trainData)
# Append the Y variable
trainData$Purchase <- y
apply(trainData[, 1:10], 2, FUN=function(x){c('min'=min(x), 'max'=max(x))})

    WeekofPurchase StoreID PriceCH PriceMM DiscCH DiscMM SpecialCH SpecialMM LoyalCH SalePriceMM
min              0       0       0       0      0      0         0         0       0           0
max              1       1       1       1      1      1         1         1       1           1

All the predictors now range between 0 and 1.

4. How to visualize the importance of variables using featurePlot()
Now that the preprocessing is complete, let’s visually examine how the predictors influence the Y (Purchase).
In this problem, the X variables are numeric whereas the Y is categorical.
So how do you gauge whether a given X is an important predictor of Y?

A simple, common-sense approach: if you group the X variable by the categories of Y, a significant mean shift amongst the X's groups is a strong indicator (if not the only indicator) that X will have a significant role in helping to predict Y.
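As a quick numeric check of that idea, you can compare the group means directly. A minimal sketch using base R's aggregate(), assuming trainData already has the Purchase column appended as above:

# Mean LoyalCH for each Purchase class; a wide gap between the
# 'CH' and 'MM' rows suggests the variable is informative.
aggregate(LoyalCH ~ Purchase, data = trainData, FUN = mean)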
You can observe this shift visually using box plots and density plots. In fact, caret's featurePlot() function makes this very convenient.

Simply set the x and y parameters and set plot='box'. You can additionally adjust the label font size (using strip) and let the scales be free, as I have done in the plot below.
featurePlot(x = trainData[, 1:18],
            y = trainData$Purchase,
            plot = "box",
            strip = strip.custom(par.strip.text = list(cex = .7)),
            scales = list(x = list(relation = "free"),
                          y = list(relation = "free")))

featurePlot Output – Boxplot
Let me quickly refresh how to interpret a boxplot.

Each subplot in the above figure has two boxplots (in blue) inside it, one for each of the Y categories, CH and MM. The top of the box represents the 75th percentile and the bottom the 25th percentile, so the blue box covers the region where the middle half of the data points lie. The black dot inside the box marks the median.

The subplots also show many blue dots lying outside the top and bottom dashed lines, called whiskers. These dots are formally considered extreme values.
So, what do you observe in the above figure?

Consider, for example, LoyalCH's subplot, which measures the loyalty score of the customer to the CH brand. The centers and the placement of the two boxes are glaringly different. Just by seeing that, I am pretty sure LoyalCH is going to be a significant predictor of Y.

What other predictors do you notice have significant mean differences?

Let's do a similar exercise with density plots. In this case, for a variable to be important, I would expect the density curves of the two classes to be significantly different, both in height and in placement.


Take a look at the density curves of the two categories for ‘LoyalCH’, ‘STORE’, ‘StoreID’, ‘WeekofPurchase’. Are
they different?
featurePlot(x = trainData[, 1:18],
            y = trainData$Purchase,
            plot = "density",
            strip = strip.custom(par.strip.text = list(cex = .7)),
            scales = list(x = list(relation = "free"),
                          y = list(relation = "free")))

featurePlot Output – Density
Having visualised the relationships between X and Y, we can only say which variables are likely to be important for predicting Y. It may not be wise to conclude which variables are NOT important, because variables with a seemingly uninteresting pattern can sometimes help explain aspects of Y that the visually important variables do not.

So, to be safe, let's not arrive at conclusions about excluding variables prematurely.

5. How to do feature selection using recursive feature elimination (rfe)?
Most machine learning algorithms are able to determine which features are important for predicting Y. But in some scenarios you need to be careful to include only the variables that are significantly important and make strong business sense. This is quite common in banking, economics and financial institutions.

Or you might just be doing an exploratory analysis to determine the important predictors and report them as a metric in your analytics dashboard.

Or, if you are using a traditional algorithm like linear or logistic regression, determining which variables to feed to the model is in the hands of the practitioner.

Given such requirements, you might need a rigorous way to determine the important variables before feeding them to the ML algorithm.

A good choice for selecting the important features is recursive feature elimination (RFE).
So how does recursive feature elimination work?

RFE works in 3 broad steps:

Step 1: Build an ML model on a training dataset and estimate the feature importances on the test dataset.

Step 2: Giving priority to the most important variables, iterate through by building models of the given sizes. The ranking of the predictors is recalculated in each iteration.

Step 3: The model performances are compared across the different subset sizes to arrive at the optimal number and final list of predictors.

It can be implemented using the rfe() function, and you have the flexibility to control which algorithm rfe uses and how it cross validates by defining rfeControl().


set.seed(100)
options(warn=-1)

subsets <- c(1:5, 10, 15, 18)

ctrl <- rfeControl(functions = rfFuncs,
                   method = "repeatedcv",
                   repeats = 5,
                   verbose = FALSE)

lmProfile <- rfe(x=trainData[, 1:18], y=trainData$Purchase,
                 sizes = subsets,
                 rfeControl = ctrl)

lmProfile
Recursive feature selection

Outer resampling method: Cross-Validated (10 fold, repeated 5 times)

Resampling performance over subset size:

 Variables Accuracy  Kappa AccuracySD KappaSD Selected
         1   0.7433 0.4554    0.04107 0.08692
         2   0.8143 0.6063    0.04037 0.08559
         3   0.8187 0.6147    0.04194 0.08896        *
         4   0.8058 0.5904    0.04253 0.08750
         5   0.7988 0.5743    0.04379 0.09258
        10   0.8024 0.5810    0.04464 0.09557
        15   0.8070 0.5879    0.04215 0.09079
        18   0.8065 0.5879    0.03882 0.08297

The top 3 variables (out of 3):
   LoyalCH, PriceDiff, StoreID

In the above code, we call rfe(), which implements the recursive feature elimination.

Apart from the x and y datasets, rfe() also takes two important parameters:

1. sizes
2. rfeControl

The sizes parameter determines which model sizes (numbers of most important features) the rfe should iterate over. In the above case, it iterates over models of size 1 to 5, 10, 15 and 18.

The rfeControl parameter receives the output of rfeControl(). In the call to rfeControl() we set which type of algorithm and which cross validation method should be used. In the above case, the cross validation method is repeatedcv, which implements k-fold cross validation (10 folds by default) repeated 5 times; that is rigorous enough for our case.

Once rfe() is run, the output shows the accuracy and Kappa (and their standard deviations) for the different model sizes we provided. The final selected subset size is marked with a * in the rightmost Selected column.

From the above output, a model size of 3 with LoyalCH, PriceDiff and StoreID achieves the optimal accuracy.

That means that, out of all 18 features, a model with just 3 features outperformed many larger models. Interesting, isn't it! Can you explain why?

However, it is not a given that including only these 3 variables will always yield higher accuracy than larger models. That's because the rfe() we just implemented is particular to the random forest based rfFuncs.

Since ML algorithms have their own ways of learning the relationship between the x and y, it is not wise to neglect the other predictors, especially when there is evidence that there is information contained in the rest of the variables to explain the relationship between x and y.

Also, since the training dataset isn't very large, the other predictors may not have had the chance to show their worth.

In the next step, we will build the actual machine learning model on trainData.

6. Training and Tuning the model
6.1. How to train() the model and interpret the results?
Now comes the important stage where you actually build the machine learning model.
To know what models caret supports, run the following:
# See available algorithms in caret
modelnames <- paste(names(getModelInfo()), collapse=', ')
modelnames

'ada, AdaBag, AdaBoost.M1, adaboost, amdai, ANFIS, avNNet, awnb, awtan, bag, bagEarth,
bagEarthGCV, bagFDA, bagFDAGCV, bam, bartMachine, bayesglm, binda, blackboost, blasso,
blassoAveraged, bridge, brnn, BstLm, bstSm, bstTree, C5.0, C5.0Cost, C5.0Rules, C5.0Tree,
cforest, chaid, CSimca, ctree, ctree2, cubist, dda, deepboost, DENFIS, dnn, dwdLinear,
dwdPoly, dwdRadial, earth, elm, enet, evtree, extraTrees, fda, FH.GBML, FIR.DM, foba,
FRBCS.CHI, FRBCS.W, FS.HGD, gam, gamboost, gamLoess, gamSpline, gaussprLinear, gaussprPoly,
gaussprRadial, gbm_h2o, gbm, gcvEarth, GFS.FR.MOGUL, GFS.LT.RS, GFS.THRIFT, glm.nb, glm,
glmboost, glmnet_h2o, glmnet, glmStepAIC, gpls, hda, hdda, hdrda, HYFIS, icr, J48, JRip,
kernelpls, kknn, knn, krlsPoly, krlsRadial, lars, lars2, lasso, lda, lda2, leapBackward,
leapForward, leapSeq, Linda, lm, lmStepAIC, LMT, loclda, logicBag, LogitBoost, logreg,
lssvmLinear, lssvmPoly, lssvmRadial, lvq, M5, M5Rules, manb, mda, Mlda, mlp, mlpKerasDecay,
mlpKerasDecayCost, mlpKerasDropout, mlpKerasDropoutCost, mlpML, mlpSGD, mlpWeightDecay,
mlpWeightDecayML, monmlp, msaenet, multinom, mxnet, mxnetAdam, naive_bayes, nb, nbDiscrete,
nbSearch, neuralnet, nnet, nnls, nodeHarvest, null, OneR, ordinalNet, ORFlog, ORFpls,
ORFridge, ORFsvm, ownn, pam, parRF, PART, partDSA, pcaNNet, pcr, pda, pda2, penalized,
PenalizedLDA, plr, pls, plsRglm, polr, ppr, PRIM, protoclass, pythonKnnReg, qda, QdaCov,
qrf, qrnn, randomGLM, ranger, rbf, rbfDDA, Rborist, rda, regLogistic, relaxo, rf, rFerns,
RFlda, rfRules, ridge, rlda, rlm, rmda, rocc, rotationForest, rotationForestCp, rpart, rpart1SE,
rpart2, rpartCost, rpartScore, rqlasso, rqnc, RRF, RRFglobal, rrlda, RSimca, rvmLinear,
rvmPoly, rvmRadial, SBC, sda, sdwd, simpls, SLAVE, slda, smda, snn, sparseLDA, spikeslab,
spls, stepLDA, stepQDA, superpc, svmBoundrangeString, svmExpoString, svmLinear, svmLinear2,
svmLinear3, svmLinearWeights, svmLinearWeights2, svmPoly, svmRadial, svmRadialCost,
svmRadialSigma, svmRadialWeights, svmSpectrumString, tan, tanSearch, treebag, vbmpRadial,
vglmAdjCat, vglmContRatio, vglmCumulative, widekernelpls, WM, wsrf, xgbDART, xgbLinear,
xgbTree, xyf'

Each of those is a machine learning algorithm caret supports.

Yes, it's a huge list!

And if you want to know more details, like the hyperparameters and whether it can be used for regression or classification problems, then do a modelLookup(algo).

Once you have chosen an algorithm, building the model is fairly easy using the train() function.

Let's train a Multivariate Adaptive Regression Splines (MARS) model by setting method='earth'. The MARS algorithm was named 'earth' in R because of a possible trademark conflict with Salford Systems. Maybe a rumor. Or not.
modelLookup('earth')

  model parameter          label forReg forClass probModel
1 earth    nprune         #Terms   TRUE     TRUE      TRUE
2 earth    degree Product Degree   TRUE     TRUE      TRUE

# Set the seed for reproducibility
set.seed(100)

# Train a MARS model and predict on the training data itself.
model_mars = train(Purchase ~ ., data=trainData, method='earth')
fitted <- predict(model_mars)

But you may ask how using train() is different from using the algorithm's function directly.

The difference is that, besides building the model, train() does multiple other things, like:

1. Cross validating the model
2. Tuning the hyperparameters for optimal model performance
3. Choosing the optimal model based on a given evaluation metric
4. Preprocessing the predictors (what we did so far using preProcess())

The train() function also accepts the arguments used by the algorithm specified in the method argument.
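For example, algorithm-specific arguments are passed straight through to the underlying fitting function. A minimal sketch, assuming the randomForest package is installed (ntree is randomForest's own argument, not caret's):

# Arguments train() doesn't recognise (here, ntree) are forwarded
# to the underlying randomForest() call.
model_rf_500 <- train(Purchase ~ ., data=trainData, method='rf', ntree=500)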
Now let’s see what the train() has generated.
model_mars
Multivariate Adaptive Regression Spline

857 samples
 18 predictor
  2 classes: 'CH', 'MM'

No pre-processing
Resampling: Bootstrapped (25 reps)
Summary of sample sizes: 857, 857, 857, 857, 857, 857, ...
Resampling results across tuning parameters:

  nprune  Accuracy   Kappa
   2      0.8013184  0.5746285
  10      0.8102610  0.5987447
  19      0.8103685  0.5986923

Tuning parameter 'degree' was held constant at a value of 1
Accuracy was used to select the optimal model using the largest value.
The final values used for the model were nprune = 19 and degree = 1.

You can see the Accuracy and Kappa for various combinations of the hyperparameters, nprune and degree. And it says 'Resampling: Bootstrapped (25 reps)' with a summary of sample sizes.

Looks like train() has already done a basic cross validation and hyperparameter tuning. And that is the default behaviour.

The chosen model and its parameters are reported in the last 2 lines of the output. When we used model_mars to predict the Y, this final model was automatically used by predict() to compute the predictions.
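If you want the winning hyperparameters or the fitted model object programmatically, they live on the train object itself (standard components of a caret train object):

# Best hyperparameter combination found by train()
model_mars$bestTune

# The underlying earth model fitted with those parameters
model_mars$finalModel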
Plotting the model shows how the various iterations of hyperparameter search performed.
plot(model_mars, main="Model Accuracies with MARS")

Model Accuracy – MARS

6.2 How to compute variable importance?
Excellent. Since MARS supports computing variable importance, let's extract it using varImp() to understand which variables turned out to be useful.
varimp_mars <- varImp(model_mars)
plot(varimp_mars, main="Variable Importance with MARS")

Feature Importance MARS
As suspected, LoyalCH was the most used variable, followed by PriceDiff and StoreID .

6.3. Prepare the test dataset and predict
A default MARS model has been selected.
Now in order to use the model to predict on new data, the data has to be preprocessed and transformed just the
way we did on the training data.
Thanks to caret, all the information required for pre-processing is stored in the respective preProcess and dummyVars models.
If you recall, we did the pre-processing in the following sequence:

Missing Value imputation –> One-Hot Encoding –> Range Normalization
You need to pass the testData through these models in the same sequence:
preProcess_missingdata_model –> dummies_model –> preProcess_range_model
# Step 1: Impute missing values
testData2 <- predict(preProcess_missingdata_model, testData)
# Step 2: Create one-hot encodings (dummy variables)
testData3 <- predict(dummies_model, testData2)
# Step 3: Transform the features to range between 0 and 1
testData4 <- predict(preProcess_range_model, testData3)
# View
head(testData4[, 1:10])

   WeekofPurchase StoreID PriceCH   PriceMM DiscCH DiscMM SpecialCH SpecialMM   LoyalCH SalePriceMM
2      0.23529412       0   0.150 0.5000000   0.00  0.375         0         1 0.6000352   0.4545455
3      0.35294118       0   0.425 0.6666667   0.34  0.000         0         0 0.6800414   0.8181818
6      0.05882353       1   0.000 0.5000000   0.00  0.000         0         1 0.9652913   0.7272727
7      0.09803922       1   0.000 0.5000000   0.00  0.500         1         1 0.9722459   0.3636364
9      0.15686275       1   0.150 0.5000000   0.00  0.500         0         0 0.9822616   0.3636364
13     0.96078431       1   0.750 0.7333333   0.00  0.675         0         1 0.9927734   0.3636364

6.4. Predict on testData
The test dataset is prepared. Let’s predict the Y.
# Predict on testData
predicted <- predict(model_mars, testData4)
head(predicted)

1. CH
2. CH
3. CH
4. CH
5. CH
6. MM

6.5. Confusion Matrix
The confusion matrix is a tabular representation comparing the predictions (data) against the actuals (reference). By setting mode='everything', pretty much all the classification evaluation metrics are computed.
# Compute the confusion matrix
confusionMatrix(reference = testData$Purchase, data = predicted, mode='everything', positive='MM')
Confusion Matrix and Statistics

          Reference
Prediction  CH  MM
        CH 113  21
        MM  17  62

               Accuracy : 0.8216
                 95% CI : (0.7635, 0.8705)
    No Information Rate : 0.6103
    P-Value [Acc > NIR] : 2.139e-11
                  Kappa : 0.6216
 Mcnemar's Test P-Value : 0.6265
            Sensitivity : 0.7470
            Specificity : 0.8692
         Pos Pred Value : 0.7848
         Neg Pred Value : 0.8433
              Precision : 0.7848
                 Recall : 0.7470
                     F1 : 0.7654
             Prevalence : 0.3897
         Detection Rate : 0.2911
   Detection Prevalence : 0.3709
      Balanced Accuracy : 0.8081
       'Positive' Class : MM

You have an overall accuracy of 82.16%.
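If you need that number programmatically, confusionMatrix() returns an object whose overall element holds the accuracy (a standard component of caret's confusionMatrix output):

# Store the object and pull the accuracy out of it
cm <- confusionMatrix(reference = testData$Purchase, data = predicted, mode='everything', positive='MM')
cm$overall['Accuracy']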

7. How to do hyperparameter tuning to optimize the model for better performance?

There are two main ways to do hyperparameter tuning using train():

1. Set the tuneLength
2. Define and set the tuneGrid

tuneLength corresponds to the number of unique values per tuning parameter that caret will consider while forming the hyperparameter combinations. Caret will automatically determine the values each parameter should take.

Alternately, if you want to explicitly control which values are considered for each parameter, you can define the tuneGrid and pass it to train().

Let's see an example of both these approaches, but first let's set up the trainControl().

7.1. Setting up the trainControl()

The train() function takes a trControl argument that accepts the output of trainControl().

Inside trainControl() you can control:

1. The cross validation method to use.
2. How the results should be summarised, using a summary function.

The cross validation method can be one of:

'boot': Bootstrap sampling
'boot632': Bootstrap sampling with 63.2% bias correction applied
'optimism_boot': The optimism bootstrap estimator
'boot_all': All boot methods
'cv': k-Fold cross validation
'repeatedcv': Repeated k-Fold cross validation
'oob': Out of Bag cross validation
'LOOCV': Leave one out cross validation
'LGOCV': Leave group out cross validation

The summaryFunction can be twoClassSummary if Y is binary, or multiClassSummary if Y has more than 2 categories.

By setting classProbs=T, probability scores are generated instead of directly predicting the class based on a predetermined cutoff of 0.5.
# Define the training control
fitControl <- trainControl(
    method = 'cv',                      # k-fold cross validation
    number = 5,                         # number of folds
    savePredictions = 'final',          # saves predictions for optimal tuning parameter
    classProbs = T,                     # should class probabilities be returned
    summaryFunction = twoClassSummary   # results summary function
)

7.2. Hyperparameter Tuning using tuneLength

Let's take the train() function we used before, and additionally set the tuneLength, trControl and metric.
# Step 1: Tune hyperparameters by setting tuneLength
set.seed(100)
model_mars2 = train(Purchase ~ ., data=trainData, method='earth', tuneLength = 5, metric='ROC', trControl = fitControl)
model_mars2

# Step 2: Predict on testData and compute the confusion matrix
predicted2 <- predict(model_mars2, testData4)
confusionMatrix(reference = testData$Purchase, data = predicted2, mode='everything', positive='MM')

Multivariate Adaptive Regression Spline

857 samples
 18 predictor
  2 classes: 'CH', 'MM'

No pre-processing
Resampling: Cross-Validated (5 fold)
Summary of sample sizes: 685, 685, 687, 686, 685
Resampling results across tuning parameters:

  nprune  ROC        Sens       Spec
   2      0.8745398  0.8700916  0.7006784
   6      0.8912361  0.8719414  0.7334238
  10      0.8879988  0.8623626  0.7423790
  14      0.8879988  0.8623626  0.7423790
  19      0.8879846  0.8661722  0.7483492

Tuning parameter 'degree' was held constant at a value of 1
ROC was used to select the optimal model using the largest value.
The final values used for the model were nprune = 6 and degree = 1.
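Since classProbs=T was set in fitControl, you can also ask predict() for class probabilities rather than hard class labels (type='prob' is the standard option for caret train objects):

# Probability of each class for the first few test rows
head(predict(model_mars2, testData4, type='prob'))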

7.3. Hyperparameter Tuning using tuneGrid
Alternately, you can set the tuneGrid instead of tuneLength .
# Step 1: Define the tuneGrid
marsGrid <- expand.grid(nprune = c(2, 4, 6, 8, 10),
                        degree = c(1, 2, 3))

# Step 2: Tune hyperparameters by setting tuneGrid
set.seed(100)
model_mars3 = train(Purchase ~ ., data=trainData, method='earth', metric='ROC', tuneGrid = marsGrid, trControl = fitControl)
model_mars3

# Step 3: Predict on testData and compute the confusion matrix
predicted3 <- predict(model_mars3, testData4)
confusionMatrix(reference = testData$Purchase, data = predicted3, mode='everything', positive='MM')

Multivariate Adaptive Regression Spline

857 samples
 18 predictor
  2 classes: 'CH', 'MM'

No pre-processing
Resampling: Cross-Validated (5 fold)
Summary of sample sizes: 685, 685, 687, 686, 685
Resampling results across tuning parameters:

  degree  nprune  ROC        Sens       Spec
  1        2      0.8745398  0.8700916  0.7006784
  1        4      0.8924657  0.8662454  0.7394844
  1        6      0.8912361  0.8719414  0.7334238
  1        8      0.8886974  0.8661722  0.7334238
  1       10      0.8879988  0.8623626  0.7423790
  2        2      0.8745398  0.8700916  0.7006784
  2        4      0.8953757  0.8739377  0.7454998
  2        6      0.8917824  0.8681868  0.7515152
  2        8      0.8904559  0.8624359  0.7574401
  2       10      0.8932377  0.8547436  0.7784261
  3        2      0.8582783  0.8777106  0.6618725
  3        4      0.8914544  0.8662454  0.7544550
  3        6      0.8910605  0.8586264  0.7665310
  3        8      0.8838647  0.8452015  0.7456355
  3       10      0.8827056  0.8471062  0.7426504

ROC was used to select the optimal model using the largest value.
The final values used for the model were nprune = 4 and degree = 2.

8. How to evaluate the performance of multiple machine learning algorithms?
Caret provides the resamples() function where you can provide multiple machine learning models and
collectively evaluate them.
Let’s first train some more algorithms.

8.1. Training Adaboost
set.seed(100)

# Train the model using adaboost
model_adaboost = train(Purchase ~ ., data=trainData, method='adaboost', tuneLength=2, trControl = fitControl)
model_adaboost

AdaBoost Classification Trees

857 samples
 18 predictor
  2 classes: 'CH', 'MM'

No pre-processing
Resampling: Cross-Validated (5 fold)
Summary of sample sizes: 685, 685, 687, 686, 685
Resampling results across tuning parameters:

  nIter  method         ROC        Sens       Spec
   50    Adaboost.M1    0.8657598  0.8070330  0.7635007
   50    Real adaboost  0.6884169  0.8356410  0.7275441
  100    Adaboost.M1    0.8638731  0.8127839  0.7515604
  100    Real adaboost  0.6572239  0.8432784  0.7305292

ROC was used to select the optimal model using the largest value.
The final values used for the model were nIter = 50 and method = Adaboost.M1.


8.2. Training Random Forest
set.seed(100)
# Train the model using rf
model_rf = train(Purchase ~ ., data=trainData, method='rf', tuneLength=5, trControl = fitControl)
model_rf
Random Forest

857 samples
 18 predictor
  2 classes: 'CH', 'MM'

No pre-processing
Resampling: Cross-Validated (5 fold)
Summary of sample sizes: 685, 685, 687, 686, 685
Resampling results across tuning parameters:

  mtry  ROC        Sens       Spec
   2    0.8677768  0.8853297  0.6526006
   6    0.8783062  0.8643407  0.7335595
  10    0.8763244  0.8528755  0.7335595
  14    0.8764471  0.8509707  0.7394844
  18    0.8756972  0.8432784  0.7515604

ROC was used to select the optimal model using the largest value.
The final value used for the model was mtry = 6.

8.3. Training xgBoost Dart
set.seed(100)

# Train the model using xgbDART
model_xgbDART = train(Purchase ~ ., data=trainData, method='xgbDART', tuneLength=5, trControl = fitControl, verbose=F)
model_xgbDART

eXtreme Gradient Boosting

857 samples
 18 predictor
  2 classes: 'CH', 'MM'

No pre-processing
Resampling: Cross-Validated (5 fold)
Summary of sample sizes: 685, 685, 687, 686, 685
Resampling results across tuning parameters:

  max_depth  eta  rate_drop  skip_drop  subsample  colsample_bytree  nrounds
  1          0.3  0.01       0.05       0.500      0.6                50
  1          0.3  0.01       0.05       0.500      0.6               100
  1          0.3  0.01       0.05       0.500      0.6               150
  1          0.3  0.01       0.05       0.500      0.6               200
  1          0.3  0.01       0.05       0.500      0.6               250
  1          0.3  0.01       0.05       0.500      0.8                50
  1          0.3  0.01       0.05       0.500      0.8               100
  1          0.3  0.01       0.05       0.500      0.8               150
  1          0.3  0.01       0.05       0.500      0.8               200
  1          0.3  0.01       0.05       0.500      0.8               250
  1          0.3  0.01       0.05       0.625      0.6                50
  (..truncated..)

Tuning parameter 'gamma' was held constant at a value of 0
Tuning parameter 'min_child_weight' was held constant at a value of 1
ROC was used to select the optimal model using the largest value.
The final values used for the model were nrounds = 200, max_depth = 2, eta = 0.3, gamma = 0, subsample = 1, colsample_bytree = 0.6, rate_drop = 0.5, skip_drop = 0.05 and min_child_weight = 1.

8.4. Training SVM
set.seed(100)

# Train the model using svmRadial
model_svmRadial = train(Purchase ~ ., data=trainData, method='svmRadial', tuneLength=15, trControl = fitControl)
model_svmRadial

Support Vector Machines with Radial Basis Function Kernel

857 samples
 18 predictor
  2 classes: 'CH', 'MM'

No pre-processing
Resampling: Cross-Validated (5 fold)
Summary of sample sizes: 685, 685, 687, 686, 685
Resampling results across tuning parameters:

  C        ROC        Sens       Spec
     0.25  0.8858137  0.8775824  0.7213930
     0.50  0.8907736  0.8814469  0.7363636
     1.00  0.8888077  0.8852930  0.7333786
     2.00  0.8864533  0.8680952  0.7305292
     4.00  0.8832598  0.8719048  0.7335142
     8.00  0.8773941  0.8681502  0.7275441
    16.00  0.8719099  0.8623626  0.7245138
    32.00  0.8684735  0.8661905  0.6886929
    64.00  0.8583358  0.8700366  0.6946630
   128.00  0.8481861  0.8681136  0.6856174
   256.00  0.8365898  0.8719414  0.6647218
   512.00  0.8246331  0.8719414  0.6436906
  1024.00  0.8174809  0.8757326  0.6228856
  2048.00  0.8147110  0.8949084  0.5629579
  4096.00  0.8113213  0.9043956  0.5418363

Tuning parameter 'sigma' was held constant at a value of 0.06414448
ROC was used to select the optimal model using the largest value.
The final values used for the model were sigma = 0.06414448 and C = 0.5.

8.5. Run resamples() to compare the models
# Compare model performances using resamples()
models_compare <- resamples(list(ADABOOST=model_adaboost, RF=model_rf, XGBDART=model_xgbDART, MARS=model_mars3, SVM=model_svmRadial))

# Summary of the model performances
summary(models_compare)

Call:
summary.resamples(object = models_compare)

Models: ADABOOST, RF, XGBDART, MARS, SVM
Number of resamples: 5

ROC
              Min.   1st Qu.    Median      Mean   3rd Qu.      Max. NA's
ADABOOST 0.8126510 0.8462687 0.8682549 0.8657598 0.8868515 0.9147727    0
RF       0.8203269 0.8584932 0.8894948 0.8783062 0.9061123 0.9171037    0
XGBDART  0.8618337 0.8656716 0.9142509 0.8980115 0.9169580 0.9313433    0
MARS     0.8520967 0.8660981 0.9091561 0.8953757 0.9118590 0.9376688    0
SVM      0.8537313 0.8728500 0.8903559 0.8907736 0.9053030 0.9316276    0

Sens
              Min.   1st Qu.    Median      Mean   3rd Qu.      Max. NA's
ADABOOST 0.7619048 0.7904762 0.7904762 0.8070330 0.8076923 0.8846154    0
RF       0.8285714 0.8380952 0.8557692 0.8643407 0.8761905 0.9230769    0
XGBDART  0.8190476 0.8476190 0.8571429 0.8586081 0.8653846 0.9038462    0
MARS     0.8190476 0.8476190 0.8857143 0.8739377 0.8942308 0.9230769    0
SVM      0.8653846 0.8761905 0.8857143 0.8814469 0.8857143 0.8942308    0

Spec
              Min.   1st Qu.    Median      Mean   3rd Qu.      Max. NA's
ADABOOST 0.7014925 0.7462687 0.7727273 0.7635007 0.7761194 0.8208955    0
RF       0.6417910 0.7313433 0.7313433 0.7335595 0.7424242 0.8208955    0
XGBDART  0.6417910 0.7313433 0.7462687 0.7515604 0.7727273 0.8656716    0
MARS     0.6567164 0.7164179 0.7313433 0.7454998 0.7424242 0.8805970    0
SVM      0.6417910 0.6818182 0.7313433 0.7363636 0.7462687 0.8805970    0

Let’s plot the resamples summary output.
# Draw box plots to compare models
scales <- list(x=list(relation="free"), y=list(relation="free"))
bwplot(models_compare, scales=scales)

In the above output you can clearly see how the algorithms performed in terms of ROC, Specificity and Sensitivity, and how consistent each has been.

The xgbDART model appears to be the best performing model overall because of its high ROC. But if you need a model that predicts the positives better, you might want to consider MARS, given its high Sensitivity.

Either way, you can now make an informed decision on which model to pick.

9. Ensembling the predictions
9.1. How to ensemble predictions from multiple models using caretEnsemble?

So far we have predictions from multiple individual models. To get them, we had to run the train() function once for each model, store the models and pass them to resamples().

The caretEnsemble package lets you do all of that in one shot. All you have to do is put the names of all the algorithms you want to run in a vector and pass it to caretEnsemble::caretList() instead of caret::train().


library(caretEnsemble)

# Stacking Algorithms - Run multiple algos in one call.
trainControl <- trainControl(method="repeatedcv",
                             number=10,
                             repeats=3,
                             savePredictions=TRUE,
                             classProbs=TRUE)

algorithmList <- c('rf', 'adaboost', 'earth', 'xgbDART', 'svmRadial')

set.seed(100)
models <- caretList(Purchase ~ ., data=trainData, trControl=trainControl, methodList=algorithmList)
results <- resamples(models)
summary(results)
Call:
summary.resamples(object = results)

Models: rf, adaboost, earth, xgbDART, svmRadial
Number of resamples: 30

Accuracy
               Min.   1st Qu.    Median      Mean   3rd Qu.      Max. NA's
rf        0.7126437 0.7764706 0.7965116 0.7990467 0.8235294 0.9058824    0
adaboost  0.6823529 0.7674419 0.7906977 0.7966532 0.8328659 0.8941176    0
earth     0.7209302 0.7906977 0.8187415 0.8164175 0.8367305 0.8604651    0
xgbDART   0.7441860 0.8023256 0.8303694 0.8316063 0.8575581 0.8953488    0
svmRadial 0.7764706 0.8028936 0.8362517 0.8308446 0.8604651 0.8941176    0

Kappa
               Min.   1st Qu.    Median      Mean   3rd Qu.      Max. NA's
rf        0.3733794 0.5051521 0.5504351 0.5639658 0.6253768 0.8040346    0
adaboost  0.3349754 0.5046620 0.5686668 0.5711983 0.6423870 0.7831018    0
earth     0.4102857 0.5609657 0.6148850 0.6095470 0.6580869 0.7147595    0
xgbDART   0.4703247 0.5796222 0.6451845 0.6437736 0.7025532 0.7777140    0
svmRadial 0.5134008 0.5817176 0.6464787 0.6388747 0.7026175 0.7758570    0

Plot the resamples output to compare the models.
# Box plots to compare models
scales <- list(x=list(relation="free"), y=list(relation="free"))
bwplot(results, scales=scales)

Excellent! It is possible to further tune the models within caretList in a customised way.

9.2. How to combine the predictions of multiple models to form a final prediction

That one function simplified a whole lot of work into one line of code.

Here is another thought: is it possible to combine the predicted values from multiple models somehow and make a new ensemble that predicts better?

Turns out this can be done too, using caretStack(). You just need to make sure you don't use the same trainControl you used to build the models.


# Create the trainControl
set.seed(101)
stackControl <- trainControl(method="repeatedcv",
                             number=10,
                             repeats=3,
                             savePredictions=TRUE,
                             classProbs=TRUE)

# Ensemble the predictions of `models` to form a new combined prediction based on glm
stack.glm <- caretStack(models, method="glm", metric="Accuracy", trControl=stackControl)
print(stack.glm)
A glm ensemble of 2 base models: rf, adaboost, earth, xgbDART, svmRadial

Ensemble results:
Generalized Linear Model

2571 samples
   5 predictor
   2 classes: 'CH', 'MM'

No pre-processing
Resampling: Cross-Validated (10 fold, repeated 3 times)
Summary of sample sizes: 2314, 2314, 2314, 2314, 2313, 2313, ...
Resampling results:

  Accuracy   Kappa
  0.8310751  0.6405773

A point to consider: ensembles tend to perform better when the predictions are less correlated with each other. So you may want to try passing different types of models, both high and low performing, rather than sticking to only high accuracy models for the caretStack.
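You can check how correlated the base models actually are with caret's modelCor(), which works on the resamples object created earlier:

# Pairwise correlation of the models' resampled performance;
# values well below 1 suggest the models bring complementary information.
modelCor(results)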

# Predict on testData
stack_predicteds <- predict(stack.glm, newdata=testData4)
head(stack_predicteds)

1. CH
2. CH
3. CH
4. CH
5. CH
6. MM

10. Conclusion

The purpose of this post was to cover the core pieces of the caret package and how you can effectively use it to
build machine learning models.
This information should serve as a reference and as a template you can use to build a standardised machine learning workflow, which you can develop further from here.
