MECHANISED TUNNELLING | TECHNICAL
(response) and independent variables (predictors) is not exactly known before building a predictive model (Breiman et al. 1984). Furthermore, there is no need to make prior assumptions about the relationship between variables. Given a set of samples, CART identifies one input variable and one break-point, and then partitions the samples into two child nodes. Starting from the entire set of available training samples (the root node), recursive binary partitioning is performed at each node until no further split is possible or a terminating criterion is satisfied. At each node, the best split is identified by exhaustive search, i.e., all potential splits on every input variable and every break-point are tested, and the split that minimises the total deviation, when each child node is predicted by the mean output of its samples, is selected. The growing procedure typically produces an overly large tree, which generalises poorly to unseen samples. A pruning procedure is then employed to sequentially remove those splits that contribute little to training accuracy. The tree is pruned all the way back to the root node, yielding a sequence of candidate trees, each of which is tested on an independent validation sample set. The candidate tree with the lowest prediction error is selected as the final tree (Breiman 2001; Wu et al. 2008; Yang et al. 2017; Salimi 2021).
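To make the exhaustive split search concrete, the short base-R sketch below (an illustration only, not the study's code) scores every candidate break-point of a single numeric predictor by the summed deviation of the two child nodes from their respective means, and returns the break-point with the minimum deviation:

best_split <- function(x, y) {
  xs <- sort(unique(x))
  # candidate break-points: midpoints between consecutive distinct values of x
  breaks <- head(xs, -1) + diff(xs) / 2
  # score each candidate by the summed squared deviation of the two child
  # nodes from their means (each child is predicted by its mean output)
  dev <- sapply(breaks, function(b) {
    left  <- y[x <= b]
    right <- y[x >  b]
    sum((left - mean(left))^2) + sum((right - mean(right))^2)
  })
  list(break_point = breaks[which.min(dev)], deviance = min(dev))
}

# Synthetic example: the recovered break-point should fall near x = 4
set.seed(1)
x <- runif(200, 0, 10)
y <- ifelse(x < 4, 2, 5) + rnorm(200, sd = 0.3)
best_split(x, y)

In CART proper, this search is repeated across every input variable at every node, and the best variable/break-point pair is chosen.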
To perform the RT analysis, recursive partitioning and multiple regressions are carried out. From the root node, the data-splitting process is repeated until a previously specified stopping condition is reached. Stopping criteria are defined to keep the resultant tree from becoming too complicated to interpret, and the three main ones are: (1) a minimum number of observations in a node for a split to be attempted; (2) the depth of the tree; and (3) the complexity parameter (cp). The complexity parameter is applied through the following rule: if a split at a given node does not improve the model fit by at least the set cp value, it is not made (Therneau and Atkinson 1997; Therneau et al. 2012; Tomczyk and Ewertowski 2013; Salimi 2021). Alternatively, the optimal tree structure can be identified through tenfold cross-validation. In brief, for regression problems, where the output is a continuous value, CART can predict the targeted outcome using, for example, a least absolute deviation (LAD) error criterion. The study discussed here used CART RT models implemented in the R statistical computing software. There is often a balance to be struck between the depth and complexity of the tree to optimise predictive performance on unseen data. With respect to tree depth, a deeper tree gives a more complicated model that is harder to build and interpret, whereas a shallow tree loses predictive power and may omit relevant parameters. Hence, the tree depth was restricted to 5, 6, 7 and 8.
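As a minimal sketch of how these stopping criteria translate into the R implementation, assuming the rpart package (Therneau and Atkinson 1997) and purely hypothetical variable and dataset names (FPI as the response; UCS, RQD and rock_type as predictors; train_data as the training set), the controls might be set as follows; note that rpart's anova method minimises squared error rather than LAD:

library(rpart)

fit <- rpart(
  FPI ~ UCS + RQD + rock_type,   # hypothetical formula, not the study's exact variables
  data   = train_data,           # hypothetical training data frame
  method = "anova",              # regression tree (squared-error criterion)
  control = rpart.control(
    minsplit = 20,               # (1) minimum observations in a node before a split is attempted
    maxdepth = 6,                # (2) maximum tree depth (depths of 5-8 were considered)
    cp       = 0.001,            # (3) complexity parameter: a split must improve fit by at least this much
    xval     = 10                # tenfold cross-validation
  )
)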
To find this balance, one typically grows a very large tree, which is then pruned back to an optimal subtree using a cost complexity parameter that penalises the objective function (here, the LAD error) for the number of terminal nodes. The optimum setting is the smallest pruned tree with the lowest penalised error. Smaller penalties tend to produce more complex models, which result in larger trees; larger penalties result in much smaller trees. Behind the scenes, the CART RT implementation in R is applied with a range of cost complexity (cp) values.
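Continuing the same hedged sketch (with test_data as a hypothetical held-out dataset), the cross-validated error for each cp value can be inspected and the tree pruned back at the cp giving the lowest cross-validated error; the pruned tree's predictions on unseen samples correspond to the comparison shown in figure 6:

printcp(fit)                                   # cp table: tree size, relative error and cross-validated error (xerror)
best_cp <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pruned  <- prune(fit, cp = best_cp)            # smallest subtree at the selected penalty

pred <- predict(pruned, newdata = test_data)   # predicted FPI for unseen samples (cf. figure 6)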
Above, figure 6: Comparison between the calculated and predicted FPI based on rock type categorisation via regression tree (CART) for training and testing datasets