BTS | HARDING PRIZE COMPETITION 2023


deployment. If a future project is at the tender stage, a prediction can be made with the trained model. Additional training on new data will increase the accuracy, but the model at this stage will still be able to perform accurately, even when applied to new scenarios.

Three error metrics were used to assess the performance of the neural network in both studies: Mean Squared Error (MSE), Mean Absolute Error (MAE), and R2 correlation. These are all common metrics for neural networks predicting numerical values, and using three of them supports more robust decision-making. MSE penalises large errors heavily, making it useful for understanding both how close a prediction is and how common large errors are. MAE, in contrast to MSE, does not disproportionately penalise large errors and is useful for understanding the typical spread of prediction error. R2 measures how well the inputs of a model can explain an output, much as correlation is used to describe relationships between variables; an R2 value greater than 0.7 is generally considered good. If R2 is above 0.9, the inputs are likely too strongly correlated with the output, suggesting the inputs are directly driving the output.
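The three metrics above have simple closed forms and can be computed directly. As a minimal sketch (the function names here are illustrative, not from the paper):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: squares each residual, so large errors dominate
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    # Mean Absolute Error: average error magnitude, no extra penalty for outliers
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    # R2: fraction of the variance in the actual values explained by the predictions
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Note the asymmetry the text describes: doubling a single residual quadruples its contribution to MSE but only doubles its contribution to MAE.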


RESULTS


Case Study 1 – UKPN Southwark Tunnel

For the UKPN Southwark Tunnel dataset, the neural network used is detailed in Table 3. The network structure was selected based on initial experiments; the learning rate, batch size, and optimisation algorithm are all standard for this field. Training of the neural network started from Rings 1–2000, with predictions and further refinement then performed over Rings 2001–2950. The neural networks were trained to output three parameters – Excavation Time, Excavation Rate, and Earth Pressure Balance (EPB) – using a dataset with 11 inputs and 2950 data points. The dataset consisted predominantly of TBM performance data, which is useful for any running project but should not be relied upon alone when forecasting a future project, as the machine parameters will then also need to be predicted. Table 4 shows the error results. Excavation time for each ring was pleasingly accurate, even with uncertainty from external factors such as breakdowns, with the machine learners able to capture the overall behaviour of the machine. Figure 4 shows a sample of the excavation time predictions, with previous actual data included to show how well the model continues an existing trend.
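The chronological split described above (train on Rings 1–2000, predict over Rings 2001–2950) can be sketched as follows; the arrays here are random stand-ins for the actual ring-by-ring dataset, which is not published with this article:

```python
import numpy as np

# Stand-in dataset: 2950 rings, 11 TBM input channels, 3 targets
# (Excavation Time, Excavation Rate, EPB). Values are random placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(2950, 11))
y = rng.normal(size=(2950, 3))

# Chronological split, not a random shuffle: the model must continue a
# trend forward in time, so later rings are held out for prediction.
X_train, y_train = X[:2000], y[:2000]   # Rings 1-2000
X_test,  y_test  = X[2000:], y[2000:]   # Rings 2001-2950
```

Keeping the split chronological matters here: a random split would leak future ring behaviour into training and overstate forecasting accuracy.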


Here, a clear trend has been successfully captured by the algorithm. Even though there are significant deviations, such as around Ring 2600, the overall MSE is low (0.005), meaning that the size of the error is generally small, as large deviations in prediction would be penalised. The low error is again reflected by the MAE (0.06). As seen around Ring 2450, nuances of tunnelling performance are also captured, as shown by the dip in performance. The R2 score (0.7) is lower than for the other two outputs, as there are fewer data sources available to explain this behaviour. Had the R2 value been above 0.9, it would have indicated that the TBM input parameters were driving the output too directly; this was not the case, as R2 remained below 0.9. In future, additional data sources could be included to understand why there are deviations between prediction and result. The prediction also sits close to the previous data points, as seen at the intersection, showing how the model can accurately continue an existing trend.

Figure 5 shows the average excavation rate of the machine. The average excavation rate results were even more accurate than the excavation time, with the behaviour accurately captured, especially after Ring 2600. This resulted in a low MSE (0.01), which was surprising given the large deviations at Ring 2620 and after Ring 2550. The low error is best seen after Ring 2700 and after Ring 2500, where the predictions are almost identical to the actual results. This low error is reflected by the low MAE (0.07).

Figure 6 shows the EPB predictions. EPB was accurately captured, with only slight deviation just before Ring 2600 and after Ring 2400. This resulted in the lowest MSE of the three predictions (0.0009), and large errors were less common, giving the lowest MAE (0.02). EPB was explained best by the sensor array used, as shown by its high R2 score (0.9). These predictions could be useful for forecasting the machine parameters needed, helping the operator make more efficient decisions.

Overall, the results of the neural network were accurate and captured the underlying behaviours of the machine. This is all the more impressive given that the network was simultaneously predicting three outputs (Excavation Time, Excavation Rate, EPB), which can reduce the accuracy of a model.


Table 3: Neural network parameters for the Southwark Tunnel project

Variable                    Value
--------------------------  ------------------------------
Neural Network Structure    Inputs (11)-100-150-Output (3)
Learning Rate               0.001
Optimisation Algorithm      Adam
Batch Size per Update       16
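As an illustration only, the Table 3 configuration maps onto a standard multi-output regressor. The sketch below uses scikit-learn's MLPRegressor as a stand-in, since the article does not state which framework was used; the training data here is random placeholder data, not the Southwark Tunnel dataset:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data with the paper's dimensions: 11 inputs, 3 outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))
y = rng.normal(size=(200, 3))

# Table 3 settings: structure Inputs(11)-100-150-Output(3),
# Adam optimiser, learning rate 0.001, batch size 16.
model = MLPRegressor(
    hidden_layer_sizes=(100, 150),
    solver="adam",
    learning_rate_init=0.001,
    batch_size=16,
    max_iter=50,          # kept small for this sketch
    random_state=0,
)
model.fit(X, y)

# One forward pass yields all three targets per ring simultaneously,
# matching the paper's single multi-output network.
pred = model.predict(X)
```

A single multi-output network, as used here, forces the hidden layers to share a representation across the three targets, which is one plausible reason the paper notes that joint prediction can affect accuracy.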


36 | August 2023

