ARTIFICIAL ADDITIVES


VisionTrack’s latest AI-powered software can save umpteen hours of trawling through camera footage. Jack Carfrae explains how it works


“I think the technical term is shedloads of data,” says Charles Morriston, VisionTrack’s head of professional services. He’s describing the number of clips generated by the average video telematics system. Now common among commercial vehicle fleets, these systems typically capture just about everything, from an innocuous brush of a kerb to a full-blown shunt. They serve up multiple recordings – normally footage of the incident itself, plus a few seconds either side – as irrefutable proof of what took place: hard to beat for the likes of targeted driver training and settling insurance claims. The trouble is, they create an awful lot of alerts, and even relatively small operators can wind up with hours of footage to sift through before they really know whether any clip amounts to more than trundling over a speed bump – which Morriston describes as “the bane of any fleet manager’s life”, at least for those that run a video telematics system.


“They’ll get these videos – and it might be a shock event or a high kerb mounting – and they have to watch them. It takes them a minute or so to find the video, watch it, and then they have to dismiss it as a false positive,” he explains. VisionTrack’s latest piece of software, NARA – Notification, Analysis and Risk Assessment – is designed to sort the videographic wheat from the chaff. Launched in February, it is essentially an analysis tool that uses artificial intelligence to determine what is and isn’t worth a fleet manager’s attention. “What we’re trying to do is prove that watching video is going to become fairly redundant, particularly with the advent of AI,” says Morriston. “It analyses footage automatically, in pretty much near real-time… strips out all the false positives, and what you are left with is primarily actual incidents or collisions that you can then, in real time, inform the insurance company, or inform a fleet or risk manager.”




[Pictured: the AI technology feeds back assessments of video in real time.]


The system makes its decisions by absorbing “billions of data points”, taking in parameters such as distance, speed, and vehicle type. It can, for example, correlate sharp deceleration with the proximity of another road user and deduce that, even though there was no physical contact, there was a near miss; the AI element helps it to spot common issues.

During our video call, Morriston shares an image of a “typical fleet manager’s dashboard” displaying the results from two days of video telematics at work. The fleet of 1,149 vehicles had covered 381,000 miles in that time and generated 576 ‘red’ events – harsh manoeuvres (a ‘black’ event is the most severe and involves an actual collision). This is our own calculation but, if you assume the clips last for an average of eight seconds, that works out at just under an hour and 20 minutes of footage to review.
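VisionTrack has not published how NARA’s models actually work, but the kind of near-miss deduction Morriston describes can be illustrated with a toy rule. Everything below – the thresholds, field names, and severity labels – is invented for the sketch; a real system would learn these from those “billions of data points”.

```python
from dataclasses import dataclass

# Illustrative only - not VisionTrack's algorithm. Thresholds are invented.
HARSH_DECEL_MS2 = 4.0   # hypothetical harsh-braking threshold, m/s^2
NEAR_MISS_GAP_M = 2.0   # hypothetical proximity threshold, metres

@dataclass
class Event:
    decel_ms2: float    # peak longitudinal deceleration
    gap_m: float        # closest distance to another road user
    contact: bool       # did the sensors register an impact?

def classify(e: Event) -> str:
    """Toy severity triage in the article's red/black vocabulary."""
    if e.contact:
        return "black"          # actual collision: notify the insurer
    if e.decel_ms2 >= HARSH_DECEL_MS2:
        if e.gap_m <= NEAR_MISS_GAP_M:
            return "near miss"  # no contact, but a training opportunity
        return "red"            # harsh manoeuvre worth a look
    return "dismissed"          # e.g. trundling over a speed bump

# Hard braking close behind another road user, no impact: a near miss.
print(classify(Event(decel_ms2=5.2, gap_m=1.4, contact=False)))
```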


He then shows us a dashboard from a fleet of 1,272 vehicles using NARA, which had covered close to 2.2 million miles over a week and generated 2,892 red events. “It’s really skimmed those videos down and left 26 that might just need a double check by a human eye,” he explains. “The rest of those videos have automatically been categorised or dismissed.”


Assuming the same average clip time, that’s about three and a half minutes to review.


The system also catalogues less severe clips which don’t depict accidents but might warrant a chase-up. “I’ve also got 379 [in the same example] that potentially require intervention,” adds Morriston. “It could be that it’s a near miss – no collision, nothing to do from an insurance perspective – but it could be a driver training opportunity.”


Assuming the same eight-second clip average, these amounted to around 50 minutes of review time.
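The review-time figures quoted here are simple arithmetic on the magazine’s own eight-second assumption, and are easy to reproduce:

```python
# Reproduces the article's back-of-the-envelope review times,
# using its assumed average clip length of eight seconds.
AVG_CLIP_SECONDS = 8

def review_minutes(clips: int) -> float:
    """Total review time in minutes for a batch of clips."""
    return clips * AVG_CLIP_SECONDS / 60

for label, clips in [
    ("576 red events, no NARA", 576),        # ~76.8 min, just under 1h20
    ("26 clips left for human review", 26),  # ~3.5 min
    ("379 possible interventions", 379),     # ~50.5 min
]:
    print(f"{label}: {review_minutes(clips):.1f} minutes")
```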


Beyond the obvious efficiency and admin benefits, the tech’s two other big paybacks are said to be insurance and safety. At the operator’s discretion, it can automatically send footage of the severest events to the insurer when it issues the alert to the fleet, dramatically speeding up the first notification of loss – FNOL, as it’s known in the insurance industry. “If NARA says, ‘actually, this is a black event because it involves an actual collision,’ then the insurance company can be emailed,” explains Morriston. “We’ve got APIs [application programming interfaces: intermediary software that allows two applications to talk to each other] where we can push data out to insurance systems, or they can request that data on, say, an hourly or a five-minute basis.
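VisionTrack’s insurer integrations are not publicly documented, so the following is a hypothetical sketch of the push model Morriston describes: when an event is classified as black, a small FNOL payload goes straight to an insurer endpoint. The URL and field names are placeholders, not VisionTrack’s API.

```python
import json
from urllib import request

# Hypothetical FNOL push - endpoint and payload shape are invented;
# VisionTrack's real insurer APIs are not public.
INSURER_FNOL_URL = "https://insurer.example.com/api/fnol"  # placeholder

def push_fnol(event: dict) -> int:
    """POST a first-notification-of-loss for a 'black' (collision) event."""
    if event.get("severity") != "black":
        return 0  # only confirmed collisions trigger an automatic FNOL
    body = json.dumps({
        "vehicle_id": event["vehicle_id"],
        "occurred_at": event["occurred_at"],
        "clip_url": event["clip_url"],   # evidence footage for the claim
    }).encode()
    req = request.Request(
        INSURER_FNOL_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # raises on HTTP errors
        return resp.status
```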


“Even if your driver is at fault, it’s always better to be on the front foot with that claim. Launch that claim within that golden hour, and there are huge savings in time and cost.” The company says it can

