TRAFFIC
Visualisation of road user movements in one of Stockholm’s largest intersections. Several systems work in parallel, and road users are tracked through multiple fields of view. Three per cent of motorised vehicles run the red light; 20 per cent of cyclists do the same
The project is hoping to show that it is possible to replace induction loops with vision technology to measure the number of vehicles coming in and out of the road junction, including pedestrians and cyclists. The Stockholm trial began in August and runs for a year. Viscando has also been granted government funding for another project in Uppsala, setting up a complete solution to deliver a similar pilot to the one in Stockholm, but for a larger intersection.

Normally between one and four cameras are needed for each intersection, depending on the size of the junction. Viscando is able to provide multi-camera tracking, where any number of cameras covering an intersection behave as one system. The installation could use one sensor with a large field of view, but generally around three sensors with 90-degree fields of view are mounted, which gives more flexibility with regard to coverage and avoiding occlusions from trees or buildings. The OTUS3D stereovision camera works in real time, using machine learning algorithms.
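The article does not describe how tracks from several cameras are combined, so the following is only a minimal sketch of one way per-camera detections that already share a metric ground-plane frame could be stitched into intersection-wide tracks. The Track and IntersectionFuser classes, the gating distance and the nearest-neighbour rule are illustrative assumptions, not Viscando's implementation.

```python
# Illustrative sketch only: nearest-neighbour fusion of per-camera detections
# that already share a common ground-plane frame. Names, thresholds and the
# matching rule are assumptions, not Viscando's method.
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    points: list = field(default_factory=list)  # (t, x, y) in metres

    def last(self):
        return self.points[-1]

class IntersectionFuser:
    """Merge detections from several cameras into one set of tracks."""

    def __init__(self, gate_m=1.5):
        self.gate_m = gate_m      # max distance to join an existing track
        self.tracks = []
        self._next_id = 0

    def add_detection(self, t, x, y):
        # Associate with the nearest live track inside the gate,
        # otherwise start a new track.
        best, best_d = None, self.gate_m
        for tr in self.tracks:
            _, lx, ly = tr.last()
            d = ((x - lx) ** 2 + (y - ly) ** 2) ** 0.5
            if d < best_d:
                best, best_d = tr, d
        if best is None:
            best = Track(self._next_id)
            self._next_id += 1
            self.tracks.append(best)
        best.points.append((t, x, y))
        return best.track_id
```

Because each sensor reports positions in real-world units, association across fields of view can reduce to a simple geometric gate rather than appearance matching between views.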
Viscando's technology complies with GDPR regulations, in that the camera doesn't record or transmit video – all the trajectory data is extracted onboard the camera in real time. The data is analysed offline at the moment. 'Sweden and Germany, in particular, have a very strict interpretation of the GDPR rules, which our system fits in well with,' Singh said.
The real-time capability of the system means it is able to control traffic passing through an intersection. 'In the short term we can substitute five to ten induction loops with a single 3D vision system and, if two or three systems are mounted, we can control the full intersection with non-intrusive, low-maintenance technology,' Singh added. 'It also gives the cities much more data about how the intersection is working, the length of queues and duration of waiting time, and being able to include pedestrians and cyclists in the same optimisation scheme.'
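Queue lengths and waiting times of that kind could, in principle, be derived from the trajectory data alone. Below is a minimal sketch, assuming each road user arrives as a list of timestamped ground-plane positions; the speed threshold, time window and data layout are hypothetical, not taken from the article.

```python
# Hypothetical post-processing of trajectory data: estimate how long each
# road user waited (speed below a threshold) and how many were queued at
# once. Thresholds and data layout are illustrative assumptions.
def waiting_time(points, stop_speed=0.5):
    """points: list of (t, x, y); returns seconds spent below stop_speed (m/s)."""
    waited = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(points, points[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        if speed < stop_speed:
            waited += dt
    return waited

def queue_length(tracks, t, stop_speed=0.5, window=1.0):
    """Count road users that are near-stationary around time t."""
    queued = 0
    for points in tracks:
        recent = [p for p in points if t - window <= p[0] <= t + window]
        if len(recent) >= 2 and waiting_time(recent, stop_speed) > 0:
            queued += 1
    return queued
```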
Rather than looking to optimise traffic flow according to the number of cars, one potential scheme could be to optimise it for the number of pedestrians or cyclists passing through the intersection, and that could vary over the day.
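As a toy illustration of such a scheme, green time could be split across approaches in proportion to weighted counts, with the weights for pedestrians and cyclists raised at certain times of day. The function and numbers below are purely illustrative assumptions, not a scheme described in the article.

```python
# Sketch of a count-weighted green-time split; weights could vary by hour.
def green_shares(counts, weights, cycle_s=90):
    """counts: {approach: {'car': n, 'bike': n, 'pedestrian': n}}
    weights: {'car': w, 'bike': w, 'pedestrian': w}
    Returns green time in seconds per approach."""
    demand = {
        approach: sum(weights.get(cls, 1.0) * n for cls, n in c.items())
        for approach, c in counts.items()
    }
    total = sum(demand.values()) or 1.0
    return {approach: cycle_s * d / total for approach, d in demand.items()}

# Example: favour cyclists and pedestrians during the morning peak.
shares = green_shares(
    {'north': {'car': 40, 'bike': 25}, 'east': {'car': 60, 'pedestrian': 10}},
    {'car': 1.0, 'bike': 2.0, 'pedestrian': 2.0},
)
```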
Because it is a 3D system, the camera can deliver real-world coordinates for each object. That makes calibration between systems easier and more or less automatic, according to Singh. This also simplifies the inclusion of the traffic data into a city's geographic information system.
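Feeding those positions into a GIS would still require expressing them in the city's map frame. A minimal sketch, assuming the sensor's surveyed map position and heading are known; the frame conventions are assumptions, not a documented Viscando interface.

```python
# Hypothetical conversion from sensor-local ground-plane coordinates (metres)
# to a city map grid, given the sensor's surveyed position and heading.
import math

def local_to_map(x, y, sensor_east, sensor_north, heading_rad):
    """Rotate a local (x, y) point by the sensor heading and offset it by
    the sensor's surveyed easting/northing in the map frame."""
    east = sensor_east + x * math.cos(heading_rad) - y * math.sin(heading_rad)
    north = sensor_north + x * math.sin(heading_rad) + y * math.cos(heading_rad)
    return east, north
```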
Singh explained that the advantage of using 3D vision is that it gives the size of objects, no matter where they are in the field of view. It also means the speed is recorded automatically, without the calibration needed when using a 2D camera. The 3D system can also track objects moving directly towards the camera, which would be difficult to do with a 2D camera. In addition, when the algorithm classifies