interoperability standard Open Platform Communications Unified Architecture (OPC UA). The group released part one of the machine vision companion specification to OPC UA last year, and has now developed a hardware demonstrator that includes a practical implementation of the work. OPC Machine Vision part one describes an abstraction of a generic image processing system, i.e. a representation of a digital twin of the system. It handles the administration of recipes, configurations and results in a standardised way, while their contents remain manufacturer-specific and are treated as a black box. The demonstrator establishes an infrastructure layer to simplify integration of image processing
systems into higher-level IT production systems, such as a PLC, Scada, MES, ERP or the cloud. It demonstrates the generalised control of a vision system and abstracts the necessary behaviour via the concept of a state machine. The information model specified in OPC Machine Vision part one is designed to reduce the implementation time of vision systems, which Heinol-Heikkinen said is ‘one of the biggest pain points in daily automation’.
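To make the idea concrete, the sketch below shows how a client might read the standardised state machine and select a recipe on such a vision system. It is a minimal illustration using the open-source python-opcua package; the endpoint URL, browse names and method name are hypothetical placeholders, not the exact identifiers defined by the companion specification.

```python
# Minimal sketch, assuming the FreeOpcUa "opcua" package (pip install opcua).
# Node names below are illustrative placeholders, not the identifiers
# defined by the OPC Machine Vision companion specification.
from opcua import Client

client = Client("opc.tcp://vision-system.local:4840")  # hypothetical endpoint
client.connect()
try:
    objects = client.get_objects_node()
    # Browse to the vision system object exposed by the server
    # (placeholder browse path).
    vision = objects.get_child(["2:VisionSystem"])
    # Read the current state of the standardised state machine.
    state = vision.get_child(["2:StateMachine", "2:CurrentState"])
    print("Current state:", state.get_value())
    # Select a recipe by its identifier; the recipe's contents stay
    # manufacturer-specific, a black box to the caller.
    vision.call_method("2:SelectRecipe", "recipe-42")  # placeholder method
finally:
    client.disconnect()
```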
The group is now working on part two, which covers how to reach other devices and exchange high-level data that is also of use to other automation equipment: for example, data from a vision system switching between different tasks, such as inspecting different products. The switching itself is straightforward; the idea is to transfer data to another device during or after each task, to generate understanding and perhaps also interaction. This is the starting point of getting machines to talk to each other, Heinol-Heikkinen said. Today, the traditional way of taking data from a camera is via a PLC. The aim of OPC UA Machine Vision part two is for vision data to reach other IT levels directly.
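As a rough illustration of results reaching other systems directly, the sketch below subscribes to a result variable over OPC UA, so each new result is pushed to the consumer as a task completes rather than polled through a PLC. It again assumes the python-opcua package, and the node identifier is a hypothetical placeholder.

```python
# Minimal sketch of receiving vision results directly over OPC UA,
# using the "opcua" package; the node identifier is a placeholder.
import time
from opcua import Client

class ResultHandler:
    # python-opcua calls this on every data change notification.
    def datachange_notification(self, node, val, data):
        print("New result from vision system:", val)

client = Client("opc.tcp://vision-system.local:4840")  # placeholder endpoint
client.connect()
try:
    result_node = client.get_node("ns=2;s=VisionSystem.LastResult")  # placeholder
    # Subscribe so results are pushed as each task completes,
    # rather than being polled through a PLC.
    sub = client.create_subscription(500, ResultHandler())  # 500 ms interval
    sub.subscribe_data_change(result_node)
    time.sleep(10)  # receive notifications for a while
finally:
    client.disconnect()
```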
‘OPC UA is not only an interface. It is a technology we are using to describe a digital twin,’ said Heinol-Heikkinen. ‘Everything introduced into the network,
including a vision system, is described using a standardised information model – a companion specification – which tells the user the type of data recorded by the vision system and how to provide this information to the rest of the factory, or outside the
factory. It’s like a language. Making sure the language is the same, regardless of the content. It’s totally different to a standard interface.’ The information model is designed to be applicable to all vision devices, from small sensors to large vision systems. Asentics vision products, for example, have the OPC UA companion specification built in to ease connectivity. As long as machines and devices are represented by an interoperable digital twin behaving in an Industry 4.0 ecosystem, Heinol-Heikkinen said, it doesn’t matter how machines are physically connected, be that reaching another IT system inside the factory or an external partner in the cloud. ‘Once you have the interoperable digital twin information model, you know how to connect to other IT levels,’ he said. ‘This then becomes just a matter of security, how to connect securely with the outside world. ‘We want to connect to different machines and different IT levels, and reach interoperability between machines. How to reach that is not easy. Machine vision, robots and other factory machines have traditionally developed their own ways of connecting in isolation. These different domains are not connected and not talking to each other. Even though they use OPC UA, there’s still work to be done to reach interoperability.’
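On the security point, OPC UA itself carries mechanisms for signing and encrypting traffic. The snippet below is a minimal sketch of opening a secured session with python-opcua; the endpoint and certificate file names are placeholders.

```python
# Minimal sketch of a secured OPC UA connection with the "opcua" package.
from opcua import Client

client = Client("opc.tcp://vision-system.local:4840")  # placeholder endpoint
# Sign and encrypt traffic with an application certificate and private key
# (placeholder file names).
client.set_security_string("Basic256Sha256,SignAndEncrypt,my_cert.pem,my_key.pem")
client.connect()
# ... exchange data as before, now over a secured channel ...
client.disconnect()
```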
Working at the edge

‘Moving towards reliable Industry 4.0 on manufacturing sites requires a combination of factors to work in parallel,’ said Jonathan Hou, CTO at video interface firm Pleora Technologies. He listed three key factors: improving the backbone network and communication infrastructure; more automation through edge devices; and improving the accuracy of vision and other sensors using AI running on edge devices. Working at ‘the edge’ involves running
any computer processing onboard the device, rather than sending information to a separate PC. Edge devices include embedded camera boards and smart cameras. The idea is that these devices can be positioned throughout the factory to feed in data about the production process. One of the advantages of edge processing for IoT, Hou said, is the ability to add intelligence at the right point in a system, where it can make a decision and pass results on to the next step in the process. In this way, factories can speed up production and lower costs by reducing the need for central processing.
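A rough sketch of that pattern: the device runs its inference locally and publishes only the decision over OPC UA, so no image data has to travel to a central PC. The python-opcua server API is assumed here, and the endpoint, object names and inspect() routine are hypothetical stand-ins for whatever the device actually runs.

```python
# Minimal sketch of an edge device exposing only its decision, not images,
# using the "opcua" server API; all names are placeholders.
import time
from opcua import Server

def inspect(frame):
    # Placeholder for the onboard AI/vision inference; a real device
    # would analyse the frame and return pass/fail.
    return True

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840")  # hypothetical edge endpoint
idx = server.register_namespace("http://example.com/edge-camera")
camera = server.get_objects_node().add_object(idx, "EdgeCamera")
result = camera.add_variable(idx, "LastInspectionPassed", True)
server.start()
try:
    while True:
        frame = None  # placeholder: grab a frame from the onboard sensor
        # Only the decision leaves the device; the image stays local.
        result.set_value(inspect(frame))
        time.sleep(1)
finally:
    server.stop()
```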
‘In order to connect machines so that edge devices can talk to each other, networks start to become an important