EMBEDDED VISION
to the point where they see an added-value [for using AI],’ Munkelt said, adding that this has to happen at all stages in the workflow, from data labelling and data management, to the processing. Nilsson noted that AI is good for solving
tasks that are difficult to solve with conventional rule-based image processing. It’s also good for companies that want to use vision but don’t have a lot of expertise in image processing, in tuning algorithms for instance. He added that both deep learning and
conventional image processing will have a place in vision engineering, and that hybrid solutions will also become more common – for instance, doing object segmentation using deep learning and then applying measurement tools with a rule-based algorithm. Some of Sick’s smart cameras now run neural networks, and the company provides deep learning software.
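The hybrid approach described here – a deep-learning segmentation stage feeding a rule-based measurement stage – can be sketched roughly as follows. This is an illustrative sketch only: the `segment` function is a placeholder standing in for a trained neural network, and the pixel-to-millimetre calibration value is a made-up example.

```python
import numpy as np

def segment(image: np.ndarray) -> np.ndarray:
    """Placeholder for the deep-learning stage: in a real system a trained
    segmentation network would return this binary object mask."""
    return image > 0.5  # stand-in threshold "model"

def measure(mask: np.ndarray, mm_per_px: float) -> dict:
    """Rule-based measurement stage: area and bounding-box size of the
    segmented object, converted to millimetres via a calibration factor."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return {"area_mm2": 0.0, "width_mm": 0.0, "height_mm": 0.0}
    return {
        "area_mm2": xs.size * mm_per_px ** 2,
        "width_mm": (xs.max() - xs.min() + 1) * mm_per_px,
        "height_mm": (ys.max() - ys.min() + 1) * mm_per_px,
    }

# Synthetic test image: a bright 20 x 10 px rectangle on a dark background,
# with an assumed calibration of 0.1 mm per pixel.
img = np.zeros((100, 100))
img[40:60, 30:40] = 1.0
result = measure(segment(img), mm_per_px=0.1)
print(result)  # area ≈ 2.0 mm², width ≈ 1.0 mm, height ≈ 2.0 mm
```

The division of labour mirrors the hybrid idea: the learned stage handles the hard perception problem (finding the object), while the deterministic stage produces traceable, tunable measurements.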
The race for AI accelerators
Munkelt said that there’s currently a race to develop AI accelerator hardware, which is going to influence edge processing. He said there are many startups providing really interesting hardware, which can perform 10 or 20 times better than existing GPUs from established vendors. ‘Speed for processing image data is super
important, and will be important in the future,’ Munkelt remarked. ‘Everyone in our vision community is looking at these AI
accelerators, because they can provide a big benefit.’ MVTec’s image processing libraries include neural network-based approaches. Later on during the Embedded World
conference, Jeff Bier, president of the Edge AI and Vision Alliance, and BDTI, gave a presentation where he noted that we are
now entering the golden era of embedded AI processors. He said the chips could achieve 10 to 100, or even 200, times improvement in efficiency when running neural networks. These levels of performance are not obtained through improvements in fabrication, but through improvements in domain-specific architectures. Bier said that close to 100 companies, from startups to the largest established chip makers, are developing and offering these kinds of processors specialised for deep learning and visual AI. ‘This is very different from what the industry looked like a few years ago,’ he said. During the panel discussion Bake said
that one big benefit of embedded vision is that it brings down the cost of components
needed to build a vision system and, as a result, opens up vision processing to fresh application areas. Bake mentioned medical devices, lab automation and intelligent traffic systems as areas where he is seeing embedded vision used more often. He added, however, that one of the
barriers to entry is complexity, in terms of the different types of hardware available – GPUs, SoCs, ISPs, special AI processors – and mapping software effectively to that hardware. ‘We see a lot of attempts from companies trying to bring the pieces together and make it easier for customers. The easier it’s going to get, the higher the adoption rate and the wider the usage of that technology,’ he said.
Ashe concluded that the next few years will
bring a proliferation of vision devices thanks to the lower cost of CPUs and GPUs. There’s going to be ‘a more unified approach for the platform providers and the edge providers to come together. What’s going to unite them are the applications,’ he said. ‘The three have to work in unison to provide this seamless ease-of-use experience to quickly and easily address these use-cases, and it needs to work almost as efficiently as the app store on our smartphones.’ Developers building the applications
will be interconnected with hardware vendors and platform vendors. ‘It’s already happening,’ he said, ‘there’s development happening at the edge and in the cloud, and it’s all coming together.’
Perspective from Anne Wendel, director of VDMA Machine Vision
Embedded vision was, once again, a major trend topic at the Embedded World digital trade fair in March. VDMA Machine Vision organised a panel discussion, and there was a dedicated embedded vision track and vision featured in many keynotes. So, be it in factories, agriculture, or our households and everyday lives, many future applications will be based on embedded vision. The drivers for the success
of embedded vision are cheaper, smaller, more powerful and more energy-efficient processors. Arndt Bake, CMO of Basler, said during the VDMA MV panel discussion that embedded vision devices will continue to get smaller, with smartphones a benchmark for size for camera manufacturers. Deep learning has become crucial to building embedded vision solutions in a more efficient way, and therefore to reducing time-to-market. However, regardless of the rise of AI, a key factor for any vision application remains the quality of the image. Fredrik Nilsson, head of the machine vision business unit at Sick, noted that a vision task should start with the customer’s needs, and, based on these, the appropriate vision components – camera, lighting, etc – should be
selected to get a good image. Nilsson and Olaf Munkelt, managing director of MVTec Software, added that, in many cases, traditional rule-based algorithm approaches will not be replaced by deep learning. Hybrid solutions will also play an important role. Austin Ashe, of Amazon
Web Services, said that orchestration of cloud and edge computing, as well as big data management, will be crucial in embedded vision. He mentioned monitoring devices – or fleets of devices – running real-time alerts, updating devices, training models and analysing data, all taking place in the cloud. In many industrial environments, however, two critical questions remain for any cloud-based application: data security and data ownership. PC-based image processing solutions still have their place in an industrial setting. Embedded technology is used more when the task is cost-sensitive and where larger numbers of vision devices are required. That said, new players
are now enriching the embedded vision ecosystem: those developing AI-accelerating hardware, board manufacturers, platform providers, development consultancies focusing on embedded vision solutions and software firms. One thing is for sure: we will see more smart machines and devices that are able to see and understand in the near future.
APRIL/MAY 2021 IMAGING AND MACHINE VISION EUROPE 23