SPONSORED: INDUSTRIAL EMBEDDED VISION
Need for speed
Keely Portway on how vision application designers can use embedded technology to reduce complexity and time-to-market
Advances in embedded computing have been transforming how imaging devices are deployed, thanks to lower development and deployment costs than more traditional machine vision. This has led to more use cases, with applications in industries such as aerospace, automotive, augmented reality (AR), pharmaceutical, consumer electronics, defence, security and even retail.

The technology behind embedded computing has been around for some time. The first ‘smart cameras’ emerged from research institutions in the 1980s. When they reached the commercial market, most embedded products were custom solutions ideally suited to high-volume manufacturing.

Alexis Teissié, product marketing manager at Lucid Vision Labs, explained: ‘For many years, the option was to buy new, more powerful x86 CPU processing, PC-based systems. The way to go if you needed faster processing and higher bandwidth was upgrading the PC architecture, which was very flexible.’

The benefit here, explained Teissié, was that this could be adapted to a variety of configurations, both simple and high-end. ‘Instead of having a central processing system, there was a shift towards moving the processing closer to the acquisition side, closer to the edge,’ he said. ‘There was also the evolution of the graphics processing unit (GPU), which was well suited to a lot of vision processing tasks. One of the big motivations for moving to GPU and edge analytics was being able to run those paradigm shifts compared to traditional machine vision. Being able to run artificial intelligence (AI) on-camera is another motivation, because an optimised AI network doesn’t need the high-end processing hardware.’
Smaller and easier
The evolution of embedded tech has also driven a need for systems to be smaller and easier to integrate. ‘Systems started to become less enclosed,’ said Teissié, ‘so designers would not have to deal with cabling, for example – and the camera and processing would be nearby.’

However, as with all developing technologies, embedded tech does not always come without its challenges. With progress towards miniaturisation and edge processing, application designers found that they needed to work through several time-consuming steps to reach a finished product. Advances in modules that can connect directly to embedded boards have helped to alleviate some of these problems, allowing designers more freedom to create an embedded vision system without having to design everything from a standing start.

But the next challenge for vision application designers was architecture limitations, as Teissié explained. ‘For example,’ he said, ‘it can be difficult to deal with multiple cameras, because there is no standardised connection. So they would have to design carrier boards or interface boards. Then there is the industrialisation part, which is how to produce at scale with something that is robust and reliable.’ This move from concept to system production is a particular challenge.

30 IMAGING AND MACHINE VISION EUROPE APRIL/MAY 2022
@imveurope | www.imveurope.com
Image: TierneyMJ/Shutterstock.com