EMBEDDED VISION


Artificial intelligence adds another layer of complexity to an embedded vision project. The hardware platforms are now available, which is helping companies start working on vision AI projects, noted Dimitris Kastaniotis, product manager at Irida Labs. However, for real-world vision-AI solutions, a lot of effort beyond connecting libraries and hardware platforms is needed to create and support a successful vision-AI application, he said.

Kastaniotis said there are several pain points that need to be addressed during the development of an AI vision solution, such as defining the problem in the first place, the availability of data, hardware optimisation, and AI lifecycle management.

Irida Labs has developed an end-to-end AI software and services platform, PerCV.ai, to shorten development time for building embedded vision AI systems. It is designed to help customers define a problem, and deploy and manage vision devices.

The first step when starting an AI computer vision project, according to Kastaniotis, is determining what the customer needs – getting a detailed specification from the customer. This can be the first area to introduce delays, as the customer can have varying expectations of what can and cannot be achieved.

'You have to create a common understanding with the customer, and do that quickly,' Kastaniotis said. 'The definition of the problem can not only affect the performance, but also the specifications of the vision system and the hardware platform selection. In addition, when working with AI you have to describe the problem in terms of data, as this is what neural networks understand.'

Once the problem has been defined, the


Vision at Embedded World


The Embedded World exhibition has gone digital, with a variety of embedded vision technology, along with conference talks, presented virtually at the show from 1 to 5 March.

The embedded vision conference track will run on 4 and 5 March, and will include case studies covering agricultural livestock monitoring, wildfire surveillance, and a system for detecting anomalies during welding. There will be two sessions on system integration, with talks from the Indian Institute of Technology in Mumbai, Allied Vision, Arm, Neil Trevett at The Khronos Group, Xilinx, and Core Avionics and Industrial. Intel will speak about


‘By solving the three main bottlenecks in embedded AI vision development, we saw a significant improvement in the time-to-market’


next step is a pilot project with customer samples in order to gauge whether the vision solution is solving the problem according to the expectations of the customer.
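A pilot of this kind is typically judged against the specification agreed in the first step. As a hypothetical illustration (the metric names and thresholds below are assumptions for the sketch, not anything from PerCV.ai), a pass/fail check on pilot detection results might look like:

```python
# Illustrative only: checking pilot results against an agreed customer spec.
# The spec keys and threshold values are hypothetical.

def evaluate_pilot(tp: int, fp: int, fn: int, spec: dict) -> dict:
    """Compare detection counts from a pilot run against customer targets."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {
        "precision": precision,
        "recall": recall,
        # The pilot "passes" only if both metrics meet the agreed spec.
        "meets_spec": precision >= spec["min_precision"]
        and recall >= spec["min_recall"],
    }

# Example: 90 true detections, 5 false alarms, 10 missed objects.
result = evaluate_pilot(tp=90, fp=5, fn=10,
                        spec={"min_precision": 0.9, "min_recall": 0.85})
```

Framing the acceptance criteria numerically like this is one way to build the 'common understanding' Kastaniotis describes before scaling up.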


Then comes the data campaign, which can be time consuming and expensive, in Kastaniotis's experience. Cameras need to be installed to collect the data, and the data annotated. PerCV.ai aims to keep data campaigns to a minimum by collecting the right data, as well as minimising annotation effort.
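One common way to cut annotation effort (illustrative only, not a description of PerCV.ai's method) is to skip frames that are nearly identical to ones already queued for labelling. Real pipelines use perceptual hashing or embedding distance; the mean-intensity signature here is deliberately simplistic:

```python
# Sketch: shrink the annotation queue by dropping near-duplicate frames.
# A frame is kept only if its crude signature differs enough from the
# previously selected frame's signature.

def select_for_annotation(frames, threshold=10.0):
    """Return indices of frames whose mean intensity differs enough
    from the last frame selected for annotation."""
    selected = []
    last_sig = None
    for i, frame in enumerate(frames):
        sig = sum(frame) / len(frame)  # crude per-frame signature
        if last_sig is None or abs(sig - last_sig) > threshold:
            selected.append(i)
            last_sig = sig
    return selected

# Three near-identical frames followed by a scene change: only two
# frames reach the annotators.
frames = [[100] * 16, [101] * 16, [99] * 16, [180] * 16]
print(select_for_annotation(frames))  # [0, 3]
```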


its OpenVINO platform, Basler about optimising neural networks, and Toradex about tuning performance.

VDMA Machine Vision will organise a panel discussion on embedded vision, asking what the factors for success are when developing embedded vision systems, and what the barriers to entry are.


The next stage is to move from the first pilot installation to deployment, while also maintaining customer feedback. 'When you start to scale up the system with more sensors and more feedback loops, then you definitely need a platform that will help roll out updates, get feedback, and also detect drift in the distribution of the data,' Kastaniotis said. The PerCV.ai platform can help with these aspects to scale and maintain the vision system over its lifetime.

'By solving the three main bottlenecks in embedded AI vision development – defining the problem, the data campaign, and updating the system during deployment – we saw a significant improvement in the time-to-market of an embedded vision solution,' Kastaniotis added. 'With our platform, we are able to go from six months or one year for a standard embedded vision project, to four to eight weeks for developing and implementing a first system.'

Ultimately, according to Schmitt, the toolsets vision engineers have available now are far better than 10 or 15 years ago. There are also ways to speed up prototyping, by evaluating the image acquisition chain in parallel with the choice of CPU, for instance. 'Choices you make in software may automatically influence the choice of hardware and vice versa,' Schmitt said. But grabbing images and validating the image acquisition equipment – image sensor, lens, and lighting – can be done completely independently from the processing platform, using any standard camera.

In addition, upgrading a Linux OS-on-Arm system is relatively easy, such as moving to a quad-core Arm solution, or a dual-core Arm with a higher clock rate, which 'wouldn't impact your development time,' Schmitt said, adding: 'You don't have to start everything from scratch.'
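The drift detection Kastaniotis mentions can be sketched simply: compare the distribution of a live statistic, such as detection confidence, against a reference window recorded at deployment time. The following is a minimal illustration (the bin count and alert threshold are assumptions for the sketch, not PerCV.ai parameters):

```python
# Sketch: flag drift by comparing histograms of a monitored statistic
# (here, detection confidence in [0, 1]) between a reference window
# and a live window, using total-variation distance.

def histogram(values, bins=10, lo=0.0, hi=1.0):
    """Normalised histogram of values over [lo, hi]."""
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    total = len(values)
    return [c / total for c in counts]

def drift_score(reference, live, bins=10):
    """Total-variation distance between two empirical distributions, in [0, 1]."""
    ref_h = histogram(reference, bins)
    live_h = histogram(live, bins)
    return 0.5 * sum(abs(r - l) for r, l in zip(ref_h, live_h))

reference = [0.9, 0.85, 0.92, 0.88, 0.95]  # confidences at deployment time
live = [0.55, 0.6, 0.58, 0.62, 0.57]       # confidences after conditions change
alert = drift_score(reference, live) > 0.3  # flag for review or retraining
```

A score near 0 means the live data still looks like the deployment data; a score near 1, as in this example, would trigger the feedback loop described above.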


PhyBoard-Pollux embedded imaging kit with i.MX8M Plus module and VM-016 MIPI CSI-2 camera module

www.imveurope.com | @imveurope FEBRUARY/MARCH 2021 IMAGING AND MACHINE VISION EUROPE 15


Phytec

