EMBEDDED VISION


and Allied Vision. All three have kits based on the Nvidia Jetson Nano developer kit. Allied Vision and MVTec have teamed up on their starter kit to offer Allied Vision’s 1.2 megapixel CSI-2 camera, Alvium 1500 C-120, along with Halcon software. The Imaging Source also provides a kit based on Raspberry Pi, and has recently introduced MIPI CSI-2 and FPD-Link-III board cameras, as well as FPD-Link-III cameras with IP67-rated housing. The board camera range includes monochrome and colour versions with sensors from Sony and On Semiconductor, and resolutions from 0.3 to 8.3 megapixels. Remmert said the kind of questions


Phytec has asked, relating to embedded imaging, concern aspects such as lens specification – Phytec offers M12 or C/CS-mount lens holders on its camera modules, or can create a custom camera with integrated lens – how rugged the components are, and questions about




maintenance and the lifetime of the components. Part of developing an embedded vision product is to validate the image acquisition chain: the optics, the lighting, the image sensor. The algorithm also has to be validated, and then there’s the choice of hardware for the end product and validation of that software and hardware combination. ‘Image processing development takes


time,’ Schmitt said. ‘Creating a prototype can be reasonably fast, but the validation of the final solution takes time. And then


Irida Labs’ PerCV.ai workflow: an AI software and services platform that supports the full vision-AI product lifecycle


companies have to bring this into mass production.’ Building an image processing system


involves a lot of image acquisition and a lot of testing, Schmitt explained. The device has to be tested in the real world, which can throw up problems and means the developer has to alter the algorithm or the image acquisition chain. This takes time, and adapting the system can happen several times. ‘If you are fast, you can do everything


from scratch in one year,’ Schmitt said. ‘If you are making a commercial product, it’s between one and four years.’ Sometimes the software development


is already done and the customer is transferring from a PC to an embedded platform. Once the software is ready, then the work involves porting and optimisation of the final mass production system. Schmitt added that, as long as the


Developing with the OpenCV AI kit


To celebrate OpenCV’s 20th anniversary, the open source computer vision library is running a Kickstarter campaign for its OpenCV AI kit (OAK). At the time of writing, the


campaign, which launched last summer, has raised more than $1.3m. OAK is an MIT-licensed open


source software and Myriad X-based hardware solution for computer vision. Those pledging $99 will


receive an OAK 12 megapixel rolling shutter camera module,


capable of 60fps; those pledging $149 receive the OAK-D stereovision synchronised global shutter camera. The cameras can be


programmed using Python, OpenCV and the DepthAI package. They are supplied with neural networks covering: Covid-19 mask or no-mask detection; age recognition; emotion recognition; face detection; detection of facial features, such as the corners of the eyes, mouth and chin; general object detection (20-class); pedestrian detection; and vehicle detection. Those wanting to train models based on public datasets can use OpenVINO to deploy them on OAK.

14 IMAGING AND MACHINE VISION EUROPE FEBRUARY/MARCH 2021

There are two ways to


generate spatial AI results from OAK-D: monocular neural inference fused with stereo depth, where the neural network is run on a single camera and fused with disparity depth results; and stereo neural inference, where the neural network is run in


parallel on OAK-D’s left and right cameras to produce 3D position data directly with the neural network. OpenCV is running a second


spatial AI competition, inviting users to submit solutions for solving real-world problems using the OAK-D module. A similar competition last summer featured winning projects ranging from systems for aiding the visually impaired to a device for monitoring fish and a mechanical weeding robot for organic farming.
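The monocular-inference-plus-stereo-depth mode described in this sidebar rests on the standard pinhole stereo relation, depth Z = f·B/d. The sketch below is illustrative only: the focal length and disparity are invented values, and the 7.5 cm baseline is an assumption about OAK-D, not a figure from the article.

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Pinhole stereo model: depth Z = f * B / d.

    disparity_px: pixel offset of a feature between left and right views
    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two cameras in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 800 px focal length, 7.5 cm baseline, 100 px disparity
z = depth_from_disparity(100, 800, 0.075)  # 0.6 metres
```

Note the inverse relationship: halving the disparity doubles the estimated depth, which is why stereo depth resolution degrades with distance.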


@imveurope | www.imveurope.com


engineer is working with an embedded Linux solution, mainly based on Arm processors, it is easy to assess the performance on a standard platform, such as a Raspberry Pi, and then tell how fast or slow it will be on a different platform.
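Schmitt's point about assessing performance on a reference platform can be sketched as a simple per-frame timing loop. The workload here (binarising a VGA greyscale frame) is a stand-in invented for illustration, not anything from the article:

```python
import time
import numpy as np

def time_per_frame(step, frame, runs=50):
    """Average wall-clock time of one pipeline step over several runs."""
    step(frame)  # warm-up run, excluded from the measurement
    t0 = time.perf_counter()
    for _ in range(runs):
        step(frame)
    return (time.perf_counter() - t0) / runs

# Stand-in workload: threshold a 640x480 8-bit frame
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
seconds = time_per_frame(lambda f: (f > 128).astype(np.uint8), frame)
fps_estimate = 1.0 / seconds
```

Running the same loop on the target board and on a Raspberry Pi gives the rough scaling factor Schmitt describes, without porting the whole system first.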


Throwing AI into the mix

The performance of the system also depends on whether the engineer is using AI or rule-based image processing algorithms. AI is highly dependent on hardware acceleration, and there are differences between using accelerators from Intel, Google, GPUs or FPGAs, according to Schmitt. The type of neural network the developer is using will also make a huge difference. Engineers will often already have made this choice, Schmitt said; it’s taken at the beginning of the project because someone has expertise in TensorFlow, for example, or knows all the tools in the Nvidia toolchain.
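To make the rule-based end of that spectrum concrete, here is a minimal, hypothetical example of a check that runs on any CPU with no accelerator at all; the task and threshold values are invented for illustration:

```python
import numpy as np

def rule_based_defect_check(gray, dark_fraction_limit=0.05, threshold=60):
    """Flag a frame as defective when too many pixels fall below a brightness threshold."""
    dark_fraction = np.mean(gray < threshold)
    return bool(dark_fraction > dark_fraction_limit)

# Simulated frames: a uniform bright part, and one with a dark blemish
clean = np.full((100, 100), 200, dtype=np.uint8)
defect = clean.copy()
defect[:30, :30] = 10  # blemish covers 9% of the frame, above the 5% limit
```

A learned detector replaces the hand-picked threshold with trained weights, which is exactly where the choice of network and accelerator that Schmitt mentions starts to dominate performance.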



