Feature: Machine vision


Movidius vision-processing unit


Inference, the ability to run a neural-network model once it has been trained, is compute-intensive and requires memory close to the processing cores. Vision-processing unit (VPU) systems-on-chip (SoCs) are a good example of inference engines designed for image- and video-processing tasks. Some of the latest VPUs feature as many as 16 very long instruction word (VLIW), 128-bit streaming hybrid architecture vector engines (SHAVEs), enabling them to perform inference at up to one trillion operations per second. A total of eight high-definition video cameras can be connected directly to a VPU through its 16 MIPI interfaces.
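
As a rough illustration of the data rates involved, the aggregate raw bandwidth of eight directly attached HD cameras can be estimated with a few lines of arithmetic. The resolution, frame rate and pixel depth below are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope estimate of raw camera ingest bandwidth for a VPU.
# Assumed figures (not from the article): 1080p cameras, 30fps, 3 bytes/pixel.
NUM_CAMERAS = 8
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 3          # e.g. 8-bit RGB
FRAMES_PER_SECOND = 30

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
total_bytes_per_second = NUM_CAMERAS * bytes_per_frame * FRAMES_PER_SECOND

print(f"Aggregate raw ingest: {total_bytes_per_second / 1e9:.2f} GB/s")
# Roughly 1.5GB/s of raw pixels; the much larger internal memory throughput
# of a VPU is consumed mainly by the intermediate feature maps and frame
# manipulations performed on those streams, not by the raw ingest itself.
```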


Each camera stream can utilise one of twenty hardware-based accelerators for a variety of image- or video-frame manipulations, for example stereo depth measurement or Kalman filtering to isolate and track specific object features. Some of the latest VPUs offer memory throughput of up to 450GB per second, meaning they are ideally placed to handle complex, high-throughput image-processing tasks. And, despite their impressive performance specifications, power consumption is often less than 3W.
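
The Kalman-filter style of feature tracking mentioned above is easy to prototype on a host PC before mapping it onto a VPU's hardware accelerators. The sketch below uses OpenCV's cv2.KalmanFilter with a constant-velocity model; it is an illustration of the workload, not the VPU's own API.

```python
import numpy as np
import cv2

# Constant-velocity Kalman filter tracking a single object centroid (x, y).
# State: [x, y, vx, vy]; measurement: [x, y].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

def track(detections):
    """Smooth a stream of noisy (x, y) detections from one camera stream."""
    smoothed = []
    for x, y in detections:
        kf.predict()                                    # propagate the motion model
        measurement = np.array([[x], [y]], dtype=np.float32)
        state = kf.correct(measurement)                 # fuse the new observation
        smoothed.append((float(state[0, 0]), float(state[1, 0])))
    return smoothed

# Example: a noisy object moving diagonally across the frame.
noisy = [(i + np.random.randn(), i + np.random.randn()) for i in range(20)]
print(track(noisy)[-1])
```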


VPUs are available in a variety of subsystem configurations, including system-on-module, M.2 (A+E key) cards and PCIe card formats.


In parallel, software toolkits such as Open Visual Inference and Neural-network Optimisation (OpenVINO) are being developed to simplify the design of machine-vision applications. These toolkits support a heterogeneous development and prototyping environment across microprocessors, FPGAs, GPUs and VPUs with a common API framework. Packed with function-specific libraries, example applications, pre-optimised kernels and pre-trained models, they support many of the popular neural-network frameworks in use, such as Caffe and TensorFlow.
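
As a flavour of how such a toolkit is used, the sketch below loads a pre-trained model and runs it on a VPU through OpenVINO's Python inference-engine API as it stood around the 2020 releases. The model file names and input image are placeholders, and the exact calls may differ between toolkit versions.

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore  # OpenVINO Python API (2020.x era)

ie = IECore()

# Read an optimised IR model produced by the toolkit's model optimiser
# (file names are placeholders for this sketch).
net = ie.read_network(model="face-detection.xml", weights="face-detection.bin")
input_name = next(iter(net.input_info))
_, channels, height, width = net.input_info[input_name].input_data.shape

# Compile the network for the VPU plugin ("MYRIAD"); "CPU" or "GPU" would
# target other devices through the same API.
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# Prepare one frame: resize to the model's input size and convert HWC -> NCHW.
frame = cv2.imread("frame.jpg")
blob = cv2.resize(frame, (width, height)).transpose(2, 0, 1)[np.newaxis, ...]

# Run inference and collect the output blobs as numpy arrays.
results = exec_net.infer(inputs={input_name: blob})
for name, output in results.items():
    print(name, output.shape)
```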


Subsystems based on VPUs and OpenVINO-class toolkits are therefore enabling industrial robot designers to prototype and deploy new robotic applications that meet the increasing demand for large, high-throughput vision-processing tasks. Yet their power consumption remains a fraction of that of GPUs and other programmable devices.


Connecting robots to legacy network devices

The past four decades have seen rapid growth in industrial automation and process control across industry. During that time, the protocols used to control machinery have multiplied and, even among those industrial control and networking protocols that have stood the test of time, there have been multiple iterations.


Today, Ethernet is establishing itself as the dominant wired networking protocol, with enhancements such as time-sensitive networking providing low-latency, highly deterministic communication. The control environment that new robotic deployments encounter may therefore involve connecting to, and achieving bi-directional communication with, equipment that uses protocols such as Profinet, EtherCAT, Modbus TCP or others. This is where protocol-conversion cards come in.

There are now many such cards available that make integrating robots with legacy network devices much easier. Some of the latest protocol-converter PC cards and modules, for example, use their own specialist multi-network processor IC, making it much easier to provision reliable communication between industrial PCs, network-attached machinery and industrial robots. Connecting Ethernet to EtherCAT, Modbus TCP, Profibus or Profinet-based applications in a flexible yet reliable manner is now possible.
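
For instance, once a protocol-converter card exposes a legacy Modbus TCP device on the plant's Ethernet network, a robot controller can poll it with a few lines of code. The sketch below assumes the pymodbus library (2.x-style API) and uses placeholder IP addresses and register offsets.

```python
from pymodbus.client.sync import ModbusTcpClient  # pymodbus 2.x

# Placeholder address of a legacy device reached through a protocol-converter
# card on the plant Ethernet (assumption for this sketch).
client = ModbusTcpClient("192.168.1.50", port=502)

if client.connect():
    # Read eight holding registers starting at offset 0, e.g. spindle speed,
    # temperatures or status words, depending on the machine's register map.
    response = client.read_holding_registers(address=0, count=8, unit=1)
    if not response.isError():
        print("Machine registers:", response.registers)

    # Write a single register, e.g. a setpoint the robot cell needs to change.
    client.write_register(address=10, value=1500, unit=1)

    client.close()
```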


What lies at the heart of some of the latest protocol-converter PC cards and modules is a multi-network processor. Developed specifically for protocol conversion, this type of processor is optimised for the real-time, high-bandwidth requirements of Ethernet and TCP/IP protocols, despite its low power consumption.


There are two ways to assess this processor technology's capabilities: purchase one of the latest PC cards on the market, or incorporate the processor directly into a system, a path that manufacturers of industrial automation equipment in particular may want to take. Either way, ready-made protocol-conversion solutions are available, with latencies below 15µs.


The architecture of some of the latest multi-network processor technology consists of an Arm Cortex-M3 core and an FPGA fabric. The Cortex core manages the protocol and application stacks, whilst the FPGA manages the physical interfaces and the real-time switch. With its flash-based architecture, this type of processor can be reprogrammed to accommodate different protocols with ease, providing a highly efficient, low-latency and high-bandwidth way of supporting legacy industrial networking protocols. This approach enables cobots and other new robotic applications to be integrated quickly, efficiently and reliably into existing industrial automation domains.


Going forward

We have seen how, as robotic applications become more sophisticated and more closely integrated into industrial processes, the demand for compute-intensive resources is growing. At the same time, factory floor-space is becoming ever more constrained, which makes reducing the number of control cabinets critical. The need to squeeze even more technology into a tight space makes it all the more important to minimise the energy consumed, in order to avoid thermal challenges.


Low-power, high-performance computing devices such as the latest VPUs and protocol-converter PC cards and modules represent just two examples of technologies already advancing the development of more sophisticated robotic systems.

