Artificial Intelligence
Factories transferring data from multiple camera feeds over high-speed Ethernet to a tower PC in the same building for processing are a good example of this kind of application. In this type of camera-to-server installation, data is harvested from the camera feeds and sent, in its entirety, to the control room for processing by discrete GPU cards. Imagine the amount of data transfer consumed by ten 4K cameras operating 24 hours per day. That is fine when 100 per cent of this data travels over existing Ethernet infrastructure, but things get more challenging when cameras are out in the field: dotted along motorways, in mobile camera units, on rolling stock, or exposed to the elements in the harsh environment of the metropolis. In these situations, often the only means of data transfer is sporadic Wi-Fi, existing twisted-pair cabling, or cellular, none of which is practical for this volume of data, either for commercial reasons or simply through lack of bandwidth.
This is where embedded edge AI systems come into their own. Instead of transmitting the data in its entirety, it can be processed right at the edge, in close proximity to the camera itself, with only the critical information sent back to the control room. This can mean the difference between kilobytes and gigabytes, and it opens the door to cellular data transmission, especially now that 5G is being rolled out. With the advent of GPU processing at the edge, mobile camera units can process footage and raise alerts on incidents in real time, independent of any established connectivity in their area. One of the catalysts behind the move to GPU processing at the edge, alongside the popular Nvidia Jetson family, is the MXM.
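As a rough illustration of the principle, the sketch below (a minimal example, not taken from the article) analyses frames next to the camera and transmits only a small JSON event record when something of interest happens. Simple motion detection stands in for a real GPU inference model, and the endpoint URL and threshold are hypothetical.

```python
# Minimal edge-processing sketch: analyse frames locally and transmit only
# small event records instead of streaming raw 4K video back to base.
# Motion detection stands in for a real GPU inference model.
import json
import time

import cv2        # pip install opencv-python
import requests   # pip install requests

CAMERA_INDEX = 0                                         # local camera feed
EVENT_ENDPOINT = "https://control-room.example/events"   # hypothetical endpoint
MIN_CHANGED_PIXELS = 5000                                # crude "incident" threshold


def main() -> None:
    capture = cv2.VideoCapture(CAMERA_INDEX)
    subtractor = cv2.createBackgroundSubtractorMOG2()  # stand-in for an AI model

    while True:
        ok, frame = capture.read()
        if not ok:
            break

        # All the heavy lifting happens here, on the edge device.
        mask = subtractor.apply(frame)
        changed = int(cv2.countNonZero(mask))

        if changed > MIN_CHANGED_PIXELS:
            # Only a few hundred bytes cross the cellular link,
            # not the multi-megabyte frame itself.
            event = {"timestamp": time.time(), "changed_pixels": changed}
            requests.post(EVENT_ENDPOINT, data=json.dumps(event), timeout=5)


if __name__ == "__main__":
    main()
```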
What is an MXM?
As with many advances in technology driven by size constraints, such as M.2 modules, the MXM, or Mobile PCI Express Module, was originally designed for use in laptops. It was intended to provide an industry-standard socket for graphics processors, allowing GPU capabilities to be upgraded easily without replacing the whole system or being tied to proprietary hardware for the lifetime of the device.
In a similar vein to M.2 modules, this technology was quickly snapped up by manufacturers and integrators in
the embedded arena. Facing the same space constraints as small commercial devices, embedded systems require high processing power in a small footprint, but it isn't just the power-to-size ratio that is making MXMs ever more popular in embedded applications.
PCI Express graphics cards are not, and never have been, designed for rugged applications where vibration is an issue. There are steps integrators can take to use them in harsh environments, such as designing GPU cages to minimise the effects of impacts and vibrations on the cards, but by and large they are intended for "carpeted spaces", i.e., offices and control rooms where the environment is calm. Try to use a tower PC with a GPU card in a mobile camera unit, and its days are numbered. MXMs are designed specifically for these harsh environments. Intended to be integrated into systems which are themselves tested and certified to handle vibration, these modules are hardy and capable of withstanding the impacts and tremors of life on the edge. They too can be certified, with options for rail, oil and gas, marine and more, making their integration and certification into already-certified embedded computing systems a much smoother process.
Which MXM GPU is right for your application?
When it comes to graphics processors, Nvidia generally offers two line-ups: GeForce (consumer) and Quadro (professional). As you'd expect, each has its pros and cons, and the balance is tipped by the environment it is being installed in and the type of processing it has to do. The first thing to understand is that GeForce and Quadro are built on the same architectures. New architectures are released regularly, however, and each comes in several variants. For example, one of the latest architecture releases from Nvidia is named Turing, which is split into three variants: TU102, TU104, and TU106. Don't be fooled into thinking the TU106 is more powerful than the TU102, though. In general terms, the lower the number, the more components are integrated into the processing unit, and therefore the greater the processing performance. What this means is that two graphics cards which share the same architecture can actually differ hugely in capabilities.
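As a quick way to see this difference in practice, the short sketch below (illustrative, not from the article) queries each visible GPU with PyTorch's torch.cuda.get_device_properties, so streaming-multiprocessor counts and memory sizes can be compared between modules that share the same architecture.

```python
# Compare the capabilities of GPUs that may share the same architecture.
# Requires PyTorch built with CUDA support.
import torch


def describe_gpus() -> None:
    if not torch.cuda.is_available():
        print("No CUDA-capable GPU detected")
        return

    for index in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(index)
        print(
            f"GPU {index}: {props.name} | "
            f"compute capability {props.major}.{props.minor} | "
            f"{props.multi_processor_count} SMs | "
            f"{props.total_memory / 1024**3:.1f} GiB memory"
        )


if __name__ == "__main__":
    describe_gpus()
```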
Turning to intelligent video analytics applications, there is a series of MXM GPUs which includes features directly geared to video processing. The Nvidia Quadro RTX series of MXM GPUs offers the same high performance we’ve seen with the Turing and Pascal architectures, but with added features to serve video-intensive applications such as machine vision and IVA — features not included in its GeForce RTX cousin.
Nvidia's GPUDirect for Video, available only on Quadro RTX modules, allows manufacturers to write device drivers that efficiently transfer video frames in and out of Nvidia GPU memory. This brings faster processing, because the GPU and I/O devices work together on synchronised transfers rather than sitting in forced wait states; reduced latency, because data is moved in smaller chunks rather than in one large block; and lower overall CPU overhead, because data is copied through pinned host memory. Another feature of Quadro RTX GPU modules is GPU scalability. Many intensive AI applications, such as inference and video analysis, require multiple GPUs to manage the load. The multi-GPU scalability of Quadro RTX modules allows several GPUs to be integrated into a single system, and multiple systems to be used for the application, essentially scaling the number of Tensor cores into the thousands.
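As a rough idea of what this kind of multi-GPU scaling looks like in application code, the sketch below (illustrative only; the stream names and stand-in model are hypothetical) assigns incoming video streams round-robin across however many GPUs PyTorch can see, so adding modules increases throughput without restructuring the application.

```python
# Round-robin assignment of video streams across all available GPUs.
# Illustrative PyTorch sketch; the model is a trivial stand-in.
from itertools import cycle

import torch
import torch.nn as nn


def build_model() -> nn.Module:
    # Stand-in for a real detection or analytics network.
    return nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())


def main() -> None:
    gpu_count = torch.cuda.device_count()
    devices = [torch.device(f"cuda:{i}") for i in range(gpu_count)] or [torch.device("cpu")]

    # One model replica per device; frames from each stream go to "its" GPU.
    replicas = {dev: build_model().to(dev).eval() for dev in devices}
    assignment = cycle(devices)

    streams = ["camera-01", "camera-02", "camera-03", "camera-04"]  # hypothetical
    stream_to_device = {name: next(assignment) for name in streams}

    with torch.no_grad():
        for name, dev in stream_to_device.items():
            frame = torch.rand(1, 3, 720, 1280, device=dev)  # dummy frame
            result = replicas[dev](frame)
            print(f"{name} processed on {dev}, output shape {tuple(result.shape)}")


if __name__ == "__main__":
    main()
```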
In summary
With high processing power and a tiny footprint, MXM GPUs are enabling manufacturers and system integrators to provide efficient data capture and processing right on the edge, right out of the box. Discrete GPU cards still dominate the industrial space, but for applications requiring more durability or a smaller footprint, there is little their larger discrete GPU card cousins can do that MXM modules cannot.