Embedded Design

Breaking the boundaries

Large multi-channel video systems are pushing standard video-processor ICs and software-centric systems to their limits. Allen Vexler looks at how a new micro-footprint H.264 core targeting all-programmable SoC devices can satisfy the demands of high-performance systems.

Advances in technology for gathering, processing and storing large quantities of high-quality digital video are enabling increasingly sophisticated camera-based systems, and are also driving the emergence of new Smarter Vision systems incorporating embedded analytics. These new generations of equipment will enhance applications such as military and security surveillance, automated inspection, traffic monitoring, medical imaging, driver-assistance systems, and many more. With increasing end-user expectations and intensifying design activity, there is growing pressure to build systems featuring larger numbers of video input channels, smaller dimensions to save on parts costs and allow more compact packaging, and reduced power consumption to enhance efficiency and portability.

Demand for large numbers of input channels can be seen in today’s military surveillance equipment such as patrol vehicles or Unmanned Aerial Vehicles (UAVs), which may carry upwards of ten cameras collecting detailed information for strategic purposes or as evidence. Some of the most complex and advanced security surveillance applications can require the video subsystem to handle more than 100 camera feeds.

On the other hand, equipment such as airborne weather monitors may have only a small number of cameras but demand extremely low power and small size to meet tight space and battery size/lifetime constraints.

Low latency can be another key requirement, particularly in control-oriented applications or medical equipment such as arthroscopes. These require a very short latency, typically 10-30ms between the time when the sensor sends the video to the compression engine and the point when the decoded image is displayed on the screen, to ensure a suitably responsive instrument.

36 December 2013/January 2014
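To make the 10-30ms figure concrete, a glass-to-glass latency budget can be sketched as a sum of pipeline stages. The per-stage figures below are illustrative assumptions chosen only to show the arithmetic, not measurements of any particular encoder core.

```python
# Illustrative glass-to-glass latency budget for a low-latency H.264 link.
# All stage figures are assumed values for the sake of the arithmetic,
# not measurements of any particular product.

budget_ms = {
    "sensor readout / capture": 4.0,
    "encode (sub-frame latency)": 6.0,
    "link transport": 3.0,
    "decode": 5.0,
    "display refresh": 8.0,
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:<28} {ms:5.1f} ms")
print(f"{'total glass-to-glass':<28} {total:5.1f} ms")

# A workable budget must land inside the 10-30 ms window cited above.
assert 10.0 <= total <= 30.0
```

Note that a full frame time at 30fps is already 33ms, which is why low-latency designs must start encoding before a complete frame has arrived rather than buffering whole frames at each stage.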

Evolving approaches

The established design approaches for building embedded video systems have been either to use a standard video-processor IC (ASSP), or to build a system using a discrete processor and a custom FPGA. The ASSP approach allows a system to be built with a small footprint, but is relatively inflexible and difficult to scale up to large numbers of input channels. A discrete processor-based system, on the other hand, comprises several components, including multiple banks of DDR and Flash memory, and therefore has a significantly larger footprint. In addition, engineers using either approach can find it difficult to achieve the low levels of latency required for real-time control applications. A different solution is needed if designers are to meet market demands for more channels, increased performance, smaller size and lower power consumption.

A2e Technologies has implemented high-performance H.264 encoder and encoder/decoder cores that take advantage of the integrated dual-core ARM Cortex-A9 MPCore processor and FPGA fabric of the Xilinx Zynq-7000 All Programmable System on Chip (SoC). These cores have a small footprint and low power consumption, and allow scalability to large numbers of channels using one or several Zynq-7000 SoCs.
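The scaling claim above reduces to simple arithmetic: once a per-device channel capacity is fixed, the device count for any installation follows by ceiling division. The figure of four 1080p30 channels per SoC below is a hypothetical capacity used only to illustrate the calculation; the real number depends on the core configuration and the chosen Zynq-7000 part.

```python
from math import ceil

# Sketch of channel scaling across multiple SoCs. The default of four
# 1080p30 channels per device is a hypothetical capacity chosen to
# illustrate the arithmetic, not a datasheet figure.

def devices_needed(channels: int, channels_per_soc: int = 4) -> int:
    """Number of SoCs required to encode the given channel count."""
    return ceil(channels / channels_per_soc)

# The 100-camera security installation mentioned earlier:
print(devices_needed(100))  # -> 25
```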

Components in Electronics

The encoder core can be integrated directly into a camera subsystem as a module, and is capable of accepting raw video from the CMOS sensor, processing the video and compressing it for output to the client system. The Zynq-7000 SoC integrates industry-standard peripherals, a Flash controller and a multi-port DRAM controller, and enables the encoder/decoder to be implemented using only a single bank of external DRAM. Tight coupling between the memory and the processing subsystem provides a high internal bandwidth that is difficult to achieve using discrete processor and memory components.

A significant advantage of programmable SoC FPGA fabric is the flexibility to customise the number of inputs and support non-standard video sizes. To handle multiple input video streams, the device can be configured with multiple H.264 cores. Figure 1 illustrates the architecture of a dual 1080p30 H.264 compression engine. The flexibility of this approach enables engineers to implement large numbers of channels and still maintain the required video-processing throughput. This contrasts with the approach needed when using an ASSP, since many standard ICs provide only two inputs. The designer must typically multiplex several channels into one input, and the resulting

Figure 1. Architecture of a dual-1080p30 H.264 compression engine
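The numbers behind a dual-1080p30 engine like Figure 1 can be estimated directly, assuming 8-bit YUV 4:2:0 input (1.5 bytes per pixel — an assumption, since the article does not state the input format). They also show why the tight memory coupling matters: each channel moves roughly 93MB/s of raw video before compression even begins.

```python
# Back-of-envelope throughput for a dual-1080p30 compression engine,
# assuming 8-bit YUV 4:2:0 input (1.5 bytes per pixel).

WIDTH, HEIGHT, FPS, CHANNELS = 1920, 1080, 30, 2

raw_bytes_per_frame = WIDTH * HEIGHT * 1.5      # 3,110,400 bytes (~3.1 MB)
raw_mb_per_s = raw_bytes_per_frame * FPS / 1e6  # raw input rate per channel

# H.264 codes 16x16-pixel macroblocks; 1080p is coded as 1920x1088.
macroblocks_per_frame = (1920 // 16) * (1088 // 16)  # 120 * 68 = 8160
macroblocks_per_s = macroblocks_per_frame * FPS * CHANNELS

print(f"raw input per channel: {raw_mb_per_s:.1f} MB/s")
print(f"total macroblock rate: {macroblocks_per_s} macroblocks/s")
```

At roughly 93MB/s per channel in and a sustained rate of nearly half a million macroblocks per second across two channels, the single-bank DRAM interface must absorb both the raw frames and the encoder's reference-frame traffic, which is where the multi-port DRAM controller earns its keep.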
