Embedded special: Embedded product update

DeepView machine learning toolkit

Au-Zone Technologies has introduced DeepView, a machine learning toolkit and run-time inference engine for embedded processors. DeepView provides a graphical development environment to design, train and deploy convolutional neural network (CNN) solutions on a range of embedded processors. It is integrated with TensorFlow, Google's training framework, and is compatible with cuDNN, Nvidia's training acceleration library for desktop machines with compatible GPUs. DeepView allows developers to design and test custom CNN networks on proprietary hardware platforms within hours.

The DeepView toolkit serves a range of markets, including ADAS for vehicles, healthcare and patient monitoring, business and retail analytics, mobile and wearable devices, security and surveillance, industrial machine vision, and consumer electronics. The toolkit offers: a feature-rich dataset curation workspace with support for still image and video imports; graph-based network design with several ready-made network templates and 15 fully configurable primitives to simplify custom network designs; a user-configurable image augmentation feature that enhances dataset size and quality; a full-featured network validation suite with real-time tools for runtime profiling and optimisation; and support for common desktop environments.

MIPI/CSI-2 Dart cameras

This autumn, the camera manufacturer Basler will offer an extension module that lets users operate its Dart camera modules via the MIPI Camera Serial Interface 2 (CSI-2) interface used on embedded processing platforms.

While the CSI interface is popular in the embedded world, many users in machine vision want to use established standards such as GenICam, and cables longer than 20 to 30cm, neither of which is available with CSI. So far, these limitations have hindered the adoption of this technology, particularly in industrial environments. The Dart extension module eliminates these impairments: Basler's approach closes the gap between MIPI CSI-2 and the requirements of the machine vision field. With the new extension, the Dart Bcon, a fully GenICam-compatible and plug-and-play camera, can be operated through the CSI-2 interface. No software expertise, such as driver development, is required.

The Basler Pylon camera software suite provides the camera API. This allows unrestricted access to all camera features, and facilitates the migration of existing camera applications, such as from USB3.0/Windows to CSI-2/Linux, on ARM as well as x86 processors.
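DeepView's augmentation implementation is proprietary and not described in the announcement; purely as an illustration of the general idea behind such a feature — enlarging a training set by applying label-preserving transforms to existing images — the following sketch flips tiny images represented as nested lists. All function names and the toy 2×2 "image" are invented for this example.

```python
def hflip(img):
    """Mirror an image (a list of pixel rows) left-to-right."""
    return [list(reversed(row)) for row in img]

def vflip(img):
    """Mirror an image top-to-bottom."""
    return list(reversed([list(row) for row in img]))

def augment(dataset):
    """Expand a labelled dataset with flipped copies of each image.

    Each sample is an (image, label) pair; flips preserve the label,
    so every original sample yields three samples in the output.
    """
    out = []
    for img, label in dataset:
        out.append((img, label))
        out.append((hflip(img), label))
        out.append((vflip(img), label))
    return out

# A one-sample toy dataset: augmentation triples its size.
samples = [([[1, 2], [3, 4]], "cat")]
augmented = augment(samples)
```

Real toolkits apply many more transforms (crops, rotations, colour jitter), but the principle is the same: more varied training data from the same captures.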

DR1152 camera module

eWBM's new DR1152 generates distance information for an object from a single RGB-IR CMOS image sensor by comparing the blur sizes of the RGB and IR pixels. The RGB and IR images are captured through apertures of different sizes, which results in a difference in blurriness between the two images. Major sensor companies have developed RGB-IR image sensors for security and surveillance cameras, and most of them are compatible with the DR1152.

The DR1152-based camera module obtains an object's RGB and IR information without any active illumination. This enables smaller, lower-power depth-processing designs, well suited to outdoor use. The DR1152 supports depth images up to XGA resolution at 30 frames per second, or VGA at 60 frames per second. The SoC's distance calculation error can be as low as 100µm for close objects, allowing it to extract fine depth detail from an object with the proper optical settings.
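eWBM does not publish the DR1152's depth algorithm, so the sketch below only illustrates the underlying dual-aperture principle under a simplified thin-lens defocus model; every symbol here (the camera constant k, the aperture values, the focus distance) is an assumption for this example, not a DR1152 parameter. Because the RGB and IR channels see different effective apertures, the same object produces different blur diameters in the two channels, and the blur difference can be inverted for distance:

```python
def blur_diameter(z, aperture, z_focus, k=1.0):
    """Simplified thin-lens defocus model: blur grows with |1/z_f - 1/z|.

    z and z_focus are distances in metres; aperture and k are
    arbitrary model constants for this illustration.
    """
    return aperture * k * abs(1.0 / z_focus - 1.0 / z)

def depth_from_blur(b_rgb, b_ir, a_rgb, a_ir, z_focus, k=1.0, near_side=True):
    """Recover object distance from the per-channel blur difference.

    Under the model above, b_rgb - b_ir = (a_rgb - a_ir) * k * |1/z_f - 1/z|,
    so the blur difference isolates the defocus term. near_side selects
    the z < z_focus branch of the absolute value.
    """
    defocus = (b_rgb - b_ir) / ((a_rgb - a_ir) * k)
    inv_z = 1.0 / z_focus + defocus if near_side else 1.0 / z_focus - defocus
    return 1.0 / inv_z

# Object at 0.5m, lens focused at 1.0m, RGB aperture twice the IR aperture:
b_rgb = blur_diameter(0.5, aperture=2.0, z_focus=1.0)
b_ir = blur_diameter(0.5, aperture=1.0, z_focus=1.0)
z = depth_from_blur(b_rgb, b_ir, a_rgb=2.0, a_ir=1.0, z_focus=1.0)  # recovers 0.5
```

In practice, estimating per-pixel blur and resolving the near/far ambiguity are the hard parts; the article names the DR1152's techniques for this only at a high level (blur-channel combining, depth noise reduction, edge thinning).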

This solution can also extract 3D depth information using a normal three-colour RGB image sensor by properly tuning the aperture filters. eWBM believes that the three-colour adaptation will open a new market sector, as it widens the choice of compatible image sensors.

The depth map processing engine of the SoC is powered by the company's technologies, such as adaptive blur-channel combining, depth noise reduction, and edge thinning. A comprehensive SDK, as well as the necessary firmware and drivers, is available to application developers for both the Windows and Android platforms.

The DR1152 is suitable for various applications, including gesture recognition solutions for AR/VR devices, or security uses such as face recognition.

Halcon 13.0.1

MVTec Software has released Halcon 13.0.1, which is compatible with ARM-based platforms running the Linux operating system. This version provides a straightforward way to use Halcon's functions on ARM processing technology. The new Halcon release also offers a number of improved features, including better grading of data codes and optimisations for developers using the Visual Studio extension.

MVTec has many years of experience in porting Halcon on a customised basis to embedded platforms such as Android, BeagleBoard-xM and DragonBoard, as well as other embedded systems. This gives companies the ability to run an extensive machine vision library on compact and robust hardware devices such as smart cameras. Halcon 13.0.1, however, supports ARM-based platforms as standard, without the need for customised porting. The system also includes standard image acquisition interfaces such as GenICam and GigE Vision.

The new Halcon version was tested on a number of hardware platforms, including the Raspberry Pi 3B, Nvidia Tegra TK1 and Xilinx Zedboard.

June/July 2017 • Imaging and Machine Vision Europe 41
