DEEP LEARNING


more cost-effective for our customers.’ The winner of the start-up award, GrAI Matter Labs, has done just that. It plans to release an AI processor that it says is 20 times more efficient, in terms of inferences per second per watt, than a Google Edge TPU or an Nvidia Jetson NX. The Life-Ready AI chip is a vision inference processor that lowers energy consumption by processing only the changes happening in the images, rather than every single pixel, which lets it optimise the compute inside the network. The sparsity-driven AI system-on-chip delivers ResNet-50 inferences with 1ms latency.
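
GrAI Matter Labs has not published the chip’s internals, but the change-driven idea itself is easy to illustrate. The following is a minimal NumPy sketch of the principle only, assuming a simple per-pixel intensity threshold; the frame sizes, threshold value and changed_pixel_fraction helper are illustrative, not from GrAI:

```python
import numpy as np

def changed_pixel_fraction(prev_frame, frame, threshold=10):
    """Return a mask of changed pixels and the fraction they represent.

    The change-driven idea: downstream layers only update activations
    for the masked pixels, so compute scales with scene activity
    rather than with resolution times frame rate.
    """
    delta = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = delta > threshold
    return mask, mask.mean()

# A mostly static scene: only a small region changes between frames.
prev = np.zeros((480, 640), dtype=np.uint8)
curr = prev.copy()
curr[200:220, 300:340] += 50  # a small moving object

mask, fraction = changed_pixel_fraction(prev, curr)
print(f"pixels needing processing: {fraction:.2%} of the frame")  # ~0.26%
```

On a largely static industrial scene this fraction stays tiny, which is the intuition behind the chip’s claimed inferences-per-watt advantage.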


Both Framos and Adlink have built solutions based on the chip, with Framos showing a 3D imaging platform running Life-Ready AI with a D435e depth camera in Stuttgart.


GrAI Matter Labs is only one of nine AI chip makers in Europe, compared with 32 firms building AI chips in China and 64 in the US, Christian Verbrugge, GrAI Matter Labs’ senior director of business development Europe, pointed out during his pitch to the start-up award judges.

The disparity in the levels of activity around using AI for image processing in different regions was picked up on by Ley during the panel discussion. He said there is big potential behind the technology and it should be considered on its merits. ‘What we see, especially in the Chinese market, is that there are a lot of people experimenting with it, and going through the learning curve now... From a regional perspective, Europeans should get into this technology sooner rather than later, to make sure they are not left behind. There are engineers in other territories that are jumping on this.’

Ley said the majority of deep learning projects involving machine vision are custom projects at the moment. He said the expectation is for large IT companies such as Amazon Web Services to release toolchains in the future that make deep learning easy to use, and which might work for a couple of standard applications, but this is ‘some time down the road’.


Donato Montanari, vice president and general manager, machine vision at Zebra Technologies, said a lot of the challenges organisations face are around how to start using deep learning. Zebra Technologies recently entered the industrial vision space with a range of smart cameras, alongside acquiring software provider Adaptive Vision. Ley also noted that many of Basler’s customers don’t yet have sufficient competence with the technology.

The first suggestion for companies wanting to start using deep learning is to save images, Montanari advised. ‘Take images, take a lot of them, store them, learn how to organise them,’ he said.


Depth of AI products in Stuttgart


Musashi AI has worked with automotive firms to develop AI inspection for complex parts such as camshafts. The software combines anomaly detection and supervised learning with object detection. It can start inspecting with as few as 30 to 50 good parts, using the anomaly detector to correlate with what the object detector sees to identify defects. It runs on the edge. Version 2.0 also uses instance segmentation to improve the process.
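
Musashi AI’s pipeline is proprietary, but the ‘start from a few dozen good parts’ approach maps onto standard one-class anomaly detection. A minimal sketch, assuming part images have already been reduced to feature vectors (the synthetic numbers below stand in for real embeddings):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-ins for feature vectors extracted from part images,
# e.g. embeddings from a pretrained CNN (hypothetical here).
good_parts = rng.normal(loc=0.0, scale=1.0, size=(40, 128))  # ~30-50 good samples
suspects = rng.normal(loc=4.0, scale=1.0, size=(5, 128))     # unlike anything seen

# Fit a one-class model on good parts only; anything that does not
# resemble them is flagged, with no defect images needed for training.
detector = IsolationForest(random_state=0).fit(good_parts)
print(detector.predict(good_parts[:5]))  # 1 = looks like a good part
print(detector.predict(suspects))        # -1 = anomaly, inspect further
```

In a deployed system the flagged parts would then be cross-checked against the object detector’s output, as Musashi AI describes.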


EVT was showing a smart 3D triangulation camera built on a Google Coral chip for running deep learning, either for pre-processing or for analysing the point cloud. The camera also contains a Xilinx Zynq dual-core processor with FPGA. The Coral SoC for deep learning operates at 4TOPS with 4W TDP power consumption. Those developing in TensorFlow can convert their networks to TensorFlow Lite and run them onboard the camera on the Coral chip. Transfer learning can also be done on the smart camera, meaning the network can be trained on site. Other EVT cameras are also available based on similar processing hardware, including a dual-head camera, a line scan model and a thermal camera.
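
EVT has not published its tooling, but the TensorFlow-to-TensorFlow Lite step it relies on is standard. A sketch of a conversion suited to an int8 accelerator such as the Coral (the saved_model/ path and random calibration data are placeholders; deploying to the Edge TPU additionally requires Google’s edgetpu_compiler):

```python
import tensorflow as tf

def representative_data():
    # A few sample inputs let the converter calibrate int8 ranges;
    # in practice these would be real frames from the camera.
    for _ in range(100):
        yield [tf.random.uniform((1, 224, 224, 3), 0.0, 1.0, tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```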


Easics has developed an embedded neural network technology called NearbAI. The technology is designed for running neural networks close to the sensor on embedded hardware, typically in applications with low latency requirements. NearbAI is platform independent and can be mapped to FPGAs, ASICs and SoMs. It can be used for tasks requiring ultra-low latency, as well as deterministic, non-variable latency.


The next step is learning how to annotate the images, which is where a lot of deployments fail, Montanari continued. He said the best way to annotate images is to be on the factory floor together with the quality expert.

Some of the start-ups pitching as part of the start-up award were offering services to help make data annotation easier and more effective – companies like HodooAI, which offers a cloud service for data labelling. Services in data annotation are growing because it’s such a critical part to get right – Andrew Ng of Landing AI, speaking at the Association for Advancing Automation’s Automate Forward show earlier in the year, said that 80 per cent of the time invested in a deep learning vision project is spent dealing with data.

Ultimately, as Munkelt said during the Vision panel discussion, there have to be toolchains developed for the entire process, from labelling data to the choice of algorithm, so that the customer can quickly find out what works and what doesn’t.


Processing power

Inference can be run on a CPU, but training a network typically needs a GPU, Montanari said. There’s also the option of training a network in the cloud and then deploying it to edge devices.
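
This train-on-GPU, infer-on-CPU split is the standard workflow. A minimal PyTorch sketch of it, with a toy two-layer network standing in for a real vision model (the file name and shapes are placeholders):

```python
import torch
import torch.nn as nn

# Train where a GPU is available (locally or in the cloud)...
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 128, device=device)        # toy batch
y = torch.randint(0, 2, (32,), device=device)  # toy labels
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
optimizer.step()
torch.save(model.state_dict(), "model.pt")

# ...then run inference on a CPU-only edge device.
edge_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
edge_model.load_state_dict(torch.load("model.pt", map_location="cpu"))
edge_model.eval()
with torch.no_grad():
    prediction = edge_model(torch.randn(1, 128)).argmax(dim=1)
```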


Stephen Walsh at Neurala Europe, one of the start-ups, noted during his pitch that GPUs are expensive and in short supply at the moment. He said Neurala is aiming to reduce the cost of deploying deep learning and to make it more accessible. The company has partnered with Flir to offer Neurala tools and run a Neurala VIA model on Flir’s Firefly DL camera. Walsh said it’s important to accept the fact there aren’t many images of defects available in industrial inspection, and that ‘we have to be able to train with that’.


Jens Hülsmann, of Isra Vision, said deep learning is ‘clearly a future technology’. He said it’s another tool in the toolbox that can solve tasks that were difficult with classic image processing methods.


Mark Williamson, of Stemmer Imaging, noted during the panel discussion that there is now technology to make it easier to train a network with fewer defect images, and that over the last three years deep learning has become easier to deploy, with examples in industrial applications.
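
Williamson did not name specific techniques, but one common way to get by with fewer defect images is transfer learning: freeze an ImageNet-pretrained backbone and train only a small classification head. A minimal PyTorch/torchvision sketch (the class count and random tensors are placeholders for real inspection data):

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)  # new head: good part vs defect

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

# With the backbone frozen, a few dozen labelled images can be enough
# to fit the head; the random tensors below stand in for real data.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
```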


He said: ‘Like any technology, you can be too early, you can be too late, you can be at the right time. Neural networks have been talked about since the 90s. That was too early, but there was potential. We’re now [beginning to see] adoption in some applications. Certainly it’s got its future. Now is the moment [to adopt].’



