Embedded special


Do you speak neural network?


Neural networks will be computer vision’s future common language, according to Professor Jitendra Malik, from UC Berkeley, in his keynote presentation at the Embedded Vision Summit


Neural networks will be the primary language of computer vision in the future, rather like English is the common language for the scientific community. At least that is the hope of Professor Jitendra Malik at the University of California, Berkeley. Malik was speaking at the Embedded Vision Summit, a computer vision conference organised by the Embedded Vision Alliance and held in Santa Clara, California from 1 to 3 May.

Half of the technical insight presentations at the conference focused on deep learning and neural networks, a branch of artificial intelligence where the algorithms are trained to recognise objects in a scene using large datasets, as opposed to the traditional method of writing an algorithm for a specific task.

Jeff Bier, the founder of the Embedded Vision Alliance, when introducing Malik for his keynote address at the conference, said that 70 per cent of vision developers surveyed by the Alliance were using neural networks, a huge shift compared to three years ago, when hardly anyone was using them.


Seventy per cent of vision developers surveyed by the [Embedded Vision] Alliance were using neural networks

Bier gave a presentation at the conference predicting that the cost and power consumption of the computation required for vision will decrease by 1,000 times over the next three years, much of this thanks to neural networks. Bier clarified the statement, saying that the figure comes from, firstly, a 10 times improvement in the efficiency of neural networks, which have largely been developed for accuracy rather than efficiency, compounded with a 10 times efficiency increase in the processors running neural networks, and a 10 times improvement in the software that mediates between processors and algorithms.
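A quick worked example of how those three factors compound to the quoted figure; the numbers are Bier's, and the snippet is purely illustrative:

```python
# Three roughly 10x gains compound multiplicatively to the ~1,000x figure quoted above.
network_gain = 10     # more efficient neural network designs
processor_gain = 10   # more efficient processors running them
software_gain = 10    # better software mediating between processors and algorithms

print(network_gain * processor_gain * software_gain)  # 1000
```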


Deep learning has also reached the industrial machine vision world, with the latest version of MVTec's Halcon software running an OCR tool based on deep learning, and Vidi Systems, now owned by Cognex, offering a deep learning software suite for machine vision.

Malik went further than saying neural networks merely have their place in computer vision, suggesting that deep learning could be used to unite different strands of computer vision. He gave the example of 3D vision, for which algorithms like simultaneous localisation and mapping (SLAM) have traditionally been used to model the world in 3D, and for which machine learning hasn't been thought suitable. He said that the world of geometry – which techniques like SLAM fall into – and machine learning need to be brought together. A human will view a chair, for instance, in 3D, as well as being informed by past experiences of other chairs he or she has seen. Geometry and machine learning are two very different languages in computer vision terms, and, Malik said, just as scientists communicate in English, so a common language should be found for computer vision. 'In my opinion, it is easier to make everybody learn English, which in this case is neural networks,' he said.

There are neural networks that start to combine the two worlds of thinking, but Malik noted that putting geometrical data into the language of neural networks requires a fundamental breakthrough. He added that, over the next couple of years, he believes this marriage of geometrical thinking and machine learning-based methods will be achieved.


DEEP LEARNING FRAMEWORKS


UC Berkeley's Caffe and Google's TensorFlow are two of the best-known platforms for developing neural networks. Caffe (http://caffe.berkeleyvision.org/) is a deep learning framework created by Yangqing Jia during his PhD at UC Berkeley, while TensorFlow (www.tensorflow.org) was originally developed by engineers working on the Google Brain Team within Google's Machine Intelligence research organisation for conducting machine learning and deep neural networks research. The Embedded Vision Alliance has instructional webinars on both platforms at www.embedded-vision.com, and will run a class on TensorFlow on 13 July at the Hyatt Regency Hotel in Santa Clara, California.
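As a concrete illustration of the data-driven approach described in this article, the sketch below trains a small image classifier with TensorFlow's Keras API rather than hand-coding recognition rules. It is a minimal example, not code from any of the companies or researchers mentioned; the dataset (MNIST handwritten digits), network shape and hyperparameters are illustrative assumptions.

```python
import tensorflow as tf

# Load a small labelled dataset (handwritten digits) rather than writing task-specific rules.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

# A small convolutional network: the features are learned from the data.
model = tf.keras.Sequential([
    tf.keras.layers.Reshape((28, 28, 1), input_shape=(28, 28)),
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Training adjusts the network's weights from examples; no recognition rules are hand-written.
model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))
```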




He said another exciting area of computer vision research is training machines to make predictions, namely predictions about people and social behaviour. This involves work on teaching machines to recognise actions and to make sense of people's behaviour in light of their possible objectives. He also suggested that computer vision


scientists should take note of research carried out in neuroscience, since deep neural networks were originally based on findings in neuroscience. 'Neuroscientists found phenomena in the brain which led us down this path,' he said, adding that researchers should look at the neuroscience literature to see if there are other findings that could be exploited.

One other problem Malik felt needed addressing was that of limited data. Neural networks learn about the world around them using masses of data, but there will always be instances where there isn't enough information. He gave the example of work being carried out at the Berkeley Artificial Intelligence Research Laboratory, whereby a robot taught itself to manipulate objects by poking them repeatedly. The robot is not being trained explicitly, but is teaching itself. The work uses two different models – a forward one and an inverse one – and the interplay between them gives the robot an accurate means of decision making.
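To make the forward/inverse model idea more concrete, here is a hedged sketch in TensorFlow of how such a pair of networks could be set up: the forward model predicts the next state from the current state and an action, while the inverse model predicts which action connects two states. The feature sizes, layer widths and training details are assumptions for illustration, not the Berkeley laboratory's actual implementation.

```python
import numpy as np
import tensorflow as tf

STATE_DIM, ACTION_DIM = 32, 4  # assumed feature and action sizes

# Forward model: (state, action) -> predicted next state.
state_in = tf.keras.Input(shape=(STATE_DIM,))
action_in = tf.keras.Input(shape=(ACTION_DIM,))
h = tf.keras.layers.Dense(64, activation='relu')(
    tf.keras.layers.Concatenate()([state_in, action_in]))
next_state_pred = tf.keras.layers.Dense(STATE_DIM)(h)
forward_model = tf.keras.Model([state_in, action_in], next_state_pred)

# Inverse model: (state, next_state) -> predicted action.
s_in = tf.keras.Input(shape=(STATE_DIM,))
ns_in = tf.keras.Input(shape=(STATE_DIM,))
h2 = tf.keras.layers.Dense(64, activation='relu')(
    tf.keras.layers.Concatenate()([s_in, ns_in]))
action_pred = tf.keras.layers.Dense(ACTION_DIM)(h2)
inverse_model = tf.keras.Model([s_in, ns_in], action_pred)

forward_model.compile(optimizer='adam', loss='mse')
inverse_model.compile(optimizer='adam', loss='mse')

# The robot's own pokes supply (state, action, next_state) triples, so training is
# self-supervised; random arrays stand in for real observations in this sketch.
state = np.random.randn(256, STATE_DIM).astype('float32')
action = np.random.randn(256, ACTION_DIM).astype('float32')
next_state = np.random.randn(256, STATE_DIM).astype('float32')

forward_model.fit([state, action], next_state, epochs=1, verbose=0)
inverse_model.fit([state, next_state], action, epochs=1, verbose=0)
```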







