What can drones learn from bees?



Dr Andrew Schofield, who leads the Visual Image Interpretation in Human and Machines network in the UK, asks what computer vision can learn from biological vision, and how the two disciplines can collaborate better


What can drones learn from drones? Almost every month the national news feeds carry a story about the latest development in drone aircraft, self-driving cars, or intelligent robot co-workers. If such systems are to achieve mass usage in a mixed environment with human users, they will need advanced vision and artificial reasoning capabilities, and will need to both behave, and fail, in ways that are acceptable to humans. Setting aside recent high-profile crashes in self-driven cars, the complexity and unreliability of our road systems mean that driverless cars will need to act very much like human drivers. A car that refuses to edge out into heavy traffic will cause gridlock. Likewise, a drone should not fail to deliver its package because the front door at the target house has been painted since the last Google Street View update.

So what can drones learn from drones – or, to be more precise, worker bees? In surveillance they may have similar tasks: explore the environment looking for particular targets while avoiding obstacles, and eventually return home. They also have similar payload and power constraints: neither can afford a heavy, power-hungry brain. The bee achieves its seek-and-locate task with very little neural hardware and a near-zero energy budget. To do so it uses relatively simple navigation, avoidance and detection strategies that produce apparently intelligent behaviour. Much of the technology for this kind of task is already available in the form of optic flow sensors and simple pattern recognisers, such as the ubiquitous face locators on camera phones. Even the vastly more complex human brain has within it separate modules, or brain regions, specialised for short-range sub-conscious navigation via optic flow and rapid face detection. However, the human brain is much more adaptable and reliable than even the best computer vision systems.
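To make this concrete, both building blocks are available off the shelf. The sketch below (a minimal illustration using OpenCV, not any particular drone autopilot) balances left and right optic flow the way a bee centres itself in a corridor, and uses the pre-trained Haar face cascade bundled with OpenCV as the 'ubiquitous face locator':

# Bee-style building blocks: optic-flow steering plus a cheap,
# pre-trained pattern recogniser. Requires opencv-python and numpy.
import cv2
import numpy as np

def steering_from_flow(prev_gray, curr_gray):
    # Dense Farneback optic flow between consecutive greyscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)        # per-pixel motion magnitude
    half = speed.shape[1] // 2
    left, right = speed[:, :half].mean(), speed[:, half:].mean()
    # Faster flow means nearer scenery: steer towards the slower side.
    # Returns a value in [-1, 1]; positive means steer right.
    return (left - right) / (left + right + 1e-6)

# The 'ubiquitous face locator': a pre-trained cascade shipped with OpenCV.
face_finder = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def find_targets(gray):
    # Bounding boxes (x, y, w, h) of face-like patterns in the frame.
    return face_finder.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

Neither function needs more compute than a phone-class processor, which is precisely the point: apparently intelligent seek-and-avoid behaviour can emerge from very simple components.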


The Visual Image Interpretation in Human and Machines (ViiHM) network[1], funded by the Engineering and Physical Sciences Research Council, brings together around 250 researchers to foster the translation of discoveries from biological to machine-vision systems. In the early days of machine vision there was a natural crossover.


The Canny[2] edge detector, for example, computes edges as luminance gradients in a blurred (de-noised) image and links weaker edge elements to stronger ones. This method has its roots in Marr and Hildreth's[3] model of retinal processing, plus the contour integration mechanisms found in visual cortex.
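As a concrete illustration, those steps – de-noise, take gradients, link weak edges to strong ones – map directly onto OpenCV's implementation. The sketch below is minimal, with the image name and thresholds chosen arbitrarily:

# Canny edge detection: Gaussian de-noising, luminance gradients,
# then hysteresis linking of weak edge elements to strong ones.
import cv2

img = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)    # blur (de-noise) first
# Gradient magnitudes above 200 are strong edges; those between 100 and
# 200 survive only if connected to a strong edge - the linking step.
edges = cv2.Canny(blurred, 100, 200)
cv2.imwrite('edges.png', edges)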


More recent examples of biology-inspired processing include deep convolutional neural networks (DNNs)[4], which have multiple convolution-based filtering layers separated by non-linear operators and down-sampling to achieve increasingly large-scale and complex filters, until finally classifications can be made. This structure is very similar to, and loosely modelled on, the multiple feature-detection layers and receptive-field properties of biological vision systems.
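For readers who prefer code to prose, the pattern reads directly as a toy network (a minimal PyTorch sketch of the convolution / non-linearity / down-sampling recipe, not a reimplementation of the networks cited in [4]):

# Stacked convolutional filtering layers, each followed by a non-linear
# operator (ReLU) and down-sampling (max pooling), then a classifier.
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # After two 2x down-samplings a 28x28 input becomes 7x7.
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        x = self.features(x)                    # increasingly complex filters
        return self.classifier(x.flatten(1))    # final classification

Each successive layer sees a larger patch of the original image, mirroring the growth of receptive fields through the biological visual hierarchy.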


Alternatively, the SpikeNet[5] recognition system has a similar convolutional structure, but more directly models the production of neuron action potentials. The relationship between machine and biological vision is symbiotic: convolution filters developed for machine vision are used to model biological processing, and DNNs have been applied to human behavioural data to characterise the visual system.
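SpikeNet's internals are beyond the scope of this article, but the currency of spiking models generally – the action potential – is easy to simulate. Below is a textbook leaky integrate-and-fire neuron in NumPy (a generic sketch, not SpikeNet's own model):

# Leaky integrate-and-fire: the membrane voltage leaks towards rest,
# integrates its input, and fires a spike when it crosses threshold.
import numpy as np

def lif_spikes(drive, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0):
    v, spike_times = v_rest, []
    for i, inp in enumerate(drive):
        v += (dt / tau) * (v_rest - v) + dt * inp   # leak + integration
        if v >= v_thresh:                           # action potential
            spike_times.append(i)
            v = v_rest                              # reset after firing
    return spike_times

print(lif_spikes(np.full(100, 60.0)))   # constant drive -> regular spikes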


However, in recent decades the biological and machine vision communities have diverged. Driven by different success criteria – a desire to understand specific visual systems on the one hand, and to rapidly build working engineering solutions on the other – the two disciplines have developed different priorities and ways of working. The ideal development cycle – where observed phenomena are explored in biology, the results modelled computationally, and those models turned into useful applications – can be protracted and require multiple skill sets. The chain is often broken as academics on the biological vision side rush to publish their findings and get on with the next experiment, while those in industrial vision rightly employ tools in the quest for better performance. Progress is hindered by language and understanding barriers, with different terminology used even for

