

Vision cleans up: Dyson robot vacuum navigates with imaging


Dyson’s 360 Eye robot vacuum cleaner relies on vision to navigate its way around a home. Mike Aldred, from Dyson, spoke about the product’s development at the UKIVA machine vision conference in Milton Keynes, and Greg Blackman was there to hear the presentation


Dyson's 360 Eye robot vacuum cleaner, released in 2016, has been 17 years in the making. Five man-years' worth of work was put into developing the tools – simulation and the like – with which the engineers could then develop the product, and around 75,000 in-home trials were conducted over eight years before the robot cleaner was launched. Even after launch, Dyson is still trialling the product, Mike Aldred, lead robotics engineer at Dyson, said during a presentation at the UK Industrial Vision Association's (UKIVA) machine vision conference at the end of April in Milton Keynes, UK.

The 360 Eye is a robotic vacuum cleaner that moves around a room autonomously, vacuuming as it goes. It uses a 360-degree panoramic camera and simultaneous localisation and mapping (SLAM) algorithms to navigate around the room.

'We realised very early that vision is where we wanted to go [in developing the robot],' Aldred said in his talk. 'Vision, for us, was not just about solving the problems on this product [360 Eye], but it gave us so much more functionality in products going forward. There's a richness of information we cannot achieve from other sensing technologies.'


A panoramic camera was a necessity, according to Aldred, to avoid the robot being blinded when cleaning up against furniture and objects on the floor. The camera also has a field of view between 0 and 45 degrees to image the walls. This is to give features that the robot's SLAM algorithm can use to locate itself in its environment – there is a lot of clutter and occlusion on the floor, so imaging the walls provides the robot with more useful information. And it's important for the robot to know where it is in the room because, with its powerful vacuum, the 360 Eye only has a runtime of 40 minutes, so it can't afford to clean areas multiple times.
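Dyson has not published its SLAM implementation, but the value of wall features is easy to illustrate: once a few features have known map positions, the bearings the panoramic camera measures to them pin down the robot's pose as a small least-squares problem. The following sketch uses synthetic data; the landmark positions, noise-free bearings, and the SciPy solver are illustrative assumptions, not Dyson's method.

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    # Wrap angle differences into (-pi, pi] so residuals stay small
    return np.arctan2(np.sin(a), np.cos(a))

def residuals(pose, landmarks, bearings):
    # Predicted bearing to each landmark, minus the measured bearing
    x, y, heading = pose
    predicted = np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x) - heading
    return wrap(predicted - bearings)

# Synthetic map: four wall features at known positions (metres)
landmarks = np.array([[0.0, 5.0], [4.0, 5.0], [5.0, 0.0], [-3.0, -2.0]])
true_pose = np.array([1.0, 2.0, 0.3])  # x, y, heading of the robot

# Noise-free bearings the camera would measure from the true pose
bearings = wrap(np.arctan2(landmarks[:, 1] - true_pose[1],
                           landmarks[:, 0] - true_pose[0]) - true_pose[2])

# Recover the pose from bearings alone, starting from a blind guess
fit = least_squares(residuals, x0=np.zeros(3), args=(landmarks, bearings))
print(fit.x)  # ~ [1.0, 2.0, 0.3]
```

With bearings alone, at least three landmarks are needed to fix position and heading, which is one reason a clutter-free view of the walls matters: it keeps such features reliably visible.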


The sensor in the camera has VGA resolution, and only a 480 x 480 pixel segment of that is used. A toroid is projected onto the segment, with the final resolution being 128k pixels. 'We can navigate more than effectively using just that image,' Aldred said. 'If we had a larger sensor we would be wasting processing time, either sub-sampling or throwing away information. We need everything we can in terms of processing power; the processor is a 10-year-old processor.' The engineers also needed headroom on the processor to add functionality. 'You should do what you need and no more,' he advised.
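The article does not describe the optics in detail, but the quoted numbers are self-consistent: an annulus filling a 480 x 480 segment (outer radius 240 pixels) that contains about 128k pixels implies an inner radius of roughly 126 pixels, since π(240² − 126²) ≈ 131,000 ≈ 128 × 1024. A sketch of how such a toroidal image might be unwrapped into a panorama for the SLAM front end follows; the radii and centre position are illustrative assumptions, not Dyson's calibration.

```python
import numpy as np

# Illustrative geometry (not Dyson's actual optics): a 480 x 480 sensor
# segment with the mirror image occupying an annulus between R_IN and R_OUT.
R_OUT = 240            # outer radius of the annulus, pixels
R_IN = 126             # inner radius; pi * (240**2 - 126**2) ~= 131k pixels
CX, CY = 239.5, 239.5  # assumed annulus centre in the 480 x 480 segment

def unwrap_annulus(img, width=1024):
    """Unwrap an annular image into a rectangular panorama.

    Rows run from the inner radius to the outer radius; columns sweep
    360 degrees of azimuth.
    """
    height = R_OUT - R_IN
    theta = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)
    radius = np.linspace(R_IN, R_OUT - 1, height)
    # Polar -> Cartesian lookup grid back into the source segment
    xs = CX + np.outer(radius, np.cos(theta))
    ys = CY + np.outer(radius, np.sin(theta))
    # Nearest-neighbour sampling keeps the sketch dependency-free
    return img[ys.round().astype(int), xs.round().astype(int)]

segment = np.random.randint(0, 255, (480, 480), dtype=np.uint8)
panorama = unwrap_annulus(segment)
print(panorama.shape)  # (114, 1024)
```

Nearest-neighbour sampling keeps the sketch short; a production unwarp would calibrate the mirror model and interpolate.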






During product development, the Dyson team built simulation tools to test the robot in silico. The team then set up a trolley with a representative camera and optics, and drove it around hundreds of homes with a PlayStation controller. Those image sequences were fed into the simulation model to see if they could be used to locate where the robot was in a room. In this way, the engineers could work on the image capture, image control, and navigation systems without physically having to build the product.

Aldred noted that simulation has its place, if just to get the bugs out of the system, but that it doesn't replace real-world trials. He said that Dyson spent 70 to 80 per cent of development time testing the product. During the 75,000 home trials, the team had to contend with people covering the robot to hide it, as well as pets and children running through the scene and confusing the robot.
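Dyson's tooling is proprietary, but the trolley workflow described above amounts to replaying logged frames through the navigation stack offline. A minimal harness might look like the following, assuming a hypothetical `slam.track()` interface and OpenCV for image loading (both illustrative choices).

```python
from pathlib import Path
import cv2  # OpenCV for image I/O; any image library would do

def replay_sequence(seq_dir, slam):
    """Replay a logged camera sequence through a SLAM pipeline offline,
    as if the robot were seeing the frames live."""
    trajectory = []
    for frame_path in sorted(Path(seq_dir).glob("*.png")):
        frame = cv2.imread(str(frame_path), cv2.IMREAD_GRAYSCALE)
        trajectory.append(slam.track(frame))  # hypothetical pose estimate
    return trajectory

# Hundreds of homes' worth of logs can then be batch-evaluated, e.g.:
# for home in Path("trolley_logs").iterdir():
#     poses = replay_sequence(home, slam)
```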


Image quality was a challenge, Aldred added; the team spent a long time looking at exposure control to handle both light and dark rooms. This was solved by slowing the robot down in dark rooms to avoid motion blur.
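The article does not give the control law, but the underlying trade-off is simple to state: motion blur in pixels is roughly speed × exposure time ÷ ground resolution, so holding blur to a fixed budget caps the robot's speed in proportion to how long the shutter stays open. A back-of-the-envelope sketch, with all numbers illustrative:

```python
def max_speed_mm_s(exposure_s, mm_per_pixel, blur_budget_px=1.0):
    """Largest forward speed that keeps motion blur within budget.

    blur_px = speed * exposure / mm_per_pixel, solved for speed.
    """
    return blur_budget_px * mm_per_pixel / exposure_s

# Bright room: short exposure permits full speed (illustrative numbers)
print(max_speed_mm_s(exposure_s=0.005, mm_per_pixel=2.0))  # 400.0 mm/s
# Dark room: a 10x longer exposure forces a 10x slower robot
print(max_speed_mm_s(exposure_s=0.05, mm_per_pixel=2.0))   # 40.0 mm/s
```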










Aldred said that Dyson has plans to increase the functionality of the 360 Eye. This includes object recognition, to distinguish between a ball of fluff and a wedding ring, for example. 'We're looking at everything from some of the neural network solutions to the more basic algorithms, but there is a lot of complexity in object recognition,' he commented.

There is also work on contextual understanding; if the machine knows whether it's in a kitchen or a bedroom, it can change its behaviour accordingly. In addition, the team want the robot eventually to interact with its environment, to pick up or move objects as it vacuums around the room. Vision is a key enabler of that, Aldred concluded.





