COST


– with something as known as machine vision, we can make sure this happens first time.’ He added: ‘Machine vision systems can vary widely in price, but it’s in comparison to the task they have to perform.’

Nevertheless, cost does play a role in the adoption of machine vision. During AIA’s online Vision Show this summer, Mark Lewandowski, global engineer at Procter and Gamble (P&G), said vision systems add capability, but as an enabling technology they shouldn’t cost as much as the rest of the system. P&G has thousands of vision systems


deployed across the company, he said, and it now uses 3D vision more readily as the technology has become more accessible. However, Lewandowski added that the company has a need for complex 3D imaging methods to design innovative production processes, but that when the vision system costs as much as the robot, it becomes unaffordable to deploy across its factories.

During the AIA event, Paul Thomas at P&G described a robot bin picking system the team built using an Intel Realsense depth sensor, a relatively inexpensive stereo camera. He said the vision system and accessories should not exceed $10,000 in cost, not including the robot. He was targeting 45 picks per minute, with 1mm placement accuracy. Maintenance needed to be minimal because some of the sites where the robot would be deployed have little technical expertise. The team wrote in-house deep learning algorithms for the bin picking solution and ran them on an Nvidia EGX platform. The objects the robot was handling often changed and were not geometrically feature-rich, so could look symmetrical. Colour could be the same, which made it difficult to find edges, and the bins could also be a similar colour to the object. The team wanted to get an oriented pick with just one image in order to reach a speed of 45 picks


per minute, so using multiple cameras or stopping the robot wasn’t going to be viable. Thomas said the use of deep learning was paramount to P&G’s success. The team created 3D scans of all the products the robot would work with, to train the deep learning algorithm.

Lewandowski had a wish list for the vision


industry. First, the use of standard building blocks – hardware, software, networks – that connect together without having to reprogram or create a customised solution for each application. This would help cut the cost of engineering, as well as meaning systems can be deployed at scale. He would also like to see systems that can be redeployed easily for other tasks, to improve their value. Ease of use is another desire, to open up the use of vision to more small- and medium-sized companies that might not have technical support.
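To put the earlier bin-picking target in perspective, the 45 picks-per-minute figure can be turned into a simple per-pick cycle-time budget, which shows why the P&G team wanted an oriented pick from a single image. The stage timings below are illustrative assumptions for the sketch, not figures reported by P&G:

```python
# Back-of-envelope cycle-time budget for a 45 picks-per-minute target,
# with one image captured per pick. All stage estimates are hypothetical.

PICKS_PER_MINUTE = 45
cycle_budget_s = 60.0 / PICKS_PER_MINUTE  # ≈ 1.33 s for the entire pick cycle

# Hypothetical per-pick stage estimates (seconds):
stages = {
    "image capture": 0.10,
    "deep-learning pose inference": 0.20,
    "motion planning": 0.15,
    "robot move + grasp + place": 0.80,
}
total = sum(stages.values())

print(f"budget per pick: {cycle_budget_s:.2f} s")
print(f"estimated spend: {total:.2f} s -> slack: {cycle_budget_s - total:.2f} s")
```

With only about 80ms of slack in this sketch, a second exposure or a pause of the robot arm for re-imaging would immediately blow the budget, which is consistent with the article's point that multiple cameras or stopping the robot was not viable.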




Support itself can also be an issue, he said, in terms of whether a problem is to do with the robot, the vision, or another part of the system. He asked: ‘How do we get a truly integrated solution?’ Plug-and-play of all vision components with automation and robotics is another wish list item, as well as the ability to use and integrate more open source code.

The bin picking solution that Thomas,


at P&G, developed was successful because of its Realsense camera and the deep learning algorithm. In terms of the cost of developing it – which wasn’t the subject of Thomas’ presentation – it is difficult to put a figure on it, because where does one draw the line when doing the accounting? For instance, does the cost of the project include all the costs involved in software development for 3D analysis and deep learning? P&G is a large company with a big R&D department that can benefit from economies of scale gained from working on related applications.

Dechow, at Integro Technologies, said


that ‘companies without the resources of large players need to have a reliable and robust vision system that can be implemented with reasonable engineering skills’ – without needing a team of R&D scientists skilled in 3D and deep learning algorithms. ‘In this context, commercial vision systems are a real bargain,’ he said. ‘Thousands upon thousands of man hours go into the development of components and associated software, to make the product suitable for general purpose industrial automation. This is a value well worth the cost.’

Dechow also believes that overarching


standardisation would be more of a hindrance to the vision sector than a help: ‘The machine vision industry, like any other, thrives and grows based upon competitive products. I would hate to see some standardised, plain vanilla component or software specification where every product did the same thing with the same algorithms and same outcome. It could potentially stifle the industry and market for this technology.’

The question that Dechow says people


should ask when it comes to thinking about the cost of technology is: ‘How much should an enabling technology cost if it provides the required automation result and return on investment?

‘What’s important is that industry needs automation, and it needs it now more than ever,’ he concluded.


Additional reporting by Matthew Dale



