AUTOMATICA


graphics card. Prüfer put the cost of the vision solution – the sensor, the software, the cabling and so forth – at less than €5,000, excluding the robot. Kuka is positioning the sensor for the logistics market, such as in automated pharmaceutical distribution warehouses. Here, the camera doesn’t have to be extremely accurate, as the robot would typically use a vacuum gripper, which only needs to latch onto a reasonably flat surface.

Michael Keller, an application engineer at Fanuc, said that around 20 per cent of Fanuc’s robots use vision in Germany, and this could be up to 30 per cent in Japan. Fanuc’s iRVision offers a wide range of vision capabilities, using 2D cameras, 3D laser scanners and 3D area sensors, among others. Scanning rates can reach 60fps, resolutions can be up to 1,280 x 1,024 pixels, and up to 16 cameras can be connected to one robot controller. Fanuc’s software package also supports more than 20 different vision processing functions.

The vision sensor was attached to the robot arm on Fanuc’s bin-picking demo at Automatica, which has the advantage of only requiring a small field of view, as the camera is always positioned above the bin before taking an image. This is in contrast to setups that mount the camera above the bin, rather than on the robot arm, and therefore have to cover a larger field of view. The downside to mounting the camera on the robot arm is that the cable for the sensor has to be routed along the arm, and the robot has to stop to take each image.

Isra Vision launched two bin-picking sensors at the show – both new additions to its range of IntelliPick3D products. The IntelliPick3D-Pro uses a laser light to capture detailed images in varying lighting conditions; it can detect parts from 15mm up to 2,000mm in size. Meanwhile, the PowerPick3D sensor uses four integrated cameras to capture a depth image from multiple angles to avoid occlusions. The sensor uses structured light and can run at 500ms per image capture.

Imaging company Photoneo was showing an




OPC UA VISION SPECIFICATION RELEASED


The OPC UA machine vision and robotics working groups have released the first versions of their companion specifications at the automation trade fair Automatica in Munich, Germany.


OPC UA is a standard designed so that factory machines can talk to each other. Two years ago at Automatica 2016, VDMA Machine Vision and the OPC Foundation signed a memorandum of understanding to prepare an OPC Unified Architecture (UA) machine vision companion specification. The first version of the OPC UA Companion Specification for Machine Vision (OPC UA Vision) offers a generic model for all machine vision systems, from simple vision sensors to complex inspection systems. Part 1, published at the show on 19 June as a release candidate version, describes the infrastructure layer, which is an abstraction of the generic machine vision system. It allows the control of a machine vision system, abstracting the necessary behaviour via a state machine concept. It handles the management of recipes, configurations and results in a ‘standard way’, whereas the contents stay vendor-specific and are treated as black boxes.

‘Interoperability is the key for distinguishing our products in a connected world of Industry 4.0,’ commented Dr Horst Heinol-Heikkinen, chairman of the VDMA OPC Vision initiative. ‘With today’s release of the companion specifications, we have reached a major milestone.’
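The ‘state machine plus black-box recipes’ idea can be made concrete with a toy sketch. The state names and transitions below are invented for illustration only; the actual states and transitions are defined in the specification itself (VDMA 40100).

```python
# Toy model of the infrastructure layer's two ideas: standardised control via
# a state machine, and recipe management where contents stay vendor-specific.
# State names here are illustrative, not those of the OPC UA Vision spec.

class VisionSystem:
    def __init__(self) -> None:
        self.state = "Halted"
        self.recipes: dict[str, bytes] = {}  # vendor-specific blobs, never parsed

    def add_recipe(self, recipe_id: str, blob: bytes) -> None:
        # Recipes are managed in a 'standard way', but their contents
        # are treated as an opaque black box.
        self.recipes[recipe_id] = blob

    def start(self, recipe_id: str) -> None:
        if self.state != "Halted":
            raise RuntimeError(f"cannot start from state {self.state}")
        self.recipes[recipe_id]  # raises KeyError if the recipe is unknown
        self.state = "Operational"

    def halt(self) -> None:
        self.state = "Halted"

cell = VisionSystem()
cell.add_recipe("inspect-part-A", b"\x00vendor-specific-bytes\xff")
cell.start("inspect-part-A")
print(cell.state)  # Operational
```

The point of the design is that any OPC UA client can drive the system through its states without understanding what a given vendor's recipe actually contains.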


The OPC UA Companion Specification for Robotics (OPC UA Robotics), meanwhile, provides a standardised information model that is capable of presenting all robot-related data regardless of manufacturer or location.

Imaging and Machine Vision Europe • August/September 2018

Besides the release of the two


new companion specifications, a demonstrator for skill-based control with OPC UA was presented at Automatica. The demonstrator was an assembly cell producing fidget spinners, integrating systems and components from more than 20 manufacturers, all of which communicate via OPC UA. VDMA is co-founder of the pre-competitive, non-profit association Labs Network Industry 4.0 (LNI), which runs test-beds and use cases focusing on SME requirements and standardisation validation. The University of Applied Sciences Ravensburg-Weingarten, Germany, plans to host the first test-bed in its lab together with industry partners.

‘The practical validation by users, including SMEs, is a key to success for the international standardisation activities to follow,’ Dr Christian Mosch from VDMA Forum Industrie 4.0 commented. The test-bed is open to any new partners.

The OPC UA Vision companion specification is now available free of charge as VDMA specification number 40100 (as a release candidate version), while the robotics companion specification is available as VDMA specification number 40010 (as a draft version). Both specifications can be requested at opcua@vdma.org. There’s also educational content available (www.u4i.io/opc) to help users implement OPC UA. Experts from industry and science – including Kuka, Bosch Rexroth, Pepperl+Fuchs, Lenze, Fraunhofer IOSB, and Vitronic – explain the application possibilities of the new communication protocol, and offer practical hints for its implementation.
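For a small, self-contained taste of what implementing OPC UA involves, here is a parser for the standard's NodeId string notation (`ns=2;s=Camera1` and the like), which every OPC UA address space uses to identify nodes. This only illustrates the notation; a real client would build on a full OPC UA stack rather than hand-rolled parsing.

```python
# Parse OPC UA NodeId strings: an optional 'ns=<index>;' namespace prefix,
# then an identifier typed as i (numeric), s (string), g (GUID) or b (opaque).

from dataclasses import dataclass

@dataclass(frozen=True)
class NodeId:
    namespace: int   # namespace index; 0 is the OPC UA base namespace
    id_type: str     # 'i', 's', 'g' or 'b'
    identifier: str

def parse_node_id(text: str) -> NodeId:
    ns = 0
    if text.startswith("ns="):
        ns_part, text = text.split(";", 1)
        ns = int(ns_part[3:])
    id_type, identifier = text.split("=", 1)
    if id_type not in ("i", "s", "g", "b"):
        raise ValueError(f"unknown identifier type: {id_type!r}")
    return NodeId(ns, id_type, identifier)

print(parse_node_id("ns=2;s=Camera1"))
print(parse_node_id("i=85"))  # a numeric id in the base namespace
```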




early version of its new 3D camera based on what it is calling a ‘mosaic shutter’ CMOS sensor. The camera has a resolution of 0.6 megapixels and can capture a 3D point cloud at 30fps – one of the fastest rates on the market. The company supplies its PhoXi 3D scanners for bin-picking applications, the largest version of which can scan a 2 x 2-metre area.

Both Isra Vision’s and Photoneo’s imaging solutions for bin picking rely on CAD models of the part and the gripper to pick random components from a bin successfully. The application has to be modelled to be reliable; it then becomes a case of matching the pose of an object to the model of the part for the robot arm to grasp it.

Speaking on a panel discussion at Automatica about the use of artificial intelligence (AI), Martin Hägele, department head of robotics at Fraunhofer IPA, said AI could make bin picking more reliable. Dr Ulrich Eberl, from Siemens, also pointed to studies by Google that taught robots to grip unknown objects using AI. The robot required 800,000 attempts to learn how to do this, but the approach potentially removes the need for CAD models. In addition, once the robot had learnt how to grip the part, it could pass this knowledge on to other robots through cloud computing. While AI is still in its infancy, giving robots the power to pick up unknown objects could be the next stage of random bin picking.
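The pose-matching step described above – fitting the pose of a scanned part to its CAD model – reduces, once point correspondences are known, to a rigid-transform fit, for which the Kabsch algorithm gives a closed-form answer. The sketch below is illustrative; finding reliable correspondences in a noisy point cloud is the hard part that commercial bin-picking software actually solves.

```python
import numpy as np

def rigid_transform(model: np.ndarray, scene: np.ndarray):
    """Least-squares R, t such that scene ≈ model @ R.T + t (Kabsch)."""
    mc, sc = model.mean(axis=0), scene.mean(axis=0)
    H = (model - mc).T @ (scene - sc)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ mc
    return R, t

# Toy check: rotate a four-point 'part' 90 degrees about z, shift it,
# then recover the pose from the point pairs.
model = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
scene = model @ Rz.T + np.array([0.5, -0.2, 0.1])
R, t = rigid_transform(model, scene)
print(np.allclose(R, Rz), np.allclose(t, [0.5, -0.2, 0.1]))  # True True
```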

