Above, figure 3: Prototype for volumetric measurements and radiological characterisation of the waste parts
Photopeak efficiencies are simulated using Tracer, an in-house development of AiNT. Tracer accounts for gamma self-shielding within the waste and for the activity distribution, using the volume information obtained from the global point cloud of the 3D laser scan.
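Tracer itself is proprietary and its internals are not described here. Purely to illustrate the physics such a simulation must capture (self-shielding folded with a geometry factor over the waste volume), the following minimal Monte Carlo sketch in Python computes a photopeak efficiency for a cylindrical waste volume. All geometry, attenuation and detector values are illustrative assumptions, not Tracer's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative geometry and material values (assumptions, not Tracer's model):
R, H = 0.28, 0.85                 # radius and height of a cylindrical waste volume [m]
mu = 8.0                          # linear attenuation coefficient of the matrix [1/m]
det = np.array([1.0, 0.0, 0.0])   # detector position at the side of the volume [m]

def inside(p):
    """True if point p lies inside the cylindrical waste volume."""
    return p[0]**2 + p[1]**2 <= R**2 and abs(p[2]) <= H / 2

def path_in_waste(p, u, step=2e-3):
    """Ray-march from source point p along unit direction u and return
    the path length travelled inside the waste (the self-shielding path)."""
    d, q = 0.0, p.astype(float)
    while inside(q):
        q += step * u
        d += step
    return d

n, acc = 2000, 0.0
for _ in range(n):
    # Uniform activity distribution assumed: rejection-sample a source point
    while True:
        p = np.array([rng.uniform(-R, R), rng.uniform(-R, R),
                      rng.uniform(-H / 2, H / 2)])
        if inside(p):
            break
    u = det - p
    r = np.linalg.norm(u)
    u = u / r
    # Attenuation along the exit path times the 1/(4*pi*r^2) geometry factor
    acc += np.exp(-mu * path_in_waste(p, u)) / (4 * np.pi * r**2)

# Photopeak efficiency per unit detector area (point-detector approximation,
# with an assumed intrinsic detector efficiency folded in as a single factor)
eps_intrinsic = 0.3
print("simulated photopeak efficiency per m^2:", eps_intrinsic * acc / n)
```

In a real characterisation code the activity distribution and matrix composition would be inferred or measured rather than assumed, and the waste geometry would come from the scanned point cloud rather than an idealised cylinder.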
Gripping pose estimation
In order to grip waste, a gripping pose is estimated for the teleoperated or partially automated handling robots. This requires fast, reliable and unambiguous detection of the waste as a point cloud, which is created via a 3D laser scan. Additional point clouds from the 3D cameras, captured from different perspectives, are integrated into the system. A corresponding software pipeline for spatial characterisation is being developed, using object localisation and pose estimation methods based on point clouds (a sketch of one such pipeline follows below). Alternatively, augmented virtuality (see below) offers a handling option.
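The article does not say which algorithms the Virero pipeline uses. A common way to realise point-cloud-based object localisation and pose estimation is global registration on FPFH features followed by ICP refinement, sketched below with the open-source Open3D library. File names, the voxel size and the thresholds are placeholders, not project parameters.

```python
import open3d as o3d

# Hypothetical inputs: the fused scene cloud and a reference cloud of the part
scene = o3d.io.read_point_cloud("scan_scene.ply")
model = o3d.io.read_point_cloud("waste_part.ply")

voxel = 0.005  # 5 mm downsampling (assumed)

def preprocess(pcd):
    """Downsample, estimate normals and compute FPFH features."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

scene_d, scene_f = preprocess(scene)
model_d, model_f = preprocess(model)

# Global registration: RANSAC over FPFH feature correspondences
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    model_d, scene_d, model_f, scene_f, True, 1.5 * voxel,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Local refinement with point-to-plane ICP
fine = o3d.pipelines.registration.registration_icp(
    model_d, scene_d, voxel, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

T = fine.transformation  # 4x4 pose of the part in the scene frame
print("estimated object pose:\n", T)
```

The resulting 4x4 transform locates the part in the scene frame; a grasp frame defined on the part model could then be mapped through it to obtain the gripping pose for the robot.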
Augmented virtuality: optional use of rendered CAD data
Information for the correct representation of the waste to be manipulated is obtained using the integrated 3D cameras. By superimposing information about the dynamically captured pieces on representations of the static components and the moving robot systems, an 'augmented virtuality' is created. One challenge is the user-friendly representation of the captured objects. By combining several point clouds, complete representations of dynamically sensed objects can be displayed regardless of the operator's viewing angle. However, direct display of the raw point clouds in virtual reality exposes users to sensor noise and can be irritating. The ergonomics of the virtual reality application are therefore improved by rendering dynamic objects from their CAD data, which are displayed in place of the point clouds. To identify objects and determine their pose, pipelines combining several machine learning and intelligent automation methods are evaluated and developed at FAPS. A central task in the Virero project is the further development of this approach so that many strongly varying parts can be recognised efficiently (see Figure 5).
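Continuing the assumed Open3D setup from the registration sketch above, the pose estimated for a recognised part could be used to place its rendered CAD model in the virtual scene instead of the raw point cloud. The file name and pose below are placeholders for illustration.

```python
import numpy as np
import open3d as o3d

# Placeholder pose: in the pipeline above this would be the 4x4 transform
# estimated for the recognised part (fine.transformation)
T = np.eye(4)

mesh = o3d.io.read_triangle_mesh("waste_part_cad.stl")  # hypothetical CAD file
mesh.compute_vertex_normals()              # needed for shaded rendering
mesh.paint_uniform_color([0.7, 0.7, 0.2])
mesh.transform(T)                          # place the CAD model at the estimated pose
o3d.visualization.draw_geometries([mesh])  # noise-free stand-in for the point cloud
```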
Digital documentation with subsequent sorting into drums
Photo documentation can be produced manually or automatically by the operator within the VR environment. In the manual case, an image is taken by the stereo U