SENSORS


Why Time-of-Flight sensors could have significant implications for robots and how they're used in the workplace

By Fabrizio Petris, Senior Strategic Marketing Manager at Omron Electronic Components Europe B.V.


Robot and cobot manufacturers must walk a fine line. Any safety incident involving a robot can bring negative headlines around the world, even if the robot itself was not strictly at fault. At the same time, manufacturers are eager to bring new developments and capabilities to market as quickly as possible.


Robots generally have an impeccable safety record, with the vast majority of serious safety incidents in recent decades occurring while a robot was being maintained rather than in everyday use. Still, with the rise of collaborative robots, which are designed specifically to work with and around humans, the stakes have been raised even higher. No fault or safety loophole can be allowed to reach the market, which is why vast amounts of resources, research and testing are dedicated to ensuring that all robots are as safe as they can possibly be.


A key element of a safe robot is effective object detection. There are currently two major sensing technologies for detecting objects and humans: ultrasonic and optical-based systems. Ultrasonic sensors can be versatile and inexpensive, but typically have lower sensing accuracy, particularly around soft materials that absorb sound, and a limited detection range compared to equivalent optical-based systems.


Competing object detection methods


Optical-based systems are often based on LiDAR, which uses light beams to scan an area. There are two types of LiDAR: 2D, which uses a single beam, and 3D, which uses multiple beams. Logically, multiple beams building a 3D picture will provide superior accuracy and detection distance compared to a single beam. However, 3D LiDAR systems are necessarily much more complex and require far more computing power to build an accurate picture of a scene. As such, they are often much more expensive.

Time-of-Flight (ToF) is an emerging method that can offer superior detection in some applications compared to ultrasound, 2D and 3D LiDAR, with much less complexity and at a much more affordable price. It is a scannerless system that uses a single pulse of light from a laser or LED, from which the system measures the round-trip time of the light signal, otherwise known as its time-of-flight. This allows the system to build a complete picture of a scene, including the depth and dimensions of objects in relation to others, with a single pulse. From this, a machine can better infer and understand the nature of the objects around it, even if those objects and/or the machine itself are in motion.
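The core of the round-trip measurement is simple geometry: the pulse travels to the target and back at the speed of light, so the distance is half the round-trip time multiplied by that speed. A minimal sketch of the principle (an illustration only, not Omron's implementation; the function name is hypothetical):

```python
# Illustrative sketch of the ToF distance principle.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance to a target from the measured round-trip time of a light pulse.

    The pulse covers the distance twice (out and back), hence the division by 2.
    """
    return C * round_trip_s / 2.0

# A target 4 m away returns the pulse in roughly 26.7 nanoseconds,
# which illustrates the picosecond-scale timing precision ToF hardware needs.
print(tof_distance(26.685e-9))
```

The tiny round-trip times involved (tens of nanoseconds over a few metres) are why ToF depth measurement is done in dedicated sensor hardware rather than general-purpose processors.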


Advantages of ToF


This makes ToF ideal for robots and cobots, which are often required to work in dynamic environments with lots of movement along multiple axes. Multiple ToF sensors can also be used simultaneously without interfering with one another, improving accuracy even further. ToF can also perform better than 2D or 3D LiDAR in particularly bright environments, or in scenarios where the system is detecting multiple objects of a similar colour or shape.

40 APRIL 2024 | ELECTRONICS FOR ENGINEERS


Omron’s B5L sensor has adopted the principle of ToF with some success. Designed for a measurement distance of between 0.5 and 4 m, it provides a detection accuracy of ±2% and a repeat accuracy of just 1%. Because ToF signals require much less processing, the device is lighter and more compact than equivalent LiDAR-based systems, while offering comparable and in some cases superior performance. Up to 17 B5L units can be used together with no interference, while software innovations such as AI-based skeleton detection can help the system make better sense of the world around it and predict how certain types of object typically move.

This could have significant implications for the design and adoption of robots and cobots. A robot that can recognise patterns of human movement can more accurately infer what a human is doing, and therefore what they are likely to do next, allowing the machine to adjust its position and movement accordingly. In practice, this could mean slowing down or stopping as a human approaches, or an AGV adjusting its trajectory to avoid a human walking across the plant floor.
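The quoted ±2% figure is relative to distance, so the absolute error band grows with range. A hedged sketch of what that means in practice (the 2% value is from the text above; the function name is illustrative, not part of any Omron API):

```python
def accuracy_band(distance_m: float, accuracy_pct: float = 2.0) -> tuple[float, float]:
    """Worst-case lower and upper bounds for a reading at a given distance,
    using the B5L's quoted +/-2% detection accuracy (illustrative only)."""
    delta = distance_m * accuracy_pct / 100.0
    return (distance_m - delta, distance_m + delta)

# At the sensor's 4 m maximum range, a reading is bounded within +/-8 cm;
# at the 0.5 m minimum range, within +/-1 cm.
print(accuracy_band(4.0))
print(accuracy_band(0.5))
```

For a cobot safety system, this distance-proportional error means stopping margins can be set tighter for nearby objects, where accuracy is highest, than for objects at the edge of the detection range.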

