Feature: Industrial electronics


A technician uses a mixed reality device to “teach” the robot


Engineering breakthrough
The real democratisation mechanism is the shift from line-by-line coding to intuitive, visual guidance. Instead of a specialised, six-figure engineer writing code, a skilled technician uses a mixed reality device to “teach” the robot by demonstrating the intent in the real world. The Spatial AI engine instantaneously


preserve the signal-to-noise ratio of depth data against interference from welding arcs, high-power motors and other industrial machinery.


• Data preservation: The persistent spatial map (the living digital blueprint) is constantly updated via the sensor streams. The integrity of this map relies on the temporal coherence and geometric accuracy of the ingested data. Engineers must focus on hardware-level shielding (e.g., specialised grounding and EMI-resistant enclosures) and robust data validation frameworks to ensure that spurious or corrupted sensor readings do not destabilise the spatial twin.
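A data-validation layer of the kind described can be sketched as a simple temporal-coherence filter on a depth stream. The window size, range limits and jump threshold below are illustrative assumptions, not values from the article:

```python
from collections import deque
from statistics import median

class DepthValidator:
    """Rejects spurious depth readings (e.g. EMI spikes from welding arcs)
    by checking each new sample against recent temporal history.
    All thresholds are illustrative, not taken from the article."""

    def __init__(self, window: int = 5, max_jump_m: float = 0.10):
        self.history = deque(maxlen=window)
        self.max_jump_m = max_jump_m  # plausible frame-to-frame change

    def validate(self, depth_m: float) -> bool:
        """Return True if the reading is consistent with recent history."""
        if not (0.1 <= depth_m <= 10.0):   # outside the sensor's physical range
            return False
        if len(self.history) >= 3:
            if abs(depth_m - median(self.history)) > self.max_jump_m:
                return False               # temporal coherence violated
        self.history.append(depth_m)
        return True
```

A stream of readings around 1 m with a single 7.5 m spike would pass every sample except the spike, which is dropped before it can corrupt the spatial map.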


Low-latency processing and security
The primary function of translating human visual intent into a collision-free robot path is fundamentally a low-latency, real-time control problem.


Real-time kinematics and path planning
Once the human operator visually indicates a task (e.g., pointing to an inspection area), the Spatial AI engine instantaneously translates it into a complete, collision-free robot path and motion plan. This requires millisecond-level processing of the spatial map data and robot kinematic models. The system must use deterministic


and high-reliability, low-jitter industrial protocols (like EtherCAT or Profinet) for communication between the edge


44 March 2026 www.electronicsworld.co.uk


handles the heavy lifting:
• Intent capture: The human-centric visual input (e.g., a hand gesture on a tablet pointing to a feature) is registered against the high-fidelity spatial map. The system quantifies this intent into mathematical constraints for the controller.
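One plausible, simplified shape for such a controller constraint is a target point plus surface normal and required tool action; the field names and the 5 cm standoff are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskConstraint:
    """One possible encoding of the 'mathematical constraints' handed to
    the controller: a point and surface normal in the spatial map's frame,
    plus the tool action. Field names and defaults are illustrative."""
    target_xyz: tuple          # point on the target surface, metres
    surface_normal: tuple      # unit normal; the tool approaches along it
    action: str                # e.g. "xray_scan", "measure"
    standoff_m: float = 0.05   # tool-to-surface distance for the action

    def tool_position(self) -> tuple:
        """Where the tool tip must sit: offset from the surface along its normal."""
        return tuple(p + self.standoff_m * n
                     for p, n in zip(self.target_xyz, self.surface_normal))
```

A gesture resolved to a point on a horizontal surface, for instance, yields a tool pose 5 cm above that point, which the path planner then treats as its goal.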




processor and the robot controller, ensuring the control loop is closed within tight time constraints necessary for safety and accuracy.
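The timing discipline of such a closed control loop can be illustrated with a toy fixed-period loop that measures its own jitter. A production system would run this on an RTOS with fieldbus-synchronised clocks (e.g. EtherCAT distributed clocks) rather than in Python, so this is purely a sketch of the idea:

```python
import time

def run_control_loop(cycle_s: float, n_cycles: int, step):
    """Run `step` at a fixed period and record how far each cycle
    deviates from its deadline. Illustrative only: a general-purpose
    OS cannot guarantee the millisecond-level determinism described."""
    deadline = time.monotonic()
    jitter = []
    for _ in range(n_cycles):
        step()                                   # read sensors, update command
        deadline += cycle_s                      # absolute deadlines avoid drift
        time.sleep(max(0.0, deadline - time.monotonic()))
        jitter.append(abs(time.monotonic() - deadline))
    return max(jitter)  # worst-case deviation from the deadline

worst = run_control_loop(0.001, 100, lambda: None)  # 1 kHz loop, trivial step
```

Note the use of absolute deadlines (`deadline += cycle_s`) rather than sleeping a fixed interval each cycle: this prevents per-cycle overruns from accumulating into long-term drift, which is the same discipline real-time schedulers enforce.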


Secure data transmission
Spatial AI often relies on cloud or localised server infrastructure for heavy-duty map processing, persistent storage and complex AI model updates. The transmission of this sensitive, high-bandwidth 3D data must adhere to strict industrial security standards (e.g., IEC 62443), employing robust encryption and access controls to maintain the confidentiality, integrity and availability of the manufacturing process data. Localised edge processing minimises


the attack surface, whilst secure firmware updates are paramount for device integrity.
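As a rough sketch of the encryption side only (certificate management, access control and the wider IEC 62443 programme sit on top of this), a hardened client-side TLS context for shipping map data to a local processing server might be configured as follows:

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Client-side TLS context for transmitting spatial-map data.
    A generic hardening sketch, not a complete IEC 62443 deployment."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions
    ctx.check_hostname = True                      # verify the server's identity
    ctx.verify_mode = ssl.CERT_REQUIRED            # reject unauthenticated peers
    return ctx
```

Wrapping the data socket with this context gives encryption in transit and server authentication; mutual (client-certificate) authentication and network segmentation would be layered on in a real cell.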


• Constraint generation: The AI identifies the target surface, its orientation and the required tool action (e.g., X-ray scan, measurement).


• Kinematic solver: This is the core control mechanism. The system uses the robot’s specific kinematic model and the persistent spatial map to compute the shortest, most efficient and, critically, collision-free trajectory to execute the task. This process involves inverse kinematics – determining the precise joint angles required to position the tool at the visually indicated target. This is a real-time iterative process that constantly re-evaluates the path, considering all static fixtures and dynamic elements (like the human operator) within the shared workspace, to ensure dynamic collision avoidance. The visual intent to actionable


kinematics process radically simplifies set-up, reducing a task that once took hours of specialised coding to under five minutes via an intuitive visual interface. This advanced intelligence is the necessary foundation for truly flexible and collaborative automation. By providing robots with spatial


awareness, they can safely work alongside humans, dynamically adjusting their speed and path to prevent collisions. For the systems engineer, this means building components that can sustain the high-reliability, low-latency compute and communication backbone that makes human-machine collaboration possible.
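The inverse-kinematics step at the heart of the solver can be illustrated with a minimal closed-form planar two-link arm. This is a toy stand-in for the six-axis solvers and full collision checking described above, and the link lengths are invented:

```python
from math import acos, atan2, cos, sin, hypot

def two_link_ik(x, y, l1=0.4, l2=0.3):
    """Closed-form inverse kinematics for a planar 2-link arm: the joint
    angles (radians) that place the tool tip at (x, y). Link lengths are
    illustrative, not from any real robot."""
    r = hypot(x, y)
    if not abs(l1 - l2) <= r <= l1 + l2:
        raise ValueError("target out of reach")
    # Law of cosines gives the elbow angle; the shoulder angle is the
    # target direction minus the offset introduced by the bent elbow.
    cos_elbow = (r * r - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = atan2(y, x) - atan2(l2 * sin(elbow), l1 + l2 * cos(elbow))
    return shoulder, elbow

def forward(theta1, theta2, l1=0.4, l2=0.3):
    """Forward kinematics, used to verify the IK solution."""
    x = l1 * cos(theta1) + l2 * cos(theta1 + theta2)
    y = l1 * sin(theta1) + l2 * sin(theta1 + theta2)
    return x, y
```

Running the forward model on the solved angles recovers the requested tool position, which is exactly the round-trip check a real solver performs before it commits a trajectory; the production problem adds more joints, joint limits and the collision constraints from the spatial map.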

