

COVER STORY ADVERTISEMENT FEATURE


How digitalization adds value


Approaches based on machine learning (ML) and AI can help to detect anomalies more quickly. Lenze is working intensively on both topics.


With a fully functional showcase, we demonstrated both a model-based and a data-based approach as early as SPS 2019. The model-based approach compares the acquired data with a mathematical model of the application and interprets deviations from that predefined model. The data-based approach uses a neural network (colloquially, artificial intelligence) and learns the machine behaviour independently; the recorded values are then interpreted against the self-learned properties. In this way, several factors can be combined into one behaviour pattern: increased motor speed, reduced current consumption and a longer conveying time between the light barriers together indicate slippage or wear of the conveyor belt over the drive drum. However, this approach requires much more computing power, which means that processing currently still has to take place in a higher-level controller or the cloud, while the edge controller is used for local data compression. Advances in line with Moore's Law will drastically increase the technological possibilities within a very short time and lead to more sophisticated models and processing directly in the control system and the frequency inverter.
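As a rough sketch of how such a combination of factors might be scored (the signal names, baseline values and thresholds below are hypothetical, not Lenze's actual model), each signal's deviation from learned healthy behaviour can be folded into a single indicator:

```python
def anomaly_score(samples, baseline):
    """Combine deviations of several drive signals into one score.

    `baseline` maps each signal name to (mean, stdev) learned
    from healthy runs of the machine.
    """
    score = 0.0
    for name, value in samples.items():
        mean, stdev = baseline[name]
        score += abs(value - mean) / stdev  # per-signal z-score
    return score / len(samples)

# Baseline learned from a healthy conveyor (hypothetical values).
baseline = {
    "motor_speed_rpm":  (1480.0, 15.0),
    "current_a":        (4.2, 0.2),
    "conveying_time_s": (12.5, 0.3),
}

# Higher speed, lower current, longer conveying time between the
# light barriers: taken together, a hint of belt slippage or wear.
worn = {"motor_speed_rpm": 1520.0, "current_a": 3.7, "conveying_time_s": 13.6}
print(anomaly_score(worn, baseline))  # well above a healthy score of ~0
```

The point of the combination is that none of the three signals alone is alarming, while their joint deviation is.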


The future has begun - first projects in the automotive industry


In their first expansion stage, the projects already being implemented each use data from around 1,000 drive packages distributed across several plants. Pure data access already makes it possible to compare data between plants and detect deviations. The data is stored and evaluated locally within the company’s own network, so that initial experience can be gained with the handling, volume, processing and analysis of the information. The system has an open design and distributes the load across several edge controllers, which communicate with the higher-level data servers or data lake via MQTT and use OPC UA downwards. In addition to the Ethernet-based fieldbus connecting the company’s own components, this also enables third-party components to be connected to the system, and it remains scalable upwards via the number of edge controllers. This makes the knowledge gained interesting for large installations such as baggage conveyor systems in airports or fully automated warehouses, where more than 10,000 drive packages and components from a wide range of suppliers are not uncommon. The edge controllers make it possible to gain experience with data
pre-processing, compression, real-time behaviour, integration of third-party components and connection to the data lake. The second step then concerns predictive maintenance in practice: applying the algorithms already verified in the laboratory environment for detecting system anomalies, and verifying the data required for the domain. In the laboratory environment, the processed data already delivered good results. It will be interesting to see how the heavy data pre-processing with Fast Fourier Transformation, Kalman filters or envelope analysis affects the real data volume and the workload in the production environment. An immediate transfer of theoretical and
laboratory findings to the practical environment cannot always be guaranteed. For example, the components installed around the sensors and drive packages, with their own behaviour, can distort the error patterns seen by the algorithms; how robust the data-based model is to such influences, and how it can be adapted, will be an interesting finding from this investigation. We know this phenomenon from everyday life: when glasses rattle in the kitchen wall cupboard, the cause is often not in the cupboard itself but in the nearby refrigerator, whose compressor has just increased its cooling capacity. So are we already in the ‘day after tomorrow’, and are the developed processes already sufficient to ensure efficiency gains and avoid unplanned plant shutdowns? The future remains exciting. The market has been dealing with ‘condition
monitoring’ for decades. In simple terms, the condition of a system is observed (monitored), for example its temperature or vibration, and warning signals are generated at corresponding threshold values. However, this simple approach is clearly limited. The failure rate of the mechanical components of a drive package (gearbox and motor), whose temperature is recorded and evaluated, is, relative to the service life of the system, more than 75% lower than the failure rate of an electronic component in a frequency inverter or a control system. Temperature evaluation is a good indicator of the condition of a gearbox, but for the overall system of frequency inverter, gearbox and gearmotor it says little about the system’s condition and is clearly limited in this form. Predictive maintenance is an attempt to predict
when a fault will occur, so that countermeasures can be initiated in good time. All available data, such as torque, current or temperature, is used for this purpose. However, permanently recording this data and sending it to the cloud is only partially effective, because more data does not equate to a higher-quality or more accurate statement; instead, it significantly increases complexity and makes the necessary processing of the data more difficult. The art is therefore to derive as many insights as possible from as little data as possible. This requires a fundamental understanding of the machine and the system, which can be summarized under the term ‘domain knowledge’. With the corresponding system knowledge, causality and correlation of the data and the findings are then related and systematically evaluated. This makes it clear whether, for example, an unexpected increase in electricity demand is a system error or reflects a significantly higher, temporary utilization of the plant.


Lenze www.Lenze.com


OCTOBER 2022 DESIGN SOLUTIONS 9

