February 2020
What's Next for Deep Learning in SMT?
(Continued from previous page)
geometry. This means improving AOI systems to ensure the stability and accuracy of the definition of new components, even in dynamic production environments where designs, assembly fluids and packaging change rapidly. For Mycronic, this has required building up a large library of tens of thousands of 3D images, including data on complex geometries, that can be quickly recognized using algorithmic assistance. "We will need ten times or even a hundred times more data, so that our deep neural network can generalize to all types of components and all types of boards. This amount of data is quite usual for deep learning applications," says Roux.

Machine learning, which is necessary for a range of auto-programming, closed-loop and predictive systems for Mycronic equipment today, involves parsing structured data to train machine learning algorithms according to defined criteria. Going a step further, deep learning, a subfield of machine learning and AI, structures the algorithms in layers to create an artificial neural network that can create and simulate new situations in order to improve its decision-making without relying on rule-based programming. Machine learning is divided into three types of training:


Supervised Learning. In supervised learning, the deep neural network is fed with corresponding inputs and expected outputs. This data describes the behavior that the network is to mimic after training: given these inputs, the network must learn to return these outputs. Computer vision is a domain in which supervised learning with deep neural networks has systematically reached state-of-the-art performance over the past decade. Image classification, object recognition and image restoration are typical cases in which there is now no doubt that deep convolutional neural networks outperform every other approach. In the past five years, generative adversarial networks have pushed the limits even further.
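For readers who want to see the shape of this in code, the following Python/PyTorch sketch trains a small convolutional classifier from pairs of inputs and expected outputs. The network, the 64 x 64 image size and the ten classes are placeholders chosen for illustration, not anything from Mycronic's systems:

import torch
import torch.nn as nn

# Small convolutional classifier for, e.g., 64x64 grayscale component images
# (image size and number of classes are assumptions for this sketch).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),  # 10 hypothetical component/defect classes
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised step: the network is fed inputs and expected outputs."""
    optimizer.zero_grad()
    logits = model(images)          # the network's current behavior
    loss = loss_fn(logits, labels)  # gap between output and expected output
    loss.backward()                 # adjust weights to return the expected output
    optimizer.step()
    return loss.item()

# Dummy batch standing in for a labeled AOI image set.
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 10, (8,))
print(train_step(images, labels))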


Unsupervised Learning. Another typical application of deep learning is process monitoring for maintenance. Anomaly detection is usually based on first being able to capture the normal behavior of a system. If a solution is to be versatile, we must treat abnormal behavior as unknown. Unsupervised learning with convolutional autoencoders is a powerful way to learn what "normal" means. Once the model has learned this, designing an anomaly detection module is straightforward.
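As an illustration of that recipe (and only an illustration), the sketch below trains a convolutional autoencoder on images of normal behavior and flags a sample as anomalous when its reconstruction error exceeds a threshold. The 64 x 64 input size and the threshold value are assumptions:

import torch
import torch.nn as nn

# Convolutional autoencoder: compress 64x64 images, then reconstruct them.
autoencoder = nn.Sequential(
    # encoder
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),             # 64 -> 32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),            # 32 -> 16
    # decoder
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
)

optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

def train_on_normal(batch):
    """Unsupervised training: only images of normal behavior, no labels."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(batch), batch)
    loss.backward()
    optimizer.step()
    return loss.item()

def is_anomaly(image, threshold=0.02):
    """High reconstruction error means the sample does not look 'normal'."""
    with torch.no_grad():
        error = nn.functional.mse_loss(autoencoder(image), image)
    return error.item() > threshold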


Reinforcement Learning. Reinforcement learning may be the future of AI. Adapting it to the SMT industry requires us to solve the problem of the amount of data required for training AI agents. Simulating the environment in which the AI has to interact is key toward a higher level of artificial intelligence.
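The loop below is a toy Python sketch of that idea: a simulated environment stands in for the real production line, so the agent can collect unlimited experience at no cost. The environment, states, actions and rewards are invented purely for illustration and have nothing to do with a real SMT process:

import random

class ToySimulator:
    """Stand-in for a simulated production environment (purely illustrative)."""
    def __init__(self):
        self.state = 0
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):
        # Action 1 moves toward the goal state, action 0 does not.
        self.state = min(self.state + action, 5)
        reward = 1.0 if self.state == 5 else 0.0
        done = self.state == 5
        return self.state, reward, done

# Tabular Q-learning: cheap simulated experience replaces scarce real data.
q = [[0.0, 0.0] for _ in range(6)]   # q[state][action]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

env = ToySimulator()
for episode in range(500):
    state, done = env.reset(), False
    while not done:
        if random.random() < epsilon:
            action = random.randint(0, 1)                       # explore
        else:
            action = max((0, 1), key=lambda a: q[state][a])     # exploit
        next_state, reward, done = env.step(action)
        # Update toward the reward plus the discounted best future value.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print(q)  # learned value of each action in each simulated state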


AI Deployment
Once a neural network has been trained, it can be deployed to the cloud, to a local network or on local hardware. "Which solution depends on the equipment's accessibility to the cloud," says Roux. "The bandwidth of the local network, the complexity of the neural network, the size of the data, and the need for real-time inference are also important aspects when choosing the solution."

Where the network is run depends on the context of the inference. For example, during offline programming of manufacturing equipment, one can use a dedicated server, since latency is not a constraint and the local offline programming station (a standard laptop) may not meet basic specifications. For inline systems with very tough constraints on speed and latency, one may have to run the neural network on local hardware. Finally, data protection requirements can simply rule out cloud computing.

When the neural network runs in the cloud on a dedicated server, large bandwidth is required to avoid too much latency between the query to the neural network and the response. Most of the time, at least one GPU will be required in the computation unit in which the neural network is deployed.
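As a purely illustrative sketch of the local-hardware case, the Python/PyTorch snippet below exports a trained network and runs inference on a GPU when one is present, falling back to the CPU otherwise. The model, the file name and the input shape are placeholders, not any vendor's deployment pipeline:

import torch
import torch.nn as nn

# Placeholder for a trained network (in practice, loaded from a checkpoint).
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Flatten(),
                      nn.Linear(8 * 64 * 64, 2))

# Choose where inference runs: a local GPU if the hardware has one, else the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()

# TorchScript export makes the network deployable without the training code.
scripted = torch.jit.script(model)
scripted.save("inspection_model.pt")  # hypothetical file name

# Real-time inference on a single captured image.
image = torch.randn(1, 1, 64, 64, device=device)
with torch.no_grad():
    prediction = scripted(image).argmax(dim=1)
print(prediction.item())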


As for the training phase, one can use distributed training to speed up computations. In some extreme cases, the training of complex neural networks on a large database may require weeks of computation. By distributing the training on GPUs, one can reduce the computation time dramatically. "Definitely, in this case, cloud computing services proposed by companies such as Amazon, Google and IBM are a simple way to access customized computation platforms," says Roux. "At the CDLe, we have access to a remote system with many high-end GPUs built by D2S."
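The data-parallel pattern described here can be sketched with PyTorch's DistributedDataParallel, where each GPU trains on its own slice of the data and gradients are averaged across processes. This is only an illustrative skeleton (launched with, for example, torchrun --nproc_per_node=4 train.py), not the CDLe's actual setup; the model and dummy tensors are placeholders:

import os
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# One process per GPU; torchrun sets LOCAL_RANK for each of them.
torch.distributed.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Placeholder model; each process holds a replica, gradients are averaged.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).cuda(local_rank)
model = DDP(model, device_ids=[local_rank])
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Each process would load a different shard of the dataset here
    # (dummy tensors stand in for real training data).
    inputs = torch.randn(32, 128, device=local_rank)
    targets = torch.randint(0, 10, (32,), device=local_rank)
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()   # gradients are synchronized across the GPUs here
    optimizer.step()

torch.distributed.destroy_process_group()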


Digital Twins
Today, deep learning scientists are building virtual replicas of physical factories and combining …
(Continued on page 93)








