Deep Networks


Figure 4: Performance of the Deep Residual U-Net model with and without data augmentation for automatic segmentation of human cheek cell nuclei. Data augmentation synthesizes “new” images from a small training set by introducing transformations (rotation, scaling, cropping) and noise to help the network learn information to aid predictions on unseen data. (A) Dice Loss test: pixels correctly determined divided by the total pixels, as a metric for segmentation quality, versus the number of training examples. Data augmentation during training yields a better result regardless of the number of training examples. (B) Accuracy of auto-segmentation: model predictions compared with manual segmentation versus the number of training examples. The threshold for quality segmentation appears to be about 150 training examples.
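The two segmentation metrics named in the caption can be sketched in a few lines. This is an illustrative NumPy implementation, not the authors' code: panel (A)'s per-pixel score is written here as a plain pixel-accuracy function, alongside the standard Dice coefficient for comparison.

```python
import numpy as np

def pixel_accuracy(pred, truth):
    """Fraction of pixels labeled correctly: correct pixels / total pixels."""
    return float(np.mean(pred == truth))

def dice_coefficient(pred, truth, eps=1e-7):
    """Standard Dice coefficient for binary masks: 2*|A & B| / (|A| + |B|).

    eps guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))
```

For example, a 2x2 prediction that gets 3 of 4 pixels right scores a pixel accuracy of 0.75 and a Dice coefficient of 2/3.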


position with respect to the nucleus may result in a series of varying shapes and contrast features depending on the section of the nucleus present.

Performance tests of the model. To understand whether the Deep ResUnet model was really learning to perform the nuclear segmentation task and not just remembering oddities or specifics of the training data, the accuracy of the model was assessed by comparing results from the model with held-out test data. The latter were additional images and hand segmentations that the model did not see or use during training. Although a relatively large number of hand-segmented images was used in this work, because they were part of a long-running project, most investigators who might use this deep learning algorithm would have considerably less data to work with. The performance of the model was therefore examined on the held-out test data while varying the amount of input training data.

Figure 4 shows the impact of data augmentation on the effectiveness of the segmentation. Two experimental sets were trained with the same training set sizes, one with augmentation and one without. For both experimental sets, the performance drops off nonlinearly as the size of the training set is reduced. Since the model had no prior information about cells and nuclei, it learned exclusively by example. Overfitting can cause poor performance on unseen data. Clever regularization and optimization methods may not completely ameliorate this issue, so it is useful to determine a threshold for the number of training examples needed to achieve a quality prediction. Figure 4 shows that for two
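The augmentation described above (rotation, flipping, and noise applied to image/mask pairs) can be sketched as follows. This is a minimal NumPy illustration under assumed transform choices, not the authors' pipeline; a real pipeline would also include scaling and cropping.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, mask):
    """Synthesize a 'new' training pair from an existing one.

    Applies a random 90-degree rotation and a random horizontal flip to
    both the image and its segmentation mask, then adds Gaussian noise
    to the image only (the ground-truth labels must stay clean).
    """
    k = int(rng.integers(0, 4))                   # 0, 90, 180, or 270 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                        # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    image = image + rng.normal(0.0, 0.05, image.shape)  # additive noise
    return image, mask
```

Because every geometric transform is applied identically to the image and the mask, each augmented pair remains a valid training example.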




different performance assessments the threshold was about 150 training examples. The predicted segmentation for any arbitrary image can be obtained by passing a new image through the network and recording the output. Figure 5 shows that, again, 150 training examples appears to be the threshold for an acceptable automated segmentation.


Figure 5: After training with data augmentation and various numbers of training examples, two test images were passed through the network to get predictions. The model calculates the probability of a pixel being inside the nucleus. The image was then thresholded such that values higher than 0.6 were labeled “1”, while all other pixels were labeled “0”. Effective segmentations can be produced for a variety of nuclei with different textures/shapes with 150 example training images.
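The thresholding step described in the caption is a one-liner; the function name below is illustrative:

```python
import numpy as np

def segment(prob_map, threshold=0.6):
    """Binarize a per-pixel probability map: values above the threshold
    become 1 (inside the nucleus), all other pixels become 0."""
    return (np.asarray(prob_map) > threshold).astype(np.uint8)
```

Passing a probability map such as [0.7, 0.5, 0.61] through this step yields the binary mask [1, 0, 1].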


www.microscopy-today.com • 2019 January

