

Figure 1 shows the high-level architecture of the network, in which an image is passed through multiple blocks composed of the same pattern of mathematical operations. As the image passes through the left-hand side of the network, it is downsampled, or reduced in size, as it is convolved with the values of the network. Once it enters the right-hand side, the outputs of the earlier layers of the network are added back in through skip connections, junctions that relay low-level features to preserve fine-scale detail that would otherwise be lost by the downsampling operations. Each upsampling, or increase in image size, on the right side slowly steps the image back up to its original scale, having been extensively transformed along the way. Each resblock module contains the pattern of mathematical operations shown in Figure 2: a set of batch normalizations and nonlinear functions known as rectified linear units (ReLU), followed by convolutions [9]. Batch normalization speeds up training because it scales the inputs down to reduce variance. The ReLU activation function adds nonlinearity to the network, allowing the model to learn fine details. Within each residual block, the original input is added to the output of the convolutional elements, allowing the block to learn a transformation without having to remember the original image.

The initial, truncated residual block of Figure 1 uses 64 filters in each convolution. The next two residual blocks use 128 and 256 filters, respectively, followed by the central block with 512 filters. The decoder follows the symmetrically opposite pattern: 256, 128, then 64 filters. Each filter learns a shape or texture that is relevant to discerning nuclear from non-nuclear regions in the cell.
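To make the residual block concrete, here is a minimal sketch in Keras (the library the study used). The pre-activation ordering (batch normalization and ReLU before each convolution) and the additive skip follow the description above; the function name, kernel size, and the 1x1 projection used to match channel counts are illustrative assumptions, not the authors' exact code.

```python
from tensorflow.keras import layers

def res_block(x, filters):
    """One residual block: (BN -> ReLU -> Conv) twice, plus a skip connection."""
    shortcut = x
    y = layers.BatchNormalization()(x)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    # The block's input is added to the convolutional output, so the block
    # learns a transformation rather than re-memorizing the image. The 1x1
    # projection (an assumption) reconciles channel counts when the filter
    # number changes between blocks.
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    return layers.Add()([shortcut, y])
```

Following the filter counts above, the encoder would chain such blocks with 64, 128, 256, and 512 filters, and the decoder would mirror them with 256, 128, then 64.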


Training. Following previous work [9], a variant of mini-batch gradient descent was used to optimize the values within the network with a binary cross-entropy loss function:


$$L(y, \hat{y}) = -\,y \log(\hat{y}) - (1 - y)\log(1 - \hat{y}) \qquad (1)$$


where $y$ records the number and location of each ground truth pixel labeled as the nucleus, and $\hat{y}$ accounts for the number of pixels predicted to be the nucleus along with their positions. The Adam optimization method with a batch size of 2 was run for 30 epochs with a learning rate of $10^{-5}$ [10].
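As a sketch, that training configuration maps onto Keras roughly as follows; a deliberately trivial stand-in model and placeholder arrays are used where the real residual U-Net and the augmented training data would go.

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.optimizers import Adam

# Toy stand-in for the network of Figure 1, just to make the calls concrete.
inputs = layers.Input(shape=(256, 256, 1))
outputs = layers.Conv2D(1, 1, activation="sigmoid")(inputs)
model = models.Model(inputs, outputs)

# Configuration stated in the text: Adam, learning rate 1e-5,
# binary cross-entropy loss (Equation 1), batch size 2, 30 epochs.
model.compile(optimizer=Adam(learning_rate=1e-5), loss="binary_crossentropy")

# Placeholder arrays standing in for the augmented images and masks.
train_images = np.zeros((10, 256, 256, 1), dtype="float32")
train_masks = np.zeros((10, 256, 256, 1), dtype="float32")
model.fit(train_images, train_masks, batch_size=2, epochs=30)
```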


The Dice coefficient, given by Equation 2, was then applied to monitor the quality of the segmentation:

$$d(y, \hat{y}) = \frac{2\,|y \cap \hat{y}|}{|y| + |\hat{y}|} \qquad (2)$$


The Dice coefficient records the number of pixels the algorithm correctly predicts to be part of the nucleus, divided by the total number of pixels labeled as the nucleus in the predicted and ground truth images. Thus, it ranges from a value of 1.0, when the algorithm perfectly predicts the labels, down to 0.0.
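As a minimal sketch, Equation 2 can be computed directly on binary masks; the function and variable names here are illustrative.

```python
import numpy as np

def dice_coefficient(y_true, y_pred):
    """Equation 2 on binary (0/1) masks: 2|y intersect y_hat| / (|y| + |y_hat|)."""
    intersection = np.sum(y_true * y_pred)  # pixels labeled nucleus in both masks
    return 2.0 * intersection / (np.sum(y_true) + np.sum(y_pred))

# A perfect prediction scores 1.0; a prediction with no overlap scores 0.0.
mask = np.array([[1, 1], [0, 0]])
print(dice_coefficient(mask, mask))  # 1.0
```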


To examine how the algorithm behaves on unseen data, a test set of 75 examples was withheld for evaluation purposes (held-out test data). For each experiment, 5, 10, 25, 50, 150, or 300 training examples were used. Image augmentation was implemented using the open source OpenCV package, in which each image was rotated, scaled, and translated in a random direction and by a random magnitude before being fed into the network for training. Thus, a unique augmented image was used at each training step.

The code written for this study is open to the public and can be accessed at https://github.com/avdravid/TEM_cell_seg and https://github.com/khujsak. The model was implemented in the open source library Keras with a TensorFlow backend on a custom-built desktop computer equipped with a Core i7 CPU (Intel Corporation, Santa Clara, CA), 32 GB of RAM, and a GTX 1080 (NVIDIA, Santa Clara, CA), resulting in an average training time of 30 minutes.
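The rotate/scale/translate augmentation described above might look like the following OpenCV sketch. The parameter ranges are assumptions, since the text states only that the direction and magnitude were random; the same affine transform is applied to the image and its mask so the labels stay aligned.

```python
import cv2
import numpy as np

def augment(image, mask):
    """Randomly rotate, scale, and translate an image/mask pair."""
    h, w = image.shape[:2]
    angle = np.random.uniform(-180.0, 180.0)  # rotation in degrees (assumed range)
    scale = np.random.uniform(0.9, 1.1)       # zoom factor (assumed range)
    tx = np.random.uniform(-0.1, 0.1) * w     # horizontal shift (assumed range)
    ty = np.random.uniform(-0.1, 0.1) * h     # vertical shift (assumed range)
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, scale)
    M[:, 2] += (tx, ty)                       # fold the translation into the matrix
    image = cv2.warpAffine(image, M, (w, h))
    # Nearest-neighbor interpolation keeps the ground-truth mask strictly binary.
    mask = cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_NEAREST)
    return image, mask
```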


Figure 3: Manual segmentation. TEM images of cheek cells (A,B) and the binary masks (C,D) constructed by hand tracing the outlines of the nuclei. Images of nuclei can often be quite contorted because of the angle at which they were sectioned. Image contrast of the nuclei with respect to the cytoplasm may vary because of the specimen preparation or the exposure conditions of the microscope. Hand segmentation is labor intensive and may be subjective among different operators.

Results

Manual segmentation. Example images and hand segmentations are shown in Figure 3 to highlight the difficulty of the cheek cell nuclei segmentation task. A variety of cells with unique nuclear morphologies were present. In addition, the fact that the sample may be sectioned at an arbitrary angle and

