

contrast (DIC) and electron microscope images with few training examples [7]. Encoder-decoder structures entail downsizing the image to the features of interest and then restoring the image with its low-level features. By combining data augmentation with the U-Net model, a hybrid algorithm has achieved unprecedented success in certain tasks. In some cases, the algorithm can work with only 15 training examples [7].
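To make the augmentation step concrete, the sketch below uses Keras's ImageDataGenerator to produce randomly rotated, shifted, and flipped copies of a small image-mask training set. It is a minimal illustration rather than the pipeline of [7]; the array shapes, batch size, and transform ranges are assumptions chosen for demonstration.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical stand-ins for a small training set:
# 15 grayscale images and their binary masks, 256 x 256 pixels.
images = np.random.rand(15, 256, 256, 1)
masks = np.random.randint(0, 2, (15, 256, 256, 1)).astype("float32")

# The same random transforms must be applied to images and masks,
# so two generators share one seed.
aug_args = dict(rotation_range=90, width_shift_range=0.1,
                height_shift_range=0.1, horizontal_flip=True,
                vertical_flip=True, fill_mode="reflect")
image_gen = ImageDataGenerator(**aug_args).flow(images, batch_size=4, seed=42)
mask_gen = ImageDataGenerator(**aug_args).flow(masks, batch_size=4, seed=42)

# Each draw yields a freshly transformed batch, effectively
# enlarging the 15-example training set without new imaging.
aug_images, aug_masks = next(image_gen), next(mask_gen)
```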


Connecting new data with the training set. Further enhancements on the basic CNN structure have relied on changing the way convolutions and nonlinear activations are arranged. Residual blocks (resblocks) are one way to connect the convolution operations between the input and output. The residual block structure allows the network to learn small features rather than full image transformations, thus making it easier to pass errors back through the network during training [8]. The Deep ResUnet model, also known as the Deep Residual U-Net, implements residual blocks to increase training speed and simultaneously reduce the risk of overfitting [9]. With 15 convolutional layers, 6 residual blocks, and no data augmentation, ResUnet achieved a record performance of 98.66% accuracy on the Massachusetts roads dataset [9].

Testing the model with TEM images. This article examines the effectiveness of a Deep ResUnet model for segmenting TEM images of stained nuclei from human cheek cells. The effectiveness of data augmentation was examined by varying the size of the training set in pursuit of a generalizable model.


Materials and Methods

TEM sample preparation. Human cheek cells were harvested using a Cytobrush® (CooperSurgical, Trumbull, CT) by gently swabbing the inner cheek. The cells were fixed immediately in 2.5% EM-grade glutaraldehyde (Electron Microscopy Sciences, Hatfield, PA) and 2% paraformaldehyde (EMS) in 1× phosphate-buffered saline (Sigma-Aldrich, St. Louis, MO). Cell pellets were formed after centrifuging at 2500 rpm, and gelatin was added to prevent dislodging. After the gelatin solidified at 4 °C, the cell-gelatin mixture was treated as a tissue sample. The mixture was further fixed in the same fixative for 1 hour at room temperature before staining with 1% OsO4 to enhance contrast in TEM imaging. After serial ethanol dehydration, the sample was embedded in epoxy resin and cured at 60 °C for 48 hours. Microtomed sections of 50 nm thickness were produced with a Leica FC7 ultramicrotome (Leica Microsystems, Buffalo Grove, IL) and mounted on a plasma-cleaned 200 mesh TEM grid covered with a carbon/formvar film (EMS). Post-staining was performed with uranyl acetate (EMS) and lead citrate (EMS) to enhance the contrast of nuclear content.

Imaging. A Hitachi HT7700 TEM (Hitachi High-Technologies America, Pleasanton, CA) was employed to image whole cheek cells, operating at 80 kV under low-dose conditions. Careful manual segmentation of the nuclei was performed using Adobe Photoshop (Adobe Inc., San Jose, CA) and MATLAB (MathWorks, Natick, MA).

Architecture of the model. The Deep Residual U-Net was implemented from scratch using the TensorFlow (Google Inc., Mountain View, CA) and Keras libraries [9]. Figure 1 gives an overview of the network architecture.


Figure 1: Flowchart of a U-Net convolutional neural network. A U-Net is one classic way to arrange operations for segmenting and denoising images. In a U-Net, several convolutional blocks with nonlinear functions at the end, referred to as resblocks in the figure, are arranged in sequence. After each block, the image is downsampled, which allows convolution to be performed at a higher and higher level in the image. After three convolutions and downsamples, the transformed image is passed to the right-hand side of the network and iteratively upsampled, that is, increased in size with greater detail. After each upsample, the fine details are passed back into the image through a skip connection before being convolved and output into a binary mask.
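For readers who want to see that arrangement in code, the sketch below builds a toy U-Net with the Keras functional API. It is a simplified stand-in, not the authors' network: the depth, filter counts, and input size are illustrative assumptions, and the plain convolutions here would be replaced by the residual blocks of Figure 2 in a true Deep ResUnet.

```python
from tensorflow.keras import layers, Model

def tiny_unet(size=256):
    """Toy U-Net: three downsamples, three upsamples, skip connections."""
    inputs = layers.Input((size, size, 1))

    # Encoder: convolve, then downsample; keep each level's output
    # for the skip connections.
    skips, x = [], inputs
    for filters in (16, 32, 64):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)  # halve the spatial size

    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)  # bridge

    # Decoder: upsample, then re-inject fine detail via the skip connections.
    for filters, skip in zip((64, 32, 16), reversed(skips)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    # A 1x1 convolution with a sigmoid produces the binary nucleus mask.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs, outputs)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```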




Figure 2: Flowchart of the resblock. Each resblock is composed of a batch normalization, a rectified linear unit (ReLU), and a convolution. Batch normalization simplifies training by rescaling the inputs to each layer. The left path down the network transforms the input image through a series of convolutions and nonlinear activations. The right side simply passes the image through without any large transformation. The paths are then added together, which allows the network to learn subtle transformations without having to remember the entire image down the left path explicitly.
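The resblock of Figure 2 translates almost line for line into Keras. In the sketch below, the batch normalization-ReLU-convolution sequence is repeated twice and the shortcut path uses a 1×1 convolution only to match channel counts, following the general recipe of [9]; the filter count and kernel size are illustrative assumptions.

```python
from tensorflow.keras import layers

def resblock(x, filters=64):
    """Pre-activation residual block: BN -> ReLU -> Conv on the left path,
    a nearly untouched identity on the right, summed at the end."""
    # Left path: the learned transformation.
    y = layers.BatchNormalization()(x)  # rescale inputs to ease training
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)

    # Right path: a 1x1 convolution only to reconcile channel counts;
    # otherwise the input passes through unchanged.
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)

    # Adding the paths lets the block learn a small residual
    # instead of a full image transformation.
    return layers.Add()([y, shortcut])
```

Substituting such blocks for the plain convolutions in the U-Net sketch above gives the Deep ResUnet arrangement described in the text.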



