and the filaments attached to flocs, halos could not be eliminated completely using this procedure.
Kittler’s Thresholding-Based Segmentation

In this approach, the grayscale image is pre-processed by dilation and erosion to eliminate the halos, which cause over-segmentation of flocs and filaments. Kittler’s thresholding, also termed minimum-error thresholding, was then used to obtain the final binarized image. This thresholding method was proposed by Kittler & Illingworth (1986).
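For illustration, a minimal Python sketch of this approach is given below. The minimum-error criterion follows Kittler & Illingworth (1986); the structuring-element size, the order of dilation and erosion, and the dark-foreground polarity are assumptions for the sketch, not the exact settings used in this work.

```python
import numpy as np
from scipy import ndimage

def kittler_threshold(gray):
    """Minimum-error (Kittler-Illingworth, 1986) threshold of an 8-bit image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                       # normalized histogram
    g = np.arange(256, dtype=float)
    best_t, best_j = 128, np.inf
    for t in range(1, 255):
        p1, p2 = p[:t].sum(), p[t:].sum()       # class priors
        if p1 < 1e-6 or p2 < 1e-6:
            continue
        m1 = (g[:t] * p[:t]).sum() / p1         # class means
        m2 = (g[t:] * p[t:]).sum() / p2
        v1 = ((g[:t] - m1) ** 2 * p[:t]).sum() / p1   # class variances
        v2 = ((g[t:] - m2) ** 2 * p[t:]).sum() / p2
        if v1 < 1e-6 or v2 < 1e-6:
            continue
        # Minimum-error criterion J(t); 2*P*ln(sigma) written as P*ln(variance).
        j = 1 + p1 * np.log(v1) + p2 * np.log(v2) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_t, best_j = t, j
    return best_t

def kittler_segment(gray, selem_size=3):
    """Pre-process by grayscale dilation and erosion, then binarize (sketch)."""
    smoothed = ndimage.grey_dilation(gray, size=(selem_size, selem_size))
    smoothed = ndimage.grey_erosion(smoothed, size=(selem_size, selem_size))
    t = kittler_threshold(smoothed)
    return smoothed < t          # assumes dark flocs/filaments as foreground
```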
Split-Merge Method-Based Segmentation

In this method, the image is first divided into blocks using quadtree decomposition. Each quad-region is then set to 1 based on a predicate function (Gonzalez et al., 2013). The predicate function is defined by the following logical variable $V_b$ for a block $b$:
$$V_b = (\sigma_b > 6)\ \text{AND}\ (m_b > 0)\ \text{AND}\ (m_b < 255), \tag{2}$$
where $\sigma_b$ is the standard deviation and $m_b$ the mean of block $b$. If the predicate is true, the pixels of the block are considered objects. If the predicate is false, the block is divided further, with a minimum block size of 3×3 pixels.
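A minimal Python sketch of the split step is given below, assuming a square grayscale image whose side length is a power of two; the merge step is omitted, and the dyadic splitting cannot reproduce the exact 3×3 minimum block size, so the smallest block here is set by the recursion.

```python
import numpy as np

def predicate(block):
    """Predicate V_b of Eq. (2): true when the block is 'object-like'."""
    return block.std() > 6 and 0 < block.mean() < 255

def split_segment(gray, out=None, r0=0, c0=0, size=None, min_size=3):
    """Recursive split step of the split-merge segmentation (sketch only)."""
    if out is None:
        size = gray.shape[0]                 # assumes a square, power-of-two image
        out = np.zeros(gray.shape, dtype=bool)
    block = gray[r0:r0 + size, c0:c0 + size]
    if predicate(block):
        out[r0:r0 + size, c0:c0 + size] = True   # mark block pixels as object
    elif size // 2 >= min_size:
        half = size // 2                     # otherwise split into four quadrants
        for dr, dc in ((0, 0), (0, half), (half, 0), (half, half)):
            split_segment(gray, out, r0 + dr, c0 + dc, half, min_size)
    return out
```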
Top-Bottom-Hat Filtering-Based Segmentation

Jenné et al. (2007) suggested an automatic image analysis system to detect the abnormal state of filamentous bulking using top-hat and bottom-hat filtering, followed by intermeans thresholding. They used the R and B channels of the images, considered samples in the bulking state only, and did not address the PCM artifacts. The artifacts of shade-off and halos hinder the segmentation of the filaments attached to the flocs. The effect is found to be significant when there are large flocs and small filaments. Therefore, we customized the top-hat and bottom-hat filtering algorithm for segmentation of the filaments only.

In the proposed method, we have used the morphological operations of top-hat and bottom-hat filtering, followed by global thresholding. The G channel of the RGB image is used as the grayscale image. First, the top-hat filtered image is added to the grayscale image. Then, the bottom-hat filtered grayscale image is subtracted from the resultant image. These procedures enhance the contrast of the filaments and of the darker regions inside the flocs. Finally, scaled global mean thresholding is used to obtain the binary image. The algorithm cannot be used to segment flocs, as it is unable to detect the relatively brighter regions of the flocs and the filaments within them.
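For illustration, a minimal Python sketch of the enhancement and thresholding steps is given below. The structuring-element size, the scale factor of the mean threshold, and the dark-foreground polarity are assumptions for the sketch, not the parameters used in this work.

```python
import numpy as np
from scipy import ndimage

def tophat_bottomhat_segment(rgb, selem_size=15, scale=0.9):
    """Sketch: enhance the G channel with top-/bottom-hat filtering and
    binarize with a scaled global mean threshold."""
    g = rgb[..., 1].astype(float)                                 # G channel as grayscale
    top = ndimage.white_tophat(g, size=(selem_size, selem_size))  # bright small-scale detail
    bottom = ndimage.black_tophat(g, size=(selem_size, selem_size))  # dark small-scale detail
    enhanced = g + top - bottom                                   # contrast enhancement
    threshold = scale * enhanced.mean()                           # scaled global mean threshold
    return enhanced < threshold                                   # dark filaments as foreground
```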
Floc Removal

The flocs in the PCM images of AS have uncertain regions, owing to the halos at the boundaries of flocs and filaments and to the complex, arbitrary structure of the flocs. A bright region inside a floc might be floc, filament, or background. This problem led us to assess the segmentation only on the basis of the gold approximation of the ground-truth images of the filaments. Consequently, we face the challenge of automated removal of the flocs without affecting the morphology and count of the filaments.

We have adopted two algorithms, applied sequentially, to remove the flocs from all the segmentations to be assessed. First, we detected the flocs in the grayscale image by Otsu thresholding, followed by morphological dilation. The resultant mask was subtracted from the segmentation being considered for the assessment. In the second stage, the flocs and filaments were differentiated using the reduced radius of gyration, and the flocs were deleted. The final image comprises filaments only if the segmentation is successful and there are no false objects.
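A minimal Python sketch of the two-stage floc removal is given below. The dilation extent, the definition of the reduced radius of gyration used here (the radius of gyration divided by the equivalent-circle radius), and its cut-off value are assumptions for the sketch; the procedure above specifies only Otsu thresholding, dilation, and a reduced radius of gyration criterion.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def remove_flocs(gray, segmentation, rg_cutoff=1.2, dilation_iters=9):
    """Sketch of the two-stage floc removal described above."""
    # Stage 1: floc mask from the grayscale image (assumes flocs darker than
    # the background), dilated and subtracted from the candidate segmentation.
    floc_mask = gray < threshold_otsu(gray)
    floc_mask = ndimage.binary_dilation(floc_mask, iterations=dilation_iters)
    remaining = segmentation & ~floc_mask

    # Stage 2: keep elongated (filament-like) objects only, using a reduced
    # radius of gyration (R_g divided by the equivalent-circle radius).
    labels, n = ndimage.label(remaining)
    keep = np.zeros_like(remaining)
    for i in range(1, n + 1):
        rows, cols = np.nonzero(labels == i)
        rc, cc = rows.mean(), cols.mean()
        rg = np.sqrt(((rows - rc) ** 2 + (cols - cc) ** 2).mean())
        r_eq = np.sqrt(rows.size / np.pi)      # radius of the equal-area circle
        if rg / r_eq > rg_cutoff:              # elongated -> treated as filament
            keep |= labels == i
    return keep
```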
Performance Assessment
We used accuracy, the Rand (1971) index, the false positive rate (FPR), and the false negative rate (FNR) for the performance assessment of the segmentation algorithms. The Rand index is a clustering-based metric, whereas the other metrics are based on the basic cardinalities of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). If TP is the number of foreground pixels correctly detected by a segmentation algorithm, FP the number of pixels incorrectly detected as foreground, TN the number of correctly detected background pixels, and FN the number of pixels incorrectly assigned to the background, then the accuracy, FPR, and FNR are given by
$$\text{Accuracy} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}}, \tag{3}$$

$$\mathrm{FPR} = \frac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TN}}, \tag{4}$$

$$\mathrm{FNR} = \frac{\mathrm{FN}}{\mathrm{FN} + \mathrm{TP}}. \tag{5}$$
Ideally, the accuracy should be unity, and the FPR and FNR should be 0. We assessed only on the basis of the filaments (attached or free), because of the uncertainty in the floc region arising from halos and shade-off, and we assumed that the floc region is background. As a result, the number of filament pixels is very small compared with the number of background and floc pixels. Therefore, the accuracy remains very high (above 0.95) even when the segmentation is a failure, and the same holds for the Rand index. TN will be large even if some filament pixels are misclassified as background, which results in a very small FPR despite a failed segmentation. In contrast to FPR, FNR can be large even when the segmentation is successful, because TP cannot exceed the number of filament pixels, which is very small relative to the number of background pixels. Hence, even a small difference in the values of the performance metrics, especially accuracy, the Rand index, and FPR, can be significant. In comparison, FNR shows pronounced variation between good and bad segmentations.
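For completeness, a minimal Python sketch of Eqs. (3)-(5) for boolean filament masks is given below; both masks are assumed to mark filament pixels as True and everything else, including the floc region, as background.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Accuracy, FPR, and FNR of Eqs. (3)-(5) for boolean filament masks."""
    tp = np.sum(pred & truth)        # filament pixels correctly detected
    tn = np.sum(~pred & ~truth)      # background pixels correctly detected
    fp = np.sum(pred & ~truth)       # background detected as filament
    fn = np.sum(~pred & truth)       # filament detected as background
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    return accuracy, fpr, fnr
```

The Rand index can be obtained, for instance, with sklearn.metrics.rand_score applied to the flattened masks (available in recent scikit-learn releases).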
RESULTS AND DISCUSSION
To assess the segmentation approaches, 61 PCM images of AS (29 at 20 × original objective magnification and 32 at