

Figure 2. The output of k-means for a two-dimensional data set containing two clusters. Cluster assignments are shown in red and blue. The implicit requirement of k-means that all atoms must belong to a cluster led to its failure in modeling solute atoms within the matrix.


$$p(k\mid n)=\frac{w_k\, g(\mathbf{x}_n;\,\boldsymbol{\mu}_k,\sigma_k)}{\sum_{j=1}^{K} w_j\, g(\mathbf{x}_n;\,\boldsymbol{\mu}_j,\sigma_j)}, \qquad (2)$$

where $g(\mathbf{x}_n;\,\boldsymbol{\mu}_k,\sigma_k)$ is a D-dimensional Gaussian function. In this equation, the numerator is essentially a weighted probability of finding an atom, n, in cluster, k; the denominator normalizes this probability across all clusters. The Gaussian parameters are then optimized given the newly updated atomic probabilities of being in each cluster. This step, which maximizes the parameter likelihood, is called the M step:


$$\boldsymbol{\mu}_k=\frac{\sum_{n=1}^{N} \mathbf{x}_n\, p(k\mid n)}{\sum_{n=1}^{N} p(k\mid n)}, \qquad (3)$$

$$\sigma_k=\sqrt{\frac{\sum_{n=1}^{N} \left\lVert \mathbf{x}_n-\boldsymbol{\mu}_k \right\rVert^2 p(k\mid n)}{D \sum_{n=1}^{N} p(k\mid n)}}, \qquad (4)$$

$$w_k=\frac{1}{N}\sum_{n=1}^{N} p(k\mid n), \qquad (5)$$

where N is the total number of atoms and D the Gaussian dimension. Equations (3) and (4) determine the mean and standard deviation of cluster k, respectively; the contributions of each atom to these values are weighted by the probability of that atom being in the cluster, which was calculated in the previous E step. Similar to equation (2), the denominators of equations (3) and (4) perform normalizations. Equation (5) computes the weight of a given cluster, k. This is accomplished by taking the probability of an atom belonging to cluster k and summing over all atoms.
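As a concrete illustration of these updates, the following is a minimal NumPy sketch of one EM iteration for a spherical GMM, following equations (2)-(5). The function name em_step and the array layout are illustrative assumptions, not GEMA's actual implementation.

```python
import numpy as np

def em_step(x, mu, sigma, w):
    """One EM iteration for a spherical GMM (equations (2)-(5)).

    x: (N, D) atom positions; mu: (K, D) cluster centers;
    sigma: (K,) isotropic standard deviations; w: (K,) cluster weights.
    """
    N, D = x.shape
    # E step, equation (2): responsibility p(k|n) of cluster k for atom n.
    sq = ((x[None, :, :] - mu[:, None, :]) ** 2).sum(axis=2)        # (K, N) squared distances
    g = np.exp(-0.5 * sq / sigma[:, None] ** 2) \
        / (2.0 * np.pi * sigma[:, None] ** 2) ** (D / 2.0)          # spherical Gaussian densities
    p = w[:, None] * g                                              # numerator of equation (2)
    p /= p.sum(axis=0, keepdims=True)                               # normalize across clusters

    # M step: re-estimate the parameters from the soft assignments.
    Nk = p.sum(axis=1)                                              # sum_n p(k|n), per cluster
    mu_new = (p @ x) / Nk[:, None]                                  # equation (3)
    sq_new = ((x[None, :, :] - mu_new[:, None, :]) ** 2).sum(axis=2)
    sigma_new = np.sqrt((p * sq_new).sum(axis=1) / (D * Nk))        # equation (4)
    w_new = Nk / N                                                  # equation (5)
    return mu_new, sigma_new, w_new, p
```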


The E step and M step are repeated until convergence,^b resulting in an estimate of μk, σk, and wk for each cluster, k.

An illustration of an EM algorithm being utilized to determine the best GMM to fit 2D data (with no background) is shown in Figure 3. In the first panel, (a), the data points are shown in green and the two randomly initialized Gaussians are shown in blue and red. The atoms are probabilistically assigned to the two clusters as shown in panel (b). Atoms are colored based on their probability of belonging to each cluster. Atoms that are equally likely to belong to either the blue or red cluster have been colored purple. This is referred to as the E step. In panel (c), the Gaussian/cluster parameters (mean, weight, and variance) are updated (M step) based on the cluster assignments from the previous step. In panels (d), (e), and (f) the E and M steps are repeated until convergence. A similar process is executed in GEMA, except for a few key differences: GEMA fits an additional background cluster; GEMA assumes the clusters are spherical; and GEMA's initialization utilizes a kernel density estimate (KDE), which is described in the next section.

In summary, the solute clusters within the APT data are modelled using 3D Gaussian distributions, one for each cluster. GEMA learns the parameters of these Gaussians and determines which atoms belong to each one via an iterative process. In each iteration GEMA slightly improves its estimate of which atoms belong to each cluster and of the parameters of each Gaussian. These parameters include the location of the cluster (μk), the width of the cluster (σk), and the density of the cluster (wk).
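To show how the two steps fit together, here is a hedged sketch of the iterate-until-convergence loop, reusing the em_step function sketched above; the placeholder data, tolerance, and iteration cap are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 3))                    # placeholder 3D atom positions
K = 2

# Random initialization, as in Figure 3(a); GEMA instead seeds from a KDE.
mu = x[rng.choice(len(x), size=K, replace=False)]
sigma = np.ones(K)
w = np.full(K, 1.0 / K)

for _ in range(200):                              # E and M steps until convergence
    mu_prev = mu.copy()
    mu, sigma, w, p = em_step(x, mu, sigma, w)    # em_step from the sketch above
    if np.abs(mu - mu_prev).max() < 1e-6:         # footnote b: a local maximum is guaranteed
        break
```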


Initialization Using KDEs


EM is an extremely useful tool; however, it is highly sensitive to initialization. Because convergence can occur at any local maximum, the algorithm is often prevented from finding the global maximum. Fortunately, this problem can be solved with careful initialization. Although initialization sensitivity is a thoroughly studied problem for general mixture models, typical initializations, such as k-means or repeated random restarts, were not viable for APT data. This is due to solute fluctuations within the matrix: initializations get stuck in these local maxima, which is not a problem for typical GMMs. Therefore, a novel initialization approach was developed, specifically adapted to the peculiarities of APT data.

First, a KDE (Bishop, 2006) was fit to the empirical solute distribution. KDEs are smoothed estimates of density, i.e., atoms from high solute density regions, notably from clusters, have high KDE values. The locations of the cluster centers were then initialized using this KDE. In particular, the solute atom with the highest KDE value was identified; this was the first initialization point. Using this point to initialize the cluster center, GEMA was run with K set equal to 1. GEMA's output was then used to produce a parametric
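A rough sketch of this seeding step is given below, using SciPy's gaussian_kde (with its default Scott's-rule bandwidth) as a stand-in for GEMA's KDE; the solute array and variable names are assumptions, and GEMA's own KDE construction may differ.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
solute = rng.normal(size=(500, 3))       # hypothetical (N, 3) solute atom positions

# Fit a KDE to the empirical solute distribution; gaussian_kde expects (D, N).
kde = gaussian_kde(solute.T)
density = kde(solute.T)                  # KDE value at every solute atom

# The solute atom with the highest KDE value seeds the first cluster center.
first_center = solute[np.argmax(density)]
```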


^b EM is guaranteed to converge to a local maximum of the likelihood.

