NetNotes


tissue (not sure if you are working with biological tissues). What you believe is a representative image will not represent the distribution of particles that you are analyzing within the whole ROI. Depending on the responses, I will provide links on other aspects that will affect the sampling, including section thickness, shrinkage, size/orientation of particles of interest, etc., as it is a huge topic, and we can discuss for years to come. Sathya Srinivasan scitecheditor@gmail.com


This may not be what is being asked, but I thought I’d throw it out there anyway. It’s my two cents on the number of FOV required to evaluate (pragmatically).
A. Let’s make some assumptions (these are important, and someone else can point out the bias and error issues if they are not true): 1. locations on a stub or sample are fairly random; if not, then at least systematically random, with locations selected from 25–35% of the sample (stub); 2. one can visually assess similarity of objects of significance [OOS] (those desired to be evaluated); if doing digital image processing, then the OOS must be known or described and sufficiently characterized by a set of parameters (size, shape, elemental composition, etc.); and 3. the morphological (shape, size, volume), chemical (EDS, XRF, etc.), or physical (hardness) parameters have been chosen such that the instrument/analyst combination can measure them with precision (say, a coefficient of variation [CV] of 0.2 or less) on known samples.
B. Then one needs to make a BIG assumption about the spatial distribution of the OOS: a) normally distributed; b) lognormally distributed; c) Poisson or negative binomial; or d) other. [I have found that geostatistical software run on previous samples can be beneficial in elucidating this.]
C. One then needs to decide whether one wants a reasonable return on investigation (ROI) time, compared to the reduction in the mean (X) value for the OOS or in the variability (standard deviation or geometric standard deviation).
D. For a normal distribution: a. the upper and lower confidence intervals on the mean value quickly approach little or no change at 4 to 7 FOV for CV 0.15 to CV 0.45 [do you want a reasonable ballpark estimate?]; b. for the upper confidence interval on the standard deviation (the one that is more important), with an estimated CV of 0.15 one needs 20 FOV before the ROI flattens out; with an estimated CV of 0.25, about 30 FOV; and with an estimated CV of 0.45, about 50 FOV. However, 20 does a pretty good job for each of these. Perhaps that’s why I used to see 21 as a minimum sampling value in stats books.
E. For a lognormal distribution: a. if one assumes that an equivalent CV of 0.75 is about a geometric standard deviation (GSD) of 2 and a CV of 1.55 is about a GSD of 3, then the upper and lower confidence intervals on the mean value quickly approach little or no change at 20–25 locations for CV 0.15 to CV 0.45; b. for the upper confidence interval on the GSD (the more important one), one needs 50 to 70 FOV before the ROI really starts to flatten out for a GSD of 2 to 3. However, a visual look shows that 30 FOV does a pretty good job. (I might expect a lognormal for diffusion- and convection-based spatial distributions, including crystallization aspects.)
F. For a Poisson distribution: a. in theory, the representativeness (average) is a function of the number of OOS and the number of FOV; it parallels the CV, which is 1/λ^0.5. Thus, it is a question of how much variability is acceptable: if a CV of 0.45 is OK, then 5 FOV; for 0.25 it’s 16 FOV; for 0.1 it’s 100 FOV (excluding analyst and preparation variability and bias). (This appears to be the case for asbestos fibers, mold spores in samples, modal analysis [point counting] of geologic and concrete thin sections, etc.)
G. Other: another way to evaluate “good enough” is to track the OOS, parameter by parameter, against the number of FOV and see what the “average” information is doing: how fast, or after how many FOV, does the analyst’s assessment come to the same conclusion? It is in essence a Bayesian approach.
As a final note: 1 is not a statistic. Tony Havics aahavics@pH2llc.com
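As a rough illustration of the calculations in items D and F above, here is a minimal Python sketch (an editorial addition, not part of the original post). It assumes approximately normal per-FOV values for the chi-square confidence limit on the standard deviation, an average of lam_per_fov objects of significance per field for the Poisson case, and that NumPy and SciPy are available; the function names are invented for the example.

```python
# Editorial sketch (not from the post): rough "return on investigation" curves
# for the normal and Poisson cases described above.
import numpy as np
from scipy.stats import chi2

def upper_cl_on_sd(cv, n_fov, conf=0.95):
    """Upper confidence limit on the SD, expressed as a CV, assuming the
    per-FOV values are approximately normal with the given estimated CV."""
    df = n_fov - 1
    # The upper limit of the CI on sigma uses the lower-tail chi-square quantile.
    return cv * np.sqrt(df / chi2.ppf((1 - conf) / 2, df))

def poisson_fov_needed(cv_target, lam_per_fov=1.0):
    """FOV needed for the relative error (CV) of the estimated mean count to
    reach cv_target, using CV = 1 / sqrt(n_fov * lambda)."""
    return int(np.ceil(1.0 / (lam_per_fov * cv_target ** 2)))

for cv in (0.15, 0.25, 0.45):
    ucls = ", ".join(f"{upper_cl_on_sd(cv, n):.2f}" for n in (5, 10, 20, 30, 50))
    print(f"estimated CV {cv}: upper CL on SD at 5/10/20/30/50 FOV -> {ucls}")

for cv in (0.45, 0.25, 0.10):
    print(f"Poisson, target CV {cv}: ~{poisson_fov_needed(cv)} FOV")
```

Running this reproduces the flattening described above: the upper confidence limit on the SD improves rapidly up to roughly 20–30 FOV and only slowly thereafter, and the Poisson case gives about 5, 16, and 100 FOV for target CVs of 0.45, 0.25, and 0.1.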


A small but very important remark: it really depends on the amplitude of the effect that one wants to show, no? All these steps are necessary when you want to see discrete differences. When one wants to show that a white sheet becomes black after treatment, there surely is no need for complicated assessments (p < 0.000001)! It is sometimes hard to understand why some people want statistics for everything. What is evident doesn’t need any calculation! In my example, I just needed one picture of a white field and one of a black field; it is not necessary to take 50 FOV of each sheet to show that it is white or black everywhere! Stephane Nizet nizets2@yahoo.com
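This point can be put in numbers with a standard two-sample sample-size formula: the number of FOV needed per condition scales roughly with 1/d², where d is the standardized effect size. The sketch below is an editorial illustration (the function name and numbers are invented), assuming SciPy is available.

```python
# Editorial illustration: FOV per condition from the usual normal-approximation
# sample-size formula, n ~ 2 * (z_{1-alpha/2} + z_{power})^2 / d^2.
import math
from scipy.stats import norm

def fov_per_condition(effect_size_d, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return max(1, math.ceil(2 * (z_a + z_b) ** 2 / effect_size_d ** 2))

# A white sheet turning black is a huge effect (d >> 1): one FOV per sheet is enough.
# A subtle difference (d ~ 0.2) needs hundreds of FOV.
for d in (10, 1, 0.5, 0.2):
    print(f"effect size d = {d}: ~{fov_per_condition(d)} FOV per condition")
```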


I don’t think a reply I posted last week was distributed to the list. In biological sciences, people show quantification of N cells in X fields, let’s say 100 cells. Then they show one representative image. The problem is that the “representative” image often isn’t a cell whose quantification result falls within 1 SD of the mean, and sometimes this is glaring. Perhaps the reviewer was getting at this issue, what is representative of the phenomenon globally, but didn’t articulate it well? Michael Cammer michael.cammer@nyulangone.org
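One way to act on this criterion is to let the quantification itself nominate the representative image. The snippet below is a hypothetical helper (an editorial example; the names and numbers are made up), assuming you already have one measured value per image:

```python
# Hypothetical helper (editorial example): pick the image whose quantified value
# is closest to the cohort mean, restricted to images within one SD of it.
import statistics

def pick_representative(measurements):
    """measurements: dict mapping image name -> quantified value for that cell/field."""
    values = list(measurements.values())
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    within_one_sd = {name: v for name, v in measurements.items() if abs(v - mean) <= sd}
    return min(within_one_sd, key=lambda name: abs(within_one_sd[name] - mean))

# Made-up intensities for four candidate images
data = {"cell_01.tif": 87.0, "cell_02.tif": 112.5, "cell_03.tif": 99.8, "cell_04.tif": 140.2}
print(pick_representative(data))  # prints the image nearest the mean
```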


“I just need to take one picture of a white field and one of a black field; it is not necessary to take 50 FOV of each sheet to show that it is white or black everywhere!” Sorry, I never said you have to take an image. I meant how many FOV you looked at. Of course, if you are performing digital analysis, then you will have to take many FOV micrographs if you want to substantiate your data. In your case, no images are needed; just a list of fields marked black or white would do. Also, your case calls for a non-parametric approach (the ones I discussed are all parametric). It’s presence-absence, in which case a Wilcoxon signed-rank test would be a good way to check. Tony Havics aahavics@pH2llc.com
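For completeness, a hedged sketch of the test named here: score each FOV 0/1 (white/black, absent/present) before and after treatment and run a Wilcoxon signed-rank test on the paired scores. The data below are invented for illustration; note that zero differences (fields that did not change) are dropped by SciPy’s default settings.

```python
# Editorial sketch: Wilcoxon signed-rank test on paired presence/absence scores.
from scipy.stats import wilcoxon

before = [0] * 20            # 20 FOV scored "white" (absent) before treatment
after = [1] * 18 + [0] * 2   # 18 of the same FOV scored "black" (present) afterwards

stat, p = wilcoxon(before, after)  # zero differences are discarded by default
print(f"Wilcoxon signed-rank: statistic = {stat}, p = {p:.4g}")
```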


Image Manipulation (Confocal Listserver)

I remember hearing or reading somewhere of a Fiji (maybe) plugin to help group leaders detect potential image manipulation. Has anyone heard of it? Who can point me in the right direction? Sylvie Le Guyader sylvie.le.guyader@ki.se


Perhaps you mean InspectJ? Mika Ruonala mika@icit.bio

You’ll hear from many others, and in particular I would listen carefully to anything Doug Cromey has to say. From my research (and I lectured on this matter for a number of years at UVA and around the US), programs developed to detect image manipulation have never been trustworthy and cannot replace human assessment. I realize you’ve asked only to be “pointed” to the possibility, but I’d strongly suggest avoiding that route if possible and leaning on tools found at the ORI (Office of Research Integrity) and on Doug Cromey’s digital image ethics website. I’m sure a good conversation will develop here and you’ll come to your own conclusions, but I do think you’ll save time by developing your own departmental guidelines for microbiology, and having a screening process for published papers in your department. You might also refer here: Digital Image Ethics | The Microscopy Alliance | The University of Arizona: scientific digital images are data that can be compromised by inappropriate manipulations (http://microscopy.arizona.edu/learn/digital-image-ethics) and https://ori.hhs.gov/education/products/RIandImages/default.html Kirsten Miles kmiles@tupelopress.org



