MEASUREMENT UNCERTAINTY






In doing so, this approach provides estimated confidence intervals for proportions and the uncertainty associated with the observed values. Setting a minimum acceptable value (a lower bound) for clinical sensitivity or specificity allows the uncertainty of proportions to be used as an acceptance criterion: if the claimed clinical sensitivity is 92%, the lower bound of the confidence interval for the observed sensitivity must exceed that value for the claim to be met. In this way the uncertainty of proportions provides assurance that test results achieve performance targets.
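As a sketch of this idea, the example below estimates a 95% confidence interval for an observed sensitivity using the Wilson score method (one common choice; the article does not specify a particular interval) and checks its lower bound against a claimed sensitivity. The sample counts and the 92% claim are illustrative, not taken from a real assay.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    halfwidth = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - halfwidth, centre + halfwidth

# Hypothetical verification study: 96 true positives from 100 known-positive samples
lower, upper = wilson_interval(96, 100)
print(f"observed sensitivity 96.0%, 95% CI {lower:.1%} to {upper:.1%}")
print("meets 92% claim:", lower >= 0.92)
```

Note that even an observed sensitivity of 96% from 100 samples has a lower confidence bound below 92%, which is exactly the kind of conclusion the uncertainty of proportions is meant to expose.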


Normal distribution method
The Normal distribution method is useful for assays that use quantitative signals to classify patients against predefined thresholds. It assumes a normal distribution of the measurement response and estimates uncertainty based on the true-positive, false-negative, true-negative and false-positive rates (TPR, FNR, TNR and FPR). It recognises that instrumental responses can vary due to assay variability, calibration errors and other sources of uncertainty. Assuming a positive vs. negative classification is determined by a numerical signal exceeding a given cut-off, that initial numerical result is affected by measurement uncertainty.1 Therefore, although only the pos/neg result is reported, numerical uncertainty has a direct impact on the classification of any given patient.

The Normal distribution method is most applicable to automated assays that produce quantitative signals. It may not be suitable for all qualitative results, particularly those without continuous instrumental responses. The uncertainty has its biggest impact at the cut-off. This is quite intuitive: measurement variability in a sample whose 'true value' lies near the cut-off carries the highest risk of misclassification. As we move away from the cut-off point, the impact of MU on the risk of misclassification decreases.
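To make the cut-off effect concrete, here is a minimal sketch (with invented signal values, cut-off and standard deviation, not figures from the article) of how a normally distributed signal translates into a misclassification probability that peaks at the cut-off and falls away as the true value moves from it.

```python
from statistics import NormalDist

def p_misclassified(true_value, cutoff, sd):
    """Probability the measured signal falls on the wrong side of the cut-off,
    assuming the response is normally distributed around the true value."""
    signal = NormalDist(mu=true_value, sigma=sd)
    if true_value >= cutoff:
        # Truly positive sample: misclassified if the signal falls below the cut-off
        return signal.cdf(cutoff)
    # Truly negative sample: misclassified if the signal reaches the cut-off
    return 1 - signal.cdf(cutoff)

cutoff, sd = 1.0, 0.1  # hypothetical assay: cut-off 1.0, measurement SD 0.1
for true in (1.0, 1.05, 1.2, 1.5):
    print(f"true value {true:.2f}: P(misclassification) = {p_misclassified(true, cutoff, sd):.3f}")
```

A sample sitting exactly on the cut-off is misclassified half the time, while one 5 SDs away is essentially never misclassified, which mirrors the intuition described above.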


Information theoretic approach
Information theory is a branch of mathematics and computer science that deals with the communication of information. It was developed by Claude Shannon in the late 1940s. The information theoretic approach uses the concept of entropy. In qualitative analysis, entropy is used to measure uncertainty or information content in binary outcomes. A lower entropy value indicates a more certain or informative result. When the probability of a specific outcome (pos/neg) is close to 0 or 1, the entropy is low, as is the uncertainty. Conversely, when the probability is close to 0.5, the entropy is highest due to high uncertainty. The entropy (S) formula used in the information theoretic approach is:

S = −p log₂(p) − (1−p) log₂(1−p)

where p is the probability of a positive outcome (eg a positive test result) and 1−p is the probability of a negative outcome (eg a negative test result).
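The entropy formula can be evaluated directly. This short sketch simply computes S at a few probabilities to show the peak of 1 bit at p = 0.5 and the fall towards zero as p approaches 0 or 1.

```python
import math

def entropy_bits(p):
    """Shannon entropy S = -p*log2(p) - (1-p)*log2(1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no uncertainty
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"P(positive) = {p:.2f}: entropy = {entropy_bits(p):.3f} bits")
```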


The logarithm is taken with base 2 to measure entropy in units of bits. As the instrumental signal increases or decreases away from the cut-off, entropy decreases, as does uncertainty. The shape of the entropy curve reflects the changing probability distribution of the binary outcome and captures the uncertainty associated with the test result.

It is important to note that the information theoretic approach requires knowledge of the probabilities associated with each outcome. These probabilities can be estimated from factors such as the performance characteristics of the assay, the prevalence of the condition being tested, and other relevant data. The approach draws on concepts from information theory, such as entropy, which may involve unfamiliar terminology and units (bits); this can make it difficult for laboratory professionals to understand and implement in practice. It is also most suitable for binary outcomes and may be less applicable to qualitative assays with multiple categories. Finally, it relies on accurate estimation of the probability of each outcome, and obtaining reliable probability data can be challenging, especially for rare conditions or when limited data are available.
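As a sketch of how such probabilities might be estimated, the following combines hypothetical assay performance figures (sensitivity 95%, specificity 98% — invented for illustration) with prevalence via Bayes' theorem to give the post-test probability of disease after a positive result, then converts that probability to entropy.

```python
import math

def post_test_probability(sens, spec, prevalence):
    """P(condition present | positive result) via Bayes' theorem."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def entropy_bits(p):
    """Shannon entropy of a binary outcome, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Same assay in two settings: low-prevalence screening vs symptomatic testing
for prev in (0.01, 0.30):
    p = post_test_probability(sens=0.95, spec=0.98, prevalence=prev)
    print(f"prevalence {prev:.0%}: P(disease|pos) = {p:.3f}, entropy = {entropy_bits(p):.3f} bits")
```

The same positive result is far more uncertain (higher entropy) in the low-prevalence setting, illustrating why prevalence is one of the inputs these probability estimates depend on.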


References
1 Lim YK, Kweon OJ, Lee MK, Kim HR. Assessing the measurement uncertainty of qualitative analysis in the clinical laboratory. J Lab Med. 2020; 44(1): 3–10. https://doi.org/10.1515/labmed-2019-0155


2 Pereira P. Eurachem/CITAC Guide Assessment of Performance and Uncertainty in Qualitative Chemical Analysis — A Medical Laboratory Perspective. Standards 2022; 2(2): 194–201. https://doi.org/10.3390/standards2020014


3 Bettencourt da Silva R, Ellison SLR eds. Eurachem/CITAC Guide: Assessment of performance and uncertainty in qualitative chemical analysis. 1st edn. Eurachem, 2021. ISBN 978-0-948926-39-6. www.eurachem.org


4 Clinical and Laboratory Standards Institute. Evaluation of Qualitative, Binary Output Examination Performance. 3rd edn. CLSI Guideline EP12.


Further reading
• Hibbert B, Lin TT, Bettencourt R et al. Assessment of Performance and Uncertainty in Qualitative Chemical Analysis. Chemistry International 2023; 45(1): 46–50. https://doi.org/10.1515/ci-2023-0127


• PathologyMU. The measurement uncertainty of qualitative assays – an online course. https://pathologymu.teachable.com


AUGUST 2024 WWW.PATHOLOGYINPRACTICE.COM


Conclusions
This article has attempted to introduce some approaches for describing uncertainty in categorical results. The statistical models discussed provide laboratories with valuable tools to quantify, express and understand uncertainty in qualitative results. Bayes' theorem, the Normal distribution method and the information theoretic approach offer different routes to understanding uncertainty. Each method has its advantages and limitations, and the choice of method depends on the specific context and the available data. The widespread use of tests that fit into the semi-quantitative assay bracket means more research and discussion are needed on how to apply statistical approaches to their measurements, and on the impact this has on clinical decision-making. We must remember, however, that measurement uncertainty is only useful when applied to a process where measurements are actually made. Those measurements may be in the steps leading to a categorical result, not just in determining whether a result is above or below a cut-off. For this reason there are many other applications of MU within laboratory processes that we should consider, and these will be discussed in a future article.

