COMPARABILITY ASSESSMENT


Although we expect them to be a strong source of evidence for performance, such studies are not infallible.


Clinical studies may be subject to selection bias, and often they are not representative of real-world clinical practice. ‘Normal’ volunteers or donors are often used as the control group, and confirmed ‘diseased’ patients as the disease group. We expect these samples to come from the extremes of the clinical spectrum we are likely to encounter, which focuses the assay range under test on those extremes and neglects the region between them. Numeric assay results will be expected to show a statistical difference in the means of the two groups; qualitative assays will show different proportions of positive patients when the control and diseased groups are compared. The selection of ‘very positive’ and ‘very negative’ cases may influence this. In the real world we see patients with the full spectrum of possible results, and often they are not at the extremes.
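To see why this matters, consider the short simulation below: a minimal Python sketch in which the distribution parameters and percentile cut-offs are invented purely for illustration. Sampling only the extremes of two overlapping biomarker distributions exaggerates the apparent separation between the groups.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical biomarker: overlapping distributions for healthy and diseased
# subjects (means and SDs are invented for illustration only).
healthy = rng.normal(loc=50.0, scale=10.0, size=10_000)
diseased = rng.normal(loc=65.0, scale=12.0, size=10_000)

# Full-spectrum sampling: take patients as they come.
full_gap = diseased.mean() - healthy.mean()

# Spectrum-biased sampling: 'normal' volunteers from the low end,
# confirmed 'diseased' patients from the high end.
extreme_controls = healthy[healthy < np.percentile(healthy, 25)]
extreme_cases = diseased[diseased > np.percentile(diseased, 75)]
biased_gap = extreme_cases.mean() - extreme_controls.mean()

print(f"Separation, full spectrum: {full_gap:.1f}")
print(f"Separation, extremes only: {biased_gap:.1f}")
# The extremes-only design inflates the apparent separation and tells us
# nothing about performance in the overlapping middle of the range.
```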


Patients also present with co-morbidities. How do these co-morbidities interact with the performance of our assays and the clinical results they produce? Some co-morbidities interfere with our assay method, affecting sensitivity and specificity. These issues may not have been accounted for during validation, so we must be prepared to account for them as part of our method comparison study. Our reference method may or may not be affected, and the same can be said for our assay under test. Differences in results may be genuine, and they may even be desirable if unwanted effects were previously known to be a problem in our index method. It is for this reason that we must prepare, in advance, our definition of which differences are expected and acceptable, and which need further investigation.


Prevalence
We have seen during the current pandemic that assay performance must be considered in the context of external factors, including the disease prevalence at the time of testing, when using diseased samples in comparison studies. Our statistical inference is affected by the probability of encountering positive samples in the sampling population. We will discuss this in coming articles, but moving from high to low prevalence significantly changes the probability that a ‘genuine’ positive result can be interpreted as such. A reduction in prevalence may lead us to interpret a positive result as more likely to be a false positive (a consequence of imperfect specificity) than a true positive (despite high sensitivity). The interpretation of results may be a balancing act, and it is important to be aware of the impact that changing disease prevalence has on the availability of verification samples and on the interpretation of the results.
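The effect of prevalence can be made explicit with Bayes’ theorem. The sketch below computes the positive predictive value (the probability that a positive result is a true positive) at high and low prevalence; the sensitivity and specificity figures are invented for illustration.

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Probability that a positive result is a true positive (Bayes' theorem)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative assay: 95% sensitivity, 98% specificity (invented figures).
for prevalence in (0.30, 0.01):
    ppv = positive_predictive_value(0.95, 0.98, prevalence)
    print(f"prevalence={prevalence:4.0%} -> PPV={ppv:.1%}")
# At 30% prevalence a positive result is almost certainly genuine (~95%);
# at 1% prevalence only around a third of positives are true positives,
# even though sensitivity and specificity are unchanged.
```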


What is the impact of disagreement?


It is clear why we need to set limits for our expected agreement. These may be set by evidence we hold or can reference; as long as we can justify the criteria as appropriate, we can assess our results against them. This leads to another question: what happens if the results do not agree by these criteria? Is it the end of the road? That depends. If there is a difference, method-specific reference ranges may be required. This becomes even more essential, not to mention problematic, if there is a direct clinical cut-off. It is undesirable to deviate from established (in some cases international) clinical decision limits as a consequence of a poor method comparison study. Service resilience and continuity of care are also important. If considering different methodologies as a backup to a primary assay, method comparability is essential. The worst-case scenario would be for reference ranges and/or clinical action points to change back and forth as a consequence of one assay being unavailable and an alternative being used.
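For qualitative assays, one common way to quantify agreement ahead of such decisions is overall percent agreement alongside Cohen’s kappa, which corrects for agreement expected by chance. A minimal sketch, using invented 2x2 counts:

```python
# Minimal sketch: overall percent agreement and Cohen's kappa for a
# 2x2 comparison of qualitative results (counts are invented).
#                comparator +   comparator -
a, b = 46, 4   # test +
c, d = 6, 44   # test -

n = a + b + c + d
observed = (a + d) / n
# Agreement expected by chance, from the marginal totals.
expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
kappa = (observed - expected) / (1 - expected)

print(f"Overall agreement: {observed:.1%}")
print(f"Cohen's kappa:     {kappa:.2f}")
```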


Conclusions


Often, the decision to introduce a new test into the laboratory is first approached by the scientific and clinical staff, who have a natural interest in, and desire to expand, the repertoire. The procurement process is agnostic to this and takes the scientific validity of the implementation as a given. The other aspects of implementation, including but not limited to procurement, supply chain, costing and billing, will determine whether the assay is implemented successfully. It is worth having this at the forefront of your thinking from the outset, so that when you have the scientific evidence from the method comparison study you are ready for the rest of the process that will inevitably follow. All we can do is provide the scientific data to establish the incoming method as appropriate. The steps here are simple. Find a source of performance specifications against which you can test the assay. This may be manufacturers’ claims, peer-reviewed publications, guidelines, biological variation databases or clinical outcome studies. Whichever you choose, you must be prepared to defend it. Once the specifications are in hand, go back to the experimental data and see how the assay performed. Clearly, the choice of specification needs to be in place before any experiments are run, as it will inform the types of samples required. Together with the experimental design discussed previously, we can simply test the samples and assess.
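As an illustration, where biological variation is the chosen source, the widely used ‘desirable’ specifications (after Fraser; see Further reading) can be derived from the within-subject (CVI) and between-subject (CVG) variation. The sketch below uses placeholder CVs; look up the values for your analyte.

```python
import math

def desirable_specs(cv_i: float, cv_g: float) -> dict:
    """Desirable analytical performance specifications from biological
    variation (after Fraser): cv_i is within-subject CVI, cv_g is
    between-subject CVG, both in %."""
    imprecision = 0.5 * cv_i
    bias = 0.25 * math.sqrt(cv_i**2 + cv_g**2)
    total_error = 1.65 * imprecision + bias
    return {"CV_A": imprecision, "bias": bias, "TEa": total_error}

# Placeholder biological variation data -- substitute your analyte's values.
specs = desirable_specs(cv_i=5.0, cv_g=10.0)
print({k: round(v, 2) for k, v in specs.items()})
# Compare the bias and imprecision from your method comparison study
# against these limits.
```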


The methods used to quantify the agreement come next. In the following few articles we will use a case-study approach to bring together our previous articles and begin to implement the analysis and interpretation methodology we have at our disposal.


Further reading
• Fraser CG. The 1999 Stockholm Consensus Conference on quality specifications in laboratory medicine. Clin Chem Lab Med 2015; 53 (6): 837–40. doi: 10.1515/cclm-2014-0914.
• Horvath AR, Bossuyt PM, Sandberg S et al.; Test Evaluation Working Group of the European Federation of Clinical Chemistry and Laboratory Medicine. Setting analytical performance specifications based on outcome studies – is it possible? Clin Chem Lab Med 2015; 53 (6): 841–8. doi: 10.1515/cclm-2015-0214.
• Panteghini M, Ceriotti F, Jones G, Oosterhuis W, Plebani M, Sandberg S; Task Force on Performance Specifications in Laboratory Medicine. Strategies to define performance specifications in laboratory medicine: 3 years on from the Milan Strategic Conference. Clin Chem Lab Med 2017; 55 (12): 1849–56. doi: 10.1515/cclm-2017-0772.
• Pathology uncertainty (https://pathologyuncertainty.com).
• Sandberg S, Fraser CG, Horvath AR et al. Defining analytical performance specifications: Consensus Statement from the 1st Strategic Conference of the European Federation of Clinical Chemistry and Laboratory Medicine. Clin Chem Lab Med 2015; 53 (6): 833–5. doi: 10.1515/cclm-2015-0067.


Dr Stephen MacDonald is Principal Clinical Scientist, The Specialist Haemostasis Unit, Cambridge University Hospitals NHS Foundation Trust, Cambridge Biomedical Campus, Hills Road, Cambridge CB2 0QQ (tel 01223 216746).

