QUALITY MANAGEMENT
could occur between QC events. In simpler terms, it estimates the worst-case number of wrong patient results that might be released if a test system goes out of control just after a QC event and the error is not detected until the next one.
This metric enables laboratories to set QC intervals not on arbitrary time or batch sizes, but on quantified patient risk. The formula is derived from principles of probability and Bayesian risk modelling, incorporating:
• P(error undetected): the probability that a shift or drift occurs and is not caught by the QC rules used.
• N(patients between QC events): the number of patient samples expected to be tested before the next QC event.
• Risk weight of error (clinical relevance): often factored in implicitly when deciding what MaxE(nuf) threshold is tolerable.
• The likelihood that an incorrect result will be acted upon clinically before the result can be corrected or a repeat, reliable result produced.
The key relationship is that the expected number of unreliable results released between QC events grows with both the probability that an error arises and goes undetected and the number of patient samples run in that interval. Laboratories can set a MaxE(nuf) threshold (eg 1.0), meaning they are willing to tolerate a maximum of one potentially erroneous result between QC events. Using this model, the number of patient results between QC runs can be adjusted dynamically based on analytical performance and risk tolerance.
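This relationship can be sketched in a few lines of Python. The linear risk model and all probabilities below are invented for illustration; Parvin's actual MaxE(nuf) derivation is considerably more detailed:

```python
import math

def expected_undetected_errors(n_between_qc, p_error_occurs, p_missed_by_qc):
    # Simplified expected count of erroneous patient results released
    # between QC events: every sample run while an undetected error
    # condition persists is assumed to be affected.
    return n_between_qc * p_error_occurs * p_missed_by_qc

def max_run_length(max_enuf, p_error_occurs, p_missed_by_qc):
    # Largest number of patient samples between QC events that keeps
    # the expected undetected-error count at or below the threshold.
    per_sample_risk = p_error_occurs * p_missed_by_qc
    if per_sample_risk <= 0:
        return math.inf
    return math.floor(max_enuf / per_sample_risk)

# Tolerate at most one potentially erroneous result between QC events,
# assuming a 1% chance of an error condition and a 20% chance that the
# QC rules miss it (both figures are hypothetical).
n = max_run_length(max_enuf=1.0, p_error_occurs=0.01, p_missed_by_qc=0.2)
```

Under these assumed probabilities, a better-performing assay (lower per-sample risk) yields a longer permitted run between QC events, which is exactly the dynamic adjustment described above.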
How it improves upon traditional QC scheduling
The MaxE(nuf) model allows high-performing assays with low risk of error to have less frequent QC without compromising safety, while requiring increased QC frequency for tests with poorer performance or higher clinical risk (Table 1).
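One common way to quantify the analytical performance feeding such decisions is the Sigma metric, Sigma = (TEa − |bias|) / CV. A minimal sketch follows; the tier cut-offs are illustrative assumptions, not prescriptions from this article:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    # Sigma metric: total allowable error minus bias,
    # expressed in units of imprecision (all inputs in %).
    return (tea_pct - abs(bias_pct)) / cv_pct

def qc_frequency_tier(sigma):
    # Illustrative tiers only; the actual QC frequency should come
    # from a risk model such as MaxE(nuf), not fixed cut-offs.
    if sigma >= 6:
        return "reduced QC frequency may be justified"
    if sigma >= 4:
        return "standard QC frequency"
    return "increased QC frequency and/or additional rules"

# Example: TEa 10%, bias 1%, CV 1.5% -> Sigma = 6.0
sigma = sigma_metric(tea_pct=10.0, bias_pct=1.0, cv_pct=1.5)
```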
Alignment with CLSI EP23-A, ISO 15189:2022, and ISO 22367:2020
The MaxE(nuf) approach exemplifies the risk-based QC planning required by the standards shown in Table 2. By quantifying risk in terms of undetected errors per patient sample, MaxE(nuf) provides a transparent, reproducible method for setting QC intervals based on residual risk, reducing unnecessary QC where safe, or increasing QC where risk cannot be otherwise mitigated.
Shared vs. analyser-specific QC statistics: a risk-based evaluation
In laboratories operating multiple identical analysers, a common quality assurance challenge arises: should QC statistics – namely, means and standard deviations – be shared across all platforms, or calculated individually for each? While a unified approach offers operational simplicity and statistical power, concerns persist that it may obscure performance issues on individual analysers. This debate is not merely theoretical – it sits at the heart of contemporary QC strategy, directly linking to frameworks like CLSI C24, EP23, and ISO 22367, all of which emphasise minimising patient risk through context-specific quality planning.

Table 4. Counterpoint to shared Mean and SD.
Condition: loss of sensitivity to localised errors.
Details: using shared statistics carries a trade-off. If one analyser experiences a small but significant shift, this may be masked by the pooled mean and SD. A large dataset also produces narrower control limits, making common rules like 1-2s more prone to false rejections or missed shifts, depending on their configuration – although the opposite may be true as well.
The rationale for shared QC statistics
Proponents of using a common mean and standard deviation across identical analysers argue that:
• The devices are calibrated and maintained identically, and thus their performance should be statistically comparable.
• Pooling data leads to more precise estimates of the mean and SD, enhancing the stability and robustness of control limits.
• From a risk-based viewpoint, the focus should be on minimising the overall probability of undetected error, not necessarily on detecting every shift on every analyser. This aligns with MaxE(nuf) principles, where the goal is to reduce the expected number of erroneous patient results released between QC events.
In this model, QC rules are applied using a pooled dataset, and the control process focuses on detecting systemic issues rather than minor, possibly inconsequential analyser-specific drifts (Table 3).
The counterpoint: loss of sensitivity to localised errors
However, using shared statistics carries a trade-off as shown in Table 4. Using analyser-specific QC statistics ensures that each analyser is judged relative to its own baseline. This makes QC more sensitive to localised shifts or trends, potentially enabling earlier detection of calibration drifts, reagent lot issues, or mechanical anomalies.
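The masking effect can be sketched numerically. In this hypothetical scenario (all values invented), three "identical" analysers have slightly different baselines; pooling their QC data inflates the SD with between-analyser variation, so a shift that is flagrant against one analyser's own baseline looks unremarkable against the pooled statistics:

```python
import statistics

# Simulated QC results (same control material) on three analysers
# whose baselines differ slightly; all values are illustrative.
qc = {
    "A": [4.80, 4.85, 4.75, 4.80, 4.82, 4.78],
    "B": [5.00, 5.05, 4.95, 5.00, 5.02, 4.98],
    "C": [5.20, 5.25, 5.15, 5.20, 5.22, 5.18],
}

def z_score(value, data):
    # Standardise a new QC result against a chosen baseline.
    return (value - statistics.mean(data)) / statistics.stdev(data)

pooled = [x for values in qc.values() for x in values]

# A new QC result on analyser B, well above B's own baseline:
new_on_b = 5.15

z_own = z_score(new_on_b, qc["B"])    # > 3: flagged by a 1-3s rule
z_pooled = z_score(new_on_b, pooled)  # < 1: the shift is masked
```

Against analyser B's own statistics the result exceeds 3 SD and would be rejected; against the pooled statistics it sits within 1 SD, illustrating exactly the trade-off described in Table 4.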
Balancing risk and practicality
Ultimately, the choice between shared and analyser-specific QC statistics is a risk-management decision. Laboratories must weigh:
• The clinical risk of undetected shifts on one analyser
• The operational benefit of harmonised QC metrics
• The QC rules in use and their robustness to shifts of varying magnitude
• The volume of data available for establishing reliable statistics
ISO 22367 encourages laboratories to select controls that balance probability of harm and detectability, while CLSI C24 suggests that the use of common QC statistics may be appropriate only when performance equivalence is confirmed and continuously verified.
Conclusions
The shift to risk-based QC represents a meaningful advance in laboratory quality management, offering a more nuanced and effective means of ensuring patient safety. By moving away from fixed QC schedules and embracing a framework rooted in clinical risk and analytical performance, laboratories can tailor their quality systems to reflect real-world challenges. Tools such as analytical performance specifications, Sigma metrics, and the MaxE(nuf) model provide the quantitative foundation for this approach, while structured risk assessment techniques ensure that QC failures are evaluated in terms of their actual clinical consequences. Decisions around QC frequency, statistical modelling, and whether to use shared or analyser- specific QC parameters all become part of a broader strategy to minimise residual risk. Ultimately, this approach empowers laboratories to deliver more reliable results, optimise resource use, and respond more dynamically to the demands of modern clinical diagnostics.
Dr Stephen MacDonald is Principal Clinical Scientist, The Specialist Haemostasis Unit, Cambridge University Hospitals NHS Foundation Trust, Cambridge Biomedical Campus, Hills Road, Cambridge CB2 0QQ.
+44 (0)1223 216746. JUNE 2025
WWW.PATHOLOGYINPRACTICE.COM