QUALITY MANAGEMENT


Approach | Traditional | Parvin MaxE(Nuf)
Basis for QC frequency | Time or batch-based (e.g. every 8 hours or 100 samples) | Based on the probability of undetected errors and sample throughput
Risk sensitivity | None | High
Flexibility | Fixed | Test-specific and adaptable

Table 1. Comparison of risk-based and traditional QC frequency approaches.


safety. This flexibility not only enhances error detection where it matters most, but also reduces unnecessary QC, improving efficiency without increasing risk.
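The idea behind throughput-sensitive QC frequency can be illustrated with a deliberately simplified model. This is a toy sketch of the reasoning underlying Parvin's MaxE(Nuf) approach, not the published method itself; all rates below are hypothetical.

```python
# Toy model: the expected number of unreliable final results released
# between QC events grows with sample throughput and with the probability
# that an error condition goes undetected. Hypothetical rates throughout.

def expected_unreliable_results(samples_between_qc: int,
                                p_error_state: float,
                                p_detect: float) -> float:
    """Expected unreliable results released per QC interval (toy model)."""
    # If the method drifts into an error state with probability p_error_state,
    # and QC detects that state with probability p_detect, the undetected
    # fraction affects samples run before the next QC event.
    return samples_between_qc * p_error_state * (1.0 - p_detect)

# Running QC every 100 samples vs every 25 samples (hypothetical rates):
for interval in (100, 25):
    e_nuf = expected_unreliable_results(interval, p_error_state=0.01, p_detect=0.90)
    print(f"QC every {interval:>3} samples -> expected unreliable results: {e_nuf:.3f}")
```

Even this crude model shows why a high-throughput analyser may justify more frequent QC than a time-based schedule would suggest, while a low-volume assay may safely run QC less often.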


Defining QC strategies based on risk assessment


The challenge becomes how to translate risk assessments into QC strategies. The purpose at this stage is not to continually revisit the identification of risk, but to determine how best to manage it. This involves making practical decisions about the design of QC processes – how frequently to run controls, how stringent the rules should be, and how these measures adapt over time in response to performance data.
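The stringency of decision rules mentioned above is often expressed through Westgard-style control rules. A minimal sketch of two common rules applied to control z-scores (control value minus target mean, divided by SD) is shown below; the control data are hypothetical.

```python
# Two common Westgard rules over control z-scores. A stricter QC strategy
# might apply both rules; a lighter one only 1-3s. Data are hypothetical.

def violates_1_3s(z_scores):
    """1-3s: any single control observation beyond +/-3 SD."""
    return any(abs(z) > 3 for z in z_scores)

def violates_2_2s(z_scores):
    """2-2s: two consecutive observations beyond +/-2 SD on the same side."""
    return any(min(a, b) > 2 or max(a, b) < -2
               for a, b in zip(z_scores, z_scores[1:]))

run = [0.4, -1.1, 2.3, 2.6, 0.8]   # two consecutive points above +2 SD
print(violates_1_3s(run))           # nothing beyond 3 SD
print(violates_2_2s(run))           # 2.3 and 2.6 both exceed +2 SD
```

Choosing which rules to enable, and on how many control levels, is one of the practical levers a laboratory adjusts when tuning QC stringency to assessed risk.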


Each identified risk must be matched to an appropriate mitigation strategy. Crucially, the choice of mitigation should reflect not just the existence of a risk, but its priority – determined by the severity of harm it could cause and the likelihood that it might occur or go undetected. A relatively minor failure mode with negligible patient impact may be monitored through existing checks, while a high-priority risk – such as an error that could affect critical results used in emergency care – may warrant enhanced control mechanisms or layered safeguards.


An essential concept introduced at this stage is residual risk: the level of risk that remains after QC measures have been implemented. Every QC strategy must be evaluated not only on its theoretical design, but on whether it brings the residual risk within an acceptable threshold. This is not simply about technical performance – it is about whether the likelihood of a clinically significant error is now low enough to meet the laboratory’s defined risk tolerance. If the residual risk is still too high, further action is required – this could mean increasing QC frequency, tightening acceptance criteria, or adding complementary controls such as automated system checks or post-analytical alerts.
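One common way to make residual risk concrete is an FMEA-style risk priority number, recomputed after a mitigation is applied. The sketch below assumes 1-10 scales and a laboratory-defined acceptance threshold; both are hypothetical placeholders, as each laboratory sets its own risk tolerance.

```python
# Minimal residual-risk sketch: score = severity x occurrence x detectability
# (an FMEA-style risk priority number), recomputed after a mitigation improves
# detection. Scales and the acceptance threshold are hypothetical.

ACCEPTABLE_RPN = 40  # hypothetical laboratory-defined threshold

def rpn(severity: int, occurrence: int, detectability: int) -> int:
    """Risk priority number on 1-10 scales (10 = worst)."""
    return severity * occurrence * detectability

before = rpn(severity=8, occurrence=3, detectability=5)  # before mitigation
after  = rpn(severity=8, occurrence=3, detectability=1)  # tighter QC rules
                                                         # improve detection
for label, score in (("before mitigation", before), ("after mitigation", after)):
    verdict = "acceptable" if score <= ACCEPTABLE_RPN else "further action required"
    print(f"{label}: RPN={score} -> {verdict}")
```

The point of the exercise is the comparison against the threshold: if the post-mitigation score is still above tolerance, the strategy is revised rather than accepted.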


Defining a QC strategy is therefore an iterative process. It begins with matching mitigations to assessed risks but continues with monitoring and adjustment. Laboratories must evaluate whether their control measures perform as intended in practice – not just whether they are documented in a plan. For example, if a test method consistently triggers QC rule violations despite previous assessments suggesting moderate risk, the laboratory must reassess both its original assumptions and its current control design.


This stage also opens the door to risk-based differentiation between assays. Rather than applying the same QC schedule to all methods or platforms, laboratories now have a justified basis for variation. Low-risk, high-Sigma assays may be subject to lighter control schedules, freeing resources for closer monitoring of more error-prone or clinically sensitive tests.
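Risk-based differentiation between assays can be expressed as a simple tiering function driven by each method's Sigma metric. The tiers below follow the spirit of Sigma-based QC selection, but the cut-offs and schedules are hypothetical placeholders, not a validated scheme.

```python
# Sketch of risk-based differentiation: assigning a QC intensity tier from a
# method's Sigma metric. Cut-offs and schedules are hypothetical placeholders.

def qc_tier(sigma: float) -> str:
    if sigma >= 6:
        return "light: single rule (1-3s), 2 controls per day"
    if sigma >= 4:
        return "moderate: multirule, 2-4 controls per day"
    return "intensive: multirule, increased frequency, consider extra safeguards"

# Hypothetical assays and Sigma values:
for assay, sigma in (("sodium", 6.8), ("HbA1c", 4.5), ("troponin", 3.2)):
    print(f"{assay}: sigma={sigma} -> {qc_tier(sigma)}")
```

The gain is the one described above: resources freed from high-Sigma, low-risk assays can be redirected to closer monitoring of error-prone or clinically sensitive tests.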


Linking QC performance to analytical performance specifications

Analytical Performance Specifications (APS) provide the limits within which a method must operate to be considered fit for its intended purpose. In a risk-based framework, they become not just theoretical targets, but essential reference points for designing and evaluating QC procedures.


When planning a QC strategy, APS help determine what constitutes acceptable analytical performance. For instance, if the allowable total error (TEa) for an assay is defined, the laboratory can assess how its current level of imprecision and bias – quantified through routine QC and method validation – compares against this limit. From there, decisions can be made about how stringent QC rules need to be, how often controls should be run, or whether additional layers of control are required to manage the risk of exceeding those limits. In this way, APS influence the design of QC in very practical terms. A method that performs well within the defined APS may tolerate a lighter QC schedule


Standard | Description
CLSI EP23-A | Advises developing QC plans based on individual test system risk, patient impact, and manufacturer guidance
ISO 15189:2022 | Mandates that QC frequency be justified using evidence of method performance and patient safety requirements
ISO 22367:2020 | Calls for a structured, data-driven approach to risk estimation, control, and residual risk review

Table 2. How risk-based frequency of quality control aligns with standards in IQC.

JUNE 2025 WWW.PATHOLOGYINPRACTICE.COM




or more forgiving decision rules, without compromising patient safety. Conversely, a test that operates near the limits of acceptability demands a more sensitive and proactive control strategy. Sigma metrics are often used in this context as a consolidated measure of performance, allowing laboratories to align the intensity of their QC with how close a method is to its performance threshold.

However, APS are not only useful at the planning stage. They also provide a critical reference point for evaluating whether a QC strategy is working as intended. If control data show results trending toward the edge of acceptability – whether for bias, imprecision, or overall error – it may signal that the current QC measures are insufficient to keep the method within safe operating limits. This is where APS become integral to the feedback loop between QC and method performance. They allow laboratories to track whether the residual risk, after mitigation, remains within tolerable bounds.

Importantly, APS also allow laboratories to differentiate between statistical significance and clinical relevance. Not every QC rule violation necessarily implies a clinically meaningful error. By mapping control performance back to the APS, laboratories can interpret QC events in the correct context – recognising when an out-of-control signal requires immediate action and when it reflects statistical noise that does not jeopardise patient safety.
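The Sigma metric referred to above consolidates imprecision and bias against the allowable total error using the standard formula sigma = (TEa − |bias|) / CV, with all terms in percent. The assay figures below are hypothetical examples.

```python
# Sigma metric: how many standard deviations of "room" a method has before
# its combined bias and imprecision exceed the allowable total error (TEa).

def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """sigma = (TEa - |bias|) / CV, all inputs in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# A well-performing method comfortably inside its APS (hypothetical figures)...
print(round(sigma_metric(tea_pct=10.0, bias_pct=1.0, cv_pct=1.5), 2))  # 6.0
# ...versus one operating near the limits of acceptability.
print(round(sigma_metric(tea_pct=10.0, bias_pct=3.0, cv_pct=2.5), 2))  # 2.8
```

The first method, at six Sigma, could justify a lighter QC schedule; the second, below three Sigma, is exactly the kind of test that demands the more sensitive, proactive control strategy described above.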


Evaluating QC failure risk and associated clinical harm In any QC system, failures will occasionally occur. What distinguishes a risk-based approach is not whether QC failures happen, but how they are evaluated, prioritised, and addressed. While earlier steps in the QC planning process focus on anticipating and mitigating risks, this stage requires laboratories to assess what happens when those measures fall short. In particular, it involves quantifying how often QC failures occur and understanding what those failures could mean for patient safety if not detected or acted upon promptly.
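Part of quantifying QC failure rates is separating true failures from false rejections, which are a property of the control limits themselves. For normally distributed control data, the false-rejection probability of a single "1-ks" rule is P(|z| > k) = erfc(k / √2), which can be checked directly:

```python
import math

# False-rejection probability of a single 1-ks control rule on a stable,
# normally distributed method: P(|z| > k) = erfc(k / sqrt(2)).

def false_rejection_prob(k: float) -> float:
    return math.erfc(k / math.sqrt(2.0))

for k in (2.0, 2.5, 3.0):
    print(f"1-{k:.1f}s limits: false rejection per control = "
          f"{false_rejection_prob(k) * 100:.2f}%")
```

At 2 SD limits roughly 4.6% of in-control observations are rejected, versus about 0.3% at 3 SD, which is why limit choice is itself a risk decision: tighter limits catch errors sooner but generate more false alarms that must be investigated.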


Although the potential clinical impact of undetected errors has already been

