EDUCATION :: QUALITY CONTROL STRATEGIES


The focus of quality control strategies should be on patient outcomes, not technology


By John Yundt-Pacheco and Curtis Parvin, PhD

Laboratories use quality control (QC) procedures to assure the reliability of the test results they produce and report. According to ISO 15189, the International Organization for Standardization's (ISO) document on particular requirements for quality and competence in medical laboratories, "The laboratory shall design quality control procedures that verify the attainment of the intended quality of results."1 That is, laboratory QC procedures should assure that the test results reported by the lab are fit for use in providing good patient care. Unfortunately, there is often a temptation to focus on the instruments being used in the lab, and the "big picture" view of patient outcomes can be forgotten when QC programs and strategies are designed.


Laboratory QC design

The primary tool used by laboratories to perform routine QC is the periodic testing of QC specimens, which are manufactured to provide a stable analyte concentration with a long shelf life. A laboratory establishes the target concentrations and analytic imprecision for the analytes in the control specimen assayed on its instruments. Thereafter, the laboratory periodically tests the control specimens and applies QC rules to the control specimen results to decide whether the instrument is operating as intended or whether an out-of-control error condition has occurred.

Traditionally, laboratories determine how many control specimens to test and what QC rules to use based on a desire to have a low probability of erroneously deciding that the instrument is out of control when it is, in fact, operating as intended (false rejection rate), and a high probability of correctly deciding that the instrument is out of control when an out-of-control error has indeed occurred (error detection rate). Clearly, the in-control or out-of-control status of a laboratory's instruments influences the reliability (quality) of the patient results reported by the lab. Yet laboratories tend to design QC strategies with a focus limited to controlling the state of the instrument rather than controlling the risk of producing and reporting erroneous patient results that could compromise patient care.
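The false rejection and error detection rates described above can be sketched numerically. The following is a minimal, hypothetical illustration (not from the article) using the common 1-3s control rule, which rejects when a control result falls more than 3 SD from its target; control results are modeled as normally distributed with mean 0 and SD 1 when in control.

```python
# Hypothetical sketch: rejection probability under a simple 1-3s QC rule,
# modeling control results as Normal(0, 1) when the instrument is in control.
from statistics import NormalDist

def p_reject_1_3s(shift_sd: float, n_controls: int = 1) -> float:
    """Probability that at least one of n control results falls outside
    the +/-3 SD limits when the instrument's mean has shifted by
    `shift_sd` standard deviations (0.0 means in control)."""
    nd = NormalDist()
    # chance a single control result stays within the +/-3 SD limits
    p_in = nd.cdf(3 - shift_sd) - nd.cdf(-3 - shift_sd)
    return 1 - p_in ** n_controls

false_rejection = p_reject_1_3s(shift_sd=0.0)  # rejection while in control
error_detection = p_reject_1_3s(shift_sd=3.0)  # detection of a 3-SD shift
print(f"false rejection rate: {false_rejection:.4f}")
print(f"error detection rate: {error_detection:.4f}")
```

Testing more controls per QC event raises the error detection rate, but also the false rejection rate; balancing the two is the traditional design problem.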


32 DECEMBER 2022 MLO-ONLINE.COM


Figure 1. In-control (gray) and out-of-control (black) measurement error distributions, with the region exceeding the allowable error shaded red.


A focus on the patient

One approach that more directly focuses on the quality of patient results (rather than the state of the instrument) is to design QC strategies that control the expected number of erroneous patient results reported because of an undetected out-of-control error condition in the lab.3

What do we mean by "erroneous" patient results? The quality of a patient result depends on the difference between the patient specimen's true concentration and the value reported by the laboratory. We define an erroneous patient result as one where the difference between the patient specimen's true concentration and the value reported exceeds a specified total allowable error, TEa. If the error in a patient's result exceeds TEa, we assume it places the patient at increased risk of experiencing a medically inappropriate action. How does a laboratory decide what the TEa specification for an analyte should be? This is not a simple question to answer, but a number of different lists of TEa specifications that include hundreds of analytes have been produced and are available from a variety of sources.

Once a TEa for an analyte has been specified, then for any possible out-of-control state, the probability of producing patient results with errors that exceed TEa can be computed, as demonstrated in Figure 1. The gray curve represents the frequency distribution of measurement errors for an instrument operating as intended. The distribution is centered on zero, and the width of the distribution reflects the inherent analytical imprecision of the instrument. The black curve represents the frequency distribution of measurement errors after a hypothetical out-of-control error condition has occurred. The out-of-control error condition causes the instrument to produce results that are too high. The area under the out-of-control measurement error distribution that is either greater than TEa or less than -TEa (shaded in red) reflects the probability of producing an erroneous patient result. In this case, 10% of the area under the curve is shaded red. If an out-of-control error condition of this magnitude occurred, then while the instrument was operating in this state, we'd expect 10% of the patient results produced to be erroneous.

The expected number of erroneous patient results reported while an undetected out-of-control error condition exists will not only depend on the likelihood of producing erroneous results in the presence of the error condition, but
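The Figure 1 calculation can be sketched in a few lines. This is an illustrative model, not the authors' code: measurement error is assumed normally distributed, with the in-control distribution centered on zero and the out-of-control distribution shifted by a hypothetical bias; all numbers are expressed in units of the analytical SD.

```python
# Illustrative sketch of the Figure 1 computation: the probability that a
# patient result's measurement error exceeds +TEa or falls below -TEa,
# assuming measurement error ~ Normal(bias, sd). Values are hypothetical.
from statistics import NormalDist

def p_erroneous(bias: float, sd: float, tea: float) -> float:
    """P(|measurement error| > TEa) for error ~ Normal(bias, sd)."""
    nd = NormalDist(mu=bias, sigma=sd)
    # area below -TEa plus area above +TEa
    return nd.cdf(-tea) + (1 - nd.cdf(tea))

TEA = 3.0  # allowable error in SD units (illustrative choice)
in_control = p_erroneous(bias=0.0, sd=1.0, tea=TEA)
# hypothetical shift sized so that roughly 10% of results are erroneous,
# mirroring the article's Figure 1 example
shifted = p_erroneous(bias=1.72, sd=1.0, tea=TEA)
print(f"in-control:     {in_control:.4%}")
print(f"out-of-control: {shifted:.4%}")
```

With no bias, essentially all results fall inside the allowable limits; a sufficiently large positive shift pushes an appreciable tail of the error distribution past +TEa, which is the red-shaded area in Figure 1.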

