RISK MANAGEMENT
Failure mode and effects analysis: a proactive risk assessment tool
■ Origins and evolution of FMEA in industry
Failure mode and effects analysis is a structured, proactive risk assessment methodology that systematically examines potential failure points within a process before they result in patient harm. Originally developed in the manufacturing sector, FMEA has since become a cornerstone of risk management in healthcare, aviation, automotive production, and laboratory medicine.
It was first developed by the US military in the 1940s to enhance the reliability of munitions and aerospace systems, providing a structured approach to anticipate and prevent failures. By the 1960s, NASA adopted FMEA to ensure space mission safety, and, by the 1970s, automotive manufacturers such as Ford and General Motors integrated it into vehicle production to detect defects early and reduce recall costs. In the 1990s, FMEA entered healthcare, initially in hospital safety and medical device manufacturing. It later became a key tool in clinical laboratories, helping to prevent errors in sample processing, reagent stability, and result reporting.
■ Understanding FMEA in risk assessment
The primary objective of FMEA is to identify failure modes – ways in which a process might fail – and systematically evaluate their potential impact on patient outcomes. Unlike retrospective risk assessment tools, which focus on understanding failures after they have occurred, FMEA aims to prevent errors from happening in the first place. Each failure mode is assessed based on three key metrics:
1. Severity (S) – how serious the consequences of the failure would be.
2. Occurrence (O) – the likelihood of the failure occurring.
3. Detectability (D) – the probability that the failure will be detected before causing harm.

Investigation area: Incorrect results generated
Potential causes: calibration issues; reagent degradation; operator errors (+ many other causes)
Corrective actions: increase calibration frequency; detect reagent lot issues through pre-acceptance testing; review instrument calibration; improve staff training and competency testing

Investigation area: Quality control failures
Potential causes: lot variability; instrument drift; improper QC storage (+ many other causes)
Corrective actions: implement new QC procedures; temperature monitoring of QC storage conditions

Investigation area: Reporting error
Potential causes: software bugs in the LIS; system integration failures; network connectivity issues
Corrective actions: perform system validation; implement real-time result review/verification; set up alerts from the LIS

Table 1. Common applications of FTA in medical laboratories.
Each of these factors is scored numerically, and the values are multiplied to calculate the Risk Priority Number (RPN). The RPN provides a quantitative measure of risk, allowing laboratories to prioritise high-risk failure modes for intervention.
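As an illustration of how the three scores combine, the short Python sketch below calculates RPNs for a handful of failure modes and ranks them for intervention. The failure modes and 1–10 scores shown are hypothetical examples invented for this sketch, not figures from any real laboratory FMEA; in practice this step is often performed in a spreadsheet or quality management system.

```python
# Minimal sketch of RPN-based prioritisation.
# The failure modes and scores below are illustrative only.

failure_modes = [
    # (description, severity S, occurrence O, detectability D)
    ("Incorrect potassium result on an emergency sample", 9, 3, 4),
    ("Sample mislabelled during manual transcription", 7, 7, 5),
    ("Reagent degradation due to improper storage", 5, 3, 2),
]

# RPN = S x O x D; higher values indicate higher-priority risks
ranked = sorted(
    ((s * o * d, name) for name, s, o, d in failure_modes),
    reverse=True,
)

for rpn, name in ranked:
    print(f"RPN {rpn:>3}  {name}")
```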
■ Severity (S): understanding the impact of failures
Severity measures the extent of harm that would result if a failure mode occurs. In a clinical laboratory, severity is linked to patient outcomes, as laboratory errors can lead to delayed diagnoses, incorrect treatments, or even life-threatening consequences. Severity is typically rated on a scale from 1 to 10, where:
■ 1–3: Minor inconvenience, no impact on patient care.
■ 4–6: Moderate risk, potential for incorrect results requiring re-testing.
■ 7–8: Serious risk, incorrect results leading to delayed or improper treatment.
■ 9–10: Catastrophic failure, potentially life-threatening consequences.
Other scales can be used, such as 1–5, for better clarity and ease of differentiation between severities. For example, an incorrect glucose
result in a routine diabetes screening might score 4 or 5 on the severity scale because it could result in misclassification of diabetes risk. However, an incorrect potassium result in an emergency department setting could score 9 or 10, as hyperkalaemia misdiagnosis can lead to fatal cardiac arrhythmias. This risk assessment is clinically based but may be subject to variability between assessors, so it is a good idea to sense-check assessments with colleagues.
■ Occurrence (O): evaluating how likely a failure is to happen
Occurrence reflects how frequently a particular failure is expected to occur within a given timeframe or number of tests performed. Laboratories rely on historical data, trend analysis, and quality control records to estimate occurrence rates. Laboratory QC management tools are extremely valuable for summarising how often QC failures occur, particularly when dealing with large volumes of QC data. Again, occurrence is often rated on a scale from 1 to 10 (though, as with severity, other scales such as 1–5 may be used), where:
■ 1–3: Rarely or almost never occurs.
■ 4–6: Occurs occasionally or under specific conditions.
■ 7–8: Happens frequently and requires close monitoring.
■ 9–10: Highly likely to happen, potentially a systemic issue.
For instance, mislabelling a blood sample due to a manual transcription error might score 7 or 8, particularly if human factors such as time pressure or understaffing contribute to higher occurrence rates. However, an issue like reagent degradation due to improper storage conditions might score 3 or 4, as it is a more controlled variable.
■ Detectability (D): measuring the ability to catch failures before harm occurs
Detectability assesses how easily a failure mode can be identified before it leads to harm. The ability to detect a problem early allows laboratories to implement corrective measures before results reach clinicians or patients. This will be familiar to us from setting up QC and will be explored further in future articles. Detectability, if rated on a scale from 1 to 10, may look like:
■ 1–3: The failure is easily detected through routine checks or automated systems.
■ 4–6: The failure is sometimes detected, but not always.
■ 7–8: The failure is difficult to detect without extensive manual review.
■ 9–10: The failure is almost impossible to detect before harm occurs.
For example, instrument calibration errors might score 2 or 3 if the laboratory performs daily automated QC checks that quickly flag deviations. However, silent software errors in the laboratory information system (LIS) that result in misfiled patient reports could score 8