Enhanced Manufacturing Services 4.0 Part 2: NPI Driven by Test Coverage
By Christophe Lotz, President and CEO, ASTER Technologies, LLC
The purpose of any test solution is to maximize test coverage, ensuring that the majority of defects are detected, while minimizing the test cost. If a product is not tested well enough, poor-quality products can be devastating to a manufacturer’s reputation. If a product is tested too much, it can negatively impact a variety of business goals, including production cost, time-to-market, and ship-to-target.

Obviously, after testing, we know the failed products that cannot be shipped. Larger and more complex designs will be repaired, because of the many high-value components present on the PCB; these expensive components would otherwise be scrapped. Only when the percentage of failures is very low, or the repair costs are much higher than the value of the product, can a “failure equals scrap” policy be considered.
Production Model

TestWay, a key tool within the ASTER digital suite, creates a production test model report that clearly summarizes the test process and key metrics.

Figure: Production test model.

The test coverage, “Test,” is the percentage of defects that can be captured by a combination of inspection and test machines. “FPY” (first-pass yield) is the percentage of boards that pass the test. FPY can no longer be considered a good measure of production quality. This is easily demonstrated by a test coverage of 0 percent, which will result in an FPY of 100 percent. “FOR” (fall-off rate) is the number of boards that fail the test.

This leads to a simple, yet thought-provoking question: Is a board good because it passes the test? Practical experience raises the next questions: “Are all failing products actually faulty?” and “Are all products that are shipped good products?” The answer to both is a clear “No.”
“Slip” is the escape rate. This is a key metric, representing the faulty products that will be shipped to the end customer. Ultimately, the slip is how end users will measure the final quality. If a PCB assembly fails at system test, it is because it fell into the quantity of escapes (or slips), which is usually much higher than expected.

There are two possible reasons why this occurs: either the defects per million opportunities (DPMO) figures are higher than expected, or the combined coverage is less than optimal. Incorrect DPMO figures are probably due to limited defect traceability or incorrect root cause analysis. Unexpectedly low coverage could be due to the use of inadequate coverage metrics, such as confusion between test accessibility and testability.
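To make these definitions concrete, the short sketch below models a board from three inputs: defect opportunities per board, the process DPMO, and the combined test coverage. It is a minimal illustration with invented figures and a simple Poisson assumption, not TestWay’s actual production model; every name and number here is hypothetical.

import math

# Minimal sketch of the production test model metrics.
# All figures and formulas are illustrative assumptions.

def production_model(opportunities, dpmo, coverage):
    """Estimate per-board FPY, FOR, and slip.

    opportunities -- defect opportunities per board (joints, placements, ...)
    dpmo          -- defects per million opportunities for the process
    coverage      -- fraction of defects the combined test strategy detects
    """
    defects = opportunities * dpmo / 1_000_000   # expected defects per board
    detected = defects * coverage                # caught somewhere in test
    escaped = defects * (1 - coverage)           # defects that slip through

    # Poisson approximation: a board fails test if at least one of its
    # defects is detected, and ships faulty if at least one escapes.
    fall_off = 1 - math.exp(-detected)           # FOR: boards failing test
    fpy = 1 - fall_off                           # first-pass yield
    slip = 1 - math.exp(-escaped)                # faulty boards shipped
    return fpy, fall_off, slip

# 5,000 opportunities per board, 150 DPMO, 85 percent combined coverage
fpy, fall_off, slip = production_model(5_000, 150, 0.85)
print(f"FPY {fpy:.1%}  FOR {fall_off:.1%}  slip {slip:.2%}")

# Zero coverage: every board "passes"
fpy0, _, slip0 = production_model(5_000, 150, 0.0)
print(f"FPY {fpy0:.1%} with 0% coverage, yet slip {slip0:.1%}")

The second call reproduces the paradox noted above: with zero coverage every board passes, so FPY is a perfect 100 percent while every defective board ships to the customer.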
Test Coverage

In assessing the results from a combination of test methods, TestWay simulates a variety of test strategies and predicts the test coverage. Initially, however, we must consider the metrics that can be used to calculate the test coverage.

Consider an extremely simple PCB assembly comprising only four components: three resistors and a BGA. The three resistors are measured with very high accuracy, but there is no test on the BGA. Three resistors out of four components: is the board’s test score really 75 percent? Clearly that is not right. Something is needed to weight the test coverage, something credible that can be updated easily to reflect growing electronic complexity.
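One credible weighting, sketched below with invented component data, is to weight each component by its expected defects, that is, its fault opportunities multiplied by the process DPMO, rather than simply counting tested components. The figures are hypothetical, chosen only to show how a naive score misleads.

# Hypothetical weighting sketch: coverage weighted by expected defects
# (opportunities x DPMO) rather than by a naive count of tested parts.

components = [
    # name, fault opportunities, process DPMO, fraction of defects tested
    ("R1",    2,  50, 1.0),
    ("R2",    2,  50, 1.0),
    ("R3",    2,  50, 1.0),
    ("BGA", 256, 400, 0.0),   # 256 hidden joints, harder process, untested
]

naive = sum(1 for *_, cov in components if cov > 0) / len(components)

expected = sum(opp * dpmo for _, opp, dpmo, _ in components)
covered = sum(opp * dpmo * cov for _, opp, dpmo, cov in components)

print(f"naive coverage:    {naive:.0%}")              # 75%, misleading
print(f"weighted coverage: {covered / expected:.1%}") # the BGA dominates

Weighted this way, the same board scores well under 1 percent, because the untested BGA carries almost all of the expected defects.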
Consider the many possible defects, including missing or damaged components, wrong value, misalignment, incorrect polarity, open circuits, short circuits, and insufficient or excessive solder. We must have test strategies in place that are capable of detecting all of these defects. The ability to detect defects can be expressed by different facets of the test coverage, so that each defect category is aligned with the correct coverage metric.

Figure: Test coverage metrics.

Standard metrics have been defined for the industry by companies that include Philips Research (MPS), ASTER Technologies (PPVSF), Keysight (PCOLA/SOQ), and iNEMI (PCOLA/SOQ/FAM). These metrics enable the estimation of the theoretical coverage, or the measurement of the real coverage, for each unique test strategy. No single test strategy is capable of detecting all defects; it is a combination of complementary test strategies that provides the best overall coverage.
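The sketch below illustrates how complementary strategies combine, using invented per-category coverage figures. It assumes the strategies miss defects independently, so the combined coverage for a category is one minus the product of the individual miss rates; that independence is a simplification for this example, not a rule taken from the standards named above.

# Sketch: combining complementary test strategies per defect category.
# Coverage figures are invented; strategies are assumed to miss
# defects independently, so their miss rates multiply.

categories = ["missing", "wrong value", "polarity", "open", "short", "solder"]

coverage = {  # strategy -> detection probability per defect category
    "AOI": [0.95, 0.10, 0.90, 0.60, 0.70, 0.80],
    "ICT": [0.90, 0.95, 0.85, 0.95, 0.98, 0.00],
    "AXI": [0.90, 0.00, 0.50, 0.90, 0.95, 0.95],
}

for i, category in enumerate(categories):
    miss = 1.0
    for strategy in coverage:
        miss *= 1.0 - coverage[strategy][i]
    print(f"{category:12s} combined coverage: {1.0 - miss:.1%}")

No single row reaches full coverage on its own, but the combination closes most of the gaps in every category, which is exactly the article’s point.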
When calculating test coverage, it is important to consider the DPMO figures that reflect the current manufacturing process. This way, the test coverage can be aligned so that better coverage is provided wherever there exists a greater chance of defects.
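As a closing illustration, again with invented figures, aligning coverage with defect likelihood can start by ranking defect classes by their expected escapes, DPMO multiplied by the uncovered fraction, and improving coverage on the worst offenders first.

# Sketch: target coverage improvement where escapes are most likely.
# DPMO and coverage values are hypothetical.

defect_classes = {
    # defect class: (process DPMO, current combined coverage)
    "insufficient solder": (120, 0.80),
    "open circuit":        (60,  0.95),
    "wrong value":         (40,  0.90),
    "missing component":   (25,  0.99),
}

by_risk = sorted(defect_classes.items(),
                 key=lambda item: item[1][0] * (1 - item[1][1]),
                 reverse=True)

for name, (dpmo, cov) in by_risk:
    print(f"{name:20s} expected escapes: {dpmo * (1 - cov):5.2f} per million")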