A test program never refused tests
Whoever said that paper never refused ink could just as well have said the same about test programs and tests.
Early in my career as a test development engineer, I commented out every test except the key test for the new product I was working on, so that I could isolate it and fix its repeatability. The next pre-production run finished much sooner than I expected, and at first I was impressed that the only test data the operator logged was for the one test I really cared about. Then the penny dropped.
There are better ways of saving test time!
The data sheet will tell you the important tests to include in your test program; designers and marketers will ask about performance variation across temperature and operating conditions, and will want to see a characterization report that can run to thousands of tests.
Just before you go down the production road, you might trim this to 800 tests and a test time of, say, twelve seconds. You have characterized your product exhaustively, you know that each test is run only at its worst-case conditions, and you have built in guardbands to cover temperature variation. But you still get that feeling of overkill, and twelve seconds seems a bit exorbitant.
So what steps can you take next? Assuming you have already implemented multi-site (parallel) testing where possible, here are some practical ideas that many engineers work through to reduce test time, especially if they have a really good database of test data:
Run a ‘test that never fails’ analysis on all your data so far – evaluate each of these tests to see whether you can eliminate it for good (a minimal sketch of this and the next tip follows the list)
Run a correlation report that correlates every test against every other; strongly correlated pairs are candidates for keeping only one of the two
Place failing tests at the start of your test sequence, so that under stop-on-first-fail rejects exit the flow as early as possible (see the sequencing and Pareto sketch after the list)
Run a Gage R&R study and fix any tests that are sensitive to set-up conditions: these can cause false failures, and some may turn out to be removable (a rough repeatability screen is sketched after the list)
If you can measure time per test, build a Pareto of the longest tests and focus on those (covered in the sequencing and Pareto sketch after the list)
Minimize any wait statements in your test program, and where you do need them, get the tester to do calculations or other operations during the waits (see the settling-wait sketch after the list)
Benchmark your test time against similar products in your division; you may well find that other test programs come into focus for test-time reduction after this exercise
Try to store test times, and even index times, in the datalog and monitor them. You can log these as tests, and a good YMS such as yieldHUB can even alert you when test time or index time is unexpectedly high (a generic thresholding sketch follows the list)
Run Test Coverage Analysis on all your rejects so far. Contact us for more details about this capability.
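To make the first two tips concrete, here is a minimal sketch, assuming your datalog has been exported to a flat table. The file name and column names (unit_id, test_name, value, passed) are hypothetical and will differ on your setup:

```python
import numpy as np
import pandas as pd

# Hypothetical flat datalog: one row per test execution per unit.
df = pd.read_csv("datalog.csv")  # columns: unit_id, test_name, value, passed

# 'Test that never fails': a pass rate of 1.0 across everything logged so far.
pass_rate = df.groupby("test_name")["passed"].mean()
never_fail = pass_rate[pass_rate == 1.0].index.tolist()
print(f"{len(never_fail)} tests have never failed - candidates for removal")

# Correlation report: pivot to one column per test, then correlate pairwise.
wide = df.pivot_table(index="unit_id", columns="test_name", values="value")
corr = wide.corr().abs()
upper = corr.where(~np.tril(np.ones(corr.shape, dtype=bool)))  # unique pairs
pairs = upper.stack().sort_values(ascending=False)
print(pairs[pairs > 0.95])  # near-duplicate measurements: keep only one?
```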
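For the sequencing and Pareto tips, a sketch along the same lines, again with hypothetical file and column names: rank tests by historical fail rate so the likely failers run first under stop-on-first-fail, and rank them by total time consumed to see where the seconds actually go:

```python
import pandas as pd

# Hypothetical per-test summary built from your datalogs.
summary = pd.read_csv("test_summary.csv")  # test_name, fail_count, exec_count, mean_time_ms

# Failing tests first: under stop-on-first-fail, rejects leave the flow sooner.
summary["fail_rate"] = summary["fail_count"] / summary["exec_count"]
print(summary.sort_values("fail_rate", ascending=False).head(20))

# Pareto of test time: a handful of tests usually dominates the total.
summary["total_time_ms"] = summary["mean_time_ms"] * summary["exec_count"]
pareto = summary.sort_values("total_time_ms", ascending=False)
pareto["cum_share"] = pareto["total_time_ms"].cumsum() / pareto["total_time_ms"].sum()
print(pareto[["test_name", "total_time_ms", "cum_share"]].head(20))
```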
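On Gage R&R: a full AIAG-style study has its own procedure, but as a rough screen you can estimate repeatability and set-up-to-set-up reproducibility directly from a small repeated-measurement experiment. The columns and limits below are assumed for illustration:

```python
import pandas as pd

# Hypothetical study: each unit measured several times on each set-up.
meas = pd.read_csv("gage_study.csv")  # columns: unit_id, setup_id, value
low, high = 1.14, 1.26  # this test's limits (assumed)

# Repeatability: pooled within-(unit, set-up) standard deviation.
repeat_sd = meas.groupby(["unit_id", "setup_id"])["value"].std().mean()

# Reproducibility (rough): spread of per-set-up averages over the same units.
repro_sd = meas.groupby("setup_id")["value"].mean().std()

grr_sd = (repeat_sd**2 + repro_sd**2) ** 0.5
print(f"gauge spread ~ {100 * 6 * grr_sd / (high - low):.1f}% of the limit window")
```

If the gauge spread is a large fraction of the limit window, the test is prone to false failures and is worth fixing before you decide whether it can be eliminated.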
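For the wait statements, the right mechanism depends entirely on your tester OS, but the principle is to start the host-side maths before the wait rather than after it. A generic sketch using a background thread, with hypothetical names:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def post_process(samples):
    # Stand-in for the real work: averaging, FFT, limit checks, datalogging.
    return sum(samples) / len(samples)

previous_samples = [0.98, 1.01, 1.00, 0.99]  # captured by the previous test

with ThreadPoolExecutor(max_workers=1) as pool:
    pending = pool.submit(post_process, previous_samples)  # start the maths
    time.sleep(0.005)          # the 5 ms settling wait the DUT needs anyway
    result = pending.result()  # computed during the wait, not after it
print(result)
```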
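For monitoring logged test and index times, any robust "unexpectedly high" rule will do. yieldHUB provides this alerting out of the box; the median-based threshold below is just a generic stand-in with hypothetical column names:

```python
import pandas as pd

times = pd.read_csv("test_times.csv")  # columns: lot_id, total_time_s, index_time_s

for col in ("total_time_s", "index_time_s"):
    baseline = times[col].median()
    mad = (times[col] - baseline).abs().median()  # robust spread estimate
    limit = baseline + 5 * mad                    # "unexpectedly high" threshold
    for _, row in times[times[col] > limit].iterrows():
        print(f"ALERT: lot {row['lot_id']} {col}={row[col]:.2f}s (limit {limit:.2f}s)")
```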
The last tip in the list is interesting, and we would encourage you to treat fails as important nuggets of information. Some of our customers collect the rejects from every lot tested and retest them all together in one datalog. In one case, applying our coverage analysis tool to such a combined datalog reduced a test program of nearly 900 tests to around 500.
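The underlying idea can be framed as a set-cover problem: keep the smallest set of tests that still catches every reject in the combined datalog. Our tool's internals aside, a textbook greedy sketch, with made-up test names and reject data, looks like this:

```python
# For each test, the set of reject unit ids it fails (hypothetical data,
# built from the combined retest datalog of all collected rejects).
reject_fails = {
    "t_idd_standby":    {1, 2, 7},
    "t_vref_accuracy":  {2, 3},
    "t_pll_lock":       {4, 5, 6},
    "t_leakage_pin3":   {1, 7},
}

# Greedy set cover: repeatedly keep the test that catches the most
# not-yet-caught rejects, until every reject is caught at least once.
uncovered = set().union(*reject_fails.values())
keep = []
while uncovered:
    best = max(reject_fails, key=lambda t: len(reject_fails[t] & uncovered))
    keep.append(best)
    uncovered -= reject_fails[best]
print("screening subset that still catches every reject:", keep)
```

Tests that never make it into the kept subset are the ones whose rejects are already caught elsewhere, which is exactly why a 900-test program can shrink so dramatically.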
In that example, the customer still runs many of the eliminated tests on a sample basis, for monitoring purposes.
In conclusion, test time reduction is always important, and there is usually some low-hanging fruit left that will help you reduce it. Of course, it could be said that having all your data in a database for fast access brings that fruit down to ground level!