Fault coverage

Fault coverage refers to the percentage of faults of a given type that can be detected during the test of an engineered system. High fault coverage is particularly valuable during manufacturing test, and techniques such as design for test (DFT) and automatic test pattern generation are used to increase it.
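In its simplest reading, this definition amounts to the fraction of modeled faults that a given test set detects, quoted as a percentage:

    fault coverage (%) = 100 × (number of detected faults) / (total number of modeled faults)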

Applications

Digital electronics

In digital electronics, fault coverage refers to stuck-at fault coverage.[1] It is measured by sticking each pin of the hardware model at logic '0' and logic '1', respectively, and running the test vectors. If at least one of the outputs differs from its expected value, the fault is said to be detected. Conceptually, the total number of simulation runs is twice the number of pins, since each pin can be stuck in one of two ways and both faults should be detected. In practice, many optimizations reduce the needed computation: non-interacting faults can often be simulated in a single run, and each simulation can be terminated as soon as the fault is detected.
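The following Python sketch illustrates this procedure on a hypothetical 1-bit full adder netlist (the gate list, net names, and helper functions are illustrative assumptions, not taken from the cited reference or any standard tool). Each net is forced to 0 and to 1 in turn, the test vectors are simulated, and a fault counts as detected if any vector produces an output that differs from the fault-free response.

```python
# Minimal single stuck-at fault simulation sketch (hypothetical circuit and
# helper names). The circuit is a 1-bit full adder described as a gate list;
# each net can be forced to 0 or 1 to model a stuck-at fault.

from itertools import product

# Gate netlist in topological order: (output net, gate type, input nets)
GATES = [
    ("s1",   "XOR", ("a", "b")),
    ("sum",  "XOR", ("s1", "cin")),
    ("c1",   "AND", ("a", "b")),
    ("c2",   "AND", ("s1", "cin")),
    ("cout", "OR",  ("c1", "c2")),
]
INPUTS = ("a", "b", "cin")
OUTPUTS = ("sum", "cout")

def evaluate(vector, fault=None):
    """Simulate one input vector; `fault` is an optional (net, stuck_value)
    pair that forces the named net to stuck_value regardless of its driver."""
    nets = dict(zip(INPUTS, vector))
    if fault and fault[0] in nets:          # fault on a primary input
        nets[fault[0]] = fault[1]
    for out, kind, ins in GATES:
        vals = [nets[i] for i in ins]
        if kind == "AND":
            value = int(all(vals))
        elif kind == "OR":
            value = int(any(vals))
        else:  # XOR
            value = vals[0] ^ vals[1]
        # Fault on an internal net overrides the computed gate output.
        nets[out] = fault[1] if fault and fault[0] == out else value
    return tuple(nets[o] for o in OUTPUTS)

def fault_coverage(test_vectors):
    """Return (detected, total) counts over all single stuck-at faults."""
    all_nets = list(INPUTS) + [g[0] for g in GATES]
    faults = [(net, sv) for net in all_nets for sv in (0, 1)]
    detected = 0
    for fault in faults:
        # Detected if at least one vector exposes the fault at an output.
        if any(evaluate(v, fault) != evaluate(v) for v in test_vectors):
            detected += 1
    return detected, len(faults)

if __name__ == "__main__":
    exhaustive = list(product((0, 1), repeat=len(INPUTS)))
    det, total = fault_coverage(exhaustive)
    print(f"coverage = {100.0 * det / total:.1f}% ({det}/{total} faults)")
```

Running the script with the exhaustive eight-vector test set reports the resulting stuck-at coverage; a smaller vector set would typically leave some faults undetected and lower the percentage, which is where automatic test pattern generation becomes useful.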

A fault coverage test passes when at least a specified percentage of all possible faults can be detected. If it does not pass, at least three options are available. First, the designer can augment or otherwise improve the vector set, perhaps by using a more effective automatic test pattern generation tool. Second, the circuit may be redesigned for better fault detectability (improved controllability and observability). Third, the designer may simply accept the lower coverage.

References

  1. Williams, Thomas W.; Sunter, Stephen K. (2000). "How Should Fault Coverage Be Defined?". 18th IEEE VLSI Test Symposium (VTS 2000), 30 April – 4 May 2000, Montreal, Canada. pp. 325–328. doi:10.1109/VTS.2000.10003.