Feature: Embedded


Figure 1: OneSpin assertion coverage display


run successfully – a useful start. However, this bears no relation to the behaviour of the design code, nor do we know whether the assertions are meaningful. It would be more useful to have some kind of metric based on the code itself. Simulation coverage is important, but it is not comprehensive enough to answer these questions, as it is concerned only with the quality of the stimuli. If regions of the DUV are not necessary and can, for example, be modified without affecting the verification, they are either not required or not verified. We need to ask what is necessary to make all the tests and assertions pass. If we measure the amount of observation done by checks and assertions, we can derive a different kind of coverage, one based on observability.

There are several metrics based on the observability principle, such as Cone of Influence (COI) analysis and mutation analysis. The transitive logic COI is the collection of all signals and expressions that can potentially affect the value of a signal of interest that is subject to a check or assertion. COI analysis finds code locations that are trivially not observed by an assertion because they lie outside its COI. However, the COI of some assertions can become very large without giving a good indication of which design sections the assertion actually observes. Hence, it is not clear whether any of the code locations in the COI of an assertion are actually observed or not.
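The transitive fan-in collection described above can be sketched as a graph traversal. This is a minimal, hypothetical Python illustration (real tools operate on the elaborated netlist of the design, not a hand-written dictionary); all signal names are invented for the example:

```python
# Toy sketch of Cone of Influence (COI) analysis: the COI of a signal is
# its transitive fan-in, i.e. every signal that can potentially affect it.
from collections import deque

# Hypothetical dependency graph: each signal maps to its direct drivers.
fanin = {
    "assert_sig": ["ack", "busy"],
    "ack":        ["req", "grant"],
    "busy":       ["grant"],
    "grant":      ["arb_state"],
    "req":        [],
    "arb_state":  [],
    "debug_led":  ["heartbeat"],   # outside the assertion's COI
    "heartbeat":  [],
}

def coi(graph, root):
    """Breadth-first collection of the transitive fan-in of `root`."""
    seen, work = set(), deque([root])
    while work:
        sig = work.popleft()
        for driver in graph.get(sig, []):
            if driver not in seen:
                seen.add(driver)
                work.append(driver)
    return seen

observed_candidates = coi(fanin, "assert_sig")
# Anything outside the COI is trivially not observed by this assertion.
unreached = set(fanin) - observed_candidates - {"assert_sig"}
print(sorted(observed_candidates))
print(sorted(unreached))
```

Note that membership in the COI only makes a location a *candidate* for observation, which is exactly the limitation the article describes: a large COI says nothing about what the assertion actually checks.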


Observation coverage
A classic notion of observation coverage is mutation analysis. The idea is to define a set of potential modifications to the source code of the DUV that correspond to certain types of errors; such a collection is called a "fault model". A mutation coverage tool then applies the mutations defined by the fault model to each potential fault location of the DUV, re-runs the verification for each location and each fault, and tests whether the modification causes a check to fail. Obviously, different fault models lead to different coverage results, making comparisons difficult. Various other shortcomings have also been identified:

• Mutations must correspond to syntactically correct code modifications in order to be applied to the source code. This limits the types of errors that can be modelled.

• Certain mutations at certain locations can render parts of the code unreachable, make assertions vacuous, or both. This is hard to predict, and the results become more difficult to understand.

• The number of verification runs is proportional to the number of checks, multiplied by the number of fault locations, multiplied by the number of error types. Although some shortcuts are possible, for example restricting the analysis to the COI, the process remains very expensive in run time and tool requirements.

A good observation coverage metric should have the following attributes:


1. Poorly-written assertions that don't verify anything should not skew coverage results, while additional verification should lead to increased coverage.

2. The measurement method should be independent of syntactic restrictions.

3. The metric should relate back to the source code, so it is easily understood.

4. The measurement method should not lead to unreachable scenarios.


5. The metric computation should be achievable with a reasonable level of compute resources and run time.

Quantify is a technology available from OneSpin Solutions that uses observation coverage to provide an exceptional measurement of the code coverage achieved by proven assertions. The solution analyses code statements and branches to provide a concise metric of whether these statements are observed by a given set of assertions operating under a set of constraints. Quantify has been proven in real design flows to exhibit a number of advantages over other coverage methodologies, as well as over other forms of observation coverage.

The problem of accurate verification


coverage, and its impact on the risk of product failure and schedule overrun, is well documented. Coverage solutions to date have proven helpful but inadequate, although some may be improved through formal technologies. Observation coverage takes a different approach: with the use of advanced formal-based technology, rigorous metrics can be produced that significantly increase confidence that the design has been effectively verified.
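The mutation-analysis flow described under "Observation coverage" above can be sketched in miniature. This is a hypothetical Python stand-in, not HDL and not OneSpin's tooling: the "design", its checks, and the fault model are all invented for illustration. A mutant that survives every check marks behaviour the verification does not observe:

```python
# Toy mutation-coverage sketch: apply each fault-model mutation,
# re-run the checks, and count how many mutants are "killed".

# A tiny "design": a saturating increment, modelled as a function.
def design(x, limit=15):
    return x + 1 if x < limit else limit

# Checks standing in for the assertion environment.
def checks(dut):
    return all([
        dut(0) == 1,        # normal increment
        dut(14) == 15,      # increment up to the limit
        dut(15) == 15,      # saturation at the limit
    ])

# Hypothetical fault model: each mutant is a modified design.
mutants = {
    "off_by_one":    lambda x, limit=15: x + 2 if x < limit else limit,
    "wrong_compare": lambda x, limit=15: x + 1 if x <= limit else limit,
    "no_saturate":   lambda x, limit=15: x + 1,
    # Survives: no check ever exercises an input above the limit.
    "over_limit":    lambda x, limit=15: (x + 1 if x < limit
                                          else (limit if x == limit else 0)),
}

killed = {name for name, mutant in mutants.items() if not checks(mutant)}
coverage = len(killed) / len(mutants)
print(f"killed: {sorted(killed)}; mutation coverage: {coverage:.0%}")
```

Even in this toy, the cost structure the article criticises is visible: the checks are re-run once per mutant per fault location, and the surviving `over_limit` mutant points at behaviour no assertion observes.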


www.electronicsworld.co.uk April 2021 21

