Feature: Embedded


Mitigating verification risk by understanding the coverage


By Nicolae Tusinschi, Product Manager, OneSpin Solutions

The effective testing of an IC during the verification process prior to fabrication remains a perennial issue for design engineers, made worse by the inability to accurately measure the progress of verification. There are various verification coverage techniques, all with disadvantages, leading to incomplete verification, poor test quality and duplicated verification effort on well-tested parts of the design. Many reasons lie behind these drawbacks, including poorly-defined metrics, incompatible tools and unclear methodologies.

A good verification environment should include all the important functions of a Design Under Verification (DUV) through coverage metrics, to provide a multi-dimensional measurement of verification progress. For example, the quality of the input stimuli created during verification will determine whether certain operational scenarios are triggered (functional coverage), or whether certain parts of the DUV code are activated (structural coverage). These are valuable measurements when assessing the quality of the verification process as a whole.
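The distinction between functional and structural coverage can be sketched in software terms. The following Python toy is purely illustrative (the Fifo model, the coverage bins and all names are invented for this sketch, not taken from any verification tool): it drives a small model with random stimuli and records both kinds of coverage side by side.

```python
import random

# Hypothetical toy "DUV": a 4-entry FIFO modelled in plain Python.
class Fifo:
    def __init__(self, depth=4):
        self.depth = depth
        self.data = []

    def step(self, op):
        """Apply one stimulus; return which code branch executed
        (a stand-in for structural/line coverage)."""
        if op == "push":
            if len(self.data) < self.depth:
                self.data.append(1)
                return "push_ok"
            return "push_full"       # attempted push when full
        else:
            if self.data:
                self.data.pop(0)
                return "pop_ok"
            return "pop_empty"       # attempted pop when empty

# Functional coverage: did the interesting operational scenarios occur?
functional_bins = {"saw_full": False, "saw_empty_pop": False}
# Structural coverage: which branches of the model were activated?
structural_hits = set()

random.seed(0)
dut = Fifo()
for _ in range(200):
    branch = dut.step(random.choice(["push", "pop"]))
    structural_hits.add(branch)
    if branch == "push_full":
        functional_bins["saw_full"] = True
    if branch == "pop_empty":
        functional_bins["saw_empty_pop"] = True

print("structural:", sorted(structural_hits))
print("functional:", functional_bins)
```

The functional bins only fill if the random stimulus happens to drive the FIFO into its full and empty corner cases, while every executed branch counts toward structural coverage, mirroring how stimulus quality determines which scenarios get triggered.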


Formal-based processes

Before silicon tape-out, companies usually insist on specified coverage goals being reached in order to qualify the design. There are two potential reasons why a certain operational scenario is not covered: the first is that the verification process doesn't produce the right test patterns, requiring further effort; the second is that the scenarios remaining uncovered cannot be activated at all, because they are simply unreachable – i.e., they can't be stimulated due to the design code structure. Overall, it can be difficult to ascertain which of these situations is occurring.

20 April 2021 www.electronicsworld.co.uk

Formal-based technologies can help resolve these issues when applied in a simulation-based flow, generating scenarios thanks to the exhaustive nature of their operation. If coverage data is available from other sources, such as simulation output or a coverage database, this automatic test generation process may be guided toward previously-untested scenarios or coverage "holes". At this point, a formal engine can either automatically generate test stimulus for the DUV to cover the hole, or prove that no such stimulus exists, making it pointless to look for coverage where it is unachievable.

This formal-based process is fully automatic, and in both cases provides valuable information. Ideally, it would be repeated for each iteration of the design. In this application, formal technology is leveraged to assist simulation-based verification; engineering effort may be reduced significantly without requiring the engineers to acquire formal analysis expertise.

Formal tools are exhaustive by design
and consider every potential input scenario. However, some of these scenarios often represent illegal inputs that cannot occur during device operation, and hence should be excluded from the analysis.

Formal constraints would then be added to exclude illegal input scenarios. However, if these constraints are too tight and exclude legal input sequences, the tool might not evaluate vital behaviour. The formal constraints must themselves be verified, to ensure the design verification process is not over-constrained.

Running formal coverage reachability analysis in the presence of constraints will provide an indication of code that may not be activated. If the constraints are too tight, some cover goals will become unreachable, highlighting which functionality has accidentally been excluded. In this case we call the coverage goal "constrained".

Using formal technologies, we can decide if a scenario is reachable or unreachable. If reachable, the formal tool will provide an indication of how it may be reached, allowing a test to be added to the verification environment. Constrained scenarios need further analysis to avoid accidental over-constraining. In the context of structural, source-code-based metrics, we call this a "simulation coverage metric", since it relates to the notion of activation in a design common across simulation and formal verification.

Even when simulation coverage is considered sufficient, there is still a key aspect that must be accounted for, yet is often overlooked.
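The reachable/constrained/unreachable classification described above can be illustrated with a small sketch. In this hypothetical Python model (the state names, graph and cover goals are all invented; a real formal engine analyses RTL, not an explicit graph), exhaustive breadth-first search stands in for the formal reachability analysis, and each goal is checked both with and without the constraint applied.

```python
from collections import deque

def reachable_states(transitions, reset):
    """Exhaustive BFS: return {state: shortest trace from reset}
    for every reachable state."""
    traces = {reset: [reset]}
    queue = deque([reset])
    while queue:
        s = queue.popleft()
        for nxt in transitions.get(s, []):
            if nxt not in traces:
                traces[nxt] = traces[s] + [nxt]
                queue.append(nxt)
    return traces

# Unconstrained design: ERR is only enterable via an illegal input.
full = {"IDLE": ["RUN", "ERR"], "RUN": ["DONE", "IDLE"], "DONE": []}
# Formal constraint applied: the IDLE->ERR input is declared illegal.
constrained = {"IDLE": ["RUN"], "RUN": ["DONE", "IDLE"], "DONE": []}

goals = ["RUN", "DONE", "ERR", "HALT"]   # cover goals (HALT never existed)
with_c = reachable_states(constrained, "IDLE")
without_c = reachable_states(full, "IDLE")

for g in goals:
    if g in with_c:
        status = f"reachable, witness trace {with_c[g]}"
    elif g in without_c:
        status = "constrained (reachable only if the constraint is lifted)"
    else:
        status = "unreachable"
    print(g, "->", status)
```

A reachable goal comes with a witness trace from which a test can be derived, a constrained goal flags possible over-constraining for further analysis, and an unreachable goal tells the team not to waste effort chasing that coverage.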


Simulation-based coverage

In a simulation-based verification environment, checks, sometimes implemented as assertions, are included to ensure that design behaviour is as expected. In a formal-based environment, all the tests are coded as assertions, which are evaluated against the design code. However, how do we know if we have written enough assertions?

One basic option would be to measure assertion coverage, which is the status of each assertion according to the depth in the design state space to which it has been proven (proof radius) and whether it was triggered (cover radius); see Figure 1. This way we know that assertions have been written and if they have been

