The Accellera Portable Stimulus standard uses graph-based models of verification intent, providing a foundation for defining coverage at the system level.
While attempts have been made to raise the abstraction level used for design, the industry has largely stayed at the Register Transfer Level (RTL) for most blocks, instead using a block-assembly approach to create systems from internally developed or third-party IP. However, verification costs have been rising, and existing verification languages, such as SystemVerilog, and methodologies, such as the Universal Verification Methodology (UVM), fail to address the issues associated with system-level verification. Thus, the stage was set for the development of a new system-level verification methodology.
The Accellera Portable Stimulus (PS) standard is expected to be released early this year, setting the stage for verification abstraction to be raised to the system level.
Portable Stimulus is the first true verification language, a description of verification intent, encapsulated in a mixture of C++ and graphs. Tools transform that verification intent model into fully self-checking scenarios run on the design in a simulator or emulator. This process is very similar to the notions of synthesis being applied to a design model, except the output from the tool is a verification test case. While test case synthesis is the first application under consideration for PS, many other applications will be possible. The verification intent model is also a natural place to collect coverage data, but first the industry has to decide what coverage means at the system level.
The purpose of verification is to give designers enough confidence that no major problems exist in the design to cause a respin. The way most companies measure that level of confidence is through coverage metrics. The industry has become competent in the use of functional coverage, as defined in SystemVerilog, but this is not usable at the system level. This article outlines the concepts of system-level coverage that can be defined using a graph-based approach to verification.
Let’s start with what’s available today. Constrained random test pattern generation, as used for block-level verification at the register transfer level, is all about stimulus generation. This is what is encapsulated in SystemVerilog and UVM testbenches, which focus on the creation of vectors of legal stimulus. Constraints, sequences and virtual sequences are used in an attempt to make the stimulus generation more useful. However, SystemVerilog has no concept of sequentiality or design purpose, so a vector knows nothing about the past or what might usefully follow. It just has to satisfy all of the constraints and the predefined snippets of directed test encapsulated in the sequences.
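The memoryless nature of constrained random generation can be sketched as follows (a simplified Python illustration with hypothetical constraints, not any particular tool's engine): each vector is drawn independently, subject only to its constraints, with no knowledge of what came before or what should come next.

```python
import random

def constrained_random_vector(rng):
    """Draw one stimulus vector satisfying simple constraints.

    Each draw is independent: no history, no notion of design purpose.
    """
    while True:
        addr = rng.randrange(0, 256)
        burst_len = rng.randrange(1, 17)
        # Constraint: a burst must not cross a 64-byte boundary.
        if addr % 64 + burst_len <= 64:
            return {"addr": addr, "burst_len": burst_len}

rng = random.Random(42)
vectors = [constrained_random_vector(rng) for _ in range(1000)]
# Every vector is individually legal, but consecutive vectors are unrelated.
assert all(v["addr"] % 64 + v["burst_len"] <= 64 for v in vectors)
```

Each vector here satisfies its constraint in isolation; nothing ties one draw to the next, which is exactly the limitation the article describes.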
To measure confidence in this method of verification, a coverage model has to be created. While it has been given the name of functional coverage, this title is a misnomer. A better name would have been observation coverage because that is what it actually does. A SystemVerilog cover point defines an observation that, if seen during the act of verification, indirectly implies that some functionality has been executed. It assumes that a scoreboard or reference model has been run in parallel and that a checker has determined that the observation does in fact tie in with correct behaviour. These are very loose connections, and many teams have been surprised to find out that this correlation was not valid. Because of that, bugs have slipped through. For companies that require proof that this connection is valid, it is necessary to use a tool such as Certitude – now owned by Synopsys through a chain of acquisitions.
But that is the past. What about coverage at the system level and Portable Stimulus? There is some good news, and some not-so-good news, but nothing as bad as functional coverage.
The good news: Portable Stimulus solves all of the problems associated with functional coverage because it is a model of verification intent. This model defines the functionality of the design and can be used directly to measure coverage. No more having to create a coverage model. No more having to worry about coverage holes. No more uncertainty. Sound too good to be true? Well, it is that good, but only for part of the problem, and it creates a different kind of problem.
The graph that is at the heart of Portable Stimulus defines every possible path of operation through the design. Constraints placed on the graph narrow the legal paths to those the hardware is intended to support. Here’s where mathematics comes in. This corresponds to what, in the computer science world, is called path coverage, and is inherently what formal verification uses. When a formal proof is made, it is made for every possible path through the design. This is why some designs are beyond the limits of formal verification: the problem becomes too big. That statement defines the first of the new problems.
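The size of that problem can be illustrated with a small sketch (a hypothetical toy graph in Python, not a PS model): even a modest graph of independent choices yields a path count that grows exponentially with the number of decision points.

```python
def count_paths(graph, node, goal):
    """Count all distinct paths from node to goal in a directed acyclic graph."""
    if node == goal:
        return 1
    return sum(count_paths(graph, nxt, goal) for nxt in graph.get(node, []))

# A toy scenario graph: ten stages, each offering two alternative actions.
graph = {}
for stage in range(10):
    graph[f"s{stage}"] = [f"s{stage}a", f"s{stage}b"]
    graph[f"s{stage}a"] = [f"s{stage + 1}"]
    graph[f"s{stage}b"] = [f"s{stage + 1}"]

# Ten binary choices give 2**10 = 1024 distinct paths.
print(count_paths(graph, "s0", "s10"))  # 1024
```

Doubling the number of stages squares the path count, which is why exhaustively covering every path quickly becomes intractable.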
Graph coverage defines what it would mean to verify every path through the design. A design team now needs a strategy to narrow that down to the paths that matter most and to prioritise them. Figure 1 shows how two coverage goals could be added to a graph and a cross coverage defined.
Figure 1: Defining the paths to be verified through graphs.
A couple of terms need to be defined. First, a scenario model is a graph of all use case scenarios that should be supported by the design. Scenario coverage is the coverage of the paths through the scenario model. Unfortunately, scenario coverage is only one part of what coverage means at the system level.
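As a rough sketch of the metric (hypothetical action names, in Python), scenario coverage can be expressed as the fraction of paths through the scenario model that tests have actually exercised:

```python
def all_paths(graph, node, goal, prefix=()):
    """Enumerate every path from node to goal in a DAG of actions."""
    prefix = prefix + (node,)
    if node == goal:
        yield prefix
        return
    for nxt in graph.get(node, []):
        yield from all_paths(graph, nxt, goal, prefix)

# Toy scenario model: read from a source, optionally decode, then display.
scenario_model = {
    "start": ["read_sd", "read_net"],
    "read_sd": ["decode"],
    "read_net": ["decode", "display"],  # a raw stream may skip decoding
    "decode": ["display"],
}
paths = set(all_paths(scenario_model, "start", "display"))
executed = {("start", "read_sd", "decode", "display")}  # paths the tests hit
print(f"scenario coverage: {len(executed & paths)}/{len(paths)}")  # 1/3
```

The denominator is the full set of legal paths through the scenario model, so the metric falls directly out of the graph with no separately authored coverage model.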
As stated earlier, verifying every path through a graph is an impossible task. Unfortunately, even that would be just the beginning of what needs to be verified. The next level required is concurrency coverage. Multiple graphs may be executed in parallel, which means that total coverage is more like the cross between the graph and itself for every independent thread that can be executed by the hardware. For example, did scenario A, which displayed a video read from the SD card and decompressed on the fly, run at the same time as scenario B, which took in an emergency-alert message through the radio? And were two particular operations of those scenarios executed at the same time such that they both required use of a shared resource? An example of this is shown in Figure 2. This kind of coverage cannot be captured on the graph directly, but tools can be developed that would make it possible. For want of a better term, this is concurrency coverage.
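Concurrency coverage can be sketched as a cross product over the actions of two scenarios running in parallel (an illustrative Python model with hypothetical action names, not tool output): each pair of concurrently active actions is a cross bin, and the pairs that contend for a shared resource are the interesting ones.

```python
from itertools import product

video_actions = ["read_sd", "decompress", "display"]   # scenario A
alert_actions = ["radio_rx", "decode_msg", "display"]  # scenario B

# Cross-coverage space: every pair of actions that may overlap in time.
cross_bins = set(product(video_actions, alert_actions))

# Pairs observed overlapping during a test run (a hypothetical log).
observed = {("decompress", "radio_rx"), ("display", "display")}

# ("display", "display") is the key bin: both scenarios contend for the
# shared display resource at the same time.
coverage = len(observed & cross_bins) / len(cross_bins)
print(f"concurrency coverage: {coverage:.0%} of {len(cross_bins)} bins")
```

With three actions per scenario there are already nine cross bins; crossing whole paths rather than single actions makes the space far larger, which is the scaling problem the article points to.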
Figure 2: Tools can be developed to capture coverage when multiple graphs are executed in parallel.
Is this done yet? Sorry, but there are other kinds of coverage required. The next one is temporal concurrency coverage, and while the above example could be considered an instance of it, it is somewhat simplistic. It asked only for two processes to happen at the same time, or for a synchronization to occur between them. A more complex example might be to create a test case that powers up as many independent power domains as possible, so the designer can use this as a scenario to perform power integrity, thermal, electromigration (EM) and other power-grid-related verification. In this example, temporal relationships should be placed across concurrent actions, and these can be annotated as temporal constraints on paths through the graph.
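A temporal constraint across concurrent actions can be sketched as an interval condition (a hypothetical Python example, with invented domain names): the test only counts toward temporal concurrency coverage if the power-up windows of the independent domains actually overlap at some instant.

```python
def all_overlap(intervals):
    """True if every (start, end) interval shares at least one common instant."""
    latest_start = max(start for start, _ in intervals)
    earliest_end = min(end for _, end in intervals)
    return latest_start < earliest_end

# Power-up windows (start, end) of independent power domains in one test run.
domains = {
    "cpu":   (0, 100),
    "gpu":   (20, 90),
    "modem": (40, 120),
}

# Temporal constraint: all domains must be powered simultaneously at some
# point, so power-grid stress (IR drop, thermal, EM) is actually exercised.
assert all_overlap(domains.values())  # common window is (40, 90)
```

A scenario that powers the domains up one after another would satisfy a plain concurrency cross bin for each pair, yet fail this temporal check, which is what distinguishes temporal concurrency coverage from the simpler form.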