Here's why specifications are so important for formal verification.
With 68% of ASICs requiring respins and 83% of FPGA designs failing the first time around, verification poses serious challenges. It's also no secret that nearly 60-70% of the cost of taking a chip design to tape-out lies in verification. Simulation-based verification is not scaling, and bugs continue to escape. The challenges are even greater in safety-critical and security domains.
Formal verification is the only way to obtain proof of bug absence: a mathematical proof that a bug does not exist in your chip when it is verified against a set of requirements formalized as clear, mathematically precise properties, often called assertions.
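As a minimal sketch, here is how a simple requirement, "every request must be acknowledged within four cycles," might be formalized as a SystemVerilog assertion. The signal names (clk, rst_n, req, ack) are hypothetical, chosen for illustration only:

```systemverilog
// Hypothetical interface signals, for illustration only.
// Requirement ("what"): every request is acknowledged within four cycles.
property p_req_ack_within_4;
  @(posedge clk) disable iff (!rst_n)
  req |-> ##[1:4] ack;
endproperty

assert_req_ack: assert property (p_req_ack_within_4);
```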
As good as it sounds, formal verification has challenges of its own. The best known is scalability: although formal verification is exhaustive in principle, in practice it may not yield exhaustive results for many design configurations due to the state-space explosion problem. Because formal methods examine every reachable state of the design implementation, formal tools can run out of memory and return inconclusive results.
However, is scalability the only challenge in deploying formal methods at scale for big design verification? Let us talk about the specification challenges. Before delving into this any further, here is a summary of the key steps in a formal verification deployment for industrial-sized projects.
Key steps in formal verification deployment
Specifications: Derived from requirements, they form the main artery of verification. The goal of specifications is to describe "what" the design must do as opposed to "how" it is implemented.
Verification strategy: Assuming we have a good understanding of the requirements based on the specifications (not by reading the RTL code), we can capture the key ideas of "how" we will approach verifying the given design with formal. A strategy document typically contains a high-level overview of the key methods used in putting together the testbench and outlines any problem-reduction techniques that may be employed during testbench development.
Verification plan: This is a detailed list of "what" will be verified. Derived from the specifications, it describes the items that will be coded and verified.
Note that the strategy and plan serve complementary purposes: whereas the plan focuses on "what," the strategy focuses on "how."
Testbench development: This uses the verification plan as the basis to code all target checks, constraints, and covers, together with appropriate modelling code. A significant chunk of this activity involves understanding "how" to code for efficiency and how to avoid missing bugs due to over-constraints in the testbench.
In simulation environments, considerable effort is spent on creating a codebase for driving stimulus into the design. In formal verification, virtually no effort is needed in that direction, as the stimulus comes for free. The only challenge is to block illegal stimuli at the design interface. Attention to detail at this stage is therefore crucial, so that no over-constraints block legal stimuli.
However, much of the discussion in the formal world has centered on over-constraint detection and problem-reduction techniques, including a focus on these topics in training courses. Far less is said about what happens if you under-constrain a formal test environment and send illegal stimulus into the design. Under-constraining causes spurious stimulus to fail assertions, creating the illusion of a design bug when it is really a testbench issue.
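As a hedged sketch of both failure modes (the command interface and opcode names are hypothetical), constraints in a formal testbench are typically coded as assumptions:

```systemverilog
// Hypothetical command interface and opcode encodings, for illustration.
localparam logic [1:0] OP_RD = 2'b00, OP_WR = 2'b01;

// Suppose the specification says a valid command must carry a legal opcode.
assume_legal_opcode: assume property (
  @(posedge clk) disable iff (!rst_n)
  cmd_valid |-> (cmd_opcode inside {OP_RD, OP_WR})
);

// Over-constraint: adding an assumption stronger than the specification,
// e.g., forbidding back-to-back commands the design must actually accept,
// silently hides any bug those commands would have triggered.
// Under-constraint: omitting assume_legal_opcode lets the tool drive
// illegal opcodes, producing spurious assertion failures that look like
// design bugs but are really testbench issues.
```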
There are multiple reasons why these constraint problems arise. The two most important are a lack of clear specification, which causes confusion in the minds of verification engineers, and a lack of expertise in transcribing what may be a clearly worded specification. The latter can be addressed through systematic coaching, mentoring, and training; when specifications are woolly, however, the problem is harder to solve. Difficult, but not impossible.
Debugging: Some industry data suggests that debugging can take up to 70% of overall verification time. In formal verification, debugging is not as heavy as in simulation or emulation. However, if a third of your time is spent debugging spurious failures caused by poor specifications in the testbench, that is a significant verification cost. The tools themselves enable debugging, and for some bespoke domains such as RISC-V, customized debuggers can automatically root-cause a failure and dump a report and a VCD for handover to designers.
Signoff: Signoff for formal relies on coverage, much as simulation does. However, the coverage solution is more expansive in formal and leverages six dimensions of quality markers, some based on metrics and some purely on qualitative methods. Starting from determining the coverage targets, one moves on to obtaining assertion coverage, ascertaining checker completeness, performing over-constraint analysis, and finally leveraging property-driven coverage and scenario coverage.
Note that the quality of the coverage targets depends directly on the quality of the specifications.
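As a small illustration (the FIFO signal names are hypothetical), cover properties are one such marker: they confirm that scenarios from the specification remain reachable under the current constraints, and an unreachable cover is often the first hint of an over-constraint:

```systemverilog
// Hypothetical FIFO status flags, for illustration only.
cover_fifo_full: cover property (
  @(posedge clk) disable iff (!rst_n) fifo_full
);

// A scenario cover: the FIFO fills up and later fully drains.
cover_full_then_empty: cover property (
  @(posedge clk) disable iff (!rst_n) fifo_full ##[1:$] fifo_empty
);
```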
A common theme
The reader will have noted by now that all five steps rely heavily on specifications. If the specifications are good, you can build a coverage spec that provides exhaustive insight into "what" needs to be verified, offering the possibility of a complete, well-rounded verification result.
However, if the specification is poor, it wastes significant time in debugging, makes the schedule unpredictable, delays verification, and frustrates the verification and design teams.
To sum up, why are specifications so important for formal verification? Formal verification reasons about design correctness and establishes whether the implementation meets the specification against all possible stimuli, not a small, specific subset.
Throughout this process, the formal tool may encounter failures, which are states in the design where the assertions do not hold, thus identifying design bugs. When the tool can no longer find any failures, it builds a proof that the given assertion holds for the implementation.
This is how formal property checking and equivalence checking work in commercial tools. Theorem-proving-based formal methods work differently, chaining lemmas and theorems together to derive a proof; they do not generate counterexamples and waveforms the way property checking does.
In all cases, specifications form the backbone of formal verification.
Coping with specification ambiguity
The beauty of working with specifications is that two people can interpret the same specification differently, exposing a validation issue. When a formal verification engineer asks the designer, "Are you sure this is the intended behavior?", the answer is often: "Now that you've pointed it out, I will think about this a bit more, because I'm not sure." I've seen this in many projects. These conversations reveal gaps in the designer's understanding and often identify mismatches between the implementation and what the designer believes the specification mandates.
So, discipline is important when resolving ambiguities. Consider a load-store unit (LSU) of a processor exchanging data with memory, where the designer specifies that all load and store transactions crossing the processor boundary must stay high for four cycles. The formal verification engineer models this requirement, and the check fails.
In the failing trace, it becomes obvious that stores do not preserve this behavior. The designer believes this is not a design bug; rather, the requirement is slightly different, and for stores the four-cycle requirement is not necessary. Probing further, the formal verification engineer discovers that the requirement actually states that stores do not have to enforce the four-cycle requirement unless they are back-to-back.
When the check is modelled again, a design bug is found, showing that some stores violate even the refined requirement.
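A hedged sketch of how the two iterations of this check might look in SystemVerilog (req_valid, is_store, and back_to_back are hypothetical names standing in for the real LSU interface):

```systemverilog
// Iteration 1: as first understood, every load and store holds for
// four cycles. This version failed on stores.
property p_hold_4_all;
  @(posedge clk) disable iff (!rst_n)
  req_valid |-> req_valid[*4];
endproperty

// Iteration 2: per the clarified specification, loads always hold for
// four cycles; stores need to only when they are back-to-back.
property p_hold_4_refined;
  @(posedge clk) disable iff (!rst_n)
  (req_valid && (!is_store || back_to_back)) |-> req_valid[*4];
endproperty

assert_hold_4: assert property (p_hold_4_refined);
```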
These loops from specification to verification and debug can run for a few iterations before the specification and the implementation fully agree. This usually happens on legacy designs, where specifications may have changed without the document being updated, and/or where designers are new to the project. To be clear, the formal verification engineer must not read the design implementation to extrapolate the specification, as doing so can easily mask obvious or corner-case bugs.
If your specification, as encoded in the constraints, does not cover the entire reachable input stimulus space, you have an over-constraint problem and can miss real bugs. If it does not block all illegal stimuli, you have an under-constraint problem and may end up chasing testbench issues rather than design issues. This is the place to drive discussions on specification quality, paving the path to a formal specification for the design implementation in addition to finding bugs in the design.
Is the specification problem only relevant to formal methods?
We need specifications for simulation and emulation as well. However, the lack of specifications often does not hurt them as much. In many cases, such as emulation, the relevant firmware or software drives the design inputs, creating realistic-looking stimulus patterns that are specific in scope.
With dynamic simulation, such as universal verification methodology (UVM)-based verification, the specification problem can and does surface. It is simply less perceptible, because simulation environments create input stimulus patterns by hand.
The simulation testbench starts from ground zero; with formal verification, the testbench starts with all stimuli going into the design. In this sense, the two environments are opposites. In addition, simulation environments do not, and cannot, check the stimulus on all possible reachable states. They are therefore unlikely to find all bugs, but they are also less likely to hit states that produce spurious failures. A double-edged sword!
The point is that simulation testbenches do not perform formal reasoning about the correctness of the implementation against the specification; they test whether a specific input stimulus produces a mismatch between a reference model in the testbench and the design. If it does, a bug is found and fixed, and the same test is rerun to establish that the same stimulus no longer produces a mismatch.
In UVM environments, one can play with randomization to exercise a broader spectrum of the input space, but the input stimulus space is examined non-exhaustively, unlike formal, where every possible stimulus pattern is tested on all reachable states.
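To illustrate the contrast (the class and signal names are hypothetical), a UVM sequence item samples the legal stimulus space one randomized transaction at a time, whereas formal encodes the same legality rule as an assumption and then explores every value it permits:

```systemverilog
`include "uvm_macros.svh"
import uvm_pkg::*;

// Hypothetical UVM sequence item, for illustration only.
class lsu_txn extends uvm_sequence_item;
  rand bit        is_store;
  rand bit [31:0] addr;

  // Legality rule: only word-aligned accesses are legal stimulus.
  constraint c_aligned { addr[1:0] == 2'b00; }

  `uvm_object_utils(lsu_txn)

  function new(string name = "lsu_txn");
    super.new(name);
  endfunction
endclass

// In formal, the same rule becomes an assumption, and the tool considers
// every word-aligned address on every reachable state:
//   assume property (@(posedge clk) txn_valid |-> addr[1:0] == 2'b00);
```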
Specifications play an important role in simulation testbenches when functional coverage specifications must be built. This usually happens quite late in the project, unlike formal environments, where the first assertion failure is, in nine out of 10 cases, due to an incomplete specification.
Poor specifications challenge simulation engineers as well, as they will likely fail to capture interesting and necessary scenarios. A shortcoming in the stimulus apparatus therefore goes uncaught, and a bug is missed. Simulation testbench developers know the pain of closing functional coverage targets.
Both simulation and formal verification are affected by specification quality, to varying degrees.
It's not hard to see the damage poor-quality specifications cause: wasted hours of debugging in formal environments, and real bugs missed in simulation environments.
The distraction caused by debugging spurious failures in formal can be so significant that it frustrates and defocuses formal verification engineers, causing them to miss real bugs as well. Designers get frustrated looking at spurious failures, and management concludes that formal verification is useless.
Work has been done in formal methods to identify ways of improving specifications, mostly in the context of software engineering; I cannot cover all of it in this article. In short, many formal experts agree that higher-order logic, combined with temporal logic, is an effective vehicle for expressing complex requirements with ease. Leslie Lamport's work on TLA+ is also a great resource.
Tips for creating better specifications
In the context of hardware verification, and based on my experience of deploying formal in the field, here is an outline for improving the quality of specifications.
When starting a new formal verification project, identify the different specifications, as each serves a different purpose, providing a unique perspective while connecting with the others. A non-exhaustive list of specification documents includes the design specification describing "what" the design must do, the verification strategy describing "how" it will be verified, the verification plan listing "what" will be verified, and the coverage specification defining the coverage targets.
Improve specification quality
To summarize, specifications are a hidden bargain: they shrink testbench development time, shrink debug time, increase design-bug throughput, help find corner-case bugs, and increase coverage. No matter how you look at it, and wherever you are in your formal verification journey, improving specification quality pays for itself in verification quality. Everyone wins, not just the verification engineer.
Editor’s Note: Axiomise is furthering the adoption of formal verification through a combination of consulting, services, and specialized verification solutions for RISC-V, including i-RADAR, a customized debugger.
This article was originally published on EDN.
Dr. Ashish Darbari, founder and CEO of Axiomise, has been actively using formal methods for more than two decades and has trained nearly 200 designers and verification engineers globally.