Advanced verification: unlocking the door to a new era of AI chips

By Kiran Vittal

A look at how chip manufacturers will have to adjust hardware architecture in order to accommodate ever-growing demand for increased AI functionality.

“Hey Siri, what’s today’s weather forecast?”

As our everyday lives continue to depend on Siri and other artificial intelligence (AI) assistants for convenient services such as playing songs and keeping track of meeting schedules, it’s no secret that it’s become increasingly difficult to safeguard personal data. With AI momentum growing and the threat of data loss looming, it’s more important than ever for chip designers to advance AI and security techniques to keep up with the pressing demand for more intelligence.

But in today’s era of smart everything, compute-intensive applications that incorporate AI techniques such as deep learning (DL) and machine learning (ML) require their own dedicated chips with well-rounded designs to power the intelligent functions. From autonomous vehicles to high-performance computing (HPC), the underlying technology driving these intensive workloads depends on advanced architectures that strike a delicate balance between packing a punch in the power department and being tailored to improve decision-making capabilities.

As more smart devices connect to the cloud, there is greater potential for AI to evolve exponentially and create a variety of market opportunities. However, chip manufacturers must keep in mind that key portions of AI computations need to be completed within the hardware to emulate real-world conditions. Therefore, custom “AI chips” are not only preferred, but essential to integrate AI at scale in a cost-effective manner.

But considering that the current generation of chips for AI/ML/DL applications contains complex datapaths to accurately perform the necessary arithmetic analysis, the industry will have to be willing and ready to implement advanced verification methods to evolve and fuel AI’s next move.

Almost everyone is designing chips

It’s no secret in the chip design world that with Moore’s law reaching saturation, it’s becoming increasingly difficult to achieve the desired gains in performance from general-purpose processors. To mitigate this slowdown, companies beyond traditional semiconductor players are throwing their hats into the chip design ring.

To name a few of these companies, big players like Google, Amazon, and Facebook are now heavily investing in the development of their own custom ASIC (application-specific integrated circuit) chips in-house to support their unique AI software and meet specific application requirements. This subsequent market expansion provides a plethora of opportunities for new design tools and solutions to support today’s demanding chip design environment.

AI chip design: control paths are different

A crucial driver in new AI system-on-chip (SoC) investments is the capacity to perform many calculations at once in a distributed fashion (instead of the limited parallelism offered by traditional CPUs). The design necessary to perform these tasks entails data-heavy blocks composed of a control path, where a state machine produces outputs based on specific inputs, along with a compute block of arithmetic logic to crunch the data. By employing these features, chip designers can dramatically accelerate the identical, predictable, and independent calculations required by AI algorithms.
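As a rough sketch (not any particular chip design), the following C++ fragment shows the kind of identical, independent multiply-accumulate work such a compute block performs; the function name, lane count, and data widths are purely illustrative.

#include <array>
#include <cstdint>

// Illustrative only: the identical, independent multiply-accumulate (MAC)
// work an AI compute block performs on each element of a vector. In
// silicon, a control path would sequence these lanes; here a plain loop
// stands in for the parallel hardware.
constexpr std::size_t kLanes = 8;

std::array<int32_t, kLanes> mac_step(const std::array<int8_t, kLanes>& weights,
                                     const std::array<int8_t, kLanes>& activations,
                                     std::array<int32_t, kLanes> accumulators) {
    for (std::size_t lane = 0; lane < kLanes; ++lane) {
        // Each lane performs the same operation on its own data; no lane
        // depends on another, which is what makes the work easy to parallelize.
        accumulators[lane] += int32_t{weights[lane]} * int32_t{activations[lane]};
    }
    return accumulators;
}

Because each lane’s result depends only on its own inputs, the hardware can replicate this unit as widely as the power budget allows.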

A single arithmetic compute block is not typically a challenge to verify, but the sophistication increases steeply as the number of arithmetic blocks and their bit widths grow in tandem, adding further burden to verification teams.

In the past decade, data-centric computing has evolved beyond the restrictive bounds of PCs and servers, and verification complexity has grown with it. Even in the simple case of a 4-bit multiplier, test vectors need to be written for every possible input combination to verify its complete functionality, since each operand alone can take 2^4 = 16 values. Herein lies the challenge: realistic scenarios in today’s AI chips involve adders with 64-bit inputs, where each operand alone spans 2^64 values, an exhaustive sweep that would take years using classical simulation-based approaches. This is just one isolated example of many possibilities, but as the adoption of AI chips quickly expands and the amount of data generated continues to explode, the time-consuming challenges of hardware verification make the need for modern, secure, and flexible verification solutions critical.
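To make the arithmetic concrete, here is a minimal, purely illustrative C++ sketch that exhaustively checks a bit-level 4-bit multiplier against ordinary multiplication; the same brute-force loop simply cannot be scaled to 64-bit operands.

#include <cassert>
#include <cstdint>

// A bit-level "shift and add" 4-bit multiplier, written roughly the way
// hardware might compute it, checked exhaustively against plain C++
// multiplication. This is an illustrative sketch, not any specific RTL.
uint8_t mult4_shift_add(uint8_t a, uint8_t b) {
    uint8_t product = 0;
    for (int bit = 0; bit < 4; ++bit) {
        if ((b >> bit) & 1u) {
            product = static_cast<uint8_t>(product + (a << bit));
        }
    }
    return product;
}

int main() {
    // 2^4 values per operand, so exhaustive simulation is still cheap here.
    for (uint8_t a = 0; a < 16; ++a) {
        for (uint8_t b = 0; b < 16; ++b) {
            assert(mult4_shift_add(a, b) == static_cast<uint8_t>(a * b));
        }
    }
    // At 64-bit operands the same loop would have to visit 2^64 values per
    // input, far beyond what simulation can cover; this is where formal
    // verification takes over.
    return 0;
}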

Advanced verification gives chip designers the ability to innovate and expand AI quicker than ever before.
(Source: Synopsys)

The ultimate test: verification challenges

When teams design AI chips, the functional algorithm is first written in C/C++, fast and widely used languages for architectural modeling. Once the functional code is written, it needs to be translated into a more hardware-oriented representation in RTL (register transfer level). This process requires teams to either develop test vectors for all possible combinations or to check that the RTL matches the original C/C++ architectural model, which often proves to be quite an intimidating task.

This is where formal verification comes into play. With this technique, mathematical analysis considers the entire hardware design at once. Whereas test vectors typically need to be written for every input combination, formal verification lets teams verify the design against a set of assertions specifying the intended behavior by leveraging model checkers.
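As a hedged illustration of what “assertions specifying the intended behavior” can look like, the C++ properties below describe a simple saturating adder. A model checker proves such properties for every input symbolically; the loop here only enumerates them because the example is tiny. The function and property names are invented for this sketch.

#include <cassert>
#include <cstdint>

// Intended behavior written as properties, the way a formal tool would
// consume them. The saturating-add example is illustrative only.
uint8_t sat_add8(uint8_t a, uint8_t b) {
    unsigned sum = static_cast<unsigned>(a) + b;
    return sum > 0xFFu ? uint8_t{0xFF} : static_cast<uint8_t>(sum);
}

// Property 1: the output never wraps around.
bool prop_no_wrap(uint8_t a, uint8_t b) {
    return sat_add8(a, b) >= a && sat_add8(a, b) >= b;
}

// Property 2: when the true sum fits in 8 bits, the output equals it.
bool prop_exact_when_in_range(uint8_t a, uint8_t b) {
    unsigned sum = static_cast<unsigned>(a) + b;
    return sum > 0xFFu || sat_add8(a, b) == sum;
}

int main() {
    // The input space is small enough to enumerate here; a model checker
    // would instead prove the same properties for all inputs without
    // enumerating them.
    for (unsigned a = 0; a <= 0xFF; ++a)
        for (unsigned b = 0; b <= 0xFF; ++b) {
            assert(prop_no_wrap(static_cast<uint8_t>(a), static_cast<uint8_t>(b)));
            assert(prop_exact_when_in_range(static_cast<uint8_t>(a), static_cast<uint8_t>(b)));
        }
    return 0;
}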

Even a few years ago, it would have been inconceivable for formal verification to become a widely utilized method, simply because of the high-level assertions it requires. Fast forward to today, and the average RTL designer or verification engineer can quickly and effectively learn the tricks of the trade.

However, given the growing scale and complexity of today’s AI chips, it is impossible to fully prove these designs with model checking alone. Verifying their mathematical functions using traditional methods rather than modern ones is inefficient, time-consuming, and ultimately impractical in the long run.

AI and ML applications need an extra hand

Using other forms of formal verification, such as equivalence checking, gives engineers a robust way to verify even the most complex AI datapaths. During equivalence checking, two representations of the design are compared and either proven to be equivalent, or the specific differences between them are identified. Sufficiently powerful formal engines provide a large leg up during the verification process because the two representations can be at completely different levels of abstraction and even written in different languages.

Let’s compare a chip design’s detailed RTL implementation to a high-level C/C++ architectural model. The comparison confirms that the same set of inputs produces the same outputs for both representations. This efficient method is a natural fit for many AI projects given that most already have C/C++ models available for results checking in simulation or as part of a virtual platform to support early software development and test.
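The sketch below, written entirely in C++ for illustration, compares a straightforward architectural model of an 8-element dot product with a tree-shaped version closer to how a pipelined datapath might compute it. A simulation loop like this can only sample the input space; a formal equivalence checker instead proves that the two representations agree on every possible input, even when one of them is RTL. All names here are invented for the example.

#include <array>
#include <cassert>
#include <cstdint>
#include <random>

// Two representations of the same 8-element dot product: a simple
// architectural model and a tree-shaped version that mirrors an adder tree.
constexpr std::size_t kN = 8;

uint32_t dot_reference(const std::array<uint16_t, kN>& a,
                       const std::array<uint16_t, kN>& b) {
    uint32_t acc = 0;
    for (std::size_t i = 0; i < kN; ++i) acc += uint32_t{a[i]} * b[i];
    return acc;
}

uint32_t dot_tree(const std::array<uint16_t, kN>& a,
                  const std::array<uint16_t, kN>& b) {
    std::array<uint32_t, kN> partial{};
    for (std::size_t i = 0; i < kN; ++i) partial[i] = uint32_t{a[i]} * b[i];
    // Pairwise reduction, the way a hardware adder tree would sum the lanes.
    for (std::size_t stride = 1; stride < kN; stride *= 2)
        for (std::size_t i = 0; i + stride < kN; i += 2 * stride)
            partial[i] += partial[i + stride];
    return partial[0];
}

int main() {
    // Simulation can only sample the input space; a formal equivalence
    // checker would prove the two models agree on every possible input.
    std::mt19937 rng(42);
    std::uniform_int_distribution<unsigned> dist(0, 0xFFFF);
    for (int trial = 0; trial < 100000; ++trial) {
        std::array<uint16_t, kN> a{}, b{};
        for (auto& x : a) x = static_cast<uint16_t>(dist(rng));
        for (auto& x : b) x = static_cast<uint16_t>(dist(rng));
        assert(dot_reference(a, b) == dot_tree(a, b));
    }
    return 0;
}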

Given the rapid growth of AI applications, formal equivalence checking is currently the only technology that can provide exhaustive verification of design datapaths against a proven reference model. To aid AI’s so far uninhibited evolution, verification tools need the following traits: ease of use, the ability to scale, and advanced debug capabilities.

On the horizon: homomorphic encryption

As the industry continues to produce trillions of bytes of data that require high-performance chips to keep pace, the move toward ever-wider bit widths is inevitable. Universities and research organizations around the world are exploring ways to work with wider input data and developing plans to design chips that can support this influx.

But with this inundation of data comes the subsequent need for hardware security. Homomorphic encryption will be an integral piece of the AI/ML puzzle. This type of encryption gives chip designers the ability to encrypt data and perform the arithmetic computations the AI system requires without ever decrypting it, thereby reducing the risk of data breaches. To boost both quality of results and productivity for AI chip design under this encryption scheme, next-generation tools will be needed.
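As a toy illustration of the idea (not a scheme anyone should deploy, and not how production homomorphic encryption for AI is built), textbook RSA happens to be multiplicatively homomorphic: multiplying two ciphertexts and then decrypting yields the product of the plaintexts, so the arithmetic happens without the data ever being exposed. The parameters below are tiny textbook values.

#include <cassert>
#include <cstdint>

// Toy illustration of a (partially) homomorphic property using textbook
// RSA: Enc(a) * Enc(b) decrypts to a * b without the data being decrypted
// in between. The key material is insecure and for demonstration only.
uint64_t mod_pow(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

int main() {
    const uint64_t n = 3233;   // p = 61, q = 53
    const uint64_t e = 17;     // public exponent
    const uint64_t d = 2753;   // private exponent

    uint64_t a = 7, b = 9;
    uint64_t enc_a = mod_pow(a, e, n);
    uint64_t enc_b = mod_pow(b, e, n);

    // Arithmetic performed directly on ciphertexts; no decryption needed.
    uint64_t enc_product = (enc_a * enc_b) % n;

    // Decrypting the combined ciphertext recovers a * b (mod n).
    assert(mod_pow(enc_product, d, n) == (a * b) % n);
    return 0;
}

Practical homomorphic schemes aimed at AI workloads are lattice-based and support additions and multiplications on encrypted vectors, but the principle is the same: the hardware operates on ciphertexts end to end.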

Edge AI will drive explosion of real-time abundant-data computing

A self-driving car crashing into an unnoticed obstacle isn’t on anyone’s wish list. This is only one example of the disaster AI chips could inflict if designs are not fully verified. As the market’s appetite for more AI capabilities in computing applications grows, new edge AI devices will drive an explosion of real-time abundant-data computing and transform how chip manufacturers approach semiconductor design, leading to higher productivity, faster turnaround times, and better verification solutions.

The dawn of an AI-first world is fast approaching and is more in reach than ever before. But can we run on the innovation hamster wheel long enough to make it happen? Only time will tell.

This article was originally published on Embedded.

Kiran Vittal is senior product marketing director in the Synopsys verification group. He has over 25 years of experience in the semiconductor and EDA industries, spanning design, development, applications, and technical and product marketing.

 
