Leveraging emulation in SSD development

Article by Lauro Rizzatti

Hardware emulation solves the difficulties encountered when using FPGA-based prototyping in SSD development.


SSD technology is on the move as firmware continues to grow and implement more and more functionality. The NAND array is also growing in size and in number of instances.

The interface between the NAND array and the controller SoC runs faster than the host interface and continues to speed up, widening the performance gap between the two.

The overall complexity and size of the controller SoC also expand to cope with the increasing demands of NAND management.

As with all modern SoC designs, reducing power consumption is a critical requirement, and this trend will continue.

Design and verification of an SSD controller

The uniqueness of the SSD controller design is amplified by its verification/validation process. Unlike most chips in mobile designs that may be built on commercial intellectual property (IP) or run on a standard Linux kernel, the SSD controller must be designed from scratch. The hardware and embedded firmware must be highly customised and closely coupled. The hardware may use a few standard IP blocks such as CPUs, system buses, and peripherals like UARTs, but the firmware determines the major features of the SSD controller. To achieve the best performance and the lowest power consumption, the firmware must be fine-tuned on optimised hardware.

To accomplish this, the industry is adopting a software-driven chip design flow, as opposed to the traditional hardware/software design flow.

In a traditional hardware/software design flow (Figure 3), hardware and firmware designs are serialised or pipelined. Firmware engineers may participate in the system definition specs, but firmware implementation is deferred until after tape-out, when the hardware is complete. This flow leads to poor firmware functionality and limited performance. And because firmware development starts late, the design cycle grows considerably, often leading to missed schedule deadlines.

Figure 3: Hardware and firmware design is serialised or pipelined in a traditional hardware/software design flow. Source: Lauro Rizzatti

By comparison, in a software-driven design flow (Figure 4), firmware development starts at the same time as the hardware design. Firmware engineers participate in the spec definitions of the hardware and gain intimate knowledge of the hardware details during the hardware design phase. Since the development of hardware and software proceeds in parallel, they can influence each other. Testing early firmware on the hardware in development unearths bugs that otherwise would be found after tape-out. Likewise, performance optimisation can occur early on.

Figure 4: In a software-driven design flow, firmware development and hardware design proceed in parallel. Source: Lauro Rizzatti

By the time the design is ready for tape-out, hardware and firmware have been optimised and are virtually ready for mass production. When engineering samples return from the foundry, the lab bring-up of hardware and firmware may take only one or two weeks, dramatically accelerating time to market compared with the traditional design flow.

Design verification/validation in a software-driven design flow calls for a high-performance system that is as close as possible to the real-chip environment, with powerful debug capabilities and easy bring-up.

A traditional field programmable gate array (FPGA)-based prototyping board, as depicted in Figure 5, meets some of the requirements, such as speed of execution, but it also brings along a few drawbacks.

Figure 5: An FPGA-based prototyping board offers speed of execution. Source: Lauro Rizzatti

This setup involves the deployment of physical peripherals: an array of NAND Flash devices connects to the NAND controller mapped inside the FPGA, and a set of DDR memories connects to the DDR controller in the FPGA. Peripheral interfaces, such as UART and SPI, and a physical PCIe or NVMe interface to the host complete the setup. While an FPGA board is close to the real hardware and supports the high-speed execution necessary for firmware validation, several disadvantages affect its deployment.

For example, the compiler supports only a synthesisable subset of the register transfer level (RTL) language, and models often need to be rewritten to fit the structure of the FPGAs. Bringing up the real peripheral devices demands substantial effort to cope with electrical, mechanical, power dissipation, electromagnetic interference, and noise issues. Design debug in an FPGA is hard and time-consuming because of the poor internal visibility left after compilation. Adding visibility forces recompilation of the whole design, slowing down the entire verification process.

Fortunately, hardware emulation solves all of these difficulties. A modern emulator possesses several advantages over an FPGA-based prototype. Mapping the design RTL code onto the emulator requires limited code modification. The emulator offers full design visibility similar to a hardware description language (HDL) simulator, but at speeds four to five orders of magnitude faster, approaching that of an FPGA prototype board, which is mandatory for firmware development. Such an emulator becomes a shared resource, accessible remotely when deployed in virtual mode, which is not possible with FPGA-based prototypes.

Emulators can be deployed in in-circuit-emulation (ICE) or virtual mode. The ICE mode follows the same path as the FPGA-based prototyping approach, and suffers from the same problems introduced by the physical peripherals. The virtual mode resolves all of the issues, preserving the benefits of an emulator.

In a virtual setup, all peripherals are modelled in software, giving engineers full control and visibility over the entire emulation environment: not just the design under test (DUT), but also the peripherals and interfaces (Figure 6).

Figure 6: In a virtual setup, engineers control the emulation environment. Source: Lauro Rizzatti

Soft models of the NAND Flash and DDR memory run on the emulator or on the host server. A serial peripheral interface (SPI) NOR Flash model can be synthesised and executed on the emulator. The PC server, connected to the DUT inside the emulator via a virtual PCIe link based on a DPI interface, hosts a QEMU (Quick EMUlator) virtualiser. A UART model connects to an xterm window running on the host server. All virtual models are controllable and visible from a single host server connected to the emulator.
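
To make the soft-model idea concrete, the sketch below shows, in plain C, how a host-resident NAND Flash model might service page reads and programs issued by the DUT across a DPI-style boundary. The function names, page geometry, and the standalone main() used to exercise the model are illustrative assumptions, not the actual API of any commercial model.

/* nand_soft_model.c: illustrative soft model of a NAND Flash die.
 * In a DPI-based virtual setup, the emulated NAND controller would reach
 * these routines through the SystemVerilog DPI boundary; a small main()
 * drives them directly here so the sketch compiles and runs on its own.
 * Names and geometry are assumptions made for illustration only. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096u                 /* assumed 4 KiB pages       */
#define NUM_PAGES 16384u                /* 64 MiB of modelled NAND   */

static uint8_t *nand_array;             /* backing store on the host */

void nand_init(void)
{
    nand_array = malloc((size_t)NUM_PAGES * PAGE_SIZE);
    if (!nand_array) exit(1);
    memset(nand_array, 0xFF, (size_t)NUM_PAGES * PAGE_SIZE);  /* erased = all 1s */
}

/* Conceptually called by the DPI bridge when the DUT issues a page read. */
int nand_read_page(uint32_t page, uint8_t *buf)
{
    if (page >= NUM_PAGES) return -1;
    memcpy(buf, nand_array + (size_t)page * PAGE_SIZE, PAGE_SIZE);
    return 0;
}

/* Conceptually called on a page program; NAND programming only clears bits. */
int nand_program_page(uint32_t page, const uint8_t *buf)
{
    if (page >= NUM_PAGES) return -1;
    uint8_t *dst = nand_array + (size_t)page * PAGE_SIZE;
    for (size_t i = 0; i < PAGE_SIZE; i++)
        dst[i] &= buf[i];
    return 0;
}

int main(void)
{
    uint8_t wr[PAGE_SIZE], rd[PAGE_SIZE];
    nand_init();
    memset(wr, 0xA5, sizeof wr);
    nand_program_page(42, wr);
    nand_read_page(42, rd);
    printf("page 42, byte 0 = 0x%02X\n", rd[0]);   /* prints 0xA5 */
    free(nand_array);
    return 0;
}

Because the backing store is ordinary host memory, the model can be inspected, checkpointed, or pre-loaded with an image at any point in the run, which is the basis of the controllability described above.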

The virtual solution affords several advantages. First, it permits full-signal visibility of the DUT and of the external virtual devices and their interfaces. PCIe traffic can be monitored. The host OS runs a QEMU virtualiser, providing complete control of the behaviour of the BIOS. The contents of the DDR memory, NAND Flash, and SPI NOR Flash can be read and written, and their types and sizes can be set to many different configurations, an impossible feat in a physical setup. The virtual domain offers a convenient debugging environment because everything can be seen and modified.
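
As a rough illustration of that configurability, the soft models can take their geometry from a per-run configuration rather than from soldered parts, and their contents can be backdoor-loaded before the run starts. The structure fields, function names, and the "fw.bin" path below are hypothetical, not the interface of any particular emulation product.

/* backdoor_load.c: illustrative backdoor access to a soft memory model.
 * With physical parts, changing the boot image or the memory density means
 * reworking the board; with software models, the test environment simply
 * rewrites the model's backing store. All names below are hypothetical,
 * and the sizes assume a 64-bit host. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct mem_cfg {              /* swap densities per emulation run        */
    size_t ddr_bytes;         /* e.g. 4 GiB vs 8 GiB of DDR              */
    size_t nand_bytes;        /* raw NAND capacity behind the controller */
    size_t nor_bytes;         /* SPI NOR boot flash size                 */
};

static uint8_t *nor_model;    /* backing store of the SPI NOR soft model */

/* Backdoor-load a firmware image into the modelled boot flash. */
int nor_backdoor_load(const struct mem_cfg *cfg, const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    nor_model = calloc(1, cfg->nor_bytes);
    if (!nor_model) { fclose(f); return -1; }
    size_t n = fread(nor_model, 1, cfg->nor_bytes, f);
    fclose(f);
    printf("loaded %zu bytes of firmware into a %zu-byte NOR model\n",
           n, cfg->nor_bytes);
    return 0;
}

int main(void)
{
    struct mem_cfg cfg = {
        .ddr_bytes  = (size_t)4 << 30,    /* 4 GiB DDR                 */
        .nand_bytes = (size_t)512 << 30,  /* 512 GiB NAND              */
        .nor_bytes  = (size_t)16 << 20,   /* 16 MiB SPI NOR boot flash */
    };
    /* "fw.bin" stands in for the SSD controller firmware image. */
    return nor_backdoor_load(&cfg, "fw.bin") == 0 ? 0 : 1;
}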

In the case of the PCIe interface, for example, VirtuaLAB PCIe from Mentor supports thorough debugging of the interface and monitoring of the traffic via a built-in virtual PCIe analyser (Figure 7).

The virtual setup adds three capabilities that are not viable with a physical setup:

  1. It can be accessed remotely, 24/7, from anywhere in the world.
  2. It can be shared by a multitude of concurrent users.
  3. It maintains the same relative clock frequencies as the actual design, since the peripheral clocks do not have to be slowed down through speed adapters to match the slow-running clock of the emulator. This avoids discrepancies caused by mismatched speed ratios and enables realistic performance evaluations, as illustrated in the sketch after this list.
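
A back-of-the-envelope consequence of keeping those clock ratios intact: cycle counts measured during an emulation run can be scaled directly to the target clock frequency to project real-world throughput. The numbers in this small C sketch are invented purely for illustration.

/* perf_projection.c: projecting throughput from an emulation run.
 * Because the virtual peripherals keep the same clock ratios as the design,
 * cycles counted on the emulator translate directly into design time at the
 * target frequency. All figures below are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const double   f_design_hz = 800e6;     /* target controller clock        */
    const uint64_t cycles      = 64000000;  /* design clock cycles in the run */
    const uint64_t commands    = 20000;     /* 4 KiB I/O commands completed   */

    double seconds   = cycles / f_design_hz;               /* projected time */
    double iops      = commands / seconds;
    double mib_per_s = iops * 4096.0 / (1024.0 * 1024.0);  /* 4 KiB per cmd  */

    printf("projected: %.0f IOPS, %.1f MiB/s over %.1f ms of design time\n",
           iops, mib_per_s, seconds * 1e3);  /* ~250000 IOPS, ~976.6 MiB/s, 80.0 ms */
    return 0;
}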

Figure 7: The Mentor VirtuaLAB PCIe supports thorough debugging of the PCIe interface and monitors traffic through a built-in virtual analyser. Source: Mentor Graphics

The virtual emulation environment closely mirrors the real-chip environment that will exist once the chip comes back from the foundry. It can be used to test the firmware, check design performance, and find bugs that cannot be found with an HDL simulator, and it facilitates architectural experimentation to accommodate different media storage types and interfaces.

Conclusion

To take advantage of the escalating market growth of SSD devices, engineers must accelerate the development cycle and avoid the risk of delivering under-tested and low-performing chips.

Only a state-of-the-art emulation platform can accomplish this demanding mission. With execution speed comparable to an FPGA prototyping board and a fast, straightforward setup, an emulator deployed in virtual mode provides a suite of debugging capabilities that make it the centrepiece of a software-driven design flow.

The SSD industry has proven it is possible to shave two to four months off the development cycle with emulation. When combined with deployment as a shared and remotely accessible resource, the return on investment (ROI) of an emulation system can certainly be justified.
