As every engineer learns at an early stage, clock edges must be obeyed. In the digital domain, synchronization through global and local clock trees, slew rates, and rise/fall times all combine to make products ‘tick’. The concept of using both the rising and falling edges of the clock signal to increase data throughput (so-called double data rate, or DDR) revolutionized digital design.
Today, DDR is used on many of the interfaces in computer systems, one of which relates to the way the processor interfaces to memory. Every new application pushes the limits of this interface. The very latest, such as artificial intelligence (AI), machine learning (ML) and data mining, will push it even harder.
Development of DDR5, the latest version of the DDR interface targeting high-bandwidth SDRAM, started in 2017. The JESD79-5 DDR5 SDRAM standard was released in July 2020, later than expected and all the more hotly anticipated for it.
What does DDR5 bring?
The main features promised for DDR5 over DDR4 were reduced power and double the bandwidth. This means an increase in per-pin data rate from 3.2 Gbps to 6.4 Gbps and a commensurate increase in clock frequency from 1.6 GHz to 3.2 GHz. The lower power aspect is addressed through a slight (0.1 V) reduction in the supply voltage, from 1.2 V to 1.1 V.
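The arithmetic behind those headline numbers is simple: DDR transfers one bit on each clock edge, so the per-pin data rate is twice the clock frequency. A quick sketch using the figures above:

```python
# DDR transfers a bit on both the rising and the falling clock edge,
# so the data rate (Gbps per pin) is twice the clock frequency (GHz).
def ddr_data_rate_gbps(clock_ghz: float) -> float:
    return 2.0 * clock_ghz

print(ddr_data_rate_gbps(1.6))  # DDR4: 3.2 Gbps
print(ddr_data_rate_gbps(3.2))  # DDR5: 6.4 Gbps
```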
This comes with a shift in power management, which moves from the motherboard onto the dual in-line memory module (DIMM). The maximum SDRAM die density also increases, from 16 Gb to 64 Gb, enabling higher-capacity memory modules. This is complemented by a change in channel architecture: each DIMM now carries two independent 40-bit channels instead of DDR4's single 72-bit channel. The total number of data bits remains the same, but delivering them through two channels has implications for how the clock signals are generated and distributed. This is intended to improve signal integrity (SI).
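The channel arithmetic is worth checking explicitly. Assuming the usual 32-data-plus-8-ECC split per DDR5 subchannel, the data width per DIMM works out the same as DDR4:

```python
# DDR4 DIMM: one 72-bit channel = 64 data bits + 8 ECC bits.
ddr4_data_bits = 72 - 8

# DDR5 DIMM: two 40-bit channels, each 32 data bits + 8 ECC bits.
ddr5_data_bits = 2 * (40 - 8)

# Data width per DIMM is unchanged; only the channel structure differs.
assert ddr4_data_bits == ddr5_data_bits == 64
```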
While the lower supply voltage will reduce power, it comes with a smaller noise margin, which impacts the design. However, DDR5 also moves the power management IC (PMIC) off the motherboard and onto the module. This is another significant change, which puts power management, voltage regulation, and power-up sequencing physically closer to the memory devices on the module. This should also help with power integrity (PI) and provide more control over how the PMIC operates.
Design challenges: PI and SI
It is clear that signal integrity has been considered during the standard’s development and moving the PMIC to the module should also bring its own advantages. However, designers will still need to consider the holistic impact of power-aware signal integrity. A traditional workflow would assume an ideal power distribution network (PDN) and may ignore the effects that coupled signal, power and ground planes have on the overall signal integrity across a PCB. If power integrity and signal integrity are analyzed separately, power-aware signal integrity issues may be missed.
This includes simultaneous switching noise (SSN), akin to ground bounce in a PCB. Effectively, SSN shifts the potential of the ground plane, or it may manifest as droop on the power rail when multiple sinks switch at the same time and draw more current than the PDN can deliver in that instant. High-speed parallel buses, such as DDR, can be particularly affected by SSN when multiple signals switch together (Figure 1).
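A first-order feel for SSN comes from the classic ground-bounce relation V = N·L·(di/dt): N drivers, each slewing a current di in time dt, share an effective ground inductance L. The values below are purely illustrative assumptions, not figures from any specification:

```python
# Ground bounce from simultaneous switching: v = n * l * di / dt.
# All values below are illustrative assumptions, not JEDEC figures.
def ground_bounce_v(n_drivers: int, l_ground_h: float,
                    di_a: float, dt_s: float) -> float:
    return n_drivers * l_ground_h * di_a / dt_s

# 32 DQ drivers, 0.1 nH effective shared ground inductance,
# each slewing 5 mA in 50 ps:
v = ground_bounce_v(32, 0.1e-9, 5e-3, 50e-12)
print(f"{v:.2f} V")  # 0.32 V -- a large fraction of a 1.1 V rail
```

Even with modest per-driver numbers, a wide bus switching in unison can generate bounce that is a significant fraction of the supply, which is why SSN dominates on parallel interfaces like DDR.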
Figure 1 DDR5 signal quality is compared to the JEDEC specification using both 2D and 3D eye diagrams. Source: Cadence Design Systems
Accurately modeling the effects of SSN is not simple, and most traditional EDA tools tackle it using separate power-aware models; for instance, IBIS 5.0+ models and interconnect models. Most signal integrity tools cannot carry out SSN analysis until layout is complete, because that is when the power-aware interconnect model becomes available. This means noise analysis during the design phase is normally limited to design-rule and geometric-rule checking.
The FDTD approach
There is a fundamental disconnect between signal analysis and the PDN in most simulation technologies used today. This is a legacy that carries other shortcomings because the underlying simulation technology was developed long before parallel buses running at Gbps speeds, like DDR5, were conceived.
SPICE models vary in their complexity: time-domain simulations can produce an accurate RLC model, but many assume an ideal ground, yielding time-domain models based on simple frequency responses extracted through simulation. This trades accuracy for expediency. For higher frequencies, engineers move to S-parameters, which can be created with a hybrid solver. Both SPICE models and S-parameters are useful, but each has limitations; S-parameters, for example, contain no low-frequency or DC information.
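The DC limitation is easy to see with a toy model. For a series impedance Z in a Z0 reference system, S21 = 2·Z0/(2·Z0 + Z); a swept extraction that starts above DC simply never captures the DC operating point. The series R-L values here are illustrative:

```python
import numpy as np

Z0 = 50.0  # reference impedance, ohms

def s21_series(z):
    # Two-port S21 of a series impedance z in a Z0 system.
    return 2 * Z0 / (2 * Z0 + z)

# Toy series R-L interconnect model (illustrative values).
R, L = 1.0, 1e-9
f = np.logspace(8, 10, 201)        # sweep 100 MHz .. 10 GHz: no DC point
z = R + 1j * 2 * np.pi * f * L
s21 = s21_series(z)

# The swept data starts at 100 MHz; the DC behaviour (set here by R
# alone) must be restored separately, e.g. by DC-point extrapolation.
```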
The finite difference time domain method, or FDTD, works with hybrid solvers to extend the coverage to signal, power and ground lines. Tools that integrate and combine the output of several solvers to address the circuit layout, along with transmission lines and electromagnetic fields, are better equipped to provide time-varying interactions between data and power/ground planes. An example would be the Sigrity SPEED2000 engine which uses the FDTD method for analyzing layouts for IC packages and PCBs.
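The core of FDTD is a leapfrog update: electric and magnetic fields live on staggered grids and are advanced alternately in time. A minimal 1-D free-space sketch illustrates the idea (this is not, of course, what a production engine like SPEED2000 does internally):

```python
import numpy as np

# Minimal 1-D FDTD (Yee leapfrog): E and H live on staggered grids and
# are updated alternately. Purely illustrative free-space example.
nz, nt = 200, 400
c = 299_792_458.0        # speed of light, m/s
dz = 1e-3                # 1 mm cells
dt = dz / (2.0 * c)      # time step at half the Courant limit
sc = c * dt / dz         # Courant number (0.5 here)
imp0 = 376.73            # free-space wave impedance, ohms

ez = np.zeros(nz)        # E-field nodes
hy = np.zeros(nz - 1)    # H-field nodes, offset by half a cell

for n in range(nt):
    hy += np.diff(ez) * sc / imp0            # H update from curl of E
    ez[1:-1] += np.diff(hy) * sc * imp0      # E update from curl of H
    ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

# The pulse propagates outward from the centre of the grid.
```

Because the updates are local in both time and space, the method naturally captures time-varying coupling between signal traces and power/ground planes in the same simulation.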
Simulation for signoff
The FDTD method supports a rapid design flow, giving access to power-aware signal integrity analysis with multi-domain rule checks and simulation. For final signoff, however, engineers may still turn to a 3D full-wave modeling approach, as this provides the needed accuracy (Figure 2).
Figure 2 The signal integrity signoff process requires accurate 3D modeling of coupled signals across multiple layers and multiple fabrics. Source: Cadence Design Systems
That accuracy comes at the price of compute power and simulation time. This can be addressed through segmentation, but that only breaks the problem into smaller pieces; those pieces still need to be processed.
This is where parallelization provides real performance benefits. By using a method based on finite element analysis (FEM), the task is broken down into smaller parts that can be distributed across a massively parallel architecture, such as a data center or cloud server. The results of the analysis are recombined into an S-parameter model based on a frequency response. The FEM is provided by the Clarity 3D Solver, while Sigrity technology can then analyze the models.
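One embarrassingly parallel piece of such a flow is the frequency sweep: each frequency point can be solved independently and the results recombined, in order, into one response. In this toy sketch, `solve_at` is a stand-in for a real per-frequency field solve, and a production flow would distribute work across processes or cluster nodes rather than threads:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

# Stand-in for an FEM field solve at one frequency: here just the
# transfer function of a toy series R-L, shunt-C interconnect model.
def solve_at(f_hz: float) -> complex:
    R, L, C = 1.0, 1e-9, 1e-12
    w = 2.0 * np.pi * f_hz
    return 1.0 / (1.0 - w * w * L * C + 1j * w * R * C)

freqs = np.linspace(1e8, 2e10, 64)

# Each frequency point is independent, so the sweep parallelizes
# trivially; pool.map preserves order, so the recombined response
# lines up with the frequency axis.
with ThreadPoolExecutor(max_workers=8) as pool:
    response = np.array(list(pool.map(solve_at, freqs)))
```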
Power-aware signal integrity for DDR5
Traditional signal analysis often assumes the PDN is ‘ideal’, a choice made for convenience and expediency rather than accuracy. As we move into the realm of DDR5, with 6.4-Gbps data rates and 3.2-GHz system clocks, the potential for power-aware signal integrity issues becomes more significant.
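The squeeze is easy to quantify: doubling the data rate halves the bit time, and the lower rail means the same absolute noise consumes a larger fraction of the voltage margin. The 50 mV noise figure below is an assumption for illustration only:

```python
# Unit interval (bit time) shrinks with data rate.
ui_ddr4_ps = 1e12 / 3.2e9   # 312.5 ps per bit at 3.2 Gbps
ui_ddr5_ps = 1e12 / 6.4e9   # 156.25 ps per bit at 6.4 Gbps

# The same absolute rail noise is a bigger slice of a smaller supply.
noise_v = 0.050                    # 50 mV of supply noise (assumed)
frac_ddr4 = noise_v / 1.2          # ~4.2% of the DDR4 rail
frac_ddr5 = noise_v / 1.1          # ~4.5% of the DDR5 rail

assert ui_ddr5_ps == ui_ddr4_ps / 2
assert frac_ddr5 > frac_ddr4
```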
If engineers wish to adopt the performance offered by DDR5, it will become increasingly necessary to apply power-aware signal integrity analysis at all of the critical points in a system: the chip, the package, and the PCB. This level of analysis can introduce huge demands on the underlying compute platform, not to mention the total design time.
No single approach can provide the coverage needed to fully address power-aware signal integrity analysis. A holistic approach is recommended, and the primary requirement is a hierarchy of tools that ultimately analyze the signal, power, and ground as a complete electrical system.
Within that hierarchy, designers can use electrical rule checking (ERC) with estimated noise coupling between power and ground planes. However, ultimately the solution must include power-aware signal integrity analysis that incorporates fast and accurate field solvers for interconnect extraction.
This article was originally published on EDN.
Brad Griffin is product management group director for multiphysics system analysis at Cadence Design Systems.