Hypnotizing test engineers with figures of dubious merit

Article By : Ransom Stephens

Will complicated automated tests enhance design innovation or prevent it?

As standards increasingly adopt complicated figures of merit for high-speed serial data links that can be calculated with one touch of a button, like channel operating margin (COM), transmitter dispersion eye closure quaternary (TDECQ), and signal-to-noise-and-distortion ratio (SNDR), with effective return loss (ERL) coming to a specification near you, do we risk test engineers losing sight of their jobs? Can these aggregate measurements provide insight? Is it possible to understand their uncertainties?

Since the dawn of time, standards documents have specified maximum and/or minimum values for design parameters to assure product performance and compatibility: maximum allowed values for jitter, noise, insertion loss, and rise and fall times; minimum values for extinction ratio, eye height, and eye width. Everywhere you looked, a simple measurement screamed yay or nay, stay or go, good or bad, back to the drawing board or off to the bar. Design engineers followed simple constraints and life was okay. But in the last few years, the constraints have become both more complex and more flexible.

Design space
The problem with simple, cut-and-dried requirements is that they don’t always permit the optimal combination of design parameters. A design space describes the range of possible values for different design parameters in a way that permits tradeoffs between them. Barter a little random jitter (RJ) for some deterministic jitter (DJ), swap some insertion loss (IL) for an extra tap of equalization, increase the voltage swing a few millivolts in exchange for some noise. You’re constantly making tradeoffs.

Think of all the parameters that go into your design—whether it’s a network element or a system—as a multi-dimensional space. The perfect design consists of that combination of parameters that (a) minimizes cost, (b) minimizes power consumption, (c) meets heating and longevity requirements, and (d) complies with everything in the specification, especially the ability to operate at or better than the specified bit error ratio (BER). Engineers who have the cunning and experience to find that optimum combination get paid serious money.

The question is: Do complex measurements that indicate overall signal integrity help or hinder the design process? You can probably hear Eric Bogatin answer the way he always does: “It depends.” In this case, it depends on who is performing the measurement.

COM, TDECQ, SNDR, and ERL all process and combine measurements into figures of merit
COM is assembled from a long list of assumed and measured parameters: presumed properties of crosstalk aggressors, reference receiver properties, transmitter and receiver equalization schemes, signal properties including RJ and DJ, spectral noise density, transmitter noise, the package transfer function, and, most importantly, the S-parameters of the channel in question. A single-bit response is calculated, an intermediate figure of merit (FOM) is formulated from all of the input parameters, the equalization variables are optimized, and then, finally, the signal-to-noise-like value of COM is calculated.
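To make the flavor of that calculation concrete, here is a deliberately reduced, hypothetical Python sketch of a COM-like figure of merit. It collapses the standard's statistical machinery into a root-sum-of-squares over the residual cursors of the single-bit response; the function name, arguments, and the RSS shortcut are my illustrative assumptions, not the specified procedure.

```python
import numpy as np

def com_like_db(sbr, main_cursor, sigma_noise, sigma_xtalk=0.0):
    """Reduced COM-style figure of merit (illustrative only).

    sbr: single-bit response sampled once per unit interval
    main_cursor: index of the main cursor within sbr
    The real procedure also optimizes TX/RX equalization taps, applies
    package models, and builds full amplitude probability distributions
    rather than this root-sum-of-squares shortcut.
    """
    a_signal = sbr[main_cursor]                # signal amplitude at main cursor
    isi = np.delete(sbr, main_cursor)          # residual cursors act as ISI
    a_interference = np.sqrt(np.sum(isi**2) + sigma_noise**2 + sigma_xtalk**2)
    return 20 * np.log10(a_signal / a_interference)
```

Even this toy version hints at the coupling between inputs; in the real algorithm, several of those inputs are themselves the outputs of an optimization.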

TDECQ measures the performance of optical transmitters from a captured waveform by asking how much noise would have to be added to an ideal signal for an ideal transmitter to perform as badly as the real one. You can measure TDECQ by pushing a button on a suitably equipped oscilloscope. Easy, or so it seems. If you set out to measure TDECQ from scratch, you’d need to find the minimum of an equation with at least a dozen variables while satisfying at least two constraints (Figure 1).

Figure 1 TDECQ measurement: (a) a simulated ideal signal with applied Gaussian noise, (b) a real test signal, and (c) a TDECQ analysis display.
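As an illustration of the underlying idea only (the 802.3 procedure also optimizes a reference equalizer's taps and evaluates the eye at prescribed time slices), here is a hypothetical Python sketch that compares how much Gaussian noise an ideal eye and a measured eye can each tolerate before hitting the PAM4 target symbol error ratio:

```python
import numpy as np
from scipy.stats import norm

TARGET_SER = 4.8e-4   # PAM4 target symbol error ratio in recent 802.3 clauses

def tolerable_sigma(eye_half_opening):
    """Gaussian noise sigma that a single decision threshold with this
    half-opening tolerates at TARGET_SER (one-threshold simplification)."""
    return eye_half_opening / norm.isf(TARGET_SER)

def tdecq_like_db(ideal_half_opening, measured_half_opening):
    """Illustrative penalty: ratio of the noise the ideal signal tolerates
    to the noise the measured signal tolerates, in dB."""
    return 10 * np.log10(tolerable_sigma(ideal_half_opening) /
                         tolerable_sigma(measured_half_opening))

# Example: a measured eye 70% as open as the ideal costs about 1.5 dB
print(tdecq_like_db(1.0, 0.7))
```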

SNDR compares electrical signal strength to the combination of random noise and harmonic distortion. It is also a one-touch oscilloscope measurement based on a captured waveform. If you didn’t buy SNDR software, you’d have to write code to perform a linear fit to the waveform, extract the pulse response from a combination of the fit and the ideal transmitted symbols, and calculate the deviations of the fit from the ideal waveform.
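That fit is the most codeable of the three, so here is a minimal, hypothetical sketch assuming one sample per symbol and a known transmitted pattern. The 802.3 definition separates the fit error from a separately measured noise term; this version lumps them together, and the tap count and wrap-around edge handling are illustrative shortcuts:

```python
import numpy as np

def sndr_db(waveform, symbols, taps=16):
    """Least-squares linear fit of a pulse response p to a captured
    waveform, then SNDR = p_max^2 / variance of the residual."""
    n = len(waveform)
    # Design matrix: column k is the symbol stream delayed by k symbols
    # (np.roll wraps around; fine for illustration, wrong at the edges)
    X = np.column_stack([np.roll(symbols, k)[:n] for k in range(taps)])
    p, *_ = np.linalg.lstsq(X, waveform, rcond=None)  # fitted pulse response
    residual = waveform - X @ p                       # noise plus distortion
    return 10 * np.log10(np.max(p)**2 / np.var(residual))
```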

Each of these figures of merit has, well, merit
But what happens when the next generation of test engineers sleepwalks through measurements, hitting buttons and believing the results? The key job requirement in test is understanding the implications of a measurement, and those implications can’t be understood without understanding what exactly is being measured and its precision. The true meaning of a complicated figure of merit is hidden in its algorithm.

I’ll wager that every one of us came out of freshman chemistry lab with little red marks on our graphs where an irate teaching assistant wrote something like: “No error bars? A measurement independent of its uncertainty is meaningless!!!” We should forgive the overworked, underpaid chemistry grad student for using so many exclamation points, but she was right!

When I left particle physics research and joined the EE industry, I was shocked that standard uncertainties are rarely quoted and nearly always ignored. It’s one thing to accept a measurement as sufficiently accurate without spending the time required to understand what “sufficiently accurate” means, but another thing altogether to put a set of complex measurements in an algorithm, shake it, fit it, stir it, optimize it, and then presume you have the faintest idea of its accuracy.

Calculating the uncertainty of a COM, TDECQ, or SNDR measurement would require more software than the measurement itself.
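To see why, consider the brute-force route: Monte Carlo propagation in the spirit of GUM Supplement 1, re-evaluating the figure of merit on randomly perturbed inputs. The sketch below is a hypothetical wrapper, not part of any instrument's software; it treats the figure of merit as an opaque callable, such as the com_like_db toy above:

```python
import numpy as np

def monte_carlo_uncertainty(fom, inputs, sigmas, n_trials=1000, seed=0):
    """Estimate the standard uncertainty of an opaque figure of merit by
    re-evaluating it on inputs perturbed by their standard uncertainties."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_trials):
        perturbed = [x + rng.normal(0.0, s, size=np.shape(x))
                     for x, s in zip(inputs, sigmas)]
        results.append(fom(*perturbed))
    return np.mean(results), np.std(results)  # central value, uncertainty
```

A thousand re-runs of a calculation that already includes an optimization, for every channel, is exactly the cost the one-touch button hides. And that assumes the input uncertainties are known and uncorrelated, which for S-parameters alone is a project in itself.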

COM, TDECQ, and SNDR all offer insights into the performance of network elements. At their best, they allow designers the freedom to innovate designs with new ways of balancing different performance parameters. At their worst, they threaten to lull designers into a stupor of assumed, but unwarranted understanding of the performance of their designs.

How do these compound quantities affect your work? Are one-touch measurements of complex quantities black boxes, or do you get the insight you need from them?

Ransom Stephens is a technologist, science writer, novelist, and Raiders fan, even when his team loses.
