Why acquisition memory is important

Article By : Arthur Pini


The most important specifications for digitizers and oscilloscopes are bandwidth and sample rate. Acquisition memory length is not a premier specification, but it does dramatically affect both bandwidth and sample rate.

Digitizing instruments, including digitizers and oscilloscopes, capture data and store it in the instrument’s acquisition memory. This memory sits just behind the analog-to-digital converters and operates at the digitizing rate. The size of the acquisition memory affects the instrument’s sampling rate, the maximum record length, and the processing speed. Setting the memory size represents one of those ever-present engineering trade-offs.

Starting with the basics, an oscilloscope or digitizer’s maximum sampling rate has to be greater than 2× the analog bandwidth of the instrument. This is a statement of the Nyquist criterion governing all digitizing instruments. Since the front-end frequency response generally has a finite roll-off, the sampling rate is set higher than twice the nominal bandwidth to minimize aliasing of out-of-band signal components. Generally, digitizing instruments use a sampling-rate-to-bandwidth ratio of at least 2.5:1.

The acquisition record length, the duration of the acquired signal, is directly proportional to the length of the acquisition memory used and can be expressed by the equation:

Trec = N * tS = N/fS

where:
Trec is the time duration of the acquired signal in seconds (s),
N is the length of the acquisition memory in samples (S),
tS is the sample period in seconds per sample (s/S), and
fS is the sample rate, the reciprocal of the sample period, in samples per second (S/s).

The duration of the acquisition is equal to the number of memory samples, or points, multiplied by the sampling period or, equivalently, divided by the sample rate.
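As a quick check of this relationship, here is a minimal Python sketch; the 10 MS memory length and 10 GS/s sample rate are illustrative values rather than the settings of any particular instrument:

def record_duration(n_samples, sample_rate):
    # Acquisition duration: Trec = N / fS (seconds)
    return n_samples / sample_rate

# Example: 10 MS of acquisition memory sampled at 10 GS/s captures 1 ms of signal.
print(record_duration(10e6, 10e9))  # 0.001 s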

Acquisition memory in most oscilloscopes is available in blocks that are multiples of 1, 2, 2.5, and 5; these settings, combined with the complementary available sample rates, result in time per division settings that are multiples of 1, 2, and 5. The intention is to make it easy to read time measurements off the screen by counting graticule divisions and multiplying them by easy-to-compute factors.

As the acquisition time is increased by increasing the time per division setting of the oscilloscope, more memory is used and the acquisition duration increases proportionally. When the memory length reaches its maximum limit, the only way to increase the record length is to decrease the sample rate, as shown in Figure 1.

Figure 1 This plot shows sample rate as a function of the time per division setting with maximum memory length as a parameter.

The figure shows that, for a device with a maximum sampling rate of 10 GS/s, increasing the time per division setting keeps the sample rate at its maximum value until the maximum acquisition memory is reached. Increasing the time per division setting further causes the sample rate to fall. Graphs for maximum memory lengths of 50 MS, 5 MS, and 500 kS are shown. It is worth noting that the more acquisition memory is available, the longer the maximum sample rate can be maintained as the acquisition time is increased.
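The behavior plotted in Figure 1 can be sketched with a short calculation: the sample rate stays at its maximum until the requested record would exceed the available memory, after which it falls as memory divided by record duration. The 10-division screen assumed below is typical but instrument dependent:

def sample_rate(time_per_div, max_rate=10e9, max_memory=50e6, divisions=10):
    # Total acquisition time requested by the timebase setting
    record_duration = divisions * time_per_div
    # The rate becomes memory-limited once max_rate * record_duration exceeds max_memory
    return min(max_rate, max_memory / record_duration)

for t_div in (1e-6, 100e-6, 1e-3, 10e-3, 100e-3):
    print(f"{t_div:g} s/div -> {sample_rate(t_div):.3g} S/s")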

Once the sampling rate begins to fall, the user has to be aware of the instrument’s effective bandwidth. The effective bandwidth of a digitizing instrument is the lesser of the analog bandwidth or one half the sample rate. So, a 1-GHz oscilloscope operating with a sample rate of 1 GS/s has an effective bandwidth of 500 MHz. Any signal component above 500 MHz will be aliased. Keep in mind also that the time resolution of the instrument is now reduced. Accurate measurements of time-related parameters like fall time can be affected; the slope of the measured edge is poorly defined if there are only a few samples on that edge.
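Stated as a simple rule (a sketch, using the article’s 1-GHz example):

def effective_bandwidth(analog_bw, sample_rate):
    # Effective bandwidth is the lesser of the analog bandwidth and fS/2 (Nyquist)
    return min(analog_bw, sample_rate / 2)

print(effective_bandwidth(1e9, 1e9))  # a 1 GHz scope at 1 GS/s -> 5e8 (500 MHz)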

Let’s look at an example of how setting up memory usage to maximize sample rate can improve a measurement. An oscilloscope with a maximum sampling rate of 10 GS/s is set up to acquire several packets of a UART signal as shown in Figure 2.

Figure 2 Three packets of a UART signal are acquired with a record length of 10 MS. The inter-packet spacing is read by a cursor as 43.8 ms and the packets are 2 ms long. At this 10 ms per division timebase setting, the sampling rate has fallen to 100 MS/s.

The oscilloscope timebase, set to 10 ms/division, is using 10 MS of memory and the sample rate has been reduced to 100 MS/s. The effective bandwidth for this setting is half the sample rate or 50 MHz.

Note that most of the waveform is taken up with inter-packet ‘deadtime.’ One way to increase the sampling rate is to eliminate the inter-packet deadtime, which can be done by acquiring the signal in sequence mode. This segments the acquisition memory and captures only the packets, eliminating most of the deadtime and reducing the amount of memory used. Figure 3 shows the oscilloscope set up in sequence mode, capturing three segments, each of 5 ms duration, using 2.5 MS of memory. The effect of reducing the memory length to 2.5 MS is to increase the sampling rate from 100 MS/s to 500 MS/s.
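The arithmetic behind the improvement, assuming each 5 ms segment is allocated 2.5 MS of memory and a 10-division screen for the single-shot case (a sketch of the numbers quoted above):

# Single-shot capture: 10 MS spread over the full 100 ms screen (10 ms/div x 10 div)
single_shot_rate = 10e6 / 100e-3      # 100 MS/s
# Sequence mode: 2.5 MS concentrated in a 5 ms segment
segment_rate = 2.5e6 / 5e-3           # 500 MS/s
print(single_shot_rate, segment_rate)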

Figure 3 Use sequence acquisition mode to decrease the memory used and increase the sampling rate. The sampling rate has been increased to 500 MS/s.

While there appears to be very little difference in the signal at either sampling rate because the signal’s bandwidth is about 14 MHz, there is a more obvious difference if we look at the measurement of the signal’s fall time (Figure 4).

The fall time is measured at both sampling rates. The waveform acquired at 100 MS/s has about six samples on the edge, while the waveform acquired at 500 MS/s has 30 samples on the edge. The resulting measurements show means that differ by about 10%. The key indicator is that the standard deviation for the data acquired at 500 MS/s is 573 ps, while the other measurement shows a standard deviation of 1.7 ns. The standard deviation measures the spread of the measured values about the mean, and it is a good indication of the uncertainty of the measurements. Basically, measurements made at the higher sampling rate have less uncertainty. Remember that the sampling rate varies directly with the acquisition memory length.
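The sample counts quoted above follow directly from the edge duration and the sample rate; assuming a fall time on the order of 60 ns (inferred from the six samples reported at 100 MS/s), the count scales with the sampling rate:

fall_time = 60e-9                     # assumed edge duration, in seconds
for fs in (100e6, 500e6):
    # Approximate number of samples landing on the edge: n = fall_time * fS
    print(f"{fs:g} S/s -> about {fall_time * fs:.0f} samples on the edge")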

Figure 4 Comparing the fall time measurement at two different sampling rates, 100 MS/s and 500 MS/s, shows a lower standard deviation for measurements made at 500 MS/s.

No matter how much memory an instrument has, there will be measurements for which there is not enough memory to capture the signal directly. In that case, it may be necessary to break the measurement up into separate timing epochs. Figure 5 is an example of a waveform that has both high-frequency and low-frequency components.

Figure 5 This is the initial measurement of a remote control for an entrance gate that uses on-off keying of a 390-MHz carrier to encode identification information.

The top trace in the figure is the initial pulse digitized at 10 GS/s. A zoom view (Trace Z2) of that same waveform is the red trace in the bottom grid showing a sine wave. The parameter P2 measures the frequency as nominally 390 MHz. The problem begins when the whole waveform is acquired at 5 ms per division in the second trace from the top.

A zoom trace of that acquisition appears in the third trace from the top, displayed at 100 μs per division. Note that the envelope is identical to the first acquisition. There is a difference, however; a zoom of that trace, Z3, the blue trace in the bottom grid, shows a ragged sine wave with a frequency of 110 MHz. Even with a 25 MS maximum memory length, the 50 ms acquisition can only manage a sample rate of 500 MS/s.

This is obviously a problem, as 500 MS/s is not greater than two times the carrier frequency of 390 MHz. That’s why the frequency of the carrier appears to be 110 MHz; it is aliased. Sampling is a mixing operation, and the 390-MHz carrier mixed with the 500 MS/s sampling rate is down-converted, yielding a difference frequency of 110 MHz, the aliased carrier frequency.
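The aliased frequency can be predicted by folding the carrier about the sampling rate; a minimal sketch:

def alias_frequency(f_signal, f_sample):
    # Frequency at which an undersampled tone appears, folded into the 0 to fS/2 band
    f = f_signal % f_sample
    return min(f, f_sample - f)

print(alias_frequency(390e6, 500e6) / 1e6)   # 110.0 MHz, matching the observed carrier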

The measurements required can be broken down into two categories. The first is the RF measurements, consisting mainly of measuring the carrier frequency. The second is to evaluate the low-frequency modulation. The first measurement can be made by acquiring the RF bursts separately and measuring the carrier, as was done with the top trace and the frequency parameter, P2.

The second set of measurements can be made on the aliased signal containing the full message. This works because the signal is very narrowband, having energy only around 390 MHz. The aliased signal can be peak detected, and the demodulated signal envelope will provide information about the encoding as well as the gating characteristics of the carrier. The analysis is shown in Figure 6.

Figure 6 Demodulating and measuring the signal envelope requires measuring attack time, decay time, and width of the demodulated signal envelope. The histogram of the envelope width verifies that there are three distinct pulse widths used in the serial encoding.

The acquired waveform is shown in the top grid. It consists of an RF carrier on-off keyed with what appears to be a pulse-width-modulated signal. By peak detecting the acquired signal, the modulating signal can be recovered. Peak detection is accomplished by taking the absolute value of the modulated RF signal and then low-pass filtering it. Math trace F1 does that processing, combining the absolute value followed by the enhanced resolution (ERES) low-pass filter. This is displayed in the second trace from the top. The third trace from the top shows the demodulated signal overlaid on the modulated carrier. Note how well the demodulated signal tracks the RF signal.
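The same peak-detection idea can be sketched offline in a few lines of Python; a Butterworth low-pass filter stands in here for the oscilloscope’s ERES filter, and the signal below is a synthetic on-off keyed carrier, not the captured waveform:

import numpy as np
from scipy.signal import butter, sosfiltfilt

def envelope(signal, sample_rate, cutoff_hz=10e6):
    # Rectify (absolute value), then low-pass filter to recover the keying envelope
    rectified = np.abs(signal)
    sos = butter(4, cutoff_hz, fs=sample_rate, output='sos')
    return sosfiltfilt(sos, rectified)

# Synthetic example: a 390 MHz carrier keyed on and off, sampled at 10 GS/s
fs = 10e9
t = np.arange(0, 20e-6, 1 / fs)
keying = (t % 10e-6) < 5e-6                      # 5 us on, 5 us off
carrier = np.sin(2 * np.pi * 390e6 * t) * keying
env = envelope(carrier, fs)                      # tracks the keying waveform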

Measurements are now made on the extracted modulation signal, including the rise and fall times as well as the width of the first pulse, and those measurements are repeated for all 21 pulses in the serial data stream. The rise and fall times represent the attack and decay times of the keyed carrier. The histogram of the pulse width measurements, in the bottom grid, shows that there are only three distinct pulse widths: 500 μs, 1 ms, and 1.5 ms.

Even though the oscilloscope, due to limited memory, cannot resolve the carrier when the full signal is acquired, it is still possible to gain a tremendous amount of information from the signal, but you have to be aware of what is happening.

Acquisition memory length is an important specification that can affect a digitizing instrument’s sampling rate and bandwidth. Memory length determines the acquisition duration at any fixed sampling rate. The longer the memory length, the greater the time per division setting that can be supported at the highest sampling rate. Once the maximum amount of memory is engaged, further increases in the time per division setting will cause the sampling rate to decrease, reducing the effective bandwidth of the instrument.

This article was originally published on EDN.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.

