Variable attenuators for receiver sensitivity testing

By Jean-Jaques (JJ) DeLisle

The goal of receiver sensitivity testing is to determine the lowest-power signal a receiver can detect under a variety of possible conditions.

Wireless communication and sensing technologies are being integrated into nearly all electrified systems and are part of the justification for electrifying many other systems and technologies. From refrigerators to heavy machinery, wireless technologies are becoming an essential element of the mix to enhance efficiency, increase safety, improve security, bolster performance, and minimize maintenance and downtime.

With the growing adoption of multi-input multi-output (MIMO) and beamforming technologies, many new wireless systems have more complex RF front-ends (RFFEs) and may even contain several receivers in a single device. As a result, the number of wireless receivers that need to be tested is exploding.

Receiver sensitivity testing

One of the basic tests common to all receiver technologies is receiver sensitivity testing. Its goal is to determine the lowest-power signal that a receiver can usefully detect under a variety of possible conditions. For most modern wireless systems, the communications technology is digital, so the figure of merit of concern is generally the bit-error rate (BER). In the case of packet-based communications, the figure of merit may be the packet-error rate (PER).

The concept is that as the signal level drops toward the noise floor of the receiver, the error rate rises until it exceeds a set threshold. The error-rate limits and most other aspects of a given receiver test are typically dictated by the wireless standard; passing these tests is required for standards certification and for the product to be sold legally in most markets.
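To make the procedure concrete, the sketch below shows one common way such a search could be automated. It is written in Python, and the set_signal_power() and measure_ber() callables are hypothetical placeholders for whatever instrument-control API a real test system exposes; the threshold and step values are assumed examples, not taken from any particular standard.

def find_sensitivity(set_signal_power, measure_ber,
                     start_dbm=-60.0, step_db=1.0,
                     ber_limit=1e-3, floor_dbm=-120.0):
    """Step the applied signal power down until the measured BER exceeds
    the limit set by the wireless standard; the last passing level is
    reported as the receiver sensitivity."""
    level = start_dbm
    last_passing = None
    while level >= floor_dbm:
        set_signal_power(level)      # program the generator/attenuator
        ber = measure_ber()          # e.g., compare known vs. received bits
        if ber > ber_limit:
            break                    # error rate exceeded the threshold
        last_passing = level
        level -= step_db             # lower the power and try again
    return last_passing              # lowest level that still passed

# Purely illustrative usage with a crude simulated receiver that starts
# failing below -102 dBm:
current = {"dbm": 0.0}
set_power = lambda dbm: current.update(dbm=dbm)
meas_ber = lambda: 0.0 if current["dbm"] >= -102.0 else 0.5
print(find_sensitivity(set_power, meas_ber))   # prints -102.0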


Figure 1 A signal generator stimulates the antenna port of the receiver and works as an ideal transmitter. Source: Keysight

Receiver selectivity testing is sometimes lumped in with receiver sensitivity testing. Though similar, selectivity testing is done to determine the performance of a given receiver in the presence of undesirable signals, such as adjacent-channel and co-channel interference. Some automated test systems built for rapid, efficient receiver sensitivity testing may also perform selectivity and other receiver tests, all with the same setup, to maximize testing throughput.

Variable attenuators enhance sensitivity testing

To properly test receiver sensitivity, there must be precise control of both the generated signal power and the level at the receiver input. Some loss and noise are inevitably added between the test base station or signal generator and the receiver in the test signal chain, which limits the signal levels usable in a given test environment with a given receiver.

Test base station hardware and signal generators are typically not at their best at extremely low output powers. These units are generally designed to operate at higher signal power levels for optimum linearity and minimal distortion. This creates an issue when attempting to perform receiver sensitivity testing at very low power levels near the noise floor of the test base station or signal generator.

A way to compensate for this is to place a variable attenuator in line between the output of the test base station or signal generator and the receiver under test (RUT). With the variable attenuator right before the input of the RUT, the test system can be calibrated to the receiver input. Moreover, a variable attenuator with attenuation levels appropriate for the receiver's power range can be chosen so that the test base station or signal generator can remain at a power level that yields optimum linearity and minimal distortion.
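As a rough illustration of the budgeting involved, the short Python sketch below computes the attenuator setting needed to hold the generator at a fixed, well-behaved output level; the generator level, path loss, and target level are assumed example numbers, not values from any specific setup.

def required_attenuation_db(generator_out_dbm, path_loss_db, target_at_rut_dbm):
    """Attenuation needed so a fixed generator output, after the fixed
    losses of the signal chain, arrives at the RUT input at the desired
    calibrated level."""
    return generator_out_dbm - path_loss_db - target_at_rut_dbm

# Example with assumed values: generator held at 0 dBm for best linearity,
# 2 dB of cable/fixture loss, and a target of -95 dBm at the receiver input.
setting = required_attenuation_db(0.0, 2.0, -95.0)
print(f"Set the variable attenuator to {setting:.1f} dB")   # 93.0 dB

During a sensitivity sweep, only the attenuator setting needs to change; the generator stays at its calibrated operating point.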


Figure 2 The RF power output level can be reduced by adjusting the internal attenuation. Source: LabSat

With a well-chosen variable attenuator, the test base station or signal generator power level shouldn't need to be changed; during calibration, the variable attenuator can be used to adjust the power level seen at the input of the receiver. This method also ensures that the power level at the input of the RUT is a calibrated value rather than one merely inferred from the settings on the test base station or signal generator, which are not necessarily the most accurate.

Without a variable attenuator, it may be necessary to calibrate the test setup across the entire power level range of the test base station or signal generator, which significantly increases test time and may still be less accurate than using a variable attenuator.

Using a variable attenuator at the input of the RUT also helps ensure that the nonlinearities of the test base station or signal generator are minimized, which is crucial for complex modulation methods that are sensitive to nonlinearity. Excessive nonlinearity in the signal generation could artificially degrade the measured RUT performance with more complex modulation methods.

Another factor to consider is that at low output power levels, the noise energy from the signal generator or test base station may make up a significant fraction of the overall signal energy. A variable attenuator that doesn't introduce significant added noise of its own can also lower the overall noise floor and extend the dynamic range of the test setup.
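A first-order way to see this effect is to compare the generator's residual noise density, after attenuation, against the thermal floor. The Python sketch below uses assumed numbers and a simplified model (attenuated generator noise power-summed with an ideal ~kT floor at 290 K), so it is illustrative rather than a full noise analysis.

import math

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10.0)

def mw_to_dbm(mw):
    return 10.0 * math.log10(mw)

def noise_floor_after_attenuation(gen_noise_dbm_hz, atten_db,
                                  thermal_dbm_hz=-174.0):
    """Approximate broadband noise density delivered to the RUT: the
    generator's residual noise is reduced by the attenuation but cannot
    fall below the thermal floor."""
    attenuated = dbm_to_mw(gen_noise_dbm_hz - atten_db)
    thermal = dbm_to_mw(thermal_dbm_hz)
    return mw_to_dbm(attenuated + thermal)

# Assumed example: a generator with -140 dBm/Hz of residual broadband noise
# followed by 40 dB of attenuation ends up within about 1 dB of the thermal
# floor, preserving the dynamic range of the test setup.
print(noise_floor_after_attenuation(-140.0, 40.0))   # ≈ -173 dBm/Hz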


This article was originally published on Planet Analog.

Jean-Jaques (JJ) DeLisle, an electrical engineering graduate (MS) from Rochester Institute of Technology, has a diverse background in analog and RF R&D, as well as technical writing/editing for design engineering publications. He writes about analog and RF for Planet Analog.

