This article presents an approach, and provides an executable program, that eliminates the need for the user to hand-pick different combinations of component values in search of solutions with ever lower sensitivities.
A number of years ago, I posted an article and spreadsheet on EDN that enable the design of 2nd and 3rd order Sallen-Key low pass filters given the poles of their filter characteristics. The user enters the values of the capacitors and of the resistors that set the DC gain of the single op amp employed by the filter. After specifying the tolerances of all passive components and selecting an “E” series of resistor values (E-12, E-24, E-48, E-96 or E-192), the spreadsheet calculates the nearest commercially available values of the unspecified resistors. It also calculates the root of the sum of the squares of the sensitivities of the magnitude of the filter response to all passive components. It does this at a user-specified frequency (recommended to be the resonance frequency, where sensitivity is highest).
One limitation of this procedure is that it relies on the user to specify different combinations of component values in search of solutions with ever lower sensitivities. This is tedious, and there is no guarantee that the lowest sensitivity solution will ever be found. This article discusses an approach and provides an executable program that remedies this situation. Given filter poles and capacitor and resistor E-series values and range limits, the program tests all possible component combinations and identifies the solutions with the lowest (and what the heck, the highest) aggregate component value sensitivities.
Why all this effort for third order filters?
Well, why not? If you can add an extra pole’s worth of attenuation without having to add an op amp, why wouldn’t you at least consider this? But there’s an even better reason – avoiding stopband leakage (see the section of the same name in the article cited above.) Briefly, as op amp open loop gain falls with higher frequencies in a second order Sallen-Key section, its output impedance rises. The section’s input signal is capacitively coupled to the op amp output. The result is that higher frequency stopband rejection is a lot worse than intended. A third order design remedies this by inserting an R-C section (whose capacitor is grounded) between the input and output. By attenuating the portion of the input signal which reaches the filter capacitor connected to the output, stopband rejection is significantly improved. The effect of the op amp’s rising output impedance is accordingly rendered negligible.
Getting down to it
Figure 1 shows a third order low pass Sallen-Key filter implemented with a single op amp.
In analyzing this filter, I’ll take a slightly different path from the one pursued in the original article. The transfer function of this circuit is given by
A third order filter has three poles: p1, p2 and p3. One must be real, and without loss of generality, that will be p3. p1 and p2 can be real and identical, real and different, or complex. The real portions of all poles are negative, and if p1 and p2 are complex, their imaginary portions are negatives of one another.
Regardless of the nature of p1 and p2, we can always write H(s) as
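A minimal sketch of this factored form (the equation appears as an image in the original; the gain normalization shown here, with K as the DC gain, is an assumption):

\[
H(s) \;=\; \frac{-K\,p_1 p_2 p_3}{(s-p_1)(s-p_2)(s-p_3)}
      \;=\; \frac{-K\,p_1 p_2 p_3}{\left(s^2-(p_1+p_2)\,s+p_1 p_2\right)(s-p_3)}
\]

Because p1 and p2 are either both real or complex conjugates, the coefficients p1 + p2 and p1p2 of the quadratic factor are always real.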
One way to start to determine sets of component values which satisfy these equations is to solve one of them for R1 and substitute that expression into the other two. We can think of the result as two second order equations in R2:
These are standard quadratic equations, and each yields two solutions for R2. The filter design requires that at least one solution of the first equation be equal to at least one solution of the second. But the six parameters other than R2 in these equations provide an overabundance of ways to meet that requirement. One tempting approach would be to first normalize the equations so that the coefficient of the R2² term in each is 1, and to then equate the coefficients of the R2⁰ and R2¹ terms. This would result in two pairs of identical solutions. Unfortunately, it over-constrains the problem and limits the search for the best results. We don’t need two identical solution sets; we need only one solution that is common to both equations. That the normalization approach is undesirable is particularly frustrating, because equating coefficients simplifies the problem significantly and leads to algebraically tractable results – which are sadly non-optimal. A different approach is needed, one that constrains the problem enough to be manageable without forfeiting the best solutions.
The successful strategy begins by limiting the component values under consideration to commercially available ones. IEC 63 (which evolved into IEC 60063) defines these values using something called an E-series. The E-x series specifies component value mantissas of 10^(i/x), where i is an integer from 0 to x−1, and x can be 3, 6, 12, 24, 48, 96 or 192. These mantissas are rounded to three digits for x ≥ 48 and two digits otherwise. (Because the values in the two-digit series were established before the IEC standard, not all of them follow this mathematical rule exactly.) Multiplying these mantissas by powers of ten yields the available component values. Our strategy continues by setting the minimum and maximum values to be considered for capacitors and resistors. Range limitations have practical value beyond reducing the number of component combinations. Few of us would want to design filters with 1 pF capacitors or 100 MΩ resistors!
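A short sketch of generating these values, assuming the IEC rule as stated (the function names are illustrative, and, as noted above, a few standardized two-digit entries deviate from the computed mantissas):

```python
import math

def e_series(x):
    """Mantissas of the E-x series from the rule 10**(i/x), i = 0..x-1,
    rounded to three significant digits for x >= 48 and two otherwise.
    A few standardized entries in the two-digit series deviate."""
    decimals = 2 if x >= 48 else 1
    return [round(10.0 ** (i / x), decimals) for i in range(x)]

def component_values(x, vmin, vmax):
    """All E-x component values between vmin and vmax, spanning decades."""
    vals = []
    for d in range(math.floor(math.log10(vmin)),
                   math.floor(math.log10(vmax)) + 1):
        for m in e_series(x):
            v = m * 10.0 ** d
            if vmin <= v <= vmax:
                vals.append(v)
    return vals
```

For example, `component_values(12, 100, 999)` lists every E-12 resistor value in the 100 Ω decade, which is the kind of bounded list the program iterates over.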
With these component value limitations in place, a computer program can iterate through all combinations and evaluate the two solutions of each of the two quadratic equations. The four differences between a solution of one equation and a solution of the other can be inspected for changes of sign between successive iterations. Such a sign change indicates that a solution of one equation and a solution of the other are nearly equal. The component values from the iteration yielding the smaller difference in R2 are selected as a solution for consideration.
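A minimal sketch of this sign-change scan, with hypothetical stand-in coefficient functions in place of the article’s actual quadratics (whose coefficients depend on the capacitor and resistor values being iterated):

```python
import math

def real_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 ([] if complex or degenerate)."""
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return []
    r = math.sqrt(disc)
    return [(-b - r) / (2.0 * a), (-b + r) / (2.0 * a)]

def scan_for_matches(coeffs1, coeffs2, sweep):
    """Sweep one component value; at each step solve both quadratics for R2
    and watch each of the four root differences for a sign change, which
    flags a sweep value where the two equations nearly share a root."""
    matches, prev = [], None
    for v in sweep:
        r1 = real_roots(*coeffs1(v))
        r2 = real_roots(*coeffs2(v))
        if len(r1) == 2 and len(r2) == 2:
            diffs = [x - y for x in r1 for y in r2]
            if prev is not None and any(
                    d == 0.0 or d * p < 0.0 for d, p in zip(diffs, prev)):
                matches.append(v)
            prev = diffs
        else:
            prev = None  # complex roots: restart the comparison
    return matches
```

In the real program, `coeffs1` and `coeffs2` would compute the quadratic coefficients from the current trial component values; here they are placeholders for illustration only.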
This approach can yield multiple solutions. How can we select the best among them? The answer is to evaluate the sensitivity of the filter’s amplitude response to variations of the component values (within their tolerances) from their exact E-series values. The obvious choice for the evaluation point is the resonance frequency, at least when the poles p1 and p2 are complex, since that is the frequency of greatest response variation. Regardless, any evaluation frequency can be chosen.
The sensitivity of the amplitude response of a filter at frequency ω to variations in the value of component y is
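A sketch of this definition, assuming the usual normalized sensitivity (fractional change in response magnitude per fractional change in component value; the article’s exact notation may differ):

\[
S_y^{|H|}(\omega) \;=\; \frac{y}{\left|H(j\omega,y)\right|}\cdot
\frac{\partial \left|H(j\omega,y)\right|}{\partial y}
\]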
Calculating the derivatives of this function for this filter is arduous. An acceptable alternative is to exploit the definition of a derivative and calculate differences instead, perturbing y by a very small fraction ε of its value, that is, multiplying it by delta = 1 + ε:
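A sketch of this finite-difference calculation (the magnitude function used below is a simple first order RC stand-in rather than the Figure 1 filter, and all names are illustrative):

```python
import math

def sensitivity(mag, params, name, w, eps=1e-6):
    """Normalized sensitivity of the magnitude response to component `name`
    at radian frequency w, computed by perturbing the component to
    y*(1 + eps): S ~= (|H(y*(1+eps))| - |H(y)|) / (eps * |H(y)|)."""
    base = mag(params, w)
    bumped = dict(params)
    bumped[name] *= 1.0 + eps
    return (mag(bumped, w) - base) / (base * eps)

# Stand-in magnitude response: a first order RC low pass section.
rc_mag = lambda p, w: 1.0 / math.sqrt(1.0 + (w * p["R"] * p["C"]) ** 2)
```

At the corner frequency of the RC stand-in (ωRC = 1), this returns a sensitivity of about −0.5 to R, matching the analytic value −(ωRC)²/(1 + (ωRC)²).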
We can arrive at a measure of the sensitivity of the response to variations of all components by calculating the square root of the sum of the squares of the sensitivities to each individual component. But we must take into account the relative tolerances of the components if we are to select the solution with the least overall sensitivity. One way to do this is to multiply each individual sensitivity by the tolerance, in percent, of its associated component. And so, we have the total sensitivity S over all components yi:
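A sketch of the resulting figure of merit, assuming ti denotes the tolerance, in percent, of component yi:

\[
S \;=\; \sqrt{\sum_i \left( t_i \, S_{y_i} \right)^{2}}
\]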
With this figure of merit available, the solution with the best (least) sensitivity to component variations can be selected. For reference, the program also identifies the solution with the greatest sensitivity. LTspice can run Monte Carlo analyses on the designs for comparison purposes.
[Continue reading on EDN US: The program]
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.