Part 1 of this article pair broke down (i.e., factorized) an FIR filter’s polynomial representation into a product of quadratic and linear factors. This follow-up piece turns the process on its head and builds up complete FIR filters from small, easy-to-understand pieces.
The story so far …
Back in Part 1 we took a regular FIR filter design, targeting the removal of line frequency noise, and wrote down the filter coefficients in polynomial form to get equation :
Then we found all the roots of this polynomial, and used them to write down the factorized form of the polynomial.

As a parting shot, I pointed out that there are three quadratic terms in there whose constant (z0) coefficients are unity – and there are three deep nulls in the filter’s gain response, as was shown in the figures from Part 1. Let’s take a deep breath and examine the responses of all these individual linear and quadratic factors, to see if there are some clues there.
Figure 1 shows the individual responses, treated as two- or three-tap FIR filters, of each of the factors in parentheses in equation . The five quadratic factors are marked as q1 to q5 and the four linear factors as L1 to L4. It’s quite a jumble of a graph, but you don’t have to be very awake to see the major salient detail: three of those quadratic factors have deep notches in the frequency response. These are indeed the three factors whose constant (z0) coefficient is unity!
Figure 1 The frequency response of all the quadratic and linear factors of equation  shows where deep notches appear in the frequency response.
So, here’s the first takeaway. In an FIR filter whose stopband contains several sharp nulls, each one of them comes into being because the response of one of the polynomial’s quadratic factors falls to zero at one frequency. Just so that we don’t jump to conclusions about the particular form the factor needs to have, let’s do some more algebra to make sure. Are we having fun yet?
The frequency response of a quadratic factor used as a filter
Here’s where we make an important substitution. Until now, our z has been a mystery variable with no obvious relationship to the behavior of a sinewave. Let’s introduce the expression that actually defines the z-transform. The key relationships between z, z-1 and the frequency f of a sinewave input can be expressed in either an exponential or a trig format:
where fS is the sample rate. You can make a mental connection to the effect of a small time delay (equal to the sampling interval) on the phase of a sinewave of frequency f. Then z is a complex variable that ‘rotates round’ the complex plane between purely real and purely imaginary, as the frequency f affects the argument of the trig functions.
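That mental connection is easy to check numerically. Here’s a quick sketch in Python with numpy (my addition, not from the article): sample a complex sinusoid and confirm that each one-sample step advances the phasor by exactly exp(j2πf/fS) – which is what we’re calling z. The 220 Hz sample rate is borrowed from the example later in this article.

```python
import numpy as np

fs = 220.0   # sample rate, Hz (assumed, matching the article's later example)
f = 50.0     # sinewave frequency, Hz

n = np.arange(8)
x = np.exp(2j * np.pi * f * n / fs)   # samples of a complex sinusoid

# The ratio of each sample to the previous one is constant: one sample
# of delay corresponds to multiplication by z**-1 = exp(-2j*pi*f/fs)
z = np.exp(2j * np.pi * f / fs)
print(np.allclose(x[1:] / x[:-1], z))   # every step advances the phase by 2*pi*f/fs
```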
You’ll often encounter the exponential form  in filter books, but the complex trig form of  somehow seems more relevant to the engineer’s habit of stuffing a sinewave into something and seeing what happens. So, let’s substitute  into Part 1’s equation  and see if we can develop an expression for the frequency response.
The first line in the boxed portion of  shows the real part, and the second line shows the imaginary part. Now, for a quadratic section to give a null in the frequency response, there must be a frequency fN for which Q(fN)=0. This means that both the real and the imaginary parts of equation  must be identically zero at that frequency. It should be easy to see that for the imaginary term in  to vanish at a frequency fN, we simply need the coefficient of the z term to be -2·cos(2πfN/fS), so that the quadratic factor becomes Q(z) = z2 – 2·cos(2πfN/fS)·z + 1, with a unity constant term. (A quick check: the real part then also vanishes at fN, so the null is genuine.)
Bingo! This corroborates our earlier observation. What’s more, it gives us a direct expression for the quadratic factor that’s needed in order to locate a frequency response null at any fN that we want. Figure 2 shows the frequency response of quadratic factors (three-tap filters) built using equation , for several notch frequencies. Equations  and  give the real and imaginary parts, and we can see that there are some special cases. For zero frequency (z term = -2) and half the sample rate (z term = +2), the factor Q(z) is the product of two equal real factors, either (z-1) * (z-1) or (z+1) * (z+1). For a frequency of fS/4, the roots are purely imaginary, and Q(z) = (z+j) * (z-j).
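You can verify this recipe in a few lines of Python with numpy (my own sketch; the article itself used a spreadsheet). Build the three-tap factor with z coefficient -2·cos(2πfN/fS) and confirm its response really does fall to zero at the chosen notch frequency:

```python
import numpy as np

fs = 220.0   # sample rate, Hz
fn = 60.0    # desired notch frequency, Hz

# Three-tap factor with a null at fn: coefficients of z^2, z^1, z^0
q = np.array([1.0, -2.0 * np.cos(2 * np.pi * fn / fs), 1.0])

def response(coeffs, f, fs):
    """Evaluate the factor's response at frequency f (magnitude is what matters;
    the causal z**-2 scaling is an all-pass factor and doesn't change it)."""
    z = np.exp(2j * np.pi * f / fs)
    return np.polyval(coeffs, z)

print(abs(response(q, fn, fs)))    # essentially zero: the notch
print(abs(response(q, 0.0, fs)))   # DC gain of this factor: 2 - 2*cos(2*pi*fn/fs)
```

Sweep f over a grid instead of single points and you reproduce one of the curves in Figure 2.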
Figure 2 Frequency response for factors of the form of equation  shows notches at various values of null frequency.
Now for a real example. Let’s say that I want very high attenuation between around 49 Hz and 51 Hz, again between 59 Hz and 61 Hz (i.e. the system must be able to reject both commonly-used AC line frequencies without change of coefficients or sample rate), and finally also between 98 Hz and 102 Hz. I’ll specify 220 Hz as the sample rate to ensure that the 2nd harmonic of 60 Hz aliases round and falls right into that same hole that the 50 Hz second harmonic does (homework, if you can’t see why that happens). So, what happens if I just build a filter that has nulls at these six frequencies?
It’s simple to make up a spreadsheet that calculates the z coefficient shown in equation  for a set of frequencies, then multiplies them up into a single polynomial. The six frequencies create six quadratic factors, shown in Table 1.
Table 1 Six quadratic factors, calculated for specific null frequencies.
When multiplied together (equation ), these six quadratic factors produce a twelfth-order polynomial (equation ).
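The same spreadsheet job is a one-liner in Python with numpy (my sketch, not the author’s tooling). I’m assuming one null at each edge of the three bands quoted in the text – 49, 51, 59, 61, 98, and 102 Hz – which matches the six quadratic factors of Table 1:

```python
import numpy as np
from functools import reduce

fs = 220.0
# Assumed null placement: one at each edge of the bands given in the text
nulls = [49.0, 51.0, 59.0, 61.0, 98.0, 102.0]

# One quadratic factor per null, then multiply them all into one polynomial
factors = [np.array([1.0, -2.0 * np.cos(2 * np.pi * f / fs), 1.0]) for f in nulls]
b = reduce(np.polymul, factors)        # 13 coefficients: a 12th-order polynomial

def gain(coeffs, f):
    z = np.exp(2j * np.pi * f / fs)
    return abs(np.polyval(coeffs, z))

print(len(b))                                  # 13 taps
print([gain(b, f) for f in nulls])             # all six nulls sit at (numerically) zero gain
```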
The frequency response of this filter is shown in Figure 3, straight out of Excel. For the intended application, it’s already a pretty good filter, “straight out of the box,” and it has no wasted coefficients at all!
Figure 3 The frequency response of the filter defined by equation  calculated using LTspice confirms the desired nulls.
Now let’s take a closer look at the region around 50 Hz and 60 Hz in LTspice. Yep, those nulls are indeed where we wanted them to be – see Figure 4.
It’s easy to play around with this approach using the spreadsheet, adding and moving nulls until you get exactly what you want. Not all combinations of null frequencies will necessarily produce a response that you’re happy with first time around, though. You’ll find, for example, that as you try to push the stopband of a lowpass filter well below fS/4 you’ll need to add extra nulls above this frequency to ‘pin the stopband down.’ It’s an empirical way to proceed, for sure – but you certainly get a feeling for what’s going on with these nulls.
Figure 4 Up close and polynomial; LTspice confirms that the filter has nice notches just where we asked for them.
If you want a filter with an even number of taps, you can multiply in an extra linear term of (z+1), which has the effect of putting a null at fS/2 (which gives us some extra stopband rejection). That’s something we can deduce from one of those earlier special cases; if a function Q(z) = (z+1) * (z+1) has a null at fS/2, so must its square root.
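That extra linear term is just one more call to polymul. A minimal sketch (mine, using a single 60 Hz notch factor purely for illustration) showing that multiplying in (z + 1) plants a null at fS/2, where z = -1:

```python
import numpy as np

fs = 220.0
# Any existing coefficient set will do; one notch factor at 60 Hz for illustration
b = np.array([1.0, -2.0 * np.cos(2 * np.pi * 60.0 / fs), 1.0])

b_even = np.polymul(b, [1.0, 1.0])   # multiply in (z + 1): one extra tap, even count

z_nyq = np.exp(2j * np.pi * (fs / 2) / fs)   # z at fs/2 is exp(j*pi) = -1
print(abs(np.polyval(b_even, z_nyq)))        # zero: the new null at half the sample rate
```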
Flattening the passband
Our filter has got useful stopband behaviour, but the passband response isn’t flat anywhere. It rolls off in the passband right from low frequencies. The response actually is close to Gaussian; you can see from the coefficients that the filter isn’t going to overshoot at all on a step input signal. How to tell? Just look at the sum of progressively more terms in the impulse response; it’s monotonically increasing because all the filter coefficients are positive, so the time response to a step never falls as each extra coefficient is ‘uncovered.’ That behavior can be useful, but what do you do if you’d rather have a flatter frequency response over at least part of the passband?
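That no-overshoot argument is easy to demonstrate on the synthesized filter. A short check in Python with numpy (my addition, again assuming the 49/51/59/61/98/102 Hz null placement): every coefficient comes out positive, so the running sum – the step response – can only ever increase.

```python
import numpy as np
from functools import reduce

fs = 220.0
nulls = [49.0, 51.0, 59.0, 61.0, 98.0, 102.0]   # assumed null placement
factors = [np.array([1.0, -2.0 * np.cos(2 * np.pi * f / fs), 1.0]) for f in nulls]
b = reduce(np.polymul, factors)

print(np.all(b > 0))          # every coefficient is positive

# Step response = running sum of the impulse response; with all-positive
# coefficients each new term only adds, so the response never falls back
step = np.cumsum(b)
print(np.all(np.diff(step) > 0))
```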
We can add more factors to our polynomial to flatten out that droop, choosing those that contribute a gently rising response as frequency increases from DC. Look back at Figure 1; some of those polynomial terms didn’t give us a null, but just a gentle slope up or down in response. These acted together to give a flatter overall frequency response to our original example, plotted in Part 1. This shows that we can construct our own terms to flatten out our filter’s gentle low frequency droop. Each quadratic factor you add will increase the coefficient count by two; multiplying in a linear term just adds one to the tap count.
The general analytic expression for the frequency response of an arbitrary root value (or pair of values) is quite clunky and I won’t write it down here. It’s quite possible, though, to build an optimization script (think Million Monkeys) that can tweak the coefficients of extra linear and quadratic terms to flatten out the passband to the desired level.
Here’s an example done entirely empirically. I’ve taken our synthesized filter and multiplied the polynomial by an additional (z – 0.65). The rising response of this extra term has the effect of flattening out the droopy passband at lower frequencies – it also degrades the stopband rejection a little. Figure 5 shows what those coefficients look like when entered back into the PSoC Creator Filter tool I still use so much. The impulse response has become non-symmetrical, meaning it’s no longer a linear phase filter. We can rectify that with yet more terms – but not today, I’m out of space and time! “Swings and roundabouts” works for filter design too.
Figure 5 Throw in another root (z-0.65) and we can flatten that passband a bit.
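The same experiment is easy to reproduce numerically (my sketch, same assumed null set as before): multiply the polynomial by (z – 0.65), then compare the passband gain at a low frequency, normalized to DC, before and after. The extra root lifts the droopy low end, at the cost of the coefficient symmetry that gave us linear phase.

```python
import numpy as np
from functools import reduce

fs = 220.0
nulls = [49.0, 51.0, 59.0, 61.0, 98.0, 102.0]   # assumed null placement
factors = [np.array([1.0, -2.0 * np.cos(2 * np.pi * f / fs), 1.0]) for f in nulls]
b = reduce(np.polymul, factors)
b_flat = np.polymul(b, [1.0, -0.65])   # multiply in the extra (z - 0.65) root

def gain(coeffs, f):
    z = np.exp(2j * np.pi * f / fs)
    return abs(np.polyval(coeffs, z))

# Passband gain at 20 Hz relative to DC: closer to 1 means flatter
print(gain(b, 20.0) / gain(b, 0.0))            # the droopy original
print(gain(b_flat, 20.0) / gain(b_flat, 0.0))  # flatter after adding the root

# The coefficient set is no longer palindromic: linear phase has been lost
print(np.allclose(b_flat, b_flat[::-1]))
```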
Let’s be honest; this isn’t intended as a replacement for more conventional filter design techniques, especially if you’re in a hurry! But it’s a really good way of understanding the effect that those roots – filter designers call them ‘zeroes’ – of the filter polynomial are getting up to under the hood. For our example hum rejection filter, we got a useful result (we’ll be using it commercially), with the added satisfaction that we built the coefficient set up right from first principles, and we understand exactly why it has its nulls where they are.
I hope this shows you that it’s sometimes useful to break down the apparently monolithic, single-stage nature of an FIR filter into simpler pieces that can be separately manipulated. Try dialing some of these nulls into your next FIR filter design. It’s the ‘notch-ural’ way!