2.1.1 What are “decimation” and “downsampling”?
Loosely speaking, “decimation” is the process of reducing the sampling rate. In practice, this usually implies lowpass-filtering a signal, then throwing away some of its samples.
“Downsampling” is a more specific term which refers to just the process of throwing away samples, without the lowpass filtering operation. Throughout this FAQ, though, we’ll just use the term “decimation” loosely, sometimes to mean “downsampling”.
2.1.2 What is the “decimation factor”?
The decimation factor is simply the ratio of the input rate to the output rate. It is usually symbolized by “M”, so M = (input rate) / (output rate).
Tip: You can remember that “M” is the symbol for decimation factor by thinking of “deci-M-ation”. (Exercise for the student: which letter is used as the symbol for interpo-L-ation factor?)
2.1.3 Why decimate?
The most immediate reason to decimate is simply to reduce the sampling rate at the output of one system so a system operating at a lower sampling rate can input the signal. But a much more common motivation for decimation is to reduce the cost of processing: the calculation and/or memory required to implement a DSP system generally is proportional to the sampling rate, so the use of a lower sampling rate usually results in a cheaper implementation.
To that, Jim Thomas adds:
Almost anything you do to/with the signal can be done with fewer operations at a lower sample rate, and the workload is almost always reduced by more than a factor of M.
For example, if you double the sample rate, an equivalent filter will require four times as many operations to implement. This is because both the amount of data (per second) and the length of the filter increase by a factor of two, so the convolution workload goes up by a factor of four. Thus, if you can halve the sample rate, you can decrease the workload by a factor of four. I guess you could say that if you reduce the sample rate by a factor of M, the workload for a filter goes down to (1/M)^2 of its original level.
2.1.4 Is there a restriction on decimation factors I can use?
Yes. Decimation involves throwing away samples, so you can only decimate by integer factors; you cannot decimate by fractional factors. (However, you can do interpolation prior to decimation to achieve an overall rational factor, for example, “4/5”; see Part 4: Resampling.)
2.1.5 Which signals can be downsampled?
A signal can be downsampled (without doing any filtering) whenever it is “oversampled”, that is, when a sampling rate was used that was greater than the Nyquist criteria required. Specifically, the signal’s highest frequency must be less than half the post-decimation sampling rate. (This just boils down to applying the Nyquist criteria to the input signal, relative to the new sampling rate.)
In most cases, though, you’ll end up lowpass-filtering your signal prior to downsampling, in order to enforce the Nyquist criteria at the post-decimation rate. For example, suppose you have a signal sampled at a rate of 30 kHz, whose highest frequency component is 10 kHz (which is less than the Nyquist frequency of 15 kHz). If you wish to reduce the sampling rate by a factor of three to 10 kHz, you must ensure that you have no components greater than 5 kHz, which is the Nyquist frequency for the reduced rate. However, since the original signal has components up to 10 kHz, you must lowpass-filter the signal prior to downsampling to remove all components above 5 kHz so that no aliasing will occur when downsampling.
This combined operation of filtering and downsampling is called decimation.
2.1.6 What happens if I violate the Nyquist criteria in downsampling or decimating?
You get aliasing, just as with other cases of violating the Nyquist criteria. (Aliasing is a type of distortion which cannot be corrected once it occurs.)
2.2.1 Can I decimate in multiple stages?
Yes, so long as the decimation factor, M, is not a prime number. For example, to decimate by a factor of 15, you could decimate by 5, then decimate by 3. The more prime factors M has, the more choices you have. For example, you could decimate by a factor of 24 using:
- one stage: 24
- two stages: 6 and 4, or 8 and 3
- three stages: 4, 3, and 2
- four stages: 3, 2, 2, and 2
2.2.2 Cool. But why bother with all that?
If you are simply downsampling (that is, throwing away samples without filtering), there’s no benefit. But in the more common case of decimating (combining filtering and downsampling), the computational and memory requirements of the filters can usually be reduced by using multiple stages.
2.2.3 OK, so how do I figure out the optimum number of stages, and the decimation factor at each stage?
That’s a tough one. There isn’t a simple answer: the optimum depends on many things, so if you really want to find it, you have to evaluate the resource requirements of each possibility.
However, here are a couple of rules of thumb which may help narrow down the choices:
- Using two or three stages is usually optimal or near-optimal.
- Decimate in order from the largest to smallest factor. In other words, use the largest factor at the highest sampling rate. For example, when decimating by a factor of 60 in three stages, decimate by 5, then by 4, then by 3.
The multirate book references give additional, more specific guidance.
2.3.1 How do I implement decimation?
Decimation consists of the processes of lowpass filtering, followed by downsampling.
To implement the filtering part, you can use either FIR or IIR filters.
To implement the downsampling part (by a downsampling factor of “M”) simply keep every Mth sample, and throw away the M-1 samples in between. For example, to decimate by 4, keep every fourth sample, and throw three out of every four samples away.
2.3.2 That almost sounds too easy…
Beauty, eh? 😉
2.3.3 If I’m going to throw away most of the lowpass filter’s outputs, why bother to calculate them in the first place?
You may be onto something. In the case of FIR filters, any given output is a function only of the current and past inputs (because there is no feedback). Therefore, you only have to calculate the outputs which will actually be used.
For IIR filters, you still have to do part or all of the filter calculation for each input, even when the corresponding output won’t be used (depending on the filter topology used, certain feed-forward parts of the calculation can be omitted). The reason is that the outputs you do use are affected by the feedback from the outputs you don’t use.
The fact that only the outputs which will be used have to be calculated explains why decimating filters are almost always implemented using FIR filters!
2.4 FIR Decimators
2.4.1 What computational savings do I gain by using a FIR decimator?
Since you compute only one of every M outputs, you save the computation of the other M-1 outputs, for an overall savings of (M-1)/M. Therefore, the larger the decimation factor, the larger the savings, percentage-wise.
A simple way to think of the amount of computation required to implement a FIR decimator is that it is equal to the computation required for a non-decimating N-tap filter operating at the output rate.
2.4.2 How much memory savings do I gain by using a FIR decimator?
None. You still have to store every input sample in the FIR’s delay line, so the memory requirement is the same size as for a non-decimated FIR having the same number of taps.
2.4.3 How do I design a FIR decimator?
Just use your favorite FIR design method. The design criteria are:
- The passband lower frequency is zero; the passband upper frequency is whatever information bandwidth you want to preserve after decimating. The passband ripple is whatever your application can tolerate.
- The stopband lower frequency is the output rate minus the passband upper frequency: any component above that point would alias into the passband. (A more conservative choice is simply half the output rate, which keeps aliasing out of the entire output band.) The stopband attenuation is set according to whatever aliasing your application can stand. (Note that there will always be some aliasing in a decimator, but you just reduce it to a negligible value with the decimating filter.)
- As with any FIR, the number of taps is whatever is required to meet the passband and stopband specifications.
2.4.4 How do I implement a FIR decimator?
A decimating FIR is actually the same as a regular FIR, except that you shift M samples into the delay line for each output you calculate. More specifically:
- Store M samples in the delay line.
- Calculate the decimated output as the sum-of-products of the delay line values and the filter coefficients.
- Shift the delay line by M places to make room for the inputs of the next decimation.
Also, just as with ordinary FIRs, circular buffers can be used to eliminate the requirement to literally shift the data in the delay line.
2.4.5 Where can I get source code to implement a FIR decimator in C?
2.4.6 Where can I get assembly code to implement a FIR decimator?
The major DSP vendors provide examples of FIR decimators in their data books and application notes; check their web sites.
2.4.7 How do I test a FIR decimator?
You can test a decimating FIR in most of the ways you might test an ordinary FIR:
- An “ordinary” FIR is just the special case of a decimator with M = 1. Given a decimation factor of 1, a decimator should act exactly like an ordinary FIR, so you can do impulse, step, and sine tests on it just as you would on an ordinary FIR.
- If you put in a sine whose frequency is within the decimator’s passband, the output should be distortion-free (once the filter reaches steady-state), and the frequency of the output should be the same as the frequency of the input, in terms of absolute Hz.
- You also can extend the “impulse response” test used for ordinary FIRs by using a “fat impulse”, consisting of M consecutive “1” samples followed by a series of “0” samples. In that case, if the decimator has been implemented correctly, the output will not be the literal FIR filter coefficients, but rather the sums of successive groups of M adjacent coefficients.
- You can use a step response test. Given a unity-valued step input, the output should be the sum of the FIR coefficients once the filter has reached steady state.