*Multirate* simply means "multiple sampling
rates". A multirate DSP system uses multiple sampling rates within the
system. Whenever a signal at one rate has to be used by a system that expects a
different rate, the rate has to be increased or decreased, and some processing
is required to do so. Therefore "Multirate DSP" really refers to the
art or science of *changing* sampling rates.

The most immediate reason is when you need to pass data between two systems which use incompatible sampling rates. For example, professional audio systems use a 48 kHz rate, but consumer CD players use 44.1 kHz; when audio professionals transfer their recorded music to CDs, they need to do a rate conversion.

But the most common reason is that multirate DSP can greatly increase
processing efficiency (even by orders of magnitude!), which reduces DSP system
cost. *This makes the subject of multirate DSP vital to all professional DSP
practitioners.*

Multirate consists of:

- **Decimation:** To decrease the sampling rate,
- **Interpolation:** To increase the sampling rate, or,
- **Resampling:** To combine decimation and interpolation in order to change the sampling rate by a fractional value that can be expressed as a ratio. For example, to resample by a factor of 1.5, you just interpolate by a factor of 3 then decimate by a factor of 2 (to change the sampling rate by a factor of 3/2 = 1.5).

Right here. Our "multirate_algs" package includes decimation, interpolation, and resampling routines. You can download it from dspGuru's DSP Algorithm Library.

Many DSP books omit the important subject of multirate altogether. But two introductory texts that briefly go into it are:

- Understanding Digital Signal Processing [Lyo97]
- Digital Signal Processing and the Microcontroller [Gro98]

But it's a big subject. The two most popular "industrial strength" (advanced) books that cover multirate in depth are:

Loosely speaking, "decimation" is the process of reducing the sampling rate. In practice, this usually implies lowpass-filtering a signal, then throwing away some of its samples.

"Downsampling" is a more specific term which refers to just the process of throwing away samples, without the lowpass filtering operation. Throughout this FAQ, though, we'll just use the term "decimation" loosely, sometimes to mean "downsampling".

The decimation factor is simply the ratio of the input rate to the output rate. It is usually symbolized by "M", so input rate / output rate = M.

Tip: You can remember that "M" is the symbol for decimation factor by thinking of "deci-M-ation". (Exercise for the student: which letter is used as the symbol for the interpo-L-ation factor?)

The most immediate reason to decimate is simply to reduce the sampling rate at the output of one system so that a system operating at a lower sampling rate can input the signal. But a much more common motivation for decimation is to reduce the cost of processing: the calculation and/or memory required to implement a DSP system generally is proportional to the sampling rate, so the use of a lower sampling rate usually results in a cheaper implementation.

To that, Jim Thomas adds:

Almost anything you do to/with the signal can be done with fewer operations at a lower sample rate, and the workload is almost always reduced by more than a factor of M. For example, if you double the sample rate, an equivalent filter will require four times as many operations to implement. This is because both the amount of data (per second) and the length of the filter increase by a factor of two, so convolution goes up by a factor of four. Thus, if you can halve the sample rate, you can decrease the work load by a factor of four. I guess you could say that if you reduce the sample rate by M, the workload for a filter goes down to (1/M)^2.
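Jim Thomas's point can be sketched numerically. The toy model below (the function name and the constant `k` are illustrative assumptions, not from the original) treats the number of filter taps as proportional to the sample rate for a fixed transition width in Hz, so the per-second workload grows as the square of the rate:

```python
def filter_workload(fs, k=0.001):
    """Toy model: multiply-adds per second for an FIR filter at rate fs.

    Assumes the taps needed scale as k * fs (fixed transition width in Hz),
    and each output sample costs one multiply-add per tap.
    """
    taps = k * fs
    return taps * fs  # taps * samples/second

# Doubling the rate quadruples the work; halving it quarters the work.
ratio = filter_workload(96000) / filter_workload(48000)
```

This is only a back-of-the-envelope model, but it captures why the workload changes by roughly (1/M)^2 rather than just 1/M.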

Yes. Decimation involves throwing away samples, so you can only decimate by integer factors; you cannot decimate by fractional factors. (However, you can do interpolation prior to decimation to achieve an overall rational factor, for example, "4/5"; see Part 4: Resampling.)

A signal can be downsampled (without doing any filtering) whenever it is "oversampled", that is, when a sampling rate was used that was greater than the Nyquist criteria required. Specifically, the signal's highest frequency must be less than half the post-decimation sampling rate. (This just boils down to applying the Nyquist criteria to the input signal, relative to the new sampling rate.)

In most cases, though, you'll end up lowpass-filtering your signal prior to downsampling, in order to enforce the Nyquist criteria at the post-decimation rate. For example, suppose you have a signal sampled at a rate of 30 kHz, whose highest frequency component is 10 kHz (which is less than the Nyquist frequency of 15 kHz). If you wish to reduce the sampling rate by a factor of three to 10 kHz, you must ensure that you have no components greater than 5 kHz, which is the Nyquist frequency for the reduced rate. However, since the original signal has components up to 10 kHz, you must lowpass-filter the signal prior to downsampling to remove all components above 5 kHz so that no aliasing will occur when downsampling.

This combined operation of filtering and downsampling is called decimation.
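The 30 kHz example above can be checked with a little arithmetic; the numbers below come straight from the example:

```python
fs_in = 30000              # original sampling rate, Hz
M = 3                      # decimation factor
fs_out = fs_in // M        # post-decimation rate: 10 kHz
nyquist_out = fs_out / 2   # Nyquist frequency at the reduced rate: 5 kHz

signal_max_freq = 10000    # highest component in the example signal, Hz
# The signal violates the post-decimation Nyquist limit, so it must be
# lowpass-filtered to 5 kHz before downsampling to avoid aliasing.
needs_lowpass = signal_max_freq > nyquist_out
```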

You get aliasing--just as with other cases of violating the Nyquist criteria. (Aliasing is a type of distortion which cannot be corrected once it occurs.)

Yes, so long as the decimation factor, M, is not a prime number. For example, to decimate by a factor of 15, you could decimate by 5, then decimate by 3. The more prime factors M has, the more choices you have. For example you could decimate by a factor of 24 using:

- one stage: 24
- two stages: 6 and 4, or 8 and 3
- three stages: 4, 3, and 2
- four stages: 3, 2, 2, and 2
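The stage combinations for M = 24 can be enumerated programmatically. Here's a small sketch (the function is my own illustration, not from any library) that lists every way to write M as a product of integer factors of at least 2:

```python
def stage_factorizations(m, min_factor=2):
    """All ways to write m as a product of integer factors >= 2.

    Factors are emitted in non-decreasing order so each combination
    appears exactly once.
    """
    if m == 1:
        return [[]]
    results = []
    f = min_factor
    while f * f <= m:
        if m % f == 0:
            for rest in stage_factorizations(m // f, f):
                results.append([f] + rest)
        f += 1
    if m >= min_factor:
        results.append([m])  # the single-stage option
    return results
```

For M = 24 this yields seven options, including the single stage [24], the three-stage choice [2, 3, 4], and the four-stage choice [2, 2, 2, 3] listed above.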

If you are simply downsampling (that is, throwing away samples without filtering), there's no benefit. But in the more common case of decimating (combining filtering and downsampling), the computational and memory requirements of the filters can usually be reduced by using multiple stages.

That's a tough one. There isn't a simple answer to this one: the answer varies depending on many things, so if you really want to find the optimum, you have to evaluate the resource requirements of each possibility.

However, here are a couple of rules of thumb which may help narrow down the choices:

- Using two or three stages is usually optimal or near-optimal.
- Decimate in order from the largest to smallest factor. In other words, use the largest factor at the highest sampling rate. For example, when decimating by a factor of 60 in three stages, decimate by 5, then by 4, then by 3.

The multirate book references give additional, more specific guidance.

Decimation consists of the processes of lowpass filtering, followed by downsampling.

To implement the filtering part, you can use either FIR or IIR filters.

To implement the downsampling part (by a downsampling factor of "M"), simply keep every Mth sample and throw away the M-1 samples in between. For example, to decimate by 4, keep every fourth sample, and throw three out of every four samples away.
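In code, pure downsampling is just a strided selection. A minimal Python sketch:

```python
def downsample(x, M):
    """Keep every Mth sample of x, discarding the M-1 samples in between."""
    return x[::M]
```

For example, downsampling twelve samples by 4 keeps samples 0, 4, and 8.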

Beauty, eh? ;-)

You may be onto something. In the case of FIR filters, any output is a function only of the past inputs (because there is no feedback). Therefore, you only have to calculate outputs which will be used.

For IIR filters, you still have to do part or all of the filter calculation for each input, even when the corresponding output won't be used. (Depending on the filter topology used, certain feed-forward parts of the calculation can be omitted.) The reason is that the outputs you do use are affected by the feedback from the outputs you don't use.

The fact that only the outputs which will be used have to be calculated explains why decimating filters are almost always implemented using FIR filters!

Since you compute only one of every M outputs, you save M-1 operations per output, or an overall "savings" of (M-1)/M. Therefore, the larger the decimation factor is, the larger the savings, percentage-wise.

A simple way to think of the amount of computation required to implement a FIR decimator is that it is equal to the computation required for a non-decimating N-tap filter operating at the output rate.

None. You still have to store every input sample in the FIR's delay line, so the memory requirement is the same size as for a non-decimated FIR having the same number of taps.

Just use your favorite FIR design method. The design criteria are:

- The passband lower frequency is zero; the passband upper frequency is whatever information bandwidth you want to preserve after decimating. The passband ripple is whatever your application can tolerate.
- The stopband lower frequency is half the output rate minus the passband upper frequency. The stopband attenuation is set according to whatever aliasing your application can stand. (Note that there will always be aliasing in a decimator, but you just reduce it to a negligible value with the decimating filter.)
- As with any FIR, the number of taps is whatever is required to meet the passband and stopband specifications.

A decimating FIR is actually the same as a regular FIR, except that you shift M samples into the delay line for each output you calculate. More specifically:

- Store M samples in the delay line.
- Calculate the decimated output as the sum-of-products of the delay line values and the filter coefficients.
- Shift the delay line by M places to make room for the inputs of the next decimation.
Also, just as with ordinary FIRs, circular buffers can be used to eliminate the requirement to literally shift the data in the delay line.
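The steps above can be sketched in Python. This is a straightforward reference implementation, not an optimized one: the delay-line shift is done literally rather than with a circular buffer.

```python
def fir_decimate(x, h, M):
    """FIR decimator: shift samples into the delay line, but compute
    an output only once per M inputs.
    """
    delay = [0.0] * len(h)  # delay line, newest sample first
    y = []
    for n, sample in enumerate(x):
        delay = [sample] + delay[:-1]      # shift the new sample in
        if n % M == M - 1:                 # one output per M inputs
            y.append(sum(c * d for c, d in zip(h, delay)))
    return y
```

With M = 1 this reduces to an ordinary FIR (an impulse input reads out the coefficients), and a "fat impulse" of M ones produces sums of M consecutive coefficients, matching the tests suggested later in this section.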

Iowegian's ScopeFIR comes with a free set of multirate algorithms, including FIR decimation functions in C. Just download and install the ScopeFIR distribution file.

The major DSP vendors provide examples of FIR decimators in their data books and application notes; check their web sites.

You can test a decimating FIR in most of the ways you might test an ordinary FIR:

- A special case of a decimator is an "ordinary" FIR. When given a value of "1" for M, a decimator should act exactly like an ordinary FIR. You can then do impulse, step, and sine tests on it just like you can on an ordinary FIR.
- If you put in a sine whose frequency is within the decimator's passband, the output should be distortion-free (once the filter reaches steady-state), and the frequency of the output should be the same as the frequency of the input, in terms of absolute Hz.
- You also can extend the "impulse response" test used for ordinary FIRs by using a "fat impulse", consisting of M consecutive "1" samples followed by a series of "0" samples. In that case, if the decimator has been implemented correctly, the output will not be the literal FIR filter coefficients, but will be the sums of successive groups of M coefficients.
- You can use a step response test. Given a unity-valued step input, the output should be the sum of the FIR coefficients once the filter has reached steady state.

"Upsampling" is the process of inserting zero-valued samples between original samples to increase the sampling rate. (This is called "zero-stuffing".) Upsampling adds to the original signal undesired spectral images which are centered on multiples of the original sampling rate.

"Interpolation", in the DSP sense, is the process of upsampling followed by filtering. (The filtering removes the undesired spectral images.) As a linear process, the DSP sense of interpolation is somewhat different from the "math" sense of interpolation, but the result is conceptually similar: to create "in-between" samples from the original samples. The result is as if you had just originally sampled your signal at the higher rate.

The primary reason to interpolate is simply to increase the sampling rate at the output of one system so that another system operating at a higher sampling rate can input the signal.

The interpolation factor is simply the ratio of the output rate to the input rate. It is usually symbolized by "L", so output rate / input rate=L.

Tip: You can remember that "L" is the symbol for interpolation factor by thinking of "interpo-L-ation".

Yes. Since interpolation relies on zero-stuffing, you can only interpolate by integer factors; you cannot interpolate by fractional factors. (However, you can combine interpolation and decimation to achieve an overall rational factor, for example, 4/5; see Part 4: Resampling.)

All. There is no restriction.

Yes. Otherwise, you're doing upsampling. ;-)

Upsampling adds undesired spectral images to the signal at multiples of the original sampling rate, so unless you remove those by filtering, the upsampled signal is not the same as the original: it's distorted.

Some applications may be able to tolerate that, for example, if the images get removed later by an analog filter, but in most applications you will have to remove the undesired images via digital filtering. Therefore, interpolation is far more common than upsampling alone.

Yes, so long as the interpolation ratio, L, is not a prime number. For example, to interpolate by a factor of 15, you could interpolate by 3 then interpolate by 5. The more factors L has, the more choices you have. For example you could interpolate by 16 in:

- one stage: 16
- two stages: 4 and 4
- three stages: 2, 2, and 4
- four stages: 2, 2, 2, and 2

Just as with decimation, the computational and memory requirements of interpolation filtering can often be reduced by using multiple stages.

There isn't a simple answer to this one: the answer varies depending on many things. However, here are a couple of rules of thumb:

- Using two or three stages is usually optimal or near-optimal.
- Interpolate in order of the smallest to largest factors. For example, when interpolating by a factor of 60 in three stages, interpolate by 3, then by 4, then by 5. (Use the largest ratio on the highest rate.)

The multirate book references give additional, more specific guidance.

Interpolation always consists of two processes:

- Inserting L-1 zero-valued samples between each pair of input samples. This operation is called "zero stuffing".
- Lowpass-filtering the result.
The result (assuming an ideal interpolation filter) is a signal at L times the original sampling rate which has the same spectrum over the input Nyquist (0 to Fs/2) range, and with zero spectral content above the original Fs/2.

- The zero-stuffing creates a higher-rate signal whose spectrum is the same as the original over the original bandwidth, but has images of the original spectrum centered on multiples of the original sampling rate.
- The lowpass filtering eliminates the images.
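The zero-stuffing step is trivial to write down; here's a minimal sketch:

```python
def zero_stuff(x, L):
    """Insert L-1 zero-valued samples after each input sample."""
    y = []
    for sample in x:
        y.append(sample)
        y.extend([0] * (L - 1))
    return y
```

The result has L times as many samples per second; the lowpass filter then removes the images this operation creates.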

This idea is appealing because, intuitively, this "stairstep" output seems more similar to the original than the zero-stuffed version. But in this case, intuition leads us down the garden path. This process causes a "zero-order hold" distortion in the original passband, and still creates undesired images (see below).

Although these effects could be undone by filtering, it turns out that the zero-stuffing approach is not only more "correct", it actually reduces the amount of computation required to implement a FIR interpolation filter. Therefore, interpolation is always done via zero-stuffing.

The output of a FIR filter is the sum of each coefficient multiplied by each corresponding input sample. In the case of a FIR interpolation filter, some of the input samples are stuffed zeros. Each stuffed zero gets multiplied by a coefficient and summed with the others. However, this multiplying-and-summing has no effect when the data sample is zero--which we know in advance will be the case for L-1 out of each L input samples of a FIR interpolation filter. So why bother to calculate these taps?

The net result is that to interpolate by a factor of L, you calculate L outputs for each input using L different "sub-filters" derived from your original filter.

Here's an example of a 12-tap FIR filter that implements interpolation by a factor of four. The coefficients are h0-h11, and three data samples, x0-x2 (with the newest, x2, on the left) have made their way into the filter's delay line:

| h0 | h1 | h2 | h3 | h4 | h5 | h6 | h7 | h8 | h9 | h10 | h11 | Result |
|----|----|----|----|----|----|----|----|----|----|-----|-----|--------|
| x2 | 0 | 0 | 0 | x1 | 0 | 0 | 0 | x0 | 0 | 0 | 0 | x2·h0 + x1·h4 + x0·h8 |
| 0 | x2 | 0 | 0 | 0 | x1 | 0 | 0 | 0 | x0 | 0 | 0 | x2·h1 + x1·h5 + x0·h9 |
| 0 | 0 | x2 | 0 | 0 | 0 | x1 | 0 | 0 | 0 | x0 | 0 | x2·h2 + x1·h6 + x0·h10 |
| 0 | 0 | 0 | x2 | 0 | 0 | 0 | x1 | 0 | 0 | 0 | x0 | x2·h3 + x1·h7 + x0·h11 |

The table suggests the following general observations about FIR interpolators:

- Since the interpolation ratio is four (L=4), there are four "sub-filters": one with coefficients h0/h4/h8, one with h1/h5/h9, one with h2/h6/h10, and one with h3/h7/h11. These sub-filters are officially called "polyphase filters".
- For each input, we calculate L outputs by doing L basic FIR calculations, each using a different set of coefficients.
- The number of taps per polyphase filter is 3, or, expressed as a formula: Npoly = Ntotal / L.
- The coefficients of each polyphase filter can be determined by skipping every Lth coefficient, starting at coefficients 0 through L-1, to calculate corresponding outputs 0 through L-1.
- Alternatively, if you rearranged your coefficients in advance in "scrambled" order like this:

  h0, h4, h8, h1, h5, h9, h2, h6, h10, h3, h7, h11

  then you could just step through them in order.
- We have hinted here at the fact that N should be a multiple of L. This isn't absolutely necessary, but if N isn't a multiple of L, the added complication of using a non-multiple of L often isn't worth it. So if the minimum number of taps that your filter specification requires doesn't happen to be a multiple of L, your best bet is usually to just increase N to the next multiple of L. You can do this either by adding some zero-valued coefficients onto the end of the filter, or by re-designing the filter using the larger N value.
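The polyphase decomposition described above is a one-liner in Python: sub-filter p takes every Lth coefficient, starting at p. Using the 12-tap, L = 4 example (with the coefficients written as the integers 0-11 for clarity):

```python
def polyphase_split(h, L):
    """Split coefficient list h into L polyphase sub-filters."""
    return [h[p::L] for p in range(L)]

subfilters = polyphase_split(list(range(12)), 4)
# The four sub-filters group coefficients as h0/h4/h8, h1/h5/h9,
# h2/h6/h10, and h3/h7/h11 -- the same grouping shown in the table.
```

Flattening the sub-filters in order reproduces the "scrambled" coefficient ordering mentioned above.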

Since each output is calculated using only N/L coefficients (rather than N coefficients), you get an overall computational "savings" of (N - N/L) per output.

A simple way to think of the amount of computation required to implement a FIR interpolator is that it is equal to the computation required for a non-interpolating N-tap filter operating at the input rate. In effect, you have to calculate L filters using N/L taps each, so that's N total taps calculated per input.

Compared to the straightforward implementation of interpolation by upsampling the signal by stuffing it with L-1 zeros, then filtering it, you save memory by a factor of (L-1)/L. In other words, you don't have to store L-1 zero-stuffed "upsamples" per actual input sample.

Just use your favorite FIR design method. The design criteria are:

- TBD

An interpolating FIR is actually the same as a regular FIR, except that, for each input, you calculate L outputs per input using L polyphase filters, each having N/L taps. More specifically:

- Store a sample in the delay line. (The size of the delay line is N/L.)
- For each of L polyphase coefficient sets, calculate an output as the sum-of-products of the delay line values and the filter coefficients.
- Shift the delay line by one to make room for the next input.
Also, just as with ordinary FIRs, circular buffers can be used to eliminate the requirement to literally shift the data in the delay line.
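Here is a reference sketch of the steps above in Python. It assumes N is a multiple of L (as recommended earlier) and shifts the delay line literally rather than using a circular buffer:

```python
def fir_interpolate(x, h, L):
    """Polyphase FIR interpolator: L outputs per input sample.

    Equivalent to zero-stuffing x by L and filtering with h, but the
    stuffed zeros are never actually multiplied.
    """
    assert len(h) % L == 0, "pad h with zeros so N is a multiple of L"
    sub = [h[p::L] for p in range(L)]  # L polyphase sub-filters, N/L taps each
    taps = len(h) // L
    delay = [0.0] * taps               # delay line holds actual inputs only
    y = []
    for sample in x:
        delay = [sample] + delay[:-1]  # shift one input in
        for p in range(L):             # calculate L outputs per input
            y.append(sum(c * d for c, d in zip(sub[p], delay)))
    return y
```

For a unity-step input, once the delay line fills, each group of L outputs equals the sums of the L polyphase coefficient sets, which is exactly the step-response test suggested later in this section.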

Iowegian's ScopeFIR comes with a free set of multirate algorithms, including FIR interpolation functions in C. Just download and install the ScopeFIR distribution file.

The major DSP vendors provide examples of FIR interpolators in their data books and application notes, so check their web sites.

You can test an interpolating FIR in most of the ways you might test an ordinary FIR:

- A special case of an interpolator is an ordinary FIR. When given a value of 1 for L, an interpolator should act exactly like an ordinary FIR. You can then do impulse, step, and sine tests on it just like you can on an ordinary FIR.
- If you put in a sine whose frequency is within the interpolator's passband, the output should be distortion-free (once the filter reaches steady state), and the frequency of the output should be the same as the frequency of the input, in terms of absolute Hz.
- You can use a step response test. Given a unity-valued step input, every group of L outputs should be the same as the sums of the coefficients of the L individual polyphase filters, once the filter has reached steady state.

"Resampling" means combining interpolation and decimation to change the sampling rate by a rational factor.

Resampling is usually done to interface two systems which have different sampling rates. If the ratio of the two systems' rates happens to be an integer, decimation or interpolation can be used to change the sampling rate (depending on whether the rate is being decreased or increased); otherwise, interpolation and decimation must be used together to change the rate.

A practical and well-known example results from the fact that professional audio equipment uses a sampling rate of 48 kHz, but consumer audio equipment uses a rate of 44.1 kHz. Therefore, to transfer music from a professional recording to a CD, the sampling rate must be changed by a factor of:

(44100 / 48000) = (441 / 480) = (147 / 160)

There are no common factors in 147 and 160, so we must stop factoring at that point. Therefore, in this example, we would interpolate by a factor of 147 then decimate by a factor of 160.
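The factoring step is just a greatest-common-divisor reduction; in Python:

```python
from math import gcd

fs_in, fs_out = 48000, 44100   # professional rate -> CD rate
g = gcd(fs_out, fs_in)         # largest common factor: 300
L = fs_out // g                # interpolation factor: 147
M = fs_in // g                 # decimation factor: 160
```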

The resampling factor is simply the ratio of the output rate to the input rate. Given that the interpolation factor is L and the decimation factor is M, the resampling factor is L / M. In the above example, the resampling factor is 147 / 160 = 0.91875.

Yes. As always, the Nyquist criteria must be met relative to the resulting output sampling rate, or aliasing will result. In other words, the output rate cannot be less than twice the highest frequency (of interest) of the input signal.

Yes. Since resampling includes interpolation, you need an interpolation filter. Otherwise, the images created by the zero-stuffing part of interpolation will remain, and the interpolated signal will not be "the same" as the original.

Likewise, since resampling includes decimation, you seemingly need a decimation filter. Or do you? Since the interpolation filter is in-line with the decimation filter, you could just combine the two filters by convolving their coefficients into a single filter to use for decimation. Better yet, since both are lowpass filters, just use whichever filter has the lowest cutoff frequency as the interpolation filter.

As hinted at above:

- Determine the cutoff frequency of the decimation filter (as explained in Part 2: Decimation.)
- Determine the cutoff frequency of the interpolation filter (as explained in Part 3: Interpolation)
- Use the lower of the two cutoff frequencies to design the resampling filter.

Yes, but there are a couple of restrictions:

- If either the interpolation or decimation factors are prime numbers, you won't be able to decompose those parts of the resampler into stages.
- You must preserve the Nyquist criteria at each stage or else aliasing will result. That is, no stage can have an output rate which is less than twice the highest frequency of interest.

Just as with interpolation and decimation, the computational and/or memory requirements of the resampling filtering can sometimes be greatly reduced by using multiple stages.

The straight-forward implementation of resampling is to do interpolation by a factor of L, then decimation by a factor of M. (You must do it in that order; otherwise, the decimator would remove part of the desired signal--which the interpolator could not restore.)

No. The problem is that for resampling factors close to 1.0, the interpolation factor can be quite large. For example, in the case described above of changing the sampling rate from 48 kHz to 44.1 kHz, the ratio is only 0.91875, yet the interpolation factor is 147!

Also, you are filtering the signal twice: once in the interpolator and once in the decimator. However, one of the filters has a larger bandwidth than the other, so the larger-bandwidth filter is redundant.

Just combine the computational and memory advantages that FIR interpolator and decimator implementations can provide. (If you don't already understand those, be sure to read and understand Part 2: Decimation, and Part 3: Interpolation before continuing.)

First, let's briefly review what makes FIR interpolation and decimation efficient:

- When interpolating by a factor of L, you only have to actually calculate 1/L of the FIR taps per interpolator output.
- When decimating by a factor of M, you only have to calculate one output for every M decimator inputs.
So, combining these ideas, we will calculate only the outputs we actually need, using only a subset of the interpolation coefficients to calculate each output. That makes it possible to efficiently implement even FIR resamplers which have large interpolation and/or decimation factors.
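Those two ideas can be combined in a short sketch. This toy implementation (the function name and index math are my own illustration; a production resampler would stream samples and precompute a phase schedule) computes only the outputs that survive decimation, each from a single polyphase sub-filter:

```python
def fir_resample(x, h, L, M):
    """Rational-rate FIR resampler (interpolate by L, decimate by M).

    Computes y[m] = v[m*M], where v is the L-times interpolated stream,
    without ever forming v: the output phase p = (m*M) % L selects one
    polyphase sub-filter, applied directly to the original inputs.
    """
    assert len(h) % L == 0, "pad h with zeros so N is a multiple of L"
    taps = len(h) // L
    y = []
    for m in range(0, len(x) * L, M):   # only every Mth interpolated index
        p, n = m % L, m // L            # sub-filter phase, newest input index
        acc = 0.0
        for k in range(taps):
            if 0 <= n - k < len(x):
                acc += h[k * L + p] * x[n - k]
        y.append(acc)
    return y
```

With L = M = 1 this degenerates to an ordinary (one-tap, here) FIR, consistent with the testing cases listed at the end of this section.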

The tricky part is figuring out which polyphase filters to apply to which inputs, to calculate the desired outputs, as a function of L and M. There are various ways of doing that, but they're all beyond our scope here.

Iowegian's ScopeFIR comes with a free set of multirate algorithms, including FIR resampling functions in C. Just download and install the ScopeFIR distribution file.

- The most obvious method is to put in a sine whose frequency is within the resampler's passband. If an undistorted sine comes out, that's a good sign. Note, however, that there will typically be a "ramp up" at the beginning of the sine, due to the filter's delay line filling with samples. Therefore, if you analyze the spectral content of the sine, be sure to skip past the ramp-up portion.
- Depending on the resampling factor, resampling can be thought of as a general case of other types of multirate filtering. It can be:
- Interpolation: The interpolation factor, L, is greater than one, and the decimation factor, M, is one.
- Decimation: The interpolation factor, L, is one, but the decimation factor, M, is greater than one.
- "Ordinary" Filtering: The interpolation and decimation factors, L and M, are both one.

Therefore, if you successfully test it with all these cases using the methods appropriate for each case, it probably is correct.