by Rick Lyons
Performing interpolation on a sequence of time-domain samples is an important (often used) process in DSP, and there are many descriptions of time-domain interpolation (a kind of curve fitting) in the literature and on the Internet. However, there’s a lesser-known scheme used for interpolation that employs the inverse discrete Fourier transform (IDFT). This little tutorial attempts to describe that technique.
One of the fundamental principles of discrete signals is that "zero padding" in one domain results in an increased sampling rate in the other domain. For example, the most common form of zero padding is to append a string of zero-valued samples to the end of some time-domain sequence. Looking at a simple example of this notion, Figure 1(a) shows a Hanning window sequence w(n) defined by 32 time samples. If we take a 32-point DFT of w(n), and just look at the magnitudes, we get the WdB(m) spectral magnitude plot shown in Figure 1(b).
If, on the other hand, we zero pad (append) 96 zero-valued samples to the end of w(n), we’ll have the 128-sample w’(n) sequence shown in Figure 1(c). The spectral magnitude plot of a 128-point DFT of w’(n) is shown in Figure 1(d), where we can see the more detailed structure of the Fourier transform of a Hanning window. So, in this case, we can say "zero padding in the time domain results in an increased sampling rate in the frequency domain". Here the zero padding increased our frequency-domain sampling (resolution) by a factor of four (128/32).
For each sample in Figure 1(b), we have four samples in Figure 1(d).
Many folks call this process "spectral interpolation".
OK, that’s time-domain zero padding. No big deal, you’ve probably seen this before. Now let’s see what frequency-domain zero padding (FDZP) will do for us by way of an example. Think of an 8-sample real discrete sequence comprising the sum of a 1 kHz sinusoid and a 2 kHz sinusoid, defined as:
x(n) = sin(2·pi·1000·n·ts) + 0.5·sin(2·pi·2000·n·ts + 3·pi/4),
where ts is the sample period (1/fs), and the fs sample rate is 8000 samples/second. Our eight x(n) samples are shown as the black dots in Figure 2.
We’ll call x(n)’s 8-point DFT the frequency-domain sequence X(m), the real and imaginary parts of which are shown in Figure 3(a) and 3(b). OK, here’s where the zero padding comes in. If we insert 8 zero-valued complex samples in the middle of X(m), we’ll have a new complex spectrum whose real and imaginary parts are shown in Figure 3(c) and 3(d). Realize, now, that a complex zero is merely 0 + j0.
That "middle of X(m)" phrase means just prior to half the sample rate, or 4 kHz in this example. Because of the way the X(m)’s sample index numbering is mapped to the frequency-domain axis, we can think of this fs/2 = 4 kHz point as the highest positive frequency in the spectrum. Thus we insert the zeros after the first N/2 spectral samples, where N is the length of X(m), in order to maintain spectral symmetry. (We’re assuming that the 4 kHz, X(N/2), spectral component is zero, or at least negligibly small, in magnitude.)
OK, let’s call this new 16-sample discrete spectrum X’(m).
Now, here’s the slick part. If we take a 16-point inverse DFT of X’(m), we’ll get the interpolated time sequence shown in Figure 4. Check it out! In between each of the original x(n) samples (shaded dots), we’ve calculated the intermediate time samples (the black dots). All 16 dots in Figure 4 represent the new interpolated 16-sample x’(n) sequence. In this case, we can say "zero padding in the frequency domain results in an increased sampling rate in the time domain".
A few things to keep in mind about this FDZP technique:
1.) Notice the agreeable property that the FDZP’s interpolated time samples match the original time samples. (The shaded dots in Figure 4.)
2.) We must not append zeros to the end of the X(m) sequence, as occurs in time-domain zero padding. The complex zero padding must take place exactly in the middle of the original X(m) sequence, with the middle frequency sample being fs/2.
3.) The new time sequence x’(n), the inverse DFT of X’(m), is complex. However, if we stuffed the zeros properly, X’(m) will be symmetric and the imaginary parts of x’(n) should all be zero (other than very small computational errors). Thus Figure 4 is a plot of the real parts of x’(n).
4.) Notice how the amplitudes of the new x’(n) time sequence were reduced by a factor of two in our example. (This amplitude reduction can, of course, be avoided by doubling either the X’(m) or the x’(n) amplitudes.)
5.) In Figure 4 we interpolated by a factor of two. Had we stuffed, say, 24 zeros into the X(m) sequence, we could perform interpolation by a factor of four using the inverse fast Fourier transform (IFFT). The point here is that the number of stuffed zeros must result in an X’(m) sequence whose length is an integer power of two if you want to use the efficient radix-2 inverse FFT algorithm. If your new X’(m) sequence’s length is not an integer power of two, you’ll have to use the inverse discrete Fourier transform (IDFT) to calculate your interpolated time-domain samples.
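The whole FDZP procedure is short enough to sketch in a few lines of code. Here's a minimal NumPy sketch (the function name and the `factor` parameter are my own; the article works the factor-of-two case by hand). It inserts the complex zeros just before the fs/2 point and scales the result to undo the amplitude reduction noted in item 4 above:

```python
import numpy as np

def fdzp_interpolate(x, factor):
    """Interpolate a real sequence via frequency-domain zero padding.

    Assumes len(x) is even and the X(N/2) spectral component is
    negligible, as the article requires.
    """
    n = len(x)
    X = np.fft.fft(x)
    # Stuff (factor - 1) * n complex zeros into the middle of X(m),
    # just before the fs/2 sample, to maintain spectral symmetry.
    Xp = np.concatenate([X[:n // 2],
                         np.zeros((factor - 1) * n, dtype=complex),
                         X[n // 2:]])
    # Scale the inverse FFT output by `factor` to undo the amplitude
    # reduction that the zero padding introduces.
    return np.real(np.fft.ifft(Xp)) * factor
```

Running this on the article's two-tone, 8-sample example with factor = 2 reproduces the original 8 samples at the even output indices, with the interpolated samples in between.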
Although I haven't gone through a mathematical analysis of this scheme, the fact that it's called "exact interpolation" in the DSP literature is reasonable for periodic time-domain sequences. Think of the standard technique used to perform time-domain interpolation using a low-pass FIR filter. In that method, zero-valued samples are inserted between each of the original time samples, and then the new (lengthened) sequence is applied to an FIR filter to attenuate the spectral images caused by the inserted zeros. The accuracy of that FIR filter interpolation technique depends on the quality of the filter. The greater the low-pass FIR stopband attenuation, the greater the image rejection, and the more accurate the time-domain interpolation. In this FDZP technique, for periodic time signals, image rejection is ideal in the sense that the spectral images all have zero amplitudes.
If our original time-domain sequence is not periodic, then the FDZP scheme exhibits Gibbs' phenomenon: there will be errors at the beginning and end of the interpolated time samples. When we try to approximate time-domain discontinuities with a finite number of frequency-domain values, ripples occur just before and after the approximated (interpolated) discontinuity. Of course, if your original time sequence is very large, perhaps you can discard some of the initial and final erroneous interpolated time samples. From a practical standpoint, it's a good idea to model this FDZP technique to see if it meets the requirements of your application.
One last thought here. For those readers with Law Degrees, don't try to cheat and use this FDZP technique to compensate for failing to meet the Nyquist sampling criterion when your x(n) time samples were originally obtained. This interpolation technique won't help because if you violated Nyquist to get x(n), your X(m) DFT samples will be invalid due to aliasing errors. This FDZP scheme only works if Nyquist was satisfied by the original x(n).
About the Author:
Rick Lyons is the author of the best-selling DSP book Understanding Digital Signal Processing [Lyo97], and also teaches the short course Digital Signal Processing Made Simple For Engineers.
© 1999 Richard G. Lyons. Used with permission.
Like most things in DSP, there are several methods to create minimum-phase Finite Impulse Response (FIR) filters.
If you want to transform a symmetric (linear phase) FIR into a minimum-phase FIR of the same length, you can simply determine the zeros of h(n), and then invert the zeros which are outside the unit circle (i.e., compute their reciprocals). Since the zeros of real, symmetric FIR filters are grouped as conjugate reciprocals (a + jb, a - jb, 1 / (a + jb), and 1 / (a - jb)), all of the zeros will be doubled after the inversion. (However, if the FIR has even length, there will also be a zero at w = pi.) The filter will be suboptimal with respect to its length, but if you are experienced with pole-zero manipulation, you can move some of the zeros around to get a better filter.
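Here is a minimal NumPy sketch of this zero-inversion method (the function name is my own). Each zero outside the unit circle is replaced by its conjugate reciprocal, and the gain is rescaled so the magnitude response is unchanged:

```python
import numpy as np

def minimum_phase_by_zero_inversion(h):
    """Reflect the zeros of a real FIR filter h(n) inside the unit circle."""
    r = np.roots(h)
    outside = np.abs(r) > 1.0
    # Reflecting a zero r scales the magnitude response by 1/|r|,
    # so compensate with the product of the reflected magnitudes.
    gain = h[0] * np.prod(np.abs(r[outside]))
    # Replace each outside zero with its conjugate reciprocal.
    r[outside] = 1.0 / np.conj(r[outside])
    return np.real(gain * np.poly(r))
```

For example, h = [1, 2.5, 1] (zeros at -2 and -0.5) becomes [2, 2, 0.5] (a double zero at -0.5), with the same magnitude response.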
A second method, described in [Lim88], is called spectral factorization. This method builds on the first: in the first method, we end up with pairs of double zeros and a suboptimal filter. We can instead design a prototype filter that meets the squared specification |H(z)|^2 and then convert it into a filter H(z) that has minimum phase. We can outline this as: design the prototype filter, find its (doubled) zeros, keep one zero from each double pair, and inflate the remaining zeros back into filter coefficients.
This approach has a few subtleties, though. The first is creating the prototype filter: we have to ensure that its magnitude response remains positive. This somewhat complicates an equiripple design, but it can be dealt with. If dp and ds are the required passband and stopband ripples for a lowpass minimum-phase FIR, then the passband ripple of the prototype filter needs to be
4.0 * dp / (2.0 + 2.0 * dp * dp - ds * ds)
and the stopband ripple
ds * ds / (2.0 + 2.0 * dp * dp - ds * ds)
This leads us to the second problem. The requirements above can result in a fairly long filter, because the prototype filter is designed to ripples proportional to the squares of the final ripples. This can make it difficult to accurately calculate the roots of the prototype filter.
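The two ripple expressions above translate directly into code; a small helper (the name is my own) makes the squaring effect easy to see:

```python
def prototype_ripples(dp, ds):
    """Passband and stopband ripples for the |H|^2 prototype filter,
    from the expressions quoted above."""
    denom = 2.0 + 2.0 * dp * dp - ds * ds
    return 4.0 * dp / denom, ds * ds / denom
```

For dp = 0.01 and ds = 0.001, the prototype's stopband ripple comes out near ds^2 / 2, which is why the prototype tends to need so many more taps.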
A third method for creating a minimum-phase filter uses homomorphic filtering. It is described in both [Lim88] and [Opp89]. We take the coefficients of our prototype filter as a sequence, then extract the minimum-phase component using a frequency-invariant linear filter. The steps are: take the DFT of the coefficients, compute the log magnitude, inverse DFT to obtain the cepstrum, window (fold) the cepstrum to keep the minimum-phase portion, then take the DFT, exponentiate, and inverse DFT to recover the minimum-phase coefficients.
The only real problem with this method is making sure that the DFTs are long enough to minimize the aliasing error. (Thanks to Roland Froehlich for giving me some code to test my implementation against.)
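A common realization of the homomorphic method uses the real cepstrum. Here's a NumPy sketch (the function name and the nfft default are my own choices; it assumes the magnitude response is nonzero everywhere, and nfft controls the cepstral aliasing warned about above):

```python
import numpy as np

def homomorphic_minimum_phase(h, nfft=1024):
    """Extract the minimum-phase component of h via the real cepstrum."""
    # Log magnitude of the (zero-padded) spectrum; assumes |H| > 0.
    cep = np.fft.ifft(np.log(np.abs(np.fft.fft(h, nfft)))).real
    # Fold the cepstrum -- this is the "frequency-invariant linear
    # filter" that keeps only the minimum-phase part.
    fold = np.zeros(nfft)
    fold[0] = cep[0]
    fold[1:nfft // 2] = 2.0 * cep[1:nfft // 2]
    fold[nfft // 2] = cep[nfft // 2]
    # Back out of the cepstral domain: DFT, exponentiate, inverse DFT.
    hmin = np.fft.ifft(np.exp(np.fft.fft(fold))).real
    return hmin[:len(h)]
```

The result has the same magnitude response as h, with all zeros on or inside the unit circle.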
The above methods only work for creating real-valued minimum-phase filters. In addition, the methods that delete zeros in the Z domain need to use polynomial inflation to turn the final zeros into the filter coefficients. Polynomial inflation, in addition to root finding, can suffer from catastrophic numerical errors.
A new method has been developed that can create both real and complex-valued FIR filters for arbitrary magnitude responses. This method is non-iterative and relies on properties of the Discrete Hilbert Transform. It can also be extended into higher dimensions. The interested reader should visit the page Optimal Design of Real and Complex Minimum-Phase Digital FIR Filters for more info.
Oscillators can be created in software directly, using the "sine" function, or they can be calculated indirectly using several different iterative methods. We survey those methods here.
First, let's assume that:
f = desired oscillator frequency
Fs = sample rate
w = 2 pi f / Fs
phi = initial phase
y = output array containing the oscillator signal
Now, creating a real oscillator in software is equivalent to sampling a sinusoid, so
for i = 0 to N
    y[i] = sin(w * i + phi)
is sufficient. This isn't very efficient, though, because we calculate a sine for each output point (though using a sine approximation can speed things up a little.)
Another way is to use the following recurrence relationship for sinusoids:
y[n] = a * y[n-1] - y[n-2], where a = 2 cos(w)
In pseudocode, it would look like:
a = 2 * cos(w)
y[0] = sin(phi)
y[1] = sin(w + phi)
for i = 2 to N
    y[i] = a * y[i-1] - y[i-2]
This has a problem, though. The recurrence relationship is a 2nd-order all-pole system with
H(z) = 1 / (1 - a*z^-1 + z^-2)
The poles are the roots of the denominator, i.e., of z^2 - a*z + 1 = 0. To find them, we plug
a = 1
b = -2*cos(w)
c = 1
into the standard quadratic formula:
x = (-b +/- sqrt(b*b - 4*a*c)) / (2*a)
x = (2*cos(w) +/- sqrt(4*cos(w)*cos(w) - 4)) / 2
= (2*cos(w) +/- 2*sqrt(cos(w)*cos(w) - 1)) / 2
= cos(w) +/- sqrt(cos(w)*cos(w) - 1)
= cos(w) +/- sqrt(-1 * (1 - cos(w)*cos(w)))
= cos(w) +/- j*sqrt(1 - cos(w)*cos(w))
= cos(w) +/- j*sqrt(sin(w)*sin(w))
= cos(w) +/- j*sin(w)
so H(z) factors as
H(z) = 1 / ((1 - e^jw * z^-1) * (1 - e^-jw * z^-1))
This has a pair of conjugate poles on the unit circle, and as a result, is nearly unstable. Despite this, the above approach tends to work well in practice, depending on the application, numerical representation, and precision used. ([Fre94] has an excellent discussion of this method.)
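A direct Python transcription of the recurrence (the function name is mine) shows how cheap the per-sample work is; only the two seed values need a sine call:

```python
import math

def resonator_oscillator(w, phi, n):
    """Generate n samples of sin(w*i + phi) using the 2nd-order
    recurrence y[i] = 2*cos(w)*y[i-1] - y[i-2]."""
    a = 2.0 * math.cos(w)
    y = [math.sin(phi), math.sin(w + phi)]  # seed the recurrence
    for i in range(2, n):
        y.append(a * y[i - 1] - y[i - 2])
    return y[:n]
```

Over short runs the output tracks the true sinusoid closely; over very long runs the rounding behavior depends on the numerical precision used, as noted above.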
For a complex (quadrature) oscillator, we can use the above method, using cosine for the real arm and sine for the quadrature arm (you can actually generate both at the same time; see [Fre94] for details). Another method is to view the problem as a rotating vector. We can create an initial vector and rotate it using a complex multiply for each point. You can derive this method from the cos(A + B) and sin(A + B) trig relationships.
dr = cos(w)       /* dr,di are used to rotate the vector */
di = sin(w)
y[0].r = cos(phi) /* initial vector at phase phi */
y[0].i = sin(phi)
for i = 1 to N
    y[i].r = dr * y[i-1].r - di * y[i-1].i
    y[i].i = dr * y[i-1].i + di * y[i-1].r
This approach still suffers from stability problems, though. The new recurrence relationship is
y[n] = e^jw * y[n-1]
H(z) = 1 / (1 - e^jw * z^-1)
This system has a single pole on the unit circle and is unstable. In this case, we can deal with it. We know that the length of the vector is 1, so we can normalize each point as we create it. If we define the magnitude of y[n] as
|y[n]| = sqrt(y[n].r * y[n].r + y[n].i * y[n].i)
we can add
y[n].r = y[n].r / |y[n]|
y[n].i = y[n].i / |y[n]|
inside of the "for" loop to normalize the magnitude of y[n]. We lose efficiency, though, because we calculate a square root for each point. However, if x is close to 1, then
1 / sqrt(x) ~= (3 - x) / 2
Since the length of the rotating vector will always be close to 1 (it will only stray from 1 due to roundoff error), we can use this relationship. We can change the AGC inside the for loop to
mag_sq = y[n].r * y[n].r + y[n].i * y[n].i
y[n].r = y[n].r * (3 - mag_sq) / 2
y[n].i = y[n].i * (3 - mag_sq) / 2
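Putting the rotating vector and the square-root-free AGC together (the function name is mine), a Python sketch looks like:

```python
import math

def quadrature_oscillator(w, phi, n):
    """Generate n (cos, sin) pairs by rotating a unit vector, with the
    (3 - mag^2)/2 correction keeping the magnitude pinned near 1."""
    dr, di = math.cos(w), math.sin(w)      # rotation per sample
    yr, yi = math.cos(phi), math.sin(phi)  # initial vector at phase phi
    out = [(yr, yi)]
    for _ in range(1, n):
        yr, yi = dr * yr - di * yi, dr * yi + di * yr
        g = (3.0 - (yr * yr + yi * yi)) / 2.0  # approx 1/|y|, no sqrt
        yr *= g
        yi *= g
        out.append((yr, yi))
    return out
```

Since the correction factor g is a real scalar, it only adjusts the magnitude; the phase advances purely through the complex multiply.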
Another method for creating an oscillator is to use a lookup table. The key to understanding the lookup table approach is understanding the concept of the phase accumulator.
Let's look at a sampled sinusoid
for i = 0 to N
    y[i] = sin(w * i + phi)
Since
sin(x) = sin(x + 2pi) = sin(x - 2pi)
we can write the sampled sinusoid as
for i = 0 to N
    y[i] = sin((w * i + phi) mod 2pi)
Since the variable "i" is monotonically increasing by 1 each iteration, we can rewrite this as
dw = phi
for i = 0 to N
    y[i] = sin(dw mod 2pi)
    dw = dw + w
and again as
dw = phi
for i = 0 to N
    y[i] = sin(dw)
    dw = (dw + w) mod 2pi
Now, here is the key. A very easy way to do modulo addition is to use two's complement math. So, if we make dw a two's complement register, the mod 2pi is implicitly done, and we can also use it to index into a precalculated lookup table. If we have M bits, then the most negative number will represent -pi and the most positive number will represent pi (minus a smidge).
In practical applications, if we need N bits of phase resolution in the lookup table, we only use a table with 2^N/4 entries and use the symmetry properties of sine to choose the proper entry. We also usually use a phase accumulator, delta phase, and initial phase with M bits, where M > N. When we index into the lookup table, we take the top N bits of the phase accumulator (either rounded or truncated) as the index.
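Here is a Python sketch of the phase-accumulator oscillator. Integer masking plays the role of the two's complement wrap; for simplicity it uses a full-cycle table rather than the quarter-wave table described above, and the constants M and TABLE_BITS are my own choices:

```python
import math

M = 16           # phase accumulator bits
TABLE_BITS = 8   # table index bits; full-cycle table for simplicity
TABLE = [math.sin(2.0 * math.pi * k / (1 << TABLE_BITS))
         for k in range(1 << TABLE_BITS)]

def lut_oscillator(freq, fs, phi, n):
    """Lookup-table oscillator: the M-bit accumulator wraps mod 2^M,
    which implements the mod-2pi operation; the top TABLE_BITS bits
    of the accumulator (truncated) index the table."""
    mask = (1 << M) - 1
    acc = int(phi / (2.0 * math.pi) * (1 << M)) & mask
    delta = int(freq / fs * (1 << M)) & mask
    out = []
    for _ in range(n):
        out.append(TABLE[acc >> (M - TABLE_BITS)])
        acc = (acc + delta) & mask
    return out
```

The output is only as accurate as the phase and amplitude quantization allow, which is the usual size/accuracy trade-off of the table method.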
[Fre94] Digital Signal Processing in Communication Systems by Marvin E. Frerking
White Gaussian Noise (WGN) is needed for DSP system testing or DSP system identification. Here are two methods for generating White Gaussian Noise. Both rely on having a good uniform random number generator. We will assume that the function "uniform()" returns a random variable in the range [0, 1] and has good statistical properties. (Be forewarned, though, that many standard random number generators have poor statistical properties.) Once we can generate noise signals with normal distribution, we can adjust the signal's mean and variance to use it for any purpose.
The Central Limit Theorem states that the sum of N random variables approaches a normal distribution as N approaches infinity. We can outline an algorithm that uses this approach as:
X = 0
for i = 1 to N
    U = uniform()
    X = X + U
/* for uniform randoms in [0,1], mu = 0.5 and var = 1/12 */
/* adjust X so mu = 0 and var = 1 */
X = X - N/2          /* set mean to 0 */
X = X * sqrt(12 / N) /* adjust variance to 1 */
When the algorithm finishes, X will be our unit normal random. X can be further modified to have a particular mean and variance, e.g.:
X' = mean + sqrt(variance) * X
The drawback to this method is that X will be bounded to a finite range instead of (-Infinity, Infinity), and if the calls to uniform() are not truly independent, then the noise will no longer be white. Jeruchim et al. recommend N >= 20 for good results.
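A Python version of the Central Limit Theorem generator (the function name is mine; `random.random()` plays the part of `uniform()`):

```python
import random

def clt_gaussian(n=20):
    """Approximate a unit normal as the sum of n uniform randoms,
    shifted and scaled as outlined above (n >= 20 per Jeruchim)."""
    x = sum(random.random() for _ in range(n))
    x -= n / 2.0                   # set mean to 0
    return x * (12.0 / n) ** 0.5   # set variance to 1
```

The result can then be shifted and scaled for any desired mean and variance, as shown for X' above.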
The Box-Muller method uses the technique of inverse transformation to turn two uniformly distributed randoms, U1 and U2, into two unit normal randoms, X and Y.
do /* if U1 == 0, then the log() below will blow up */
    U1 = uniform()
while U1 == 0
U2 = uniform()
X = sqrt(-2 * log(U1)) * cos(2pi * U2)
Y = sqrt(-2 * log(U1)) * sin(2pi * U2)
We can use some trigonometry to simplify this a bit and get:
do
    U1 = uniform()      /* U1 = [0,1] */
    U2 = uniform()      /* U2 = [0,1] */
    V1 = 2 * U1 - 1     /* V1 = [-1,1] */
    V2 = 2 * U2 - 1     /* V2 = [-1,1] */
    S = V1 * V1 + V2 * V2
while S >= 1 or S == 0
X = sqrt(-2 * log(S) / S) * V1
Y = sqrt(-2 * log(S) / S) * V2
The above is called the Polar Method and is fully described in the Ross book [Ros88].
X and Y will be unit normal random variables (mean = 0 and variance = 1), and can be easily modified for a different mean and variance.
X' = mean + sqrt(variance) * X
Y' = mean + sqrt(variance) * Y
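The polar method is compact in Python as well (the function name is mine); each accept/reject pass yields two unit normals:

```python
import math
import random

def polar_gaussian():
    """One pass of the polar (Box-Muller) method: returns two
    independent unit normal random variables."""
    while True:
        v1 = 2.0 * random.random() - 1.0
        v2 = 2.0 * random.random() - 1.0
        s = v1 * v1 + v2 * v2
        if 0.0 < s < 1.0:   # reject points outside the unit circle
            break
    f = math.sqrt(-2.0 * math.log(s) / s)
    return f * v1, f * v2
```

Note that no sine or cosine calls are needed, which is the practical advantage of the polar form over the basic Box-Muller transformation.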
by Matt Donadio
If the actual frequency of a signal does not fall on the center frequency of a DFT (FFT) bin, several bins near the actual frequency will appear to have a signal component. In that case, we can use the magnitudes of the nearby bins to determine the actual signal frequency.
This is a summary of some methods found at Some Frequency Estimation Algorithms. (Other methods of sinusoidal parameter estimation which do not rely on the DFT are found on that page; those methods also work well, and are sometimes faster than these.)
The following notation is used below:
The first two methods appeared in comp.dsp posts.
Of the above methods, Quinn's second estimator has the least RMS error.
Posted 1999/04/19, Released 1999/05/03, Revised 1999/05/05