Digital Signal Processing is a difficult and complex subject. Here, we offer tutorials to clear up some of the mysteries of DSP.

Understanding complex numbers and quadrature signals is essential for understanding DSP at both a theoretical and a practical level. Yet this strange, *complex* subject (based on the admittedly *imaginary* construct of the *square root of negative one*!) is among the hardest for DSP beginners to grasp - and is confusing at times even for advanced DSPers.

But don't despair: help is on the way. In this tutorial, Rick Lyons, author of the best-selling DSP books *Understanding Digital Signal Processing* and *Streamlining Digital Signal Processing: A Tricks of the Trade Guidebook*, clears the fog around this difficult subject by providing the clearest, most intuitive explanation yet of quadrature signals and their importance in Digital Signal Processing.

In the classic paper, "An Economical Class of Digital Filters for Decimation and Interpolation", Hogenauer introduced an important class of digital filters called "Cascaded Integrator-Comb", or "CIC" for short (also sometimes called "Hogenauer filters").

Here, Matthew Donadio provides a gentler introduction to the subject of CIC filters, geared specifically to the needs of practicing DSP designers:

A Maximum-Length Sequence (MLS) has two *different* (but related) definitions:

One is the driving function applied to the input of a linear time invariant (LTI) system:

x[n] = X*(-1)^a[n]

The other definition is simply the binary sequence, a[n] = 0 or 1, used in the exponent. It is clear that x[n] = +X or -X and that its RMS and peak values are both X, making its crest factor (peak/RMS) equal to 1, the lowest it can get. This is why MLS is about as immune to broadband noise as a test signal can get. In comparison, the other popular measurement method, the linearly swept frequency measurement, has a crest factor of sqrt(2), which is good but not quite as good as MLS.
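As a quick numerical sanity check of those crest-factor figures, here is a minimal Python sketch; the particular ±1 pattern and the chirp parameters are arbitrary illustrative choices, not from the original:

```python
import math

# Any +/-1 sequence (X = 1): peak and RMS are both 1, so crest factor is 1.
x = [1, -1, -1, 1, -1, 1, 1]
rms = math.sqrt(sum(v * v for v in x) / len(x))
crest_mls = max(abs(v) for v in x) / rms
print(crest_mls)                                  # 1.0

# A linearly swept sine (chirp): crest factor approaches sqrt(2) ~ 1.414.
t = [i / 1000 for i in range(1000)]
chirp = [math.sin(2 * math.pi * (5 + 20 * u) * u) for u in t]
rms_c = math.sqrt(sum(v * v for v in chirp) / len(chirp))
crest_chirp = max(abs(v) for v in chirp) / rms_c
print(crest_chirp)                                # close to 1.414
```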

Also, the autocorrelation of x[n] is

Rx[n] = 1/(N-1) * SUM(i=0..N-2){ x[i]*x[n+i] }
      = (X^2)/(N-1) * SUM(i=0..N-2){ (-1)^(a[i] + a[n+i]) }
      = (X^2)/(N-1) * SUM(i=0..N-2){ (-1)^(a[i] XOR a[n+i]) }

(N is defined later.) It is clear that

Rx[0] = X^2

The value of Rx[n] for n != 0 is determined later. The sequence a[n] is generated from a shift-register exclusive-OR (XOR) operation using the K-bit word A[n], which is

A[n] = SUM(k=0..K-1){ a[k][n] * 2^k }

a[k][n] are the bits of the word, and a[n] can be chosen to be any of the K bits; for simplicity, we'll choose a[n] = a[0][n], the LSB. The shift-and-XOR operation is explicitly

a[k][n+1] = (a[0][n] * p[k+1]) XOR a[k+1][n]

for 0 <= k < K, where it is understood that a[K][n] is always zero. p[k] are the K *most* significant bits of the Kth-order "primitive polynomial", P (K+1 bits), which is a specially chosen constant.

P = SUM(k=0..K){ p[k] * 2^k }

Now it is clear that A[n] is a function of its previous state, A[n-1], and the primitive polynomial, P. Since A has K bits, there are 2^K possible bit patterns, so conceivably A[n] could take on 2^K possible states before the sequence repeats. But it is clear that the zero state must map only to the zero state, no matter what P is.

So we define a[n] to be an MLS and P to be a primitive polynomial *if*, starting from a nonzero state of A[n], *all* other nonzero states of A[n] are visited before A[n] returns to the original state and the sequence of states repeats. There are 2^K - 1 nonzero states, so the MLS has length 2^K - 1 (excluding the zero state, that's as long as it gets), which we now define to be N-1 (or N = 2^K). A few bit combinations for P will make A[n] an MLS; other bit combinations will return A[n] to a given state in fewer than N-1 shift-XOR operations. Clearly the MSB of P, p[K], must be 1, or you immediately cut the number of reachable A[n] states in half.

Okay, if you look at any bit of A[n], particularly the LSB, a[0][n] (or a[n]), you will see that there are exactly N/2 "1" bits and N/2 - 1 "0" bits, because the zero state is not part of the A[n] sequence. So a[n] has one more "1" bit than "0" bits, and x[n] takes the value -1 one more time than +1 in each cycle of the MLS.
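The shift-and-XOR recurrence above is easy to try in code. A minimal Python sketch; the choice K = 4 with P = x^4 + x + 1 = 0b10011 (a known primitive polynomial) is just an illustrative example:

```python
def mls_bits(K, P, seed=1):
    """One period (2^K - 1 bits) of a[n] = a[0][n], generated by the
    recurrence a[k][n+1] = (a[0][n] * p[k+1]) XOR a[k+1][n]."""
    taps = P >> 1              # p[1..K]: the K most significant bits of P
    A, out = seed, []
    for _ in range((1 << K) - 1):
        lsb = A & 1
        out.append(lsb)
        A = (A >> 1) ^ (taps if lsb else 0)
    return out

a = mls_bits(4, 0b10011)       # P = x^4 + x + 1, a primitive polynomial
print(len(a))                  # 15 = 2^4 - 1 = N - 1
print(sum(a), len(a) - sum(a)) # 8 ones, 7 zeros: one more "1" than "0"
```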

Now, here's the nontrivial part: if you take an MLS, a[i], which is a circular (repeating) sequence, and XOR it with a shifted copy, a[n+i], you get either the zero sequence (if n = 0) or the very same MLS circularly shifted by some completely different delay, a[m+i]. That is

a[i+m] = a[i] XOR a[i+n]   if n != 0, N-1, 2(N-1), ...
     0 = a[i] XOR a[i+0]

This is because

a[k][i+1] XOR a[k][i+n+1] = { (a[0][i] * p[k+1]) XOR a[k+1][i] } XOR { (a[0][i+n] * p[k+1]) XOR a[k+1][i+n] }
                          = ( {a[0][i] XOR a[0][i+n]} * p[k+1] ) XOR { a[k+1][i] XOR a[k+1][i+n] }

which says that the very same rule that generates the MLS, a[i], also governs the sequence a[i] XOR a[i+n]. That means the XOR sequence must be either the zero sequence (which happens when n = 0) or the same MLS, possibly circularly shifted to some other delay.
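Here is a small Python sketch of that shift-and-add property; the generator and the K = 4, P = 0b10011 polynomial are illustrative choices, not from the original text:

```python
def mls_bits(K, P, seed=1):
    # Shift/XOR recurrence: new bit k = (LSB * p[k+1]) XOR old bit k+1
    taps, A, out = P >> 1, seed, []
    for _ in range((1 << K) - 1):
        lsb = A & 1
        out.append(lsb)
        A = (A >> 1) ^ (taps if lsb else 0)
    return out

a = mls_bits(4, 0b10011)                   # P = x^4 + x + 1, period 15
L = len(a)
matches = {}
for n in range(1, L):
    s = [a[i] ^ a[(i + n) % L] for i in range(L)]
    # the XOR result must equal a[] circularly shifted by exactly one m
    matches[n] = [m for m in range(L)
                  if s == [a[(i + m) % L] for i in range(L)]]
print(all(len(v) == 1 for v in matches.values()))   # True
```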

Now, what this says is that the autocorrelation defined above is

Rx[n] = (X^2)/(N-1) * SUM(i=0..N-2){ (-1)^(a[i] XOR a[n+i]) }

              { 1         for n = 0, N-1, 2(N-1), ...
Rx[n] = X^2 * {
              { -1/(N-1)  for other n

because when n is not 0, the SUM has N/2 "-1" terms and (N/2 - 1) "+1" terms resulting in one extra "-1" term.

Rx[n] = X^2 * ( N/(N-1)*d[n] - 1/(N-1) )

where d[n] is the periodic unit impulse function

       { 1   for n = 0, N-1, 2(N-1), ...
d[n] = {
       { 0   for other n
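Those autocorrelation values are easy to check numerically. A minimal Python sketch, assuming the primitive polynomial P = x^5 + x^2 + 1 (0b100101) and X = 1; the unnormalized circular sums should be N-1 at zero lag and -1 everywhere else:

```python
def mls(K, P, seed=1):
    taps, A, out = P >> 1, seed, []
    for _ in range((1 << K) - 1):
        lsb = A & 1
        out.append(1 - 2 * lsb)               # x[n] = (-1)^a[n] with X = 1
        A = (A >> 1) ^ (taps if lsb else 0)
    return out

x = mls(5, 0b100101)                          # period N - 1 = 31, so N = 32
L = len(x)
# unnormalized circular autocorrelation sums (divide by N-1 to get Rx[n])
S = [sum(x[i] * x[(n + i) % L] for i in range(L)) for n in range(L)]
print(S[0])                                   # 31: Rx[0] = 31/31 = 1 = X^2
print(set(S[1:]))                             # {-1}: Rx[n] = -1/(N-1), n != 0
```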

So, except for a little DC bias, x[n] has the same autocorrelation as white noise if N is large enough. Now, to make your MLS measurement, you squirt some MLS at your target and cross-correlate the result against all delayed versions of the same MLS

Ryx[n] = 1/(N-1) * SUM(i=0..N-2){ y[i] * x[i-n] }

y[n] is the output of your LTI system:

y[n] = LTI{ x[n] } = SUM(j=-inf..inf){ h[j] * x[n-j] }

where h[n] is the impulse response.

Ryx[n] = 1/(N-1) * SUM(i=0..N-2){ SUM(j=-inf..inf){ h[j] * x[i-j] } * x[i-n] }
       = 1/(N-1) * SUM(j=-inf..inf){ h[j] * SUM(i=0..N-2){ x[i-j]*x[i-n] } }
       = SUM(j=-inf..inf){ h[j] * 1/(N-1) * SUM(i=0..N-2){ x[i-j]*x[i-n] } }
       = SUM(j=-inf..inf){ h[j] * 1/(N-1) * SUM(i=-j..N-2-j){ x[i]*x[i+j-n] } }
       = SUM(j=-inf..inf){ h[j] * 1/(N-1) * SUM(i=0..N-2){ x[i]*x[i+j-n] } }
       = X^2 * SUM(j=-inf..inf){ h[j] * ( N/(N-1)*d[j-n] - 1/(N-1) ) }
       = N/(N-1) * X^2 * SUM(j=-inf..inf){ h[j]*d[j-n] } - 1/(N-1) * X^2 * SUM(j=-inf..inf){ h[j] }
       = N/(N-1) * X^2 * SUM(m=-inf..inf){ h[n + m*(N-1)] } - 1/(N-1) * X^2 * SUM(j=-inf..inf){ h[j] }

If the impulse response is much shorter than the MLS length N-1, then the only term in the left SUM we have to count is the m = 0 term, resulting in

Ryx[n] = X^2 * ( N/(N-1) * h[n] - 1/(N-1) * H(0) )

where H(0) = SUM(j=-inf..inf){ h[j] } is the system's DC response.

This says that, except for a small DC offset proportional to the system's DC response, the result of the cross-correlation is the impulse response of the system scaled by the energy of the driving MLS function. From the impulse response, everything else about the LTI system can be computed (frequency response, etc.).

A note about usage: MLS drives the input with a periodic function of length N-1. This causes "time aliasing" in the resulting measurement, which is also periodic with the same period. To avoid problems, choose N to be at least twice the expected length of the impulse response (a bit of circular logic, since it requires you to know something about the impulse response before you measure it). The other necessary measurement practice is to drive your system with two concatenated cycles of the MLS: the first to excite the system into a steady state, and the second to make the actual impulse response measurement.
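Putting the procedure together, here is a Python sketch of a complete simulated measurement. The 3-tap impulse response and the K = 5 polynomial are made-up examples, and the estimator uses the known sum(h) only to undo the DC offset in this simulation:

```python
def mls(K, P, seed=1):
    taps, A, out = P >> 1, seed, []
    for _ in range((1 << K) - 1):
        lsb = A & 1
        out.append(1 - 2 * lsb)               # x[n] = (-1)^a[n], X = 1
        A = (A >> 1) ^ (taps if lsb else 0)
    return out

x = mls(5, 0b100101)                          # N - 1 = 31 samples per period
h = [0.5, 0.3, -0.2]                          # "unknown" impulse response
drive = x + x                                 # two periods: settle, then measure
y = [sum(h[j] * drive[i - j] for j in range(len(h)) if i - j >= 0)
     for i in range(len(drive))]
y2 = y[len(x):]                               # keep only the second period
N = len(x) + 1
Ryx = [sum(y2[i] * x[(i - n) % len(x)] for i in range(len(x))) / (N - 1)
       for n in range(len(x))]
# invert Ryx[n] = X^2 * (N/(N-1)*h[n] - H(0)/(N-1)), with H(0) = sum(h)
h_est = [(Ryx[n] + sum(h) / (N - 1)) * (N - 1) / N for n in range(len(h))]
print([round(v, 6) for v in h_est])           # recovers [0.5, 0.3, -0.2]
```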

MLS has truly weird behavior when there are small nonlinearities. Now remember, linearity and time-invariance are assumed when you try to measure a frequency response by any method, but some methods work better than others in the presence of nonlinearity.

The MLS that is applied to the system under test can be represented as:

x[n] = (-1)^a[n]

where a[n] = 0 or 1 and is the LSB (or any single bit) of the shift-register sequence (with an exclusive-OR in the feedback path) that the computer-science folks like to call a maximum length sequence. For clarity, I will call x[n] an "MLS" and a[n] an "MLS bit sequence". Both are periodic, with period N-1 = 2^K - 1, where K is the number of bits in the shift-register word.

Now, one thing that is shown in the tutorial is that if you take the MLS bit sequence, a[n], and exclusive-OR it against a circularly shifted version of itself, you get another copy of a[n], shifted by some other, seemingly random, amount. That is:

a[n] XOR a[n-j] = a[n-k]   if j != 0 and j is not a multiple of the MLS length, N-1

Restated slightly differently:

a[n-i] XOR a[n-i-j] = a[n-i-k]   (for any i)   if j != 0 and j is not a multiple of the MLS length, N-1

Although, strictly speaking, k depends on j, it's essentially a pseudo-random dependence. Given an i and j (j != 0), k could be any wild number between 1 and N-1. For now, just push that fact onto your stack (we'll pop it off later).

MLS measurement errors come about when there are nonlinearities (the tutorial explains how, for a perfectly linear, time-invariant system, MLS measures your system's impulse response). In a 1995 letter to the AES, some guy named Matthew Wright had the insight to explain why these spurious and *randomly* delayed spikes were getting added to the measured impulse response.

If the system is modeled as a Volterra series (which is a pretty general way to model a nonlinear, non-memoryless system) you will get some cross-product terms that look like:

y[n] = (linear terms like h[i]*x[n-i]) + (nonlinear terms like someCoef*x[n-i]*x[n-i-j])

Now, those nonlinear cross-products can be manipulated a little:

x[n-i]*x[n-i-j] = (-1)^a[n-i] * (-1)^a[n-i-j]
                = (-1)^(a[n-i] + a[n-i-j])
                = (-1)^(a[n-i] XOR a[n-i-j])
                = (-1)^a[n-i-k]
                = x[n-i-k]

That means the nonlinear cross-product term (someCoef*x[n-i]*x[n-i-j]) is gonna look just like the input sequence, but delayed by some wild value, i+k, and scaled by someCoef. So the measured impulse response will have a spurious impulse delayed by i+k and scaled by someCoef.

That is where you can get problems in the impulse response, and hence in the frequency response. Now, since this behavior is deterministic and repeatable (even though it looks like a random delay), you can repeat the measurement and average until the cows come home (or the cliche of your choice), and this problem will not go away. There are techniques for dealing with it (such as trying different MLSs based on different primitive polynomials and median filtering the results), but averaging the response using the same MLS won't help.
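A Python sketch of this failure mode, with a made-up cross-product term (coefficient 0.05 on x[n-1]*x[n-2]); subtracting the linear-only prediction from the measured cross-correlation isolates the spurious spike at its "random" delay:

```python
def mls(K, P, seed=1):
    taps, A, out = P >> 1, seed, []
    for _ in range((1 << K) - 1):
        lsb = A & 1
        out.append(1 - 2 * lsb)
        A = (A >> 1) ^ (taps if lsb else 0)
    return out

x = mls(5, 0b100101)                         # period N - 1 = 31
h = [0.5, 0.3, -0.2]                         # true linear impulse response
drive = x + x                                # two periods; measure the second
y = []
for i in range(len(drive)):
    lin = sum(h[j] * drive[i - j] for j in range(len(h)) if i - j >= 0)
    nl = 0.05 * drive[i - 1] * drive[i - 2] if i >= 2 else 0.0
    y.append(lin + nl)
y2 = y[len(x):]
N = len(x) + 1
Ryx = [sum(y2[i] * x[(i - n) % len(x)] for i in range(len(x))) / (N - 1)
       for n in range(len(x))]
# what a perfectly linear system would have produced in the same measurement
h_pad = h + [0.0] * (len(x) - len(h))
lin_pred = [N / (N - 1) * h_pad[n] - sum(h) / (N - 1) for n in range(len(x))]
diffs = [Ryx[n] - lin_pred[n] for n in range(len(x))]
spike = max(range(len(x)), key=lambda n: diffs[n])
print(round(diffs[spike], 6), "at delay", spike)   # a ~0.05 spurious spike
```

Rerunning this with the same MLS puts the spike at exactly the same delay every time, which is why averaging over repeated measurements can't remove it.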

- Douglas D. Rife and John Vanderkooy, "Transfer-Function Measurement with Maximum-Length Sequences", Journal of the Audio Engineering Society, vol. 37, no. 6, p. 419 (1989). This article presents a comprehensive analysis of transfer-function measurement based on maximum-length sequences (MLS). MLS methods employ efficient cross-correlations between input and output to recover the periodic impulse response (PIR) of the system being measured.
- John Vanderkooy, "Aspects of MLS Measuring Systems", Journal of the Audio Engineering Society, vol. 42, no. 4 (April 1994).
- Matthew Wright, "Comments on 'Aspects of MLS Measuring Systems'", Journal of the Audio Engineering Society, vol. 43, no. 1 (January/February 1995).

Multipath distortion is a common problem in many DSP-based data transmission systems. Here, Neil Robertson shows how to model multipath channels using complex-coefficient FIR filters.

| Attachment | Size |
|---|---|
| Multipath_Channel_Model_Using_DSP_Techniques.pdf | 53.78 KB |

In this tutorial, Richard G. Lyons, author of the best-selling DSP book *Understanding Digital Signal Processing*, discusses the estimation of time-domain sinewave peak amplitudes based on fast Fourier transform (FFT) data.

Such an operation sounds simple, but the scalloping loss characteristic of FFTs complicates the procedure. Here we present novel multiplier-free methods to accurately estimate sinewave amplitudes, based on FFT data, that greatly reduce scalloping loss problems.

The article also shows an example of what's called by the fancy name of "substructure sharing" to completely eliminate the need for multiplication operations in compensating for FFT scalloping error.

The zip file contains Matlab code (written by Rick Lyons) and C-code (written by Clay Turner) that demonstrate the algorithms in the article.

The attached files were provided by the author for publication on the Internet per his publishing agreement with the IEEE.

| Attachment | Size |
|---|---|
| Scalloping Loss Compensation-Lyons.pdf | 308.99 KB |
| Scalloping Loss Compensation-Lyons.zip | 55.63 KB |

Many DSP systems use composite signals consisting of a sum of sinusoids of the same frequency, often a sine and cosine. In this tutorial, Richard G. Lyons, author of the best-selling DSP book *Understanding Digital Signal Processing*, thoroughly covers this important DSP topic by explaining and deriving formulas for the sum of two sinusoids of the same frequency.