DSPs are purposely built to make short work of complex calculations.
Digital Signal Processors "Think" Analog But Work Digitally
You'll find them in your CD player, radio, and cell phone. Motor controllers depend on them for efficient operation. They adjust the air/fuel ratio in your car and trigger air bags in a collision.
These are just a few of the applications of digital signal processors (DSPs), microprocessors optimized for high-speed number crunching. What sets DSPs apart from other microprocessors is the type of input signal they handle. Typical microprocessors require digitized data, which the processor operates on in chunks. Modern DSP systems, in contrast, typically take in analog signals and generate analog signals as output. But don't be fooled; everything between input and output is a completely digital process.
The block diagram of a basic DSP is quite simple. An analog signal is applied to an analog-to-digital converter (ADC). The ADC samples, or quantizes, the signal into discrete numeric values that represent the signal. This digitized signal is then applied to the DSP core, where it is acted upon according to the programming algorithm stored in the memory of the DSP. A typical programming algorithm might be a digital filter to remove all frequencies above 3.5 kHz. The modified discrete values are sent to a digital-to-analog converter (DAC), changing the quantized data back to an analog form. The entire algorithm must complete its calculations of multiply/accumulate, add, subtract, or bit-shift within the time between samples of the ADC.
Why go through all this trouble? Wouldn't a simple discrete filter serve the same purpose? The answer to that is, "Yes, and no." Everything a DSP does has a component-level equivalent. But the use of digital techniques produces a process many times more efficient and effective. An electronic engineer would find designing a 50-pole low-pass filter to fit onto a 1/2-in.² PC board impractical, if not impossible. Yet a DSP performs that function with ease. The key is the speed at which the DSP operates.
DSP designs are geared for fast number crunching. A typical DSP system has four major internal buses. Separate address and data buses exist for both program instructions and data. Typically two address generators fetch data while another sequencer controls program execution. The arithmetic unit handles both the arithmetic-logic unit (ALU) and multiply/accumulator (MAC) in addition to a bit shifter.
Major operations of DSPs involve multiplying and adding numbers. MACs specialize in providing the means to do this. Multiplying two 16-bit values gives a 32-bit answer. That result is usually added to other 32-bit results, which might create an overflow condition if the holding register is only 32 bits long. To prevent overflows, many 16-bit DSPs contain larger MAC registers, some as large as 40 bits.
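The effect of those extra "guard" bits can be sketched in C, with a 64-bit accumulator standing in for the wide MAC register (the function name and widths here are illustrative, not taken from any particular DSP):

```c
#include <stdint.h>

/* Sketch of a 16 x 16 multiply/accumulate with guard bits; int64_t
   stands in for the DSP's extra-wide (e.g. 40-bit) MAC register. */
int64_t mac(int64_t acc, int16_t x, int16_t coeff) {
    int32_t product = (int32_t)x * (int32_t)coeff;  /* 16x16 -> 32-bit product */
    return acc + product;  /* the wide accumulator absorbs the overflow */
}
```

Two back-to-back full-scale products already exceed what a 32-bit register can hold, but they accumulate without incident in the wider register.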
Special registers within the DSP hold beginning and ending addresses for buffer areas in DSP memory. Because the address does not have to be computed each time, sequential data can be fetched from these memory buffers faster. Circular addressing automatically wraps the buffer pointer to the beginning of the buffer after the last address is accessed. This happens without stealing time away from the processor's main calculation function.
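The wrap itself can be sketched in C as follows (the buffer length and function name are illustrative; on real hardware the wrap happens in the address-generation unit, not in software):

```c
#include <stddef.h>

#define BUF_LEN 8  /* illustrative buffer length */

/* Advance a buffer index, wrapping to the start after the last
   element. On a DSP, dedicated address-generation hardware performs
   this wrap with no extra cycles. */
size_t advance(size_t idx) {
    return (idx + 1) % BUF_LEN;
}
```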
Like hardware, DSP software is also geared for speed. Simple commands carry out complex processing functions. For example, once all the buffer registers are loaded, a single command fetches both signal data and multiplication factors and multiplies the two together. It then adds the result to the previous calculation and stores the total in the MAC. Meanwhile, the data address generators automatically increment to the next position.
Most DSPs incorporate a repeat function that affects other operations such as multiply/accumulate, block moves, I/O transfers, and table read/writes. When this repeat function is used ahead of these other commands, the commands become pipelined, executing over and over for the number of times specified in the repeat command register. During this time, the DSP does not respond to any outside interruptions until the repeat command finishes. Many repeated commands now take only one clock cycle per execution. A single table-read instruction, as an example, might take three or more clock cycles to execute. But if tied to a repeat command, a new table position is read every clock cycle.
The use of pipelined architecture is the key. Pipelining breaks the calculation process into individual hardware steps. For example, the addition of two numbers might take three steps. The first step merely fetches both values, while the second step adds the two numbers together. The final step places the sum of the addition in memory. If each step takes one clock cycle, then three clock cycles are required to complete each sample's processing. But, in pipeline mode, the next sample value is fetched while the first sample goes through addition. Then, on the third clock cycle, as the first sample's sum is stored, the second sample undergoes addition, while a third sample is retrieved from memory. Pipelined architecture thus provides a processed sample virtually every clock cycle.
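A back-of-the-envelope cycle count shows the payoff of that three-stage example; the two helper functions below are purely illustrative:

```c
/* Sketch: cycle counts for a 3-stage pipeline (fetch, add, store),
   assuming one clock per stage. Without pipelining, each sample
   occupies all three stages before the next can start; with it,
   the pipeline fills once and then finishes one sample per clock. */
int cycles_sequential(int samples) {
    return samples * 3;  /* 3 clocks per sample, one after another */
}

int cycles_pipelined(int samples) {
    return samples + 2;  /* 2 clocks to fill, then 1 per sample */
}
```

For 100 samples, that is 300 clocks sequentially versus 102 pipelined, which is where the "one processed sample virtually every clock cycle" figure comes from.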
All of this speed allows the DSP to operate on signal data in real-time mode, delayed only by the processing time of the DSP itself. The DSP emulates any analog circuit using that circuit's mathematical model. One such circuit is a filter that removes unwanted elements, such as whistles, clicks, scratches, or noise, from a signal.
Designs today demand ever more complex filters. Increasing complexity also raises component sensitivity to temperature, manufacturing tolerances, and component aging over time. There is a practical limit to how complex an analog filter may become. In the digital world, values are stored as highly accurate delay elements and multipliers. Digital values won't change or drift over time or with temperature variations as do their analog counterparts. A 50-pole filter design is quite possible and readily done in the digital realm where, obviously, it would be totally impractical using analog components.
Two of the most basic filter types are the finite-impulse response (FIR) and the infinite-impulse response (IIR) filter. Finite-impulse response filters use only input signals to determine output. That is, the output is a product of the input signal and the filter-transfer function. Once the input signal stops, the action of the filter stops as well, giving it the finite tag.
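A minimal FIR filter can be sketched in C. The tap count and coefficients here are illustrative (a simple 4-tap moving average), not from any particular design:

```c
#include <stddef.h>

#define NTAPS 4  /* illustrative tap count */

/* 4-tap moving-average coefficients, chosen only for illustration */
static const float h[NTAPS] = {0.25f, 0.25f, 0.25f, 0.25f};

/* x[0] is the newest sample, x[NTAPS-1] the oldest. The output is
   a weighted sum of inputs only -- no feedback -- so once the input
   stops, the output dies out within NTAPS samples. */
float fir(const float x[NTAPS]) {
    float acc = 0.0f;
    for (size_t k = 0; k < NTAPS; k++)
        acc += h[k] * x[k];  /* one multiply/accumulate per tap */
    return acc;
}
```

Each output sample costs NTAPS multiply/accumulate operations, which is exactly the workload the MAC hardware and repeat instructions described earlier are built to absorb.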
When an IIR filter is used, not only is the input signal applied to the filter, but some of the output signal is fed back as well. This tightens filter bandwidth, but opens the door to filter instability and nonlinearity.
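A one-pole IIR stage can be sketched the same way (the coefficient and function name are illustrative):

```c
/* One-pole IIR low-pass: part of the output is fed back into the
   next calculation, so the response to a single impulse never fully
   dies out -- hence "infinite" impulse response. The coefficient a
   (0 < a < 1) is illustrative; values closer to 1 narrow the
   bandwidth but also slow the response and, in fixed-point designs,
   push the filter toward the instability the text warns about. */
float iir_step(float y_prev, float x, float a) {
    return a * y_prev + (1.0f - a) * x;  /* feedback term + input term */
}
```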
A short history of DSPs
While the DSP is a fairly recent device, its operating principles date back to the 16th century. It was then that researchers started applying mathematics to real-world situations and began developing tools that help simulate real-world events as mathematical models. Performing the calculations, though, took more than a lifetime. John Napier passed away before completing the calculations for his book on logarithms. His friend and colleague, Henry Briggs, completed the work and published the book in London.
Faster math was just around the corner. Newton's calculus, Simpson's rule, and the Fourier and Laplace transforms all revolutionized the science of mathematically modeling real-world dynamics. While all of these techniques provided more efficient calculating methods, many calculations were still quite onerous, taking up to several weeks of intense number crunching. It took the computer to make these faster methods shine. Now the DSP effectively integrates these centuries-old methods into real-time applications, performing in a matter of microseconds calculations that once required days.