Prior to a few years ago, motion control products used ordinary microprocessors to calculate and execute motion algorithms. With the development of a “screamingly fast” new microprocessor — the digital signal processor (DSP) — today’s systems can execute motion algorithms up to ten times faster, often at less cost. But buyer beware: not all DSP implementations deliver on their marketing promises.

A digital signal processor is a specific type of microprocessor chip. Like a general-purpose microprocessor, it has an arithmetic and logic unit, address generation and management units, and sequencers to control the flow of data and instructions efficiently. Its computing architecture, though, differs from that of personal computer microprocessors.

The DSP follows the Harvard architecture, which gives the chip two distinct memories, one for data and one for instructions, and two buses: one fetches data operands while the other fetches instructions. This ability to do two things at once is part of what gives a DSP its speed. Personal computing microprocessors typically follow the Von Neumann architecture, using the same memory for both instructions and data.

The latest crop of microprocessors, such as the Pentium family or the PowerPC chips, may provide competition to DSPs in the future. Both types of chips are moving toward a design with one large external memory and two internal memory channels, in effect becoming more Harvard in architecture. But there are other reasons that these general-purpose microprocessors will not be as effective as DSPs in motion control.

A cycle in time

All microprocessors execute instructions in a measure of time known as a cycle. An instruction is an action, such as fetch a data bit from memory, store a bit, add two numbers, or multiply two numbers. Each action executes in a specific number of cycles. In Von Neumann microprocessors, an instruction can take hundreds of cycles to execute. In a digital signal processor, most instructions take one cycle. Multiply instructions take the most time to execute in microprocessors; depending on how multiplication is implemented in the motion algorithms, a DSP can execute a multiply in one or two cycles, even with floating-point numbers.
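A back-of-the-envelope cycle count shows why multiply speed dominates. The sketch below is illustrative only: the per-instruction costs are assumed round numbers, not vendor data, and the operation counts for a basic PID update are approximate.

```python
# Illustrative cycle-count model, not vendor data: a DSP multiply is
# assumed to take 1 cycle, an older general-purpose multiply many more.

def control_law_cycles(mults: int, adds: int, mult_cost: int,
                       add_cost: int = 1) -> int:
    """Total cycles for one control-law update with the given op counts."""
    return mults * mult_cost + adds * add_cost

# A basic PID update uses roughly 3 multiplies and 3 adds per axis.
dsp_cycles = control_law_cycles(3, 3, mult_cost=1)    # -> 6 cycles
cpu_cycles = control_law_cycles(3, 3, mult_cost=70)   # -> 213 cycles
```

Under these assumed costs, the multiply-heavy update runs more than thirty times faster on the single-cycle-multiply chip, which is the gap the article describes.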

“Floating point computation is a must for high performance motion control. Trying to do this with a 16-bit integer microprocessor will degrade performance,” says Kasturi Rangan, engineering manager, Anorad Corp.
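Mr. Rangan's point about 16-bit integer math can be sketched numerically. The example below is an illustration, not from any vendor: it stores a small integral gain in hypothetical Q15 fixed-point format (15 fractional bits) and shows the term vanishing entirely, while floating point keeps it.

```python
# Illustrative sketch: why 16-bit integer math can degrade a control law.
# Q15 format and the gain/error values are assumptions for illustration.

Q15 = 1 << 15  # scale factor for the Q15 fixed-point format

def to_q15(x: float) -> int:
    """Quantize a value in [-1, 1) to 16-bit Q15 fixed point."""
    return int(round(x * Q15))

def q15_mul(a: int, b: int) -> int:
    """Fixed-point multiply: product of two Q15 values, rescaled to Q15."""
    return (a * b) >> 15

# Hypothetical integral gain and accumulated error.
ki = 0.00004        # close to the Q15 resolution of ~0.0000305
error_sum = 0.8

float_term = ki * error_sum                                 # ~0.000032
fixed_term = q15_mul(to_q15(ki), to_q15(error_sum)) / Q15   # -> 0.0
```

In 16-bit fixed point the small integral correction rounds away to zero, so the loop slowly accumulates error that a floating-point implementation would correct.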

“Today’s microprocessors cannot approach the needed speed,” adds Tom Bucella, Teknic Inc. “You can do high speed fuzzy and non-linear augmentations to improve performance well beyond a standard PID motion control algorithm. You couldn’t dream of doing this on a microprocessor.”

Unlike microprocessors, DSPs can directly process sensor inputs. Besides handling complex algorithms, DSPs operate in the same microsecond time frame as sensors.

If fast execution of mathematical instructions is that important, why not use an application-specific integrated circuit (ASIC) or a math coprocessor? DSPs have capabilities similar to both, but they offer advantages beyond speed. An ASIC is typically hardwired logic in a circuit; you cannot go into an ASIC and change its program. Instead, you have to design a new chip, an expensive process that pays off only when chips are produced in large quantities. By contrast, a DSP carries a software program that users can access and alter, and it can take on more complicated functions.

A math coprocessor does nothing but math. It is fed data and instructions by another microprocessor, which also retrieves the results. A DSP performs all of those functions without the aid of an additional microprocessor. Adds Mr. Bucella, “You can take a first generation DSP (of 10 years ago) and write libraries for it that will beat a 386/387 math coprocessor.”

Says Jon Brabender, Project Engineer, Motion Control Div., Allen-Bradley Co., “DSPs are ideal for applications that require quick, repetitive operation, tight programming code, and minimal external interface. We use DSPs as a way to off load motor commutation, velocity, and position loops from microprocessors. The DSP can perform these actions at greater than 1 kHz update rate.”

“DSPs are handling indexing servo control, also known as control law algorithms, and velocity/current loop control at the amplifier,” says Mr. Rangan. “Here the available speed of a DSP permits all of the computational load to be handled by the one processor. A multiprocessor system would need memory and I/O overhead due to data transfer from one processor to the next.”

Buyer be aware

DSPs arrived five or six years ago for speech and video processing applications. However, they were expensive and difficult to program. Improvements and cost reductions have since made DSPs readily available. Today, they are used extensively in cellular phones, modems, multimedia video cards, and data radios. “The big market is consumer devices,” adds Mr. Bucella. “Motion control is something of an afterthought.”

The major suppliers are Texas Instruments with almost 70% of the market, Analog Devices, and Motorola.

Because of the DSP chip’s potential, there is much marketing jargon. Some vendors say their products are DSP-based, or DSP-like. Others claim that with the DSP chip you can eliminate dither and drift. Some claim to use multiple DSP chips in their products. Others tout the benefits of DSPs combined with microprocessors. Vendors are leaving it up to customers to determine what is a benefit and what is market positioning.

Many man-years of software investment have gone into Von Neumann microprocessor-based products. While a DSP chip offers impressive math execution speeds, the true benefit of DSP comes when motion control algorithms are rewritten to take advantage of the chip’s hardware design. Some drive manufacturers, however, have simply placed their old motion control algorithms onto DSP chips without alteration. Depending on the algorithms, this may improve motion control speed by 10 to 20%. Then again, you may not see any improvement at all.

Some drive manufacturers have worked on their motion control algorithms but are not using DSP chips. They have cleaned out extraneous functions, tightened formulas, and reduced phase and time delays. In their marketing literature, they may claim that their products offer DSP-like performance. Often, their efforts do yield a decrease in execution time similar to that seen with DSPs. However, DSP-like is not DSP.

Using multiple DSPs and combining microprocessors with DSPs are other ways to focus attention on the new. But competing manufacturers debate the benefits of such additions. For some, the microprocessor is necessary to handle multiple, complex tasks that the DSP chip is too specialized to execute. Also, it is easier to program a microprocessor, because better programming tools and more memory are available. DSPs are usually programmed in assembly language.

If a motion system is not doing other functions, a microprocessor combined with a DSP may be more technology than is needed. A DSP chip can be programmed to handle one to eight axes of motion control. More digital signal processing chips will add cost, and may not give you any better accuracy or control.

Combining a DSP and a microprocessor may offer improvements. On the other hand, it may mean that the microprocessor is still doing the bulk of the algorithm execution because the manufacturer has a large investment in its software and doesn’t want to spend the money to optimize the algorithm.

Some drive manufacturers are taking a loose approach with the words digital signal processing. They may combine special chips that execute PID functions with a microprocessor, and then claim they have several DSPs in their product. It’s true that they are doing digital processing, but they are not using the fully programmable, super-high-speed device known as a DSP chip.

A few manufacturers are claiming that with a DSP, they can eliminate dither, also known as hunting. Encoders send out pulses to the control to indicate position. If the system is tuned to get the most bandwidth and performance, it will tend to click back and forth, “hunting” between pulses. Motion control algorithms don’t calculate between pulses, whether you use a DSP or a microprocessor. “If you don’t get a tick from the encoder, you don’t get a tick,” says Mr. Bucella. “After you get a tick, then you can see how long it was since the last tick, and make inferences about the velocity.”
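The inference Mr. Bucella describes can be sketched in a few lines. This is a minimal illustration, not any vendor's algorithm; the encoder resolution and function names are assumptions.

```python
# Minimal sketch of tick-based velocity inference: between encoder ticks
# the controller has no new position data, so velocity is estimated from
# the time elapsed between ticks. Resolution below is a hypothetical value.

COUNTS_PER_REV = 2048  # assumed encoder resolution (counts per revolution)

def velocity_from_ticks(t_prev: float, t_now: float) -> float:
    """Estimate velocity in rev/s from the interval between two ticks.

    One tick = one encoder count. The result is an inference about the
    average velocity over the interval, not a measurement between ticks.
    """
    dt = t_now - t_prev
    if dt <= 0.0:
        raise ValueError("tick timestamps must be increasing")
    return (1.0 / COUNTS_PER_REV) / dt

# Ticks arriving 100 microseconds apart imply roughly 4.88 rev/s.
v = velocity_from_ticks(0.0, 100e-6)
```

The coarser the encoder or the slower the motion, the longer the controller waits between ticks and the staler this velocity estimate becomes, which is why no processor, DSP or otherwise, can calculate between pulses.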

A few vendors are using fuzzy logic in their DSP chips to improve response and reduce hunting. According to one vendor, such an implementation demanded an enormous amount of computation from the algorithm, but it was possible because of the speed of the digital signal processor.

Evaluating DSPs

“You’re not necessarily getting what’s advertised, and what’s advertised is not necessarily what matters,” says one engineer.

Comparing competing specification sheets will be confusing. There are no benchmarks and no standards for judging which DSP motion control product is best for your application. Asking about sample rate, for example, will get you many different answers. The sample rate in a specification may really be the rate for one axis out of the eight a system handles. And you may only get that sample rate if none of the other functions is running. Otherwise, a listed 50 msec update time may stretch to 500 msec or longer under typical operating conditions.
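The spec-sheet pitfall above reduces to simple arithmetic. The sketch below uses illustrative numbers only; the function and its parameters are assumptions, not any vendor's formula.

```python
# Back-of-the-envelope model of a per-axis update time: a quoted sample
# time may apply to one axis, with all axes sharing the processor in
# round-robin fashion and other functions stretching each pass.

def effective_update_time(quoted_time_s: float, n_axes: int,
                          other_load_factor: float = 1.0) -> float:
    """Per-axis update time when n_axes share the loop and other
    functions stretch each pass by other_load_factor (1.0 = none)."""
    return quoted_time_s * n_axes * other_load_factor

# A quoted "50 msec" figure, eight axes, no other load: 0.4 s per axis.
t_idle = effective_update_time(0.050, 8)
# The same spec with other functions stretching each pass by 25%.
t_loaded = effective_update_time(0.050, 8, 1.25)
```

Even before any extra load, the quoted figure has grown eightfold per axis, which is how a listed update time can balloon by an order of magnitude in typical operation.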

An important specification is phase delay, but not all manufacturers mention it. Phase delay is the time from receipt of an input from the encoder until that input is processed. If a motion control system reads all the encoders on all the axes, processes all the inputs together, and then sends out all the outputs, the system has the maximum amount of phase delay. This may be very different from the number listed in a specification.

To some extent, phase delay is governed by the algorithm used to read the encoders. One efficient method is to read one tick of the encoder, process it, send it out, and then read the next axis. Another is to take advantage of a DSP’s parallel processing abilities and read some encoders while simultaneously processing others, then send out data while processing the remaining inputs.
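The difference between batching all axes and servicing them one at a time can be sketched with a simple timing model. The per-step costs below are arbitrary illustrative constants, not measured figures from any product.

```python
# Illustrative timing model contrasting two encoder-reading strategies.
# Step costs (seconds) are assumptions chosen only for the comparison.

READ, PROCESS, OUTPUT = 10e-6, 30e-6, 5e-6  # hypothetical step times

def batch_phase_delay(n_axes: int) -> float:
    """Worst case when all encoders are read, then all inputs processed,
    then all outputs sent: the first input waits for every other axis."""
    return n_axes * (READ + PROCESS + OUTPUT)

def per_axis_phase_delay(n_axes: int) -> float:
    """Read one axis, process it, send it out, then move on: each input
    is serviced within one axis's worth of work, regardless of n_axes."""
    return READ + PROCESS + OUTPUT
```

Under these assumed costs, a four-axis batch scheme holds its first input four times longer than the per-axis scheme, and the gap widens with every added axis.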

Keep in mind that using a DSP reduces phase delay simply because of the chip’s processing speed; the algorithm running on the chip, however, may still be 20 years old.

The only way to really determine the differences between DSP products is to benchmark or sample products with the system you plan to use. See how fast you can go from point to point, how long it takes to stop and settle, or how closely you can trace a circle with a minimum amount of error.

Information for this article was provided by Allen-Bradley Co., Anorad Corp., Performance Motion Devices Inc., and Teknic Inc.

See Associated Figure 1

See Associated Figure 2