Proportional-integral-derivative (PID) loops are often employed to minimize position error in motion control systems. Typically, they are implemented with floating-point math, which simplifies programming but requires a processor and math co-processor, thereby raising costs. It is possible, however, to implement PID control with fixed-point math by modifying an ideal PID loop in a way that eliminates velocity-profile calculations. Benefits include faster response, less expensive hardware, and easier programming.

Ideal PID loop

Ideal PID loops consist of three terms: P, I, and D. Together, they drive a process variable (such as actual position) to a desired value (set-point) without introducing system stress. System stress occurs, for example, when an out-of-tune PID loop subjects a machine tool to excessive shock or vibration while the controller moves the machine's table into position.

The ideal PID formula is represented by:

CV(t) = Kp·e(t) + Ki·∫e(t) dt + Kd·(de/dt)

Differentiating both sides and taking the difference approximates a discrete version:

ΔCV(n) = Kp·[e(n) - e(n-1)] + Ki·T·e(n) + (Kd/T)·[e(n) - 2e(n-1) + e(n-2)]

Where e = error

T = sampling period

CV = control variable or output driving the motor into position

The proportional term is a primary contributor to an ideal PID loop. Here, position error differences are multiplied by a gain factor, Kp, which alters the output proportionally. For a given Kp, a large difference in error between time samples generates a large corrective output; conversely, a small difference in error generates a small corrective output. While P helps generate an immediate corrective output for large error changes, it has little impact on small error changes.

As its name implies, the integral term integrates error and multiplies the result by its own gain, Ki. The integral works to eliminate total error over time, but becomes problematic when large errors persist for long periods. A large Ki can cause the position loop to oscillate, while a small Ki may take too long to pull the axis into position.

The derivative term calculates the rate of change in error and predicts an output that eliminates error in the loop:

D = (Kd/T)·[e(n) - 2e(n-1) + e(n-2)]

A PID loop's output is the sum of all three individual components:

CV(n) = CV(n-1) + P + I + D
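The discrete update described in this section can be sketched in C. The structure, function names, and gains below are illustrative, not from the article; floating-point is used here for readability, and the Math options section below addresses fixed-point alternatives.

```c
/* Velocity-form (incremental) PID: each sample adds the P, I, and D
   contributions to the running output. All names are illustrative. */
typedef struct {
    double kp, ki, kd;   /* loop gains */
    double T;            /* sampling period */
    double e1, e2;       /* e(n-1) and e(n-2) */
    double cv;           /* accumulated control variable */
} Pid;

double pid_update(Pid *s, double e)          /* e = set-point - position */
{
    double p = s->kp * (e - s->e1);                        /* P: error difference */
    double i = s->ki * s->T * e;                           /* I: error x period */
    double d = s->kd / s->T * (e - 2.0 * s->e1 + s->e2);   /* D: second difference */
    s->cv += p + i + d;                                    /* CV(n) = CV(n-1) + P + I + D */
    s->e2 = s->e1;
    s->e1 = e;
    return s->cv;
}
```

With a constant error and T = 1, the first iteration contributes Kp·e from P, Ki·e from I, and Kd·e from D; on later iterations with unchanged error, only the I term keeps adding.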

Modified PID loops

By its nature, single-axis position control is a series of set-point step changes. Applying an ideal PID equation is difficult because large set-point changes can cause the D term to generate large outputs. Consequently, D is unreliable in many situations.

For large set-point changes, a modified loop, called Type-2 PID, replaces error with the process variable in the derivative term, reducing unwanted output swings. With position control, for example, actual position replaces error in D. Type-2 PID control also minimizes response time by identifying, isolating, and eliminating sources of noise that can corrupt D.

Type-2 PID:

ΔCV(n) = Kp·[e(n) - e(n-1)] + Ki·T·e(n) - (Kd/T)·[p(n) - 2p(n-1) + p(n-2)]

Where p = position

P in the modified PID loop also depends on set-point changes, but to a lesser degree than the original D term.

Taking the modification a step further to a Type-3 PID loop, the process variable, or position, replaces error in the P term as well. The key difference between types: Type 2 replaces error with the process variable in D only, whereas Type 3 replaces error in both D and P. This modified loop is more immune than Type 1 or 2 to unwanted output stemming from expected set-point changes when the axis is commanded to change position.

Type-3 PID:

ΔCV(n) = -Kp·[p(n) - p(n-1)] + Ki·T·e(n) - (Kd/T)·[p(n) - 2p(n-1) + p(n-2)]

This implementation, however, becomes problematic when software limits are imposed. Specifically, entering and exiting limit conditions can cause erratic operation.
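As a sketch, the Type-3 variant looks like the following in C; names and gains are illustrative, and floating-point is again used for readability.

```c
/* Type-3 incremental PID: position p replaces error in both the P and
   D terms, so a set-point step reaches the output only through I.
   All names and gains are illustrative. */
typedef struct {
    double kp, ki, kd, T;
    double p1, p2;       /* p(n-1) and p(n-2) */
    double cv;
} Pid3;

double pid3_update(Pid3 *s, double sp, double p)
{
    double e = sp - p;                                     /* error feeds I only */
    s->cv += -s->kp * (p - s->p1)                          /* P: position difference */
           +  s->ki * s->T * e                             /* I: error x period */
           -  s->kd / s->T * (p - 2.0 * s->p1 + s->p2);    /* D: position 2nd difference */
    s->p2 = s->p1;
    s->p1 = p;
    return s->cv;
}
```

Stepping the set-point while the position holds still changes only the I contribution, which illustrates the immunity to set-point changes described above.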

Math options

Even with a modified PID, the required math greatly limits available processing bandwidth. But programming techniques on lower-cost controls can sidestep this issue. For example, the Type-3 PID produces fractional results, prompting many designers to use floating-point notation. This consumes significant processing power when closing a position loop and is not practical on a low-cost controller. In such cases, it is better to choose byte or integer fractional formatting.

With byte fractions, the last byte in a group represents the fraction, allowing resolutions of one part in 256. For higher precision, this can expand to integer representation, and the resolution increases to one part in 65,536. This is also known as fixed-point math, since the number of bit positions containing fractions is fixed.

The upper eight bits signify the whole part of a number (left of the binary point) and the lower eight bits signify the fraction, or mantissa (right of the binary point). Fixed numbers expand to 32 bits when two 16-bit numbers are multiplied: the lower 16 bits then hold the mantissa, and the upper 16 hold the whole part. When only the whole number is necessary, the lower 16 bits are dropped, losing accuracy. Keeping either the most significant eight bits of the fraction or the entire 16 bits of the mantissa preserves accuracy.

Fixed-point notation not only avoids floating-point calculations, it allows one to multiply by fractions instead of using division. Although division is not as processor intensive as floating-point math, it is more processor intensive than multiplication. As most processors can multiply 8 × 8 or 16 × 16 bit fields in one instruction, calculation times decrease.
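A minimal sketch of this scheme in C, assuming an 8.8 format and a 16 × 16 multiply (the type and function names here are illustrative):

```c
#include <stdint.h>

/* 8.8 fixed point: upper byte = whole part, lower byte = fraction,
   so one unit is 256 and 1.5 is stored as 0x0180. */
typedef int16_t fix8_8;

#define FIX(x) ((fix8_8)((x) * 256))     /* e.g. FIX(1.5) == 0x0180 */

/* A 16 x 16 multiply gives a 32-bit product carrying 16 fraction bits;
   shifting right by 8 keeps the most significant 8 fraction bits and
   returns an 8.8 result. */
fix8_8 fix_mul(fix8_8 a, fix8_8 b)
{
    int32_t product = (int32_t)a * (int32_t)b;   /* 16.16 intermediate */
    return (fix8_8)(product >> 8);               /* back to 8.8 */
}
```

Division by a constant then becomes multiplication by its fixed-point reciprocal: dividing by 3 is approximately fix_mul(v, FIX(1.0/3.0)), that is, multiplying by 85/256.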

Additionally, two's complement math works with fixed-point math for faster computing. Because signed numbers store the sign in the uppermost bit, testing that bit establishes whether a number is positive or negative. To divide a negative number by a power of two, shift right by the appropriate number of bits, then fill the vacated leading bits with 1s to maintain the sign. Shifting a variable right twice (dividing by 4) and moving the fixed point by one byte (dividing by 256) divides by 1,024, leaving a 16-bit fractional result.
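The shift-based division can be sketched as follows. Note the assumption, true of common two's-complement compilers, that >> on a signed value is an arithmetic shift, which fills the leading bits with copies of the sign bit automatically.

```c
#include <stdint.h>

/* Divide a signed 32-bit fixed-point value by 1,024 using shifts only:
   shift right twice (/4), then move the fixed point one byte (/256).
   On two's-complement targets, >> on a signed operand shifts in copies
   of the sign bit, the "replace leading bits with 1s" step described
   in the text. */
int16_t div1024(int32_t v)
{
    int32_t quarter = v >> 2;          /* divide by 4 */
    return (int16_t)(quarter >> 8);    /* one byte right: divide by 256 */
}
```

In a real fixed-point pipeline, the divide-by-256 step can cost nothing at all: simply reinterpret which byte holds the fraction instead of executing a shift.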


Putting it all together

After modifying an ideal PID loop and selecting a data scheme, the next step is coding the information. C is advisable because of its universal acceptance and transportability across hardware platforms. Data manipulation using C also lends itself to byte/integer math using unions. Unions hold data of different types in the same location for direct user access.
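A minimal sketch of such a union, assuming a little-endian target like the PIC18 family (the type and field names are illustrative):

```c
#include <stdint.h>

/* Overlay a 16.16 fixed-point value with its component parts, so the
   whole number and the fraction can be read directly, without shifts.
   Field order assumes a little-endian target. */
typedef union {
    int32_t full;            /* entire 16.16 value */
    struct {
        uint16_t frac;       /* low 16 bits: the mantissa */
        int16_t  whole;      /* high 16 bits: the whole part */
    } part;
} Fix16_16;
```

Setting full to 3.25 in 16.16 form (3.25 × 65,536 = 212,992) makes part.whole read 3 and part.frac read 16,384, that is, 0.25 × 65,536.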

For our programming example, we use a PC104-based microcontroller to calculate the PID loop and generate velocity commands. The output drives a VH-5C rotary table from Fadal Machining Centers.

The control board, in this case, is based on a PIC18F8722 eight-bit microcontroller from Microchip, running at 40 MHz. The PID loop samples at 1,024 Hz (a 0.9765-msec period), and each iteration requires 75 µsec. Accounting for interrupt service routines and reading/writing data across the PC104 bus, the loop consumes about 7% of the controller's bandwidth, leaving enough time to complete other tasks.

To ensure that motor velocity is consistent with the output command from the modified PID loop, a drive with encoder feedback is employed for closed loop velocity operation. In this case, a 10-A dc drive controls the motor and tracks 16 bits of position data. The controller polls the encoder and tracks rollovers, extrapolating 32 bits. Rollovers happen when a counter reaches the maximum count and starts over, such as when a car odometer's five digits reach 99,999 and roll over to zero.
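One way to sketch the rollover extrapolation in C (names are illustrative; the approach assumes true motion between polls stays under half the counter range):

```c
#include <stdint.h>

/* Extend a 16-bit hardware position counter to 32 bits by tracking
   rollovers. The signed 16-bit difference between successive polls is
   correct across a rollover provided motion between polls is less than
   half the counter range (32,768 counts). */
typedef struct {
    uint16_t last_raw;   /* previous counter reading */
    int32_t  position;   /* extrapolated 32-bit position */
} EncTrack;

int32_t enc_update(EncTrack *t, uint16_t raw)
{
    int16_t delta = (int16_t)(uint16_t)(raw - t->last_raw);  /* wraps mod 2^16 */
    t->last_raw = raw;
    t->position += delta;
    return t->position;
}
```

When the counter rolls from 65,530 to 5, the wrapped difference is +11 counts, so the 32-bit position advances correctly rather than jumping backward.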

MECHANICAL SPECIFICATIONS:

Gear ratio: 90:1
DC brush motor: 90 V, 5.7 A
Max speed: 2,000 rpm
Encoder: 1,000-line quadrature output

One move command, from 0 to 90° (a quarter turn) of the rotary table, produced 22.5 motor rev. Another generated a large move profile of eight rotary-table revolutions, or 720 motor rev. In both final command positions, accuracy is ±30 arc-sec, demonstrating that the PID loop behaves similarly whether set-point changes are small or large.

These solutions can apply to a wide range of similar devices and are intended to be hardware independent. For example, analog position feedback can replace encoder feedback. Sample source code involved in this application is available at www.calmotion.com.