By Joseph E. Campbell
Adept Technology Inc.
San Jose, Calif.

EDITED BY LELAND TESCHLER

A typical task for industrial robots today is precision insertion. One example of an application in this class is the insertion of electrical connectors on printed-circuit boards, handled here by an Adept 550 robot and AdeptVision VME controller. An arm-mounted camera lets the system home in on where each connector should go on the board.

Modern robot controllers like this unit from Adept Technology can integrate motion control, machine vision, force sensing, and manufacturing logic in a single control platform. Amplifiers and controller boards can reside in a single compact unit having a VME form factor. Control tasks may involve numerous mechanisms working in conjunction with high-speed guidance vision, several cameras, conveyor tracking, and force sensing.

Adept Cobra robots are billed as the fastest four-axis Scara robots available. They frequently handle assembly and food-packaging tasks.

Precision positioning applications rely on industrial vision systems that guide robotic movements to within a few thousandths of an inch.

Kinematically, this is what a six-axis robot arm looks like. A robot controller calculates the position of an end effector in world coordinates from the product of 4 X 4 transformation matrices, one for each link in the arm. Inputs for high-level sensors such as a vision system or a multidegree-of-freedom force sensor are transformed into world coordinates using a similar approach.

An interesting situation emerged recently when a manufacturer tried to put a vision system on an assembly line. The idea was to locate parts on a moving conveyor with a vision system, then position a robotic arm to pick them up one at a time. Engineers there diligently worked out numerous displacement fudge factors to relate the locations of the conveyor, end effector, and parts imaged by the camera. The fudge factors let the motion controller infer the physical location of a part from the vision system data, then direct the arm to the right place to pick it up.

Problem was, the relative position of the various components all changed every time the conveyor went back online after servicing or maintenance. The factors so carefully computed became useless. This necessitated regular rounds of recalculating new displacements.

At the root of these difficulties were some fundamental misunderstandings about how general-purpose motion controllers differ from more specialized robot controllers. Hardware-wise, the two can look similar. Both frequently employ Pentium-based processors or adopt a hybrid approach with a general CPU supervising one or more digital signal processors dedicated to servoloops.

However, the software architecture of a robot controller differs dramatically from that of an ordinary motion controller. First consider motion-controller software: It generally consists of a routine for closed-loop position or velocity control, operator-interface functions, and routines for supervisory tasks.

An important point concerns the supervisory level of control. Tasks there that relate to handling motion do not extend much past simply issuing position commands and scheduling the routines that control the individual axes. In other words, the supervisory level is relatively simple.

The supervisory level of a robot controller is more sophisticated. For one thing, it is written with the idea that most robotic systems incorporate feedback from high-level sensors that reside outside the position-encoder-feedback servoloops of individual axes. Typical examples include industrial vision systems and force sensors.

Most robotic work involves using information from these sensors to calculate the trajectory of a robot arm. To handle this calculation process, supervisory level software implements a trajectory planning algorithm. This algorithm relates the physical location of positioning elements, sensor feedback, and the objects being positioned in terms of what's called a world-coordinate system. This is in contrast to general-purpose motion equipment which tends to use a separate reference frame for each axis of motion.

One benefit of a world-coordinate system is that it can eliminate the need for fudge factors relating sensor data to the position of various components. The state of the art is such that straightforward setup routines can compute such information automatically. Moreover, data gathered during setup goes into transformation calculations that determine world coordinates and which are more precise than any manually deduced fudge factors.

REFERENCE KINEMATICS
It is useful to briefly review the way a robot controller implements world coordinates. Readers will probably recall from engineering mechanics that the position of an arbitrary point expressed in one coordinate system can be mapped into another through use of a 4 X 4 transformation. In the case of a Scara robot arm, the position of a point at the end of the arm can be expressed in terms of the product of 4 X 4 matrices, one matrix for each link in the robot arm. Matrix coefficients for the arm itself are determined by link length, link geometry, and joint angle. Obviously link geometry is known. Joint-angle coefficients come from feedback provided by joint servo encoders.
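As an illustrative sketch of this matrix product, consider a two-link planar Scara arm. The link lengths and joint angles below are invented for illustration, not taken from any particular Adept arm; the point is only that the tool-tip position in world coordinates falls out of multiplying one 4 X 4 homogeneous transform per link:

```python
import numpy as np

def link_transform(theta, length):
    """4 x 4 homogeneous transform for one planar revolute link:
    rotate by the joint angle theta about z, then translate out
    along the link."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c, -s, 0.0, length * c],
        [s,  c, 0.0, length * s],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

# Hypothetical link lengths (m) and encoder-derived joint angles
l1, l2 = 0.325, 0.225
theta1, theta2 = np.radians(30.0), np.radians(45.0)

# World position of the tool tip = product of the per-link transforms
T = link_transform(theta1, l1) @ link_transform(theta2, l2)
tip_world = T[:3, 3]   # x, y, z of the end effector in world coordinates
```

A real six-axis arm simply chains six such matrices, with the coefficients of each refreshed from joint-encoder feedback every trajectory cycle.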

In an analogous manner, the coordinates inferred from the image of an industrial vision system can be expressed in world coordinates via another set of 4 X 4 transformations. The coefficients for the transformation matrices come from information determined during equipment setup.

Take, as an example, the case of parts lying on a conveyor. The robot arm will locate three points on the conveyor as part of the setup process. These points, of course, define the conveyor plane. The robot controller uses this information to deduce the transformation coefficients that express conveyor position in world coordinates. A point to note is that even if the conveyor sits at an angle, this fact will be reflected in the transformation coefficients calculated automatically during setup. There is no need for computing additional displacements or other compensating offsets.
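A minimal sketch of how three taught points can yield a conveyor-to-world transform follows. The point coordinates are invented for illustration, and the frame construction (travel direction, surface normal via a cross product) is one common convention, not necessarily the one a given controller uses internally:

```python
import numpy as np

def conveyor_frame(p0, p1, p2):
    """Build a 4 x 4 conveyor-to-world transform from three taught
    points: p0 is the conveyor origin, p1 sets the travel (x)
    direction, and p2 is any third non-collinear point on the belt."""
    x = (p1 - p0) / np.linalg.norm(p1 - p0)
    n = np.cross(x, p2 - p0)          # normal to the conveyor plane
    z = n / np.linalg.norm(n)
    y = np.cross(z, x)                # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p0
    return T

# Illustrative taught points on a belt tilted out of the world x-y plane
p0 = np.array([0.50, 0.10, 0.05])
p1 = np.array([1.00, 0.10, 0.10])     # downstream along belt travel
p2 = np.array([0.50, 0.40, 0.05])
T = conveyor_frame(p0, p1, p2)

# A part seen 0.2 m downstream and 0.05 m across the belt,
# expressed in conveyor coordinates, mapped into world coordinates:
part_world = T @ np.array([0.20, 0.05, 0.0, 1.0])
```

Because the belt tilt is baked into the rotation part of T, re-teaching the three points after maintenance automatically absorbs any shift in conveyor position.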

Modern robot controllers use programming languages that also work in world coordinates. Tool commands, vision commands, and conveyor definitions all get expressed this way. Put another way, the world-coordinate system and the transformations that make it possible are embedded in the controller programming language. One additional manifestation of this approach is that when programming moves, operators of such systems need not concern themselves with timing relationships at the operating-system level.

This is also one reason why robot controllers can implement a high-level calibration methodology. Once repositioned, a robot and its ancillary systems can find their bearings through use of a few software setup utilities that recalculate transformation coefficients.

This process contrasts with that necessary for more general-purpose motion controllers. Though these systems also tend to employ special-purpose automation software, positioning commands tend to assume coordinate systems that center on each axis of motion. It is certainly possible to define a world-coordinate system for these controllers. But control vendors generally leave this task to OEMs handling applications where it specifically comes in handy.

The reason is that robotic positioning is a special case of motion control. World coordinates offer limited utility in simpler but more typical positioning applications, which range from converting machinery to the card readers in ATMs.

All in all, fitting general-purpose controllers to robotic applications puts on OEMs the burden of developing coordinate-transformation relationships that are already available in robot controllers. Alternatively, they can simply try to make do with a series of less-robust physical offsets and displacements.

TRAJECTORY PLANNING
The servoloop software that positions an axis on a robot is fairly conventional. Each axis has its own servoloop. Axis motion is driven by an error term derived from the difference between a position command and position feedback. There may be feedforward constants to adjust the position error under certain conditions. And as with general-purpose motion controllers, robotic servoloops execute on the order of once every millisecond.
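A stripped-down, single-axis version of such a servo update might look like the following sketch. The gain values and the velocity-feedforward term are illustrative assumptions, not vendor numbers:

```python
def servo_step(cmd_pos, cmd_vel, fb_pos, fb_vel,
               kp=80.0, kv=6.0, kff=1.0):
    """One ~1 ms servo update for a single axis.

    The error term (command minus feedback) drives the axis; a
    velocity-feedforward term, scaled by kff, reduces tracking lag
    during coordinated moves. Returns the torque/current command
    sent to the amplifier. Gains here are illustrative only.
    """
    pos_error = cmd_pos - fb_pos
    vel_error = cmd_vel - fb_vel
    return kp * pos_error + kv * vel_error + kff * cmd_vel
```

In a real controller this routine runs in an interrupt or DSP context once per millisecond for every axis, entirely below the trajectory planner.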

The software that feeds position commands to each servoloop is called the trajectory planner. It is the trajectory planner that computes a model of where a tool tip must go from where it currently resides. To do so, it must take information out of world-coordinate form and translate it into joint angles (for a Scara robot) or into displacements for more general-purpose automation equipment. The trajectory planner repeats this process about every 16 msec.
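For a Scara arm, the heart of that world-to-joint translation is planar inverse kinematics. The sketch below (with the same assumed link lengths as earlier, and the elbow solution chosen arbitrarily) shows how a world-coordinate target becomes the joint angles the servoloops then track:

```python
import math

def scara_ik(x, y, l1=0.325, l2=0.225):
    """Convert a world-coordinate target (x, y) into the two Scara
    joint angles, using the law of cosines for the elbow joint.
    Link lengths are illustrative assumptions."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target outside the work envelope")
    theta2 = math.acos(c2)              # one of two elbow solutions
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

The trajectory planner would call something like this every 16-msec cycle, interpolating intermediate targets along the planned path before handing joint angles down to the 1-msec servoloops.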

Trajectory and servo cycle times enter into not only system-bandwidth concerns, but also safety considerations. The robotic industry has issued strict safety standards that dictate minimum levels of performance in emergency situations. Perhaps most obvious of these is the emergency stop. Robotic controllers employ an emergency-stop algorithm that bypasses the trajectory planner and its 16-msec cycle time and executes controlled-stop routines in firmware. This powers down each axis to a controlled stop in a few milliseconds.

This fast-but-controlled emergency stop can be contrasted with the technique used by many general-purpose motion controllers. The simplest approach is to just drive a large momentary surge of negative (halt) power to the amplifier. This certainly stops the positioning equipment. But in the case of a robot, it could easily snap off a wrist mechanism if there is enough inertia.

Finally, robot controllers employ other safety measures that are commonly found in NC equipment but which are rare in more general-purpose positioners. For example, loss of encoder feedback will generate an emergency stop. Ditto for reaching end-of-travel limits. These limits may be set either by hardware limit switches, or by declaring positions of the robot work envelope off limits.