A "feel your way around" strategy can let robot arms learn their surroundings.
|When Autocalibration calculations are complete, the robot wand can extend into the wafer cassette and precisely engage a wafer.|
Peter M. Zakit
Berkeley Process Control Inc.
Robots on production lines must be taught where to move. That is true in spades for robots that move silicon wafers in semiconductor manufacturing. Wafer-handling robots are carefully taught each location from which they retrieve wafers and where to deliver them. The teaching process is not a one-time affair. Robots must be taught after a tool is manufactured (for testing), after the tool is installed in the semiconductor fab, as part of maintenance, and again after many repairs.
The process of teaching a robot is time consuming and takes a lot of skill and judgment. And of course the stakes can be large. An incorrectly taught point can later result in a damaged or broken wafer. To calibrate wafer-handoff positions, the technician doing the teaching must be able to direct the appropriate robot motion and critically determine teach points. This can be tricky. The teacher must manually jog the robot on the proper path to the teach points -- typically with several hardware or software interlocks defeated. It is a situation ripe for human error and tool-damaging collisions.
To determine appropriate teach positions, the teacher must eyeball the location of the wafer handoff point to within ±0.25 mm. This usually takes place in a semiconductor fab clean room, in a bunny suit, in the bowels of the semiconductor tool being calibrated. Small wonder that taught points commonly vary depending on the teacher's point of view, idiosyncrasies of the wafer transfer devices, and subtle differences in optimal handoff points for a given tool. All in all, there can be a substantial variation in points programmed from one session to another and even more deviation when someone new handles the teaching. The result: compromised reliability, with wafers frequently worth tens of thousands of dollars placed in jeopardy.
|Autocalibration can work with proximity sensors as well as with physical touch. For example, a robot end-effector might interrupt the optical path of a through-beam sensor as an alternative to contacting a feature on nearby equipment.|
It can easily be a 6-hr job to manually teach a robot how to precisely place wafers. It is now possible, though, to let robots calibrate themselves through software. This approach can reduce the teaching process to about 20 min and eliminate the need for teaching skills.
The key to fast teaching is to have the robot automatically calibrate itself so it knows the geometry of its surroundings precisely. In a semiconductor tool equipped with Berkeley Process Control's Autocalibration technology, a technician presses a single button to execute a preprogrammed calibration route. That routine automatically finds critical wafer-handler physical reference features utilizing various application-specific sensing methods, including touch calibration. The control system thereby learns all of the wafer handoff positions. There's no judgment or skill involved.
Autocalibration technology is made possible by a tight integration of robot and closed-loop machine controller. It is realized in a shared-state, multitasking and multiaxis motion-and-machine controller. In contrast, it would be more difficult to realize in typical robot-control architectures that employ multiple disparate controllers communicating over serial links. The delays associated with this sort of architecture can complicate the coordination necessary between axes when gauging nearby geometry.
Plotting a course
In the touch method of Autocalibration technology, the robot is programmed to intentionally drive a part of a robot arm until it gently touches a known feature of the station or cassette. The controller must then quickly determine just when the arm has touched the station.
The principle here is that some amount of motor torque is required to move the arm through free space. When the robot arm hits the obstruction, the motor driving the arm slows down. Thus the first indication that the arm has hit something is that the servomotor begins to slow.
|To determine dynamic background motor torque, the controller averages torque samples taken while the arm moves through the air at a constant velocity. During calibration moves, it then calculates a moving average of torque and compares it to a preset level to decide whether it has touched an object.|
But at the first decline in velocity, the controller cannot tell whether the deceleration is a result of touching something or is a normal variation in the friction of bearings, belts, and so on. (Any system will experience variations in motor torque from nonuniform friction in bearings, belts, screws, and so forth.) So the controller's closed-loop software will respond by slightly increasing the motor current and, thus, motor torque.
Upon subsequent calculations of the servo velocity loop, the controller will have additional information about whether the robot arm has touched an obstruction. If the motor velocity again begins to increase, then the controller can deduce that it was friction and not an obstruction that caused the need for more motor torque. If, however, the servomotor continues to slow even with additional applied torque, then the controller deduces that the robot arm has found an obstruction. The controller notes the motor position while simultaneously reducing servomotor torque to ensure that contact with the tool is gentle.
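The confirm-over-consecutive-cycles decision described above can be sketched in a few lines. This is a simplified illustration, not the controller's actual servo code: it works on a simulated trace of per-cycle velocity samples, and it assumes the closed-loop torque increase happens elsewhere in the servo loop. The function name and parameters are illustrative.

```python
def detect_touch(velocities, target_velocity, confirm_cycles=3):
    """Scan per-servo-cycle velocity samples and report the sample index
    at which a sustained slowdown (an obstruction) is confirmed.

    A single slow cycle could be friction, so detection requires the
    axis to stay below target velocity for confirm_cycles consecutive
    cycles even as the loop adds torque. Returns None if no touch.
    """
    slow = 0
    for i, v in enumerate(velocities):
        if v < target_velocity:
            slow += 1                      # still slowing despite added torque
            if slow >= confirm_cycles:
                return i                   # obstruction confirmed
        else:
            slow = 0                       # speed recovered: it was friction
    return None

# A brief friction blip recovers, so no touch is declared:
print(detect_touch([1.0, 0.9, 1.0, 1.0], 0.95))   # None
# A sustained slowdown is confirmed on the third slow cycle:
print(detect_touch([1.0, 0.9, 0.85, 0.8], 0.95))  # 3
```

In the real controller this logic would run inside the velocity loop, with the position captured and torque cut as soon as the touch is confirmed.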
It's possible to reduce the force generated during the intentional collision of the robot wand and the station by factoring in the background torque of the arm. Specifically, one measures the background torque for each robot axis to be touch-calibrated. The method is to first move the robot to a safe area where it can make short movements without touching anything. Then one-by-one, each motor is told to make a constant-velocity move (usually the same velocity used in the calibration step). When the axis has reached the constant velocity (that is, it has finished accelerating), the machine controller samples the average motor torque. This average is made up of numerous instantaneous motor torques, each such torque being the output of the closed-loop control.
Once this sampling process is complete, the background torque value is determined by taking the simple average of these samples. The sampling frequency and the number of samples taken depend on the specific design of the machine. But a common sample size might be 100 measurements, so each individual sample contributes only about 0.01 of the average, smoothing out instantaneous variations.
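The background-torque step above amounts to a simple average of the instantaneous closed-loop outputs. A minimal sketch, with the sample values simulated rather than read from a real servo:

```python
from statistics import mean

def background_torque(samples):
    """Average the instantaneous closed-loop torque outputs sampled
    while the axis slews at constant velocity through free space.
    With 100 samples, each contributes 1/100 of the result, so
    friction noise largely averages out."""
    return mean(samples)

# Simulated torque samples with friction-induced variation:
print(background_torque([1.0, 1.2, 0.8, 1.0]))  # 1.0
```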
The system stores the background torques it has calculated for each of the axes to be used during touch calibration. Then as the arm moves toward the feature to be touched, the controller calculates a moving average of the torques it sees for each axis, and compares them to the stored background torques. The system can thus decide that it has touched something when this moving average exceeds some torque limit.
The torque limit equals the dynamic background torque plus a threshold limit. The threshold limit is a value chosen to be larger than the torque variations seen while moving at the touch-sensing slew velocity. During the routine to quantify the dynamic background torque, the controller gauges the statistical variation of the torque samples and sets the threshold value at some multiple of that variation.
Once the system senses contact, it captures the current axis position and then moves away from the touch point.
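The moving-average comparison can be sketched as follows. This is an illustrative simulation over a recorded list of torque readings, with the window size and values invented for the example; a real controller would evaluate the same test one sample at a time inside the servo loop.

```python
from collections import deque

def find_contact(torques, background, threshold, window=10):
    """Slide a moving average over per-cycle torque readings and declare
    contact when it exceeds the stored background torque plus the
    threshold margin. Returns the sample index at detection, or None."""
    recent = deque(maxlen=window)
    limit = background + threshold
    for i, t in enumerate(torques):
        recent.append(t)
        # Only judge once the averaging window is full.
        if len(recent) == window and sum(recent) / window > limit:
            return i
    return None

# Free-space slew at ~1.0 torque units, then contact drives torque to 2.0.
# With background 1.0 and threshold 0.5, the 4-sample average crosses
# the 1.5 limit on sample index 8:
print(find_contact([1.0] * 6 + [2.0] * 6, 1.0, 0.5, window=4))  # 8
```

At the returned index, the controller would capture the axis position and back the arm away from the touch point.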
Autocalibration in Action
A view of a generic wafer-handling station illustrates the key hardware components involved in the patented Autocalibration technique. (Berkeley has also trademarked the term.) The robot is programmed to extend its arm and touch various preselected structural features or locations on the process station such as the cassette stand. Coordinates of these features relative to the robot come from CAD drawings or manual measurements and are stored in controller memory. Precise data relating feature locations to the robot body come from physically touching these spots with the wand. The order in which the coordinates of the cassette features are found is important because successive axis calibrations use data collected from previous calibrations. For example, the robot is programmed to first determine the Z height, generally by touching the top plate of the cassette station. Then in sequence it finds the rough radial (R) position, the theta angle, the final radial position, and finally station yaw.
Touch me here
During a typical calibration procedure, the robot has a rough idea of where features are located even before it touches them. That's because tool developers prime the controller with the positions of these items from CAD drawings or manual measurements.
Designers choose the features to be touched such that the motion to locate each one is isolated to a single robot axis. This ensures each coordinate is determined independently of the others.
To accurately find a feature with touch calibration, the procedure must account for the fact that most machines have semirigid drivetrains. The result is a certain amount of flexibility in each axis. The way to cancel out this flexibility is by touching a feature from two opposite directions. In other words, make a positive velocity or directional move to determine a feature position, then a negative velocity or directional move to determine the same feature. In cases where it's not possible to touch a feature from both directions, it may be possible to touch a secondary feature with a known spatial relationship to the first.
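Under the assumption that drivetrain flex shifts each captured position by roughly equal and opposite amounts depending on the direction of approach, averaging the two captures cancels the error. A one-line sketch, with positions in millimeters invented for the example:

```python
def feature_center(touch_positive, touch_negative):
    """Cancel drivetrain compliance by averaging the positions captured
    when touching the same feature from opposite directions; the flex
    error enters each capture with opposite sign and drops out."""
    return 0.5 * (touch_positive + touch_negative)

# Approaching from + captures 10.2 mm, from - captures 9.8 mm;
# the compliance-corrected feature position is 10.0 mm:
print(feature_center(10.2, 9.8))  # 10.0
```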