More features and seamless communications make the latest generation of controllers, I/O devices, and networks the best ever.
CPU Product Manager
GE Fanuc Automation
Designing and implementing factory-automation systems is getting easier. That’s because the three major components that make them work — PLCs, PCs, and communication networks — have more computing punch than ever before. Systems are increasingly open, so combinations of PLCs and PCs can often communicate on the same network. And, they’re smaller, simpler to install, and easier to program and debug.
A good first step when designing automation systems is to choose a controller type. Issues such as memory size, I/O count, throughput speed, and network compatibility must all be addressed.
THE VENERABLE PLC
Consider PLCs first. Although they were developed some 30 years ago, PLCs remain the most widely used controller type. A PLC gathers data from external devices, such as position sensors, then stores the information in an onboard memory input table. The controller collects the data simultaneously so it gets an accurate assessment of the system at all times.
As the PLC program executes, the controller determines the state of each output device and stores this information in an output table. The data is held in this table until the final program instruction executes, then it transfers to each output device through the output modules. The PLC then rechecks the input data and begins another cycle.
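The read-all, execute, write-all cycle described above can be sketched in a few lines. This is a minimal illustration, not real PLC firmware; the start/stop latch and device names are assumptions chosen for the example.

```python
# Sketch of one PLC scan cycle: snapshot all inputs, run the user
# program against that snapshot, then write all outputs together.

def scan_cycle(sensors, program, actuators):
    input_table = {name: read() for name, read in sensors.items()}  # 1. read inputs at once
    output_table = program(input_table)                             # 2. execute user logic
    for name, value in output_table.items():                        # 3. write outputs together
        actuators[name](value)

# Example logic: a start/stop latch controlling a motor (hypothetical)
def ladder_logic(inputs):
    run = (inputs["start"] or inputs["motor_was_on"]) and not inputs["stop"]
    return {"motor": run}

state = {"motor": False}
sensors = {
    "start": lambda: True,                    # operator pressed start
    "stop": lambda: False,
    "motor_was_on": lambda: state["motor"],   # feedback from last scan
}
actuators = {"motor": lambda v: state.update(motor=v)}

scan_cycle(sensors, ladder_logic, actuators)
print(state["motor"])  # True: the motor latches on
```

Because every input is sampled into `input_table` before the logic runs, the program always sees one consistent snapshot of the system.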
This cyclical scanning is characteristic of most PLC systems though some interrupt or event-driven operations can alter it. Depending on I/O module and device type, diagnostics held in separate fault tables may also be sent along with I/O data. Certain codes in this fault table can affect system operation either through custom application software instructions or by the controller manufacturer’s predetermined response to common faults.
Over the last 10 years, PLCs have become less proprietary and easier to use. Many modern PLCs have open architectures that support common network standards including DeviceNet, Profibus-DP, and Genius buses. PLCs such as these are programmed with PC-based, GUI-driven software. And some units automatically recognize and configure added equipment, much like a plug-and-play USB port on a PC. Moreover, a so-called hot-insertion capability permits modules to be removed or replaced even while a process runs.
However, additional components aren’t always separate from the PLC itself. For example, an operator control station can combine a software programmable logic controller, operator interface, and networking hardware all in one package. By tightly integrating hardware and software, such systems can help OEMs lower system cost and speed time to market. Benefits to end users include ease of use, enhanced local and remote monitoring, and compatibility with plantwide networks. Improvements such as these have helped PLCs maintain their lead even as new PC-based controllers become more viable.
PC-BASED CONTROLLERS GAINING GROUND
Unlike a PLC, the input conditions of a PC are not scanned simultaneously but rather are collected through interrupt-driven instructions. For example, depressing the “A” key on a PC keyboard generates an interrupt which instructs the processor to collect this input.
In like fashion, PC programs execute on a line-by-line or instruction-by-instruction basis. As each command is performed, the operands (or I/O) associated with that command are tested and controlled. Capturing a simultaneous snapshot of the system is difficult or impossible because input data may change as each instruction executes.
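The contrast with the PLC scan can be sketched as follows. Here each input arrives through an interrupt-style handler and is processed one event at a time, so there is never a single consistent snapshot of all inputs; the event names are illustrative.

```python
# Sketch of interrupt-driven input handling on a PC: events are
# serviced as they arrive rather than scanned as a batch.
from collections import deque

pressed = []

def keyboard_isr(key):
    # "Interrupt service routine": runs once per key event
    pressed.append(key)

event_queue = deque(["A", "B"])           # simulated hardware interrupts

while event_queue:
    keyboard_isr(event_queue.popleft())   # inputs handled one at a time

print(pressed)  # ['A', 'B']
```

Between any two calls to the handler, other inputs may already have changed, which is why system-wide snapshots are hard to capture this way.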
Despite these limitations and reliability concerns, PC-based controls offer some advantages over PLCs. For example, graphical Windows-based software makes programming PC-based controls simpler than PLCs. And unlike PLCs, PC-based controls offer virtually unlimited memory capacity. Moreover, PC-based controls tend to be less expensive than PLCs, especially those built around low-cost desktop PCs. A downside to such systems is that desktop units aren’t designed for harsh industrial environments. In these cases, a more costly, industrial-grade PC may be more appropriate.
The next step in control-system design involves choosing the communication structure. This structure determines how each controller communicates with other elements on the network. Some structures are hierarchical, while others communicate as equals or peers. Hierarchical structures, such as master/slave and multimaster, grant increased authority to an element or group of elements.
In an operation called polling, masters sequentially control the actions of, or request information from, slaved devices. Slaved devices are passive, that is, they can only respond to commands and can’t initiate communication.
Some master/slave protocols only let a master transmit messages specific to a single slave. Each message has one producer (master) and one consumer (slave). Such protocols are acceptable where network traffic levels are not a concern. However, it is often preferred to have several consumers of a single message. The ability of a master to broadcast information to multiple slaves simultaneously eliminates repeat messages and improves network throughput. Protocols that allow such broadcast-style messages are termed multicast. Examples of broadcast messages include system-wide errors and multiple-device input data requests.
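Polling and multicast can be sketched together. This is a toy model, not any specific fieldbus protocol; the request codes and device names are assumptions for illustration.

```python
# Sketch of master/slave polling plus a multicast (broadcast) command.

class Slave:
    """A passive device: responds to the master, never initiates."""
    def __init__(self, name, value):
        self.name, self.value = name, value

    def respond(self, request):
        if request == "READ":
            return (self.name, self.value)
        if request == "ZERO":          # a hypothetical system-wide command
            self.value = 0

class Master:
    def __init__(self, slaves):
        self.slaves = slaves

    def poll(self):
        # Sequentially request data from each slave in turn
        return [s.respond("READ") for s in self.slaves]

    def multicast(self, request):
        # One broadcast message consumed by every slave at once,
        # instead of repeating the message slave by slave
        for s in self.slaves:
            s.respond(request)

net = Master([Slave("temp", 72), Slave("level", 3)])
print(net.poll())          # [('temp', 72), ('level', 3)]
net.multicast("ZERO")      # e.g. clear all devices with a single message
print(net.poll())          # [('temp', 0), ('level', 0)]
```

The single `multicast` call replaces what would otherwise be one message per slave, which is the throughput benefit described above.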
Master/slave structures are typically the simplest to implement and ideally suited for centralized control (single controller manages the network) with distributed I/O. Distributed I/O implies I/O devices are located at a machine or process. Lengthy conveyor systems, for example, commonly use a master/slave communication structure. Master/slave structures aren’t intended for distributed-control applications because a single controller must manage the network media at all times.
A variation of the master/slave scheme, called multimaster, allows more than one master device to access a network. Here, system control logic is divided among several controllers (distributed control). The masters use the network both to control individual slave devices and for resource sharing. Each master is assigned specific slave devices to control. Although masters typically receive information from all slaves, slaves are keyed to a specific master. Such segmented systems are generally easier to program than a single larger system. And, controllers with simpler logic require less processing power which can lower system cost.
In some multimaster schemes, each of the masters may be granted temporary media access through what is termed token-passing arbitration. Each master can poll individual slaves once it has control of the network media. When one polling cycle completes, the token passes to the next master. This structure most often finds use in systems containing two or more processes that are distinctly separate yet must periodically share resources. Because a master may only communicate with its own slave devices, each process must be independent. For example, status and system-wide parameters may be shared between masters, but I/O devices can’t. Multimaster networks make it possible to have multiple I/O devices with different update rates or control requirements.
In contrast to master-type systems, peer-to-peer protocols don’t grant controlling status to a single device but instead share media through arbitration. This sharing is often accomplished via token passing much like master-type systems. However, peer-to-peer networks have no master or slave assignments. All peer elements are typically of similar complexity and equally share control of media access and timing. Moreover, communication between elements is not restricted and peers may share I/O devices. Peer-to-peer communication is often necessary in distributed control applications which aren’t separable into distinct functions or cells. For example, continuous flow processes such as those found in pulp-and-paper mills, metal processing, and petrochemical industries are candidates for peer-to-peer control.
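The key difference from the multimaster case can be sketched briefly: every peer may access every shared I/O point while it holds the token. The peer names and I/O points are hypothetical.

```python
# Sketch of peer-to-peer sharing: no master/slave assignments, and
# any peer may read any shared I/O point while holding the token.

shared_io = {"flow": 12.5, "pressure": 88}   # I/O visible to all peers

class Peer:
    def __init__(self, name):
        self.name = name

    def on_token(self):
        # Unlike a multimaster slave assignment, access here is
        # unrestricted: every peer sees every point.
        return dict(shared_io)

peers = [Peer("P1"), Peer("P2"), Peer("P3")]
readings = [p.on_token() for p in peers]      # token circulates equally
print(all(r == shared_io for r in readings))  # True
```

Because every peer gets the same view and an equal turn on the media, no single controller has to stay healthy for the network to keep running.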