A proven procedure explains how to match I/O modules with PLCs, PCs, and networks to ensure a successful open-architecture control system — centralized and distributed.
I/O Program Manager
GE Fanuc Automation
During the early stages of control system design, most designers pay more attention to the controllers than to the input/output system. This shouldn’t be too surprising: for 30 years, the simplest solution was the rack I/O that came with a programmable logic controller or the remote I/O bundled with a distributed control system. However, modern control systems require more balanced planning to meet demands for lower cost, higher reliability, and greater uptime.
For example, a wider variety of I/O modules is available today, control architectures have expanded to include PC controls, which further multiplies I/O configurations, and designers can mix and match some I/O modules from different PLC, DCS, and PC-control vendors. This freedom of choice can complicate the selection process, but it ultimately leads to better control systems.
The best way to find ideal I/O components is to evaluate the system requirements for each application and then define overall control attributes such as response times, bus communications, diagnostics, redundancy, and scalability. Bus compatibility is especially important because the number of local area networks on a machine can get out of hand when incompatible devices are inadvertently selected. For instance, three different buses might have been used previously on a single machine for variable-frequency drives, bar-code readers, and I/O. And a machine designed to work anywhere in the world usually requires a retrofit to accommodate whichever fieldbus dominates in each region. A universal I/O eliminates the need for a separate I/O bus while accommodating varying global requirements — often DeviceNet in the United States and Profibus in Europe.
In addition, new I/O modules can ease the selection process by spanning control architectures. These devices work in both centralized and distributed systems, whether PLC or PC controlled. While such flexibility simplifies selection, evaluate these I/O modules by analyzing and prioritizing features such as autoconfiguration, addressing, and hot insertion.
The next step considers both the I/O system and the control structure before selecting components to meet unique application needs. Identifying the physical structure or layout of the application, such as the factory floor area to be covered, helps determine whether I/O should be local, remote, distributed, or a combination. The physical arrangement usually points to an optimal I/O strategy, either centralized or distributed.
Centralized applications that involve a physically small area and I/O local to the host processor may benefit most from rack-based PLC systems with a single processor or multiple processors acting in parallel. The advantage of centralized I/O is that it generally couples to the PLC processor through the backplane, which yields fast response at a low cost per point. Also, some I/O modules provide inputs that can interrupt the processor within 1 msec when needed to further reduce response time.
Designers of widely distributed systems have two options: run long wires between sensors and actuators and rack-mounted I/O in the control loop, or use a distributed I/O scheme. In a distributed network, I/O modules sit near their devices and connect over a serial bus. The advantage is that it substantially reduces wiring and installation costs as well as panel space. A distributed system is also more modular, allowing quick machine setup and easy expansion. It’s essential, though, to consider throughput time for a distributed system. Throughput involves input-signal conditioning, transmission time to the controller, and the controller response time back to the I/O device. Transmission delays can add as much as 80 to 100 msec, which can be significant for certain applications.
Some designers employ distributed I/O in a physically local arrangement with control and I/O in a single enclosure. This setup can provide modularity, diagnostics, and short-circuit protection. Other applications may use both rack-mounted and distributed I/O. In these cases, rack-mounted I/O supports numerous points requiring fast response and local devices where wiring is minimal, while distributed I/O connects some remote devices. Also, systems that extend over a large area may use rack I/O for local control-panel pushbuttons, indicators, and related devices while using a distributed system for the remote devices.
In addition to a physical description, define the control system according to the processor structure. In some systems, a single processor — whether PC or PLC — provides all control capability, and the I/O can be local, remote, or both. Other systems split control across several processors. Each acts autonomously, often with yet another processor coordinating the overall system. The complexity of the control task is not necessarily related to the number of processors. For example, rather than burden the main processor with interrupts for certain high-speed tasks, designers often assign those tasks to additional processors. In other cases, multiple processors perform specialized tasks such as motion control. The motion-control processors can solve complex motion algorithms at high speeds and are usually coordinated by another processor. Multiple processors typically provide higher system performance than a single, general-purpose processor. In a multiple-processor system, each often controls its own I/O to maximize system speed as well as handle a unique input such as a fast sensor or a specialized output such as a speed reference for a motor drive.
Though often overlooked, I/O plays a pivotal role in whether or not a control system meets performance requirements and design goals. The optimum I/O can improve system speed, increase reliability, and decrease costs. In particular, consider response time and the characteristics that influence it, such as bus communications, diagnostics, redundancy, scalability, and modularity.
Response time is generally the period between receiving a signal external to the control and the time the control produces an output action in the process. Control response time includes input signal filtering time, I/O scan time to provide the signal to the CPU, time for the processor to act on the input, time to send the signal to the output device, and time for the output device to generate the output signal. Many factors affect throughput. Delays in sensors and actuators also add to the actual response time even though they are not part of the control loop.
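As a rough illustration of how these delays stack up, the components above can simply be summed to budget a worst-case response time. All figures here are hypothetical assumptions for illustration, not measurements from any particular product:

```python
# Rough worst-case response-time budget for a control loop.
# All figures are illustrative assumptions, in milliseconds.
delays_ms = {
    "input_filter": 5.0,      # input signal filtering
    "input_scan": 2.0,        # I/O scan delivering the signal to the CPU
    "cpu_logic": 10.0,        # processor acts on the input
    "output_scan": 2.0,       # signal sent back to the output module
    "output_actuation": 8.0,  # output device generates the output signal
}

worst_case_ms = sum(delays_ms.values())
print(f"Worst-case control response: {worst_case_ms:.1f} ms")  # 27.0 ms
```

Sensor and actuator delays would add on top of this figure, since they sit outside the control loop proper.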
In PLC systems, local and distributed I/O share common delays, such as filter time and output actuation time. However, the time the CPU takes to scan the I/O is typically longer for distributed systems than for local I/O because operating over a serial bus is slower than running on a parallel backplane bus. The delay introduced by the serial bus must be paid twice — once for the input scan and once for the output scan. Among various distributed I/O systems, speeds can vary widely depending on baud rate, number of drops on the bus, the degree of checking or redundancy built into the bus, the data-packet size, and the length of the network. But don’t expect to optimize all bus parameters simultaneously; consider which functions are most critical to the application.
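A back-of-the-envelope estimate makes the serial-bus penalty concrete. The sketch below assumes, purely for illustration, that each drop exchanges one fixed-size frame per scan; the baud rate, frame size, and drop count are hypothetical:

```python
# Estimate the serial-bus scan delay for distributed I/O.
# Assumption (illustrative): each drop exchanges one fixed-size frame per scan.
def bus_scan_ms(baud: int, bits_per_frame: int, drops: int) -> float:
    """One-way time to move one frame per drop across the bus, in ms."""
    return drops * bits_per_frame / baud * 1000

one_way = bus_scan_ms(baud=125_000, bits_per_frame=94, drops=16)
# The serial delay is paid twice: once on the input scan, once on the output scan.
round_trip = 2 * one_way
print(f"one-way ~ {one_way:.2f} ms, round trip ~ {round_trip:.2f} ms")
```

Doubling the drop count or halving the baud rate doubles the delay, which is why these parameters trade off against network length and error checking.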
For example, when fast reaction time is needed, I/O with local processing remote from the central CPU offers the advantage of eliminating one CPU scan and two I/O bus scans from the loop. Local logic processing affords faster response because the program is usually small and replaces the CPU scan time. The central CPU still provides conditional signals to the remote processing function and receives status signals from the distributed logic to ensure overall process coordination. Distributing the processing often decreases the speed requirements on the CPU and the I/O bus. It also reduces system cost and complexity. Distributed processing simplifies the logic structure of the overall control scheme by dividing the task into smaller, more manageable segments. Processing power will move from a single CPU to several in parallel, whether the system is distributed or centralized.
Diagnostics in an I/O system should at least report the health of its internal bus and provide some indication of device failure. This capability lets an operator better isolate a system problem. New I/O modules go beyond this basic level of diagnostic reporting and permit automatic point-level reporting of faults as they occur on the factory floor. These devices monitor and report both internal and external fault conditions. For example, one I/O module can include diagnostics for module overtemperature, point-overload conditions, low line voltage, loss of communications to the CPU, alarm conditions, open wire between I/O and sensor, short-circuit load, and open-circuit load. Detailed fault information at the I/O level considerably reduces system downtime and improves fault detection over conventional I/O systems. Such diagnostics are a good example of how intelligence can be moved outward to free the central processor.
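Point-level diagnostics are often packed into a status word with one bit per fault condition. The sketch below is a hypothetical decoding of such a word, using the fault conditions listed above; the bit assignments are purely illustrative and do not reflect any vendor's register map:

```python
# Hypothetical point-level diagnostic word: one bit per fault condition.
# Bit positions are illustrative, not any real vendor's register layout.
FAULTS = {
    0: "module overtemperature",
    1: "point overload",
    2: "low line voltage",
    3: "loss of CPU communications",
    4: "open wire to sensor",
    5: "short-circuit load",
    6: "open-circuit load",
}

def decode_faults(status_word: int) -> list[str]:
    """Return the active fault descriptions encoded in a status word."""
    return [name for bit, name in FAULTS.items() if status_word & (1 << bit)]

print(decode_faults(0b0100010))  # bits 1 and 5 set
```

Reporting decoded fault names rather than a raw word is what lets an operator go straight to the failed point instead of troubleshooting the whole panel.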
Redundancy is critical in operations where a single failure would be intolerable. To avoid these failures, consider building redundancy into the system for the CPU and the I/O. When a particular output state is vital to the safety or operation of a process, include redundancy for the particular I/O. Regardless of whether the CPU is running correctly or not, the I/O device is sensing inputs and driving outputs, and failures in I/O circuits can be as critical as a CPU failure.
I/O redundancy demands a design philosophy: fail-safe or fail-process-continue. This requires determining how the system should behave in the event of a failure — for example, outputs remain on for fail-process-continue or turn off for fail-safe operation. Regardless of which mode is required, include a means to alert the operator when one redundant component fails, so it can be corrected before the second or backup component also fails. Other considerations include dual versus triple redundancy.
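The two failure philosophies, and the alert-on-first-failure requirement, can be sketched as simple selection logic. This is a minimal illustration under assumed conditions (two redundant channels with boolean health flags); all names are hypothetical:

```python
# Sketch of redundant-output selection with fail-safe / fail-process-continue
# behavior. Two channels with health flags are assumed; names are hypothetical.
def output_on_failure(mode: str, last_command: bool) -> bool:
    """Pick the output state when both redundant channels are lost."""
    if mode == "fail_safe":
        return False             # de-energize the output
    if mode == "fail_process_continue":
        return last_command      # keep the output in its commanded state
    raise ValueError(f"unknown mode: {mode}")

def select_output(primary_ok: bool, backup_ok: bool, command: bool,
                  mode: str = "fail_safe") -> tuple[bool, bool]:
    """Return (output_state, alarm). The alarm is raised on the FIRST
    channel loss so the failed unit can be repaired before the backup
    also fails."""
    if primary_ok:
        return command, not backup_ok
    if backup_ok:
        return command, True     # running on backup: alert the operator
    return output_on_failure(mode, command), True
```

The key design point is that the alarm fires while the process is still healthy, not only after the second failure takes the output down.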
Scalability and modularity deserve some thought during the initial design phase because system needs can change. Some I/O systems allow extending the design easily without significant changes. For example, expansion with GE Fanuc VersaMax I/O simply requires snapping more modules into place on a DIN rail, with a variety of wiring terminations such as spring clamps, connectors, and screw-down terminals. Other systems may demand special wiring and mounting when additional I/O is needed or when rack space is unavailable. Modularity further eases the process by letting devices be mixed and matched while maintaining seamless communication.
In addition to basic system architecture and requirements, also compare and prioritize the various features found in the newest I/O devices. For example, some components permit autoconfiguration and automatic addressing, which can cut setup time by as much as 50%. Users also appreciate hot insertion, which allows adding and removing modules without rewiring or shutting down the machine. A variety of mounting options let modules be physically configured to best suit a particular machine or an individual facility.
One I/O for three architectures
For greater uptime, a hot-insertion feature allows adding and replacing I/O modules while a machine is running, without affecting field wiring. Some modules also include a point-level electronic short-circuit fault indicator with auto reset. A field-power LED confirms that power is available for driving outputs, while a backplane-power LED indicates power is available for each I/O module.