The devil is in the details when it comes to giving Ethernet a deterministic response.
National Instruments Corp.
Engineers familiar with industrial controls no doubt have noticed the influx of schemes for making Ethernet real time. These range from protocols such as EtherNet/IP, which builds real-time response into software, to Sercos III, an Ethernet protocol for distributed motion control that adds hardware to handle real-time tasks. Additionally, the emergence of the IEEE 1588 precision time protocol (PTP) has made Ethernet feasible for synchronized distributed applications. IEEE 1588 provides a standard way to synchronize devices on a network with submicrosecond precision.
The approach common to all these protocols is that they put strict timing rules in place to make the network response deterministic. These "rules" can be implemented in different ways to get the desired outcome; however they usually share two elements. First, all nodes on the network synchronize their time to a master clock. One node serves as the system master and generates this master clock signal. The IEEE 1588 protocol, for example, uses this methodology to establish the clock synchronization for a deterministic Ethernet network.
The second element consists of a schedule that determines when each node on the network can transmit data during each network cycle. System engineers create this schedule by analyzing the amount of data they expect to transfer over the network. The scheduling lets engineers reserve enough time for data to reach its destination.
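As a rough illustration of such a schedule, here is a minimal Python sketch that assigns each node a non-overlapping transmit window based on its expected payload. The node names, payload sizes, 100-Mbit/s link rate, and guard gap are all assumptions for illustration, not details of any particular protocol.

```python
# Hypothetical sketch: build a per-cycle transmit schedule from
# the amount of data each node is expected to send.
LINK_BITS_PER_US = 100   # assumed 100 Mbit/s link = 100 bits per microsecond
GUARD_US = 5             # assumed guard gap between transmit windows

def build_schedule(payload_bytes_by_node, start_offset_us=50):
    """Assign each node a non-overlapping transmit window (offsets in us)."""
    schedule = {}
    offset = start_offset_us   # leave room for a cycle-start packet
    for node, nbytes in payload_bytes_by_node.items():
        duration = (nbytes * 8) / LINK_BITS_PER_US   # time on the wire
        schedule[node] = (offset, offset + duration)
        offset += duration + GUARD_US
    return schedule

sched = build_schedule({"A": 125, "B": 250})
# Node A: 125 bytes -> 1000 bits -> 10 us, so its window is (50, 60);
# node B starts after A's window plus the guard gap.
```

Real configuration tools perform this analysis from the application design; the point is simply that reserving enough time per node is a deterministic calculation done up front, not a negotiation at run time.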
Such measures are needed because distributed systems imply latencies. When nodes are physically separate, engineers must account for the time data take to transfer between them. The challenge is that this latency can be indeterminate. Because of indeterminate latency, nodes on the network must be prepared to wait both for data they expect to receive and for an opportunity to send data to another node. The waiting times can vary from one network cycle to the next.
For example, the wait in a typical Ethernet network can vary greatly because it depends on a host of factors such as the protocol used, available bandwidth, aggregate network traffic, and the number of nodes. In many applications, engineers can work around indeterminate latencies with techniques such as buffering.
Consider, for example, an application in which a server streams video across a network connection. In an ordinary Ethernet network, video frames won't all arrive at the same rate. The client application might buffer several seconds' worth of video to make up for the inconsistent data transfer.
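A minimal sketch of such client-side buffering follows. The class name, the prefill threshold, and the frame labels are assumptions for illustration; real players buffer by time, not frame count.

```python
from collections import deque

class JitterBuffer:
    """Illustrative playout buffer for streamed video frames.

    Frames arriving at an uneven rate are queued; playback begins only
    once `prefill` frames are held, so short network stalls don't starve it.
    """
    def __init__(self, prefill=2):
        self.prefill = prefill
        self.frames = deque()
        self.playing = False

    def on_frame(self, frame):
        self.frames.append(frame)
        if len(self.frames) >= self.prefill:
            self.playing = True

    def next_frame(self):
        if self.playing and self.frames:
            return self.frames.popleft()
        return None   # still prefilling, or the buffer ran dry

buf = JitterBuffer(prefill=2)
buf.on_frame("f1")   # one frame: still prefilling, playback held
buf.on_frame("f2")   # threshold reached: playback may start
```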
Such buffering methods suit Web-based multimedia applications but are inadequate for advanced distributed control. Take the case of an aircraft simulator. A flight-control simulator might communicate a new angle parameter for the rudder every few milliseconds. Actuators simulating the effects of rudder movement must receive the latest angle parameter at the same rate; otherwise the flight control system won't respond the way a real aircraft would. Data buffering in this case does not alleviate the issue because late data are useless in a dynamic system such as an aircraft.
Aircraft simulation demonstrates the main challenge associated with distributed-control systems, that is, how to minimize latencies. Industrial control systems have similar challenges when I/O and processing are distributed among nodes in a network.
Latency can vary from one data frame to the next in an Ethernet network. This behavior arises from the design of the data-transmission mechanism in the IEEE 802.3 Ethernet standard, known as carrier sense multiple access with collision detection (CSMA/CD).
The CSMA/CD access mechanism and flow-control methods introduce a timing uncertainty that makes Ethernet networks nondeterministic. Specifically, a network device wishing to send information must listen until no other device is transmitting. Only when it hears that the lines are clear can it begin a transmission.
However, if another device begins simultaneously transmitting, there may be a data collision on the network. Per the standard, both devices should detect the collision, back off, and wait a random period of time before attempting to retransmit. This random wait period results in latency that is unpredictable.
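The random wait is specified by 802.3's truncated binary exponential backoff, sketched below. The 51.2-μsec slot time corresponds to 10-Mbit/s Ethernet; the function shape is a simplification of what the hardware actually does.

```python
import random

SLOT_TIME_US = 51.2   # IEEE 802.3 slot time at 10 Mbit/s, in microseconds

def backoff_delay(attempt, rng=random):
    """Truncated binary exponential backoff (simplified sketch).

    After the nth collision, wait k slot times, with k drawn uniformly
    from 0 .. 2**min(n, 10) - 1; the frame is dropped after 16 attempts.
    """
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = rng.randrange(2 ** min(attempt, 10))
    return k * SLOT_TIME_US

delay = backoff_delay(3)   # anywhere from 0 to 7 slot times
```

Because k is random and grows with each collision, the delay before a successful transmission is unbounded in practice, which is exactly the unpredictability a deterministic network must design out.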
High-level Ethernet protocols, such as the commonly used Transmission Control Protocol (TCP), introduce additional handshaking to ensure that data sent across Ethernet have been received. Senders of TCP data wait until the receiver sends a positive acknowledgement (ACK). If the sender hasn't received an ACK within a time-out period, it retransmits the data. TCP and other flow-control methods introduce additional timing uncertainty to the network.
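The acknowledge-and-retransmit pattern can be sketched as a simple stop-and-wait loop. This is a deliberate simplification, not TCP's actual sliding-window implementation; the callback names and retry limit are assumptions.

```python
def send_with_retransmit(send, wait_for_ack, timeout_s, max_tries=5):
    """Stop-and-wait sketch of positive-acknowledgement flow control.

    `send` transmits the segment; `wait_for_ack` blocks for up to
    `timeout_s` seconds and returns True if an ACK arrived. Every retry
    adds an unpredictable delay, which is why flow control of this kind
    undermines determinism.
    """
    for attempt in range(1, max_tries + 1):
        send()
        if wait_for_ack(timeout_s):
            return attempt        # number of transmissions it took
    raise TimeoutError("no ACK after %d tries" % max_tries)

# Simulated lossy link: the first two transmissions go unacknowledged.
acks = iter([False, False, True])
tries = send_with_retransmit(lambda: None, lambda t: next(acks), 0.2)
```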
The fact that such information exchanges lack determinism has led engineers to look elsewhere for deterministic buses when this quality is important. Among the most widely used candidates are the controller-area-network (CAN) bus and MIL-STD-1553 for automotive and military applications, respectively. Industrial applications are more likely to use widely adopted field buses such as DeviceNet and Profibus.
Other solutions include expensive reflective-memory networks and token-ring networks with fixed traffic patterns. The infrastructures of these deterministic buses are generally devised to handle the needs of one particular vertical industry. For example, the CAN bus is designed for automotive networks and DeviceNet for industrial sensor networks. This limits these buses to specific application areas.
There is a growing trend toward devising versions of Ethernet that handle real-time communications, despite the technical challenges. One reason a real-time Ethernet is attractive is that Ethernet ports and networks are ubiquitous. Widget-sorting machines and enterprise-level applications can connect to the same Ethernet network and share information. Such universal connections can boost productivity and reduce the cost of large systems.
In addition, Ethernet hardware is manufactured in high volumes, so costs are low. This makes Ethernet highly desirable compared with buses such as CAN and MIL-STD-1553 that were designed for niche applications. Moreover, the Ethernet infrastructure can support multiple protocols, so real-time extensions of Ethernet can run simultaneously on the same network connections as ordinary Ethernet. Additionally, high-speed versions of Ethernet overcome bandwidth limitations that hamper other kinds of networks. Gigabit Ethernet, for example, offers raw throughput several orders of magnitude beyond the 500-kbps maximum of DeviceNet.
It can be useful to examine how Ethernet protocols are given deterministic qualities. The latest version of the National Instruments LabVIEW Real-Time Module can serve as an example of one approach.
Many readers are probably familiar with LabVIEW and its graphical programming approach to developing virtual instruments. Its Real-Time Module uses a deterministic data-transfer scheme that employs time-triggered techniques. Time-triggered communication means that all network activities are triggered by the passage of predefined time segments. In a time-triggered communication system, developers spell out specific uses for specific time segments within a network message.
(A time-triggered communication system contrasts with event-triggered schemes where events within network nodes cause the node to attempt to send a message over the net. The randomness of the event triggering is what causes the collisions in network traffic that make such networks nondeterministic.)
National Instruments equipment uses a time-triggered Ethernet that is private, i.e., separate from the Ethernet that serves the rest of the enterprise. Nevertheless, this private network uses ordinary off-the-shelf network interfaces. The usual configuration also connects devices on the private network to a second set of connections for a public network. This topology provides a degree of redundancy.
Use of a private network can be contrasted with methods in other deterministic Ethernet protocols, such as Ethernet Powerlink. Powerlink uses one set of connections for both deterministic and nondeterministic packets but splits the network cycle so regular traffic, such as TCP/IP, can flow after the deterministically scheduled packets are sent and received. This method needs less wiring between real-time nodes because real-time and ordinary Ethernet traffic coexist on the same cables. The downside is that it typically requires a gateway, aware of the deterministically scheduled packets, to manage traffic coming from outside the real-time network.
LabVIEW time-triggered methods transfer data across a deterministic Ethernet through two schemes: shared memory and dedicated time slots. Shared memory involves having every node on the network allocate a block of memory for every other node. Each shared-memory block contains any variables that one node will share with all other nodes on the network. During a network cycle, all data from one node are sent to all other nodes as one packet. The configuration is such that a node can only write to the portion of memory reserved for it in another node.
For example, consider two nodes, A and B, passing information. B reserves a block of memory for use by A, as well as a block for each of the other nodes it knows about on the network. When A sends data intended for B, B writes them only into the portion of memory reserved for A. In every network cycle, data written by one node are "reflected" to all other nodes on the network.
It is a scheduling algorithm that makes a shared-memory network deterministic. The system must schedule the reflection of each shared-memory block so reflections do not overlap. To configure the network, one node is designated as the master. At the start of each cycle, the master node sends a cycle-start packet to the rest of the nodes. Once the cycle-start packet ends, nodes can begin to send shared-memory blocks.
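A basic requirement of such a schedule is that no two reflection windows overlap. The check below is a hypothetical sketch of that validation; the function name and the representation of windows as (start, end) offsets in microseconds are assumptions.

```python
def validate_reflection_schedule(windows):
    """Check that no two shared-memory reflection windows overlap.

    `windows` maps node name -> (start_us, end_us) within one network
    cycle. A deterministic configuration must reject overlapping windows
    before the network ever runs.
    """
    ordered = sorted(windows.items(), key=lambda kv: kv[1][0])
    for (n1, (_, end1)), (n2, (start2, _)) in zip(ordered, ordered[1:]):
        if start2 < end1:
            raise ValueError(f"windows for {n1} and {n2} overlap")
    return True

validate_reflection_schedule({"A": (50, 60), "B": (65, 85), "C": (90, 120)})
```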
One advantage of a shared-memory block is that it can hold multiple shared variables or data items. Thus the transfer of one memory block can distribute several pieces of data that might otherwise each need a separate handshaking sequence to get from one node to another. This use of the memory block reduces the overhead per transfer.
The system can also use the clock signal from the master node as a synchronized network clock. This signal can serve as a source from which to derive secondary timing sources based on when data are sent.
An alternate way of transferring data is the dedicated-slot method. Here a time is scheduled during each network cycle for each data item to transfer. Developers specify the slot for each data item as an amount of offset time after the start of the network cycle. There are also maximum and minimum allowed slot times. For example, the time slot for one particular item might begin 175 μsec after the start of the network cycle, with a transmission time of 33 μsec.
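A slot specification of this kind reduces to simple arithmetic on offsets, sketched below. The 500-μsec cycle length and the function name are assumptions for illustration; the 175-μsec offset and 33-μsec duration come from the example above.

```python
CYCLE_US = 500   # assumed network cycle length, in microseconds

def slot_times(cycle_start_us, offset_us, duration_us):
    """Absolute transmit window for a dedicated-slot data item.

    Raises if the slot would spill past the end of the network cycle,
    since that would collide with the next cycle-start packet.
    """
    if offset_us + duration_us > CYCLE_US:
        raise ValueError("slot does not fit within the network cycle")
    start = cycle_start_us + offset_us
    return start, start + duration_us

# A slot offset of 175 us with a 33-us transmission time, in a cycle
# that happened to begin at t = 10,000 us.
window = slot_times(cycle_start_us=10_000, offset_us=175, duration_us=33)
```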
The advantage of dedicated-slot variables is that a control loop can close across the network within a single network cycle. Developers schedule dedicated-slot variables or shared-memory blocks in the first part of the network cycle, then define a time slot for a calculated variable later in the same cycle. A processor can use the shared variables in its calculations and, if the calculations are short enough, complete them in time to issue the new calculated variable when its slot arrives.
Similarly, a system using dedicated slots can acquire data and actuate I/O even when the network is busy. The I/O node can send its feedback during its time slot. The controller can note this feedback, process it, and transmit an output during a later time slot within the same network cycle. Engineers can use this method for closing any type of control loop across the network, allowing a decentralized system for control that reduces overall wiring and complexity.
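The feasibility of closing the loop in one cycle comes down to whether the computation fits in the gap between slots. The check below is an illustrative sketch; the slot times and compute budget are assumptions.

```python
def loop_closes_in_cycle(feedback_end_us, output_start_us, compute_us):
    """Can the controller close the loop within one network cycle?

    The calculation must finish in the gap between the end of the I/O
    node's feedback slot and the start of the controller's output slot.
    """
    return feedback_end_us + compute_us <= output_start_us

# Feedback slot ends at 85 us, output slot opens at 175 us, and the
# control calculation takes an assumed 60 us of processor time.
fits = loop_closes_in_cycle(feedback_end_us=85, output_start_us=175,
                            compute_us=60)
```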
The process of developing and implementing a deterministic network presents several challenges. One of these is how to recover when transmitted data get garbled. Recall that standard Ethernet deals with data-transmission problems simply: it allows retransmission whenever information has been lost to network noise or collisions. Deterministic Ethernet networks eliminate collisions, but they still face a challenge when data are lost to noise. They can't rely on retransmission because doing so would violate the predefined network schedule.
One way of dealing with such problems is to incorporate redundant data transmission into the schedule; however, this reduces overall network throughput. Consequently, deterministic Ethernet protocols must include robust error checking to alert nodes of possible data loss. Users must then decide how to handle situations in which data are lost.
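Standard Ethernet frames already carry a CRC-32 frame check sequence for exactly this purpose; the sketch below shows the principle at the application level. The helper names and the fallback policy in the comment are assumptions for illustration.

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    """Append a CRC-32 so receivers can detect noise-corrupted frames."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes):
    """Return the payload if the CRC matches, else None (data lost).

    A deterministic network can't simply retransmit, so when this
    returns None the application must decide what to do: hold the last
    good value, or fall back on a redundant copy scheduled elsewhere
    in the cycle.
    """
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

good = check_frame(frame_with_crc(b"angle=12.5"))
# A frame whose payload was corrupted in transit fails the check.
bad = check_frame(b"angle=99.9" + frame_with_crc(b"angle=12.5")[-4:])
```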
Another challenge arises when incorporating deterministic Ethernet into applications distributed across a large geographical area, as in cases with multiple subnets. Generally speaking, deterministic Ethernet networks cannot span more than one subnet because routing and switching elements introduce large delays into network data transfers.
While it is difficult to transfer data deterministically between subnets, the IEEE 1588 protocol uses a special scheme to keep subnets in sync: It includes specifications for boundary clocks that can propagate a synchronized clock to a particular subnet. But the general recommendation is to avoid this complexity and have nodes reside on the same subnet when a system must deterministically transfer data.
All in all, emerging Ethernet technologies can overcome technical limitations that once kept these networks out of real-time and industrial applications. Deterministic Ethernet technologies will likely find use in a number of distributed-control schemes spanning numerous industries. MD
National Instruments Corp., NI.com