Industrial networks increasingly incorporate switching gear to avoid bottlenecks that can bog down critical operations.
Edited by Leland Teschler
Networks are becoming a critical part of many industrial installations. Production equipment increasingly incorporates diagnostics that are available over Ethernet, and ever more factory gear is not only networked but also has its own IP address. No wonder, then, that the networks to which these devices connect are becoming more complicated.
Industrial networks are increasingly likely to make use of Ethernet switches, hubs, and other connective equipment to better manage network traffic so there are no delays in time-critical data.
Modern networks can potentially use numerous styles of switching equipment. But there are important differences among these devices. Circumstances dictate where one type makes more sense than another. A brief tutorial can explain how to tell whether a network will benefit from a specific kind of switching device, and how to add switches so the network performs better than before. The latter point is important because switches can both harm and benefit network capacity and speed. They are not a cure-all for network issues.
The uninitiated may have a mental image of a network "switch" as something containing electrical contacts like an electromechanical relay. This image is nothing at all like reality. To begin with, network switches are entirely solid-state devices. And the switching action they provide is more sophisticated than simple switching, as in on/off switches.
Network switches examine each data packet and process it accordingly. In this regard, they are more capable than another type of network connection called a hub. A hub simply repeats the signal it sees on one port to all the other ports that have connections. All nodes connected to a hub see the same traffic. There is no intelligence behind either the data transmission or the determination of which node receives packets.
Specifically, switches note the Ethernet addresses of the nodes residing on each network segment and then allow only the traffic destined for those nodes to pass through. When the switch receives a packet, it examines the destination and source hardware addresses and compares them to a table of network segments and addresses. If the segments are the same, the packet is dropped, or "filtered"; if the segments differ, the switch forwards the packet to the proper segment. Additionally, switches prevent bad or misaligned packets from spreading by not forwarding them.
The filtering of packets and regeneration of forwarded packets lets switching technology split a network into separate collision domains. The regeneration of packets lets networks span greater distances and handle more nodes, and it dramatically lowers the overall collision rates. In switched networks, each segment is an independent collision domain. This also allows for parallelism, meaning up to one-half of the computers connected to a switch can send data at the same time.
Most switches are self-learning. They determine the Ethernet addresses in use on each segment and build a table as they pass packets. When the switch receives a frame, it saves the MAC address of the originator and remembers the port on which the frame arrived. This plug-and-play element makes switches an attractive alternative to hubs.
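The learn-then-filter behavior described above can be made concrete with a short sketch. This is a hypothetical Python model, not any vendor's implementation: the class name, port numbers, and MAC strings are all invented for illustration.

```python
class LearningSwitch:
    """Minimal sketch of a self-learning Ethernet switch."""

    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2, 3]
        self.mac_table = {}         # MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: remember which port the source address lives on.
        self.mac_table[src_mac] = in_port

        if dst_mac in self.mac_table:
            out_port = self.mac_table[dst_mac]
            if out_port == in_port:
                return []           # same segment: filter (drop) the frame
            return [out_port]       # different segment: forward it there
        # Unknown destination: flood to every port except the ingress port.
        return [p for p in self.ports if p != in_port]


sw = LearningSwitch([1, 2, 3])
sw.handle_frame("AA", "BB", 1)   # BB unknown -> flooded to ports [2, 3]
sw.handle_frame("BB", "AA", 2)   # AA already learned on port 1 -> [1]
```

Note how no configuration is needed: the table fills itself in as ordinary traffic passes, which is the plug-and-play quality that makes switches a drop-in replacement for hubs.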
Switches can connect different network types (such as Ethernet and Fast Ethernet) or networks of the same type. Many switches today offer high-speed links, like Fast Ethernet, which can be used to link the switches together or to give added bandwidth to important servers that get a lot of traffic. A network composed of several switches linked together via these fast uplinks is called a collapsed backbone network.
Another way to speed access for critical computers is to dedicate ports on switches to individual nodes. Servers and power users can take advantage of a full segment for one node, so some networks connect high traffic nodes to a dedicated switch port. This type of infrastructure would benefit such nodes as file or application servers, which many other nodes on the network routinely access.
Full duplex is another method of increasing bandwidth to dedicated workstations or servers. To use full duplex, the network interface cards in both the server or workstation and the switch must support full-duplex operation. Full duplex doubles the potential bandwidth on that link.
We are beginning to see more installations implementing high-end switches that support Gigabit Ethernet. Some of these switches can both switch and route traffic between different network segments. Adding Gigabit Ethernet lets enterprise networks boost their potential speed tenfold, from 100 to 1,000 Mbps.
Switches reduce network congestion, so it is useful to review what causes this congestion. Performance deteriorates as more users share a network or as applications need more data. This is because all users on a shared network are competitors for the Ethernet bus.
A moderately loaded 10-Mbps Ethernet network can sustain 35% capacity use and throughput in the neighborhood of 2.5 Mbps after accounting for packet overhead, interpacket gaps, and collisions. A moderately loaded Fast Ethernet or Gigabit Ethernet network carries about 25 Mbps or 250 Mbps of real data under the same circumstances. With shared Ethernet and Fast Ethernet, the likelihood of collisions rises as more nodes and more traffic operate over the shared collision domain.
Ethernet itself is a shared medium, so there are rules for sending packets that avoid conflicts and protect data integrity. Nodes on an Ethernet network send packets only when they see the network is idle. This access method is known as CSMA/CD (carrier-sense multiple access with collision detection). With it, each node listens for an idle network before it tries to send its message. However, two nodes at different locations can still begin sending simultaneously. In that case a collision results, and both nodes must retransmit their packets, adding to the traffic problem.
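After a collision, Ethernet nodes do not retransmit immediately; each waits a random number of slot times chosen by truncated binary exponential backoff, which is what keeps repeated collisions from locking up the segment. A minimal sketch of the slot selection (the function name is invented for illustration):

```python
import random

SLOT_TIME_US = 51.2   # one slot time on 10-Mbps Ethernet, in microseconds

def backoff_delay_us(collision_count, rng=random):
    """Truncated binary exponential backoff used by CSMA/CD: after the
    n-th collision in a row, wait a random number of slot times drawn
    uniformly from 0 .. 2**min(n, 10) - 1 before retransmitting."""
    k = min(collision_count, 10)          # exponent is capped at 10
    slots = rng.randrange(2 ** k)
    return slots * SLOT_TIME_US
```

The doubling range means that as congestion worsens, nodes spread their retries over an ever wider window, lowering the odds of colliding again.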
Too many collisions result when there are too many users or too much traffic on the network. This can slow network performance from the user's point of view. Thus segmenting, where switches or routers divide a network into different pieces joined together logically, reduces congestion in an overcrowded network by eliminating the shared collision domain.
Collision rates measure the number of packets with collisions as a percentage of total packets in a given unit of time. Some collisions are inevitable; in a well-run network, roughly 10% of packets may experience collisions.
Utilization rate is another widely accessible statistic describing the health of a network. It is the amount of total traffic as a percentage of the theoretical maximum for the network type: 10 Mbps for Ethernet, 100 Mbps for Fast Ethernet. LAN monitoring equipment often provides this information. Sustained utilization above 35% in an average network indicates potential problems. This 35% figure is considered near optimum, but some networks have higher or lower optimums because of factors such as packet size and peak-load deviation.
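The two rules of thumb combine into a simple health check. The sketch below is a hypothetical helper (names and thresholds as described in the text; the thresholds are guidelines, not hard limits):

```python
def network_health(bits_sent, seconds, capacity_bps, collisions, packets):
    """Flag a segment as a switching candidate when sustained utilization
    exceeds 35% and the collision rate exceeds 10%."""
    utilization = bits_sent / (seconds * capacity_bps)
    collision_rate = collisions / packets
    congested = utilization > 0.35 and collision_rate > 0.10
    return utilization, collision_rate, congested


# A 10-Mbps segment moving 4.2 Mbit in one second, 120 collisions per
# 1,000 packets: 42% utilization, 12% collision rate -> congested.
util, rate, congested = network_health(4_200_000, 1, 10_000_000, 120, 1_000)
```

A LAN monitor supplies the raw counters; the arithmetic itself is this simple.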
A switch is said to work at "wire speed" if it has enough processing power to handle full Ethernet speed at minimum packet sizes. Most switches on the market are well ahead of network traffic capabilities. They support the full wire speed of Ethernet, 14,880 pps (packets per second), and of Fast Ethernet, 148,800 pps. Some Gigabit switches hit 115 million pps.
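These wire-speed figures follow directly from the minimum Ethernet frame size. A 64-byte frame also occupies an 8-byte preamble and a 12-byte inter-frame gap on the wire, so each frame consumes (64 + 8 + 12) × 8 = 672 bits:

```python
def wire_speed_pps(link_bps, frame_bytes=64):
    """Theoretical maximum packet rate for minimum-size frames, counting
    the 8-byte preamble and 12-byte inter-frame gap per frame."""
    return link_bps // ((frame_bytes + 8 + 12) * 8)


wire_speed_pps(10_000_000)     # 14,880 pps for 10-Mbps Ethernet
wire_speed_pps(100_000_000)    # 148,809 pps (~148,800) for Fast Ethernet
```

The same arithmetic scales to Gigabit Ethernet, which is why switch fabrics are specified in packets per second rather than bits per second.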
Routers can be considered a special type of switch in that they filter network traffic. But rather than filtering on hardware addresses, they filter on network-layer (protocol) addresses. Routers were born out of the necessity of dividing networks logically instead of physically. An IP router can divide a network into various subnets so that only traffic destined for particular IP addresses passes between segments.
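The subnet test a router applies is straightforward to demonstrate with Python's standard ipaddress module (the addresses and function name here are illustrative only):

```python
import ipaddress

def needs_routing(src_ip, dst_ip, prefix_len):
    """A router forwards a packet between segments only when source and
    destination fall in different IP subnets."""
    subnet = ipaddress.ip_network(f"{src_ip}/{prefix_len}", strict=False)
    return ipaddress.ip_address(dst_ip) not in subnet


needs_routing("192.168.1.10", "192.168.1.77", 24)   # False: same subnet
needs_routing("192.168.1.10", "192.168.2.5", 24)    # True: router forwards
```

Traffic within a /24 subnet stays on its segment; only cross-subnet packets ever reach the router.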
Routers recalculate the checksum and rewrite the MAC header of every packet. The price paid for this type of intelligent forwarding and filtering is usually calculated in terms of latency, or the delay that a packet experiences inside the router. Such filtering takes more time than in a switch, which only looks at the Ethernet address.
Routers can also be set up with more complicated static filtering through what are called access control lists. This feature lets the router act as a firewall, blocking certain types of packets and protocols from crossing its interfaces, thus keeping them out of LAN or WAN segments.
An additional benefit of routers is that they automatically filter broadcast messages. But overall the configuration of a router can be more complex than that of a switch.
Switches are perhaps three to five times more expensive than hubs. Nevertheless, the market for Ethernet switches has been doubling annually. The reason is that switch prices have been dropping precipitously, while hubs are a mature technology with small price declines. As a result, the gap between switch costs and hub costs is far smaller than it once was, and it continues to narrow.
In addition, switches install as easily as hubs and operate on the same hardware layer. So there are no protocol issues in swapping a switch for a hub.
It is easy to identify where networks will benefit from adding a switch to extend the distance covered. It is harder to see where a switch would help relieve congestion enough to improve the performance of the network. Utilization factors and collision rates will show whether congestion is causing network problems. In general, networks that have a utilization factor above 35% and collision rates exceeding 10% are candidates for performance boosts from switching.
All switches add small latency delays to packet processing. These processing delays, switch buffer limitations, and the retransmissions that can result sometimes make switched networks slower than what is possible with hubs. That's why deploying switches unnecessarily can actually slow network performance. Putting the first switch on a network has different implications than adding yet more switched ports. And it is important to understand traffic patterns as a prerequisite to deploying switches. A switch that forwards almost all the traffic it receives will relieve much less congestion than one that filters most of the traffic.
Network response times (the part of network performance that users see) suffer as the load on the network rises. Under heavy loads small increases in user traffic can degrade performance significantly. The situation is analogous to automobile freeway dynamics: Adding more cars boosts cars-per-hour throughput up to a point, but further increases in demand make true throughput deteriorate rapidly. In Ethernet, the number of collisions rises as the network is loaded. This causes retransmissions that further increase the network load and cause even more collisions. Network traffic slows considerably.
It is now possible to find what are called managed switches that are designed for industrial networks. Managed switches are increasingly used in networks containing time-critical devices or where it is important to know immediately when the network has problems.
Managed switches often can generate real-time warnings for notable events such as power on/off, traffic overload, configuration changes, and so forth. Managed switches generally support SNMP (Simple Network Management Protocol), a protocol used to monitor network devices for any conditions that warrant administrative attention. This lets operators monitor managed switches from a central location. Networks using SNMP may manage every device or just the more critical areas.
Some managed switches come with Power-over-Ethernet capability. This lets the switch power an attached device through its own port and an Ethernet cable. As industrial networking starts to rely more heavily on network switching technology, PoE will spread.
VLANs (virtual LANs) can be implemented through use of a managed switch. A VLAN allows grouping network nodes into logical LANs that behave as one network, though they may physically connect to different segments of a LAN.
The main benefit of a VLAN is the ability to manage broadcast and multicast traffic. A VLAN-based switch may be the best bet for optimizing traffic if the network has logical groupings that differ from its physical groupings. VLANs also offer network security when used with access control lists and MAC address filtering.
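Broadcast containment is the heart of the idea: a VLAN-aware switch floods a broadcast only to ports in the same VLAN as the ingress port. A minimal sketch with a hypothetical six-port switch:

```python
def broadcast_ports(vlan_of_port, in_port):
    """Ports that receive a broadcast arriving on in_port: members of the
    same VLAN only, excluding the ingress port itself."""
    vlan = vlan_of_port[in_port]
    return sorted(p for p, v in vlan_of_port.items()
                  if v == vlan and p != in_port)


# Hypothetical 6-port switch: ports 1-3 in VLAN 10, ports 4-6 in VLAN 20.
vlans = {1: 10, 2: 10, 3: 10, 4: 20, 5: 20, 6: 20}
broadcast_ports(vlans, 1)   # [2, 3] -- VLAN 20 ports never see the frame
```

Nodes on ports 4 through 6 behave as if they were on a physically separate LAN, even though all six ports share one chassis.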
In contrast, an unmanaged switch will pass broadcast and multicast packets through to all its ports. These switches do not offer the network-management tools associated with their managed cousins.
Another benefit to managed switches is they permit use of a Spanning Tree Algorithm. Spanning Tree lets the network manager design-in redundant links, with switches attached in loops. The Spanning Tree protocol lets the switches coordinate with each other so only one redundant link carries traffic (unless there is a failure, in which case the backup link automatically activates).
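The loop-breaking idea behind Spanning Tree can be sketched in a few lines. This toy model simply builds a tree outward from one switch and puts every leftover link on standby; real STP additionally elects the root bridge and compares path costs, which this sketch omits:

```python
from collections import deque

def blocked_links(links, root):
    """Return the redundant links a spanning tree would hold in reserve.
    BFS from the root keeps one active path to each switch; any link
    not in that tree is blocked until a failure activates it."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    visited, tree = {root}, set()
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                tree.add(frozenset((node, nbr)))
                queue.append(nbr)
    return [l for l in links if frozenset(l) not in tree]


# Three switches wired in a triangle: one link stays idle as a backup.
blocked_links([("A", "B"), ("B", "C"), ("A", "C")], root="A")
```

With the B-C link blocked, no frame can circulate forever around the loop, yet the physical redundancy remains ready if an active link fails.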
Network managers may want to have redundant links in critical applications and would use managed switches to do so. These types of features are particularly important in a network handling automation equipment where redundancy is imperative. However, unmanaged switches would do quite well for the rest of the network and are much less expensive.
LAN switches come in two basic architectures, cut-through and store-and-forward. Cut-through switches examine only the destination address before forwarding the packet on to the right segment. A store-and-forward switch, on the other hand, analyzes the entire packet before forwarding it. Examining the entire packet takes more time, but lets the switch catch certain packet errors and collisions and prevent bad packets from propagating through the network.
Today, the speed of store-and-forward switches has caught up with that of cut-through switches to the point where the difference between the two is minimal. There are also many hybrid switches available that mix cut-through and store-and-forward architectures.
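The error-catching advantage of store-and-forward comes from checking the frame-check sequence at the end of the frame, which a cut-through switch never waits to see. The sketch below uses CRC-32 as a stand-in for the Ethernet FCS (real Ethernet specifies exact bit ordering that this illustration glosses over):

```python
import zlib

def store_and_forward_ok(frame_bytes):
    """Store-and-forward sketch: buffer the whole frame, then verify the
    trailing 4-byte checksum before forwarding. A cut-through switch
    would forward after reading only the 6-byte destination address and
    never notice a bad checksum."""
    payload, fcs = frame_bytes[:-4], frame_bytes[-4:]
    return zlib.crc32(payload).to_bytes(4, "little") == fcs


good = b"hello, factory floor"
frame = good + zlib.crc32(good).to_bytes(4, "little")
store_and_forward_ok(frame)            # True: forward the frame
store_and_forward_ok(b"X" + frame[1:]) # False: corrupted, drop it here
```

Dropping the damaged frame at the switch keeps the corruption from consuming bandwidth on every downstream segment.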
BLOCKING VERSUS NONBLOCKING
Take a switch's specifications and add up all the ports at theoretical maximum speed. This gives the theoretical sum total of a switch's throughput. The switch is considered a blocking switch if its switching components cannot handle the theoretical total traffic of all ports. There is debate whether all switches should be designed as nonblocking. But the added costs of doing so are only reasonable for switches designed to work in the largest network backbones. For ordinary applications, blocking switches having reasonable throughput levels will work just fine.
Consider an eight-port 10/100 switch. Each port can theoretically handle 200 Mbps (full duplex). Thus there is a theoretical need for 1,600 Mbps, or 1.6 Gbps. In the real world each port will not exceed 50% utilization, and an 800-Mbps switching bus is adequate.
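The blocking-versus-nonblocking arithmetic above reduces to one comparison. A small illustrative helper (the function name and the 50%-utilization observation follow the worked example in the text):

```python
def is_nonblocking(ports, port_mbps, fabric_mbps, full_duplex=True):
    """A switch is nonblocking when its fabric can carry every port at
    theoretical maximum speed at once; full duplex doubles each port's
    potential load."""
    per_port = port_mbps * (2 if full_duplex else 1)
    return fabric_mbps >= ports * per_port


is_nonblocking(8, 100, 1600)   # True: 1.6-Gbps fabric covers 8 x 200 Mbps
is_nonblocking(8, 100, 800)    # False: blocking, yet fine near 50% load
```

The second case is the common one: a blocking design with a sensibly sized fabric costs less and still never becomes the bottleneck in practice.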
The switch holds packets in buffers as they are processed. If the destination segment is congested, the switch holds the packet until bandwidth becomes available on the crowded segment. In severely congested networks, switch buffers can overflow. There are two strategies for handling full buffers. One is backpressure flow control, in which the switch signals the sending segment to pause transmission. The alternative is simply dropping the packet and relying on the source to retransmit it automatically.
Neither technique is attractive. The first spreads problems in one segment to others. The latter solution causes retransmissions which further boost network traffic. Both exacerbate the problem, so switch vendors use large buffers and advise network managers to eliminate congested segments through design of switched network topologies.
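The two full-buffer strategies can be modeled in a few lines. This is a toy model with invented names, purely to make the trade-off concrete:

```python
from collections import deque

def offer_packet(buffer, packet, capacity, policy="drop"):
    """What a switch does when an output buffer is full. Returns
    "queued", "dropped" (the source must retransmit, adding traffic),
    or "backpressure" (congestion is pushed back onto the source
    segment, spreading the problem upstream)."""
    if len(buffer) < capacity:
        buffer.append(packet)
        return "queued"
    return "dropped" if policy == "drop" else "backpressure"


q = deque()
offer_packet(q, "p1", capacity=2)                         # "queued"
offer_packet(q, "p2", capacity=2)                         # "queued"
offer_packet(q, "p3", capacity=2)                         # "dropped"
offer_packet(q, "p4", capacity=2, policy="backpressure")  # "backpressure"
```

Either branch makes congestion worse in some direction, which is why vendors prefer large buffers and why topology design matters more than the overflow policy.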
A hybrid device is the latest improvement in internetworking technology. Combining the packet handling of routers and the speed of switching, these multilayer switches (also sometimes called layer-three switches) operate on both layer two and three (data link and network) of the OSI network model. (As a quick review, data packets are encoded and decoded into bits at layer two. It is also this layer that controls how a computer on the network gains access to network data and gets permission to transmit it, as well as handles frame synchronization and error checking. Layer three governs switching and routing, congestion control, and packet sequencing.)
Multilayer switches target the core of large enterprise networks. Sometimes called routing switches or IP switches, the switches look for common traffic flows and switch these flows on the hardware layer for speed. Multilayer switches use routing functions for traffic outside the normal flows. This strategy reserves routing functions that incur high overhead for only the situations where they are needed.
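The flow-caching strategy can be sketched briefly. In this hypothetical model, the first packet of a flow takes the slow path through full routing, and the cached decision then serves every later packet in that flow (the dict lookup stands in for the hardware fast path; all names are invented):

```python
def forward(packet, flow_table, route):
    """Multilayer-switch sketch: route the first packet of a flow in
    software, then switch subsequent packets of the same flow via a
    cached table entry."""
    flow = (packet["src"], packet["dst"])
    if flow not in flow_table:
        flow_table[flow] = route(packet)   # slow path: routing decision
    return flow_table[flow]                # fast path: switched


# Hypothetical routing function that records how often it runs.
route_calls = []
def route(pkt):
    route_calls.append(pkt)
    return "port3"

table = {}
forward({"src": "10.0.0.1", "dst": "10.0.1.9"}, table, route)  # routed
forward({"src": "10.0.0.1", "dst": "10.0.1.9"}, table, route)  # switched
len(route_calls)   # 1: the expensive routing step ran only once
```

This is the sense in which multilayer switches reserve high-overhead routing for only the situations that need it.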
Many vendors are working on high-end multilayer switches, and the technology is definitely a work in progress. As networking technology evolves, multilayer switches are likely to replace routers in most large networks.