The two diagrams look essentially similar, but there is a huge potential performance advantage to the collapsed backbone design. The advantage exists because the central concentrator device is able to switch
packets between its ports directly through its own high-speed backplane. In most cases, this means that the aggregate throughput of the network is over an order of magnitude higher.
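To make the scale of that difference concrete, here is a back-of-the-envelope comparison in Python. The capacity figures are illustrative assumptions, not measurements from any particular product:

    # Aggregate capacity: shared backbone vs. collapsed (switched) backbone.
    # All figures are assumed example values, not vendor specifications.
    SHARED_BACKBONE_MBPS = 100     # one shared Fast Ethernet backbone
    SWITCH_BACKPLANE_MBPS = 2000   # a modest Layer 2 switch backplane

    # On a shared backbone, every segment contends for the same medium, so
    # aggregate capacity is fixed at the backbone speed. In the collapsed
    # design, independent port-to-port conversations cross the backplane
    # in parallel.
    ratio = SWITCH_BACKPLANE_MBPS / SHARED_BACKBONE_MBPS
    print(f"shared:    {SHARED_BACKBONE_MBPS} Mbps aggregate")
    print(f"collapsed: {SWITCH_BACKPLANE_MBPS} Mbps aggregate ({ratio:.0f}x)")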
The essential problem is that all network segments must share the bandwidth of the backbone for all traffic crossing it. But how much traffic is that? If the separate segments are relatively autonomous, with their
own file and application servers, there may be very little reason to send a packet through the backbone. But, in most large LAN environments, at least one central computer room contains the most heavily used
servers. If everybody shares these servers, then they also share the backbone. Where will the bottleneck occur?
3.4.1.2 Backbone capacity
In the diagram shown, the bottleneck is actually a bit of a moot point because there is only one central server segment. If all traffic crossing the backbone goes either to or from that one segment, then it's fairly clear
that all you need to do is control backbone contention a little better than on the server segment, and the bottleneck will happen in the computer room. But this is not the usual case. It would be more realistic to draw
the central server segment with all of those servers directly connected to a Fast Ethernet switch at full duplex. With just three such servers, as in the drawing, the peak theoretical loading on the backbone will
be 600 Mbps (100 Mbps for Fast Ethernet, times two for full duplex, times three servers).
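The arithmetic is simple enough to check directly, using the same figures as the example:

    # Peak theoretical load offered to the backbone by the central servers.
    FAST_ETHERNET_MBPS = 100   # Fast Ethernet line rate
    DUPLEX_FACTOR = 2          # full duplex: send and receive simultaneously
    SERVERS = 3                # servers on the central segment

    peak_mbps = FAST_ETHERNET_MBPS * DUPLEX_FACTOR * SERVERS
    print(f"peak theoretical burst: {peak_mbps} Mbps")   # 600 Mbps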
Clearly that number is a maximum theoretical burst. In the following section I will discuss how to appropriately size such trunk connections. The important point here is that it is very easy to get into
situations in which backbone contention is a serious issue.
This is where the collapsed backbone concept shows its strength. If that central concentrator is any commonly available Fast Ethernet switch from any vendor, it will have well over 1000 Mbps of aggregate
throughput. The backplane of the switch has become the backbone of the network, which provides an extremely cost-effective way of achieving high throughput on a network backbone. The other wonderful
advantage of this design is that it will generally have significantly lower latency from end to end, because the network can take advantage of the high-speed port-to-port packet switching functions of the central
switch.
In Figure 3-8, each user segment connects to the backbone via some sort of access device. The device may be an Ethernet repeater, a bridge, or perhaps even a router. The important thing is that any packet passing
from one segment to another must pass through one of these devices to get onto the backbone and through another to get off. With the collapsed backbone design, there is only one hop. The extra latency may or
may not be an issue, depending on the other network tolerances, but it is worth noting that each extra hop takes its toll.
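As a rough illustration of why hop count matters, consider serialization delay alone: each store-and-forward device must receive a full frame before sending it on, so every extra hop adds at least one frame time. The figures below are assumptions for a full-size frame at Fast Ethernet speeds, not measured values, and real devices add queuing and processing delay on top:

    # Serialization delay per store-and-forward hop for a full-size frame.
    FRAME_BYTES = 1500
    LINK_MBPS = 100

    per_hop_us = FRAME_BYTES * 8 / LINK_MBPS   # microseconds per hop

    def end_to_end_us(hops: int) -> float:
        """Latency from serialization alone; queuing would add more."""
        return hops * per_hop_us

    print(end_to_end_us(2))   # traditional backbone: on and off = 2 hops, 240 us
    print(end_to_end_us(1))   # collapsed backbone: 1 hop, 120 us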
3.4.1.3 Backbone redundancy
The biggest problem with this collapsed backbone design should already be clear. The central collapse point is also a single point of failure for the entire network.
Figure 3-10 shows the easiest way around this
problem, but forces me to be more specific about what protocols and technology the example network uses.
Figure 3-10. A collapsed backbone with redundancy
The most common way to collapse a LAN backbone is through a Layer 2 Ethernet switch. So let's suppose that each user segment is either Ethernet or Fast Ethernet, or perhaps a combination of the two. The
central device is a multiport Fast Ethernet switch with an aggregate backplane speed of, say, 1 Gbps (this number is much lower than what is currently available in backbone switches from any of the major
vendors, but it's high enough for the example). Each user LAN segment connects to these central switches using two fiber optic Fast Ethernet connections, one to each switch.
Then the two switches can be configured to use the Spanning Tree protocol. This configuration allows one switch to act as primary and the other as backup. On a port-by-port basis, the protocol ensures that each user
LAN segment is connected to only one of the two switches at a time. Note that a switch-to-switch connection is indicated in the diagram as well. This connection is provided in case LAN segment 1 is
active on Switch A while segment 2 is active on Switch B. If this happens, there needs to be a way to cross over from one switch to the other.
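A toy model may make the port-by-port behavior easier to see. In this sketch the segment names and failover logic are hypothetical, not any vendor's implementation; the point is that each segment forwards through exactly one switch at a time, and a single failure moves only the affected segment:

    # Port-by-port redundancy: each segment is dual-homed, but only one
    # uplink per segment forwards at a time; the other is held blocked,
    # as Spanning Tree would hold it, until needed.
    active = {"segment1": "A", "segment2": "B"}
    # Traffic between segment1 and segment2 must cross the switch-to-switch
    # link here, because they are active on different switches.

    def fail_uplink(segment: str) -> None:
        """Fail over one segment's uplink without disturbing the others."""
        active[segment] = "B" if active[segment] == "A" else "A"

    fail_uplink("segment1")   # e.g., a cut fiber or dead transceiver on A
    print(active)             # {'segment1': 'B', 'segment2': 'B'}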
There are several important redundancy considerations. First, it may seem more complicated to use port-by-port redundancy rather than redundancy from one whole switch to the other. After all, it means that
there will probably be complicated switch-to-switch communication, and it seems to require the switch-to-switch link that wasn't previously required. But this is actually an important advantage. It means that the
switch can suffer a failure affecting any one port without having to flip the entire backbone of the network from one switch to the other. There are a lot of ways to suffer a single port failure. One could lose one of
the fiber transceivers, have a cut in one of the fiber bundles, or even have a hardware failure in one port or one card of a switch. So minimizing the impact to the rest of the network when this happens will result
in a more stable network.
This example specified Ethernet and Spanning Tree, but there are other possibilities. If all LAN segments used Token Ring, for example, you could use two central Token Ring switches and the Token Ring flavor
of Spanning Tree. Exactly the same comments would apply.
Alternatively, for an IP network you could do exactly the same thing at Layer 3 by using two central routers. In this case, you could use the Cisco proprietary HSRP protocol or the RFC 2338 standard
VRRP protocol. These protocols allow two routers to share a single virtual IP address, with only one active at a time. The result provides exactly the same port-by-port redundancy and collapsed backbone properties using
routers instead of switches.
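A minimal sketch of the election idea behind both protocols follows, with assumed priorities and addresses. Real HSRP and VRRP also involve hello timers, preemption rules, and virtual MAC address handling, all of which are omitted here:

    # Two routers share one virtual IP; the highest-priority live router
    # answers for it, and the other takes over only on failure.
    VIRTUAL_IP = "10.0.0.1"                       # assumed example address
    priorities = {"routerA": 110, "routerB": 100}  # assumed priorities

    def elect_master(failed: frozenset[str] = frozenset()) -> str:
        """Pick the highest-priority router that has not failed."""
        alive = {r: p for r, p in priorities.items() if r not in failed}
        return max(alive, key=alive.get)

    print(elect_master())                          # routerA owns 10.0.0.1
    print(elect_master(frozenset({"routerA"})))    # routerB takes over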
3.4.2 Distributed Backbone