capacity your network provides in practice, i.e., how much of the maximum is actually available. Traffic measurements will give you an idea of how the capacity is being used.
My goal in this section is not a definitive analysis of performance. Rather, I describe ways to collect some general numbers that can be used to see if you have a reasonable level of performance or if you
need to delve deeper. If you want to go beyond the quick-and-dirty approaches described here, you might consider some of the more advanced tools described in
Chapter 9. The tools mentioned here
should help you focus your efforts.
4.2.1 Performance Measurements
Several terms are used, sometimes inconsistently, to describe the capacity or performance of a link. Without getting too formal, let's review some of these terms to avoid potential confusion.
Two factors determine how long it takes to send a packet or frame across a single link. The amount of time it takes to put the signal onto the cable is known as the transmission time or transmission delay.
This will depend on the transmission rate or interface speed and the size of the frame. The amount of time it takes for the signal to travel across the cable is known as the propagation time or
propagation delay. Propagation time is determined by the type of media used and the distance involved. It often comes as a surprise that a signal transmitted at 100 Mbps will have the same
propagation delay as a signal transmitted at 10 Mbps. The first signal is being transmitted 10 times as fast, but, once it is on a cable, it doesn't propagate any faster. That is, the difference between 10 Mbps
and 100 Mbps is not the speed the bits travel, but the length of the bits.
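As a rough illustration, the following Python sketch computes both delays for a hypothetical 1,500-byte frame sent over 100 meters of copper. The frame size, cable length, and propagation speed are illustrative assumptions, not measured values.

# Rough delay calculation for a single link (illustrative values only).
FRAME_BITS = 1500 * 8          # assumed 1,500-byte frame
CABLE_METERS = 100             # assumed cable length
PROPAGATION_MPS = 2.0e8        # signal speed in copper, roughly 2/3 the speed of light

def transmission_delay(bits, rate_bps):
    """Time to put the frame onto the wire: frame size / interface speed."""
    return bits / rate_bps

def propagation_delay(meters, speed_mps=PROPAGATION_MPS):
    """Time for the signal to travel the cable: distance / propagation speed."""
    return meters / speed_mps

for rate in (10e6, 100e6):     # 10 Mbps versus 100 Mbps
    t = transmission_delay(FRAME_BITS, rate)
    p = propagation_delay(CABLE_METERS)
    print(f"{rate/1e6:.0f} Mbps: transmission {t*1e6:.1f} us, propagation {p*1e6:.2f} us")

Notice that moving from 10 Mbps to 100 Mbps cuts the transmission delay by a factor of 10 while leaving the propagation delay unchanged.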
Once we move to multihop paths, a third consideration enters the picture—the delay introduced from processing packets at intermediate devices such as routers and switches. This is usually called the
queuing delay since, for the most part, it arises from the time packets spend in queues within the device. The total delay in delivering a packet is the sum of these three delays. Transmission and
propagation delays are usually quite predictable and stable. Queuing delays, however, can introduce considerable variability.
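Continuing the sketch above, the end-to-end delay over a multihop path can be approximated as the sum of the per-hop transmission, propagation, and queuing delays. The per-hop figures below are purely hypothetical.

# Hypothetical per-hop delays in seconds: (transmission, propagation, queuing).
hops = [
    (1.2e-3, 0.5e-6, 2.0e-3),   # 10 Mbps LAN segment
    (0.12e-3, 50e-6, 0.5e-3),   # 100 Mbps link to the router
    (0.12e-3, 5e-3, 8.0e-3),    # long-haul link with a busy queue
]

total = sum(t + p + q for t, p, q in hops)
print(f"total one-way delay: {total*1e3:.1f} ms")

The queuing terms are the ones most likely to change from packet to packet.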
The term bandwidth is typically used to describe the capacity of a link. For our purposes, this is the transmission rate for the link.[2] If we can transmit onto a link at 10 Mbps, then we say we have a bandwidth of 10 Mbps.

[2] My apologies to any purist offended by my somewhat relaxed, pragmatic definition of bandwidth.
Throughput is a measure of the amount of data that can be sent over a link in a given amount of time. Throughput estimates, typically obtained through measurements based on the bulk transfer of data, are
usually expressed in bits per second or packets per second. Throughput is frequently used as an estimate of the bandwidth of a network, but bandwidth and throughput are really two different things.
Throughput measurement may be affected by considerable overhead that is not included in bandwidth measurements. Consequently, throughput is a more realistic estimator of the actual performance you
will see.
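As a rough sketch of how a throughput figure differs from a nominal bandwidth, the following computes throughput from a hypothetical bulk transfer; the payload size and elapsed time are assumptions, not measurements.

# Illustrative throughput estimate from a hypothetical bulk transfer.
BYTES_TRANSFERRED = 50 * 1024 * 1024   # assumed 50 MB moved by the application
ELAPSED_SECONDS = 48.0                 # assumed wall-clock time for the transfer

throughput_bps = BYTES_TRANSFERRED * 8 / ELAPSED_SECONDS
print(f"application-level throughput: {throughput_bps/1e6:.1f} Mbps")

# On a nominal 10 Mbps link, headers, acknowledgments, and retransmissions
# keep this measured figure below the advertised bandwidth.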
Throughput is generally an end-to-end measurement. When dealing with multihop paths, however, the bandwidths may vary from link to link. The bottleneck bandwidth is the bandwidth of the slowest link
on a path, i.e., the link with the lowest bandwidth. While introduced here, bottleneck analysis is discussed in greater detail in
Chapter 12.
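For example, the bottleneck bandwidth of a path is simply the minimum of the per-link bandwidths; the link speeds below are assumed for illustration.

# Hypothetical per-link bandwidths along a path, in bits per second.
link_bandwidths = [100e6, 10e6, 1.5e6, 100e6]   # e.g., LAN, LAN, T1, LAN

bottleneck = min(link_bandwidths)
print(f"bottleneck bandwidth: {bottleneck/1e6:.1f} Mbps")   # limited by the 1.5 Mbps link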
Additional metrics will sometimes be needed. The best choice is usually task dependent. If you are sending real-time audio packets over a long link, you may want to minimize both delay and variability
in the delay. If you are using FTP to do bulk transfers, you may be more concerned with the throughput. If you are evaluating the quality of your link to the Internet, you may want to look at
bottleneck bandwidth for the path. The development of reliable metrics is an active area of research.
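If delay variability matters, one quick way to summarize it is to look at the spread of round-trip times from a series of probes such as ping; the sample times below are made up for illustration.

# Hypothetical round-trip times (ms) from a series of probes, e.g., ping.
import statistics

rtts_ms = [21.3, 22.1, 20.9, 35.6, 21.7, 49.2, 22.0]

print(f"mean RTT:          {statistics.mean(rtts_ms):.1f} ms")
print(f"variability (stdev): {statistics.stdev(rtts_ms):.1f} ms")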
4.2.2 Bandwidth Measurements