94 CONGESTION CONTROL IN ATM NETWORKS
as the decision to accept or reject a new connection is based purely on whether its peak bit rate is less than the available bandwidth on the link. Let us consider, for example,
a non-blocking switch with output buffering (see Figure 3.10), and suppose that a new connection with a peak bit rate of 1 Mbps has to be established through output link 1. Then,
the new connection is accepted if the link’s available capacity is greater than or equal to 1 Mbps.
In the case where nonstatistical allocation is used for all of the connections routed through a link, the sum of the peak bit rates of all of the existing connections is less than
the link’s capacity. Peak bit rate allocation can lead to a grossly underutilized link, unless the connections transmit continuously at peak bit rate.
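This admission rule is simple enough to sketch directly. The function below is an illustrative sketch of nonstatistical (peak bit rate) allocation; the capacities and rates used are made-up figures, not values from the text:

```python
def admit_peak_rate(link_capacity_bps, allocated_peak_bps, new_peak_bps):
    """Nonstatistical CAC: accept the new connection only if its peak
    bit rate fits within the link's unallocated capacity."""
    available = link_capacity_bps - allocated_peak_bps
    return new_peak_bps <= available

# A 155 Mbps link with 100 Mbps already allocated at peak rate
# easily accepts a new 1 Mbps connection; a nearly full link does not.
admit_peak_rate(155_000_000, 100_000_000, 1_000_000)   # True
admit_peak_rate(155_000_000, 154_500_000, 1_000_000)   # False
```

Note that the sum of the allocated peak rates can never exceed the link capacity under this rule, which is exactly what makes it both safe and potentially wasteful.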
Statistical Bandwidth Allocation
In statistical bandwidth allocation, the allocated bandwidth on the output link is less
than the source peak bit rate. In the case where statistical allocation is used for all of the connections on the link, the sum of the peak bit rates of all of the connections can
exceed the link’s capacity. Statistical allocation makes economic sense when dealing with bursty sources, but it is difficult to implement effectively. This is due to the fact that it
is not always possible to characterize accurately the traffic generated by a source and how it is modified deep in an ATM network. For instance, let us assume that a source
has a maximum burst size of 100 cells. As the cells that belong to the same burst travel through the network, they get buffered in each switch. Due to multiplexing with cells
from other connections and scheduling priorities, the maximum burst of 100 cells might become much larger deep in the network. Other traffic descriptors, such as the PCR and
the SCR, can be similarly modified deep in the network. For instance, let us consider a source with a peak bit rate of 128 Kbps. Due to multiplexing and scheduling priorities,
it is possible that several cells from this source can get batched together in the buffer of an output port of a switch. Let us assume that this output port has a speed of, say
1.544 Mbps. Then, these cells will be transmitted back-to-back at 1.544 Mbps, which will cause the peak bit rate of the source to increase temporarily.
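This batching effect can be illustrated numerically. The sketch below (an illustrative assumption, using the 424-bit ATM cell) shows five cells from a 128 Kbps source being queued behind cross traffic at a 1.544 Mbps port and then departing back-to-back:

```python
CELL_BITS = 424  # 53-byte ATM cell

def departure_times(arrivals_s, link_bps, busy_until_s=0.0):
    """FIFO transmission completion times of cells over a constant-rate link."""
    tx = CELL_BITS / link_bps
    t, out = busy_until_s, []
    for a in arrivals_s:
        t = max(t, a) + tx
        out.append(t)
    return out

# Source at 128 Kbps emits one cell every 424/128000 s (about 3.3 ms).
gap_in = CELL_BITS / 128_000
arrivals = [i * gap_in for i in range(5)]

# Suppose cross traffic keeps the 1.544 Mbps port busy until the last
# cell has arrived, so all five cells are queued and sent back-to-back.
dep = departure_times(arrivals, 1_544_000, busy_until_s=arrivals[-1])
gap_out = dep[1] - dep[0]        # one cell time at 1.544 Mbps
peak_out = CELL_BITS / gap_out   # instantaneous rate seen downstream
```

The instantaneous rate `peak_out` equals the link speed of 1.544 Mbps, twelve times the source's declared peak rate, which is why traffic descriptors cannot be trusted deep in the network.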
Another difficulty in designing a CAC algorithm for statistical allocation is due to the fact that an SVC has to be set up in real-time. Therefore, the CAC algorithm cannot
be CPU intensive. This problem might not be as important when setting up PVCs. The problem of whether to accept or reject a new connection can be formulated as a queueing
problem. For instance, let us consider again our non-blocking switch with output buffering. The CAC algorithm has to be applied to each output port. If we isolate an output port
and its buffer from the switch, we will obtain the queueing model shown in Figure 4.9.
Figure 4.9 An ATM multiplexer: the existing connections and the new connection feed the buffer of an output port.
This type of queueing structure is known as the ATM multiplexer. It represents a number of ATM sources feeding a finite-capacity queue, which is served by a server, i.e., the
output port. The service time is constant and is equal to the time it takes to transmit an ATM cell.
Let us assume that the QoS, expressed as a cell loss rate, of the existing connections is satisfied. The question that arises is whether the cell loss rate will still be maintained if
the new connection is accepted. This can be answered by solving the ATM multiplexer queueing model with the existing connections and the new connection. However, the
solution to this problem is CPU intensive and it cannot be done in real-time. In view of this, a variety of different CAC algorithms have been proposed which do not require the
solution of such a queueing model.
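While an exact solution of this queueing model is too expensive for real-time use, a crude slotted simulation conveys the idea. Everything below — the on-off source model, the geometric burst lengths, and all parameter values — is an illustrative assumption, not the model analyzed in the text:

```python
import random

def cell_loss_rate(n_sources, p_on, burst_len, buf_size, slots, seed=0):
    """Slotted simulation of an ATM multiplexer: n on-off sources feed a
    finite buffer served at one cell per slot (constant service time).
    Returns the fraction of arriving cells that are dropped."""
    rng = random.Random(seed)
    on = [False] * n_sources
    q = arrived = lost = 0
    # Transition probabilities chosen so a source is on a fraction p_on
    # of the time, with mean burst length burst_len (geometric).
    p_off_to_on = p_on / ((1 - p_on) * burst_len)
    for _ in range(slots):
        for i in range(n_sources):
            if on[i]:
                if rng.random() < 1 / burst_len:
                    on[i] = False
            elif rng.random() < p_off_to_on:
                on[i] = True
            if on[i]:                      # an active source emits one cell
                arrived += 1
                if q < buf_size:
                    q += 1
                else:
                    lost += 1
        if q:
            q -= 1                         # transmit one cell per slot
    return lost / arrived if arrived else 0.0
```

A CAC algorithm could in principle compare such an estimate against the requested cell loss rate, but even this toy simulation needs many thousands of slots to resolve loss rates of practical interest (10^-8 and below), which is precisely why approximate schemes such as the equivalent bandwidth are used instead.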
Most of the CAC algorithms that have been proposed are based solely on the cell loss rate QoS parameter. That is, the decision to accept or reject a new connection is
based on whether the switch can provide the new connection with the requested cell loss rate without affecting the cell loss rate of the existing connections. No other QoS
parameters, such as peak-to-peak cell delay variation and the max CTD, are considered by these algorithms. A very popular example of this type of algorithm is the equivalent bandwidth scheme, described below.
CAC algorithms based on the cell transfer delay have also been proposed. In these
algorithms, the decision to accept or reject a new connection is based on a calculated absolute upper bound of the end-to-end delay of a cell. These algorithms are closely
associated with specific scheduling mechanisms, such as static priorities, earliest deadline first, and weighted fair queueing. Given that the same scheduling algorithm runs on all
of the switches in the path of a connection, it is possible to construct an upper bound of the end-to-end delay. If this is less than the requested end-to-end delay, then the new
connection is accepted.
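For the weighted fair queueing case, one classical bound of this kind is the Parekh–Gallager bound for a leaky-bucket-constrained source guaranteed a rate g at each of K hops. The sketch below applies it to fixed-size ATM cells; the burst size, guaranteed rate, and link rates are illustrative assumptions, not figures from the text:

```python
CELL_BITS = 424  # 53-byte ATM cell

def wfq_delay_bound_s(sigma_cells, g_bps, link_rates_bps):
    """Parekh-Gallager worst-case end-to-end delay (seconds) for a
    (sigma, rho) leaky-bucket source given a guaranteed WFQ rate g
    at each of the K hops (g must not exceed any link rate)."""
    k = len(link_rates_bps)
    return (sigma_cells * CELL_BITS / g_bps        # burst drained at rate g
            + (k - 1) * CELL_BITS / g_bps          # per-hop packetization
            + sum(CELL_BITS / c for c in link_rates_bps))  # transmission times

def admit_by_delay(max_ctd_s, sigma_cells, g_bps, link_rates_bps):
    """Accept the connection iff the computed upper bound on the
    end-to-end delay meets the requested max CTD."""
    return wfq_delay_bound_s(sigma_cells, g_bps, link_rates_bps) <= max_ctd_s

# A 100-cell burst, a 1 Mbps guaranteed rate, and three 155 Mbps hops
# give a bound of roughly 43 ms; a 50 ms request is admitted, 10 ms is not.
bound = wfq_delay_bound_s(100, 1_000_000, [155_000_000] * 3)
```

Because the bound is a closed-form expression, the admission test is a few arithmetic operations per request, which is what makes delay-based CAC feasible for real-time SVC setup.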
Below, we examine the equivalent bandwidth scheme, and then we present the ATM block transfer (ABT) scheme used for bursty sources. In this scheme, bandwidth is allocated on demand and only for the duration of a burst. Finally, we present a scheme for controlling the amount of traffic in an ATM network based on virtual path connections (VPC).