AAL Structure

Figure 5-16 General AAL functions.

At the transmitter, the user data flowing into the ATM network through the AAL often arrive as variable-length units, which the SAR sublayer segments into sequences of 48-byte payloads. At the receiver, the SAR is again responsible for reassembling these 48-byte payloads, transmitted across the ATM network, into variable-length data units. Thus the AAL shields the user application from having to worry about the inner working details of the ATM network. The convergence sublayer (CS) interacts with the higher-layer protocols or user application through the service access point (SAP). It sets up the data stream based on the service requirements of the application and sends it on to the SAR for segmentation. As illustrated in Figure 5-16, variable-length higher-layer Protocol Data Units (PDUs) enter the AAL, i.e., the ATM protocol stack, via the service access point (SAP). There each PDU is converted to an AAL service data unit (SDU) by adding application-specific headers and trailers. The SDU is then passed to the SAR, where it is broken up into 48-byte SAR-PDUs; additional headers and trailers are placed on these fragments. The data are then passed on to the ATM layer through the ATM-SAP, and cells are composed by adding 5-byte cell headers.

Copyright © CRC Press LLC, by Abhijit S. Pandya; Ercan Sen. CRC Press LLC. ISBN: 0849331390. Pub Date: 11/01/98.
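The segmentation and reassembly described above can be sketched in a few lines of Python. This is an illustrative model, not the actual AAL header formats: the 5-byte cell header is left as a placeholder, and the real SAR sublayer carries sequence numbers and length fields that are omitted here.

```python
CELL_PAYLOAD = 48   # bytes of user data per ATM cell
HEADER_LEN = 5      # bytes of cell header

def segment(pdu: bytes) -> list[bytes]:
    """Split a variable-length PDU into 53-byte cells (48-byte payloads,
    zero-padded at the end, each prefixed with a placeholder 5-byte header)."""
    cells = []
    for i in range(0, len(pdu), CELL_PAYLOAD):
        chunk = pdu[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
        header = b"\x00" * HEADER_LEN   # placeholder, not a real ATM header
        cells.append(header + chunk)
    return cells

def reassemble(cells: list[bytes], pdu_len: int) -> bytes:
    """Strip the headers, concatenate the payloads, and drop the padding."""
    payload = b"".join(c[HEADER_LEN:] for c in cells)
    return payload[:pdu_len]

pdu = bytes(range(100))                # a 100-byte higher-layer PDU
cells = segment(pdu)
assert all(len(c) == 53 for c in cells)
assert len(cells) == 3                 # ceil(100 / 48) = 3 cells
assert reassemble(cells, len(pdu)) == pdu
```

A 100-byte PDU thus occupies three cells, the last one padded; the receiver recovers the original PDU only because it knows (in the real protocol, from SAR/CS fields) how many payload bytes are genuine.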

Chapter 6 Bandwidth Allocation in ATM Networks

Flexible and reliable bandwidth allocation is one of the key features of ATM. In terms of bandwidth allocation capability, ATM represents a more rational approach than either end of the spectrum. At one end of the spectrum is the very rigid synchronous SONET/SDH technology; at the other end is shared-medium Ethernet technology, which allows bandwidth usage with no restriction at all.

SONET/SDH was initially designed to multiplex voice circuits. The smallest unit of bandwidth allocation possible is the T1 rate (1.5 Mbps) for SONET and the E1 rate (2 Mbps) for SDH. Without any silence suppression, voice samples flow through the SONET/SDH network at a constant rate, i.e., 64 Kbps per voice call. However, when it comes to carrying bursty data traffic, a significant amount of the bandwidth allocated to a particular connection is wasted, because the bandwidth has to be allocated according to the peak rate. Hence, if the peak rate for a data connection is 3 Mbps, the next available payload greater than 3 Mbps is the DS3 rate (45 Mbps) for SONET. In this case we would be wasting a minimum of 42 Mbps of bandwidth capacity.

Ethernet technology allows a maximum level of sharing of the available bandwidth on a physical medium. Hence, on a typical 10 Mbps Ethernet bus, each user attached to the bus is capable of using the total 10 Mbps bandwidth. However, due to the lack of coordination, conflicts arise when multiple users attempt to access the bus simultaneously. Therefore, during a heavy traffic period, the total available bandwidth is hardly accessible to any of the users.

ATM technology, on the other hand, logically partitions the physical medium bandwidth and creates sub-rates at a very fine granularity. Each user is assigned a logical bandwidth pipe according to its needs. Through this logical partitioning, a user is isolated from the other users. Hence, ATM can guarantee service at the negotiated rate to every user.
We will discuss in greater detail how ATM manages bandwidth allocation, the types of bandwidth allocation strategies, and the bandwidth allocation classes later in this chapter.

I. Variable Bandwidth Allocation

Variable bandwidth allocation is one of the cornerstones of ATM technology. Through the use of the Virtual Path (VP) and Virtual Channel (VC) concepts, ATM is capable of logically partitioning the available bandwidth on a physical medium such as fiber-optic, copper, and coaxial cables, and on ATM switches. Each of these VPs and VCs represents a logical pipe which carries user traffic. Typically, during path setup, each user is allocated a VP or VC based on the bandwidth negotiated as part of the traffic contract. ATM traffic management enforces the terms of these traffic contracts; for example, it ensures that users do not exceed the limits of their bandwidth allocations. The VP and VC mechanisms allow allocation of bandwidth to each user at a very fine granularity, so that the available bandwidth of the physical medium is utilized very efficiently. As mentioned earlier, one of the main drawbacks of synchronous SONET/SDH technology is that it allocates bandwidth in fixed quantities called Virtual Containers. Thus, if the user request does not exactly match the bandwidth of the allocated Virtual Container, the unused portion of the allocated bandwidth is wasted. Fortunately, by using ATM on top of the SONET/SDH transport, efficient usage of the physical bandwidth capacity can be achieved.

Figure 6-1 Flexible bandwidth allocation via the VP/VC mechanism.

II. Virtual Path and Virtual Channel Concepts

The VP/VC concept of ATM allows the creation of logical pipes on a single physical transport medium. Although users share the same physical medium, the VP/VC concept limits the amount of traffic each user can generate according to the traffic contract negotiated between the user and the ATM network. ATM cells from users who exceed their traffic contract are either discarded immediately at the control point or marked as candidates for discard by downstream control points in case of network congestion. Unlike shared-medium Ethernet technology, ATM provides a degree of control, via the VP/VC mechanism, over the amount of traffic each user can generate on the shared medium. Hence, ATM is capable of providing a quality of service (QoS) guarantee, a very desirable feature which Ethernet technology cannot provide in its present form. This is one of the most significant distinctions between ATM and LAN technologies such as Ethernet. The distinction becomes more apparent when dealing with delay-sensitive real-time voice/video traffic. However, today, there are attempts being made to