ATM (Asynchronous Transfer Mode)


ATM is a new technology with its own unique set of problems. The standards for ATM are set by a political organization, the ATM Forum, which is comprised of various vendors who debate over their own, often proprietary, interests. The standards for ATM are still being developed. If an ATM system is to be implemented, new equipment must be purchased and installed. (Berc online)

ATM adds overhead, consuming 20-30% of the bandwidth to send its cells. Additional overhead is needed for higher-level packets, such as Ethernet, IP, and Token Ring, because the AAL layer encapsulates these PDUs before they are sent over the network. Carrying IP packets over ATM is complex and non-standard, resulting in various proposals from organizations such as the ATM Forum and the IETF. Overhead is also added by training and finding an experienced technical staff, and “It requires a separate management platform and consumes additional administrative resources” ( online).

The ATM cell is small and fixed in size. Roughly 10% of the bandwidth is consumed immediately by the 5 bytes of each cell used for addressing. The 48 bytes used for the payload do not fit the binary (power-of-two) boundaries of computer systems. The cell size does not allow flexibility for data traffic over 32 bytes, resulting in more overhead added to the ATM network. (Steinberg online)
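The header overhead can be confirmed with a quick calculation. This is only a sketch of the arithmetic; the 53-byte cell with a 5-byte header and 48-byte payload is the standard ATM cell format.

```python
# ATM cell: 5-byte header + 48-byte payload = 53 bytes total.
CELL_SIZE = 53
HEADER_SIZE = 5
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE  # 48 bytes

# Fraction of every cell consumed by the header, before any AAL overhead.
header_overhead = HEADER_SIZE / CELL_SIZE
print(f"Header overhead: {header_overhead:.1%}")  # about 9.4%

def cells_needed(payload_bytes):
    """Number of 48-byte cells needed to carry a payload.
    A payload that is not a multiple of 48 bytes wastes part of the last cell."""
    return -(-payload_bytes // PAYLOAD_SIZE)  # ceiling division

# e.g. a 64-byte payload needs two cells, wasting 32 payload bytes.
print(cells_needed(64))  # 2
```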


An end-user study conducted in 1997 by the ATM Forum and Sage University concluded that 70% of those surveyed were upgrading to an ATM LAN from their current installation. 60% of these ATM LAN users were incorporating existing Ethernet technologies into their networks. The reasons these end users cited for upgrading to an ATM LAN were the scalability of ATM, increased bandwidth, high speeds, QoS, multi-protocol support, and large-scale data needs. ( online) Local Area Network Emulation is needed to establish a hybrid system of ATM and legacy LAN technologies.

Local Area Network Emulation (LANE) is a standard developed by the ATM Forum that fulfills several goals. First, a traditional LAN can be established among workstations and PCs using ATM. Second, LANE allows the established LANs of the enterprise, i.e., Ethernet and Token Ring, to inter-operate with the ATM end-stations. Third, LANE allows software applications that use high-level protocols to send packets over the ATM network as though they were using a connectionless LAN. (Pandya and Sen 177-178) The end result is LANE emulating Token Ring or Ethernet MAC frames at high speed. (Downs et al. 291)

A benefit of LANE is that separate ATM clients can be connected to the same emulated LAN even when located in separate buildings. This is unlike standard LANs, where clients need to be located within the same office or building. LANE also allows a PC or workstation to belong to more than one emulated LAN (ELAN). (Pandya and Sen 178)

LANE is compliant with Ethernet Spanning Tree and Source Route Bridging, allowing Ethernet or Token Ring LANs to interconnect with each other or with a LAN Emulation Client through an ATM network. (Pandya and Sen 178) LANE will not allow Ethernet and Token Ring to interconnect within the boundaries of the same ELAN. However, a router can be used to establish connections between separate Ethernet and Token Ring ELANs. (Pandya and Sen 181)

The LANE protocol is supplied through the ATM Network Interface Card (NIC) and the ATM switches and routers that interconnect to the ATM network. The ATM NIC is located in the host PC, workstation, server, switch, bridge, router, etc., allowing the host to automatically join an existing emulated LAN. The higher-level protocols within the host of the ATM NIC behave as they would on a legacy LAN. (Downes et al. 293) An emulated LAN can consist of 500 to 2,000 stations. The slowest connection determines the amount of broadcast traffic the emulated LAN can carry. (McDysan and Spohn 524)

Interfaces and Layers

The OSI layers from the Logical Link Control (LLC) up remain unchanged in an ATM workstation. Applications, LAN drivers, and higher-level protocols such as IP and IPX are unaware of the lower-level ATM protocols being used in the network. The difference between the ATM and Ethernet layers is at the data link layer: the MAC layer is replaced by several ATM components. (McDysan and Spohn 524)

The LANE software sits above the other components of the data link layer. LANE is provided either by the operating system or by the ATM Network Interface Card. The LANE software connects to the ATM NIC through driver software and affords the same LLC interface as Ethernet. (McDysan and Spohn 525) The LAN Emulation UNI, covering data transfer, registration, address resolution, and initialization, is a component of the LANE software. (Pandya and Sen 182)

LANE is layered above AAL5, which contains the Service Specific Connection Oriented Protocol (SSCOP). As with other ATM functions, connections need to be established before any data is sent. Switched Virtual Circuits (SVCs) are usually used in an ELAN to establish connections between clients and servers. The ATM layer sits above the physical layer. (McDysan and Spohn 526-527)

The layers present in the ATM switch, which interfaces the clients with the servers, are the following: UNI 3.1, SSCOP, AAL5, ATM, and the physical layer. The switch then interfaces with the ATM LAN bridge/switch. This device is composed of two sets of layers: first, the LANE layers of LANE, UNI 3.1, SSCOP, AAL5, ATM, and physical; and second, the legacy LAN layers of MAC and physical. This device terminates the ATM protocols and converts the cells into frames for transmission to the MAC layer of the legacy LAN. (McDysan and Spohn 526)

The LAN MAC frames, either Ethernet or Token Ring, remain largely the same. LANE adds a 2-byte header carrying the LEC ID, used along with the VCI to address the packets, and keeps all the data that originated from the upper-layer protocols. LANE in turn removes this header at the ATM LAN bridge before forwarding the frame into the legacy LAN. The packets are not delivered as the 53-byte cells of ATM because the legacy LAN nodes still need to receive Ethernet or Token Ring frames. ( online)
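The encapsulation described above can be sketched as a pair of helper functions. This is a simplified illustration, not a full LANE frame parser: it shows only the 2-byte LEC ID being prepended at the sending client and stripped at the bridge, with the MAC frame itself left untouched.

```python
import struct

def lane_encapsulate(lec_id, mac_frame):
    """Prepend the 2-byte LEC ID header to an unmodified MAC frame."""
    return struct.pack(">H", lec_id) + mac_frame

def lane_decapsulate(lane_frame):
    """Strip the 2-byte header (as the ATM LAN bridge does) and
    recover the original Ethernet/Token Ring frame."""
    (lec_id,) = struct.unpack(">H", lane_frame[:2])
    return lec_id, lane_frame[2:]

# A toy MAC frame: destination address bytes followed by payload.
frame = b"\x00\x11\x22\x33\x44\x55" + b"payload..."
wrapped = lane_encapsulate(7, frame)
lec_id, original = lane_decapsulate(wrapped)
assert lec_id == 7 and original == frame  # frame survives the round trip
```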

Flow control is determined by which version of LANE is installed in the system. With LANE 1.0, only unspecified bit rate (UBR) is available, on a "first-come, first-served basis". With LANE 2.0, the system can also use available bit rate (ABR), which keeps connections occupied to reserve bandwidth for traffic; constant bit rate (CBR), in which a set amount of bandwidth is appropriated; and variable bit rate (VBR), which "locks in an average rate of bandwidth" to permit bursty traffic and compressed voice. (Zimmerman online)


LANE uses a client/server structure to perform the necessary functions associated with a LAN. The components of LANE are the following: the LAN Emulation Client (LEC), and the server structure, which has three functions: the LAN Emulation Server (LES), the LAN Emulation Configuration Server (LECS), and the Broadcast and Unknown Server (BUS). (Pandya and Sen 178) Vendors can offer switches or a dedicated server containing the above-mentioned server components.

The LEC is located at the workstation, end-station, switch, etc. that contains the LANE software responsible for establishing the UNI. The client provides the ATM address and corresponding MAC address. The LEC can be established through the ATM NIC or on the ports of routers, etc. (Onvural 391) There are two types of LECs: proxy and non-proxy. A proxy LEC contains MAC addresses of clients other than itself and acts as a bridge. A non-proxy LEC has its own MAC address and acts as its own host. The LEC identifies which type it is when it joins the emulated LAN. ( online)

The LECS provides the address of the LES to the LEC and assigns clients to an ELAN. The LECS is located by the LEC through ILMI, a Configuration VCC, a Configuration Direct SVC, or a LECS PVC. Virtual LANs can also be established through the LECS to provide flexibility in establishing connections with clients that roam throughout the network. (McDysan and Spohn 526)

The LES maintains and provides the table of all LECs on that particular ELAN. Each LEC registers its MAC/ATM addresses with the server. The LES performs address resolution (ARP) for the clients and assigns each LEC to a particular BUS. The BUS sends out multicast and broadcast traffic to the various clients, and also floods traffic with unfamiliar destination addresses onto the ELAN. If the ELAN is comprised of several BUSs, each LEC sends its traffic to only one BUS. The MAC address of the BUS is configured beforehand into the LES. (McDysan and Spohn 528)


LANE uses a four-step process to establish connections between the various components of the ELAN: first, Initialization and Configuration; second, Joining and Registration; third, Data Transfer; and fourth, Spanning Tree Protocol. (McDysan and Spohn 529) There are two different types of Virtual Channel Connections: control and data. The control or data connections can be Permanent Virtual Circuits or Switched Virtual Circuits. (Pandya and Sen 186-187)

The control VCCs direct traffic between the LEC, the LECS, and the LES. The first type of control connection is the Configuration Direct VCC, a bi-directional point-to-point path between the LECS and the LEC. This connection provides the location of the LES to the LEC and can be used later to obtain the locations of the other clients in the ELAN. The Configuration Direct VCC also provides the client with the largest frame size, the ELAN type (Ethernet or Token Ring), the MAC address of the BUS, and which BUS the LEC will have access to. (McDysan and Spohn 529)

The second type of control connection is the Control Direct VCC, a bi-directional point-to-point path between the LEC and the LES. (Pandya and Sen 186) The LES gives the LEC an identifier. (McDysan and Spohn 530) One use of the LEC ID is to filter out the LEC's own packets, which it has sent to the BUS and which the BUS has sent back to the client as multicast or broadcast traffic. (McDysan and Spohn 533) The LEC provides the LES with its MAC address and corresponding ATM address. The LEC can also provide the server with other MAC addresses it can reach, as learned through STP. (McDysan and Spohn 530) The LEC and the LES need to maintain this connection as long as the client is associated with the ELAN. (Pandya and Sen 186)

The third type of control connection is the Control Distribute VCC, a one-way point-to-point or point-to-multipoint path that must be maintained throughout the LEC's participation in the ELAN. (Pandya and Sen 187) This connection supports the LAN Emulation Address Resolution Protocol (LE-ARP). The LES replies to requests from the LEC for a particular MAC address. If the LES does not have the MAC address, it sends out a request for a response over point-to-multipoint connections to other clients on the ELAN. (McDysan and Spohn 530)

Once the LES has the corresponding MAC address, it sends it to the LEC using the Control Distribute VCC. One option for the LES is to send the LE-ARP response to all LECs, which store it in memory; this can notably decrease the traffic load on the LES. The client also sends an LE-ARP request for the BUS MAC/ATM address to the LES over this connection. (McDysan and Spohn 530)

The transfer of packets also uses three different types of connections. The Data Direct VCC is a bi-directional, point-to-point communication path between two LECs for transferring data. If the client has not previously sent data to the receiving LEC, it requests an LE-ARP from the LES. The sending LEC has the option of flooding the packets to the BUS for transmission while it awaits a response from the LES. The flooding decreases the chance of packets being lost. (McDysan and Spohn 530)

The LEC builds its own table of MAC/ATM addresses by processing LE-ARP responses. The LEC consults this cached table first when establishing connections. If the table does not provide the result the LEC is seeking, it then sends out an LE-ARP request. The LEC uses a time-out procedure, clearing from its tables any MAC/ATM addresses and Data Direct VCCs it has not used within a specific amount of time. (McDysan and Spohn 533)
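The caching and time-out behavior described above can be sketched as a small table keyed by MAC address. This is a hypothetical illustration: the class name, the 300-second aging interval, and the injectable clock are assumptions for demonstration, not part of the LANE specification.

```python
import time

class LeArpCache:
    """Sketch of a LEC's MAC-to-ATM address cache with aging."""

    def __init__(self, timeout_s=300.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock
        self._table = {}  # MAC address -> (ATM address, last-used time)

    def learn(self, mac, atm):
        """Store a MAC/ATM pair taken from an LE-ARP response."""
        self._table[mac] = (atm, self.clock())

    def lookup(self, mac):
        """Return the cached ATM address, or None (the caller would
        then send an LE-ARP request to the LES)."""
        entry = self._table.get(mac)
        if entry is None:
            return None
        atm, last_used = entry
        if self.clock() - last_used > self.timeout_s:
            del self._table[mac]                 # aged out: clear stale entry
            return None
        self._table[mac] = (atm, self.clock())   # refresh on use
        return atm

# Usage with a fake clock to show the time-out:
t = [0.0]
cache = LeArpCache(timeout_s=300.0, clock=lambda: t[0])
cache.learn("00:11:22:33:44:55", "atm-addr-1")
assert cache.lookup("00:11:22:33:44:55") == "atm-addr-1"
t[0] = 301.0
assert cache.lookup("00:11:22:33:44:55") is None  # entry aged out
```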

The LEC receives the LE-ARP response from the LES and establishes the Data Direct VCC with the intended client. The sending LEC transmits a ready-indication frame to the receiving LEC, then proceeds to send the data directly to its destination, bypassing the BUS's Multicast Forward VCC. If a Data Direct VCC already exists between two clients, they continue to use that connection to send data. If needed, a parallel Data Direct VCC can be established between two LECs for QoS purposes. (McDysan and Spohn 530, 532)

The LANE Flush process may be used before a LEC sets up a Data Direct VCC to transfer data to another LEC. The Flush procedure guarantees that previously sent packets arrive in order from the BUS at the receiving LEC. A Flush cell is sent to the BUS, which multicasts it through the ELAN. The receiving LEC sends the Flush ACK to the LES, which in turn sends the cell back to the sending client through the Control Direct VCC. The sending LEC holds further packets until the Flush ACK is received. Once the Flush ACK packet is received, the LEC sets up a Data Direct VCC to send data directly to the receiving client. (McDysan and Spohn 532)

The Multicast Send VCC is a two-way point-to-point communication path between the LEC and the BUS, established during the joining and registration phase. The LEC obtains the BUS's address through an LE-ARP to the LES. The LEC uses this connection to send its multicast or broadcast traffic to the BUS, which in turn sends the data onto the ELAN. The BUS can also send traffic to the LEC through this connection or respond to a LEC's request. The LEC needs to keep this connection while participating in the ELAN. The LEC can use the Multicast Send VCC to transmit the first unidirectional data to an unknown address through the BUS. (Pandya and Sen 187)

The Multicast Forward VCC is a unidirectional point-to-point or point-to-multipoint connection from the BUS to the LECs. This connection allows the BUS to transmit data to all clients in its address mapping of the ELAN. The Multicast Forward VCC is first used during the joining and registration phase of establishing connections in the ELAN. The BUS has the option of sending data to a LEC through either the Multicast Forward VCC or the Multicast Send VCC, without duplicating the data to the LEC. The LEC needs to accept and keep this connection until it leaves the ELAN. (Pandya and Sen 187)

LANE uses the Spanning Tree Protocol, as in other LANs, to help the LECs determine any alterations or adjustments that have been made to the ELAN. Loops can be detected through STP and disabled by the ATM switch. Looping problems can occur when Ethernet or Token Ring bridges interconnect within an ELAN. The looping problems can be resolved as the LEC updates its address table through the time-out procedure discussed earlier. (McDysan and Spohn 533)

A LEC implementing STP can also detect changes in the network and update the ELAN. The LEC sends a BPDU configuration update message to the LES along with an LE Topology Request. Once the LECs receive the LE Topology Request message, they discard the outdated ARP information. It is important that each LEC updates its addressing tables; if not, a LEC can appear with more than one MAC address, causing connection problems within the ELAN. (McDysan and Spohn 533)

Summary on LANE

As this writer was researching LANE, there were mixed opinions among professionals as to whether ATM was a viable option for a LAN. It is a very complex and expensive system to implement. However, as reported by Telecommunications Online, ATM LANs were being implemented by "plenty of users". The article continues: "A cross-section of fairly conservative outfits are putting in large scale LANE networks. These installations have several hundred to a few thousand PCs attached to LAN switches with LAN emulation client software that performs the magic of converting the legacy LAN traffic to and from ATM." (Jeffries online)

ATM for the LAN is regularly compared to Switched Gigabit Ethernet. NetReference, a consulting firm, advises enterprises against establishing any new ATM LANs, favoring Switched Fast or Gigabit Ethernet instead. NetReference disfavors ATM for the LAN due to its high cost and the complexity of the LANE protocols. NetReference also believes that ATM's QoS is "overkill" for a LAN, and that Ethernet can be used instead of ATM for a campus backbone. (Passmore and Lazar online)

The enterprise needs to base its decision on the paradigm that fits its needs. The Gigabit Ethernet paradigm is one of "buy bandwidth instead of bandwidth management because bandwidth is cheap enough to oversupply your network." The ATM paradigm is one of "bandwidth management is important; capacity can't be taken for granted, so you'll need a network architecture that can manage capacity for you". ( online)

ATM is the network solution to implement over Gigabit Ethernet if the users need more bandwidth than Gigabit Ethernet supplies and want the QoS of ATM. If the enterprise wants a network with these attributes and is not concerned about cost, then ATM is the solution to choose.


Organizations around the world want to reduce rising communications costs. The consolidation of separate voice and data networks offers an opportunity for significant savings. Accordingly, the challenge of integrating voice and data networks is becoming a rising priority for many network managers.

Organizations are pursuing solutions that will enable them to take advantage of excess capacity on broadband networks for voice and data transmission, as well as to utilize the Internet and company intranets as alternatives to costlier media. A voice over packet application meets the challenges of combining legacy voice networks and packet networks by allowing both voice and signaling information to be transported over the packet network. This paper references a general class of packet networks, since the modular software objects allow networks such as ATM, Frame Relay, and Internet/Intranet (IP) to transport voice. The legacy telephony terminals addressed range from standard two-wire Plain Old Telephone Service (POTS) and fax terminals to digital and analog PBX interfaces. Packet networks supported are ATM, Frame Relay, and Internet applications. A wide variety of applications are enabled by the transmission of voice over packet networks.

Basic Bandwidth

Voice and data place very different demands on the networks that carry them. Bursty data traffic needs widely varying amounts of bandwidth and is typically very tolerant of network delay--thanks to the windowing mechanisms of commonly used transmission protocols like TCP.

Voice traffic, on the other hand, needs a small amount of continuous bandwidth and is very sensitive to delay, which is usually manifested as degraded audio quality.

Echo, for example, becomes an issue if one-way delay is more than about 25 milliseconds (which means echo suppression has to be added to the circuit). Delays of more than 75 milliseconds are noticeable to participants in a conversation; at 200 milliseconds, perceived quality is affected. Here's the problem:

When the speakers stop talking, each hears silence on the line and mistakenly takes this as a signal to continue. What usually happens is that both participants start talking at once, which is irritating and distracting, to say the least.

The circuit-switching networks used for private and public voice services exhibit very low end-to-end delay, typically a few tens of milliseconds or less. Packet-based networks like the Internet, however, are a long way from being able to meet the end-to-end delays demanded by business-quality voice. One of the reasons for this is that the Internet treats all traffic equally. Delay-sensitive voice packets wait their turn in the queue at congested ports along with bursty data.

ATM, in contrast, makes sure that voice and data get the treatment they need by enabling an end-station to request a specific quality of service (QOS) when it sets up a connection. QOS specifies how much bandwidth the end-station application wants and what end-to-end delay it can tolerate. If the ATM network accepts the QOS request, it guarantees to deliver cells on this connection within the specified upper limit for end-to-end delay.

Packetization Problems

ATM also solves a problem that hamstrings conventional networks: packetization delay, the time it takes to fill a cell or packet with voice samples at the transmitting station.

IP networks need to use fairly large packets to achieve reasonable bandwidth efficiency. Trouble is, the resulting packetization delay verges on the unacceptable in terms of audio quality. For example, a 512-byte packet would incur a packetization delay of 64 milliseconds. (This figure is derived by multiplying the 512-byte payload by 8 bits (4,096 bits) and dividing the product by 64,000 bits per second.) For a 1,518-byte packet, the delay would be a whopping 190 milliseconds. Further, those figures assume there is no compression. If voice is compressed by, say, a 4:1 ratio there will be a fourfold increase in packetization delay. Now take a look at ATM.

The technology carries information in cells with a fixed payload of 48 bytes. Do the math and the packetization delay comes in at 6 milliseconds, low enough to ensure toll-quality voice.
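The delay arithmetic above can be reproduced with a short calculation. This is only a check of the figures, under the standard assumption of uncompressed 64-kbit/s PCM voice; the function name is illustrative.

```python
VOICE_RATE_BPS = 64_000  # 8,000 samples/s x 8 bits: standard PCM voice

def packetization_delay_ms(payload_bytes, compression_ratio=1):
    """Time to fill a payload with voice samples, in milliseconds.
    Compression slows the fill rate, so the delay grows proportionally."""
    return payload_bytes * 8 * 1000 / (VOICE_RATE_BPS / compression_ratio)

print(packetization_delay_ms(512))    # 64.0 ms for a 512-byte IP packet
print(packetization_delay_ms(1518))   # 189.75 ms for a full Ethernet frame
print(packetization_delay_ms(48))     # 6.0 ms for an ATM cell payload
print(packetization_delay_ms(48, 4))  # 24.0 ms with 4:1 compression
```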

A Walk on the WAN Side

ATM services are beginning to become widely available in North America and in some European countries. The early focus has been on very high-bandwidth services, typically running at T3 (45 Mbit/s) or OC3 (155 Mbit/s). But these offerings usually deliver far more capacity than most companies need (or can afford). That helps explain the recent rollouts of affordable T1 (1.544-Mbit/s) ATM services. And when public ATM services aren't available, the alternative is to create a private ATM WAN built with switches that are typically linked by fractional T1/T3 leased lines.

The new ATM offerings may eventually replace VPNs--today's leading choice for carrying intra-enterprise voice traffic across the wide area. VPNs are attractively tariffed when compared with the public switched network. Again, where VPNs are not available, leased lines are usually deployed for voice. The disadvantage is that a completely separate infrastructure is needed for intra-enterprise data, usually based on some combination of leased lines and frame relay. Thus voice and data represent two separate sets of recurring costs.

ATM has the potential to bring together these two separate infrastructures and their attendant costs. In an ATM network, voice and data are both carried as streams of cells. This makes for very efficient multiplexing of different traffic classes: All of the bandwidth that is not being used for voice is instantaneously available to data.

This is in sharp contrast to conventional TDM (time-division multiplexing), in which the bandwidth allocated to voice and data is fixed and inflexible.

Packing a Trunk

Carrying intra-enterprise voice over ATM on the wide area requires a trunk connection from a company's PBXs into the ATM network--for example, a T1 trunk that uses 23 DS-0 (64-kbit/s) channels for voice and 1 channel for signaling. The ATM Forum's Voice Telephony Over ATM (VTOA) group is working on two schemes for connecting a voice trunk to an ATM network, with first releases expected sometime between February and April 1997 (see "Voice Over ATM: Standard Speak"). One specification, circuit emulation service (CES), defines an interface to the ATM network that carries a constant 1.5-Mbit/s channel that is compatible with the physical transmission spec for PBX trunk connections. In other words, CES simply uses the ATM network to emulate a point-to-point T1 leased line.
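The T1 arithmetic behind these figures can be verified in a few lines. This is a quick check using the standard T1 structure (24 DS-0 channels plus 8 kbit/s of framing overhead), not material from the source.

```python
DS0_BPS = 64_000        # one DS-0 channel
T1_CHANNELS = 24        # 23 voice (bearer) channels + 1 signaling channel
T1_FRAMING_BPS = 8_000  # one framing bit per 193-bit frame, 8,000 frames/s

payload = T1_CHANNELS * DS0_BPS       # 1,536,000 bit/s of channel capacity
line_rate = payload + T1_FRAMING_BPS  # 1,544,000 bit/s on the wire
print(line_rate)  # the familiar 1.544 Mbit/s T1 rate
```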

CES is simple but fairly inefficient. A company looking for fully meshed connectivity between PBXs at multiple sites would need permanently nailed-up CES connections to each PBX for each trunk. Further, since CES furnishes a constant 1.5-Mbit/s channel across the ATM network, this bandwidth must be permanently dedicated to voice--even if there are no active channels.

A more efficient way to handle voice is via voice trunking over ATM, another spec in the works at the VTOA. With this approach, each voice call is shunted over its own SVC (switched virtual circuit).

Voice trunking reduces the number of physical interfaces needed between PBXs and the ATM network.

Each PBX only needs one interface (depending on required capacity), and the ATM network can set up voice connections from any PBX directly to any other. With voice trunking, the ATM cloud essentially acts as a gigantic intermediate switch, thus avoiding hop-by-hop connections via intermediate PBXs.

Bandwidth is only set aside for voice when it is needed.

The ATM Forum has defined two methods for linking PBXs over ATM WANs. Circuit emulation service (CES) establishes permanent virtual circuits (PVCs), each emulating a T1 (1.544-Mbit/s) line with 23 voice channels (a). Voice trunking dynamically establishes a switched virtual circuit (SVC) for each call between PBXs (b).

Silence is Golden

Voice trunking also can save 50 percent of the available bandwidth by implementing silence suppression (which simply means not transmitting cells that contain only silence). Since voice calls use a bidirectional channel (even though only one person is typically speaking at a time), half the bandwidth essentially goes unused. Silence suppression allows the bandwidth reserved for the silent partner to be recovered and used for something else, such as data.
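A rough model of the savings can be written out as follows. This is a sketch under the article's own assumption that only one party speaks at a time (about 50 percent activity per direction); the function name and parameters are illustrative.

```python
def trunk_bandwidth_bps(calls, per_channel_bps=64_000,
                        silence_suppression=False, activity=0.5):
    """Approximate bandwidth for bidirectional voice calls.
    Each call occupies two 64-kbit/s channels; with silence suppression,
    cells are sent only while a direction is active (~50% of the time)."""
    both_directions = calls * 2 * per_channel_bps
    if silence_suppression:
        return both_directions * activity
    return both_directions

# 23 simultaneous calls, with and without silence suppression:
print(trunk_bandwidth_bps(23))                            # 2944000 bit/s
print(trunk_bandwidth_bps(23, silence_suppression=True))  # half: recoverable for data
```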

This approach requires a far more sophisticated kind of connection between the PBX and the ATM network. The ATM voice gateway must be able to understand the voice signaling on the trunk connection and map called phone numbers to ATM addresses. It also must be capable of converting each 64-kbit/s voice channel into a separate cell stream, applying silence suppression and compression when applicable.

Staying in Synch

The early economic indicators for voice over ATM are promising, but mass migrations are unlikely. Most net managers will schedule phased rollouts, and that means voice over ATM will coexist with circuit switching for the foreseeable future. To ensure peaceful coexistence, corporate networkers need to address integration issues between circuit-switched voice and ATM.

Today's digital phone networks, whether public or private, operate synchronously. Voice is encoded digitally by sampling 8,000 times per second and converting each sample into an 8-bit value, generating a 64-kbit/s stream. Phone networks are generally locked to a common 8-kHz clock. If individual network elements were timed to local clocks, minor variations between clock frequencies would result in occasional "clock slips," which are audible as clicks on the line.

Clock slippage also is an issue between circuit-switched and ATM networks. There are two solutions. First, lock the frequencies of the physical transmission clocks on all ATM links to the common clock from the public network. This requires an ATM switch that supports clock locking on physical interfaces.

Second, deduce the clocking frequency at the edges of the ATM network from the rate of arrival of ATM voice cells. With this approach, each interface to the ATM WAN would have its own clock for voice transmissions. This clock would be continuously adjusted so the rate at which voice cells are sent exactly matches the rate at which they're received from the ATM gateway.
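This second, adaptive approach can be sketched as a simple control loop that nudges the local playout clock toward the observed cell-arrival rate. This is a hypothetical illustration of the principle only; the gain value and starting frequencies are assumptions, and real adaptive clock recovery works from buffer fill levels and filtered arrival measurements.

```python
def adapt_clock(local_hz, arrival_hz, gain=0.1):
    """One adjustment step: move the local playout clock a fraction
    of the way toward the measured cell-arrival rate."""
    return local_hz + gain * (arrival_hz - local_hz)

# The network clock runs at 8,000 Hz; the local clock starts slightly fast.
local = 8_000.8
for _ in range(100):
    local = adapt_clock(local, 8_000.0)
print(local)  # converges toward 8000.0, so clock slips stop accumulating
```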

It may sound esoteric, but synchronicity is a critical concern. Corporate networkers should quiz their ATM switch vendors carefully about how they handle clock locking to circuit-switched networks, even if ATM is initially going to be used only for data.

Looking at the LAN

Bringing ATM to the desktop opens up the intriguing possibility of integrating voice and data all the way to the end-station. For many net managers that may raise a rhetorical question: "The PBX isn't broken, so why fix it?" Still, it's worth taking a moment to consider the full implications.

First, if an ATM LAN can switch virtual circuits handling voice and if some server-based call control can be applied, then there is no need for a PBX. All that's needed instead is some kind of gateway between the ATM LAN and the external phone services.

Second, very basic (and cheap) phone headsets or handsets can be attached to PCs. This eliminates the need for pricey phones: The PC can implement all the fancy functions (and do a better job of it than a handset). Third, now that all digitized voice runs over the LAN, low-cost server-based intelligence can be applied to voice-processing applications, including voice mail, interactive voice response, faxback services, text-to-speech conversion, and the like.

Finally, ATM on the LAN makes it possible to switch video and voice. It also brings common call control to videoconferencing and voice telephony. Try doing that with a PBX.

Client-Server Choices

What the foregoing means is that voice telephony can become just another client-server LAN application. Today, PBXs resemble mainframes, with all the attendant characteristics: proprietary, closed architecture, slow evolution of new features, and high maintenance costs. Not only that, but all the add-ons needed for voice mail, interactive voice response, and so on also conform to the proprietary mainframe model--and are priced accordingly. When voice telephony is run as a client-server app, network managers benefit from the commodity processing power and storage offered by PCs. Further, they stand to gain from the creativity of voice applications vendors, who are no longer shackled by the legacy of PBX hardware.

The actual transmission of voice over ATM LANs is only a small part of the overall problem. Here again, the ATM Forum's VTOA group has a spec in the works: voice to the desktop over ATM. Voice is encoded into ATM cell streams without any complex network- or transport-layer protocols. Because the cost of bandwidth for voice is not really an issue on the LAN, there is little point in taking the trouble to perform compression or silence suppression. The real complexity of voice over ATM in the LAN comes in three areas: harnessing the end-station application software to the ATM network to enable it to take advantage of QOS and to process voice with acceptably low software delays; connecting voice calls between the ATM LAN and circuit-switched public networks and PBXs, preserving clocking relationships and furnishing signaling protocols; and implementing an open scheme for call control that matches or improves upon what PBXs deliver (directory services, hunt groups, and the like).

Net managers should be asking their LAN vendors how they are going to address these issues. Fortunately, many vendors from the PBX and LAN industries are working on the problems involved in bringing ATM-based voice/LAN solutions to the market. Desktop ATM is likely to remain a minority taste for some time to come. When voice and data integration offers significant rewards, such as in some call centers, then voice will be a powerful driver of ATM to the desktop. Most likely there won't be much migration in general office environments until there are good solutions for extending quality of service to Ethernet and token ring desktops via an ATM backbone.

Such solutions may be based on the recently adopted cells-in-frames (CIF) specification, which defines a method of encapsulating ATM cells in Ethernet or token ring frames. Another possibility is mapping the Internet Engineering Task Force's resource reservation protocol (RSVP) requests to ATM QOS, either as an extension to LAN emulation (LANE) or multiprotocol over ATM (MPOA).


1. Introduction

In the last decade, the growth of cellular radio communications has been remarkable because of the portability and flexibility that fit modern life. This success of cellular mobile communications has spurred the telecommunications industry to push the implementation of Personal Communications Services (PCS), which will provide integrated services including voice, text, video, and data. As a direct result, the demand for higher transmission speed and mobility is greater than ever.

ATM is currently viewed as the paradigm of high-speed integrated networking. Because of the wide range of services supported by ATM networks, ATM technology is expected to become the dominant networking technology for both public infrastructure networks and LANs (Local Area Networks). That is, the ATM infrastructure will support all types of services, from time-sensitive voice communications to multimedia conferencing.

Extending the ATM infrastructure with wireless access meets the needs of users and customers who want a unified end-to-end networking infrastructure with high performance and consistent service. The growth of wireless communications paired with the rapid developments in ATM technology signals a new era in telecommunications.

2. Wireless Technologies

Wireless technologies and systems are still emerging on the telecommunication scene. Current wireless LAN technologies comprise infrared, UHF (Ultra-High Frequency) radio, spread spectrum, and microwave radio, covering frequencies that range from the MHz bands (US) and GHz bands (Europe) up to infrared frequencies.

Wireless technology mainly has two access modes: code-division multiple access (CDMA) and time-division multiple access (TDMA). The choice between the CDMA and TDMA techniques varies with the specific personal communication scenario to be addressed.

CDMA uses spread spectrum technology, which means the occupied bandwidth is considerably greater than the information rate. In CDMA it is possible to transmit several such signals in the same portion of spectrum by using pseudo-random codes for each one. This can be achieved by using either frequency hopping (a series of pulses of carrier at different frequencies, in a predetermined pattern) or direct sequence (a pseudo-random modulating binary waveform, whose symbol rate is a large multiple of the bit rate of the original bit stream) spread spectrum.
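The direct-sequence variant described above can be sketched in a few lines: each data bit is XORed with a pseudo-random chip sequence whose rate is a large multiple of the bit rate, and the receiver recovers the bit by correlating against the same code. The PN code and spreading factor below are arbitrary illustrations, not values from any standard.

```python
# Toy direct-sequence spread spectrum. The 8-chip PN code is a made-up
# example; real systems use much longer pseudo-random sequences.
PN_CODE = [1, 0, 1, 1, 0, 0, 1, 0]  # 8 chips per bit (spreading factor 8)

def spread(bits):
    """Spread each data bit into len(PN_CODE) chips (bandwidth expansion)."""
    return [b ^ c for b in bits for c in PN_CODE]

def despread(chips):
    """Recover data bits by correlating each chip group with the PN code."""
    n = len(PN_CODE)
    bits = []
    for i in range(0, len(chips), n):
        group = chips[i:i + n]
        # All chips match the PN code -> bit was 0; none match -> bit was 1.
        matches = sum(1 for g, c in zip(group, PN_CODE) if g == c)
        bits.append(0 if matches > n // 2 else 1)
    return bits

data = [1, 0, 1, 1]
tx = spread(data)
assert len(tx) == len(data) * len(PN_CODE)  # occupied bandwidth grows 8x
assert despread(tx) == data                 # receiver recovers the bits
```

The majority-vote correlation in `despread` is also why a second user with a different (ideally orthogonal) code can share the same spectrum: its chips look like noise against this PN code.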

TDMA divides the radio carriers into an endlessly repeated sequence of small time slots (channels). Each conversation occupies just one of these time slots. So instead of just one conversation, each radio carrier carries a number of conversations at one time.
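The repeating slot structure can be made concrete with a minimal sketch: a frame of fixed slots, one conversation per slot, repeated endlessly. The frame size and conversation names are hypothetical.

```python
# Toy TDMA frame: the carrier is divided into an endlessly repeating
# sequence of time slots, and each conversation owns one slot per frame.
SLOTS_PER_FRAME = 8  # illustrative frame size

def slot_owner(slot_assignments, t):
    """Return which conversation may transmit at absolute slot time t."""
    return slot_assignments[t % SLOTS_PER_FRAME]

# Hypothetical assignment: conversations A..H each take one slot.
assignments = ["A", "B", "C", "D", "E", "F", "G", "H"]
assert slot_owner(assignments, 0) == "A"
assert slot_owner(assignments, 9) == "B"  # the frame repeats every 8 slots
```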

3. Asynchronous Transfer Mode (ATM)

ATM has been advocated as an important technology for the wide-area interconnection of heterogeneous networks. In ATM networks, data is divided into small, fixed-length units called cells. A cell is 53 bytes: a five-byte header comprising identification, control, priority, and routing information, followed by 48 bytes of actual data. ATM does not provide any error detection on the user payload inside the cell. ATM also provides no retransmission services, and only a few operations are performed on the small header.

ATM switches support two kinds of interfaces: the user-network interface (UNI) and the network-node interface (NNI). The UNI connects ATM end systems (hosts, routers, etc.) to an ATM switch, while an NNI may be loosely defined as an interface connecting two ATM switches. The ITU-T Recommendation requires that an ATM connection be identified with connection identifiers assigned for each user connection in the network. At the UNI, a connection is identified by two values in the cell header: the virtual path identifier (VPI) and the virtual channel identifier (VCI). Together, the VPI and VCI form a virtual circuit identifier.
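The UNI cell header layout (GFC 4 bits, VPI 8 bits, VCI 16 bits, payload type 3 bits, CLP 1 bit, HEC 8 bits) can be decoded with straightforward bit manipulation. This is a sketch of the standard field positions; the example header values are made up.

```python
def parse_uni_header(header):
    """Decode a 5-byte ATM UNI cell header.

    Bit layout (UNI): GFC(4) VPI(8) VCI(16) PT(3) CLP(1) HEC(8).
    """
    assert len(header) == 5
    b0, b1, b2, b3, b4 = header
    gfc = b0 >> 4                                   # generic flow control
    vpi = ((b0 & 0x0F) << 4) | (b1 >> 4)            # virtual path id
    vci = ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4)  # virtual channel id
    pt = (b3 >> 1) & 0x07                           # payload type
    clp = b3 & 0x01                                 # cell loss priority
    return {"gfc": gfc, "vpi": vpi, "vci": vci, "pt": pt, "clp": clp, "hec": b4}

# Example header carrying VPI=10, VCI=100 (GFC=0, PT=0, CLP=0, HEC=0):
hdr = bytes([0x00, 0xA0, 0x06, 0x40, 0x00])
fields = parse_uni_header(hdr)
assert fields["vpi"] == 10 and fields["vci"] == 100
```

At the NNI the 4 GFC bits are instead given to the VPI, widening it to 12 bits; the rest of the layout is the same.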

There are two fundamental types of ATM connections: Permanent Virtual Connections (PVCs) and Switched Virtual Connections (SVCs). A PVC is set up by some external mechanism, typically network management, in which the switches between the ATM source and destination systems are programmed with the appropriate VPI/VCI values; PVCs always require some manual configuration. An SVC is set up automatically through a signaling protocol. SVCs do not require the manual interaction needed for PVCs and, as such, are likely to be much more widely used. Higher-layer protocols operating over ATM primarily use SVCs.

4. Why Wireless ATM (WATM)?

From the beginning, the concept behind ATM has been end-to-end communication in a Wide Area Network environment. The communication protocol remains the same throughout, so companies no longer have to buy extra equipment, such as routers or gateways, to interconnect their LANs. At the same time, ATM reduces the complexity of the network and improves its flexibility while providing end-to-end consideration of traffic performance. B-ISDN has adopted ATM as the backbone network infrastructure so as to integrate all communications into a single universal system. That is why an ATM cell-relay paradigm will be adopted as the basis for next-generation wireless transport architectures.

While ATM helps bring multimedia to the desktop, Wireless ATM provides similar services to mobile computers and devices. Although the bandwidth provided by existing mobile phone systems is sufficient for voice and data traffic, it is still insufficient to support real-time multimedia traffic. Wireless ATM will let wireless users access such services over the wireless medium, greatly expanding the potential of portable devices. In addition, Wireless ATM networks aim to provide seamless integration into the B-ISDN network, which will be important for the future integrated high-speed network environment.

5. Wireless ATM Reference Model

The WATM system reference model, proposed by the ATM Forum Wireless ATM (WATM) group, specifies the signaling interfaces among the mobile terminal, wireless terminal adapter, wireless radio port, mobile ATM switch, and non-mobile ATM switch. It also specifies the user-plane and control-plane protocol layering architecture. This model is commonly advocated by many communication companies, such as NEC, Motorola, NTT, Nokia, Symbionics, and ORL. The WATM model is illustrated in Figure 1.

The major components of a Wireless ATM system are: a) WATM terminal, b) WATM terminal adapter, c) WATM radio port, d) mobile ATM switch, e) standard ATM network and f) ATM host.


Figure 1. The WATM system reference model

The system reference model consists of a radio access segment and a fixed network segment. The fixed network segment is defined by the "M (mobile ATM)" UNI and NNI interfaces, while the wireless segment is defined by the "R (Radio)" radio access layer (RAL) interface. The union of the "M" UNI and the "R" RAL yields the full wireless ATM "W" UNI specification. The "W" UNI is concerned with handover signaling, location management, wireless link control, and QoS control. The "R" RAL governs the signaling exchange between the WATM terminal adapter and the mobile base station; hence it concerns channel access, data link control, meta-signaling, etc. The "M" NNI governs the signaling exchange between the WATM base station and a mobile-capable ATM switch, and is also concerned with mobility-related signaling between mobile-capable ATM switches.

6. Wireless ATM Design Issue

WATM's architecture is based on the ATM protocol stack. The wireless segment of the network will require new mobility functions to be added to the ATM protocol stack.

6.1 WATM Protocol Architecture

The protocol architecture currently proposed by the ATM Forum is shown in Figure 2.

The WATM items are divided into two distinct parts: Mobile ATM (Control Plane), and Radio Access Layer (Wireless Control).

Mobile ATM deals with the higher-layer control/signaling functions needed to support mobility, including handover, location management, routing, addressing, and traffic management.

The Radio Access Layer is responsible for the radio link protocols for wireless ATM access. It consists of the PHY (Physical Layer), MAC (Medium Access Control), DLC (Data Link Control), and RRC (Radio Resource Control) layers.

6.2 Radio Access Layer

To support wireless communication, new wireless-channel-specific physical, medium access, and data link layers are added below the ATM network layer. Together these layers are called the Radio Access Layer of the WATM network.

6.2.1 Architecture

The WATM architecture is composed of a large number of small transmission cells, called pico-cells. Each pico-cell is served by a base station, and all the base stations in the network are connected via the wired ATM network. Using ATM switching for intercell traffic also avoids the crucial problem of developing a new backbone network with sufficient throughput to support intercommunication among a large number of small cells. The basic role of the base station is to interconnect the wireless subnets with the LAN or WAN, transferring packets from the mobile units and converting them for the wired ATM network. To avoid hard boundaries between pico-cells, the base stations can operate on the same frequency. Reducing the size of the pico-cells provides the flexibility of reusing the same frequency, thus avoiding the problem of running out of bandwidth.

6.2.2 Cell Size

The ATM cell size (53 bytes) is too big for some wireless LANs because of their low speeds and high error rates. Wireless LANs may therefore use 16- or 24-byte payloads. The ATM header can also be compressed and then expanded back to standard ATM at the base station. One example of ATM header compression is to use 2 bytes containing a 12-bit VCI (virtual channel identifier) and 4 bits of control information (payload type, cell loss priority, etc.).
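The 2-byte compressed header in that example packs neatly into one 16-bit word: 12 bits of VCI plus a 4-bit control field. A sketch of the pack/unpack round trip (the exact bit ordering is an assumption; the text does not fix it):

```python
def compress_header(vci, control):
    """Pack a 12-bit VCI and a 4-bit control field into 2 bytes.
    The base station would expand this back to a full 5-byte ATM header."""
    assert 0 <= vci < 4096 and 0 <= control < 16
    word = (vci << 4) | control          # [ VCI:12 | control:4 ]
    return bytes([word >> 8, word & 0xFF])

def expand_header(two_bytes):
    """Recover (vci, control) from the compressed 2-byte header."""
    word = (two_bytes[0] << 8) | two_bytes[1]
    return word >> 4, word & 0x0F

assert expand_header(compress_header(1234, 5)) == (1234, 5)
```

This recovers 3 of the 5 header bytes per cell, which matters when the over-the-air cell payload is only 16 or 24 bytes.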

One cell format, proposed by Porter and Hopper, has a compatible payload size but an addressing scheme different from the standard ATM cell format. Mobility should be as transparent as possible to the end-points, so the VCIs used by the end-points should not change during handover. A VCI allocation should remain valid as the mobile moves through different pico-cells within the same domain, and translation of VCIs on movement between domains should be as simple as possible. This can be done by splitting the VCI space into a number of fields such as Domain Identifier, Mobile Identifier, Base Station Identifier, and Virtual Circuit number. A 16-bit CRC is also used to detect bit errors, given the high error rate of mobile networks.
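Splitting the VCI space into fields amounts to packing several sub-identifiers into one word. The field widths below are purely illustrative (the text does not specify them); only the ordering Domain / Mobile / Base Station / VC number comes from the proposal.

```python
# Hypothetical widths for the split VCI space: (name, bits). These are
# illustration only; Porter and Hopper's actual widths may differ.
FIELDS = [("domain", 4), ("mobile", 10), ("base_station", 6), ("vc", 12)]

def pack_vci(**values):
    """Pack the sub-identifiers into a single VCI word."""
    word = 0
    for name, width in FIELDS:
        v = values[name]
        assert 0 <= v < (1 << width)
        word = (word << width) | v
    return word

def unpack_vci(word):
    """Split a VCI word back into its sub-identifiers."""
    out = {}
    for name, width in reversed(FIELDS):
        out[name] = word & ((1 << width) - 1)
        word >>= width
    return out

v = pack_vci(domain=3, mobile=42, base_station=7, vc=99)
assert unpack_vci(v) == {"domain": 3, "mobile": 42, "base_station": 7, "vc": 99}
```

The payoff is that a move between domains only rewrites the `domain` field: switches can translate that one field without touching the end-point-visible VC number.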

6.2.3 Physical Layer (PHY)

While a fixed station may have an ATM link running at 25 Mbit/s up to 155 Mbit/s, the 5 GHz band is currently used to provide a 51 Mbit/s channel in the wireless environment, using advanced modulation and special coding techniques. It is expected that 155 Mbit/s will soon be available in the 60 GHz band and that 622 Mbit/s will be reached in the not-too-distant future. It is likely that both TDMA and CDMA solutions will co-exist in this application scenario. CDMA provides an efficient integrated solution for frequency reuse and multiple access, and can typically achieve a net bandwidth efficiency 2-4 times that of comparable narrow-band approaches. However, a major weakness of CDMA for a multi-service personal communication network is that, for a given system bandwidth, spectrum spreading limits the peak user data rate to a relatively low value. Compared to CDMA, TDMA can achieve high bit rates in the range of 8-16 Mbit/s using the narrow-band approach. Overall, with a good physical-level design, it should be possible for macro (5-10 km), micro (0.5 km), and pico (100 m) cells to support baud rates on the order of 0.1-0.25 Msym/s, 0.5-1.5 Msym/s, and 2-4 Msym/s respectively. These rates should be sufficient to accommodate many of the broadband services.

6.2.4 Medium Access Control (MAC)

The challenge in designing the MAC protocol for Wireless ATM is to identify a wireless, multimedia capable MAC, which provides a sufficient degree of transparency for many ATM applications.

One of the major problems of Wireless ATM is finding a suitable channel-sharing/media access control technique at the data link layer, because shared media access leads to poor quantitative performance in wireless networks. Researchers have suggested slotted ALOHA with exponential back-off as the MAC protocol. Slotted ALOHA has considerably better delay performance at low utilization than a fixed allocation scheme and fits well with the statistical multiplexing of ATM.
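The behavior of slotted ALOHA with binary exponential back-off is easy to simulate: a slot succeeds only when exactly one station transmits in it, and every collision doubles a station's random back-off window. This toy model assumes worst-case saturation (every station always has a cell to send); the parameters are illustrative.

```python
import random

def slotted_aloha(n_stations, n_slots, seed=0):
    """Toy slotted-ALOHA simulation with binary exponential back-off.
    Returns the fraction of slots that carried exactly one transmission."""
    rng = random.Random(seed)
    backoff = [0] * n_stations     # slots each station must still wait
    collisions = [0] * n_stations  # consecutive collisions per station
    successes = 0
    for _ in range(n_slots):
        senders = [i for i in range(n_stations) if backoff[i] == 0]
        for i in range(n_stations):
            if backoff[i] > 0:
                backoff[i] -= 1
        if len(senders) == 1:                       # exactly one -> success
            successes += 1
            collisions[senders[0]] = 0
        else:
            for i in senders:                       # collision: back off
                collisions[i] += 1
                backoff[i] = rng.randrange(1 << min(collisions[i], 10))
    return successes / n_slots

util = slotted_aloha(n_stations=10, n_slots=10_000)
assert 0.0 < util < 1.0  # theory caps saturated slotted ALOHA near 1/e
```

Under light load the scheme gives near-immediate access (no fixed slot assignment to wait for), which is the delay advantage the text refers to; the cost is the throughput ceiling visible under saturation.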

The WATM MAC is responsible for providing functionally point-to-point links for the higher protocol layers to use. To identify each station, both the IEEE 48-bit address and a locally significant address, assigned dynamically within a cell, are allowed. Each station registers its address with its hub during a hub-initiated slotted-ALOHA contention period for new registrations, making itself known to the others. In a shared environment, there must be some control over usage of the medium to guarantee QoS: each station may use the medium only when informed by the central control element (the hub), and each can send out several packets at a time. To minimize overhead, the MAC should support multiple ATM cells in a packet.

Another design issue at the MAC layer is support for multiple PHY layers. Currently, interest centers on several wireless bands, including the infrared (IR) medium, the 5 GHz radio band, and the 60 GHz band. A different PHY will be needed for each medium, and the WATM MAC should support all of them. Other design issues, such as error recovery and support for sleep modes, are also under consideration.

6.2.5 Data Link Control (DLC)

Wireless ATM needs a custom data link layer protocol because of its high error rates and different packet sizes. The Data Link Control layer is responsible for providing service to the ATM layer.

Mitigating the effect of radio channel errors should be done in this layer, before cells are passed up to the ATM layer. To fulfill this requirement, error detection/retransmission protocols and forward error correction methods are recommended. A service type field is needed to indicate whether a packet is of type supervisory/control, CBR, VBR, ABR, etc. Wireless ATM should provide error control using a packet sequence number (PSN) field (e.g., 10 bits) in the header along with a standard 2-byte CRC frame check sequence trailer. Since Wireless ATM may use 16- or 24-byte cells, segmentation and reassembly is required. This can be achieved with a segment counter that uses, for example, the two least significant bits of the PSN. Currently, the DLC protocol and syntax, the interface to the MAC layer, and the interface to the ATM layer have not been proposed.
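The segmentation scheme just described can be sketched end to end: a 48-byte ATM payload is split into small wireless cells, each carrying a sequence number whose two low bits serve as the segment counter, followed by a 16-bit CRC. The exact header layout here is an assumption for illustration; only the field sizes come from the text.

```python
import binascii

WCELL_PAYLOAD = 16  # wireless ATM may use 16- or 24-byte cell payloads

def segment(atm_payload, psn):
    """Split a 48-byte ATM payload into wireless cells.
    Each cell: 2-byte header (10-bit PSN, low 2 bits = segment counter),
    payload, 2-byte CRC-16 trailer. Layout is illustrative."""
    segments = []
    for i in range(0, len(atm_payload), WCELL_PAYLOAD):
        chunk = atm_payload[i:i + WCELL_PAYLOAD]
        seq = (psn & 0x3FC) | ((i // WCELL_PAYLOAD) & 0x03)
        header = bytes([seq >> 8, seq & 0xFF])
        crc = binascii.crc_hqx(header + chunk, 0)   # CRC-CCITT, 16 bits
        segments.append(header + chunk + bytes([crc >> 8, crc & 0xFF]))
    return segments

def reassemble(segments):
    """Check each CRC and rebuild the original ATM payload."""
    payload = b""
    for s in segments:
        header, chunk, crc = s[:2], s[2:-2], (s[-2] << 8) | s[-1]
        assert binascii.crc_hqx(header + chunk, 0) == crc  # detect bit errors
        payload += chunk
    return payload

cell = bytes(range(48))
assert len(segment(cell, psn=0x155)) == 3          # 48 / 16 = 3 segments
assert reassemble(segment(cell, psn=0x155)) == cell
```

A 48-byte payload fits in exactly three or two segments (16- or 24-byte cells), so the 2-bit counter never wraps within one ATM cell.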

6.2.6 Radio Resource Control (RRC)

RRC is needed for support of control plane functions related to the radio access layer. It should support radio resource control and management functions for PHY, MAC, and DLC layers. The design issues of RRC will include control/management syntax for PHY, MAC and DLC layers; meta-signaling support for mobile ATM; and interface to ATM control plane.

6.3 Mobile ATM

To support mobility, Mobile ATM is designed to handle handover, location management, routing, addressing, and traffic management.

6.3.1 Handover

In WATM networks, a mobile end user establishes a virtual circuit (VC) to communicate with another end user (either mobile or fixed). When the mobile end user moves from one AP (access point) to another, a proper handover is required. To minimize interruption to cell transport, an efficient and fast switching of the active VCs from the old data path to the new data path is needed. Since a mobile user may be in the access range of several APs when the handover occurs, it will select the one that can provide the best QoS.

There is a possibility that some cells will be lost while the connection is broken. Cell buffering is used to guarantee that no cell is lost and that cell sequence is preserved. Cell buffering consists of uplink buffering and downlink buffering, which hold outgoing and incoming cells during sudden link interruptions, congestion, or retransmissions.

6.3.2 Location Management

To establish connections between the mobile unit and the base station, the mobile must be located and registered. Searching involves a form of broadcast in which the whole network is queried. Objects are responsible for their own registration at a well-known registration point; subsequent inquiries about the object are directed to this register using a static routing mechanism. When a mobile is within a domain, it is registered at the appropriate Domain Location Server (DLS), which registers the mobile at its Home Register (HR). The HR keeps a record of the mobile's current DLS location. Each mobile has a statically bound home address, which is mapped to the HR address.
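The HR/DLS indirection boils down to one level of lookup: the home address maps statically to the HR, and the HR tracks the mobile's current DLS as it moves. A minimal sketch (the names and domains are hypothetical):

```python
# Toy location registers. The Home Register (HR) keeps one entry per
# mobile: the Domain Location Server (DLS) of the domain it is in now.
home_register = {}  # mobile home address -> current DLS

def register(mobile, dls):
    """Called when a mobile enters a domain and registers at that DLS;
    the DLS in turn updates the mobile's entry at its HR."""
    home_register[mobile] = dls

def locate(mobile):
    """An inquiry about a mobile is directed to its HR entry."""
    return home_register.get(mobile)

register("mobile-1", "DLS-east")
assert locate("mobile-1") == "DLS-east"
register("mobile-1", "DLS-west")   # the mobile moved to a new domain
assert locate("mobile-1") == "DLS-west"
```

Callers never need to track the mobile themselves: the statically bound home address is always a valid key, and the HR absorbs every move.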

There are two basic location management schemes: the mobile PNNI scheme and the location register scheme.

In the mobile PNNI scheme, when a mobile moves, the reachability update information propagates only to the nodes in a limited region. The switches within the region have correct reachability information for the mobiles. When a call is originated by a switch in this region, it can use the location information to establish the connection directly. If a call is originated by a switch outside this region, a connection is established between that switch and the mobile's Home Agent, which then forwards the cells to the mobile. This scheme decreases the number of signaling messages during a local handover.

In the location register scheme, an explicit search is required prior to the establishment of connections. A hierarchy of location registers, limited to a certain level, is used.

In connection establishment, the host generates a connection signal specifying the network addresses of the two end-points. If the destination address is that of a mobile unit, the local Domain Location Server (DLS) is contacted. If the local DLS has no knowledge of the unit, the routing request is forwarded to the Home Register (HR) of the mobile. The HR returns the address of the remote DLS, which in turn returns the address of the mobile's MR (mobile registration). Once the address of the MR is known, all requests are directed to that particular MR. When the connection request arrives at the MR, it first consults the mobile, which decides whether to accept the call. If the mobile accepts the call, it allocates a virtual circuit number and returns it to the MR. The MR then creates the virtual circuits between the Mobile Switching Point (MSP) and the remote end-point and adds the new virtual circuit to the active and inactive virtual paths between the MSP and the base stations close to the mobile.

6.3.3 Routing

Because of mobility, routing signaling in mobile ATM differs somewhat from that of the wired ATM network. First, a mapping of mobile terminal routing IDs to paths in the network is necessary. Rerouting is also needed to re-establish connections as mobiles move around.

A "mobile controller" node is required in every subnet of the wireless network. Each of these controllers has its network-layer software enhanced by an additional sublayer that performs routing to mobile systems. Mobile systems are free to move between the subnets, and a network-wide name server provides location information when communication with a mobile is to be initiated. A system of local caching of location information and forwarding of data allows movements to be hidden from the transport layer. As proposed by Comer and Russo, each processor that attaches to the ATM switch maintains a virtual connection to every other processor, over which it passes data packets. In addition, the processors use a second, separate virtual circuit for routing updates. Each wireless base station therefore has two virtual circuits open to each other base station and to each router. As a packet arrives at the base station from the mobile wireless unit, the station chooses the circuit that leads to the correct destination. Using two connections guarantees that routing information will not be confused with data, because data packets never travel on the virtual circuits used for routing, and routing packets never travel on the circuits used for data. The circuits can also be assigned priorities to guarantee that stations receive and process routing updates quickly.

6.3.4 Addressing

The addressing issue in WATM focuses on addressing the mobile terminal (or mobile end-user device). The current solution is for each mobile terminal to have a name and a local address. The name is a regular ATM end-system address; a local address is assigned when the mobile terminal attaches to a different switch while roaming.

6.3.5 Traffic Management/QoS

Mobility places additional demands on traffic control and QoS control. A reference model for resource allocation in WATM is still unavailable, and support for dynamic QoS renegotiation, extensions to the ABR control policy to deal with handover, and other related design issues have not yet been proposed.

6.3.6 Wireless Network Management

In wireless networks, the topology changes over time. This, along with other mobility features, presents a unique set of network management challenges. A specific method must be designed to track the dynamic nature of the network topology. Other issues, such as network and user administration, fault identification and isolation, and performance management, also need to be considered.

7. Summary

While wired networks have been moving toward B-ISDN with ATM concepts for high data rates, wireless personal communication networks are also developing rapidly. People expect an exciting future of portable and mobile computing that is not tethered to a fixed point by a wire, offers a wide range of services and wide availability, and delivers quality of service close to that of today's fixed networks. WATM has the potential to bring wireless networks into a new generation, and both the ATM and wireless communities have given Wireless ATM a great deal of attention.

While WATM appears to be a promising area, its success will rely heavily on the success of ATM/B-ISDN in wired networks. If ATM becomes the standard in the wired arena, the success of WATM will follow in the very near future.

Video over ATM Networks

This section discusses issues in delivering video over ATM networks, including video encoding methods, ATM adaptation layer options, quality of service, error concealment and correction, the ATM Forum Video on Demand specification, and video over wireless ATM.

1. Introduction

The ability of ATM networks to combine voice, video, and data communications in one network is expected to make ATM the networking method of choice for video delivery in the future. The ATM Forum is currently developing standards to address the issues associated with video delivery. Its Audiovisual Multimedia Services Technical Committee is addressing issues relating to numerous video applications, including broadcast video, video conferencing, desktop multimedia, video on demand, interactive video, near video on demand, distance learning, and interactive gaming. This paper provides a survey of the current issues relating to video delivery over ATM networks.

2. Video Compression Methods

The bandwidth requirements of uncompressed video far exceed the available resources for the typical end user. Typical uncompressed video streams can require 100 to 240 Mbps to be delivered without distortions or delays at the receiving end. Uncompressed high definition television streams require around 1 Gbps for proper delivery. Several compression methods have been developed which can reduce the bandwidth requirements for video streams to levels acceptable for existing networks.
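The quoted rates follow directly from frame dimensions, frame rate, and color depth. A quick sanity check, using plausible example resolutions (24-bit color at 30 frames/s is an assumption, not from the text):

```python
def raw_video_bps(width, height, fps, bits_per_pixel=24):
    """Raw (uncompressed) bit rate of a video stream."""
    return width * height * bits_per_pixel * fps

# 640x480 at 30 frames/s, 24-bit color, lands inside the quoted range:
sd = raw_video_bps(640, 480, 30)
assert 100e6 <= sd <= 240e6       # ~221 Mbit/s

# A 1920x1080 HDTV stream at the same settings is on the order of 1 Gbit/s:
hd = raw_video_bps(1920, 1080, 30)
assert hd > 1e9                   # ~1.5 Gbit/s
```

Against a 155 Mbit/s ATM link, even the standard-definition figure makes the case for compression immediately.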

Compression is achieved by removing redundancy. In video streams that redundancy can be from within a video frame as well as between frames in close proximity. Compression can be either lossy or lossless. Data compressed with a lossless method can be recovered exactly. Therefore, lossless methods are often used to compress data on disks. The compression ratios typically achieved are around 2:1 to 4:1 with lossless methods. Lossless methods do not provide adequate compression for video in most cases. In contrast, lossy compression methods can provide much higher compression ratios. Ratios as high as 200:1 are typical with these methods. Data compressed with a lossy method cannot be recovered exactly. The high compression ratios make these the methods of choice for video compression.

Compression methods can be symmetric or asymmetric. For symmetric compression methods, it takes the same amount of computational effort to perform compression as decompression. Motion JPEG, described below, is an example of a symmetric compression method, while MPEG-1 and MPEG-2 are asymmetric. Video compression methods are typically asymmetric. Since many video-on-demand applications will involve one source with many recipients, it is generally desirable for the compression method to place most of the required computational complexity on the source side, while limiting the complexity, and therefore the cost, of the equipment at the destination or end-user side.

2.1 MPEG-2

The MPEG-2 (Moving Picture Experts Group) standard is an extension of the MPEG-1 standard described below. MPEG-2 was designed to provide high quality video encoding suitable for transmission over computer networks. It is believed that MPEG-2 will be the primary compression protocol used in transmitting video over ATM networks.

MPEG-2 (and MPEG-1) video compression makes use of the Discrete Cosine Transform (DCT) algorithm to transform 8x8 blocks of pixels into variable length codes (VLCs). These VLCs represent the quantized coefficients from the DCT. MPEG-2 encoders produce three types of frames: Intra (I) frames, Predictive (P) frames, and Bi-directional (B) frames. The relationship between these three frame types is depicted in Figure 1. As the name suggests, I frames use only intra-frame compression, and because of this they are much larger than P or B frames. P frames use motion-compensated prediction from the previous I or P frame in the video sequence; this forward prediction is indicated by the upper arrows in Figure 1. B frames use motion-compensated prediction by forward predicting from future I or P frames, backward predicting from previous I or P frames, or interpolating between both; this bi-directional prediction is indicated by the lower arrows in Figure 1. B frames achieve the highest degree of compression and are therefore the smallest frames.
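The I/P/B dependency structure can be made explicit with a small sketch that, given a group-of-pictures pattern, reports which anchor (I or P) frames each frame depends on. The GOP pattern used is a common example, not mandated by the standard.

```python
def reference_frames(gop, i):
    """Return the indices of the anchor frames that frame i of the GOP
    depends on. I frames stand alone; P frames forward-predict from the
    previous I/P; B frames use the surrounding I/P frames."""
    kind = gop[i]
    if kind == "I":
        return []                                  # intra-coded only
    anchors = [j for j, k in enumerate(gop) if k in "IP"]
    if kind == "P":
        return [max(j for j in anchors if j < i)]  # previous anchor only
    prev = max((j for j in anchors if j < i), default=None)
    nxt = min((j for j in anchors if j > i), default=None)
    return [j for j in (prev, nxt) if j is not None]  # B: both directions

gop = "IBBPBBP"  # an illustrative display-order GOP pattern
assert reference_frames(gop, 0) == []       # the I frame
assert reference_frames(gop, 3) == [0]      # P predicts from the I frame
assert reference_frames(gop, 1) == [0, 3]   # B uses anchors on both sides
assert reference_frames(gop, 6) == [3]      # second P predicts from first P
```

Note the consequence for transmission: a B frame cannot be decoded until the *future* anchor it references has arrived, which is why encoders reorder frames into decode order before sending.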

Frames are generated by the MPEG-2 encoder by first generating the 8x8 blocks. Four of these blocks are combined to form a macroblock, a 16x16 region of pixels. Macroblocks are then combined to form a slice, and a series of slices makes up a frame. The address and motion vectors of the first macroblock in a slice are coded absolutely; the remaining macroblock parameters are differentially coded with respect to the first macroblock in the slice. In the event of transmission errors in MPEG-2 video, it is at the first macroblock of the next slice that image decoding can continue correctly. This is discussed further in the section on error correction and concealment later in this document.

The MPEG-2 systems layer provides the features necessary for multiplexing and synchronizing video, audio, and data streams. Video streams are broken into units called video access units, each corresponding to one of the image frames (I, P, or B) described above. A collection of video access units forms a video elementary stream, and several elementary streams can be combined and packetized to form packetized elementary streams (PES). PES streams can be stored or transmitted as they are, but are more commonly converted into either program streams or transport streams. Program streams (PS) resemble the original MPEG-1 streams; they consist of variable-length packets and are intended for media with a very low probability of bit errors or data loss. Transport stream (TS) packets are of fixed length: each TS packet is 188 bytes long, with 4 bytes of header information. TS packets are intended for transport over media where bit errors or loss of information are more likely. PES packets are loaded into TS packets such that the first byte of a PES packet is the first byte of a TS payload, and a single TS packet can carry data from only one PES.
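The PES-into-TS packing rules above (188-byte packets, 4-byte header, PES aligned to the start of a TS payload, one PES per packet) can be sketched directly. The header bytes here are simplified placeholders, not a full implementation of the TS header fields.

```python
TS_SIZE, TS_HEADER = 188, 4
TS_PAYLOAD = TS_SIZE - TS_HEADER   # 184 payload bytes per TS packet

def packetize(pes, pid=0x100):
    """Load one PES packet into fixed-length transport stream packets.
    The PES starts at the first payload byte of the first TS packet, each
    TS packet carries data from only this PES, and the last packet is
    padded out to 188 bytes. Header fields are simplified."""
    packets = []
    for i in range(0, len(pes), TS_PAYLOAD):
        chunk = pes[i:i + TS_PAYLOAD]
        start = 0x40 if i == 0 else 0x00   # payload_unit_start_indicator
        header = bytes([0x47, start | (pid >> 8), pid & 0xFF, 0x10])
        packets.append(header + chunk.ljust(TS_PAYLOAD, b"\xff"))
    return packets

pes = bytes(500)                    # a hypothetical 500-byte PES packet
pkts = packetize(pes)
assert all(len(p) == TS_SIZE for p in pkts)
assert len(pkts) == 3               # ceil(500 / 184) TS packets
```

The fixed 188-byte size is what makes TS the natural fit for ATM: the packet divides cleanly across AAL payloads, as discussed in the mapping section below.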

Synchronization information is built into the MPEG-2 systems layer through the use of time stamps. Two time stamps, the presentation time stamp (PTS) and the decoder time stamp (DTS), are included in the PES packet header. These tell the decoder, respectively, when to display decoded information to the end user and when to decode the information in the decoder buffers. The clocks of the encoder and the decoder must also be synchronized. This is accomplished through program clock references (PCRs): a PCR can be inserted into a TS packet in a field just after the TS header, and PCRs are inserted at regular intervals to maintain synchronization between encoder and decoder. The use of these time stamps assumes that the transmission medium offers a constant transmission delay, but in an ATM network cell delay variation (CDV), or jitter, is always present.

2.2 MPEG-1

The MPEG-1 video encoding standard was designed to support video encoding at bit rates of approximately 1.5 Mbps. Its encoding methods employ the I, P, and B frames described above. The quality of the video achieved with this standard is roughly similar to that of a VHS VCR, a level generally not acceptable for broadcast-quality video. It is expected that most video-over-ATM applications will use MPEG-2 rather than MPEG-1.

2.3 H.261

The ITU-T Recommendation H.261 describes a video encoding standard for two-way audio and video transmission, traditionally over 64 kbps or 128 kbps ISDN links. The H.261 method uses buffering to smooth out short-term variations in the bit rate from the video encoder. A near-constant bit rate is achieved by feeding the status of the buffer back to the encoder: when the buffer is nearly full, the encoder reduces its bit rate by increasing the quantization step size, at the expense of video quality. H.261 defines two fixed resolutions, 352x288 and 176x144; the latter is often used because of the low bit rate of ISDN connections. Like MPEG-2, this encoding method employs motion-compensated prediction, and the output bit structure is similar to that of MPEG-2. H.261 employs VLCs at the base level; blocks form macroblocks, macroblocks form groups of blocks, and groups of blocks make up the picture level. Video conferencing using H.261 encoding can be accommodated over ATM networks via circuit emulation utilizing the features of AAL 1.
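The buffer-feedback rate control described above is a simple control loop: buffer fullness drives the quantizer step up or down. The thresholds and step adjustments below are illustrative, not values from the Recommendation (which does allow quantizer steps in the range 1 to 31).

```python
def next_quant_step(buffer_fullness, step, step_max=31, step_min=1):
    """Toy H.261-style rate control: adjust the quantization step size
    from encoder buffer fullness (0.0 empty .. 1.0 full). A coarser step
    lowers the bit rate at the expense of picture quality. Thresholds
    and increments here are made-up illustrations."""
    if buffer_fullness > 0.8:            # nearly full: coarser quantization
        return min(step + 2, step_max)
    if buffer_fullness < 0.2:            # nearly empty: finer quantization
        return max(step - 1, step_min)
    return step                          # comfortable range: no change

assert next_quant_step(0.9, 10) == 12    # buffer filling -> cut bit rate
assert next_quant_step(0.1, 10) == 9     # buffer draining -> raise quality
assert next_quant_step(0.5, 10) == 10    # steady state
```

Running this each frame keeps the output near the constant channel rate the ISDN link expects; the same idea reappears when CBR video is carried over AAL 1 circuit emulation.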

2.4 Motion JPEG

Motion JPEG (Joint Photographic Experts Group) is an extension of the joint ITU and ISO JPEG standard for still images. Motion JPEG is a symmetric compression method that typically results in compression ratios from 10:1 up to 50:1. As an extension of the JPEG still-image standard, Motion JPEG removes only intra-frame redundancy and not inter-frame redundancy. This results in significantly less compression than a method that would remove both. Another drawback of Motion JPEG is that audio is not integrated into the compression method. There are four modes of JPEG operation defined.

1) Sequential - The compression method proceeds left to right and top to bottom.

2) Progressive - Compression can take place in multiple scans of an image. This allows an image to be displayed in stages. The image quality improves with each stage. This is particularly useful on a low bandwidth link.

3) Lossless Encoding - This method allows exact recovery of the image at the expense of less compression.

4) Hierarchical Encoding - An image encoded with this method can be displayed at multiple resolutions without first being decompressed at the highest resolution.

The lack of inter-frame coding can be viewed as a feature for some video applications. If direct access to a random video frame is desired, Motion JPEG will allow faster access than MPEG-1 or MPEG-2. With inter-frame coding, only a fraction of the transmitted frames are encoded independently of previous or future frames, so it may be necessary to wait for multiple frames to arrive before a specific one can be decoded. With Motion JPEG, any frame received can be decoded immediately.

3. Mapping MPEG-2 Bit Streams into ATM Cells

There are two main options for mapping MPEG-2 bit streams into ATM cells: constant bit rate (CBR) transmission over ATM adaptation layer 1 (AAL 1) and transmission over AAL 5. Originally, AAL 2 was envisioned as the adaptation layer that would provide the necessary support for video services over ATM, but AAL 2 is currently undefined. The merits of AAL 5 and AAL 1 are discussed below.

3.1 AAL 5

TS packets map directly into AAL 5 Protocol Data Units (PDUs): two TS packets fit exactly into 8 ATM cells. One major drawback of using AAL 5 is that it lacks a built-in mechanism for timing recovery. Also, AAL 5 does not have a built-in capacity for supporting forward error correction (FEC). One major advantage of using AAL 5 may be financial: since video applications will require a signaling capability, AAL 5 will already be implemented in the ATM equipment. Another advantage of using AAL 5 is that by adopting a NULL convergence sub-layer (CS), no additional network functionality needs to be defined. There are two major categories of video that would likely be transmitted over ATM using AAL 5. Video being sent over heterogeneous networks would likely be sent via AAL 5; this video would probably be carried as IP packets over ATM and would be encoded in proprietary formats such as QuickTime or AVI. For this class of video the network would provide no quality of service guarantees. The second class of video would be variable bit rate (VBR) traffic that is native to the ATM network. This video would be able to benefit from AAL 5 quality of service guarantees.
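
The 8-cell figure follows from the AAL 5 framing arithmetic: two 188-octet TS packets give a 376-octet CPCS-SDU, and adding the 8-octet AAL 5 trailer yields 384 octets, exactly eight 48-octet cell payloads with no padding. A small sketch of that calculation (the function name is illustrative):

```python
# Arithmetic behind the "two TS packets per 8 cells" fit: the CPCS-PDU is
# the SDU plus an 8-octet AAL 5 trailer, padded up to a whole number of
# 48-octet cell payloads. The function name is illustrative.

import math

TS_PACKET = 188       # octets per MPEG-2 transport stream packet
CELL_PAYLOAD = 48     # octets of payload per ATM cell
AAL5_TRAILER = 8      # CPCS-PDU trailer (UU, CPI, length, CRC-32)

def aal5_cells(n_ts_packets):
    """ATM cells needed to carry n TS packets in one AAL5 CPCS-PDU."""
    pdu = n_ts_packets * TS_PACKET + AAL5_TRAILER
    return math.ceil(pdu / CELL_PAYLOAD)   # padding fills the last cell

print(aal5_cells(2))   # 2*188 + 8 = 384 = 8 * 48 exactly -> 8
print(aal5_cells(1))   # 188 + 8 = 196 -> 5 cells, with padding
```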

3.2 AAL 1

A TS packet maps neatly into 4 ATM AAL 1 cells. One major advantage of AAL 1 over AAL 5 is that it was designed for real-time applications. The major disadvantage of AAL 1 is that it supports only constant bit rate applications, while future video applications will probably want to take advantage of variable bit rate transmission options. AAL 1 would also need to be supported in end equipment in addition to the AAL 5 functionality. AAL 1 does provide for forward error correction (FEC), which may be important for some video applications, especially over media prone to errors such as wireless ATM. AAL 1 is expected to be the adaptation layer of choice to support video from H.261 or H.263 encoders, since H.261 video has traditionally been transported over ISDN lines or lines whose rates are multiples of 64 kbps.
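
The 4-cell fit follows from the AAL 1 overhead: each 48-octet cell payload carries a 1-octet SAR header, leaving 47 octets of user data, and 188 = 4 × 47 exactly. Sketched as a quick check:

```python
# Quick check of the AAL 1 fit: each 48-octet ATM cell payload carries a
# 1-octet AAL 1 SAR-PDU header (sequence number and protection), leaving
# 47 octets for data, and a 188-octet TS packet fills exactly four cells.

CELL_PAYLOAD = 48
AAL1_HEADER = 1
TS_PACKET = 188

payload_per_cell = CELL_PAYLOAD - AAL1_HEADER   # 47 octets of user data
cells_per_ts = TS_PACKET // payload_per_cell

assert TS_PACKET % payload_per_cell == 0         # the fit is exact
print(cells_per_ts)                              # 4
```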

4. Quality of Service Issues

To provide video of acceptable quality to the user, the network must provide a certain level of service. Cell delay variation, bit errors, and cell loss can all have severe effects on the quality of the received video stream. A transmission link with a bit error rate of 10⁻⁵ would be acceptable for non-real-time data transmission with some form of error correction. In a video stream, however, this error rate would cause a serious degradation in the quality of the received video. Similarly, cell delay, cell loss, and rate control issues also have a significant impact on the quality of video received. This section examines these issues.

4.1 Cell Delay Variation (CDV)

Cell delay variation, or jitter, can have a significant impact on the quality of a video stream. MPEG-2 video systems use a 27 MHz system clock in the encoder and the decoder. This clock is used to synchronize the operations at the decoder with those at the encoder. It enables video and audio streams to be correctly synchronized and also regulates the retrieval of frames from the decoder buffer to prevent overflow or underflow. To keep the encoder and decoder in synchronization, the encoder places program clock references (PCRs) periodically in the TS. These are used to adjust the system clock at the decoder as necessary. If the ATM cells experience jitter, the PCRs will also experience jitter, which propagates to the system clock used to synchronize the other timing functions of the decoder. This results in picture quality degradation. One proposed solution for traffic over AAL 1 is to use synchronous residual time stamps (SRTS). In this method, both ends of the transmission would need access to the same standard network clock, which could then be used to determine and counter the effects of the CDV. Whether this clock would be readily available is unknown. Also, there is some question whether AAL 1 would provide enough bits for SRTS to be effective.

4.2 Bit Error Rate (BER)

Encoded video streams are highly susceptible to loss of quality due to bit errors. Bit error rates are media dependent, with the lowest rates expected from optical fiber. The encoding method of MPEG-2 video makes it susceptible to picture quality loss from bit errors: an error that occurs in one cell can propagate both spatially and temporally through the video sequence. Spatial propagation occurs because the variable length codes (VLCs) that make up the blocks and slices are coded differentially and utilize motion vectors from the previous VLC. If a VLC is lost, the error will propagate to the next point of absolute coding, which in an MPEG-2 stream is the start of the next video slice. A single bit error can therefore degrade the picture quality of a larger strip in the frame. Temporal error propagation occurs due to the forward and bi-directional prediction in P and B frames. An error that occurs in an I frame will propagate through a preceding B frame and all subsequent P and B frames until the next I frame occurs. The strip that is in error in the original frame, due to the loss of VLC synchronization, is thus propagated temporally through the group of pictures. In a typical video sequence a GOP can last for 12 to 15 frames; at 25 to 30 frames per second the error could persist for roughly 0.5 seconds, long enough to make the video quality objectionable in many cases. Bit errors that occur in P frames will be propagated in a similar manner to surrounding B frames, generating a similar but more limited effect. Bit errors in B frames affect only that frame.
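
The temporal propagation just described can be mimicked with a toy model of a GOP. This is a deliberate simplification that ignores transmission-order details; the function and frame pattern are illustrative, not part of any codec API:

```python
# Toy model of temporal error propagation through an MPEG-2 group of
# pictures (GOP). Simplification: an errored anchor (I or P) corrupts every
# frame after it up to the next I frame, plus the B frames immediately
# before it that predict from it; an errored B frame corrupts only itself.

def affected_frames(gop, error_at):
    hit = {error_at}
    if gop[error_at] == 'B':
        return hit                      # B frames are never used as references
    for i in range(error_at + 1, len(gop)):
        if gop[i] == 'I':
            break                       # the next intra frame stops propagation
        hit.add(i)
    j = error_at - 1                    # B frames just before the errored anchor
    while j >= 0 and gop[j] == 'B':     # may predict from it bi-directionally
        hit.add(j)
        j -= 1
    return hit

gop = "IBBPBBPBBPBB"                    # a typical 12-frame GOP pattern
print(len(affected_frames(gop, 0)))     # error in the I frame hits all 12 frames
print(sorted(affected_frames(gop, 4)))  # error in a B frame stays local: [4]
```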

4.3 Cell Loss Rate (CLR)

For the reasons described in the previous section the cell loss rate also plays a critical role in the quality of the decoded video stream. The cell loss rate can depend on a number of factors including the physical media in use, the switching technique, the switch buffer size, the number of switches traversed in a connection, the QoS class used for the service, and whether the video stream is CBR or VBR. Losses of cells in ATM networks are often a result of congestion in the switches. Providing appropriate rate control is one way to limit cell loss.

4.4 Rate Control

A traffic contract is negotiated between the user and an ATM network at connection setup time. This contract is policed by the usage parameter control (UPC), typically using the generic cell rate algorithm (GCRA), to ensure that the source does not violate the traffic contract. It may be difficult to determine at connection setup time exactly what the required bit rate will be for a particular video stream, since video bit rates vary with changes in scene content. A scene with little motion and limited detail will require low bit rates, but if the motion suddenly increases, the bit rate required for transmission will rise sharply, causing the traffic contract to be violated and cells to be lost. It would be inefficient to allocate bandwidth at the peak cell rate and maximum burst size, yet allocating too little bandwidth can lead to cell loss.
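
A minimal sketch of the GCRA in its virtual-scheduling form, as the UPC might apply it (the class and parameter values here are illustrative):

```python
# Sketch of the generic cell rate algorithm (GCRA) in its "virtual
# scheduling" form, as the UPC might use it to police a traffic contract.
# T is the expected inter-cell time (1 / peak cell rate) and tau the cell
# delay variation tolerance; parameter values below are illustrative.

class GCRA:
    def __init__(self, increment, limit):
        self.T = increment      # contract increment: expected cell spacing
        self.tau = limit        # tolerance for cell delay variation
        self.tat = 0.0          # theoretical arrival time of the next cell

    def conforms(self, arrival):
        if arrival < self.tat - self.tau:
            return False        # cell is too early: contract violated
        self.tat = max(arrival, self.tat) + self.T
        return True             # conforming: advance the schedule

upc = GCRA(increment=10.0, limit=2.0)
arrivals = [0, 10, 19, 28, 30]  # the last cell arrives too soon after a burst
print([upc.conforms(t) for t in arrivals])  # [True, True, True, True, False]
```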

If the user exceeds the negotiated contract, the UPC can tag the cells which violate the contract so they can be dropped in the event of network congestion. Studies have shown that when video streams violate their ATM traffic contracts, it is most often the larger I frames which are at fault and subject to being dropped, rather than P or B frames. Lost I frames also lead to the greatest degradation in picture quality of the three frame types.

Various rate control methods have been proposed for both CBR and VBR video. With CBR video a buffer can be employed to smooth out slight variations in frame sizes. If the bit rate from the encoder rises sharply the buffer can be exceeded and cells lost. In order to prevent large changes in bit rate, one method is to use a closed loop encoder. With a closed loop encoder the status of the buffer is fed back to the encoder. If the buffer is close to full, the encoder can lower the bit rate of the frames it is encoding by increasing the DCT quantization step size. This is done at the expense of video quality.
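
The closed-loop idea can be sketched with a toy simulation in which buffer fullness is fed back to scale the quantization step (all constants here are invented for illustration, not taken from any encoder):

```python
# Toy closed-loop rate control: the encoder buffer level is fed back, and
# the quantization step is raised when the buffer nears full, shrinking
# subsequent frames at the cost of quality. All constants are illustrative.

def encode_sequence(frame_bits, drain_per_frame, buf_size):
    buf, qstep, levels = 0.0, 1.0, []
    for bits in frame_bits:
        buf = max(0.0, buf + bits / qstep - drain_per_frame)
        fullness = buf / buf_size
        if fullness > 0.8:
            qstep *= 1.5          # nearly full: quantize more coarsely
        elif fullness < 0.2 and qstep > 1.0:
            qstep /= 1.5          # buffer has drained: restore quality
        levels.append(fullness)
    return levels

levels = encode_sequence([6000] * 20, drain_per_frame=5000, buf_size=20000)
assert max(levels) <= 1.0         # feedback kept the buffer from overflowing
```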

CBR video can be transmitted with constant quality by employing a buffer in the end user equipment and a delay before beginning video playback. This method is useful for video on demand (VoD), which will tolerate a delay before playback. The delay and buffer allow a number of frames to be transmitted to the decoder ahead of time, so that the bursty MPEG frames can be sent at a constant transmission rate. The relationship between the initial delay, buffer size, and transmission rate has also been studied.

Rate control methods have also been proposed for VBR traffic. One proposed method uses rate-based flow control, a modified form of explicit backward congestion notification that was first proposed for available bit rate (ABR) service. With this scheme, the queue occupancy of each switching node is monitored as a measure of congestion. The users are notified of the congestion status of the switches and are instructed to adjust their transmission rates accordingly. The signaling information is transmitted to the user through operation and maintenance (OAM) cells called resource management (RM) cells. Studies of the trade-off between congestion levels and picture quality degradation have been reported. Another method for rate control replaces the leaky bucket UPC with a control based on fuzzy logic. In simulations the fuzzy policer was able to minimize the cell loss rate while also minimizing the effects of policing on picture quality. Work continues in this area as well.
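
The initial-delay/buffer trade-off described above for VoD can be sketched numerically (the frame sizes and rates below are invented for illustration):

```python
# Sketch of the initial delay / buffer trade-off: prefilling the decoder
# buffer for `delay_frames` frame times lets playback survive frames larger
# than the per-frame transmission budget. Frame sizes are invented.

def underflows(frame_bits, rate_per_frame, delay_frames):
    """True if the decoder buffer ever runs dry given the startup delay."""
    buf = rate_per_frame * delay_frames   # bits banked before playback starts
    for bits in frame_bits:
        buf += rate_per_frame - bits      # receive at CBR, consume one frame
        if buf < 0:
            return True
    return False

frames = [8000, 1000, 1000, 8000, 1000, 1000]   # bursty I/B/B pattern
rate = sum(frames) / len(frames)                 # send at the average rate
print(underflows(frames, rate, delay_frames=0))  # True: immediate playback fails
print(underflows(frames, rate, delay_frames=2))  # False: short delay succeeds
```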

5. Error Correction/Concealment

Bit errors and cell loss in video transmissions tend to cause noticeable picture quality degradation. Error correction and concealment techniques provide methods for the decoder to deal with errors in a way that minimizes the quality loss. Error correction techniques remove the errors and restore the original information. Error concealment techniques do not remove the errors, but manage them in a way that makes them less noticeable to the viewer. Encoding parameter adjustments can also be made to reduce the effects of errors and cell loss.

5.1 Error Correction

Error correction is more difficult for real-time data than for non-real-time data. The real-time nature of video streams means that they cannot tolerate the delay associated with a traditional retransmission-based error correction technique such as automatic repeat request (ARQ). Delay is introduced in the acknowledgment of receipt of frames as well as in waiting for the timeout to expire before a frame is retransmitted. For this reason ARQ is not useful for error correction of video streams. Forward error correction (FEC) is another error-correcting technique, and it is supported in ATM by AAL 1. FEC takes a set of input symbols representing data and adds redundancy, producing a different and larger set of output symbols. FEC methods that can be used include Hamming, Bose-Chaudhuri-Hocquenghem (BCH), and Reed-Solomon codes. FEC presents a trade-off to the user. On the positive side, FEC allows lost information to be recovered. On the negative side, this ability is paid for in the form of a higher bandwidth requirement for transmission. This added traffic could introduce additional congestion to the network, leading to a greater number of lost cells, which may or may not be recoverable with FEC. The role of FEC in video is still a topic of discussion.
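
The redundancy-for-recovery trade-off can be illustrated with plain XOR parity. Note that AAL 1 actually specifies Reed-Solomon coding; this XOR scheme is only a sketch of the principle, not the AAL 1 mechanism:

```python
# The redundancy-for-recovery principle of FEC shown with plain XOR parity:
# one parity cell per group lets the receiver rebuild any single lost cell,
# paid for with extra bandwidth. (AAL 1 actually specifies Reed-Solomon
# coding; this XOR scheme is only an illustration of the idea.)

from functools import reduce

def parity_cell(cells):
    """XOR all cell payloads together, octet by octet."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*cells))

def recover(received, parity):
    """Rebuild the single missing cell (marked None) from the survivors."""
    survivors = [c for c in received if c is not None]
    rebuilt = parity_cell(survivors + [parity])
    return [c if c is not None else rebuilt for c in received]

cells = [bytes([i] * 48) for i in range(4)]     # four 48-octet payloads
damaged = [cells[0], None, cells[2], cells[3]]  # cell 1 lost in transit
assert recover(damaged, parity_cell(cells)) == cells
```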

5.2 Error Concealment

Error concealment is a method of reducing the visible impact of errors and cell loss in the video stream. These methods include temporal concealment, spatial concealment, and motion compensated concealment. With temporal concealment, the errored data in the current frame is replaced by the error-free data from the previous frame. In video sequences where there is little motion in the scene, this method will be quite effective. Another method of concealing errors is spatial concealment, which involves interpolating the data that surrounds an errored block in a frame. This method is most useful if the data does not contain a high level of detail. Motion compensated concealment involves estimating the motion vectors from neighboring error-free blocks. This method could be used to enhance spatial or temporal concealment techniques. It cannot be used with I frames, since they carry no motion vectors.
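
Temporal and spatial concealment can be sketched on a toy one-dimensional "frame" of block values (the helper functions and values are illustrative, not from any decoder):

```python
# Sketch of two concealment strategies on a toy one-dimensional "frame" of
# block luminance values: temporal concealment copies the co-located block
# from the previous frame, spatial concealment interpolates its neighbors.

def temporal_conceal(current, previous, bad):
    out = list(current)
    for i in bad:
        out[i] = previous[i]          # reuse the block from the last frame
    return out

def spatial_conceal(current, bad):
    out = list(current)
    for i in bad:
        left = out[i - 1] if i > 0 else out[i + 1]
        right = out[i + 1] if i < len(out) - 1 else out[i - 1]
        out[i] = (left + right) // 2  # interpolate surrounding blocks
    return out

prev = [10, 12, 14, 16]
curr = [10, None, 14, 16]             # block 1 was hit by a cell loss
print(temporal_conceal(curr, prev, bad=[1]))   # [10, 12, 14, 16]
print(spatial_conceal(curr, bad=[1]))          # [10, 12, 14, 16]
```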

5.3 Encoding Parameter Adjustment

The encoding parameters for a video stream can be adjusted to make the stream more resistant to bit errors and cell loss. MPEG-2 (as well as Motion JPEG) supports scalable coding, which allows multiple qualities of service to be encoded in the same video stream. When congestion is not present in the network, all the cells arrive at the decoder and the quality is optimal. When congestion is present, the coding can be performed so that the cells which provide a base layer of quality reach the decoder while the enhancement cells are lost. Temporal localization is another method that can improve the quality of the video in the presence of cell loss. It involves adding additional I frames to the video stream, which prevent long error propagation strings when a cell is lost, since errors rarely propagate beyond the next I frame encountered. The additional I frames are larger than the P or B frames they replace, so compression efficiency is reduced; in addition, the greater bit rate required for these added I frames can contribute to network congestion. A third technique that can be performed at the encoder is to decrease the slice size. Since re-synchronization after an error occurs at the start of the next slice, decreasing the slice size allows this re-synchronization to occur sooner.
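
The effect of slice size on the spatial extent of an error can be sketched with simple arithmetic (the frame and slice sizes below are invented for illustration):

```python
# Why shorter slices limit spatial damage: a bit error corrupts decoding
# from the errored macroblock to the end of its slice, where the next slice
# header restores VLC synchronization. Sizes below are illustrative.

def macroblocks_lost(frame_mbs, slice_len, error_mb):
    """Macroblocks lost when an error hits macroblock `error_mb`."""
    slice_end = ((error_mb // slice_len) + 1) * slice_len
    return min(slice_end, frame_mbs) - error_mb

print(macroblocks_lost(120, slice_len=40, error_mb=10))  # 30 macroblocks lost
print(macroblocks_lost(120, slice_len=10, error_mb=10))  # only 10 lost
```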

6. Video on Demand (VoD)

The Audiovisual Multimedia Services Technical Committee of the ATM Forum released the Video on Demand specification, which represents the first phase of a study of multimedia issues relating to ATM. The document addresses issues relating to the transport of constant packet rate MPEG-2 Single Program Transport Streams over ATM networks. While the scope of the document is very limited, many believe it will serve as a guide for the carriage of a wide range of video over ATM networks.

6.1 Reference Model

The configuration consists of a server, a client, and a separate session/connection control unit. The client could be either a set-top terminal (STT) or an interworking unit (IWU). The reference model depicts five communication links, which would be served by five separate virtual connections (VCs). If the server and client both support signaling (ATM Forum Signaling Specification 4.0), the user-to-network signaling VCs would be used directly. In the event that either the server or the client, or both, did not support signaling, proxy signaling could be employed as described in a later section. The MPEG-2 Single Program Transport Stream traffic would be accommodated on a separate VC, which would be the last VC connection established. The User-to-User Control VC would be used for implementation-specific information flows. VoD Specification 1.1 indicates that one of the main purposes for this VC would be to exchange program selection information between the client and the server. This would allow the end user to select a specific item (e.g. a movie) for viewing and inform the server of that selection. The VoD Session Control VC would be used for session control information. This link would be utilized to facilitate connection setup between the server and the client in the event that proxy signaling was required.

6.2 Protocols

In Figure 7, the protocol reference model has been combined with the reference configuration for VoD. The network adaptation uses AAL 5 in the manner described in the previous section on packing MPEG-2 TS packets into AAL 5 cells. Specification 1.1 allows for the following mapping:

1) Every AAL5-SDU shall contain N MPEG-2 SPTS packets, unless there are fewer than N packets left in the SPTS. (Remaining packets are placed in the final CPCS-SDU.)

2) The value of N is established via ATM signaling, with N equal to the AAL5 CPCS-SDU size divided by 188. The default AAL5 CPCS-SDU size is 376 octets, which is two TS packets (N=2).

3) In order to ensure a base level of interoperability, all equipment shall support the value N=2 (AAL5 CPCS-SDU size of 376 octets).
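
The rule for N can be sketched directly (assuming, as the rule implies, that the signaled CPCS-SDU size is a whole multiple of 188 octets; the function name is illustrative):

```python
# The Specification 1.1 rule for N, sketched directly: N is the signaled
# AAL5 CPCS-SDU size divided by 188, and the default of 376 octets gives
# the mandatory baseline N = 2. Function name is illustrative.

TS_PACKET = 188

def packets_per_sdu(cpcs_sdu_size=376):
    if cpcs_sdu_size % TS_PACKET != 0:
        raise ValueError("CPCS-SDU size must be a multiple of 188 octets")
    return cpcs_sdu_size // TS_PACKET

print(packets_per_sdu())      # default 376-octet SDU -> N = 2
print(packets_per_sdu(752))   # a larger signaled SDU -> N = 4
```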

6.3 Proxy Signaling

Proxy signaling procedures are defined by the VoD specification. Proxy signaling is used when either the server or the client, or both, does not support signaling. The basic procedure is as follows: the client contacts the session controller, which provides the client with a list of servers from which to choose. When the client selects a server, the session controller informs the server that the client wishes to establish a connection. If the server agrees, the session controller instructs the ATM connection controller to establish a VC for user-to-user control information. It is over this user-to-user control VC that the client makes a specific program selection (e.g. which movie to receive). A VC would then be established for the transfer of MPEG-2 SPTS video from the server to the client.

7. Video over Wireless ATM Networks

There has been a great deal of interest recently in the area of wireless networking. Issues such as bit error rates and cell loss rates are even more important when transmitting video over a wireless network. A very high performance wireless local area network (VHP-WLAN) operating in the 60 GHz millimeter wave band can experience cell loss rates of 10⁻⁴ to 10⁻². To provide adequate picture quality to the user, some form of error correction or concealment must be employed. One option is to use the MPEG-2 error resilience techniques described previously; ARQ will probably not work, for the reasons discussed previously. One proposed solution is to modify the MPEG-2 standard slightly when it is used over wireless ATM networks. This technique is known as macroblock re-synchronization: the first macroblock in every ATM cell is coded absolutely rather than differentially. This allows re-synchronization of the video stream much more often than would be possible if re-synchronization could only take place at the slice level. These proposals indicate that it would be relatively simple to incorporate this method into the existing MPEG-2 coding standard by adding an inter-working adapter at the boundary between the fixed and wireless networks. A second proposal for improving error resilience in wireless networks is to use FEC methods. In addition, improved performance can be achieved by using a two-layer scalable MPEG-2 coding scheme rather than a single layer.

8. Summary

Many issues relating to video delivery over ATM have been discussed in this paper, including video compression, ATM adaptation layer selection, quality of service, error correction and concealment, video on demand, and wireless ATM. Most of these areas are still the focus of debate. Given the potential that ATM networks have for the delivery of video services, it is clear that this topic will continue to be of great interest in the near future.


Berc, L. A BAGNet Reality Check. 7 Feb. 2000.

Jeffries, R. ATM Goes to Work in the LAN. Telecommunications Online, December 1996. 7 Feb. 2000.

McDysan, D. and Spohn, D. ATM Theory and Applications. New York: McGraw-Hill.


Onvural, R. O. Asynchronous Transfer Mode Networks. Boston: Artech House, 1995.

Pandya, A. S. and Sen, E. ATM Technology for Broadband Telecommunications Networks. Boca Raton: CRC Press, 1999.

Passmore, D. and Lazar, I. Pulling the Plug on ATM. Network World, 29 Nov. 1999. 13 Feb. 2000.

Steinberg, S. Netheads vs. Bellheads. October 1996. 7 Feb. 2000.

Zimmerman, C. ATM: The Technology That Would Not Die. 21 Apr. 1999. 14 Feb. 2000.

Wireless ATM.

Wireless ATM - An Overview.

Wireless ATM (WATM).

Wireless ATM: Technology and Applications.

Toh, C-K. Wireless ATM and Ad-Hoc Networks: Protocols and Architectures. Kluwer Academic Publishers, 1997.

Bauz, G. "Addressing in Wireless ATM Networks." ATM Forum/97-0322, 1997.

Black, Uyless. Emerging Communications Technologies, 2nd Edition. 1997.




Figure 2 WATM Protocol Architecture
