SG242580 IP Network Design Guide



Martin W. Murhammer, Kok-Keong Lee, Payam Motallebi,
Paolo Borghi, Karl Wozabal
International Technical Support Organization
SG24-2580-01
IP Network Design Guide
June 1999
© Copyright International Business Machines Corporation 1995, 1999. All rights reserved.
Note to U.S. Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Second Edition (June 1999)
This edition applies to Transmission Control Protocol/Internet Protocol (TCP/IP) in general and selected IBM and
OEM implementations thereof.
Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. HZ8 Building 678
P.O. Box 12195
Research Triangle Park, NC 27709-2195
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way
it believes appropriate without incurring any obligation to you.
Take Note!
Before using this information and the product it supports, be sure to read the general information in Appendix C, “Special Notices” on page 287.
Preface ......................................................ix
How This Book Is Organized . .........................................ix
The Team That Wrote This Redbook . ...................................x
Comments Welcome ................................................xi
Chapter 1. Introduction ..........................................1
1.1 The Internet Model ...........................................1
1.1.1 A Brief History of the Internet and IP Technologies ......1
1.1.2 The Open Systems Interconnection (OSI) Model .............2
1.1.3 The TCP/IP Model .........................................4
1.1.4 The Need for Design in IP Networks .......................5
1.1.5 Designing an IP Network ..................................6
1.2 Application Considerations ..................................11
1.2.1 Bandwidth Requirements ..................................11
1.2.2 Performance Requirements ................................12
1.2.3 Protocols Required ......................................12
1.2.4 Quality of Service/Type of Service (QoS/ToS) ............12
1.2.5 Sensitivity to Packet Loss and Delay ....................13
1.2.6 Multicast ...............................................13
1.2.7 Proxy-Enabled ...........................................13
1.2.8 Directory Needs .........................................13
1.2.9 Distributed Applications ................................14
1.2.10 Scalability ............................................14
1.2.11 Security ...............................................14
1.3 Platform Considerations .....................................14
1.4 Infrastructure Considerations ...............................16
1.5 The Perfect Network .........................................17
Chapter 2. The Network Infrastructure ...........................19
2.1 Technology ..................................................20
2.1.1 The Basics ..............................................20
2.1.2 LAN Technologies ........................................22
2.1.3 WAN Technologies ........................................31
2.1.4 Asynchronous Transfer Mode (ATM) ........................47
2.1.5 Fast Internet Access ....................................51
2.1.6 Wireless IP .............................................55
2.2 The Connecting Devices ......................................57
2.2.1 Hub .....................................................57
2.2.2 Bridge ..................................................58
2.2.3 Router ..................................................60
2.2.4 Switch ..................................................62
2.3 ATM Versus Switched High-Speed LAN ..........................67
2.4 Factors That Affect a Network Design ........................68
2.4.1 Size Matters ............................................68
2.4.2 Geographies .............................................68
2.4.3 Politics ................................................68
2.4.4 Types of Application ....................................68
2.4.5 Need for Fault Tolerance ................................69
2.4.6 To Switch or Not to Switch ..............................69
2.4.7 Strategy ................................................69
2.4.8 Cost Constraints ........................................69
2.4.9 Standards ...............................................69
Chapter 3. Address, Name and Network Management .................71
3.1 Address Management ..........................................71
3.1.1 IP Addresses and Address Classes ........................71
3.1.2 Special Case Addresses ..................................73
3.1.3 Subnets .................................................74
3.1.4 IP Address Registration .................................79
3.1.5 IP Address Exhaustion ...................................80
3.1.6 Classless Inter-Domain Routing (CIDR) ...................81
3.1.7 The Next Generation of the Internet Address: IPv6, IPng .83
3.1.8 Address Management Design Considerations ................83
3.2 Address Assignment ..........................................86
3.2.1 Static ..................................................86
3.2.2 Reverse Address Resolution Protocol (RARP) ..............86
3.2.3 Bootstrap Protocol (BootP) ..............................86
3.2.4 Dynamic Host Configuration Protocol (DHCP) ..............87
3.3 Name Management .............................................89
3.3.1 Static Files ............................................89
3.3.2 The Domain Name System (DNS) ............................90
3.3.3 Dynamic Domain Name System (DDNS) ......................104
3.3.4 DNS Security ...........................................104
3.3.5 Does the Network Need DNS? .............................106
3.3.6 Domain Administration ..................................107
3.3.7 A Few Words on Creating Subdomains .....................112
3.3.8 A Note on Naming Infrastructure ........................113
3.3.9 Registering an Organization's Domain Name ..............113
3.3.10 Dynamic DNS Names (DDNS) ..............................114
3.3.11 Microsoft Windows Considerations ......................115
3.3.12 Final Word on DNS .....................................118
3.4 Network Management .........................................118
3.4.1 The Various Disciplines ................................119
3.4.2 The Mechanics of Network Management ....................119
3.4.3 The Effects of Network Management on Networks ..........123
3.4.4 The Management Strategy ................................124
Chapter 4. IP Routing and Design ...............................127
4.1 The Need for Routing .......................................127
4.2 The Basics .................................................128
4.3 The Routing Protocols ......................................130
4.3.1 Static Routing versus Dynamic Routing ..................131
4.3.2 Routing Information Protocol (RIP) .....................135
4.3.3 RIP Version 2 ..........................................137
4.3.4 Open Shortest Path First (OSPF) ........................138
4.3.5 Border Gateway Protocol-4 (BGP-4) ......................141
4.4 Choosing a Routing Protocol ................................142
4.5 Bypassing Routers ..........................................144
4.5.1 Router Accelerator .....................................144
4.5.2 Next Hop Resolution Protocol (NHRP) ....................145
4.5.3 Route Switching ........................................148
4.5.4 Multiprotocol over ATM (MPOA) ..........................149
4.5.5 VLAN IP Cut-Through ....................................150
4.6 Important Notes about IP Design ............................151
4.6.1 Physical versus Logical Network Design .................152
4.6.2 Flat versus Hierarchical Design ........................152
4.6.3 Centralized Routing versus Distributed Routing .........152
4.6.4 Redundancy .............................................153
4.6.5 Frame Size .............................................154
4.6.6 Filtering ..............................................155
4.6.7 Multicast Support ......................................155
4.6.8 Policy-Based Routing ...................................155
4.6.9 Performance ............................................155
Chapter 5. Remote Access .......................................159
5.1 Remote Access Environments .................................159
5.1.1 Remote-to-Remote .......................................159
5.1.2 Remote-to-LAN ..........................................160
5.1.3 LAN-to-Remote ..........................................160
5.1.4 LAN-to-LAN .............................................161
5.2 Remote Access Technologies .................................162
5.2.1 Remote Control Approach ................................163
5.2.2 Remote Client Approach .................................163
5.2.3 Remote Node Approach ...................................164
5.2.4 Remote Dial Access .....................................164
5.2.5 Dial Scenario Design ...................................166
5.2.6 Remote Access Authentication Protocols .................168
5.2.7 Point-to-Point Tunneling Protocol (PPTP) ...............170
5.2.8 Layer 2 Forwarding (L2F) ...............................171
5.2.9 Layer 2 Tunneling Protocol (L2TP) ......................172
5.2.10 VPN Remote User Access ................................180
Chapter 6. IP Security .........................................187
6.1 Security Issues ............................................187
6.1.1 Common Attacks .........................................187
6.1.2 Observing the Basics ...................................187
6.2 Solutions to Security Issues ...............................188
6.2.1 Implementations ........................................191
6.3 The Need for a Security Policy .............................192
6.3.1 Network Security Policy ................................193
6.4 Incorporating Security into Your Network Design ............194
6.4.1 Expecting the Worst, Planning for the Worst ............194
6.4.2 Which Technology To Apply, and Where? ..................195
6.5 Security Technologies ......................................197
6.5.1 Securing the Network ...................................197
6.5.2 Securing the Transactions ..............................210
6.5.3 Securing the Data ......................................215
6.5.4 Securing the Servers ...................................218
6.5.5 Hot Topics in IP Security ..............................218
Chapter 7. Multicasting and Quality of Service .................227
7.1 The Road to Multicasting ...................................227
7.1.1 Basics of Multicasting .................................229
7.1.2 Types of Multicasting Applications .....................229
7.2 Multicasting ...............................................229
7.2.1 Multicast Backbone on the Internet (MBONE) .............230
7.2.2 IP Multicast Transport .................................231
7.2.3 Multicast Routing ......................................234
7.2.4 Multicast Address Resolution Server (MARS) .............238
7.3 Designing a Multicasting Network ...........................239
7.4 Quality of Service .........................................241
7.4.1 Transport for New Applications .........................241
7.4.2 Quality of Service for IP Networks .....................243
7.4.3 Resource Reservation Protocol (RSVP) ...................243
7.4.4 Multiprotocol Label Switching (MPLS) ...................244
7.4.5 Differentiated Services ................................245
7.5 Congestion Control .........................................245
7.5.1 First-In-First-Out (FIFO) ..............................246
7.5.2 Priority Queuing .......................................246
7.5.3 Weighted Fair Queuing (WFQ) ............................246
7.6 Implementing QoS ...........................................247
Chapter 8. Internetwork Design Study ...........................249
8.1 Small-Sized Network (<80 Users) ............................249
8.1.1 Connectivity Design ....................................250
8.1.2 Logical Network Design .................................252
8.1.3 Network Management .....................................253
8.1.4 Addressing .............................................254
8.1.5 Naming .................................................255
8.1.6 Connecting the Network to the Internet .................255
8.2 Medium-Size Network (<500 Users) ...........................256
8.2.1 Connectivity Design ....................................258
8.2.2 Logical Network Design .................................259
8.2.3 Addressing .............................................261
8.2.4 Naming .................................................262
8.2.5 Remote Access ..........................................263
8.2.6 Connecting the Network to the Internet .................264
8.3 Large-Size Network (>500 Users) ............................265
Appendix A. Voice over IP ......................................271
A.1 The Need for Standardization ...............................271
A.1.1 The H.323 ITU-T Recommendations ........................271
A.2 The Voice over IP Protocol Stack ...........................273
A.3 Voice Terminology and Parameters ...........................273
A.4 Voice over IP Design and Implementations ...................275
A.4.1 The Voice over IP Design Approach ......................277
Appendix B. IBM TCP/IP Products Functional Overview ............279
B.1 Software Operating System Implementations ..................279
B.2 IBM Hardware Platform Implementations ......................284
Appendix C. Special Notices ......................................287
Appendix D. Related Publications .................................289
D.1 International Technical Support Organization Publications . . . ...........289
D.2 Redbooks on CD-ROMs . . . .....................................289
D.3 Other Resources ............................................289
How to Get ITSO Redbooks .................................... 291
IBM Redbook Order Form ...........................................292
List of Abbreviations ..........................................293
Index .......................................................299
ITSO Redbook Evaluation ......................................309
This redbook identifies some of the basic design aspects of IP networks and
explains how to deal with them when implementing new IP networks or
redesigning existing IP networks. This project focuses on internetwork and
transport layer issues such as address and name management, routing, network
management, security, load balancing and performance, design impacts of the
underlying networking hardware, remote access, quality of service, and
platform-specific issues. Application design aspects, such as e-mail, gateways,
Web integration, etc., are discussed briefly where they influence the design of an
IP network.
After a general discussion of the aforementioned design areas, this redbook
provides three examples for IP network design, depicting a small, medium and
large network. You are taken through the steps of the design and the reasoning
as to why things are shown one way instead of another. Of course, every network
is different and therefore these examples are not intended to generalize. Their
main purpose is to illustrate a systematic approach to an IP network design given
a specific set of requirements, expectations, technologies and budgets.
This redbook will help you design, create or change IP networks by implementing the basic logical infrastructure required for the successful operation of such networks. It does not describe how to deploy corporate applications such as e-mail, e-commerce, Web servers or distributed databases, to name just a few.
How This Book Is Organized
Chapter 1 contains an introduction to TCP/IP and to important considerations of network design in general. It explains the importance of the applications and business models that ultimately dictate the approach a design will take, which is important to understand before you begin the actual network design.
Chapter 2 contains an overview of network hardware, infrastructure and standard
protocols on top of which IP networks can be built. It describes the benefits and
peculiarities of those architectures and points out specific issues that are
important when IP networks are to be built on top of a particular network.
Chapter 3 contains information on structuring IP networks in regard to addresses,
domains and names. It explains how to derive the most practical
implementations, and it describes the influence that each of those can have on
the network design.
Chapter 4 explains routing, a cornerstone in any IP network design. This chapter
closes the gap between the network infrastructure and the logical structure of the
IP network that runs on top of it. If you master the topics and suggestions in this
chapter, you will have made the biggest step toward a successful design.
Chapter 5 contains information on remote access, one of the fastest growing
areas in IP networks today. This information will help you identify the issues that
are inherent to various approaches of remote access and it will help you find the
right solution to the design of such network elements.
Chapter 6 contains information on IP security. It illustrates how different security
architectures protect different levels of the TCP/IP stack, from the application to
the physical layer, and what the influences of some of the more popular security
architectures are on the design of IP networks.
Chapter 7 gives you a thorough tune-up on IP multicasting and IP quality of
service (QoS), describing the pros and cons and the best design approaches to
networks that have to include these features.
Chapter 8 contains descriptions of sample network designs for small, medium
and large companies that implement an IP network in their environment. These
examples are meant to illustrate a systematic design approach but are slightly
influenced by real-world scenarios.
Appendix A provides an overview of the Voice over IP technology and design
considerations for implementing it.
Appendix B provides a cross-platform TCP/IP functional comparison for IBM
hardware and software and Microsoft Windows platforms.
The Team That Wrote This Redbook
This redbook was produced by a team of specialists from around the world
working at the International Technical Support Organization, Raleigh Center. The
leader of this project was Martin W. Murhammer.
Martin W. Murhammer is a Senior I/T Availability Professional at the ITSO
Raleigh Center. Before joining the ITSO in 1996, he was a Systems Engineer in
the Systems Service Center at IBM Austria. He has 13 years of experience in the
personal computing environment including areas such as heterogeneous
connectivity, server design, system recovery, and Internet solutions. He is an IBM
Certified OS/2 and LAN Server Engineer and a Microsoft Certified Professional
for Windows NT. Martin has co-authored a number of redbooks during
residencies at the ITSO Raleigh and Austin Centers. His latest publications are
TCP/IP Tutorial and Technical Overview, GG24-3376, and A Comprehensive
Guide to Virtual Private Networks, Volume 1: IBM Firewall, Server and Client
Solutions.
Kok-Keong Lee is an Advisory Networking Specialist with IBM Singapore. He
has 10 years of experience in the networking field. He holds a degree in
Computer and Information Sciences from the National University of Singapore.
His areas of expertise include ATM, LAN switches and Fast Internet design for
cable/ADSL networks.
Payam Motallebi is an IT Specialist with IBM Australia. He has three years of
experience in the IT field. He holds a degree in Computer Engineering from
Wollongong University where he is currently undertaking a Master of Computer
Engineering in Digital Signal Processing. He has worked at IBM for one year. His
areas of expertise include UNIX, specifically AIX, and TCP/IP services.
Paolo Borghi is a Systems Engineer in the IBM Global Services Network Services
at IBM Italia S.p.A. He has three years of experience in TCP/IP and
multiprotocol internetworking, providing technical support for network
outsourcing and network design for cross-industry solutions. He holds a
degree in High Energy Particle Physics from Universita degli Studi di Milano.
Karl Wozabal is a Senior Networking Specialist at the ITSO Raleigh Center. He
writes extensively and teaches IBM classes worldwide on all areas of TCP/IP.
Before joining the ITSO, Karl worked at IBM Austria as a Networking Support Specialist.
Thanks to the following people for their invaluable contributions to this project:
Jonathan Follows, Shawn Walsh, Linda Robinson
International Technical Support Organization, Raleigh Center
Thanks to the authors of the first edition of this redbook:
Alfred B. Christensen, Peter Hutchinson, Andrea Paravan, Pete Smith
Comments Welcome
Your comments are important to us!
We want our redbooks to be as helpful as possible. Please send us your
comments about this or other redbooks in one of the following ways:
Fax the evaluation form found in “ITSO Redbook Evaluation” on page 309 to
the fax number shown on the form.
Use the online evaluation form found at
Send your comments in an Internet note to
Chapter 1. Introduction
We have seen dramatic changes in the business climate in the 1990s, especially
with the growth of e-business on the Internet. More business is conducted
electronically and deals are closed at lightning speed. These changes have
affected how a company operates in this electronic age, and computer systems
have taken on a very important role in a company's profile. The Internet has
introduced a new arena in which companies compete, and more companies are going
global at the same time to grow revenues. Connectivity has never been as
important as it is today.
The growth of the Internet has reached a stage where a company has to get
connected to it in order to stay relevant and compete. Traditional text-based
transaction systems have been replaced by Web-based applications with
multimedia content. The technologies related to the Internet have become
mandatory subjects not only for MIS personnel but also for the CEO. And
TCP/IP has become a buzzword overnight.
What is TCP/IP?
How does one build a TCP/IP network?
What are the technologies involved?
How does one get connected to the Internet, if the need arises?
Are there any guidelines?
While this book does not and cannot teach you how to run your business, it briefly
describes the various TCP/IP components and provides a comprehensive
approach to building a TCP/IP network.
1.1 The Internet Model
It has been estimated that there are currently 40,000,000 hosts connected to the
Internet. The rapid rise in popularity of the Internet is mainly due to the World
Wide Web (WWW) and e-mail systems, which enable the free exchange of information.
A cursory glance at the history of the Internet and its growth enables you to
understand the reason for its popularity and, perhaps, to anticipate how
future networks should be built.
1.1.1 A Brief History of the Internet and IP Technologies
In the 1960s and 1970s, many different networks were running their own
protocols and implementations. Sharing information among these networks
soon became a problem, and there was a need for a common protocol. The
Defense Advanced Research Projects Agency (DARPA) funded the exploration
of such a common protocol, which resulted in the ARPANET protocol suite and
introduced the fundamental concept of layering. The TCP/IP protocol suite then
evolved from the ARPANET protocol suite and took shape in 1978. With the
use of TCP/IP, a network was created that was mainly used by government
agencies and research institutes for the purpose of information sharing and
research collaboration.
In the early 1980s, TCP/IP became the backbone protocol in multivendor networks
such as ARPANET, NSFNET and regional networks. The protocol suite was
integrated into the University of California at Berkeley's UNIX operating system
and became available to the public for a nominal fee. From this point on TCP/IP
became widely used due to its inexpensive availability in UNIX and its spread to
other operating systems.
Today, TCP/IP provides the ability for corporations to merge differing physical
networks while giving users a common suite of functions. It allows interoperability
between equipment supplied by multiple vendors on multiple platforms, and it
provides access to the Internet.
The Internet of today consists of large international, national and regional
backbone networks, which allow local and campus networks and individuals
access to global resources. Use of the Internet has grown exponentially over the
last three years, especially with the consumer market adopting it.
So why has the use of TCP/IP grown at such a rate?
The reasons include the availability of common application functions across
differing platforms and the ability to access the Internet, but the primary reason is
that of interoperability. The open standards of TCP/IP allow corporations to
interconnect or merge different platforms. An example is the simple case of
allowing file transfer capability between an IBM MVS/ESA host and, perhaps, an
Apple Macintosh workstation.
TCP/IP also provides transport for other protocols such as IPX, NetBIOS or SNA.
For example, these protocols can make use of a TCP/IP network to connect to
other networks running the same protocol.
One further reason for the growth of TCP/IP is the popularity of the socket
programming interface, which is the programming interface between the TCP/IP
transport protocol layer and TCP/IP applications. A large number of applications
today have been written for the TCP/IP socket interface. The Request for
Comments (RFC) process, overseen by the Internet Architecture Board (IAB) and
the Internet Engineering Task Force (IETF), provides for the continual upgrading
and extension of the protocol suite.
1.1.2 The Open Systems Interconnection (OSI) Model
Around the time that DARPA was researching the internetworking protocol
suite that eventually led to TCP/IP and the Internet (see 1.1.1, “A Brief History
of the Internet and IP Technologies” on page 1), an alternative standard approach
was being led by the CCITT (Comité Consultatif International Telegraphique et
Telephonique, or Consultative Committee on International Telegraph and
Telephone) and the ISO (International Organization for Standardization). The
CCITT has since become the ITU-T (International Telecommunication Union -
Telecommunication Standardization Sector).
The resulting standard was the OSI (Open Systems Interconnection) Reference
Model (ISO 7498), which defined a seven-layer model of data communications,
as shown in Figure 1 on page 3. Each layer of the OSI Reference Model provides
a set of functions to the layer above and, in turn, relies on the functions provided
by the layer below. Although messages can only pass vertically through the stack
from layer to layer, from a logical point of view, each layer communicates directly
with its peer layer on other nodes.
Figure 1. OSI Reference Stack
The seven layers are:
Application
The application layer gives the user access to all the lower OSI functions, and
its purpose is to support semantic exchanges between applications existing in
open systems. An example is the Web browser.
Presentation
The presentation layer is concerned with the representation of user or system
data. This includes necessary conversions (for example, for a printer control
character) and code translation (for example, ASCII to EBCDIC).
Session
The session layer provides mechanisms for organizing and structuring
interaction between applications and/or devices.
Transport
The transport layer provides transparent and reliable end-to-end data transfer,
relying on lower layer functions for handling the peculiarities of the actual
transfer medium. TCP and UDP are examples of transport layer protocols.
Network
The network layer provides the means to establish connections between
networks. The standard also includes procedures for the operational control of
internetwork communications and for the routing of information through
multiple networks. IP is an example of a network layer protocol.
Data Link
The data link layer provides the functions and protocols to transfer data
between network entities and to detect (and possibly correct) errors that may
occur in the physical layer.
Physical
The physical layer is responsible for physically transmitting the data over the
communication link. It provides the mechanical, electrical, functional and
procedural standards to access the physical medium.
The layered approach was selected as a basis to provide flexibility and
open-ended capability through defined interfaces. The interfaces permit some
layers to be changed while leaving other layers unchanged. In principle, as long
as standard interfaces to the adjacent layers are adhered to, an implementation
can still work.
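The interchangeability that layering provides can be sketched in a small, purely illustrative Python example (the header strings below are invented for illustration and are not real protocol formats): each layer prepends its own header and treats everything handed down from above as opaque payload.

```python
# Toy illustration of layered encapsulation (not real protocol headers):
# each layer prepends its own header and treats the data from the layer
# above as an opaque payload.
def encapsulate(payload: bytes, headers: list[bytes]) -> bytes:
    for header in headers:          # applied from the top layer down
        payload = header + payload
    return payload

# Swapping the link-layer header (say, for a different medium) leaves
# the layers above untouched, as long as the interfaces are kept.
frame = encapsulate(b"GET /", [b"TCP|", b"IP|", b"ETH|"])
```

Replacing `b"ETH|"` with any other link header changes nothing for the transport and application data above it, which is exactly the flexibility the layered model was designed for.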
1.1.3 The TCP/IP Model
While the OSI protocols developed slowly, due mainly to their formal
committee-based engineering approach, the TCP/IP protocol suite rapidly evolved and
matured. With its public Request for Comments (RFC) policy of improving and
updating the protocol stack, it has established itself as the protocol of choice for
most data communication networks.
As in the OSI model and most other data communication protocols, TCP/IP
consists of a protocol stack, made up of four layers (see Figure 2 on page 4).
Figure 2. TCP/IP Stack
The layers of the TCP/IP protocol are:
Application Layer
The application layer is provided by the user’s program that uses TCP/IP for
communication. Examples of common applications that use TCP/IP are Telnet,
FTP, SMTP, and Gopher. The interfaces between the application and transport
layers are defined by port numbers and sockets.
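As a hedged sketch of that interface (modern Python, not code from this book's era), the only thing a client needs in order to reach a server application is the server's address and port; the socket API hides every layer below transport:

```python
import socket
import threading

# A minimal TCP echo server on the loopback interface; port 0 lets
# the operating system pick any free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()

def echo_once() -> None:
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))   # echo the first message back

t = threading.Thread(target=echo_once)
t.start()

# The client identifies the application purely by (address, port).
with socket.create_connection((host, port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)
t.join()
server.close()
```

Nothing in the client says how the bytes travel; the transport and lower layers are reached entirely through the port number and socket.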
Transport Layer
The transport layer provides the end-to-end data transfer. It is responsible for
providing a reliable exchange of information. The main transport layer protocol is
the Transmission Control Protocol (TCP). Another transport layer protocol is User
Datagram Protocol (UDP), which provides a connectionless service in
comparison to TCP, which provides a connection-oriented service. That means
that applications using UDP as the transport protocol have to provide their own
end-to-end flow control. Usually, UDP is used by applications that need a fast
transport mechanism.
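A short sketch with standard Python sockets (an illustration, not code from this book) makes the contrast concrete: a UDP sender simply fires datagrams at an address with no connection setup, so any retransmission or ordering logic is the application's job. On the loopback interface the datagram arrives reliably; across a real network it might not:

```python
import socket

# Receiver: bind a datagram socket; there is no listen/accept and
# no connection to establish.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))

# Sender: each datagram is individually addressed and stands alone.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram", receiver.getsockname())

data, addr = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

Compare this with TCP, where a connection must be established first and the stack itself handles acknowledgment and retransmission.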
Internetwork Layer
The internetwork layer, also called the internet layer or the network layer,
separates the physical network from the layers above it. The Internet Protocol (IP)
is the most important protocol in this layer. It is a connectionless protocol that
doesn't assume reliability from the lower layers. IP does not provide reliability,
flow control or error recovery. These functions must be provided at a higher level,
namely the transport layer if using TCP or the application layer if using UDP.
A message unit in an IP network is called an IP datagram. This is the basic unit of
information transmitted across TCP/IP networks. IP provides routing functions for
distributing these datagrams to the correct recipient for the protocol stack. Other
internetwork layer protocols are ICMP, IGMP, ARP and RARP.
Network Interface Layer
The network interface layer, also called the link layer or the data link layer, is the
interface to the actual network hardware. This layer may be packet or stream
oriented, and it does not guarantee reliable delivery; that is left to the higher
layers. TCP/IP does not specify any particular protocol for this layer. It can use
almost any network interface available, which makes IP flexible while providing
backward compatibility with legacy infrastructure. Examples of supported
network interface protocols are IEEE 802.2, X.25 (which is reliable in itself), ATM,
FDDI and even SNA.
1.1.4 The Need for Design in IP Networks
If you do not take time to plan your network, the ease of interconnection through
the use of TCP/IP can lead to problems. The purpose of this book is to point out
some of the problems and highlight the types of decisions you will need to make
as you consider implementing a TCP/IP solution.
For example, lack of effective planning of network addresses may result in
serious limitations in the number of hosts you are able to connect to your network.
Lack of centralized coordination may lead to duplicate resource names and
addresses, which may prevent you from being able to interconnect isolated
networks. Address mismatches may prevent you from connecting to the Internet,
and other possible problems may include the inability to translate resource names
to resource addresses because connections have not been made between name servers.
Some problems arising from a badly designed or an unplanned network are trivial
to correct. Some, however, require significant time and effort to correct. Imagine
manually configuring every host on a 3000-host network because the addressing
scheme chosen no longer fits a business’ needs!
When faced with the task of either designing a new TCP/IP network or allowing
existing networks to interconnect, there are several important design issues that
will need to be resolved. For example, how to allocate addresses to network
resources, how to alter existing addresses, whether to use static or dynamic
routing, how to configure your name servers and how to protect your network are
all questions that need to be answered. At the same time the issues of reliability,
availability and backup will need to be considered, along with how you will
manage and administer your network.
The following chapters will discuss these and other concerns, and provide the
information you need to make your decisions. Where possible we will provide
general guidelines for IP network design rather than discussing product-specific
or platform-specific considerations. This is because the product-specific
documentation in most cases already exists and provides the necessary details
for configuration and implementation. We will not attempt to discuss TCP/IP
applications in any depth due to the information also being available to you in
other documents.
1.1.5 Designing an IP Network
Due to the simplicity and flexibility of IP, a network can be "hacked" together in an
unordered fashion. It is common for a network to be connected in this manner,
and this may work well for small networks. Problems arise when changes are
required and no documentation can be found. Worst of all, if the network
design/implementation teams leave the organization, the replacements are left
with the daunting task of finding out what the network does, how it fits together,
and what goes where!
An IP network that has not been designed in a systematic fashion will invariably
run into problems from the beginning of the implementation stage. When you are
upgrading an existing network, there are usually legacy networks that need to be
connected. Introducing new technology without studying the limitations of the
current network may lead to unforeseen problems. You may end up trying to solve
a problem that was created unnecessarily. For example, the introduction of an
Ethernet network in a token-ring environment has to be carefully studied.
The design of the network must take place before any implementation takes
place. The design of the IP network must also be constantly reviewed as
requirements change over time, as illustrated in Figure 3 on page 7.
Figure 3. IP Network Design Implementation and Change
A good IP network design also includes detailed documentation of the network for
future reference. A well designed IP network should be easy to implement, with
few surprises. It is always good to remember the Keep It Simple, Stupid (KISS) principle.
The Design Methodology
The design methodology recommended for use in the design of an IP network is a
top-down design approach.
This technique of design loosely follows the TCP/IP stack. As seen in Figure 2 on
page 4, at the top of the stack lies the application layer. This is the first layer
considered when designing the IP network. The next two layers are the transport
and network layers with the final layer being the data link layer.
The design of an application is dictated by business requirements. The rules of
the business, the process flow, the security requirements and the expected
results all get translated into the application’s specification. These requirements
not only affect the design of the application but their influence permeates all the
way down to the lower layers.
Once the application layer requirements have been identified, the requirements
for the lower layers follow. For example, if the application layer has a program that
demands a guaranteed two-second response time for any network transaction,
the IP network design will need to take this into consideration and maybe place
performance optimization as high priority. The link layer will need to be designed
in such a manner that this requirement is met. Using a flat network model for the
link layer with a few hundred Windows-based PCs may not be an ideal design in
this case.
Once the design of the IP network has been completed with regard to the
application layer, the implementation of the network is carried out.
The design for the network infrastructure plays an important part, as it ultimately
affects the overall design. A good example of this is the modularity and scalability
of the overall IP network. The following are some basic considerations in
designing an IP network.
Overall Design Considerations
Although much could be said about design considerations that is beyond the
scope of this book, there are a few major points that you need to know:
• Scalability
A well designed network should be scalable, so as to grow with increasing
requirements. The introduction of new hosts, servers, or networks to the network
should not require a complete redesign of the network topology. The
topology chosen should be able to accommodate expansion due to
business requirements.
• Open Standards
The entire design and the components that build the network should be
based on open standards. Open standards imply flexibility, as there may be
a need to interconnect different devices from different vendors. Proprietary
features may be suitable to meet a short term requirement but in the long
run, they will limit choices as it will be difficult to find a common technology.
• Availability/Reliability
Business requirements assuredly demand a level of availability and
reliability of the network. A stock trading system based on a network that
guarantees transaction response times of three seconds is meaningless if
the network is down three out of seven days a week!
The mean time between failures (MTBF) of the components must be
considered when designing the network, as must the mean time to repair
(MTTR). Designing logical redundancy in the network is as important as
physical redundancy.
It is too late and costly to consider redundancy and reliability of a network
when you are already halfway through the implementation stage.
• Modularity
An important concept to adopt is the modular design approach in building a
network. Modularity divides a complex system into smaller, manageable
ones and makes implementation much easier to handle. Modularity also
ensures that a failure at a certain part of the network can be isolated so
that it will not bring down the entire network.
The expandability of a network is improved by implementing a modular
design. For example, adding a new network segment or a new application
to the network will not require re-addressing all the hosts on the network if
the network has been implemented in a modular design.
• Security
The security of an organization’s network is an important aspect in a
design, especially when the network is going to interface with the Internet.
Considering security risks and addressing them in the design stage of
the IP network is essential for confidence in the network.
Considering security at a later stage leaves the network open to attack until
all security holes are closed, a reactive rather than proactive approach that
sometimes is very costly. Although new security holes may be found as the
hackers get smarter, the basic known security problems can easily be
incorporated into the design stage.
• Network Management
IP network management should not be an afterthought of building a
network. Network management is important because it provides a way to
monitor the health of the network, to ascertain operating conditions, to
isolate faults and configure devices to effect changes.
A management framework should be integrated into the
design of the network from the beginning. Designing and implementing an
IP network and then trying to "fit" a management framework to the network
may cause unnecessary issues. A little proactivity in the design stage can
lead to a much easier implementation of management resources.
• Performance
There are two types of performance measures that should be considered
for the network. One is the throughput requirement and the other is the
response time. Throughput is how much data can be sent in the shortest
time possible, while response time is how long a user must wait before a
result is returned from the system.
Both of these factors need to be considered when designing the network. It
is not acceptable to design a network only to fail to meet the organization’s
requirements in the response times for the network. The scalability of the
network with respect to the performance requirements must also be
considered, as mentioned above.
• Economics
An IP network design that meets all of the requirements of the organization
but costs 200% of the budget may need to be reviewed.
Balancing cost and meeting requirements are perhaps the most difficult
aspects of a good network design. The essence is in the word compromise.
One may need to trade off some fancy features to meet the cost, while still
meeting the basic requirements.
Network Design Steps
Below is a generic rule-of-thumb approach to IP network design. It presents a
structured approach to analyzing and developing a network design to suit the
needs of an organization.
Figure 4. Network Design Steps
Network Objectives
What are the objectives of this IP network? What are the business requirements
that need to be satisfied? This step of the design process needs research and
can be time consuming. The following, among other things, should be considered:
Who are the users of the IP network and what are their requirements?
What applications must be supported?
Does the IP network replace an existing communications system?
What migration steps must be considered?
What are the requirements as defined in “Overall Design
Considerations” on page 8?
Who is responsible for network management?
Should the network be divided into more manageable segments?
What is the life expectancy of the network?
What is the budget?
Collecting Design Information
The information that is required for building the network depends on each
individual implementation. However, the main types of information required can
be deduced from “Overall Design Considerations” on page 8.
It is important to collect this information and spend time analyzing it to develop a
thorough understanding of the environment and limitations imposed upon the
design of the new IP network.
Create a Proposal or Specification
Upon analysis of the collected information and the objectives of the network, a
design proposal can be devised and later optimized. The design considerations
can be weighed with one goal overriding the others, so the network can be:
Optimized for performance
Optimized for resilience
Optimized for security
Once the design priorities have been identified, the design can be created.
The final stage in the design process is to review the design before it is
implemented. The design can be modified at this stage easily, before any
investment is made into infrastructure or development work. With this completed,
the implementation stage can be initiated.
1.2 Application Considerations
As presented in chapter one, the TCP/IP model’s highest layer is the application
layer. As the elements that populate this layer are defined by the business
requirements of the overall system, these components must be considered the
most important in the initial design considerations with a top-down design approach.
The type of applications that the network needs to support and the types of
network resources these applications require, must be taken into consideration
when designing the IP network. There are a number of issues that must be
considered for the network design, some common to all applications and
others pertaining to a subset of applications. These issues will be defined and
discussed in the sections that follow.
Remember, building a complex ATM network to send plain text in a small
workgroup of 10 users is a waste of time and resources, unless you get them for free!
1.2.1 Bandwidth Requirements
Different applications require varying amounts of network bandwidth. A simple
SMTP e-mail application does not have the same bandwidth requirement as a
Voice over IP application. Voice and data compression have not reached that
level yet.
It is obvious that the applications your network will need to support determine the
type of network you will finally design. It is not a good idea to design a network
without considering what applications you currently require, and what
applications your business needs will require your network to support in the future.
1.2.2 Performance Requirements
The performance requirements of the users of the applications must be
considered. A user of the network may be willing to wait for a slow response from
an HTTP or FTP application, but they will not accept delays in a Voice over IP
application - it’s hard to understand what someone is saying when it’s all broken up!
The delay in the delivery of network traffic also needs to be considered. Long
delays will not be acceptable to applications that stream data, such as video over
IP applications.
The accuracy with which the network is able to provide data to the application is
also relevant to the network design. Differing infrastructure designs provide
differing levels of accuracy from the network.
1.2.3 Protocols Required
The TCP/IP application layer supports an ever increasing number of protocols.
The basic choice in protocol for applications is whether or not the application will
use TCP or UDP. TCP delivers a reliable connection-oriented service. UDP
delivers faster network response by eliminating the overhead of the TCP header;
however, it loses TCP's reliability, flow control and error recovery features.
It is clear that it depends on the application’s service focus as to which protocol it
will use. An FTP application, for example, will not use UDP. FTP uses TCP to
provide reliable end-to-end connections. The extra speed provided by using UDP
does not outweigh the reliability offered by TCP.
The Trivial File Transfer Protocol (TFTP), however, although similar to FTP, is
based on a UDP transport layer. As TFTP transactions are generally small in size
and very simple, the reliability of the TCP protocol is outweighed by the added
speed provided by UDP. Then why use FTP? Although TFTP is more efficient
than FTP over a local network, it is not good for transfers across the Internet, as
its speed advantage is negated by its lack of reliability. Unlike FTP
applications, TFTP applications are also insecure.
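The trade-off above can be illustrated with a minimal stop-and-wait sketch of the kind of application-level reliability a UDP-based protocol such as TFTP must provide for itself. The lossy channel below is a simulation, not a real network, and the names are invented for the example.

```python
# A simulated lossy channel: silently drops the first `drop_count` sends.
class LossyChannel:
    def __init__(self, drop_count):
        self.drop_count = drop_count
        self.delivered = []

    def send(self, packet):
        if self.drop_count > 0:
            self.drop_count -= 1
            return False                 # packet lost, no acknowledgment
        self.delivered.append(packet)
        return True                      # stands in for a received ACK

def send_reliably(channel, packet, max_retries=5):
    """Stop-and-wait: retransmit until the peer acknowledges the packet."""
    for attempt in range(1, max_retries + 1):
        if channel.send(packet):
            return attempt               # number of tries it took
    raise TimeoutError("no acknowledgment after %d tries" % max_retries)

channel = LossyChannel(drop_count=2)     # first two transmissions are lost
tries = send_reliably(channel, b"block-1")
print(tries)                             # 3
print(channel.delivered)                 # [b'block-1']
```

With TCP, this retransmission loop lives inside the protocol stack; with UDP, every application (or application protocol, as with TFTP's per-block acknowledgments) must implement its own version of it.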
1.2.4 Quality of Service/Type of Service (QoS/ToS)
Quality of Service (QoS) and Type of Service (ToS) arise for one simple reason:
some users’ data is more "important" than others’. And there is a need to provide
these users with "premium" service, just like a VIP queue at the airport.
The requirement for QoS and ToS that gets incorporated into an application also
has implications for the network design. The connecting devices, the routers and
switches, have to be able to ensure "premium" delivery of information so as to
support the requirement of the application.
Real-Time Applications
Some applications, such as a Voice over IP or an ordering system, need to be
real time. The need for real-time applications necessitates a network that can
guarantee a level of service.
A real-time application will need to implement its own flow control and error
checking if it is to use UDP as a transport protocol. The requirements of real-time
applications will also influence the type of network infrastructure implemented. An
ATM network can inherently fulfill these requirements; a shared Ethernet
network cannot.
1.2.5 Sensitivity to Packet Loss and Delay
An application’s sensitivity to packet loss and delay can have dramatic effects on
the user. The network must provide reliable packet delivery for these applications.
For example, a real-time application, with little buffering, does not tolerate packet
delivery delays, let alone packet loss! Voice over IP is one example of such an
application, as opposed to an application such as Web browsing.
1.2.6 Multicast
Multicasting has been proven to be a good way of saving network bandwidth,
provided it has been implemented properly and does not break the network in
the first place.
Getting multicasting to work involves getting all the connecting devices, such as
routers and switches, the applications, the clients’ operating systems, and the
servers to work hand in hand. Multicasting will not work if any of these
subsystems cannot meet the requirement, or if they have severe limitations.
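One concrete reason every subsystem must cooperate is the address mapping involved: an IPv4 multicast group maps to an Ethernet MAC address formed from the fixed prefix 01-00-5E plus the low-order 23 bits of the group address, and NICs and switches must honor this mapping for delivery to work. A small sketch of the standard mapping:

```python
import ipaddress

def multicast_mac(group):
    """Map an IPv4 multicast group to its Ethernet MAC address.

    The MAC is 01-00-5E followed by the low-order 23 bits of the group
    address, so 32 different groups share each MAC (the 5 variable
    high-order bits of the group address are discarded).
    """
    addr = ipaddress.IPv4Address(group)
    if not addr.is_multicast:
        raise ValueError("%s is not a multicast address" % group)
    low23 = int(addr) & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(multicast_mac("224.1.1.1"))        # 01:00:5e:01:01:01
print(multicast_mac("239.129.1.1"))      # 01:00:5e:01:01:01 (same MAC!)
```

The fact that 32 groups collide on one MAC address is one reason hosts and switches still have to filter at the IP layer, and why a device with incomplete multicast support can flood or drop traffic unexpectedly.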
1.2.7 Proxy-Enabled
The ability of an application protocol to be proxied has implications for the
bandwidth requirements and the security of the network.
An HTTP application will be easily manageable when a firewall is installed for
security, as a proxy service can be placed outside the firewall in a demilitarized
zone to serve HTTP traffic through the firewall to the application.
An application based upon the TELNET protocol will not have such an easy time
as the HTTP application. The TELNET protocol does not support proxying of its
traffic. Thus, a firewall must remain open on this port, the application must use a
SOCKS server or the application cannot communicate through the firewall. You
either have a nonworking application, an added server or a security hole.
1.2.8 Directory Needs
Various applications require directory services with the IP network. Directory
services include DNS, NIS, LDAP, X.500 and DCE, among others. The choice of
directory services depends on the application's support for these services. An
application based upon the ITU X.500 standard will not respond well to a network
with only DNS servers.
Some applications, such as those based upon the PING and TFTP protocols, do
not require directory services to function, although the difficulty in their use would
be greatly increased. Other applications require directory services implicitly, such
as e-mail applications based on the SMTP protocol.
1.2.9 Distributed Applications
Distributed applications will require a certain level of services from the IP
network. These services must be catered for by the network, so they must be
considered in the network design.
Take Distributed Computing Environment (DCE) as an example. It provides a
platform for the construction and use of distributed applications that relies on
services such as remote procedure call (RPC), the Cell Directory Service (CDS),
Global Directory Service (GDS), the Security Service, DCE Threads, Distributed
Time Service (DTS), and Distributed File Service (DFS). These services have to
be made available through the network so that, collectively, they provide the basic
secure core for the DCE environment.
1.2.10 Scalability
Applications that require scalability must have a network capable of catering for their
future requirements, or one that can be upgraded to meet them. If an
application is modular in design, the network must also be modular to enable it to
scale linearly with the application’s requirements.
1.2.11 Security
The security of applications is catered for by the underlying protocols or by the
application itself. If an application uses UDP for its transport layer, it cannot rely
on SSL for security; hence it must use its own encryption and provide for its own
security needs.
Some applications that need to be run on the network do not have built-in security
features, or have not implemented standard security concepts such as SSL. An
application based on the TELNET protocol, for example, will invariably be
unsecure. If the network security requirements are such that a TELNET
application sending out unencrypted passwords is unacceptable, then either the
TELNET port must be closed on the firewall or the application must be rewritten.
Is it really worth rewriting your TELNET program?
1.3 Platform Considerations
An important step toward building an application is to find out the capabilities of
the end user’s workstation - the platform for the application. Some of the basic
questions that have to be answered include:
Whether the workstation supports graphics or only text
Whether the workstation meets the basic performance requirement in terms of
CPU speed, memory size, disk space and so on
Whether the workstation has the connectivity options required
Of these questions, features and performance criteria are easy to understand and
information is readily obtainable. The connectivity options are more difficult to
handle because they can involve much fact-finding, and some facts may not be
easily available. Many times, these lessons are learned through painful experience.
Take for example, the following questions that may need to be answered if we
want to develop an application that runs on TCP/IP:
Does the workstation support a particular network interface card?
Does the network interface card support certain cabling options?
Does the network interface card come with readily available drivers?
Does the workstation’s operating system support the TCP/IP protocol?
Does the workstation's TCP/IP stack support subnetting?
Does the operating system support the required APIs?
Does the operating system support multiple default routes?
Does the operating system support multiple DNS definitions?
Does the operating system support multicasting?
Does the operating system support advanced features such as Resource
Reservation Protocol (RSVP)?
Depending on the type of application, the above questions may not be relevant,
but they are definitely not exhaustive. You may say the above questions are trivial
and unimportant, but their impact can reach far beyond merely the
availability of functions. Here’s why:
Does the workstation support a particular network interface card?
You may want to develop a multimedia application and make use of ATM’s
superb delivery capability. But the truth is, not all workstations support ATM.
Does the network interface card support certain cabling options?
Even if the network interface card is available, it may not have the required
cabling option such as a UTP port or multimode fiber SC connection port. You
may need a UTP port because UTP cabling is cost effective. But you may also
end up requiring fiber connectivity because you are the only employee located
in the attic and the connecting device is situated down in the basement.
Does the network interface card come with readily available drivers?
Right, so we have the network interface card and it does support fiber SC
connections, but what about the bug that causes the workstation to hang? The
necessary patch may be six months away.
Does the workstation’s operating system support the TCP/IP protocol?
It may seem an awkward question but there may be a different flavor of TCP/IP
implementation. A good example is the Classical IP (CIP) and LAN emulation
(LANE) implementation in an ATM network. Some operating systems may
support only CIP, while some may only support LANE.
Does the workstation's TCP/IP stack support subnetting?
In the world of IP address shortages, there may be a need to subdivide a
precious network address further. And not all systems support
subnetting, especially older systems.
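To illustrate what subnetting involves, the sketch below uses Python's standard ipaddress module to divide one /24 network into four /26 subnets; the addresses are example values.

```python
import ipaddress

# Split one /24 network into four /26 subnets (prefix grows by 2 bits).
net = ipaddress.ip_network("192.168.10.0/24")
subnets = list(net.subnets(prefixlen_diff=2))

for s in subnets:
    print(s, "-", s.num_addresses, "addresses")
# 192.168.10.0/26 - 64 addresses
# 192.168.10.64/26 - 64 addresses
# 192.168.10.128/26 - 64 addresses
# 192.168.10.192/26 - 64 addresses

# A stack that does not understand the /26 mask would wrongly assume
# every 192.168.10.x address is on its local segment and never route to it.
host = ipaddress.ip_address("192.168.10.130")
print(host in subnets[2])                # True
```

A host whose stack ignores the subnet mask will ARP directly for addresses that actually sit behind a router, which is exactly the failure mode older non-subnetting systems exhibit.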
Does the operating system support the required APIs?
One popular way of developing a TCP/IP application is to use sockets
programming. But the TCP/IP stack on the user’s workstation may not fully
support it. This gets worse if there are many workstation types in the network,
each running different operating systems.
Does the operating system support multiple default routes?
Unlike other systems, Windows 95 does not support multiple default routes. If
you are trying to develop a mission-critical application, this may be a serious
single point of failure. Some other workaround has to be implemented just to
alleviate this shortcoming.
Does the operating system support multiple DNS definitions?
This one has the same impact as the point above. With clients capable of
having only one DNS definition, a high availability option may have to be built
into the DNS server. On the other hand, with clients capable of supporting
multiple DNS, the applications must be supported with APIs that can provide
such facilities.
Does the operating system support multicasting?
There may be a need to deliver video to the users, and one of the ways is
through multicasting. Multicasting is a good choice as it conserves the network
bandwidth. But not all clients support multicasting.
Does the operating system support advanced features such as RSVP?
Although standards like RSVP have been ratified for quite some time, many
operating systems do not support such features. For example, Windows 95
does not support RSVP.
1.4 Infrastructure Considerations
The applications need a transport mechanism to share information, to transmit
data or to send requests for some services. The transport mechanism is provided
by the underlying layer called the network infrastructure.
Building a network infrastructure can be a daunting task for the inexperienced.
Imagine building a network for a company with 100,000 employees and 90
different locations around the world. How do you go about building it? And where
do you begin?
As in the application consideration, building a network infrastructure involves
many decision making processes:
What are the technologies out there?
Which technology should I use for the LAN?
Which technology should I use for the WAN?
How do I put everything together?
What is this thing called switching?
How should the network design look?
What equipment is required?
How should it grow?
How much does it cost?
Can I manage it?
Can I meet the deployment schedule?
Is there a strategy to adopt?
The Internet as we have it today grew out of circumstances. In the beginning, it
was not designed to be what it is today. In fact, there was not any planning or
design work done for it. It is merely a network of different networks put together,
and we have already seen its problems and limitations:
It has almost run out of IP addresses
It has performance problems
It cannot readily support new generation applications
It does not have redundancy
It has security problems
It has erratic response time
Work has begun on building the so-called New Generation Internet (NGI) and it is
supposed to be able to address most, if not all, of the problems that we are
experiencing with the Internet today. The NGI will be entirely different from what
we have today, as it is the first time that a systematic approach has been used to
design and build an Internet.
1.5 The Perfect Network
So, you may ask: Is there such a thing as a perfect network?
If a network manager is assigned to build a network for a company, he or she
would have to know how to avoid all the problems we have mentioned above. He
or she would use the best equipment and choose the best networking
technologies available, but may still not build a perfect network. Why?
The truth is, there is no such thing as a perfect network. A network design that is
based on today’s requirements may not address those of the future. Business
environments change, and this has a spiraling effect on the infrastructure.
Expectations of employees change, the users’ requirements change, and new
needs have to be addressed by the applications; these in turn affect how all
the various systems tie together, which means a change in the network
infrastructure. At best, what the network can do is scale and adapt to changes;
these are the two criteria for a network to stay relevant until the day it reaches
its technical limitations, after which a forklift upgrade may be required.
Networks evolve over time. They have to do so to add value.
The above sections have highlighted that much work has to be done before an
application gets to be deployed to support a business’ needs. From the network
infrastructure to the various system designs, server deployments, security
considerations and types of client workstations, they all have to be well
coordinated. A minor error could mean back to the drawing board for the system
designer, and lots of money for the board of directors.
Chapter 2. The Network Infrastructure
The network infrastructure is an important component in IP network design. It is
important simply because, at the end of the day, it is those wires that carry the
information. A well thought-out network infrastructure not only provides reliable
and fast delivery of that information, but it is also able to adapt to changes, and
grow as your business expands.
Building a network infrastructure is a complex task, requiring work such as
information gathering, planning, designing, and modeling. Though it deals mainly
with bits and bytes, it is more of an art than a science, because there are no hard
and fast rules for building one.
When you build a network infrastructure, you look more at the lower three layers
of the OSI model, although many other factors need to be considered. There are
many technologies available that you can use to build a network, and the
challenge that a network manager faces is to choose the correct one and the tool
that comes with it. It is important to know the implications of selecting a particular
technology, because the network manager ultimately decides what equipment is
required. When selecting a piece of networking equipment, it is important to know
at which layer of the OSI model the device functions. The functionality of the
equipment is important because it has to conform to certain standards, it has to
live up to the expectation of the application, and it has to perform tasks that are
required by the blueprint - the network architecture.
The implementation of IP over different protocols depends on the mechanism
used for mapping the IP addresses to the hardware, or MAC, addresses
at the data link layer of the OSI model. Some important aspects to consider when
using IP over any data link protocol are:
• Address mapping
Different data link layer protocols have different ways of mapping the IP
address to the hardware address. In the TCP/IP protocol suite, the Address
Resolution Protocol (ARP) is used for this purpose, and it works only in a
broadcast network.
• Encapsulation and overheads
The encapsulation of the IP packets into the data link layer packet and the
overheads incurred should be evaluated. Because different data link layer
protocols transport information differently, one may be more suitable than the
others for a given environment.
• Routing
Routing is the process of transporting the IP packets from network to network,
and is an important component in an IP network. Many protocols are available
to provide the intelligence in the routing of the IP protocol, some with
sophisticated capabilities. The introduction of switching and some other data
link layer protocols has introduced the possibility of building switched paths in
the network that can bypass the routing process. This saves network
resources and reduces the network delay by eliminating the slower process of
routing that relies on software rather than on hardware or microcode switching.
• Maximum Transmission Unit (MTU)
Another parameter that should be considered in the IP implementation over
different data link layer protocols is the maximum transmission unit (MTU)
size. MTU size refers to the largest frame (in bytes) that can be
transmitted through the network. A bigger MTU size means
one can send more information within a frame, thus requiring a lower total
number of packets to transmit a piece of information.
Different data link layers have different MTU sizes for the operation of the
network. If you connect two networks with different MTU sizes, then a process
called fragmentation takes place and this has to be performed by an external
device, such as a router. Fragmentation takes a larger packet and breaks it up
into smaller ones so that it can be sent onto the network with a smaller MTU
size. Fragmentation slows down the traffic flow and should be avoided as
much as possible.
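The fragment sizes a router produces can be sketched as follows. This is a minimal illustration, not a full IP implementation: the function name and the 20-byte header default are assumptions, and the rule that every fragment payload except the last must be a multiple of 8 bytes follows the IP fragmentation scheme.

```python
def fragment_lengths(total_len, mtu, header_len=20):
    """Split an IP datagram of total_len bytes into fragment lengths
    that fit the given MTU. Each fragment payload except the last
    must be a multiple of 8 bytes, per IP fragmentation rules."""
    payload = total_len - header_len
    # Largest payload per fragment, rounded down to a multiple of 8
    max_payload = (mtu - header_len) // 8 * 8
    if payload <= mtu - header_len:
        return [total_len]          # fits; no fragmentation needed
    frags = []
    while payload > 0:
        chunk = min(max_payload, payload)
        frags.append(chunk + header_len)   # each fragment carries its own header
        payload -= chunk
    return frags

# A 4000-byte datagram crossing onto a network with a 1500-byte MTU
print(fragment_lengths(4000, 1500))
```

Note that each fragment repeats the IP header, so fragmentation adds overhead on top of the extra processing, which is why it should be avoided where possible.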
2.1 Technology
Besides having wires to connect all the devices together, you have to decide
how these devices connect and the protocol with which they talk to each
other. Various technologies are available, each different from one another in
standards and implementation.
In this section, a few popular technologies are covered with each of their
characteristics highlighted. These technologies cover the LAN and WAN as well as
the remote access area. For a detailed description of each technology, please
refer to Local Area Network Concepts and Products: LAN Architecture.
2.1.1 The Basics
It is important to understand the fundamentals of how data is transmitted in an IP
network, so that the differences in how the various technologies work can be
better appreciated.
Each workstation connects to the network through a network interface card (NIC)
that has a unique hardware address. At the physical layer, these workstations
communicate with each other through the hardware addresses. IP, being a higher
level protocol in the OSI model, communicates through a logical address, which
in this case, is the IP address. When one workstation wishes to communicate
with another using its IP address, the NIC does not understand these logical
addresses. Some mechanism has to be implemented to translate the destination
address to a hardware address that the NIC can understand.
Broadcast versus Non-Broadcast Network
Generally, all networks can be grouped into two categories: broadcast and
non-broadcast. The mechanism for mapping the logical address to the hardware
address is different for these two groups of networks. The best way of describing
a broadcast network is to imagine a teacher teaching a class. The teacher talks
and every student listens. An example of a non-broadcast network would be a
mail correspondence - at any time, only the sender and receiver of the mail know
what the conversation is about, the rest of the people don’t. Examples of
broadcast networks are Ethernet, token-ring and FDDI, while examples of
non-broadcast networks are frame relay and ATM.
It is important to differentiate the behaviors of both broadcast and non-broadcast
networks, so that the usage and limitation can both be taken into consideration in
the design of an IP network.
Address Resolution Protocol (ARP)
In a broadcast network, the Address Resolution Protocol (ARP) is used to
translate the IP address to the hardware address of the destination host. Every
workstation that runs the TCP/IP protocol keeps a table, called an ARP cache,
containing the mapping of the IP address to the hardware address of the hosts
with which it is communicating. When a destination entry is not found in the ARP
cache, a broadcast, called ARP broadcast, is sent out to the network. All
workstations that are located within the same network will receive this request
and go on to check the IP address entry in the request. If one of the workstations
recognizes its own IP address in this request, it will proceed to respond with an
ARP reply, indicating its hardware address. The originating workstation then
stores this information and commences to send data through the newly learned
hardware address.
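The cache-then-broadcast behavior described above can be sketched in a few lines. This is a toy model, not real ARP: the class name, the addresses, and the list-based "segment" are all illustrative assumptions.

```python
class ArpNode:
    """Toy model of ARP resolution on a broadcast segment."""
    def __init__(self, ip, mac):
        self.ip, self.mac = ip, mac
        self.cache = {}              # the ARP cache: ip -> mac
        self.broadcasts_sent = 0

    def resolve(self, target_ip, segment):
        # 1. Look in the local ARP cache first
        if target_ip in self.cache:
            return self.cache[target_ip]
        # 2. Miss: broadcast an ARP request to every node on the segment
        self.broadcasts_sent += 1
        for node in segment:
            if node.ip == target_ip:          # only the owner replies
                self.cache[target_ip] = node.mac
                return node.mac
        return None                           # no host answered

a = ArpNode("192.168.1.1", "00:00:5e:00:53:01")
b = ArpNode("192.168.1.2", "00:00:5e:00:53:02")
segment = [a, b]
a.resolve("192.168.1.2", segment)   # cache miss: one broadcast
a.resolve("192.168.1.2", segment)   # answered from the cache
print(a.broadcasts_sent)            # 1
```

The second call costs no broadcast, which is the whole point of the cache: a well-sized ARP cache keeps broadcast traffic off the LAN.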
ARP provides a simple and effective mechanism for mapping an IP address to a
hardware address. However, in a large network, especially in a bridged
environment, a phenomenon known as a broadcast storm can occur if
workstations misbehave. Assume hundreds of workstations are connected to a
LAN and ARP is used to resolve address mappings. If a workstation's ARP
cache is too small, the workstation has to send more broadcasts to find out the
hardware addresses of its destinations. Hundreds of workstations continuously
sending out ARP broadcasts would soon render the LAN useless because
nobody can send any data.
For a detailed description of ARP, please refer to TCP/IP Tutorial and
Technical Overview, GG24-3376.
Proxy ARP
The standard ARP protocol does not allow the mapping of hardware addresses
between two physically separated networks that are interconnected by a router. In
this situation, with a combination of newer workstations and older workstations
that do not support subnetting, ARP will not work.
Proxy ARP, described in RFC 1027, is used to solve this problem by having the router reply
to an ARP request with its own MAC address on behalf of the workstations that
are located on the other side of the router. It is useful in situations when multiple
LAN segments are required to share the same network number but are connected
by a router. This can happen when there is a need to reduce broadcast domains
but the workstation’s IP address cannot be changed. In fact, some old
workstations may still be running an old implementation of TCP/IP that does not
understand subnetting.
A potential problem can arise though, and that is when the Proxy ARP function is
turned on in a router by mistake. This problem manifests itself when a display
of the ARP cache on a workstation shows multiple IP addresses all sharing the
same MAC address.
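That symptom is easy to check for programmatically. The sketch below scans a dictionary of ARP-cache entries for MAC addresses that answer for more than one IP; the function name and the sample addresses are illustrative assumptions, not part of any real tool.

```python
from collections import defaultdict

def find_shared_macs(arp_cache):
    """Given {ip: mac} entries from a workstation's ARP cache, return
    the MACs that answer for more than one IP - the classic symptom
    of a router doing Proxy ARP (deliberately or by mistake)."""
    by_mac = defaultdict(list)
    for ip, mac in arp_cache.items():
        by_mac[mac].append(ip)
    # Keep only MACs seen for two or more IP addresses
    return {mac: ips for mac, ips in by_mac.items() if len(ips) > 1}

cache = {
    "10.1.1.5": "00:00:5e:00:53:10",
    "10.1.2.7": "00:00:5e:00:53:aa",   # same MAC as the entry below:
    "10.1.2.9": "00:00:5e:00:53:aa",   # a router replying by proxy
}
print(find_shared_macs(cache))
```

Of course, shared MACs are expected when Proxy ARP is intended; the check only flags the pattern for a human to judge.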
Reverse Address Resolution Protocol (RARP)
Some workstations, especially diskless workstations, do not know their IP
address when they are initialized. When such a workstation broadcasts an RARP
request, a RARP server on the network replies, informing the workstation of its
IP address. RARP will not work in a non-broadcast network.
Typically in a non-broadcast network, workstations communicate in a one-to-one
manner. There is no need to map a logical address to a hardware address
because they are statically defined. Most of the WAN protocols can be
considered as non-broadcast.
2.1.2 LAN Technologies
There are a few LAN technologies that are widely implemented today. Although
they may have been invented many years ago, they have all proven reliable
and stood the test of time.
Ethernet/IEEE 802.3
Today, Ethernet LAN is the most popular type of network in the world. It is popular
because it is easy to implement, and the cost of ownership is relatively lower than
that of other technologies. It is also easy to manage and the Ethernet products
are readily available.
The technology was invented by Xerox in the 1970s and was known as Ethernet
V1. It was later modified by a consortium made up of Digital, Intel and Xerox, and
the new standard became Ethernet (DIX) V2. This was later ratified by the IEEE,
with slight modifications, to be accepted as an international standard, and
hence, IEEE 802.3 was introduced.
The Ethernet LAN is an example of a carrier sense multiple access with collision
detection (CSMA/CD) network, that is, members of the same LAN transmit
information at random and retransmit when collision occurs. The CSMA/CD
network is a classic example of a broadcast network because all workstations
"see" all information that is transmitted on the network.
Figure 5. The Ethernet LAN as an Example of a CSMA/CD Network
Although different in specifications, the Ethernet, IEEE 802.3, Fast Ethernet
and Gigabit Ethernet LANs shall be collectively known as the Ethernet LAN in
this book.
In the above diagram, when workstation A wants to transmit data on the network,
it first listens to see if somebody else is transmitting on the network. If the
network is busy, it waits for the transmission to stop before sending out its data in
units called frames. Because the network spans a certain length and it takes
some time for a frame from A to reach D, D may think that nobody is using the
network and proceed to transmit its data. In this case, a collision occurs and is
detected by all stations. When a collision occurs, both transmitting workstations
have to stop their transmission and use a random backoff algorithm to wait for a
certain time before they retransmit their data.
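The random backoff step can be made concrete. Classic CSMA/CD uses truncated binary exponential backoff; the sketch below assumes the 10 Mbps Ethernet slot time of 51.2 microseconds, and the function name is illustrative.

```python
import random

def backoff_wait(attempt, slot_time_us=51.2):
    """Truncated binary exponential backoff as used by CSMA/CD:
    after the n-th consecutive collision a station waits a random
    number of slot times drawn from 0 .. 2^min(n, 10) - 1.
    (51.2 microseconds is the 10 Mbps Ethernet slot time.)"""
    k = min(attempt, 10)                  # the exponent is capped at 10
    slots = random.randint(0, 2 ** k - 1)
    return slots * slot_time_us           # wait time in microseconds

random.seed(1)
# Wait times (in microseconds) after the 1st, 2nd and 3rd collision
print([backoff_wait(n) for n in (1, 2, 3)])
```

Because the range of possible wait times doubles after each collision, two colliding stations quickly pick different slots and one of them gets the wire.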
As one can see, the chance of a collision depends on the following:
• The number of workstations on the network. The more workstations, the more
likely collisions will occur.
• The length of the network. The longer the network, the greater the chance for
collisions to occur.
• The length of the data packet, the MTU size. A larger packet takes a
longer time to transmit, which increases the chance of a collision. The size of
the frame in an Ethernet network ranges from 64 to 1518 bytes.
Therefore, one important aspect of Ethernet LAN design is to ensure an adequate
number of workstations per network segment, so that the length of the network
does not exceed what the standard specifies, and that the correct frame size is
used. While a larger frame means that fewer frames are required to
transmit a single piece of information, it can mean that there is a greater chance
of collisions. On the other hand, a smaller frame reduces the chance of a
collision, but it then takes more frames to transmit the same piece of information.
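This trade-off can be quantified. The sketch below assumes the standard Ethernet per-frame costs (14-byte header plus 4-byte FCS, and 8 bytes of preamble plus a 12-byte inter-frame gap on the wire); the function names are illustrative.

```python
import math

WIRE_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + gap, bytes

def frames_needed(data_bytes, payload_per_frame):
    """How many frames it takes to move data_bytes of user data."""
    return math.ceil(data_bytes / payload_per_frame)

def wire_efficiency(payload_per_frame):
    """Fraction of wire time spent carrying user data."""
    return payload_per_frame / (payload_per_frame + WIRE_OVERHEAD)

# Sending 1 MB with maximum (1500-byte) versus small payloads
for payload in (1500, 256, 64):
    print(payload, frames_needed(1_000_000, payload),
          round(wire_efficiency(payload), 2))
```

Full-size frames move 1 MB in roughly 667 frames at about 98% wire efficiency, while 64-byte payloads need over 15,000 frames at roughly 63% efficiency, which is why large frames win whenever collisions are under control.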
It was mentioned earlier that the Ethernet and IEEE 802.3 standards are not the
same. The difference lies in the frame format, which means workstations
configured with Ethernet will not be able to communicate with workstations that
have been configured with IEEE 802.3. The key difference is that the IEEE 802.3
frame carries a Length field where the Ethernet V2 frame carries a Type field, as
shown in the following diagram:
Figure 6. Ethernet Frame versus IEEE 802.3 Frame
To implement Ethernet, network managers need to follow certain rules, and it can
very much tie in with the type of cables being used. Ethernet can be implemented
using coaxial (10Base5 or 10Base2), fiber optic (10BaseF) or UTP Category 3
cables (10BaseT). These different cabling types impose different restrictions and
it is important to know the difference. Also, Ethernet generally follows the 5-4-3
rule. That is, in a single collision domain, there can be only five physical
segments, connected by four repeaters, and only three of those segments may
have workstations attached. The other two segments must be link segments,
that is, with no workstations attached to them.
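The 5-4-3 rule lends itself to a simple validity check. In the sketch below a collision domain is described as the list of segments on the longest path between two stations, each marked populated or not; the function name and this representation are assumptions made for illustration.

```python
def check_5_4_3(segments):
    """Validate a single Ethernet collision domain against the 5-4-3
    rule. 'segments' lists the segments on the longest path between
    two stations; each entry is True if workstations attach to it
    (a populated segment) and False if it is a link segment."""
    n_segments = len(segments)
    n_repeaters = n_segments - 1     # repeaters join adjacent segments
    n_populated = sum(segments)
    return (n_segments <= 5 and
            n_repeaters <= 4 and
            n_populated <= 3)

# Five segments, four repeaters, three populated: allowed
print(check_5_4_3([True, False, True, False, True]))   # True
# Four populated segments on the path: violates the rule
print(check_5_4_3([True, True, True, True, False]))    # False
```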
Table 1. Comparing Ethernet Technologies
Although it was once thought that Ethernet would not scale and thus would be
replaced by other better technologies, vendors have made modifications and
improvements to its delivery capabilities to make it more efficient.
The Ethernet technology has evolved from the traditional 10 Mbps network to the
100 Mbps network or Fast Ethernet, and now to the 1 Gbps network, or better
known as Gigabit Ethernet.
The Fast Ethernet, or the IEEE 802.3u standard, is 10 times faster than the 10
Mbps Ethernet. The cabling options for Fast Ethernet are 100BaseTx, 100BaseT4
and 100BaseFx. The framing used in Fast Ethernet is the same as that used
in Ethernet. Therefore it is very easy for network managers to upgrade from
Ethernet to Fast Ethernet. Since the framing and size are the same as that of
Ethernet and yet the speed has been increased 10 times, the length of the
network now has to be greatly reduced, or else collisions would not be
detected and would cause problems on the network.
The Gigabit Ethernet, or IEEE 802.3z standard, is 10 times faster than the Fast
Ethernet. The framing used is still the same as that of Ethernet, which reduces
the network distance by a tremendous amount as compared to Ethernet.
Gigabit Ethernet is usually connected using the short wavelength (1000BaseSx)
or the long wavelength (1000BaseLx) fiber optic cables, although the standard for
the UTP (1000BaseT) is available now. The distance limitation has been resolved
with the new fiber optic technologies. For example, 1000BaseLx with a 9 micron
single mode fiber drives up to five kilometers on the S/390 OSA. An offering
called the Jumbo Frame implements a much larger frame size, but its use has
been a topic of hot debate for network managers. Nonetheless, vendors are
beginning to offer the Jumbo Frame feature in their products. IBM is offering a 9
KB Jumbo Frame feature, using device drivers from ALTEON, on the newly
announced S/390 OSA, and future RS/6000 and AS/400 implementations will
also be capable of this.
                         10Base5      10Base2      10BaseT
Topology                 Bus          Bus          Star
Cabling type             Coaxial      Coaxial      UTP
Maximum cable length     500 m        185 m        100 m
Topology limitation      5-4-3 rule   5-4-3 rule   5-4-3 rule
Maximum workstations
on a single segment      100          30           1 (the workstation
                                                   connects to a hub)
Gigabit Ethernet is mainly used for creating high speed backbones, a simple and
logical choice for upgrading current Fast Ethernet backbones. Many switches with
100BaseT ports, like the IBM 8271 and 8275 switches, are beginning to offer a
Gigabit Ethernet port as an uplink port, so that more bandwidth can be provided
for connections to the higher levels of the network for access to servers.
Besides raw speed improvement, new devices such as switches now provide
duplex mode operation, which allows workstations to send and receive data at the
same time, effectively doubling the bandwidth for the connection. The duplex
mode operation requires a Category-5 UTP cable, with two pairs of wire used for
transmitting and receiving data. Therefore, duplex mode operation may not
work on older networks because they usually run on Category-3 UTP cables.
Most early Ethernet workstations connect to the LAN at 10 Mbps because they
were installed quite some time ago. 10 Mbps Ethernet is still popular because
network interface cards and 10 Mbps hubs are very affordable. At this point, it is
important to note that in network planning and design, more bandwidth or a faster
network does not mean that the user will benefit from the speed. With the
development of higher speed networks such as Fast Ethernet and Gigabit
Ethernet, a 10 Mbps network seems to have become less popular now. The fact
is, it can still carry a lot of information, and a user may not be able to handle the
information if any more were available. With the introduction of switches that
provide a dedicated 10 Mbps connection to each individual user, this has become
even more true. Here is what a 10 Mbps connection can carry:
Table 2. Application Bandwidth Requirements
The question now is: Can a user clear his/her e-mail inbox, save some
spreadsheet data to the server, talk to his/her colleague through the telephony
software, watch a training video produced by the finance department and
participate in a videoconferencing meeting, all at the same time?
Giving a user a 100 Mbps connection may not mean it would be utilized
adequately. A 10 Mbps connection is still a good solution to use for its cost
effectiveness. This may be a good option to meet certain budget constraints, while
keeping an upgrade option open for the future.
Application                                     Bandwidth Occupied (Mbps)
Network applications
(read e-mail, save some spreadsheets)           2
Voice                                           0.064
Watching MPEG-1 training video (small window)   0.6
Videoconferencing                               0.384
Total bandwidth                                 < 4
It is generally agreed that the maximum "usable" bandwidth for Ethernet LAN is
about 40%, after which the effect of collision is so bad that efficiency actually
begins to drop.
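Putting the two observations together, the arithmetic can be checked directly: the combined demand from Table 2 against the roughly 40% of a 10 Mbps link that is usable before collisions erode efficiency. The figures are the illustrative ones from the table, not measurements.

```python
# Bandwidth demands from Table 2, in Mbps (illustrative figures)
apps = {
    "network applications": 2.0,      # e-mail, spreadsheets
    "voice": 0.064,
    "MPEG-1 video (small window)": 0.6,
    "videoconferencing": 0.384,
}

link_mbps = 10.0
usable = 0.40 * link_mbps             # ~40% usable on a shared Ethernet

total = sum(apps.values())
print(f"total demand: {total:.3f} Mbps, usable: {usable:.1f} Mbps")
print("fits within the usable share:", total <= usable)
```

The total comes to just over 3 Mbps, comfortably inside the 4 Mbps usable share, which supports the argument that a 10 Mbps connection per user is often enough.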
Nowadays, with card vendors manufacturing mostly 10/100Mbps Ethernet cards,
more and more workstations have the option of connecting to the network at
100Mbps. The Gigabit Ethernet is a new technology and it is positioned to be a
backbone technology rather than being used to connect to the end users. As
standards evolve, Gigabit Ethernet will see widespread usage in the data center
and most of the servers that connect to the network at 100 Mbps today will
eventually move to a Gigabit Ethernet.
Ethernet is a good technology to deploy for a low volume network or application
that does not demand high bandwidth. Because it does not have complicated
access control to the network, it is simple and can provide better efficiency in
delivery of data. Because collisions occur nondeterministically, response time on
an Ethernet cannot be guaranteed and hence, another technology has to be
deployed where predictable response time is needed.
Although Ethernet technology has been around for quite some time, it will be
deployed for many years to come because it is simple and economical. Its
plug-and-play nature allows it to be positioned as a consumer product and users
require very little training to set up an Ethernet LAN. With the explosion of Internet
usage and e-commerce proliferating, more companies, especially the small ones
and the small office, home office (SoHo) establishments, will continue to drive the
demand for Ethernet products.
Token-Ring/IEEE 802.5
The token-ring technology was invented by IBM in the 1970s and it is the second
most popular LAN architecture. It supports speeds of 1, 4 or 16 Mbps. A
new technology, called High-Speed Token-Ring, is being developed by the IEEE
and will run at 100 Mbps.
The token-ring LAN is an example of a token-passing network, that is, members
of the LAN transmit information only when they get hold of the token. Since the
transmission of data is decided by the control of the token, a token-ring LAN has
no collision.
Although different in specifications, both the IBM Token-Ring and IEEE 802.5
LANs will be collectively known as the token-ring LAN in this book.
Figure 7. Passing of Token in a Token-Ring LAN
As shown in the above diagram, all workstations are connected to the network in
a logical ring manner, and access to the ring is controlled by a circulating token
frame. When station A with data to transmit to D receives the token, it changes
the content of the token frame, appends data to the frame and retransmits the
frame. As the frame passes the next station B, B checks to see if the frame is
meant for it. Since the data is meant for D, B then retransmits the frame, and this
action is repeated through C and finally to D. When D receives the frame, it
copies the information in the frame, sets the frame-copied and address-recognized
bits, and retransmits the modified frame on the network. Eventually, A
receives the frame, strips the information from it, and releases a new token into
the ring so that other workstations may use it. The following diagram shows the
frame formats for data and token frames:
Figure 8. Token-Ring Frame Formats
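The circulation sequence described above can be traced in code. This is a toy walk around a ring of station names; the function name, station labels, and log wording are all illustrative assumptions.

```python
def token_ring_send(ring, src, dst, data):
    """Walk a frame around a ring of station names, starting at the
    sender, until it returns; record what each station does."""
    log = []
    i = ring.index(src)
    log.append(f"{src} seizes token, sends frame for {dst}")
    pos = (i + 1) % len(ring)
    while ring[pos] != src:                 # frame travels the whole ring
        station = ring[pos]
        if station == dst:
            log.append(f"{dst} copies '{data}', sets address-recognized"
                       " and frame-copied bits, repeats frame")
        else:
            log.append(f"{station} repeats frame")
        pos = (pos + 1) % len(ring)
    log.append(f"{src} strips frame, releases new token")
    return log

for line in token_ring_send(["A", "B", "C", "D"], "A", "D", "hello"):
    print(line)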
As described, the token passing technique is different from Ethernet’s random
manner of access. This important feature makes a token-ring LAN deterministic
and allows delays to be determined. Besides this difference, token-ring also
offers extensive network diagnostics and self-recovery features such as:
• Power-on and ring insertion diagnostics
• Lobe-insertion testing and online lobe fault detection
• Signal loss detection, beacon support for automatic test and removal
• Active and standby ring monitor functions
• Ring transmission error detection and reporting
• Failing component isolation for automatic or manual recovery
It is not surprising that with such extensive features, token-ring adapters are more
expensive than the Ethernet ones because all of these functions are implemented
in the adapter microcode.
The token-ring LAN is particularly stable and efficient even under high load
conditions. The impact of an increase in the number of workstations on the same
LAN does not affect token-ring as much as it would Ethernet. It guarantees fair
access to all workstations on the same LAN and is further enhanced with an
eight-level priority mechanism. With extensive features like self recovery and auto
configuration at the electrical level, the token-ring LAN is the network of choice for
networks that require reliability and predictable response times. Networks such
as factory manufacturing systems and airline reservation systems typically use
token-ring LANs for these reasons.
Fiber Distributed Data Interface (FDDI)
FDDI was developed in the early 1980s for high speed host connections but it
soon became a popular choice for building LAN backbones. Similar to the
token-ring LAN, FDDI uses a token passing method to operate but it uses two
rings, one primary and one secondary, running at 100 Mbps. Under normal
conditions, the primary ring is used while the secondary is in a standby mode.
FDDI provides flexibility in its connectivity and redundancy and offers a few ways
of connecting the workstations, one of which is called the dual attachment station ring.
In a dual attachment station ring, workstations are called Dual Attachment
Stations (DAS). All of them have two ports (A and B) available for connection to
the network as shown in the following diagram:
Figure 9. FDDI Dual Attachment Rings
In the above setup, the network consists of a primary ring and a secondary ring in
which data flows in opposite directions. Under normal conditions, data flows in
the primary ring and the secondary merely functions as a backup. In the event of
a DAS or cable failure, the two adjacent DASs would "wrap" their respective ports
that are connected to the failed DAS. The network now becomes a single ring and
continues to operate as shown in the following diagram:
Figure 10. FDDI Redundancy
It is easy to note the robustness of FDDI and appreciate its use in a high
availability network. Since it is similar in nature to token-ring, FDDI offers
capabilities such as self recovery and security. Because it mostly runs on fiber, it
is not affected by electromagnetic interference. Due to its robustness and high
speed, FDDI was being touted as the backbone of choice. But with the
development of 100 Mbps Ethernet technology, network managers who are going
for bandwidth rather than reliability have chosen to implement 100 Mbps Ethernet
rather than FDDI.
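The wrap behavior can be modeled very roughly: normally data circulates only on the primary ring; after a station failure, the two neighbors wrap, and the surviving stations form one longer ring that is traversed outbound and back. The function name and the out-and-back path model are simplifying assumptions for illustration only.

```python
def fddi_path(stations, failed=None):
    """Return the data path around a toy FDDI dual ring. Normally data
    uses only the primary ring; if one station fails, its neighbors
    wrap primary onto secondary, forming one longer single ring."""
    if failed is None:
        return list(stations)                 # primary ring only
    alive = [s for s in stations if s != failed]
    # Wrapped path: out along the primary, back along the secondary
    return alive + list(reversed(alive))

ring = ["A", "B", "C", "D"]
print(fddi_path(ring))                # normal operation
print(fddi_path(ring, failed="C"))    # ring wraps around the failure
```

The key property the model captures is that every surviving station remains reachable after a single failure, at the cost of a longer path.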
Though it may not be as popular as Ethernet or token-ring, one can still find many
networks operating on FDDI technology.
Comparison of LAN Technologies
It is appropriate, at this point, to compare the various LAN technologies that we
have discussed. These technologies are the most popular ones deployed, each
tending to be dominant in particular working environments.
Table 3. Comparing LAN Technologies
                          Ethernet        Token-Ring       FDDI
Topology                  Bus             Ring             Dual Rings
Access Method             CSMA/CD         Token Passing    Token Passing
Speed (in Mbps)           10/100/1000     1/4/16/100       100
Broadcast                 Broadcast       Broadcast        Broadcast
Packet Size (Bytes)       64-1518         32-16K           32-4400
Self Recovery             No              Yes              Yes
Data Path Redundancy      No              No               Yes
Predictable Response
Times                     No              Yes              Yes
Priority Classes          No              Yes              Yes
Maximum Cable Length
Restrictions              Yes             Yes              Yes
Cost of Deployment
(relative to each other)  Cheap           Moderate         Expensive
Typical Deployment        Small offices,  Manufacturing    Backbone
Environment               most corporate  floor, airline   technology for
                          networks        reservation      medium and large
                                          systems          networks
The Ethernet, token-ring and the FDDI technologies are generally referred to
as the legacy LANs, as opposed to new technology like ATM.
The above table shows the differences in characteristics of these technologies.
The comparison shows that each of these technologies is more suitable than
the rest for certain operating requirements.
The Ethernet technology tends to be deployed in networks where network
response time is not critical to the functions of the applications. It is commonly
found in educational institutes, mainly for its cost effectiveness, and e-commerce,
for its simplicity in technical requirements. The token-ring is most suitable for
networks that require predictable network response time. Airline reservation
systems, manufacturing systems, as well as some banking and financial
applications, have stringent network response time requirements. These
networks tend to be token-ring, although there may be a few exceptions. FDDI
is commonly deployed as a backbone network in medium-to-large networks. It
can be found in both Ethernet and token-ring environments. As mentioned, with
the popularity of the Internet growing and the number of e-commerce setups
increasing at an enormous pace, Ethernet is the popular choice for building an IP
network.
Thus, in deciding on which technology is most suitable for deployment, a network
manager needs to ascertain the requirement carefully, and make the correct
decision based on the type of environment he/she operates in, the type of
applications to be supported, and the overall expectations of the end users.
2.1.3 WAN Technologies
WAN technologies are mainly used to connect networks that are geographically
separated. For example, a remote branch office located in city A might connect to
the central office in city B. Routers are usually used in WAN connectivity although
switches may be deployed.
The requirements and choices of WAN technologies are different from LAN
technologies. The main reason is that WAN technologies are usually a subscribed
service offered by carriers, and they are very costly. WAN also differs from LAN
technologies in the area of speed. While LAN technologies are running at
megabits per second, the WANs are usually in kilobits per second. Also, WAN
connections tend to be point-to-point in nature, while LAN is multiaccess.
The following table describes the differences between LAN and WAN
technologies:
Table 4. Comparing LAN and WAN Technologies
                          LAN                    WAN
Subscribed service        No                     Yes
Speed                     4, 10, 16, 100,        9.6, 14.4, 28.8, 56,
                          155, 622 Mbps,         64, 128, 256,
                          1 Gbps                 512 kbps;
                                                 1.5, 2, 45, 155,
                                                 622 Mbps
Cost per kbps
(relative to each other)  Cheap                  Very expensive
Performance a major
decision criterion        Yes                    No
Cost a major
decision criterion        Maybe                  Yes
Cost of redundancy        May be expensive       Very expensive
Need specially
trained personnel         May not                Definitely
It would seem obvious that the criteria for choosing a suitable WAN technology
are different from those for a LAN. The choice is very much dependent on the
services offered by the carrier, the tariffs, the service quality of the carrier and
the availability of expertise.
Leased Lines
Leased lines are the most common way of connecting remote offices to the head
office. A leased line is basically a permanent circuit leased from the carrier and
connects in a point-to-point manner.
The leased line technology has been around for quite some time and many
network managers are familiar with it. With speeds ranging from 64 kbps to as
high as 45 Mbps, it usually runs protocols such as IP and IPX over the
Point-to-Point Protocol (PPP).
Routers are usually deployed to connect leased lines from remote offices
to a central site. A device called a data service unit/channel service unit
(DSU/CSU) connects the router to the leased line, and for every leased line
connection, a pair of DSU/CSUs is required.
Due to its cost and the introduction of many other WAN technologies, network
managers have begun to replace leased lines with other technologies for
reasons such as cost and features.
X.25
X.25 was developed by the carriers in the early 1970s, and it allows the transport
of data over a public data network service. The body that oversees its
development is the International Telecommunication Union (ITU). Since the ITU
is made up of most of the telephone companies, this makes X.25 a truly
international standard. X.25 is a classic example of a WAN protocol and a
non-broadcast network.
The components that make up an X.25 network are:
• Data terminal equipment (DTE)
DTEs are the communication devices located at an end user's premises.
Examples of DTEs are routers or hosts.
• Packet assembler/disassembler (PAD)
A PAD connects the DTE to the DCE and acts as a translator.
• Data circuit-terminating equipment (DCE)
DCEs are the devices that connect the DTEs to the main network. An example
of a DCE is a modem.
• Packet switching exchange (PSE)
PSEs are the switches located in the carrier's facilities. The PSEs form the
backbone of the X.25 network.
X.25 end devices communicate much as we do over a telephone network. To
initiate a communication path, called a virtual circuit, one workstation calls
another, and upon successful connection of the call, data begins to be
transmitted. As opposed to the broadcast network, there is no facility such as
ARP to map an IP address to an X.25 address. Instead, mappings are done
statically and no broadcast is required. In an X.25 network, there are two
types of virtual circuit:
• Permanent virtual circuit (PVC)
PVCs are established for busy networks that always require the service of a
virtual circuit. Rather than making repetitive calls, the virtual circuit is made
permanent.
• Switched virtual circuit (SVC)
SVCs are used for occasional data transfers. An SVC is set up on demand and
is taken down when the transmission ends.
The X.25 specification maps to the first three layers of the OSI model, as shown
in the following diagram:
Figure 11. X.25 Layers versus OSI Model
The encapsulation of IP over X.25 networks is described in RFC 1356. The RFC
proposes a larger X.25 maximum data packet size and a mechanism for
encapsulating longer IP packets than the original specification allowed.
When data is sent to an X.25 data communication equipment, one or more virtual
circuits are opened in the network to transmit it to the final destination. The IP
datagrams are the protocol data units (PDUs) when the IP over X.25
encapsulation occurs. The PDUs are sent as X.25 complete packet sequences
across the network. That is, PDUs begin on X.25 data packet boundaries and the
M bit (more data) is used to fragment PDUs that are larger than one X.25 data
packet.
There have been many discussions about performance in an X.25 network. RFC
1356 specifies that every system must be able to receive and transmit PDUs of
up to 1600 bytes. To preserve interoperability with the original draft, RFC 877,
the default value for IP datagrams should be 1500 bytes, configurable in the
range from 576 to 1600 bytes. This default of 1500 bytes matches the IP packet
size commonly used in LAN and WAN environments, so that router fragmentation
can be avoided.
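The RFC 1356 size rules above can be sketched as a small validation routine. This is a hypothetical helper for illustration only, not part of any real X.25 implementation:

```python
# RFC 1356 IP-over-X.25 MTU rules: default 1500 bytes, configurable
# within [576, 1600]. Constant and function names are invented.
RFC1356_MIN, RFC1356_DEFAULT, RFC1356_MAX = 576, 1500, 1600

def x25_ip_mtu(configured=None):
    """Return the IP MTU to use on an X.25 interface, validating any
    operator-configured value against the RFC 1356 range."""
    if configured is None:
        return RFC1356_DEFAULT
    if not (RFC1356_MIN <= configured <= RFC1356_MAX):
        raise ValueError("MTU must be in [576, 1600] per RFC 1356")
    return configured
```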
Typically, X.25 public data networks make use of low-speed data links, and data
traverses a certain number of hops before reaching its destination. Because
X.25 switches store the complete packet before sending it on the output link,
longer X.25 packets cause longer delays. If a small end-to-end window size is
used, it also decreases the end-to-end throughput of the X.25 circuit.
Fragmenting large IP packets into smaller X.25 packets can improve throughput
by allowing greater pipelining across the X.25 switches. Large X.25 packets on
low-speed links can also introduce higher packet latency. Thus, the use of
larger X.25 packets will not necessarily increase network performance and often
decreases it, so some care should be taken in choosing the packet size.
It is also noted that some switches in the X.25 network will further fragment
packets, so the performance of a link is also decided by the characteristics of the
carrier’s network.
A different approach to increasing performance relies on opening multiple virtual
channels, but this increases delivery costs over public data networks. However,
this method can overcome the problems introduced by a small X.25 window size
by increasing the share of the available bandwidth that is actually used.
The low-speed performance of X.25 can sometimes pose problems for TCP/IP
applications that time out easily. In such cases, other connecting protocols
have to be deployed in place of X.25. With the advent of multiprotocol routers,
you can find TCP/IP running on other WAN protocols while X.25 is used for other
protocols. In fact, with the proliferation of TCP/IP networks, a new way of
transporting connections has started to emerge: transporting X.25 traffic
across a TCP/IP network.
An example is the X.25 Transport Protocol (XTP) provided by the Nways 221x
multiprotocol router family. This protocol works as a protocol forwarder,
transferring incoming X.25 packets to the final X.25 connection destination
across the TCP/IP network. A common situation is depicted in Figure 12.
The Network Infrastructure 35
Figure 12. X.25 over IP (XTP)

Integrated Services Digital Network (ISDN)
Integrated services digital network (ISDN) is a subscribed service offered by
phone companies. It makes use of digital technology to transport various
information, including data, voice and video, by using phone lines.
There are two types of ISDN interfaces, the Basic Rate Interface (BRI) and the
Primary Rate Interface (PRI). The BRI provides 2 x 64 kbps channels for data
transmission (called the B channels) and 1 x 16 kbps channel for control
transmission (called the D channel). The B channels are used as HDLC
frame-delimited 64 kbps pipes, while the D channel can also be used for X.25
traffic. The PRI provides T1 or E1 support. For T1, it supports 23 x 64 kbps B
channels and 1 x 64 kbps D channel. The E1 supports 30 x 64 kbps channels for
data and 1 x 64 kbps channel for control transmissions.
ISDN provides a "dial-on-demand" service, meaning that a circuit is connected
only when there is a requirement for it. The charging scheme of a fixed rate plus
connection-based charges makes ISDN ideal for situations where a permanent
connection is not necessary. It is especially attractive in situations where
remote branches need to connect to the main office only for a batch update of
records.
Another useful way of deploying ISDN is to act as a backup for a primary link. For
example, a remote office may be connected to the central office through a leased
line, with an ISDN link used as a backup. Under normal operation, traffic flows
through the leased line and the ISDN link is idle. In the event of a leased line
failure, the router at the remote site can use the ISDN connection to dial to the
central office for connection. The IBM 2212 Access Utility, for example, is a useful
tool in this scenario.
X.31 - Support of X.25 over ISDN
The ITU standard X.31 is for transmitting X.25 packets over ISDN. This
standard provides support for X.25 with unconditional notification on the
ISDN BRI D channel.
X.31 is available from service providers in many countries. It gives the
router a 9600 bps X.25 circuit. Since the D channel is always present, this
circuit can be an X.25 PVC or SVC.

Frame Relay
Frame relay is a fast switching technique that combines the speed of fiber optic
technologies (1.544 Mbps in the US and 2.048 Mbps in Europe) with the
port-sharing characteristics typical of networks such as X.25. The design point
of frame relay is that modern networks are very reliable, so error checking can
be left to the DTE. Thus, frame relay does not perform link-level error checks
and enjoys higher performance compared to X.25.
The frame relay network consists of switches that are provided by the carrier and
that are responsible for directing the traffic within the network to the final
destination. The routers are connected to the frame relay network as terminal
equipment, and connections are provided by standard-based interfaces.
The frame relay standards describe both the interface between the terminal
equipment (router) and the frame relay network, called user-to-network interface
(UNI), and the interface between adjacent frame relay networks, called
network-to-network interface (NNI).
Figure 13. Frame Relay Network
There are three important concepts in frame relay that you need to know:
Data link connection identifier (DLCI)
The DLCI is the equivalent of a MAC address in a LAN environment. Data
is encapsulated by the router in frame relay frames and delivered through
the network based on the DLCI. The DLCI can have local or global
significance; in both cases it uniquely identifies a communication channel.
Traffic destined for or originating from each of the partner endstations is
multiplexed on the same user-network interface, each flow carrying a
different DLCI. The DLCI is used by the network to associate a frame with a
specific virtual circuit. The address field is either two, three or four
octets long; the default used by most implementations is the two-octet
field. The DLCI occupies multiple bits of the address field, and its length
depends on the address field length.
Permanent virtual circuits (PVC)
PVCs are predefined paths through the frame relay network that connect
two end systems to each other. They are logical paths in the network,
identified by DLCIs. As part of a subscription option, the bandwidth for
PVCs is pre-allocated and a charge is imposed regardless of traffic volume.
Switched virtual circuits (SVC)
Unlike the PVCs, SVCs are not permanently defined in the frame relay
network. The connected terminal equipment may request for a call setup when
there is a requirement to transmit data. A few options, related to the
transmission, are specified during the setup of the connection. The SVCs are
activated by the terminal equipment, such as routers connected to the frame
relay network, and the charges applied by a public frame relay carrier are
based upon circuit activity and differ from those for PVCs.
It is interesting to note that although regarded as a non-broadcast network,
frame relay supports the ARP protocol as well as the rest of the TCP/IP
routing protocols.

Frame Relay Congestion Management
Frame relay provides a mechanism to control and avoid congestion within the
network. There are some basic concepts that need to be described:
Forward Explicit Congestion Notification (FECN)
This is a 1-bit field that notifies the user that the network is experiencing
congestion in the direction the frame was sent. The users will take action to
relieve the congestion.
Backward Explicit Congestion Notification (BECN)
This is a 1-bit field that notifies the user that the network is experiencing
congestion in the reverse direction of the frame. The users can slow down the
rate of delivering packets through the network to relieve the congestion.
Discard Eligibility (DE)
This is a 1-bit field indicating whether or not this frame should be discarded by
the network in preference to other frames if there are congested nodes in the
network. The use of DE requires that everyone in the network "play the game".
In networks such as public frame relay networks, DTEs never set the DE bit,
because in the event of congestion their frames would be the first ones to be
discarded.
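The FECN, BECN and DE bits live in the frame relay address field together with the DLCI. As a sketch of the default two-octet (Q.922) layout described earlier, the following Python function (its name is invented for illustration) decodes the individual fields:

```python
def decode_fr_address(octet1, octet2):
    """Decode a default two-octet frame relay address field.

    Octet 1: DLCI high-order 6 bits, C/R bit, EA=0.
    Octet 2: DLCI low-order 4 bits, FECN, BECN, DE, EA=1.
    """
    dlci = (((octet1 >> 2) & 0x3F) << 4) | ((octet2 >> 4) & 0x0F)
    return {
        "dlci": dlci,
        "cr":   (octet1 >> 1) & 1,   # command/response bit
        "fecn": (octet2 >> 3) & 1,   # forward congestion notification
        "becn": (octet2 >> 2) & 1,   # backward congestion notification
        "de":   (octet2 >> 1) & 1,   # discard eligibility
    }
```

For example, octets 0x18 and 0x43 decode to DLCI 100 with the DE bit set.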
The congestion control mechanism ensures that no stations can monopolize the
network at the expense of others. The congestion control mechanism includes
both congestion avoidance and congestion recovery.
The frame relay network does not guarantee data delivery and relies on the
higher-level protocols for error recovery. When experiencing congestion, the
network will inform its users so that they can take appropriate corrective
actions. FECN/BECN bits are set during mild congestion, while the network is
still able to transfer frames. In the event of severe congestion, frames are
discarded. The mechanism that prioritizes the discarding of frames relies on
the discard eligibility (DE) bit in the address field of the frame header: the
network starts by discarding frames with the DE bit set. To prevent severe
congestion from occurring, a technique called traffic shaping can be applied
by the end user systems.
Figure 14. Frame Relay Congestion Management
Traffic Management
For each PVC and SVC, a set of parameters can be specified to indicate the
bandwidth requirement and to manage the burst and peak traffic values. This
mechanism relies on:
Access Rate
The access rate is the maximum rate that the terminal equipment can use to
send data into the frame relay network. It is related to the speed of the access
link that connects the DTE to the frame relay switch device.
Committed Information Rate (CIR)
The Committed Information Rate (CIR) has been defined as the amount of
data that the network is committed to transfer under normal conditions. The
rate is averaged over a period of time. The CIR is also referred to as minimum
acceptable throughput. The CIR can be set lower than or equal to the access
rate, but the DTE can send frames at a higher rate than the CIR.
The Burst Committed (BC)
The BC is the maximum committed amount of data that a user may send to the
network in a measured period of time and for which the network will guarantee
message delivery under normal conditions.
Burst Exceeded (BE)
The BE is the amount of data by which a user can exceed the BC during the
measured period of time. If there is spare capacity in the network, these
excess frames will be delivered to the destination. To avoid congestion, a
practical implementation is to set all these frames with the discard eligible
(DE) bit on. However, in a period of one second, the CIR plus BE rate cannot
exceed the access rate.
When circuit monitoring is enabled on the attached routers, they can use the
CIR and BE parameters to send traffic into the frame relay network at the
proper rate.
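The interplay of BC and BE can be sketched as a simple ingress policer over one measurement interval: traffic within BC is committed, traffic within BC + BE is forwarded with DE set, and anything beyond that is not accepted. This is an illustrative model only; the class name, byte-count units, and single-interval simplification are assumptions, not how a real switch is implemented:

```python
# Hypothetical per-circuit policer for one measurement interval Tc.
class CircuitPolicer:
    def __init__(self, bc_bytes, be_bytes):
        self.bc, self.be = bc_bytes, be_bytes
        self.sent = 0  # bytes accepted during the current interval

    def new_interval(self):
        """Reset the byte counter at the start of each interval Tc."""
        self.sent = 0

    def classify(self, frame_len):
        """Return 'commit', 'de' (forwarded discard-eligible) or 'drop'."""
        if self.sent + frame_len <= self.bc:
            verdict = "commit"
        elif self.sent + frame_len <= self.bc + self.be:
            verdict = "de"
        else:
            return "drop"  # excess beyond Bc + Be is not accepted
        self.sent += frame_len
        return verdict
```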
Local Management Interface (LMI) Extension
The LMI is a set of procedures and messages exchanged between the routers
and the frame relay switch to report on the health of the network through:
Status of the link between the connected router and switch
Notification of added and deleted PVCs and SVCs
Status messages of the circuits’ availability
Some of the features in LMI are standard implementations while others are
optional. Besides the status checking for the circuits, the LMI can
have optional features such as multicasting. Multicasting allows the network to
deliver multiple copies of information to multiple destinations in a network.
This is a useful feature especially when running protocols that use broadcast,
for example ARP. Also routers such as the IBM 2212 provide features such as
Protocol Broadcast which, when turned on, allows protocols such as RIP to
function across the frame relay network.
IP Encapsulation in Frame Relay
The specification for multiprotocol encapsulation in frame relay is described in
RFC 2427. This RFC obsoletes the widely implemented RFC 1490. Compared to
its predecessor, it formalizes the SNAP and Network Level Protocol ID (NLPID)
support, removes the fragmentation process, and adds address resolution in the
SVC environment, source-routing BPDU support and security enhancements.
The NLPID field is administered by ISO and the ITU. It contains values for many
different protocols including IP, CLNP, and IEEE Subnetwork Access Protocol
(SNAP). This field tells the receiver what encapsulation or what protocol follows in
a transmission.
Internet Protocol (IP) datagrams are sent over a frame relay network in
encapsulated format. Within this context, IP can be encapsulated in two different
ways: NLPID value indicating IP or NLPID value indicating SNAP. Although both
of these encapsulations are supported under the given definitions, it is
advantageous to select only one method as the appropriate mechanism for
encapsulating IP data. Therefore, IP data should be encapsulated using the
NLPID value of 0xCC indicating an IP packet. This option is more efficient
because it transmits 48 fewer bits without the SNAP header and is consistent with
the encapsulation of IP in an X.25 network.
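The two encapsulation forms can be compared directly. The following sketch builds both header layouts described by RFC 2427 (the helper names are invented for illustration) and shows where the 48-bit (6-octet) saving comes from:

```python
# RFC 2427 routed-IP encapsulation headers.
UI_CONTROL = 0x03          # Q.922 UI control field
NLPID_IP = 0xCC            # NLPID value for IP
NLPID_SNAP = 0x80          # NLPID value indicating a SNAP header follows

def encapsulate_ip_nlpid(ip_packet):
    """Direct form: control, NLPID 0xCC, then the IP datagram (2 octets)."""
    return bytes([UI_CONTROL, NLPID_IP]) + ip_packet

def encapsulate_ip_snap(ip_packet):
    """SNAP form: control, pad, NLPID 0x80, OUI 00-00-00, PID 0x0800
    for IP (8 octets of overhead in total)."""
    return bytes([UI_CONTROL, 0x00, NLPID_SNAP,
                  0x00, 0x00, 0x00,   # OUI
                  0x08, 0x00]) + ip_packet  # EtherType PID for IPv4
```

The direct NLPID form carries 2 octets of overhead against 8 for the SNAP form, the 6-octet (48-bit) difference cited above.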
The use of the NLPID and SNAP network layer identifiers enables multiprotocol
transport over the frame relay network, thus avoiding other encapsulation
techniques for both bridged and routed datagrams. This goal was achieved
with the RFC 1490 specifications. This multiplexing of various protocols over a
single circuit saves cost and looks attractive to network managers. But care has
to be taken so that mission-critical data is not affected by other, less important
data traffic. Some implementations use a separate circuit to carry mission-critical
applications, but a better approach is to use a single PVC for all traffic and to
manage prioritization with a relatively sophisticated queuing system.
Frame relay stations may choose to support the exchange identification (XID)
specified in Appendix III of Q.922. This XID exchange allows the following
parameters to be negotiated at the initialization of a frame relay circuit: maximum
frame size, retransmission timer, and the maximum number of outstanding
information (I) frames.
If this exchange is not used, these values must be statically configured by mutual
agreement of data link connection (DLC) endpoints, or must be defaulted to the
values specified in Q.922.
There is no commonly implemented minimum or maximum frame size for frame
relay networks. Generally, the maximum will be greater than or equal to 1600
octets, but each frame relay provider will specify an appropriate value for its
network. A frame relay data terminal equipment (DTE), therefore, must allow the
maximum acceptable frame size to be configurable.
Inverse ARP
There are situations in which a frame relay station may wish to dynamically
resolve a protocol address over a PVC. This may be accomplished using the
standard ARP encapsulated within a SNAP-encoded frame relay packet.
Because of the inefficiencies of emulating broadcasts in a frame relay
environment, a new address resolution variation was developed. It is called
Inverse ARP and describes a method for resolving a protocol address when the
hardware address is already known. In a frame relay network, the known
hardware address is the DLCI. Support for Inverse ARP function is not required,
but it has proven to be useful for frame relay interface autoconfiguration.
At times, stations must be able to map more than one IP address in the same IP
subnet to a particular DLCI on a frame relay interface. This need arises from
situations involving remote access, where servers must act as ARP proxies for
many dial-in clients, each assigned a unique IP address while sharing the
bandwidth on the same DLC. The dynamic nature of such applications results in
frequent address association changes with no effect on the DLC’s status.
As with any other interface that utilizes ARP, stations may learn the associations
between IP addresses and DLCIs by processing unsolicited ARP requests that
arrive on the DLC. If one station wishes to inform its peer station on the other end
of a frame relay DLC of a new association between an IP address and that PVC,
it should send an unsolicited ARP request with the source IP address equal to the
destination IP address, and both set to the new IP address being used on the
DLC. This allows a station to "announce" new client connections on a particular
DLCI. The receiving station must store the new association, and remove any
existing association, if necessary, from any other DLCI on the interface.
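The replacement rule described above can be modeled with a small mapping table. This is a hypothetical sketch of the bookkeeping, with invented names and addresses, not an actual router implementation:

```python
# Per-interface table of IP-to-DLCI associations learned from
# (Inverse) ARP. Learning a new binding for an IP address displaces
# any existing binding on another DLCI of the same interface.
class FrInterfaceArpTable:
    def __init__(self):
        self.ip_to_dlci = {}

    def learn(self, ip_addr, dlci):
        """Store the announced association; the dict assignment
        implicitly removes the IP's binding to any other DLCI."""
        self.ip_to_dlci[ip_addr] = dlci

    def addresses_on(self, dlci):
        """All IP addresses currently associated with one DLCI."""
        return {ip for ip, d in self.ip_to_dlci.items() if d == dlci}
```

A dial-in client moving between access servers would thus appear first on one DLCI and, after the next announcement, only on the new one.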
It is common for network managers to run an IP network across a frame relay
network and there may be a need to deploy protocols that rely on a broadcast
mechanism to work. In this case, some configuration is required so that these
protocols continue to work across the frame relay network:
OSPF over PVCs
When using a dynamic routing protocol such as Open Shortest Path First
(OSPF) over a frame relay network, the OSPF protocol has to be told that
frame relay is a non-broadcast multiaccess (NBMA) network. Although OSPF
is usually deployed in a broadcast network, it does work in a non-broadcast
network with some configuration changes. In a non-broadcast network,
network managers have to provide a router with static information such as
the Designated Router and all the neighbors. Generally, you need to perform
the following tasks:
Define the frame relay interface as non-broadcast.
Configure the IP addresses of the OSPF neighbors on the frame relay
interface.
Set up the router with the highest priority to become the designated router.
In most frame relay implementations, the topology is typically a star, or
so-called hub and spoke. The router at the central site has all the branches
connected to it with PVCs. Some products provide added features to simplify
the configuration for OSPF in this setup. In the IBM Nways router family, you
can use the OSPF point-to-multipoint frame relay enhancement. Network
managers just need to configure a single IP subnet for the entire frame
relay network, instead of multiple subnets for every PVC connection. The
central router is configured to have the highest router priority so that it is
always chosen as the designated router.
Figure 15. Star Topology in a Frame Relay Network
IP Routing with SVCs
The use of SVCs in a frame relay network offers more flexibility and features such
as dial-on-demand and data path cut-through. With SVCs, network design can be
simplified and performance can be improved.
Bandwidth and cost have always been at odds when it comes to network design.
It is important to strike a balance, whereby an acceptable performance is made
available within a budget. In some cases, having permanent connectivity is a
waste of resources because information exchange takes place only at a certain
time of the day. In this case, having the ability to "dial on demand" when the
connectivity is required saves cost. The IP address of the destination is
associated with a DLCI and a call setup request is initiated when a connection to
that IP address is required. After the originating workstation has sent its data, the
circuit is taken down after a certain timeout period.
Usually, remote branches are connected to the central site and there is little
requirement for them to have interconnection. Building a mesh topology using
PVCs is costly and not practical. SVCs are more suitable here because they help
to conserve network bandwidth, as well as reducing bandwidth cost. Moreover, in
a star topology configuration, inter-branch communication has to go through
the central site router, which increases the number of hops to reach the
destination.
Figure 16. SVCs in a Frame Relay Network
With SVCs, the following protocols can be implemented across the frame relay
network:
• BGP-4

Serial Line IP (SLIP)
Point-to-point connections have been the mainstay of data communication for
many years. In the history of TCP/IP, the Serial Line IP (SLIP) protocol has been
the de facto standard for connecting remote devices, and you can still find
implementations of it. SLIP provides the ability for two endstations to
communicate across a serial line interface, and it is usually used across a
low-bandwidth link.
SLIP is a very simple framing protocol that describes the format of packets over
serial line interfaces and has the following characteristics:
IP data only
As its name implies, SLIP transports only the IP protocol, and the destination
IP address is defined statically before communication begins.
Limited error recovery
SLIP does not provide any mechanism for error handling and recovery,
leaving all error detection responsibility to the higher-level protocols such as
TCP. The checksum fields of these protocols can be enough to detect the
errors that occur on noisy lines.
Limited compression mechanism
Ironic as it may seem, the protocol itself does not provide compression,
especially for frequently used IP header fields. In the case of a TELNET
session, most of the packet headers are the same, which leads to
inefficiency on the link when too many almost identical packets are sent.
There have been some modifications to make SLIP more efficient, such as Van
Jacobson header compression, and many SLIP implementations use them.

Point-to-Point Protocol (PPP)
The Point-to-Point Protocol (PPP) is an Internet standard that has been
developed to overcome the problems associated with SLIP. For instance, PPP
allows negotiation of addresses across the connection instead of statically
defining them. PPP is a network-specific standard protocol with STD number 51.
Its status is elective and it is described in RFC 1661 and RFC 1662.
PPP implements reliable delivery of datagrams over both synchronous and
asynchronous serial lines. It also implements data compression and can be used
to route a wide variety of network protocols.
PPP has three main components:
A method for encapsulating datagrams over serial links.
A Link Control Protocol (LCP) for establishing, configuring and testing the
data-link connection.
A family of Network control protocols (NCP) for establishing and configuring
different network-layer protocols. PPP is designed to allow the simultaneous
use of multiple network-layer protocols.
The format of the PPP frame is similar to that of HDLC. The Point-to-Point
Protocol provides a byte-oriented connection, exchanging information and
message packets in a single frame format. The PPP Link Control Protocol (LCP)
is used to establish, configure, maintain and terminate the connection and goes
through the following phases to establish a connection:
Link establishment and configuration negotiation
The connection for PPP is opened only when a set of LCP packets is
exchanged between the endstations’ PPP processes. Among the information
exchanged is the maximum packet size that can be carried over the link and
the use of authentication. A successful negotiation leads the LCP to the Open
state.
Link quality determination
This optional phase does not specify a policy for link quality but instead
provides tools such as echo request and reply.
• Authentication
The next step is going through the authentication process. Each of the end
systems is required to use the authentication protocol as agreed upon in the
link establishment stage to identify the remote peer. If the authentication
process fails the link goes to the Down state.
Network control protocol negotiation
Once the link is open, endstations negotiate the use of various layer-3
protocols (for example, IP, IPX, DECnet, Banyan VINES and APPN/HPR) by
using the network control protocol (NCP) packets. Each layer 3 protocol has
its own associated network control protocol. For example IP has IP Control
Protocol (IPCP).
The NCP negotiation is independently managed for every network control
protocol and the specific state of the NCP (up or down) indicates if that
network protocol traffic will be carried over the link.
Authentication Protocols
PPP authentication protocols provide a form of security between two nodes
connected via a PPP link. There are different authentication protocols supported:
Password Authentication Protocol (PAP)
PAP is described in RFC 1334. PAP provides a simple mechanism of
authentication after the link establishment. One peer sends an ID and a
password to the other peer and waits to receive an acknowledgment.
Passwords are sent in clear text and there is no encryption involved.
Challenge/Handshake Authentication Protocol (CHAP)
CHAP is described in RFC 1994. The CHAP protocol is used to check
periodically the identity of the peer and not only at the beginning of the link
establishment. The authenticator sends a challenge message to the peer that
responds with a value calculated with a hash function. The authenticator
verifies the value of the hash function with the expected value to accept or
terminate the connection.
Microsoft PPP CHAP (MS-CHAP)
MS-CHAP is used to authenticate Windows workstations and peer routers.
Shiva Password Authentication Protocol (SPAP)
The SPAP is a Shiva proprietary protocol.
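Of the protocols above, the CHAP exchange can be sketched with Python's standard hashlib: per RFC 1994, the response is an MD5 digest over the message identifier, the shared secret and the challenge. The function names are invented for illustration:

```python
import hashlib

def chap_response(identifier, secret, challenge):
    """RFC 1994 response: MD5 over Identifier || secret || challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def chap_verify(identifier, secret, challenge, response):
    """The authenticator recomputes the hash from its own copy of the
    secret and compares it with the peer's response."""
    return chap_response(identifier, secret, challenge) == response
```

The secret itself never crosses the link, and because the authenticator can re-challenge at any time with a fresh random value, a captured response cannot simply be replayed.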
The authentication mechanism starts at the LCP exchange, because if one of the
end systems refuses to use an authentication protocol requested by the other,
the link setup fails. Also, some authentication protocols, for instance CHAP, may
require the end systems to exchange the authentication messages during
connection setup.
The Network Control Protocol (NCP)
PPP has many network control protocols (NCP) for establishing and configuring
different network layer protocols. They are used to individually set up and
terminate specific network layer protocol connections. PPP supports many NCPs
such as the following:
AppleTalk Control Protocol (ATCP)
Banyan VINES Control Protocol (BVCP)
Bridging protocols (BCP, NBCP, and NBFCP)
Callback Control Protocol
DECnet Control Protocol (DNCP)
IP Control Protocol (IPCP)
IPv6 Control Protocol (IPv6CP)
IPX Control Protocol (IPXCP)
OSI Control Protocol (OSICP)
APPN High Performance Routing Control Protocol (APPN HPRCP)
APPN Intermediate Session Routing Control Protocol (APPN ISRCP)
IPCP is described in RFC 1332 and specifies features such as the Van
Jacobson header compression mechanism and the IP address assignment
procedure.
An endstation can either send its IP address to the peer or accept an IP address.
Moreover, it can supply an IP address to the peer if the peer requests one. The
first situation yields an unnumbered interface; that is, both ends of the
point-to-point connection will have the same IP address and will be seen as a
single interface. This does not create problems for the IP routing algorithms.
Otherwise, the other end system of the link will be provided with its own
address.
The router will automatically add a static route directed to the PPP interface for
the address that is successfully negotiated, allowing data to be properly routed.
When the IPCP connection is ended this static route is subsequently removed.
This is a common configuration used for dial-in users.
Multilink PPP
Multilink PPP (MP) is an important enhancement that has been introduced in the
PPP extensions to allow multiple parallel PPP physical links to be bundled
together as if they were a single physical path. The implementation of multilink
PPP can accomplish dynamic bandwidth allocation and also on-demand features
to increase the available bandwidth for a single logical connection. The use of
multilink PPP is also an enhancement that can have importance in the area of
multimedia application support.
Multilink PPP is based on fragmenting large frames and rebuilding them
sequentially. When the PPP links are configured for multilink PPP support, they
are said to be bundled. The multilink PPP sender is allowed to fragment large
packets, and the fragmented frames are delivered with an added multilink PPP
header that basically consists of a sequence number identifying each fragment.
The multilink PPP receiver reassembles the incoming packets in the correct
order by following the sequence numbers in the multilink PPP header.
The virtual connection made up by multilink PPP has more bandwidth than the
original PPP link. The resulting MP bundled bandwidth is almost equal to the sum
of the bandwidths of the individual links. The advantage is that large data packets
can be transmitted within a shorter time.
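The fragmentation and resequencing idea can be sketched as follows. This is a simplified model: real MP headers also carry begin/end flags and are negotiated through LCP, which is omitted here:

```python
def mp_fragment(packet, fragment_size, first_seq=0):
    """Split a packet into (sequence_number, data) fragments that can be
    spread across the bundled links."""
    return [(first_seq + i, packet[off:off + fragment_size])
            for i, off in enumerate(range(0, len(packet), fragment_size))]

def mp_reassemble(fragments):
    """Fragments may arrive out of order over different member links;
    sorting by sequence number restores the original packet."""
    return b"".join(data for _, data in sorted(fragments))
```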
The multilink PPP implementation in the Nways 221x family supports both the
Bandwidth Allocation Protocol (BAP) and the Bandwidth Allocation Control
Protocol (BACP) to dynamically add and drop PPP dial circuits to a virtual link.
Multilink PPP also uses Bandwidth On Demand (BOD) to add dial-up links to an
existing multilink PPP bundle.
The multilink PPP links can be defined in two different ways:
Dedicated link
A dedicated link is a multilink PPP enabled interface that has been configured
as a link to a particular multilink PPP interface. If this link attempts to join
another multilink PPP bundle, it is terminated.
Enabled link
An enabled link is simply one that is not dedicated and can become a link in
any multilink PPP bundle.
The Bandwidth Allocation Protocol (BAP) and the Bandwidth Allocation Control
Protocol (BACP) are used to increase and decrease the multilink PPP interface
bandwidth. When the configured bandwidth utilization thresholds are reached,
these protocols can add an enabled multilink PPP dial circuit to the MP bundle,
provided one is available and the negotiation process with the partner does not
fail. The dedicated links have priority when being added to the bundle, followed
by the enabled ones.
The Bandwidth On Demand (BOD) protocol adds dial links to the MP bundle
using the configured dial circuits’ telephone numbers. They are added in
sequence and last for the time that the bundle is in use.
Using multilink PPP requires some careful planning of the configured bundles.
Limitations exist for mixing leased lines and dial-up circuits in the same bundle.
Multilink PPP capabilities are being investigated to support multi-class functions
in order to provide a reliable data link layer protocol for multimedia traffic over
low-speed links. The multilink PPP implementation in the Nways 221x router
family also supports multilink multi-chassis operation. This functionality is
provided when a remote connection can establish a layer 2 tunnel with a phone
hunt group that spans multiple access servers (see Access Integration Services
Software User’s Guide V3.2, SC30-3988).
2.1.4 Asynchronous Transfer Mode (ATM)
Asynchronous transfer mode (ATM) is a switching technology that offers
high-speed delivery of information, including data, voice and video. It runs at 25,
100, 155 or 622 Mbps, or even up to 2.4 Gbps, and is suitable for deployment in
both LAN and WAN environments. Due to its ubiquitous nature, it can be
categorized as both a LAN and a WAN technology.
Unlike LAN technologies such as Ethernet or token-ring that transport information
in packets called frames, ATM transports information in cells. In legacy LANs,
frames can vary in size, while in ATM all cells are a fixed 53 bytes in
size. ATM is a connection-oriented protocol, which means it does not use
broadcast techniques at the data link layer for delivery of information, and the
data path is predetermined before any information is sent. It offers features that
are not found in Ethernet or token-ring, one of which is called Quality of Service
(QoS). Another benefit that ATM brings is the concept of Virtual LAN (VLAN).
Membership in a group is no longer determined by physical location. Logically
similar workstations can now be grouped together even though they are
physically dispersed.
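Because every 53-byte cell carries a 5-byte header and the last cell of a packet is padded out to a full 48-byte payload, the minimum overhead of carrying packets in cells can be computed directly. A small sketch (simplified: the AAL5 trailer is ignored):

```python
# Compute ATM cell overhead: 5-byte header per 53-byte cell, plus the
# padding needed to fill the final cell of a packet.
import math

CELL_SIZE, HEADER, PAYLOAD = 53, 5, 48

def atm_overhead(packet_bytes):
    """Fraction of transmitted bytes that are not user data."""
    cells = math.ceil(packet_bytes / PAYLOAD)
    sent = cells * CELL_SIZE
    return (sent - packet_bytes) / sent

print(round(HEADER / CELL_SIZE, 3))   # per-cell header overhead, about 9.4%
print(round(atm_overhead(100), 3))    # a short packet: padding raises it further
```

The 5/53 header ratio alone is roughly 9.4%, and padding on short packets pushes the total higher, which is why later comparisons with Ethernet cite ATM overhead above 10%.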
Because ATM works differently from the traditional LAN technologies, new
communication protocols and new applications have to be developed. Before this
happens, something needs to be done to make the traditional LAN technologies
and IP applications work across an ATM network. Today, there are two standards
developed solely for this purpose: Classical IP (CIP) and LAN Emulation (LANE).
Classical IP (CIP)
Classical IP (RFC 1577) is a way of running the IP protocol over an ATM
infrastructure. As its name implies, it supports only the IP protocol. Since ATM
does not provide broadcast service, something needs to be done to address the
mechanism for ARP, which is important in IP for mapping IP addresses to
hardware addresses. A device called the ARP server is introduced in this
standard to address this problem and all IP workstations will have to register with
the ARP server before communication can begin.
In RFC 1577, all IP workstations are grouped into a common domain called a
logical IP subnet, or LIS. And within each LIS, there is an ARP server. The
purpose of the ARP server is to maintain a table containing the IP addresses of
all workstations within the LIS and their corresponding ATM addresses. All other
workstations in a LIS are called ARP clients and they place calls, ATMARP, to the
ARP server, for the resolution of the IP address to the ATM address. After
receiving the information from the ARP server, ARP clients proceed to make calls
to other clients to establish the data path so that information can flow. Therefore,
ARP clients need to be configured with the ATM address of the ARP server before
they can operate in a CIP environment. In a large CIP network, this poses an
administrative problem if there is ever a change in the ARP server's ATM
address. Due to this problem, it is advisable to configure the ARP server's End
System Identifier (ESI) with a locally administered address (LAA) so that no
reconfiguration is required on the ARP clients.
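The register-then-resolve interaction between ARP clients and the ARP server within one LIS can be sketched as follows. The IP and ATM addresses are made-up placeholders, not real ATM NSAP values.

```python
# Sketch of the Classical IP ATMARP interaction within one LIS: clients
# register their (IP, ATM) address pair with the ARP server at startup,
# then query it to resolve a peer's IP address before placing a call.

class ArpServer:
    def __init__(self):
        self.table = {}                    # IP address -> ATM address

    def register(self, ip, atm):
        self.table[ip] = atm               # client registration

    def resolve(self, ip):
        return self.table.get(ip)          # ATMARP request/reply

server = ArpServer()
server.register("192.168.1.10", "atm-addr-A")
server.register("192.168.1.20", "atm-addr-B")

# Client A wants to reach 192.168.1.20: resolve first, then set up an SVC
# to the returned ATM address.
atm_dest = server.resolve("192.168.1.20")
print(atm_dest)
```

This also makes the administrative point above concrete: every client must know the server's ATM address in advance, while the server learns the clients' addresses dynamically through registration.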
There is an update to the RFC, called RFC 1577+, that provides the mechanism
for multiple ARP servers within a single LIS. This is mainly to provide redundancy
to the ARP server.
Classical IP over Permanent Virtual Circuit (CIP over PVC)
There is another implementation of CIP, which is called CIP over PVC. CIP over
PVC is usually deployed over an ATM WAN connection, where the circuit is
always connected. This is typically found in service providers that operate an ATM
core switch (usually with switching capacity ranging from 50 Gbps to 100 Gbps),
with limited or no support for SVC services. In CIP over PVC, there is no need to
resolve the IP address of the destination to ATM address, as it has been mapped
statically to an ATM connection through the definition of virtual path identifier
(VPI) and virtual channel identifier (VCI) values. Because the mapping has to be
done statically, CIP over PVC is used in networks where the interconnections are
limited; otherwise, it would be an administrative burden for the network manager.
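The static mapping that replaces address resolution in CIP over PVC amounts to a hand-configured table of (VPI, VCI) values per destination. A minimal sketch, with example subnet addresses and circuit numbers that are assumptions only:

```python
# Sketch of the static IP-to-PVC mapping used in CIP over PVC: instead of
# querying an ARP server, the network manager configures the (VPI, VCI)
# pair for each destination by hand.

pvc_map = {
    "10.1.1.1": (0, 32),    # remote site A on VPI 0, VCI 32
    "10.1.2.1": (0, 33),    # remote site B on VPI 0, VCI 33
}

def next_hop_circuit(dest_ip):
    """Return the configured PVC for a destination, or None if unmapped."""
    return pvc_map.get(dest_ip)

print(next_hop_circuit("10.1.1.1"))
```

The table grows by one entry per interconnection, which is why the text above recommends this approach only for networks with a limited number of interconnections.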
Though it may have its limitations, CIP over PVC can be a good solution to some
specific requirements. For example, if it is used to connect a remote network to a
central backbone, the network manager can set up the PVC connection in the
ATM switch to be operative only at certain times of the day. The operation of the
PVC (for example, setup and tear down) can be managed automatically by a
network management station. In this way, a network manager can limit the flow of
the remote network’s traffic to certain times of the day for security reasons or for a
specific business requirement.
Advantages of CIP
There are several advantages of using CIP, especially in the areas of
performance and simplicity:
• ATM provides higher speeds than Ethernet or token-ring
The specifications for ATM state connection speeds of 25, 155 or even 622
Mbps. Some vendors have announced support for link speeds of up to 2.4
Gbps. These links offer higher bandwidth than what Ethernet or token-ring can
offer.
• CIP has no broadcast traffic
Since there is no broadcast traffic in the network, the bandwidth is better
utilized for carrying information.
• Benefits of switching
All workstations can have independent conversation channels with their own
peers through the switching mechanism of ATM. This means all conversations
can take place at the same time, and the effective throughput of the network is
higher than a traditional LAN.
• Simplicity
Compared to LAN Emulation (LANE), CIP is simpler in implementation and it
utilizes fewer ATM resources, called VCs. Adding and deleting ARP clients
requires less effort than in LANE, and this makes it simpler to troubleshoot in
the event of a problem.
• Control
As mentioned in the example of CIP over PVC, traffic control can be enforced
through the setup and tear down of the PVCs. This is like giving the network
the ability to be "switched on" or "switched off".
LAN Emulation (LANE)
Unlike CIP, which provides for running only IP over ATM, LAN Emulation (LANE)
is a standard that allows multiprotocol traffic to flow over ATM. As its name
implies, LANE emulates the operation of Ethernet or token-ring so that existing
applications that run on these two technologies can operate on ATM without any
changes. It is useful in providing a migration path for the existing LAN to ATM
because it protects the investment cost in the existing applications.
The components that make up LANE are much more complicated than those in
CIP:
LAN Emulation Configuration Server (LECS)
The LECS centralizes and disseminates information about the ELANs and the
LECs. Deploying an LECS is optional, although it is strongly recommended.
LAN Emulation Server (LES)
The LES has a role similar to that of the ARP server in CIP: it resolves LAN
addresses to ATM addresses.
Broadcast and Unknown Server (BUS)
The BUS is responsible for the delivery of broadcast, multicast and unknown
unicast frames.
LAN Emulation Client (LEC)
A LEC is a workstation participating in a LANE network.
Although more complicated in its implementation than CIP, LANE enjoys
advantages in several areas:
LANE supports multiprotocol traffic.
LANE supports all protocols, and this makes the migration of existing networks
easier.
LANE supports broadcast.
However much of a nuisance it may be, many protocols rely on broadcast to work.
Many servers use broadcast to advertise their services or existence. Clients
use protocols such as DHCP to get their IP addresses. These services would
not be possible in a CIP environment.
LANE provides advanced features not found in CIP
LANE provides several advanced features that are not found in CIP. One good
example is Next Hop Resolution Protocol (NHRP). With NHRP, it is possible to
improve the performance of a network through reduction in router hops.
The following table shows the difference between ATM and LAN technologies.
Table 5. Comparing ATM versus Other LAN Technologies

                          LAN (Ethernet/        CIP                   LANE
                          token-ring)
Speed (Mbps)              4/16/100/1000         25/155/622            25/155/622
Broadcast support         Yes                   No                    Yes, through the BUS
QoS                       No                    Yes                   Yes
Multiprotocol             Yes                   No, only IP           Yes
Bandwidth                 Shared/switched       Switched              Switched
Need new protocol         No                    Yes                   Yes
Need new adapter          No (most PCs have     Yes                   Yes
                          built-in LAN
                          adapters)
Effort in installation    Minimal               Need to specify the   Need to specify any
of client                                       ARP server's ATM      combination of: LECS
                                                address               address, LES/BUS
                                                                      addresses, ELAN names
Overhead (header vs.      Low (< 2%)            High (> 10%)          High (> 10%)
total packet size)

ATM is a technology that provides a ubiquitous transport mechanism for both LAN
and WAN. In the past, LAN and WAN used different protocols to operate, such as
Ethernet for LAN and ISDN for WAN. This complicates design and makes
maintaining the network costly, because more protocols are involved and
managers need to be trained on different protocols. With ATM, it is possible to
use a single technology for both LAN and WAN connections and to make the
network more manageable.
2.1.5 Fast Internet Access
In recent years, the number of users on the Internet has grown exponentially and
more and more users are subscribing to Internet service providers (ISPs) for
access. Most home users still connect to ISPs through an analog modem, with
initial speeds at a mediocre 9.6 kbps. With advancements in modem technology,
the speed has increased to 14.4 kbps, to 28.8 kbps, then to 33.6 kbps and finally
to 56 kbps. Some users have even signed up for ISDN services at 128 kbps or
256 kbps but these are few.
With the advent of e-commerce and multimedia rich applications proliferating on
the Internet, this "last mile" technology has proved to be a serious bottleneck.
Vendors are developing new technologies to "broaden the last mile pipe" and
there are two major technologies today that do this: the cable modem and the
xDSL technology.
These technologies, besides providing higher bandwidth for "surfers", have
opened a new door for network managers who may be looking at new
technologies for their company. With more employees working away from the
office, application design has taken a new turn. In the past, application
developers have always assumed that all users are connected via the LAN
technologies, and bandwidth is never a problem. With more and more users
working from home, application developers have now realized that their
applications may not run on a user's workstation at home because of the 28.8
kbps link over which he or she is connected. While the company LAN has gone
from 10 Mbps to 100 Mbps and the entire corporation gears toward multimedia
application deployment, some users are still lagging behind. Although security may
pose a problem to the corporation, these technologies have nonetheless given
network managers some additional options in remote connectivity.
Cable Modem Network
The cable TV (CATV) infrastructure is traditionally used for the transmission of
one way analog video signals. The network infrastructure has evolved from
mostly coaxial cabling to the new Hybrid Fiber-Coaxial (HFC) network, which is
made up of a combination of fiber optic and coaxial "last mile" networks. With the
introduction of fiber optic networks and the development of new standards, the
HFC network soon became capable of two way transmission. The general
structure of a cable modem network may look like the following diagram:
Figure 17. Cable Modem and the HFC Infrastructure
The cable modem network is typically made up of high speed fiber optic
distribution rings and coaxial cabling that carry the TV signals to the subscriber's
home. Subscribers in the same district are connected to a common
distribution point called a headend. The coaxial cable runs from the headend to
the homes in a tree topology and the traffic direction is predominantly from the
headend to the homes. The cable router is a specialized device that can transport
data from a data network through the CATV’s coaxial infrastructure to the homes.
It can also receive a signal from the cable modems installed in the homes and
transport it to the data network.
The subscriber's PC is connected to the cable modem through a 10 Mbps
Ethernet interface, so to the PC, it is exactly like connecting to a LAN. The
bandwidth of the cable modem network is asymmetric, which means the
bandwidth that is available from the headend to the subscribers (called the
downstream channel) is not the same as that in the reverse direction (called the
upstream channel). The downstream channel bandwidth ranges from 30 Mbps to
50 Mbps and all subscribers that are connected to this downstream channel
share the common bandwidth. The upstream channel ranges from 500 kbps to
800 kbps. Depending on the configuration and bandwidth requirements, a group
of subscribers can share two downstream and four upstream channels, giving a
total of 60 Mbps downstream and 2 Mbps upstream. The design of the bandwidth
distribution is such because the cable modem network is used mainly to provide
fast Internet access, which consists mostly of sending a few short requests to a
Web server in return for larger chunks of data to be displayed in a Web browser.
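The channel figures above translate directly into average per-subscriber bandwidth. The subscriber count in this sketch is a hypothetical assumption, and the actual load depends on how many users are active at once:

```python
# Rough per-subscriber bandwidth estimate for a shared cable segment,
# using the channel figures above (60 Mbps down, 2 Mbps up). The
# subscriber count of 500 is hypothetical.

def per_user_kbps(total_mbps, subscribers):
    """Average bandwidth per subscriber, in kbps, on a shared channel."""
    return total_mbps * 1000 / subscribers

downstream = per_user_kbps(60, 500)    # two shared 30 Mbps downstream channels
upstream = per_user_kbps(2, 500)       # four shared ~500 kbps upstream channels
print(round(downstream), round(upstream))
```

Even with 500 subscribers on the segment, the average downstream share in this example still exceeds a 56 kbps modem, while the upstream share is tiny, matching the request/response traffic pattern described above.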
Cable modem technology provides a way for fast Internet connection (easily as
100 times faster than that of analog modems) for the homes and it can possibly
be deployed for mobile workers. As a rather new technology, it has its problems
and limitations:
• Interference
The tree-like topology of the coaxial cable runs acts just like a big TV antenna.
It picks up a lot of outside signals and is easily influenced by electromagnetic
interference. This characteristic especially affects the quality of the upstream
data and is not an easy problem to solve. Corrupted upstream data means
there will be lots of retries from the subscriber’s PC and may result in
application termination.
• Shared Network
The cable modem subscribers basically participate in an Ethernet network. All
subscribers share the same downstream bandwidth and they compete for the
same upstream bandwidth. For network managers considering deploying
cable modem technology, this will have to be taken into consideration.
• Technology not readily available
Implementing a cable modem network requires substantial investment from
the cable company in terms of upgrading the infrastructure and purchasing
new equipment. In the first place, not all areas have HFC infrastructures in
place, and it may take some time before some homes get cable modem service.
• Standards
Many different, mutually incompatible standards for implementing cable modems
exist today. To name a few:
Multimedia Cable Network System (MCNS)
Digital Video Broadcasting (DVB)
IEEE 802.14
These different standards make interoperability difficult, and cable companies
may not want to deploy cable modems on a large scale.
Digital Subscriber Line (DSL) Network
The digital subscriber line (DSL) technology is a way of transporting data over a
normal phone line at a higher speed than the current analog modem. The term
xDSL is usually used because there are several standards to it:
Asymmetric Digital Subscriber Line (ADSL)
High Bit Rate Digital Subscriber Line (HDSL)
Very High Speed Digital Subscriber Line (VDSL)
The xDSL technology is capable of providing a downstream bandwidth of 30
Mbps and an upstream bandwidth of around 600 kbps. But in commercial
deployment, it is usually 1.5 Mbps downstream and maybe 256 kbps upstream.
Subscribers of xDSL technology are connected to a device called a MUX in a
point-to-point manner. The MUX aggregates a number of subscribers (usually 48,
some may go as high as 100) and has an uplink to a networking device, typically
an ATM switch or a router.
Figure 18. The xDSL Network
An interesting point to note is that, unlike a conventional analog modem, a
subscriber can still use the phone while the xDSL modem is in use. This is
because the signaling used by the xDSL modem is of a different frequency from
that used by the phone. The subscriber’s PC is connected to the modem through
an Ethernet or ATM interface. For connection through the ATM interface, CIP is
commonly used.
The xDSL technology is positioned as a competitor to the cable modem network
because both of these are competing for the same market - home Internet users.
Although mainly used for connecting home users, there are already some
companies experimenting with using xDSL for connections to the head office.
The deployment of xDSL technology was not a smooth one in the beginning due
to its severe limitations on distance. Early subscribers had to be living near the
telephone exchanges. With improvements in the technology and the deployment
of other equipment, the distance problem has slowly been resolved.
Cable Modem versus xDSL
Both the cable modem and xDSL technologies provide a "fat pipe" to subscriber
homes. While the intent is to provide fast Internet access to the subscribers,
many service providers have begun testing new technologies such as Video On
Demand and VPN services.
There are some differences between the cable modem and the xDSL technology
and they can be summarized as follows:
Table 6. Comparing High-Speed Internet Access Technologies

                        Cable Modem                     xDSL
Topology                Tree                            Point-to-point
Infrastructure          Cable TV                        Phone
Connectivity at PC      Ethernet                        Ethernet/ATM
Bandwidth               Users share a downstream        Dedicated connection to the
                        channel (e.g., 30 Mbps)         MUX, usually at 1-3 Mbps
Connection              Continuous (due to flat-rate    May not be continuous (due to
                        charging)                       duration-based charging)
Availability            Only to houses with CATV        To houses with phone lines
                        wiring
Widespread use          Limited                         Very limited
Potential for           Not really; not all business    A viable alternative
business use            addresses have CATV wiring
Charge scheme           Usually flat rate               Flat/duration based
Network managers planning to consider these technologies have to think about
the following:
• Cost
Cable companies usually charge a flat rate for cable modem services. That
means the modem can be left on all the time and communication takes place
as and when required. Phone companies usually charge for xDSL service on a
duration basis, although there may be exceptions. Network managers have to
evaluate the need for constant connections versus the cost so as to make an
appropriate choice.
• Security
All the subscribers to both cable modem and xDSL networks are in a common
network. That means the network manager will have to design a security
framework so that legitimate company employees can get access to the server
while keeping intruders out of the company resource.
• Reliability
Reliability is a concern here, especially with the cable modem network.
Because it is subject to interference, it may not meet the requirements for a
reliable connection.
2.1.6 Wireless IP
Mobility has always been the key to success for many companies. Without doubt,
mobile communication will be a key component of a company’s network
infrastructure in the next few years. Much research and development has been
done on wireless communication, and in fact, wireless communication has been
around for quite some time. With the popularity of the Internet, many
developments have focused on delivering IP across a mobile network.
For many years, the main problems with wireless communication have been
standardization and speed. But things are changing with the approval of
the IEEE 802.11 standard for wireless networks. It specifies a standard for
transmitting data over a wireless network at up to 2 Mbps or even at a higher rate
in the future. IEEE 802.11 uses the 2.4 GHz portion of the radio frequency. Some
research groups have even begun experimenting with a higher transmission rate
at a different frequency.
With the adoption of the IEEE 802.11 standard and vendors producing proven
products, you may have to give a wireless network serious thought. Here are
some reasons why:
Cost saving - since wireless uses radio frequency for transmission, there is no
need to invest in permanent wiring.
Mobility - since users are no longer tied to the physical wiring, they can have
flexibility in terms of their movement. They can still get connected to the
network as long as they are within a certain range of the transmitting station.
Ad hoc network - there may be times when an ad hoc network is required, for
example, an expedition in the field. Deploying wireless technology makes sense
in this environment without incurring the cost of fixed wiring.
Competitiveness - having a mobile work force is important to some businesses
but at this time, most mobile workers still rely on phone lines for
communication. Using wireless technology is like having the last shackle
removed from the mobile workers. It makes them truly independent, but at the
same time, access to data is never an issue. One good example of such a
worker is an insurance agent. With wireless technology, an agent can provide
service to clients anywhere while still having access to vital product
information, regardless of the availability of LAN points or phone lines.
Extreme environment - in a certain extreme environment, for example,
command and control center during a war, wireless technology may be the
only viable technology.
Wireless IP is a relatively new field to many network managers. It is important for
network managers to begin exploring it as it is set to become more popular as
there is an increase in mobile workers and the introduction of field-proven
products.
Cellular Digital Packet Data (CDPD)
Cellular digital packet data (CDPD) is a way of transmitting an IP packet over a
cellular phone network. With the increase in popularity of the personal digital
assistant (PDA), many vendors are developing products as an add-on to the PDA
to enable users to connect to a mobile network. Since the connection is still slow,
at 19.2 kbps, it is mainly used for e-mail exchange and text-based information
dissemination. CDPD products are usually modems that attach to the PDA and
provide basic TCP/IP services over SLIP or PPP.
The advantage of CDPD is of course mobility. No longer is a user tied to the
physical connection of a LAN. Information is readily available, and users need not
even look for a phone line anymore. With companies putting more workers on the
road, it is an important area that network managers should start looking into.
As a new technology, besides the maturity of standards and products, there are
several concerns that network managers should look into. CDPD is capable
of sending data at 19.2 kbps. Taking into account the header overhead added
for reliable transmission, the actual data transfer rate is closer to 9.6 kbps. With a
transmission rate like this, it is only the important text data that is transmitted.
Graphics or multimedia applications are almost out of the question. Also, one of
the most important aspects of mobile networks is, of course, security. Some
areas that need special attention include:
• Data security
• User authentication
• Impersonation
Also, deploying CDPD technology in a network involves subscribing to the
service from a service provider. This translates to extra cost, which may not be
cheap for a company with several thousand employees. Last but not least, mobile
communication is subject to interference and failures, such as poor transmission
power due to a low battery or a long distance. Error recovery becomes very
important in situations like these, and should be both at the network layer as well
as the application layer.
2.2 The Connecting Devices
A network can be as simple as two users sharing information through a diskette
or as complex as the Internet that we have today. The Internet is made up of
thousands of networks interconnected through devices called hubs, bridges,
routers and switches. These devices are the building blocks of a network and
each of them performs a specific task to deliver the information that is flowing in
the network. Some points to consider when deciding which device is the most
appropriate one to implement are:
Complexity of the requirement
If the requirement is just to extend the network length to accommodate more
users, then a bridge will do the job.
Performance requirement
With the advent of multimedia applications, more bandwidth is required to be
made available to users. A switch, in this case, is a better choice than a hub
for building a network.
Specific business requirement
Sometimes, a specific business requirement dictates a more granular control
of who can access what information. In this type of situation, a router may be
required to perform sophisticated control of information flow.
Availability of expertise
Some devices require very little expertise to operate. A bridge is a simpler
device to operate than a router.
Ultimately, cost is an important decision criterion. When several devices can
do the job, the least expensive one is usually selected.
The connecting devices function at different layers of the OSI model, and it is
important to know this so that a choice can be made in using them.
2.2.1 Hub
A hub is a connecting device that all end workstations are physically connected
to, so that they are grouped within a common domain called a network segment.
A hub functions at the physical layer of the OSI model; it merely regenerates the
electrical signal that is produced by a sending workstation, and is also known as
a repeater. It is a shared device, which means if all users are connected to a 10
Mbps Ethernet hub, then all the users share the same bandwidth of 10 Mbps. As
more users are plugged into the same hub, the effective average bandwidth that
each user has decreases. The number of hubs that you can use is also
determined by the chosen technology.
Ethernet, for instance, has specific limitations in the use of hubs in terms of
placement, distance and numbers. It is important to know the limitations so that
the network can work within specifications and not cause problems.
Figure 19. Hub Functions at the Physical Layer of the OSI Model
Most, if not all, of the hubs available in the market today are plug and play. This
means very little configuration is required, and everything probably works right
after it is unpacked from the box. With the increasing numbers of small offices
and e-commerce, Ethernet hubs have become a consumer product. With these
hubs selling at a very low price and all performing a common function, the one
important buying decision is the price per port.
2.2.2 Bridge
A bridge is a connecting device that functions at the data link layer of the OSI
model. The primary task of a bridge is to interconnect two network segments so
that information can be exchanged between the two segments.
Figure 20. Bridge Functions at the Data Link Layer of the OSI Model
A bridge basically stores a packet that comes into one port, and when required to,
forwards it out through another port. Thus, it is a store-and-forward device. When
a bridge forwards information, it only inspects the data link layer information
within a packet. As such, a bridge is generally more efficient than a router, which
is a layer-3 device. The reasons for using a bridge can be any of the following:
To accommodate more users on a network
Networks such as token-ring allow only 254 hosts to be in a single network
segment, and any additional hosts need to be in another network segment.
To improve the performance of a network
A bridge can be used to separate a network into two segments so that
interference, such as collisions, can be contained within a certain group of
users, allowing the rest to continue to communicate with each other.
To extend the length of a network
Technologies such as Ethernet specify certain maximum distances for a LAN.
A bridge is a convenient tool to extend the distance so that more workstations
can be connected.
To improve security
A bridge can implement what is called MAC filtering, that is, selectively
allowing frames from certain workstations to pass through it. In this manner,
network managers can control access to certain information or hosts.
To connect dissimilar networks
A bridge can also be used to connect two dissimilar networks such as one
Ethernet and one token-ring segment.
Because there are a variety of reasons for using a bridge, bridges are classified
into various categories for the functions they perform:
Transparent bridge
A transparent bridge is one that forwards traffic between two adjacent LANs
while remaining unknown to the end stations, hence the name transparent. A
transparent bridge builds a table of the MAC addresses of the workstations it
learns about and uses this information to decide whether to forward a packet. When
the bridge receives a packet, it checks its table to see the packet’s destination.
If the destination is on the same LAN segment as where the packet comes
from, the packet is not forwarded. If the destination is different from where the
packet comes from, the packet is forwarded. If the destination is not in the
table, the packet is forwarded to all interfaces except the one that the packet
comes from. Transparent bridges are used mainly in Ethernet LANs.
Source route bridge
A source route bridge is used in token-ring networks whereby the sending
workstation decides on the path to get to the destination. Before sending
information to a destination, a workstation has to decide what the path should
be. The workstation does this by sending out what is known as an explorer
frame, and builds its forwarding path based on the information received in the
responses.
Source route transparent (SRT) bridge
A source route transparent (SRT) bridge is one that performs source routing
when source routing frames with routing information are received and
performs transparent bridging when frames are received without routing
information. The SRT bridge forwards transparent bridging frames without any
conversions to the outgoing interface, while source routing frames are
restricted to the source routing bridging domain. Thus, transparent frames are
able to reach the SRT and transparent bridged LAN, while the source routed
frames are limited only to the SRT and source route bridged LAN.
Source routing - Transparent bridge (SR-TB)
In the SRT model, source routing is only available in the adjacent token-ring
LANs and not in the transparent bridge domain. A source routing-transparent
bridge (SR-TB) overcomes this limitation and allows a token-ring workstation
to establish a connection across multiple source route bridges to a workstation
in the transparent bridging domain.
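The learn-and-forward decision of the transparent bridge described above can be sketched as follows; the port numbers and MAC addresses are illustrative only.

```python
# Sketch of transparent bridging: remember which port each source MAC
# was seen on, then forward, filter, or flood based on the destination MAC.

class TransparentBridge:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                      # MAC address -> port learned from

    def receive(self, frame_src, frame_dst, in_port):
        """Return the list of ports the frame should be sent out on."""
        self.mac_table[frame_src] = in_port      # learn the source address
        out = self.mac_table.get(frame_dst)
        if out is None:                          # unknown destination: flood
            return [p for p in self.ports if p != in_port]
        if out == in_port:                       # same segment: filter (drop)
            return []
        return [out]                             # known destination: forward

bridge = TransparentBridge(ports=[1, 2, 3])
print(bridge.receive("A", "B", in_port=1))   # B unknown, flooded to other ports
bridge.receive("B", "A", in_port=2)          # bridge learns that B is on port 2
print(bridge.receive("A", "B", in_port=1))   # now forwarded to port 2 only
```

The same table is what makes MAC filtering possible: a real bridge can consult an access list before forwarding, which is the security use mentioned earlier.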
Another way of classifying bridges is to divide them into local and remote bridges.
While a local bridge connects two network segments within the same building,
remote bridges work in pairs and connect distant network segments together.
A bridge is a good tool to use because it is simple and requires very little
configuration effort. With its simplicity, it is very suitable to be used in an
environment where no networking specialist is available on site. Because it only
inspects the data link layer information, a bridge is truly a multiprotocol
connecting device.
2.2.3 Router
As mentioned earlier, a router functions at layer 3 of the OSI model, the network
layer. A router inspects the information in a packet pertaining to the network layer
and forwards the packet based on certain rules. Since it needs to inspect more
information than just the data link layer information in a packet, a router generally
needs more processing power than a bridge to forward traffic. Although they differ
in the way they inspect the information in a packet, both routers and bridges attain
the same goal: that of forwarding information to a designated destination.
Figure 21. Router Functions at the Network Layer of the OSI Model
A router is an important piece of equipment in an IP network as it is the
connecting device for different groups of networks called IP subnets. All hosts in
an IP network have a unique identifier called the IP address. The IP address is
made up of two parts called the network number and the host number. Hosts
assigned with different network numbers are said to be in different subnets and
have to be connected through an intermediate device, the router, before they can
communicate. The router, in this case, is called the default gateway for the hosts.
All information exchanged between two hosts in different subnets has to go
through the router.
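The forwarding decision described above can be sketched in a few lines: a host compares the destination against its own subnet (network number plus mask) and either delivers directly or hands the packet to its default gateway. The addresses and prefix below are illustrative, not taken from the text.

```python
# Sketch of the forwarding decision a host makes: destinations inside the
# local subnet are reached directly; anything else goes to the default gateway.
import ipaddress

def next_hop(src_ip: str, prefix: str, dst_ip: str, gateway: str) -> str:
    """Return where the host sends the packet first."""
    subnet = ipaddress.ip_network(f"{src_ip}/{prefix}", strict=False)
    if ipaddress.ip_address(dst_ip) in subnet:
        return dst_ip          # same subnet: deliver directly on the LAN
    return gateway             # different subnet: hand off to the router

# Host 192.168.1.10/24 with default gateway 192.168.1.1:
print(next_hop("192.168.1.10", "24", "192.168.1.99", "192.168.1.1"))  # 192.168.1.99
print(next_hop("192.168.1.10", "24", "10.0.0.5", "192.168.1.1"))      # 192.168.1.1
```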
The reasons for using a router are the same as those mentioned for using a
bridge. Since a router inspects more information within a packet than a bridge, it
has more powerful features in terms of making decisions based on protocol and
network information such as the IP address. With the introduction of more
powerful CPUs and more memory, a router can even inspect information within a
packet at a higher layer than the network layer. As such, new generation routers
can perform tasks such as blocking certain users from accessing such functions
as FTP or TELNET. When a router performs that function, it is said to be filtering.
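The kind of application-level blocking described above can be sketched as a simple rule check on source address and destination TCP port. The addresses, rule structure, and names here are hypothetical, for illustration only.

```python
# Hypothetical sketch of blocking certain users from FTP/TELNET:
# a rule that denies packets from listed source addresses to well-known ports.
BLOCKED_PORTS = {21: "FTP", 23: "TELNET"}   # well-known TCP ports
BLOCKED_USERS = {"10.1.1.25"}               # illustrative source addresses

def verdict(src_ip: str, dst_port: int) -> str:
    if src_ip in BLOCKED_USERS and dst_port in BLOCKED_PORTS:
        return "deny"    # this user is barred from FTP/TELNET
    return "permit"      # everything else is forwarded normally

print(verdict("10.1.1.25", 23))  # deny
print(verdict("10.1.1.99", 23))  # permit
```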
A router is also often used to connect remote offices to a central office. In this
scenario, the router located in the remote office usually comes with a port that
connects to the local office LAN, and a port that connects to the wide area
service, such as an ISDN connection. At the central office, there is a higher
capacity router that supports more connection ports for remote office
connections.
Table 7. Comparing Bridges and Routers

                                          Bridge                   Router
  OSI layer                               Data Link, Physical      Network, Physical
  Suppress broadcast                      No                       Yes
  Fragmentation of packets                No                       Yes
  Cost (relative to each other)           Cheap                    Expensive
  Need trained personnel                  May not                  Yes
  Filtering level                         MAC                      MAC, network protocol,
                                                                   TCP port, application level
  Congestion feedback                     No                       Yes
  Used to connect multiple remote sites   No (only one)            Yes
  Redundancy                              Through spanning tree    Through more sophisticated
                                          protocol                 protocols
  Link failure recovery                   Slow                     Fast

Because a router is such a powerful device, it is difficult to configure and usually
requires trained personnel to do the job. It is usually located within the data
center and costs more than a bridge. Although the reasons for using a router can
be the same as those mentioned for a bridge, some of the reasons for choosing a
router over a bridge are:
• Routers can contain broadcast traffic within a certain domain so that not all
users are affected.
• Routers can do filtering when security at a network or application level is
required.
• Routers can provide sophisticated TCP/IP services such as data link switching.
• Routers can provide congestion feedback at the network layer.
• Routers have much more sophisticated redundancy features.
2.2.4 Switch
A switch functions at the same OSI layer as the bridge, the data link layer. In fact,
a switch can be considered a multi-port bridge. While a bridge forwards traffic
between two network segments, the switch has many ports, and forwards traffic
between those ports.
One great difference between a bridge and a switch is that a bridge does its job
through software functions, while a switch does its job through hardware
implementation. Thus, a switch is more efficient than a bridge, and usually costs
more. While the older generation switches can work only in store-and-forward
mode, some new switches, such as the IBM 8275-217, offer a new feature called
cut-through mode, whereby a packet is forwarded even before the switch has
received the entire packet. This greatly enhances the performance of the switch.
Later, a new method called adaptive cut-through mode was introduced, whereby
the switch operates in cut-through mode and falls back to store-and-forward mode
if it discovers that packets are forwarded with CRC errors. A switch that has a
switching capacity equal to the total bandwidth required by all of its ports is
considered to be nonblocking, which is an important factor in choosing a switch.
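The performance difference between the two forwarding modes comes down to serialization delay: a store-and-forward switch waits for the whole frame before forwarding, while a cut-through switch starts forwarding after reading the destination address in the header. A rough per-hop sketch, with assumed frame and header sizes:

```python
# Rough per-hop forwarding latency comparison (assumed numbers): a
# store-and-forward switch must receive the whole frame before sending it on,
# while a cut-through switch starts forwarding after reading the header.
LINE_RATE = 10_000_000        # 10 Mbps Ethernet, bits per second
FRAME = 1518                  # maximum Ethernet frame, bytes
HEADER_READ = 14              # bytes examined before cut-through forwarding

store_and_forward_us = FRAME * 8 / LINE_RATE * 1e6
cut_through_us = HEADER_READ * 8 / LINE_RATE * 1e6

print(f"store-and-forward: {store_and_forward_us:.0f} us")  # ~1214 us
print(f"cut-through:       {cut_through_us:.0f} us")        # ~11 us
```

The gap explains why cut-through mode "greatly enhances the performance of the switch" for large frames, and also why adaptive cut-through falls back to store-and-forward: only a fully received frame can have its CRC checked before forwarding.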
Switches are introduced to partition a network segment into smaller segments, so
that broadcast traffic can be reduced and more hosts can communicate at the
same time. This is called microsegmentation, and it increases the overall network
bandwidth without doing major upgrade to the infrastructure.
Figure 22. Microsegmentation
Virtual LAN (VLAN)
With hardware prices falling and users demanding more bandwidth, more
segmentation is required and the network segments at the switch ports get
smaller until one user is left on a single network segment. More functions are also
added, one of which is called Virtual LAN (VLAN). VLAN is a logical grouping of
endstations that share a common characteristic. At first, endstations were
grouped by ports on the switch, that is, endstations connected to a certain port
belonged to the same group. This is called port-based VLAN. Port-based VLAN is
static because the network manager has to decide the grouping so that the switch
can be configured before putting it to use. Later, enhancements were made so
that switches can group endstations not by which ports they connect to, but by
which network protocol they run, such as IP or IPX. This is called a protocol
VLAN or PVLAN. More recently, more powerful features were introduced whereby
the grouping of users is done on the basis of the IP network address. The
membership of an endstation is not decided until it has obtained its IP address
dynamically from a DHCP server.
It is worth noting that when there are multiple VLANs created within a switch,
inter-VLAN communication can be achieved only through a bridge, which is
usually made available within the switch itself, or through an external router. After
all, switches at this stage are still layer-2 devices.
As hardware gets more powerful in terms of speed and memory, more functions
have been added to switches, and a new generation of switches begins to appear.
Some switches begin to offer functions that were originally found only in routers.
This makes inter-VLAN communication possible without an external router for
protocols such as TCP/IP. This is what is called layer-3 switching, as opposed
to the original, which was termed layer-2 switching.
Advantages of VLAN
The introduction of the concept of VLANs created an impact on network
design, especially with regard to physical connectivity. Previously, users who were
connected to the same hub belonged to the same network. With the introduction
of switches and VLANs, users are now grouped logically instead of by their
physical connectivity. Companies now operate in a dynamic environment:
departmental structures change, and employee movements, relocations and
mobility can only be supported by a network that provides flexibility in connectivity.
VLAN does exactly that. It gives the network the required flexibility to support the
logical grouping independent of the physical wiring.
Because the forwarding of packets based on layer 2 information (what a bridge
does) and layer 3 information (what a router does) is done at hardware speed, a
switch is more powerful than a bridge or a router in terms of forwarding capacity.
Because it offers such a rich functionality at wire speed, more and more switches
are being installed in corporate networks, and it is one of the fastest growing
technologies in connectivity. Network managers have begun to realize that with
the increase in the bandwidth made available to users, switching might be the
way to solve network bottleneck problems, as well as to provide a new
infrastructure to support a new generation of applications. Vendors have begun to
introduce new ways of building a network based on these powerful switches. One
of them, Switched Virtual Networking (SVN), is IBM's way of exploiting the
enormous potential of a switching network in support of business needs.
LAN Switches
LAN switches, as the name implies, are found in a LAN environment where
users connect to the network. They come in different sizes, based mainly
on the number of ports that they support. Stackable LAN switches are used for
workgroup and low-density connections, and they usually do only layer-2
switching. Because of their low port density, they can be connected to each other
(hence stackable) through their switch ports to form a larger switching pool. Many
other features have also been added so that they can support the ever-increasing
needs of users. Among the most wanted features are the following:
Link aggregation
Link aggregation is the ability to interconnect two switches through multiple
links so as to achieve higher bandwidth and fault tolerance in the connection.
For example, two 10 Mbps Ethernet switches may be connected to each other
using two ports on each switch so as to achieve a dual link configuration that
provides redundancy, in case one link fails, as well as a combined bandwidth
of 20 Mbps between them.
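One common way to use such an aggregated pair of links (a generic approach, not a specific product's algorithm) is to hash each traffic flow onto one physical link, so that frames of a flow stay in order while load spreads across the links, and flows move to the surviving link when one fails:

```python
# Illustrative sketch of link aggregation: each flow is hashed onto one of
# the live links deterministically; if a link fails, its flows are rehashed
# onto the remaining links. The hashing scheme is an assumption.
import zlib

def pick_link(src_mac: str, dst_mac: str, links_up: list) -> int:
    """Choose one of the live links for this flow, deterministically."""
    flow = f"{src_mac}-{dst_mac}".encode()
    return links_up[zlib.crc32(flow) % len(links_up)]

links = [0, 1]                                   # two 10 Mbps inter-switch links
print(pick_link("aa:01", "bb:02", links))        # always the same link per flow
print(pick_link("aa:01", "bb:02", [1]))          # after link 0 fails: 1
```

Hashing per flow rather than per frame is what keeps frames of one conversation in order while still using the combined 20 Mbps.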
VLAN tagging/IEEE 802.1Q
VLAN tagging is the ability to share membership information of multiple
VLANs across a common link between two switches. This ability enables
endstations that are connected to two different switches but belong to the
same VLAN to communicate with each other as if they were connected to the
same switch. IEEE 802.1Q is a standard for VLAN tagging and many switches
are offering this feature.
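The membership information shared across that common link is carried in a 4-byte tag inserted into each frame: a 0x8100 type field followed by a 3-bit priority, a drop-eligible bit, and the 12-bit VLAN ID. A minimal sketch of building that tag:

```python
# Minimal sketch of the 4-byte IEEE 802.1Q tag that carries VLAN membership
# across an inter-switch link: TPID 0x8100, then priority (PCP), the
# drop-eligible bit (DEI) and the 12-bit VLAN ID packed into the TCI field.
import struct

def dot1q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    assert 0 <= vlan_id < 4096 and 0 <= priority < 8
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)   # network byte order

tag = dot1q_tag(vlan_id=100, priority=5)
print(tag.hex())  # 8100a064
```

The 12-bit VLAN ID field is why a tagged link can distinguish up to 4096 VLANs.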
Multicast support/IGMP snooping
Multicast support, better known as IGMP snooping, allows the switch to
forward multicast traffic only to the ports that require it. This greatly
reduces the bandwidth requirement and improves the performance of the
switch itself.
Campus Switches
As LAN switches get more powerful in terms of features, their port density
increases as well. This gives rise to bigger LAN switches, called campus
switches, that are usually deployed in the data center. Campus switches are
usually layer-3 switches, with more powerful hardware than the LAN switches,
and do routing at the network layer as well. Because of their high port density,
they usually have higher switching capacity and provide connections for LAN
switches. Campus switches are used to form the backbone for large networks
and usually provide feeds to even higher capacity backbones, such as an ATM
network.
ATM Switches
Because ATM technology can be deployed in a LAN or WAN environment,
many different types of ATM switches are available:
ATM LAN Switch
The ATM LAN switch is usually a desktop switch, with UTP ports for the
connection of 25 Mbps ATM clients. It usually comes with a higher
bandwidth connection port, called an uplink, for connection to higher end
ATM switches that usually run at 155 Mbps.
ATM Campus Switch
The ATM campus switch is usually deployed in the data center and is for
concentrating ATM uplinks from the smaller ATM switches or LAN switches
with ATM uplink options. The ATM campus switch has a high concentration of
ports that run at 155 Mbps and maybe a few at 622 Mbps.
ATM WAN Switch
The ATM WAN switch, also called a broadband switch, is usually deployed in
large corporations or telcos for carrying data on wide area links, and
supports connections ranging from very low speed to high speed. It can connect
to services such as frame relay and ISDN, or multiplex data across a few links
by using a technology called Inverse Multiplexing over ATM.
As switches develop over time, it seems apparent that switching is the way to
build a network because it offers the following advantages:
• With its hardware implementation of forwarding traffic, a switch is faster
than a bridge or a router.
• Due to the introduction of VLANs, the grouping of workstations is no
longer limited by their physical locations. Instead, workstations are grouped
logically, regardless of where they are located.
• It offers more bandwidth
As opposed to a hub that provides shared bandwidth to the endstations, a
switch provides dedicated bandwidth to the endstations. More bandwidth is
introduced to the network without a redesign. With dedicated bandwidth, a
greater variety of applications, such as multimedia, can be introduced.
• It is affordable
The prices for LAN switches have been dropping with advances in hardware
design and manufacturing. In the past, it was normal to pay about $500 per
port for a LAN switch. Now, vendors are offering switches below $100 per port.
With vendors offering a wide array of LAN switches at different prices, it is difficult
for a network manager to select an appropriate switch. However, there are a few
issues that you should consider when buying a LAN switch:
• Standards
It is important to select a switch that supports open standards. An open
standards-based product means there is less chance of encountering
problems in connecting to another vendor's product, if you need to.
• Support for Quality of Service (QoS)
The switching capacity, the traffic control mechanism, the size of the buffer
pool and the support for multicast traffic are all important criteria to ensure
that the switch can support the demand for the QoS network.
• Features
Certain standard features have to be included because they are important in
building a switched network. These include support for the 802.1D spanning
tree protocol, SNMP, and remote loading of the configuration.
• Redundancy
This is especially important for the backbone switches. Because backbone
switches concentrate the entire company’s information flow, a downed
backbone switch means the company is paralyzed until the switch is back up
again. Hardware redundancy, which includes duplicate hardware as well as
hot-swappability, helps to reduce the risk and should be a deciding factor in
choosing a backbone switch.
• Management capability
It is important to have management software that makes configuration and
changes easy. Web-based management is a good way of managing
devices because all you need is a browser. But Web-based management
usually accomplishes only basic management tasks, such as monitoring, and
does not provide sophisticated features. You may need specialized
management software to manage your switches.
2.3 ATM Versus Switched High-Speed LAN
One of the most debated topics in networking recently is the role of ATM in an
enterprise network.
ATM was initially promoted as the technology of choice from desktop connections
to the backbone and the WAN. It was supposed to be the technology that would
replace others and unify all connecting protocols. The fact is, this is not
happening, and will not happen for quite some time.
ATM is a good technology but not everybody needs it. Its deployment has to be
very selective and so far, it has proven to be an appropriate choice for some of
the following situations:
• When there is a need for image processing, for example, in a hospital network
where X-ray records are stored digitally and need to be shared electronically
• In a graphics-intensive environment, such as a CAD/CAM network, for use in
design and manufacturing companies
• When there is a need to transport high quality video across the network, such
as advertising companies involved in video production
• When there is a need to consolidate data, voice and video on a single network
to save cost on WAN connections
The ATM technology also has its weak points. Because it transports cells of a
fixed size of 53 bytes, of which 5 bytes are header, it has considerably high
overhead. With more and more PCs pre-installed with a LAN port, adopting ATM
technology at the desktop means having to open them up and install an ATM
NIC. You also need an additional driver for the ATM NIC.
managers who are not familiar with the technology, the LES, BUS, LECS, VCs,
VCCs and other acronyms are just overwhelming.
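The overhead point above can be made concrete with a back-of-the-envelope calculation: every 53-byte cell carries 5 bytes of header, and AAL5 (a common adaptation layer) adds an 8-byte trailer and pads the last cell of each packet, so small packets waste proportionally more. The packet sizes are illustrative.

```python
# Back-of-the-envelope view of the ATM "cell tax": 5 of every 53 bytes are
# header, and AAL5 padding adds further overhead on top of that.
import math

CELL, HDR, PAYLOAD = 53, 5, 48
AAL5_TRAILER = 8                       # bytes added to every AAL5 packet

def atm_overhead(packet_bytes: int) -> float:
    """Fraction of line bytes that are not user data."""
    cells = math.ceil((packet_bytes + AAL5_TRAILER) / PAYLOAD)
    return 1 - packet_bytes / (cells * CELL)

print(f"{HDR / CELL:.1%}")             # bare header tax: 9.4%
print(f"{atm_overhead(1500):.1%}")     # a 1500-byte packet
print(f"{atm_overhead(64):.1%}")       # a small packet wastes far more
```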
While some vendors are pushing very hard for ATM's deployment, many network
managers are finding that their good old LANs, though crawling under heavy load,
are still relevant. The reasons for feeling so are none other than the legacy LANs'
low cost of ownership, familiarity with the technology and ease of implementation.

Beware of Those Figures

It is important to find out the truth about what vendors claim in the
specifications of their products. It is common to see vendors claiming their
switches have an astronomical 560 Gbps switching throughput. Vendors
seem to have their own mathematics when making statements like this and
this is usually what happens:
Let's say they have a chassis-based backbone switch that can support one
master module with 3 Gbps switching capacity, and 10 media modules each
with 3 Gbps switching capacity. They will claim that their backbone switch is
(3+10x3), which is 33, multiplied by 2 because it supports duplex operation,
and voila, you have a 66 Gbps switch. What the vendor did not tell you is that
all traffic on all media modules has to pass through the master module, which
acts like a supervisor. In fact, the switch can at most provide 6 Gbps
switching capacity, if you agree that duplex mode does provide more capacity.
While some may still argue on the subject of which is better, others have found a
perfect solution to it: combining both technologies. Many have found that ATM as
a backbone, combined with switched LANs at the edge, provides a solution that
has the benefits of both technologies.
As a technology for backbones, ATM provides features such as PNNI, fast
reroute, VLAN capabilities and high throughput to act as a backbone that is both
fast and resilient to failure. The switched LAN protects the initial investment in
the technologies, continues to keep connections to the desktop affordable, and,
due to its sheer volume, makes deployment easy.
It is important to know that both ATM and switched LANs solve the same problem:
the shortage of bandwidth on the network. Some have implemented networks
based entirely on ATM and have benefited from it. Others have stayed away from
it because it is too difficult. It is important to know how to differentiate the two
technologies, and appreciate their implications for the overall design.
2.4 Factors That Affect a Network Design
Designing a network is more than merely planning to use the latest gadget in the
market. A good network design takes into consideration many factors:
2.4.1 Size Matters
At the end of the day, size does matter. Designing a LAN for a small office with a
few users is different from building one for a large company with two thousand
users. In building a small LAN, a flat design is usually used, where all connecting
devices may be connected to each other. For a large company, a hierarchical
approach should be used.
2.4.2 Geographies
The geographical locations of the sites that need to be connected are important
in a network design. The decision making process for selecting the right
technology and equipment for remote connections, especially those of
cross-country nature, is different from that for a LAN. Tariffs, local expertise,
and the quality of service from service providers are some of the important criteria.
2.4.3 Politics
Politics in the office ultimately decides how a network should be partitioned.
Department A may not want to share data with department B, while department C
allows only department D to access its data. At the network level, requirements
such as these are usually met through filtering at the router so as to direct traffic
flow in the correct manner. Business and security needs determine how
information flows in a network, and the right tool has to be chosen to carry this
out.
2.4.4 Types of Application
The types of application deployed determine the bandwidth required. While a
text-based transaction may require a few kbps of bandwidth, a multimedia help
file with video explanati