Criteria in Designing IP Networks
The topics in this section are important both for understanding IP addressing and routing and for the Cisco certification. This section explains the need for the hierarchical design proposed by Cisco, the function of each layer, and how access lists are used in conjunction with this design to protect the network from excessive and redundant traffic.
The hierarchical design integrates well with VLSM design because summarization helps to ensure a stable and well-managed network. This section also includes a review of access lists and their use, because they are integral to IP network design. You will examine alternatives to access lists and identify other key points to remember when designing an IP network.
The Cisco Hierarchical Design
It is crucial to build a network that can grow or scale with the needs of the user. This avoids a network that reels from crisis to crisis. Cisco provides a hierarchical design that simpliﬁes network management and also allows the network to grow. This growth may be physical growth or capacity growth.
To achieve a stable and responsive network—and to keep local trafﬁc local, preventing network congestion—Cisco suggests a network design structure that allows for growth. The key to the design is making it hierarchical, with a division of functionality between the layers of the hierarchy. Trafﬁc that begins on a lower layer of the hierarchy is only allowed to be forwarded through to the upper levels if it meets clearly deﬁned criteria. A ﬁltering operation restricts unnecessary trafﬁc from traversing the entire network. Thus, the network is more adaptable, scalable, and reliable.
Clear guidelines and rules govern how to design networks according to these principles. The following section explains how the hierarchical network design proposed by Cisco reduces congestion.
If the network is designed hierarchically, with each layer acting as a ﬁlter for the layer beneath it, the network can grow effectively. In this way, local trafﬁc is kept local (within the same layer), and only data and information about global resources needs to travel outside the immediate domain or layer.
Understanding that the layers are filtering functions raises the question of how many layers are required in your network. The answer depends on the type of applications and the network architecture, in addition to other criteria.
The Cisco design methodology is based on simplicity and ﬁltering. Cisco suggests that the largest networks currently require no more than three layers of ﬁltering.
Because a hierarchical layer in the network topology is a control point for traffic flow, it is also a routing layer. Thus, a layer of hierarchy is created by the placement of a router or a Layer 3 switching device.
The number of hierarchical layers that you need to implement in your network reﬂects the amount of trafﬁc control required. To determine how many layers are required, you must identify the function that each layer will have within your network.
The Functions of Each Layer
Each hierarchical layer in the network design is responsible for preventing unnecessary traffic from being forwarded to the higher layers, only to be discarded by unrelated or uninterested hosts. The goal is to allow only relevant traffic to traverse the network and thereby reduce the load on the network. If this goal is met, the network can scale more effectively. The three layers of the hierarchy are as follows:
■ The access layer
■ The distribution layer
■ The core layer
The next sections describe each layer in more detail.
The Access Layer
As its name suggests, the access layer is where the end devices connect to the network—where they gain access to the company network. The Layer 3 devices (such as routers) that guard the entry and exit to this layer are responsible for ensuring that local server traffic does not leak out to the wider network. Quality of service (QoS) classification is performed here, along with other technologies that define the traffic that is to traverse the network. Service Advertisement Protocol (SAP) filters for NetWare and GetZoneList filters for AppleTalk are also implemented here, in keeping with the design consideration of client/server connectivity.
The Distribution Layer
The distribution layer provides connectivity between several parts of the access layer. The distribution layer is responsible for determining access across the campus backbone by ﬁltering out unnecessary resource updates and by selectively granting speciﬁc access to users and departments. Access lists are used not just as trafﬁc ﬁlters, but as the ﬁrst level of rudimentary security. Access to the Internet is implemented here, requiring a more sophisticated security or ﬁrewall system.
The Core Layer
The responsibility of the core layer is to connect the entire enterprise by interconnecting distribution layer devices. At the pinnacle of the network, reliability is of the utmost importance: a break in the network at this level would leave large sections of the organization unable to communicate. To ensure continuous connectivity, the core layer should be designed to be highly redundant, and as much latency as possible should be removed. Because latency is created when decisions are required, decisions relating to complex routing issues, such as filters, should not be implemented at this layer. They belong at the access or distribution layers, leaving the core layer with the simple duty of relaying data as fast as possible to all areas of the network. In some implementations, QoS is applied at this layer to give certain packets higher priority, preventing them from being lost during periods of high congestion.
General Design Rules for Each Layer
A clear understanding of the traffic patterns within the organization—who is connecting to whom and when—helps to ensure the appropriate placement of clients and servers, and eases the implementation of filtering at each layer. Without hierarchy, networks have less capacity to scale because traffic must traverse every path to find its destination, and manageability becomes an issue.
It is important for each layer to communicate only with the layer above or below it. Any connectivity or meshing within a layer impedes the hierarchical design.
Organizations often design their networks with duplicate paths. This is to build network resilience so that the routing algorithm can immediately use an alternative path if the primary link fails. If this is the design strategy of your company, care should be taken to ensure that the hierarchical topology is still honored.
Figure 3-1 shows an illustration of the appropriate design and trafﬁc ﬂow.
You need to have an understanding of the current network, the placement of the servers, and trafﬁc ﬂow patterns before attempting to design an improved network with the proper hierarchy.
One of the strengths of the Cisco hierarchical design is that it allows you to identify easily where to place the access lists. A quick review of access lists and how they can be used is provided in the next section.
IP Access Lists
Cisco router features enable you to control trafﬁc, primarily through access lists. They are crucial to the sophisticated programming of a Cisco router and allow for great subtlety in the control of trafﬁc.
Given that the router operates at Layer 3, the control that is offered is extensive. The router can also act at higher layers of the OSI model. This proves useful when identifying particular trafﬁc and protocol types for prioritization across slower WAN links.
You can use access lists to either restrict or police traffic entering or leaving a specified interface. They are also used to implement if-then logic on a Cisco router, which gives you the only real mechanism for programming it. The access lists used for IP in this way enable you to apply subtlety to the router’s configuration. This section reviews how to configure access lists and discusses their use in an IP network. The books CCNA Self-Study: Interconnecting Cisco Network Devices (ICND) and the CCNA ICND Exam Certification Guide (CCNA Self-Study, exam #640-811), both from Cisco Press, deal with these subjects in more depth.
Because access lists can be used so subtly in system programming, they are used in many ways. IP access lists are used mainly to manage trafﬁc. The next sections discuss the role of access lists in security and controlling terminal access.
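As a refresher, the following sketch shows the two basic IP access list types; the addresses and interface name are hypothetical, chosen only for illustration. A standard list (numbered 1–99) matches on source address only, whereas an extended list (100–199) can match on protocol, source, destination, and port:

```
! Standard access list: matches on source address only
access-list 10 permit 172.16.1.0 0.0.0.255
!
! Extended access list: permit web traffic from one subnet to one server,
! then deny everything else (an implicit deny exists in any case)
access-list 110 permit tcp 172.16.1.0 0.0.0.255 host 172.16.2.10 eq www
access-list 110 deny   ip any any
!
! Apply the extended list to traffic entering Ethernet0
interface Ethernet0
 ip access-group 110 in
```

The direction keyword (in or out) is always relative to the router itself.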
Security Using Access Lists
Cisco recommends using alternative methods rather than access lists for security. Access lists are complex to conceive and write, yet comparatively easy to spoof and break through. As of IOS Software Release 11.3, Cisco implemented more complete security features; use these features instead of access lists where possible. The Cisco Secure Integrated Software (the Cisco IOS Firewall feature set) is also now available.
Some simple security tasks are well suited to access lists, however. Although access lists do not constitute complex security, they will deter the idle user from exploring the company network.
The best way to use access lists for security is as the ﬁrst hurdle in the system, to alleviate processing on the main ﬁrewall. Whether the processing on the ﬁrewall device is better designed for dealing with the whole security burden, or whether this task should be balanced between devices, should be the topic of a capacity-planning project.
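One common form of this first hurdle is an inbound filter on the perimeter router that discards obviously spoofed packets before they ever reach the firewall. The sketch below assumes a WAN interface named Serial0 and uses the RFC 1918 private ranges as examples of source addresses that should never arrive from outside:

```
! Drop packets claiming to come from private (RFC 1918) address space
access-list 120 deny   ip 10.0.0.0 0.255.255.255 any
access-list 120 deny   ip 172.16.0.0 0.15.255.255 any
access-list 120 deny   ip 192.168.0.0 0.0.255.255 any
access-list 120 permit ip any any
!
interface Serial0
 ip access-group 120 in
```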
Controlling Terminal Access
Access lists applied to router interfaces ﬁlter trafﬁc traversing the router; they are not normally used to ﬁlter trafﬁc generated by the router itself. To control Telnet trafﬁc in which the router is the end station, an access list can be placed on the vty.
Five terminal sessions are available: vty 0 through vty 4. Because anticipating which session will be assigned to which terminal is difﬁcult, control is generally placed uniformly on all virtual terminals. Although this is the default conﬁguration, some platforms have different limitations on the number of vty interfaces that can be created.
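A minimal sketch of vty protection follows, assuming a hypothetical management subnet of 172.16.1.0/24. Note that the access-class command, rather than ip access-group, binds the list to the terminal lines:

```
! Permit Telnet to the router only from the management subnet
access-list 5 permit 172.16.1.0 0.0.0.255
!
line vty 0 4
 access-class 5 in
```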
Traffic Control Through Routing Updates
Traffic on the network must be managed. Traffic management is most easily accomplished at Layer 3 of the OSI model. You must be careful, however, because limiting traffic also limits connectivity. Therefore, careful design and documentation are required.
Routing updates convey information about the available networks. In most routing protocols, these updates are sent out periodically to ensure that every router’s perception of the network is accurate and current.
Access lists that are applied to routing protocols restrict the information sent out in the update and are called distribute lists. Distribute lists work by omitting the routing information about certain networks based on the criteria in the access list. The result is that remote routers that are unaware of these networks are not capable of delivering trafﬁc to them. Networks hidden in this way are typically research-and-development sites, test labs, secure areas, or just private networks. This is also a way to reduce overhead trafﬁc in the network.
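The following sketch hides a hypothetical lab network (192.168.99.0/24) from RIP updates sent out a WAN interface; the network numbers and interface name are illustrative only:

```
! Deny the lab network from outgoing updates; permit everything else
access-list 20 deny   192.168.99.0 0.0.0.255
access-list 20 permit any
!
router rip
 network 172.16.0.0
 distribute-list 20 out Serial0
```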
These distribute lists are also used to prevent routing loops in networks that have redistribution between multiple routing protocols.
When connecting two separate routing domains, the connection point of the domains, or the entry point to the Internet, is an area through which only limited information needs to be sent. Otherwise, routing tables become unmanageably large and consume large amounts of bandwidth.
Other Solutions to Traffic Control
Many administrators tune the update timers between routers, trading currency of information for optimization of bandwidth. All routers running the same routing protocol expect to hear these updates with the same frequency that they send out their own. If any of the parameters defining how the routing protocol works are changed, these alterations should be applied consistently throughout the network; otherwise, routes will time out and the routing tables will become unsynchronized.
CAUTION Tuning network timers of any type is an extremely advanced task and should be done only under very special circumstances and with the aid of the Cisco TAC team.
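For illustration only (and bearing the preceding caution in mind), RIP timers are adjusted with the timers basic command. The sketch below doubles the default update, invalid, holddown, and flush timers; the same values would have to be applied on every RIP router in the network:

```
router rip
 timers basic 60 360 360 480
```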
Across WAN networks, it might be advantageous to turn off routing updates completely and to deﬁne manually or statically the best path to be taken by the router. Note also that sophisticated routing protocols such as EIGRP or OSPF send out only incremental updates. Be aware, however, that these are correspondingly more complex to design and implement, although ironically, the conﬁguration is very simple.
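A sketch of this approach, with hypothetical addresses: routing updates are suppressed on the WAN interface with passive-interface, and the remote network is reached through a static route instead:

```
! Stop sending RIP updates out the WAN link (updates are still received)
router rip
 passive-interface Serial0
!
! Reach the remote subnet through a statically defined next hop
ip route 172.16.2.0 255.255.255.0 10.1.1.2
```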
Another method of reducing routing updates is to implement snapshot routing, which is available on Cisco routers and designed for use across on-demand WAN links. This allows the routing tables to be frozen and updated either at periodic intervals or when the on-demand link is brought up. For more information on this topic, refer to the Cisco web page.
To optimize the trafﬁc ﬂow throughout a network, you must carefully design and conﬁgure the IP network. In a client/server environment, control of the network overhead is even more important. The following section discusses some concerns and strategies.
Access lists are not used just to determine which packets will be forwarded to a destination. On a slow network connection where bandwidth is at a premium, access lists are used to determine the order in which trafﬁc is scheduled to leave the interface. Unfortunately, some of the packets might time out. Therefore, it is important to carefully plan the prioritization based on your understanding of the network. You need to ensure that the most sensitive trafﬁc (that is, trafﬁc most likely to time out) is handled ﬁrst.
Many types of prioritization are available. Referred to as queuing techniques, they are implemented at the interface level and are applied to the interface queue. The weighted fair queuing (WFQ) technique is turned on by default on interfaces slower than 2 Mbps and can be tuned with the fair-queue interface configuration command and its optional parameters.
WFQ is available in the later versions of the IOS and is turned on automatically by the Cisco IOS on these slower interfaces, replacing the first-in, first-out (FIFO) queuing mechanism as the default. The queuing process analyzes the traffic patterns on the link, based on the size of the packets and the nature of the traffic, to distinguish interactive traffic from file transfers. The queue then transmits traffic based on its conclusions.
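For reference, a sketch of tuning WFQ on a slow serial interface follows; the three values shown are assumed to be the usual defaults (congestive discard threshold, number of dynamic conversation queues, and number of RSVP-reservable queues):

```
interface Serial0
 fair-queue 64 256 0
```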
Queuing techniques that are manually conﬁgured with access lists are as follows:
■ Priority queuing—This method divides the outgoing interface buffer into four virtual queues, ranked by priority. Traffic is sent out of the interface according to that ranking, ensuring that sensitive traffic on a slow or congested link is processed first.
■ Custom queuing—The interface buffer is divided into many subqueues. Each queue has a threshold stating the number of bytes or the number of packets that may be sent before the next queue must be serviced. In this way, it is possible to determine the percentage of bandwidth that each type of traffic is given.
■ Class-based weighted fair queuing (CBWFQ)—This queuing method extends the standard WFQ functionality to provide support for user-defined traffic classes. For CBWFQ, you define traffic classes based on match criteria, including protocols, access control lists (ACLs)—known simply as access lists in Cisco parlance—and input interfaces. Packets satisfying the match criteria for a class constitute the traffic for that class. A queue is reserved for each class, and traffic belonging to a class is directed to that class’s queue.
■ Low-latency queuing (LLQ)—This feature brings strict priority queuing to CBWFQ. Configured by the priority command, strict priority queuing gives delay-sensitive data, such as voice, preferential treatment over other traffic. With this feature, delay-sensitive data is sent first. In the absence of data in the priority queue, other types of traffic can be sent.
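As a sketch of the last two techniques, the following CBWFQ/LLQ configuration directs a hypothetical voice traffic range into a strict-priority queue and lets everything else share the remaining bandwidth; the access list, class and policy names, and the 128-kbps figure are illustrative assumptions:

```
! Classify voice traffic (here, the common RTP UDP port range)
access-list 130 permit udp any any range 16384 32767
!
class-map match-all VOICE
 match access-group 130
!
policy-map WAN-EDGE
 class VOICE
  priority 128              ! LLQ: strict priority, up to 128 kbps
 class class-default
  fair-queue                ! remaining traffic handled by WFQ
!
interface Serial0
 service-policy output WAN-EDGE
```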