Optimizing Your Network Design

CIS460 – NETWORK ANALYSIS AND DESIGN
CHAPTER 12
Optimizing Your Network Design
Introduction
– Optimization is a critical design step for organizations that use high-bandwidth and delay-sensitive applications
– We are going to look at IP multicast techniques that minimize bandwidth utilization for multimedia applications
– Then we will cover methods for optimizing network performance to meet QoS requirements
Optimizing Bandwidth Usage
with IP Multicast Technologies
– High-bandwidth multiple-user multimedia
• distance learning, videoconferencing and
collaborative computing
– Old way - send a separate data stream to every user
– Alternative - send a single stream to a broadcast destination address
• Disadvantage - the stream goes to all devices, whether they want it or not
– Multicast - a single data stream is delivered only to stations that request it
IP Multicast Addressing
• Transmits IP data to a group of hosts that
are identified by a single Class-D IP address
• Can also be identified by a MAC-layer multicast address. This optimizes network performance because it allows NICs to ignore data streams not intended for the host
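As a quick illustration of the Class D to MAC mapping, the sketch below (Python, with arbitrary example group addresses) copies the low-order 23 bits of the group address into the reserved 01:00:5e MAC prefix; note that 32 IP groups share each MAC address.

```python
import ipaddress

def multicast_ip_to_mac(group: str) -> str:
    """Map a Class D IPv4 group address to its MAC-layer multicast address
    (prefix 01:00:5e plus the low-order 23 bits of the IP address)."""
    ip = int(ipaddress.IPv4Address(group))
    low23 = ip & 0x7FFFFF                       # keep only the low 23 bits
    mac = (0x01005E << 24) | low23              # 48-bit MAC with 01:00:5e prefix
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -8, -8))

print(multicast_ip_to_mac("224.0.1.1"))         # -> 01:00:5e:00:01:01
print(multicast_ip_to_mac("239.129.1.1"))       # -> 01:00:5e:01:01:01
```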
The Internet Group Management
Protocol
– Allows a host to join a group and inform routers of
the need to receive a particular data stream
– Host transmits a membership-report message
– Multicast router sends an IGMP query out every port
periodically
– To conserve bandwidth, a host sets a random timer before replying to an IGMP query
– IGMPv2 - recognizes when the last host has left a group
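A minimal sketch of how a host joins a group: when an application sets IP_ADD_MEMBERSHIP on a UDP socket, the IP stack sends the IGMP membership report on its behalf. The group address and port below are arbitrary examples.

```python
import socket
import struct

GROUP = "239.1.2.3"     # example Class D group address (any valid group works)
PORT = 5004

# Open a UDP socket and join the group; the kernel transmits the IGMP
# membership report when IP_ADD_MEMBERSHIP is set.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(2048)   # blocks until a multicast datagram arrives
print(f"{len(data)} bytes from {sender}")
```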
Multicast Routing Protocols
• Extend the capabilities of a standard routing protocol to include:
– learning paths to destination networks that include multicast destination addresses
Multicast Open Shortest Path
First
– Complements OSPF's capability to develop a link-state database by including link-state records for group memberships
– Router running MOSPF computes shortest-path
tree within its area
• Then prunes the branches of the tree that do not lead
to group members
• Optimized for an autonomous system that has a
limited number of groups
• Routers learn only their own areas and not the whole network, which can be inefficient
Protocol-Independent Multicast
• Works in tandem with IGMP and a unicast routing protocol such as OSPF
• PIM has two modes - dense and sparse
– Dense - for groups whose members are densely distributed
– Sparse - for smaller groups whose members are more widely dispersed
Dense-Mode PIM
• Similar to an older dense-mode protocol, the Distance Vector Multicast Routing Protocol (DVMRP)
• Uses a reverse-path-forwarding (RPF) mechanism to compute the shortest path
• When a multicast packet is received from a source for a group, the router determines whether it needs to forward it
• Uses prune messages to have itself deleted from paths it does not need
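The sketch below illustrates the reverse-path-forwarding check at the heart of dense-mode PIM, using an invented toy routing table: a packet is forwarded only if it arrived on the interface the router would use to reach its source.

```python
import ipaddress

# Assumed toy unicast routing table: source prefix -> upstream interface
unicast_route = {
    "10.1.0.0/16": "eth0",
    "10.2.0.0/16": "eth1",
}

def rpf_check(source_ip: str, arrival_iface: str) -> bool:
    """Return True if the packet arrived on the interface used to reach its source."""
    src = ipaddress.ip_address(source_ip)
    for prefix, iface in unicast_route.items():
        if src in ipaddress.ip_network(prefix):
            return iface == arrival_iface
    return False                              # no route back to the source: drop

print(rpf_check("10.1.5.9", "eth0"))          # True  -> flood out other interfaces
print(rpf_check("10.1.5.9", "eth1"))          # False -> drop (possible loop)
```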
Sparse-Mode PIM
• Sets up a rendezvous point to provide
registration services for a multicast group
• Hosts join by sending a membership report,
depart by sending a leave message
• A designated router tracks these messages
and periodically sends join and prune
messages to rendezvous point
Optimizing Network Performance to
Meet Quality of Service Requirements
• Two types of service
– Controlled-load service
• provides a client data flow with a QoS closely approximating the QoS that the flow would receive on an unloaded network
– Guaranteed service
• provides firm bounds on end-to-end packet-queuing delays; delay is guaranteed for applications that need it
IP Precedence and Type of Service
• Specifies both precedence and type of service
– Precedence - helps router determine which packets
to send when several packets are queued
– Type of service helps router select a routing path
when multiple paths are available
• Type-of-service byte - a 3-bit precedence field and a 4-bit type-of-service field
IP Precedence Field
• Specifies the importance of a packet
• Used for congestion control
• Values 0-5 are used for applications and user data
– 5 is typically set for Voice over IP and other
real-time applications
IP Type-of-Service Field
• Lets a router select a route based on route characteristics
– Delay bit - minimize delay (Telnet, Rlogin, voice, and video)
– Throughput bit - maximize throughput (file transfer)
– Reliability bit - maximize reliability (fault-tolerant path)
– Cost bit - minimize monetary cost
• Only one bit can be selected at a time
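A hedged example of setting precedence and type-of-service bits from an application, assuming a platform that exposes the IP_TOS socket option; the precedence value and destination address are illustrative.

```python
import socket

# Precedence occupies the top three bits of the ToS byte; the delay bit
# requests a low-delay path.  Precedence 5 is typical for Voice over IP.
PRECEDENCE_CRITICAL = 5 << 5          # 0xA0
LOW_DELAY = 1 << 4                    # "minimize delay" bit

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, PRECEDENCE_CRITICAL | LOW_DELAY)
sock.sendto(b"voice payload", ("192.0.2.10", 5004))   # example destination
```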
Resource Reservation Protocol
• Alternative to IP type-of-service and
precedence capabilities in the IP header
• Supports more sophisticated mechanisms for signaling the QoS requirements of individual traffic flows
• Can be deployed on LANs and intranets to support multimedia applications
• One of the QoS signaling protocols for delivering QoS requirements across a network
Resource Reservation Protocol
(Cont’d)
• A setup protocol, not a routing protocol
• Receiver responsible for requesting level of
service
• Provides a general facility for creating and
maintaining information on resource
reservations
• More suited for private intranets than the
Internet or other public networks
Common Open Policy Service
Protocol
• Understands actual services and the policies regarding those services
• Defines a simple client/server model for
supporting policy control with QoS
signaling protocols
• A policy server is a policy-decision point
and the client is a policy-enforcement point
IEEE 802.1 Specification
• Specifies mechanisms in bridges to expedite
the delivery of time-critical traffic
• Limits the extent of high-bandwidth
multicast traffic
IP Version 6
• Enhances the capability of hosts and routers
to implement varying levels of QoS for
different traffic flows
• A source host assigns a flow label to each
application flow. The router maintains a
cache of flow information indexed by
source address and flow label
• Header includes a 4-bit priority field
Real-Time Transport Protocol (RTP)
• Used by multimedia applications
• Provides end-to-end network transport
functions suitable for transmitting real-time
data
• Usually runs on top of User Datagram
Protocol (UDP)
• Relies on lower layer services to deliver
QoS
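The following sketch builds the fixed 12-byte RTP header and sends it over UDP; the payload type, SSRC, and destination are invented example values.

```python
import socket
import struct

def rtp_packet(seq: int, timestamp: int, payload: bytes,
               payload_type: int = 0, ssrc: int = 0x1234ABCD) -> bytes:
    """Build a minimal RTP packet: 12-byte fixed header plus payload."""
    vpxcc = 2 << 6                      # version 2, no padding/extension/CSRC
    m_pt = payload_type & 0x7F          # marker bit 0, 7-bit payload type
    header = struct.pack("!BBHII", vpxcc, m_pt, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(rtp_packet(seq=1, timestamp=160, payload=b"\x00" * 160),
            ("192.0.2.20", 5004))       # example receiver
```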
Cisco Internetwork Operating System (IOS) Features for Optimizing Network Performance
• Features range from proxy services, which allow specialized tasks to be delegated to a router or switch, to advanced switching and queuing services that improve throughput and offer QoS functionality
Proxy Services
– Allow a router to act as a surrogate for a service that is not available locally
– Support a router performing tasks beyond its typical duties to minimize delay and bandwidth usage
– Can convert a frame type to a new type that
causes less traffic
– Improve performance for applications that are
time sensitive
Switching Techniques
• Routers switch packets from incoming interfaces to outgoing interfaces
• The speed at which a router can do this is a major factor in determining network performance
• In general, use the fastest switching method available
Classic Methods for Layer-3
Packet Switching
• Process switching is the slowest of the switching methods
– The processor is interrupted to process packet information
– Fast switching uses an entry in the fast-switching cache
– Autonomous switching uses an autonomous-switching cache
Classic Methods for Layer-3
Packet Switching (Cont’d)
– Silicon switching speeds up autonomous switching by using a silicon-switching cache
– Optimum switching is faster due to an enhanced caching algorithm and the optimized structure of the cache
– Distributed switching supports very fast
throughput because the switching process
occurs on the interface card
NetFlow Switching
• A newer switching method optimized for environments where services must be applied to packets to implement security, QoS features, and traffic accounting
• Identifies traffic flows, applies services to the first packet of a flow, and then quickly switches the remaining packets in that flow
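A simplified sketch of the flow-cache idea behind NetFlow switching: the first packet of a flow takes the slow path where services are applied, and the decision is cached against the flow's 5-tuple for later packets. The policy result here is assumed.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    out_iface: str
    permitted: bool
    packets: int = 0          # per-flow accounting

flow_cache: dict[tuple, Decision] = {}

def switch_packet(src, dst, proto, sport, dport):
    key = (src, dst, proto, sport, dport)
    entry = flow_cache.get(key)
    if entry is None:                                        # first packet: slow path
        entry = Decision(out_iface="eth1", permitted=True)   # assumed ACL/QoS result
        flow_cache[key] = entry
    entry.packets += 1
    return entry.out_iface if entry.permitted else None

print(switch_packet("10.1.1.1", "10.2.2.2", "udp", 5004, 5004))   # -> eth1
```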
Cisco Express Forwarding
• Technique for switching packets very quickly across large backbone networks and the Internet
• Evolved to accommodate Web-based applications and other interactive applications that are characterized by sessions of short duration to multiple destination addresses
Tag Switching
– Optimizes packet-switching through a network of
tag switches
– A tag switch is a router or switch that supports tag
switching
– Tagging the first packet of a flow expedites the forwarding of the packets that follow
– Three major components: tag edge routers, tag switches, and the Tag Distribution Protocol (TDP)
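The core label-swapping step can be sketched as a single exact-match table lookup; the table contents below are invented examples.

```python
# (incoming interface, incoming tag) -> (outgoing interface, outgoing tag)
tag_table = {
    ("eth0", 17): ("eth2", 42),
    ("eth1", 23): ("eth2", 42),
    ("eth2", 42): ("eth3", 7),
}

def forward(in_iface: str, in_tag: int):
    """One exact-match lookup replaces a full routing-table lookup."""
    out_iface, out_tag = tag_table[(in_iface, in_tag)]
    return out_iface, out_tag          # rewrite the tag, send on out_iface

print(forward("eth0", 17))             # -> ('eth2', 42)
```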
Queuing Services
• Allows a network device to handle an
overflow of traffic using queuing methods
• First In, First Out
• Priority Queuing
• Custom Queuing
• Weighted fair queuing
First In, First Out Queuing
• Basic store and forward functionality
• Stores packets when the network is congested and forwards them in the order they arrived
• Provides no QoS functionality
Priority Queuing
• Ensures that important traffic is processed
first
• Designed to give strict priority to a critical
application
• Is appropriate where WAN links are
congested from time to time
• Has four queues: high, medium, normal
and low
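A minimal sketch of strict priority queuing with the four queues named above; the traffic classification is invented.

```python
from collections import deque

queues = {"high": deque(), "medium": deque(), "normal": deque(), "low": deque()}

def enqueue(packet: bytes, queue_name: str = "normal"):
    queues[queue_name].append(packet)

def dequeue():
    for name in ("high", "medium", "normal", "low"):   # strict priority order
        if queues[name]:
            return queues[name].popleft()
    return None                                        # all queues empty

enqueue(b"bulk-transfer", "low")
enqueue(b"voice", "high")
print(dequeue())   # b'voice' is sent before the earlier low-priority packet
```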
Custom Queuing
• Designed to allow a network to be shared
among applications with different minimum
bandwidth or latency requirements
• Assigns different amounts of queue space to
different protocols and handles the queues
in round-robin order
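A simplified sketch of custom queuing: queues are visited in round-robin order, and each queue keeps sending until its configured byte count for the pass is used up (the packet that crosses the threshold is still sent). Queue names and byte counts are examples.

```python
from collections import deque

queues = {"voice": deque(), "sna": deque(), "ip": deque()}
byte_count = {"voice": 2000, "sna": 3000, "ip": 1500}     # per-pass allowance

def round_robin_pass():
    sent = []
    for name, q in queues.items():
        budget = byte_count[name]
        while q and budget > 0:        # keep sending while budget remains
            pkt = q.popleft()
            budget -= len(pkt)
            sent.append((name, len(pkt)))
    return sent

queues["ip"].extend([b"x" * 700, b"x" * 700])
queues["voice"].append(b"v" * 200)
print(round_robin_pass())
```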
Weighted Fair Queuing
• Sophisticated set of algorithms designed to
reduce delay variability and provide
predictable throughput and response time for
traffic flows
• Goal is to offer uniform service to light and
heavy network users alike
• Recognizes an interactive application and
schedules that traffic to the front of the queue
Weighted Fair Queuing (Cont’d)
• Adapts automatically to changing network
traffic conditions and requires little or no
configuration
• Can allocate bandwidth based on
precedence
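A much-simplified sketch of the weighted-fair-queuing idea: each packet is stamped with a virtual finish time proportional to its size divided by its flow's weight, and the scheduler always sends the packet with the smallest finish time. The flows and weights are invented.

```python
import heapq

last_finish: dict[str, float] = {}
heap: list[tuple[float, str, bytes]] = []
virtual_now = 0.0

def enqueue(flow: str, packet: bytes, weight: float):
    start = max(virtual_now, last_finish.get(flow, 0.0))
    finish = start + len(packet) / weight      # heavier weight => earlier finish
    last_finish[flow] = finish
    heapq.heappush(heap, (finish, flow, packet))

def dequeue():
    global virtual_now
    if not heap:
        return None
    finish, flow, packet = heapq.heappop(heap)
    virtual_now = finish
    return flow, packet

enqueue("ftp", b"d" * 1500, weight=1.0)
enqueue("telnet", b"k" * 60, weight=1.0)
print(dequeue())   # the small interactive packet is scheduled first
```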
Random Early Detection
• A new class of congestion-avoidance algorithms
• Monitors traffic loads at points in a network and
randomly discards packets if congestion begins
to increase
• Source nodes detect dropped traffic and slow
their transmission rate
• Uses a randomization process
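A hedged sketch of the RED drop decision: an exponentially weighted average queue depth is tracked, and arriving packets are dropped with a probability that rises as the average moves between two thresholds. The thresholds and weight below are illustrative, not vendor defaults.

```python
import random

MIN_TH, MAX_TH, MAX_P, WEIGHT = 20, 60, 0.1, 0.2
avg_depth = 0.0

def red_should_drop(current_depth: int) -> bool:
    global avg_depth
    avg_depth = (1 - WEIGHT) * avg_depth + WEIGHT * current_depth
    if avg_depth < MIN_TH:
        return False                       # no congestion: never drop
    if avg_depth >= MAX_TH:
        return True                        # severe congestion: always drop
    drop_p = MAX_P * (avg_depth - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < drop_p        # probabilistic early drop

for depth in (10, 30, 50, 70, 90):
    print(depth, red_should_drop(depth))
```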
Weighted Random Early
Detection
• Combines the capabilities of standard RED
algorithm with IP precedence
• Provides preferential traffic handling for
higher-priority packets
• Selectively discards lower priority traffic
Traffic Shaping
• Allows management and control of network
traffic to avoid bottlenecks and meet QoS
requirements
• Avoids congestion by reducing outbound traffic for a flow to a configured bit rate, while queuing bursts that exceed that rate
• Configured on a per-interface basis
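A token-bucket sketch of the shaping idea: tokens accrue at the configured bit rate, a packet is sent only when enough tokens exist, and excess bursts wait. The rate and burst size are examples.

```python
import time

RATE_BPS = 125_000          # 1 Mbit/s expressed in bytes per second
BURST_BYTES = 15_000        # bucket depth (allowed burst)

tokens = BURST_BYTES
last = time.monotonic()

def shape_send(packet: bytes, send):
    """Wait until the packet conforms to the configured rate, then send it."""
    global tokens, last
    while True:
        now = time.monotonic()
        tokens = min(BURST_BYTES, tokens + (now - last) * RATE_BPS)
        last = now
        if tokens >= len(packet):
            tokens -= len(packet)
            send(packet)
            return
        time.sleep((len(packet) - tokens) / RATE_BPS)   # wait for enough tokens

shape_send(b"x" * 1500, lambda p: print(f"sent {len(p)} bytes"))
```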
Committed Access Rate
• Supports specifying policies regarding how
traffic that exceeds a certain bandwidth
allocation should be handled
• Looks at received traffic, compares it to a
configured maximum and takes action
based on the result
Cisco WAN Switching
Optimization Techniques
• Techniques that dynamically allocate bandwidth where it is needed, avoid using bandwidth unnecessarily, and prioritize, manage, and control bandwidth usage
– Voice Activity Detection
– Prioritization and Traffic Management on WAN Switches
Voice Activity Detection
• Saves bandwidth by generating data only
when someone is speaking
• Voice conversations tend to be 60 percent
silent so VAD is an effective way to
dynamically free up bandwidth
• Test the default settings for VAD to ensure they are appropriate for the network and users
Prioritization and Traffic
Management on WAN Switches
• WAN switch features:
– Advanced Class of Service (CoS) management - provides dedicated queues and queuing algorithms for the different sub-classes of service defined in a network
– Optimized Bandwidth Management - an
implementation of version 4.0 of the ATM Forum
Traffic Management Specification that enables a
switch to continually monitor trunk utilization
Prioritization and Traffic Management
on WAN Switches (Cont’d)
• Automatic Routing Management provides
end-to-end connection management services
via a source-based routing algorithm
– selects a route based on network topology, class
of service, trunk loading and relative distance to
the destination
Summary
• Optimization provides the high bandwidth,
low delay, and controlled jitter required by
many critical business applications
• Multimedia and other applications that are
sensitive to network congestion and delay
can inform routers in the network of their
QoS requirements using both in-band and out-of-band methods