DAY - 7
Distance Vector Routing Protocols
Distance vector means that routes are advertised by providing two characteristics:
Distance: Identifies how far it is to the destination network, based on a metric such as hop count, cost, bandwidth, or delay.
Vector: Specifies the direction of the next-hop router or exit interface to reach the destination.
The Meaning of Distance Vector
A router using a distance vector routing protocol does not have knowledge of the entire path to a destination network. Distance vector protocols use routers as signposts along the path to the final destination. The only information a router knows about a remote network is the distance, or metric, to reach that network and which path or interface to use to get there. Distance vector routing protocols do not have an actual map of the network topology.
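To make this concrete, here is a minimal, hypothetical sketch in Python of how a distance vector router could merge a neighbour's advertisement, Bellman-Ford style. The addresses, table layout, and hop-count metric are assumptions for illustration, not the actual update handling of RIP or EIGRP.

# Local table: destination network -> (metric, next hop)
routing_table = {
    "10.0.0.0/8":    (1, "directly connected"),
    "172.16.0.0/16": (2, "192.168.1.2"),
}

def process_update(table, neighbor_ip, advertised):
    """Merge a neighbour's advertised distances into the local table."""
    for network, metric in advertised.items():
        candidate = metric + 1            # one extra hop through the neighbour
        current = table.get(network, (float("inf"), None))
        if candidate < current[0]:        # keep only the shorter distance
            table[network] = (candidate, neighbor_ip)

# Neighbour 192.168.1.2 advertises what it can reach and how far away it is.
process_update(routing_table, "192.168.1.2", {"192.168.5.0/24": 3})
print(routing_table["192.168.5.0/24"])    # (4, '192.168.1.2')

Note how the router never learns the full path: it stores only the distance and the direction (next hop), exactly the "signpost" behaviour described above.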
There are four distance vector IPv4 IGPs:
RIPv1: First-generation legacy protocol
RIPv2: Simple distance vector routing protocol
IGRP: First-generation Cisco-proprietary protocol (obsolete and replaced by EIGRP)
EIGRP: Advanced distance vector routing protocol
Link-State Routing Protocols
In contrast to distance vector routing protocol operation, a router configured with a link-state routing protocol can create a complete view, or topology, of the network by gathering information from all of the other routers. To continue our analogy of signposts, using a link-state routing protocol is like having a complete map of the network topology. The signposts along the way from source to destination are not necessary, because all link-state routers use an identical map of the network. A link-state router uses the link-state information to create a topology map and to select the best path to every destination network in the topology.
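Because every link-state router holds the same map, each one can independently run a shortest-path computation over it. The sketch below applies Dijkstra's algorithm to a small hypothetical four-router topology; the router names and link costs are invented for illustration and the weights stand in for OSPF-style link costs.

import heapq

# Hypothetical link-state database: every router has this same map.
topology = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 8},
    "R4": {"R2": 1, "R3": 8},
}

def shortest_paths(graph, source):
    """Dijkstra's algorithm: best cost from source to every router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                      # stale heap entry, skip it
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

print(shortest_paths(topology, "R1"))  # {'R1': 0, 'R2': 10, 'R3': 5, 'R4': 11}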
RIP-enabled routers send periodic updates of their routing information to their neighbours. Link-state routing protocols do not use periodic updates. After the network has converged, a link-state update is sent only when there is a change in the topology. For example, in the figure, a link-state update is sent when the 172.16.3.0 network goes down.
Link-State Protocol Operation
Link-state protocols work best in situations where:
The network design is hierarchical, usually occurring in large networks
Fast convergence of the network is crucial
The administrators have good knowledge of the implemented link-state routing protocol
There are two link-state IPv4 IGPs:
OSPF: Popular standards-based routing protocol
IS-IS: Popular in provider networks
Classful Routing Protocols
The biggest distinction between classful and classless routing protocols is that classful routing protocols do not send subnet mask information in their routing updates. Classless routing protocols include subnet mask information in their routing updates.
The two original IPv4 routing protocols developed were RIPv1 and IGRP. They were created when network addresses were allocated based on classes (i.e., Class A, B, or C). At that time, a routing protocol did not need to include the subnet mask in the routing update, because the network mask could be determined based on the first octet of the network address.
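As a quick illustration of that rule, the sketch below derives the implied mask from the first octet alone, just as a classful router would. It is a simplified illustration, not actual router code.

def classful_mask(address):
    """Infer the classful network mask from the first octet of an IPv4 address."""
    first_octet = int(address.split(".")[0])
    if 1 <= first_octet <= 126:
        return "255.0.0.0"        # Class A
    if 128 <= first_octet <= 191:
        return "255.255.0.0"      # Class B
    if 192 <= first_octet <= 223:
        return "255.255.255.0"    # Class C
    return None                   # Class D/E: not unicast networks

print(classful_mask("10.1.1.0"))      # 255.0.0.0
print(classful_mask("172.16.3.0"))    # 255.255.0.0
print(classful_mask("192.168.1.0"))   # 255.255.255.0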
The fact that RIPv1 and IGRP do not include subnet mask information in their updates means that they cannot support variable-length subnet masking (VLSM) or Classless Inter-Domain Routing (CIDR).
Classful routing protocols also create problems in discontiguous networks. A discontiguous network is one in which subnets from the same classful major network address are separated by a different classful network address.
To illustrate the shortcoming of classful routing, refer to the topology in the figure, in which R1 forwards a classful update to R2.
Classless Routing Protocols
Modern networks no longer use classful IP addressing, and the subnet mask cannot be determined by the value of the first octet. The classless IPv4 routing protocols (RIPv2, EIGRP, OSPF, and IS-IS) all include the subnet mask information with the network address in routing updates. Classless routing protocols support VLSM and CIDR.
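The sketch below uses Python's standard ipaddress module to show what carrying the prefix length buys: the same Class B block can be carved into subnets of different sizes (VLSM). The specific networks chosen are illustrative.

import ipaddress

# The prefix length travels with the network address, so one classful
# Class B block can hold subnets of very different sizes.
office = ipaddress.ip_network("172.16.0.0/24")   # a LAN-sized subnet
link   = ipaddress.ip_network("172.16.3.0/30")   # a point-to-point link

for net in (office, link):
    print(net, "mask:", net.netmask, "usable hosts:", net.num_addresses - 2)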
IPv6 routing protocols are classless. The distinction between classful and classless typically applies only to IPv4 routing protocols. All IPv6 routing protocols are considered classless because they include the prefix length with the IPv6 address.
Routing Protocol Characteristics
Routing protocols can be compared based on the following characteristics:
Speed of convergence: Speed of convergence defines how quickly the routers in the network topology share routing information and reach a state of consistent knowledge. The faster the convergence, the more preferable the protocol. Routing loops can occur when inconsistent routing tables are not updated due to slow convergence in a changing network.
Scalability: Scalability defines how large a network can become,
based on the routing protocol that is deployed. The larger the network is, the
more scalable the routing protocol needs to be.
Classful or classless (use of VLSM): Classful routing protocols do not include the subnet mask and cannot support variable-length subnet masking (VLSM). Classless routing protocols include the subnet mask in the updates. Classless routing protocols support VLSM and better route summarization.
Resource usage: Resource usage includes the requirements of a routing
protocol such as memory space (RAM), CPU utilization, and link bandwidth
utilization. Higher resource requirements necessitate more powerful hardware to
support the routing protocol operation, in addition to the packet forwarding
processes.
Implementation and maintenance: Implementation and maintenance describes the level of knowledge that is required for a network administrator to implement and maintain the network based on the routing protocol deployed.
UDP (User Datagram Protocol)
UDP (User Datagram Protocol) is a communications protocol that offers a limited amount of service when messages are exchanged between computers in a network that uses the Internet Protocol (IP). UDP is an alternative to the Transmission Control Protocol (TCP) and, together with IP, is sometimes referred to as UDP/IP. Like the Transmission Control Protocol, UDP uses the Internet Protocol to actually get a data unit (called a datagram) from one computer to another. Unlike TCP, however, UDP does not provide the service of dividing a message into packets (datagrams) and reassembling it at the other end. Specifically, UDP does not provide sequencing of the packets that the data arrives in. This means that the application program that uses UDP must be able to make sure that the entire message has arrived and is in the right order. Network applications that want to save processing time because they have very small data units to exchange (and therefore very little message reassembling to do) may prefer UDP to TCP. The Trivial File Transfer Protocol (TFTP), for example, uses UDP instead of TCP.
UDP provides two services not provided by the IP layer: it provides port numbers to help distinguish different user requests and, optionally, a checksum capability to verify that the data arrived intact. In the Open Systems Interconnection (OSI) communication model, UDP, like TCP, is in Layer 4, the Transport Layer.
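A minimal sketch of those ideas using Python's standard socket API appears below: the port number identifies the receiving service, and the datagram is delivered as a single unit with no connection setup, sequencing, or retransmission. The loopback address and port 9999 are arbitrary choices for illustration.

import socket

# Receiver: bind to a port so the OS can demultiplex datagrams to us.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))        # the port number identifies this service

# Sender: no connection, just fire a single datagram at the destination port.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello via UDP", ("127.0.0.1", 9999))

data, addr = receiver.recvfrom(1024)      # one datagram, delivered as-is (if at all)
print(data, "from", addr)

sender.close()
receiver.close()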
TCP (Transmission Control Protocol)
TCP (Transmission Control Protocol) is a standard that defines how to establish and maintain a network conversation via which application programs can exchange data. TCP works with the Internet Protocol (IP), which defines how computers send packets of data to each other. Together, TCP and IP are the basic rules defining the Internet. TCP is defined by the Internet Engineering Task Force (IETF) in the Request for Comments (RFC) standards document number 793.
TCP is a connection-oriented protocol, which means a connection is established and maintained until the application programs at each end have finished exchanging messages. It determines how to break application data into packets that networks can deliver, sends packets to and accepts packets from the network layer, manages flow control, and, because it is meant to provide error-free data transmission, handles retransmission of dropped or garbled packets as well as acknowledgement of all packets that arrive. In the Open Systems Interconnection (OSI) communication model, TCP covers parts of Layer 4, the Transport Layer, and parts of Layer 5, the Session Layer.
For example, when a Web server sends an HTML file to a client, it uses the HTTP protocol to do so. The HTTP program layer asks the TCP layer to set up the connection and send the file. The TCP stack divides the file into packets, numbers them, and then forwards them individually to the IP layer for delivery. Although each packet in the transmission has the same source and destination IP addresses, packets may be sent along multiple routes. The TCP program layer in the client computer waits until all of the packets have arrived, acknowledges those it receives, asks for retransmission of any it did not receive (based on missing packet numbers), and then assembles them into a file and delivers the file to the receiving application.
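The sketch below shows the connection-oriented pattern with Python's standard socket API over loopback: the connect() call performs the handshake, and the kernel's TCP stack handles the segmentation, sequencing, acknowledgements, and retransmission described above. Port 8888 and the message are arbitrary.

import socket

# Server side: listen for incoming connections.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8888))
server.listen(1)

# Client side: the three-way handshake happens inside connect().
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8888))

conn, addr = server.accept()              # pick up the established connection
client.sendall(b"GET /index.html")        # bytes arrive in order, or not at all
print(conn.recv(1024))

client.close()
conn.close()
server.close()

Contrast this with the UDP sketch above: here the application never worries about loss or ordering, because the connection provides both.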
Congestion Control
Congestion is an important issue that can arise in packet-switched networks. Congestion is a situation in communication networks in which too many packets are present in a part of the subnet and performance degrades. Congestion in a network may occur when the load on the network (i.e., the number of packets sent to the network) is greater than the capacity of the network (i.e., the number of packets a network can handle). In other words, when too much traffic is offered, congestion sets in and performance degrades sharply.
Causes of Congestion:
The various causes of congestion in a subnet are:
1. The input traffic rate exceeds the capacity of the output lines. If streams of packets suddenly start arriving on three or four input lines and all need the same output line, a queue builds up. If there is insufficient memory to hold all the packets, packets will be lost. Increasing the memory to unlimited size does not solve the problem, because by the time packets reach the front of the queue, they have already timed out (while they waited in the queue). When the timer goes off, the source transmits duplicate packets, which are also added to the queue. Thus the same packets are added again and again, increasing the load all the way to the destination.
2. The routers are too slow to perform bookkeeping tasks (queuing buffers, updating tables, etc.).
3. The routers' buffers are too limited.
4. Congestion in a subnet can occur if the processors are slow. A slow CPU at a router performs routine tasks such as queuing buffers and updating tables slowly. As a result, queues build up even though there is excess line capacity.
5. Congestion is also caused by slow links. Using high-speed links might seem to solve this problem, but that is not always the case: sometimes an increase in link bandwidth can make the congestion problem worse, because higher-speed links may make the network more unbalanced. Congestion can make itself worse. If a router does not have free buffers, it starts ignoring or discarding newly arriving packets. When these packets are discarded, the sender may retransmit them after the timer goes off, and such packets are transmitted again and again until the source gets an acknowledgement for them. Therefore, multiple transmissions of packets force congestion to take place at the sending end.
How to correct the Congestion Problem:
Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened. Congestion control mechanisms are divided into two categories: one category prevents the congestion from happening, and the other removes congestion after it has taken place.
These two categories are:
1. Open loop
2. Closed loop
Open Loop Congestion Control
• In this method, policies are used to prevent the congestion before it happens.
• Congestion control is handled either by the source or by the destination.
• The various methods used for open loop congestion control are:
1. Retransmission Policy
• The sender retransmits a packet if it believes that the packet it has sent is lost or corrupted.
• However, retransmission in general may increase congestion in the network, so a good retransmission policy is needed to prevent congestion.
• The retransmission policy and the retransmission timers need to be designed to optimize efficiency and at the same time prevent congestion.
2. Window Policy
• To implement the window policy, the selective reject window method is used for congestion control.
• The selective reject method is preferred over the go-back-n window because in the go-back-n method, when the timer for a packet times out, several packets are resent, although some may have arrived safely at the receiver. This duplication may make congestion worse.
• The selective reject method resends only the specific lost or damaged packets, as the comparison below illustrates.
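Here is a toy comparison of the two window policies under one assumed scenario: eight packets in flight and only packet 3 lost. The numbers are invented purely to show the difference in retransmission volume.

# Eight packets in flight; only packet 3 is lost.
window = list(range(8))
lost = {3}

# Go-back-n: when packet 3 times out, packet 3 and everything after it are resent.
go_back_n_resend = [p for p in window if p >= min(lost)]

# Selective reject: only the lost packet itself is resent.
selective_resend = sorted(lost)

print("Go-back-n resends:       ", go_back_n_resend)  # [3, 4, 5, 6, 7]
print("Selective reject resends:", selective_resend)  # [3]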
3. Acknowledgement Policy
• The acknowledgement policy imposed by the receiver may also affect congestion.
• If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
• Acknowledgements also add to the traffic load on the network. Thus, by sending fewer acknowledgements we can reduce the load on the network.
• To implement this, several approaches can be used:
1. A receiver may send an acknowledgement only if it has a packet to be sent.
2. A receiver may send an acknowledgement when a timer expires.
3. A receiver may also decide to acknowledge only N packets at a time.
4. Discarding Policy
• A router may discard less sensitive packets when congestion is likely to happen.
• Such a discarding policy may prevent congestion and at the same time may not harm the integrity of the transmission.
5. Admission Policy
• An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual circuit networks.
• Switches in a flow first check the resource requirement of a flow before admitting it to the network.
• A router can deny establishing a virtual circuit connection if there is congestion in the network or if there is a possibility of future congestion.
Closed Loop Congestion Control
• Closed loop congestion control mechanisms try to remove the congestion after it happens.
• The various methods used for closed loop congestion control are:
1. Backpressure
• Backpressure is a node-to-node congestion control that starts with a node and propagates in the opposite direction of the data flow.
• The backpressure technique can be applied only to virtual circuit networks, where each node knows the upstream node from which a data flow is coming.
• In this method of congestion control, the congested node stops receiving data from the immediate upstream node or nodes.
• This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their upstream node or nodes.
• As shown in the figure, node 3 is congested, so it stops receiving packets and informs its upstream node 2 to slow down. Node 2 may in turn become congested and informs node 1 to slow down. Node 1 may then become congested and informs the source node to slow down. In this way the congestion is alleviated: the pressure on node 3 is moved backward to the source to remove the congestion. A toy model of this hop-by-hop propagation is sketched below.
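In the sketch below, the node names, queue lengths, queue limit, and topology are all invented for illustration; it only models the idea of pressure walking upstream along a virtual circuit.

# A toy virtual circuit: source -> node1 -> node2 -> node3.
path = ["source", "node1", "node2", "node3"]
queue_len = {"node1": 4, "node2": 9, "node3": 10}
QUEUE_LIMIT = 8

def apply_backpressure(congested):
    """Walk upstream from the congested node, telling each hop to slow down."""
    idx = path.index(congested)
    for node in reversed(path[:idx]):
        print(f"{congested} pressure: {node} told to slow down")
        if node == "source" or queue_len.get(node, 0) < QUEUE_LIMIT:
            break                          # this hop can absorb the pressure
        congested = node                   # this hop is now congested too

apply_backpressure("node3")
# node3 pressure: node2 told to slow down   (node2's queue is over the limit)
# node2 pressure: node1 told to slow down   (node1 has room, pressure stops)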
2. Choke Packet
• In this method of congestion control, a congested router or node sends a special type of packet, called a choke packet, to the source to inform it about the congestion.
• Here, the congested node does not inform its upstream node about the congestion, as is done in the backpressure method.
• In the choke packet method, the congested node sends a warning directly to the source station; the intermediate nodes through which the packet has traveled are not warned.
3. Implicit Signaling
• In implicit signaling, there is no communication between the congested node or nodes and the source.
• The source guesses that there is congestion somewhere in the network when it does not receive any acknowledgment. Therefore, a delay in receiving an acknowledgment is interpreted as congestion in the network.
• On sensing this congestion, the source slows down.
• This type of congestion control policy is used by TCP, as the sketch below illustrates.
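As a rough illustration, the toy loop below treats a timeout as the implicit congestion signal and halves a congestion window in response, growing it again on each acknowledgement. Real TCP congestion control (slow start, congestion avoidance, fast retransmit) is considerably more involved; this shows only the core idea.

cwnd = 1.0   # congestion window, in segments

def on_ack():
    global cwnd
    cwnd += 1.0                 # ACK received: no congestion inferred, send faster

def on_timeout():
    global cwnd
    cwnd = max(1.0, cwnd / 2)   # missing ACK read as congestion: slow down

for event in ["ack", "ack", "ack", "timeout", "ack"]:
    on_ack() if event == "ack" else on_timeout()
    print(event, "-> cwnd =", cwnd)
# ack -> 2.0, ack -> 3.0, ack -> 4.0, timeout -> 2.0, ack -> 3.0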