BGP OSPF Questions

= Hardware =
 
*1900 series routers (ISR G2) for small branch offices; they support WAN connectivity up to 25 Mbps and have two integrated Gigabit Ethernet ports.
*3900 series for medium and large branch offices; they support up to 375 Mbps. For example, the 3945 has three integrated Gigabit Ethernet ports, and a T3/E3 card can be installed based on the bandwidth requirement of the site.
*ASR1002 for large branch offices and hub topologies.
*Cisco Catalyst 2960G 24- and 48-port switches are EOL; they are replaced by the 2960-X series, which comes in 24- and 48-port models, supports stacking, and provides an 80 Gbps backplane.
 
 
 
= 2960-X =
 
*Total 10/100/1000 Ethernet ports: 24 or 48
*Uplinks: 2x 10 GE (SFP+) or 4x 1 GE (SFP) options
*FlexStack+: optional on all LAN Base and IP Lite models
*PoE/PoE+ power available: 370 W or 740 W

= Architecture =
 
*Small branch office: up to 50 users. For a small branch it is not necessary to have a multilayer architecture.
*Medium branch: up to 100 users. For medium and large branches we should have a multilayer architecture to provide high availability and resiliency.
*Large branch: up to 200 users or more.
 
= Redistribution from OSPF to BGP =
 
*All routes redistributed into BGP take the administrative distance (AD) of BGP. In order to redistribute all the OSPF routes, internal and external (E1 & E2), we need to use redistribute ospf <process-id> match internal external 1 external 2.
 
*Redistribution of BGP into OSPF will take a metric of 1; redistribution of OSPF into BGP takes the IGP metric (see the sketch below).
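
A minimal IOS sketch of the two redistribution directions above; the OSPF process ID (1) and the BGP AS number (65000) are assumed values for illustration:

<pre>
router ospf 1
 ! BGP redistributed into OSPF takes a metric of 1 by default
 redistribute bgp 65000 subnets
!
router bgp 65000
 ! Without the match keywords only internal OSPF routes are pulled in;
 ! this matches internal plus E1 and E2 external routes.
 redistribute ospf 1 match internal external 1 external 2
</pre>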
 
*QoS: each router maintains two kinds of queues per interface: a hardware queue that works on FIFO, and software queues (LLQ, CBWFQ, flow-based WFQ). A service policy applies only to the software queue.
 
 
*Use the tx-ring-limit command to tune the size of the transmit ring to a non-default value (the hardware queue is the last stop before the packet is transmitted); see the sketch after these notes.
 
*Note: An exception to these guidelines for LLQ is Frame Relay on the Cisco 7200 router and other non-Route/Switch Processor (RSP) platforms. The original implementation of LLQ over Frame Relay on these platforms did not allow the priority classes to exceed the configured rate during periods of non-congestion. Cisco IOS Software Release 12.2 removes this exception and ensures that non-conforming packets are only dropped if there is congestion. In addition, packets smaller than an FRF.12 fragmentation size are no longer sent through the fragmenting process, reducing CPU utilization.
 
*It's all based upon whether there is or is not congestion on the link.
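
As a rough sketch of the hardware/software queue split and the tx-ring-limit tuning mentioned above; the interface choice, ring size, and the policy name WAN-EDGE are assumptions, and the command is only available on platforms that support transmit-ring tuning:

<pre>
interface Serial0/0
 ! Hardware queue: FIFO transmit ring, the last stop before transmission.
 ! A smaller ring hands congestion back to the software queue sooner.
 tx-ring-limit 3
 ! Software queue: the LLQ/CBWFQ logic lives in the attached service policy.
 service-policy output WAN-EDGE
</pre>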
 
 
 
*The priority queue (LLQ) will always be served first, regardless of congestion. It is both guaranteed bandwidth AND policed if there is congestion. If there is no congestion, you may get more throughput from your priority-class traffic.
 
*If a class is underutilized, its bandwidth may be used by other classes. Generally speaking, this is harder to quantify than you may think, because in normal classes the "bandwidth" command is a minimum guarantee, so you may get MORE in varying amounts depending on what is in the queue at any point in time of congestion.
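
A minimal policy-map sketch of the LLQ versus bandwidth behavior described above; the class names (VOICE, BUSINESS) and the percentages are illustrative assumptions, with the class-maps assumed to be defined elsewhere:

<pre>
policy-map WAN-EDGE
 class VOICE
  ! LLQ: always serviced first; policed to this rate only under congestion
  priority percent 20
 class BUSINESS
  ! CBWFQ: a minimum guarantee; the class may get more when other queues are idle
  bandwidth percent 40
 class class-default
  fair-queue
</pre>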
 
 
*As mentioned before, policers determine whether each packet conforms to, exceeds, or (optionally) violates the configured traffic policies, and take the prescribed action. The action taken can include dropping or re-marking the packet. Conforming traffic is traffic that falls within the rate configured for the policer. Exceeding traffic is traffic that is above the policer rate but still within the burst parameters specified. Violating traffic is traffic that is above both the configured traffic rate and the burst parameters.
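
A hedged single-rate, two-color policer sketch matching the conform/exceed actions described above; the rate is an assumed value, and the exceed action could equally re-mark instead of drop:

<pre>
policy-map POLICE-TWO-COLOR
 class class-default
  ! Traffic within 8 Mbps (and its burst allowance) conforms; the rest exceeds
  police cir 8000000
   conform-action transmit
   exceed-action drop
</pre>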
 
 
 
 
*An improvement to the single-rate two-color marker/policer algorithm is based on RFC 2697, which details the logic of a single-rate three-color marker.
 
*The single-rate three-color marker/policer uses an algorithm with two token buckets. Any unused tokens in the first bucket are placed in a second token bucket to be used as credits later for temporary bursts that might exceed the CIR. The allowance of tokens placed in this second bucket is called the excess burst (Be), and this number of tokens is placed in the bucket when Bc is full. When Bc is not full, the second bucket holds the unused tokens. Be is the maximum number of bits that can exceed the burst size.
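
A hedged sketch of an RFC 2697-style single-rate, three-color policer in IOS MQC; the CIR, Bc, Be, and the re-marking value are assumptions:

<pre>
policy-map POLICE-SR-TCM
 class class-default
  ! cir = committed rate, bc = committed burst (first bucket),
  ! be = excess burst (second bucket filled with unused Bc tokens)
  police cir 8000000 bc 125000 be 125000
   conform-action transmit
   exceed-action set-dscp-transmit af11
   violate-action drop
</pre>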
 
= Queuing: FIFO, PQ, WFQ, CBWFQ =
 
*PQ: the high-priority queue is always serviced first, irrespective of traffic coming from the other queues.
 
*WFQ: flow based. Each flow is identified by source port, destination port, and source and destination addresses; WFQ always gives preference to smaller flows and smaller packet sizes.
 
*CBWFQ: traffic is classified and placed into classes; each class is allocated some amount of bandwidth, and queues are serviced on the basis of the amount of bandwidth allocated to each queue.
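
A short sketch of the classification step behind CBWFQ as described above; the class names and the DSCP values used for matching are assumptions:

<pre>
class-map match-any VOICE
 match dscp ef
class-map match-any BUSINESS
 match dscp af31 af32
!
policy-map LAN-QOS
 class VOICE
  bandwidth percent 30
 class BUSINESS
  bandwidth percent 30
 class class-default
  ! Unclassified traffic falls back to flow-based fair queuing
  fair-queue
</pre>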
 
 
 
*Random Early Detection (RED) is a congestion avoidance mechanism that takes advantage of the congestion control mechanism of TCP. By randomly dropping packets prior to periods of high congestion, RED tells the packet source to decrease its transmission rate. WRED drops packets selectively based on IP precedence. Edge routers assign IP precedences to packets as they enter the network. (WRED is useful on any output interface where you expect to have congestion. However, WRED is usually used in the core routers of a network, rather than at the edge.) WRED uses these precedences to determine how it treats different types of traffic.
 
*When a packet arrives, the following events occur:
1. The average queue size is calculated.

2. If the average is less than the minimum queue threshold, the arriving packet is queued.

3. If the average is between the minimum queue threshold for that type of traffic and the maximum threshold for the interface, the packet is either dropped or queued, depending on the packet drop probability for that type of traffic.

4. If the average queue size is greater than the maximum threshold, the packet is dropped.
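
A hedged interface-level WRED sketch following the precedence-based behavior above; the interface and the threshold values are assumed for illustration:

<pre>
interface Serial0/1
 ! Enable IP precedence-based WRED on the output queue
 random-detect
 ! precedence  min-threshold  max-threshold  mark-probability-denominator
 random-detect precedence 0 20 40 10
 random-detect precedence 5 35 40 10
</pre>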
 
= IPSEC =