BGP OSPF Questions

*Medium branch - up to 100 users. For medium/large branches we should have a multilayer architecture to provide high availability and resiliency.
*Large branch - up to 200 users or more
 
= Redistribution from OSPF to BGP =
 
*All routes redistributed into BGP take the AD value of BGP. In order to redistribute all the OSPF routes, internal and external (E1 & E2), we need to use redistribute ospf <process-id> match internal external 1 external 2.
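A minimal sketch of that redistribution (the OSPF process ID 1 and BGP AS 65001 are assumed for illustration):
router bgp 65001
 redistribute ospf 1 match internal external 1 external 2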
 
*Redistribution of BGP into OSPF takes a metric of 1; redistribution of OSPF into BGP takes the IGP metric.
 
*QoS - each router maintains two kinds of queues: the hardware queue, which works on FIFO, and the software queues (LLQ, CBWFQ, flow-based WFQ). A service policy applies only to the software queues.
 
*Use the tx-ring-limit command to tune the size of the transmit ring to a non-default value (hardware queue is last stop before the packet is transmitted)
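A hedged LLQ/CBWFQ sketch tying these pieces together (the class names, DSCP values and the 512/256 kbps figures are illustrative only; tx-ring-limit support and the right value depend on the platform and interface type):
class-map match-any VOICE
 match ip dscp ef
class-map match-any BUSINESS
 match ip dscp af31
policy-map WAN-EDGE
 class VOICE
  priority 512
 class BUSINESS
  bandwidth 256
 class class-default
  fair-queue
interface Serial0/0
 tx-ring-limit 3
 service-policy output WAN-EDGE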
 
Note: An exception to these guidelines for LLQ is Frame Relay on the Cisco 7200 router and other non-Route/Switch Processor (RSP) platforms. The original implementation of LLQ over Frame Relay on these platforms did not allow the priority classes to exceed the configured rate during periods of non-congestion. Cisco IOS Software Release 12.2 removes this exception and ensures that non-conforming packets are only dropped if there is congestion. In addition, packets smaller than an FRF.12 fragmentation size are no longer sent through the fragmenting process, reducing CPU utilization.
 
*It's all based upon whether there is or is not congestion on the link.
 
*The priority queue (LLQ) will always be served first, regardless of congestion. It will be both guaranteed bandwidth AND policed if there is congestion. If there is not congestion, you may get more throughput of your priority class traffic.
 
*If the class is underutilized then the bandwidth may get used by other classes. Generally speaking this is harder to quantify than you may think. Because in normal classes, the "bandwidth" command is a minimum of what's guaranteed. So you may get MORE in varying amounts just depending on what is in the queue at any point in time of congestion.
 
*As mentioned before, policers determine whether each packet conforms to, exceeds, or (optionally) violates the configured traffic policies and take the prescribed action. The action taken can include dropping or re-marking the packet. Conforming traffic is traffic that falls within the rate configured for the policer. Exceeding traffic is traffic that is above the policer rate but still within the burst parameters specified. Violating traffic is traffic that is above both the configured traffic rate and the burst parameters.
 
*An improvement to the single-rate two-color marker/policer algorithm is based on RFC 2697, which details the logic of a single-rate three-color marker.
 
*The single-rate three-color marker/policer uses an algorithm with two token buckets. Any unused tokens in the first bucket are placed in a second token bucket to be used as credits later for temporary bursts that might exceed the CIR. The allowance of tokens placed in this second bucket is called the excess burst (Be), and this number of tokens is placed in the bucket when Bc is full. When the Bc is not full, the second bucket contains the unused tokens. The Be is the maximum number of bits that can exceed the burst size.
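A sketch of a single-rate three-color policer in MQC (the 1 Mbps CIR, the Bc/Be values and the re-mark DSCP are illustrative assumptions):
policy-map POLICE-1M
 class class-default
  police cir 1000000 bc 125000 be 125000 conform-action transmit exceed-action set-dscp-transmit af11 violate-action drop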
 
= Queuing - FIFO, PQ, WFQ, CBWFQ =
 
* PQ - the high priority queue is always serviced first, irrespective of traffic coming from the other queues.
* WFQ - flow based; each flow is identified by source and destination address and source and destination port. WFQ always gives preference to smaller flows and lower packet sizes.
* CBWFQ - traffic is classified and placed into classes; each class is allocated some amount of bandwidth, and queues are serviced on the basis of the amount of bandwidth allocated to each queue.
 
* Random Early Detection (RED) is a congestion avoidance mechanism that takes advantage of the congestion control mechanism of TCP. By randomly dropping packets prior to periods of high congestion, RED tells the packet source to decrease its transmission rate. WRED drops packets selectively based on IP precedence. Edge routers assign IP precedences to packets as they enter the network. (WRED is useful on any output interface where you expect to have congestion. However, WRED is usually used in the core routers of a network, rather than at the edge.) WRED uses these precedences to determine how it treats different types of traffic.
 
* When a packet arrives, the following events occur:
1. The average queue size is calculated.
2. If the average is less than the minimum queue threshold, the arriving packet is queued.
3. If the average is between the minimum queue threshold for that type of traffic and the maximum threshold for the interface, the packet is either dropped or queued, depending on the packet drop probability for that type of traffic.
4. If the average queue size is greater than the maximum threshold, the packet is dropped.
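A minimal WRED sketch on an output interface (the interface, thresholds and mark-probability denominators are illustrative):
interface Serial0/1
 random-detect
 random-detect precedence 0 20 40 10
 random-detect precedence 5 30 40 100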
 
= IPSEC =
 
*Data packets for protocols that require Layer 7 inspection can also go through the fast path.
 
= BGP =
 
*BGP synchronization rule - if the AS is acting as transit for another AS, routes learned through BGP will not be advertised unless those routes are also learned through the IGP.
*If synchronization is turned on, a BGP router will not advertise a route learned from an iBGP peer to an eBGP peer unless that route is also learned through the IGP.
*Split horizon rule - routes learned through an iBGP neighbor will not be advertised to other iBGP neighbors.
*BGP path selection criteria - a route is excluded if the next hop is unreachable; then prefer highest weight, highest local preference, locally originated routes, shortest AS path length, lowest origin code (IGP < EGP < Incomplete), lowest MED, eBGP over iBGP, the closest IGP neighbor for iBGP paths, the oldest route for eBGP paths, and finally the lowest router ID.
*BGP Message types - Keepalive, notification, open, update.
 
*Routes received from a route-reflector client are reflected to other clients and to non-client neighbors. If we have two route reflectors we should also keep them in separate clusters to avoid loops. With multiple RRs using different cluster IDs, the optimal path is selected by choosing the shorter cluster list. Having multiple RRs in the same cluster creates partial connectivity during failures.
 
*The route reflector also sets an additional BGP attribute called originator ID, which it sets to the BGP router ID of the client. If any router receives a route containing its own router ID as the originator ID, it ignores the route.
 
*Confederations - breaking an AS into smaller sub-ASes so that they can exchange routing updates using intra-confederation eBGP sessions,
but on the intra-confederation eBGP sessions the iBGP attributes are still preserved (like next hop, metric, local preference).
 
*Commands - under the BGP process: bgp confederation identifier x.x - the original AS
bgp confederation peers x.x y.y ... - specifies the other intra-confederation sub-ASes within the AS.
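A small confederation sketch, assuming real AS 100 split into sub-AS 65010 and 65020 (the AS numbers and address are placeholders):
router bgp 65010
 bgp confederation identifier 100
 bgp confederation peers 65020
 neighbor 10.1.1.2 remote-as 65020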
 
*MED vs AS-path prepend - MED does not propagate beyond the neighboring AS, while AS-path prepending does.
*bgp always-compare-med - compares MED for paths from neighbors in different ASes.
*bgp deterministic-med - compares MED for paths advertised by different peers in the same AS.
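Both behaviours are toggled under the BGP process, for example (AS 100 is assumed):
router bgp 100
 bgp always-compare-med
 bgp deterministic-med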
 
*BGP conditional advertisement uses two route-maps, an advertise-map and a non-exist-map: the prefixes in the advertise-map are advertised only if there is no route in the BGP table matching the non-exist-map.
*BGP conditional route injection uses an inject-map and an exist-map: it advertises the specific routes defined in the inject-map out of the summary route present in the exist-map. It is the reverse of aggregation.
*SOO - Site of Origin - is used to prevent routing loops; it identifies the site from which a route originated so that the same route is not re-advertised back into that site.
*SOO is enabled on PE routers to mark the customer prefixes.
*BGP communities are used to tag routes, and the tags can be used to apply routing policy on upstream routers. The community attribute consists of four octets.
*In order to send communities we need to use the neighbor send-community command under the BGP process.
*The well-known BGP communities are:
Internet: advertise these routes to all neighbors.
Local-as: prevent sending routes outside the local As within the confederation.
No-Advertise: do not advertise this route to any peer, internal or external.
No-Export: do not advertise this route to external BGP peers.
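A sketch of tagging outbound routes with a community (the community value 100:300, the route-map name and the peer address are assumptions):
route-map SET-COMM permit 10
 set community 100:300 additive
router bgp 100
 neighbor 192.0.2.1 remote-as 200
 neighbor 192.0.2.1 send-community
 neighbor 192.0.2.1 route-map SET-COMM out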
 
*The local-as command can be used during migration of an AS - the router will generate BGP open messages using the AS number defined in local-as.
*neighbor x.x.x.x local-as 100 no-prepend replace-as dual-as (dual-as lets the remote peer keep whatever AS number is configured on its side).
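A sketch of the local-as migration scenario (old AS 100, new AS 200, peer in AS 300; the address is a placeholder):
router bgp 200
 neighbor 192.0.2.1 remote-as 300
 neighbor 192.0.2.1 local-as 100 no-prepend replace-as dual-as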
 
*Peer groups - peer groups are a way of defining templates/groups with settings for neighbor relationships.
*The same policy that goes to one neighbor in the peer group must go to all; if one neighbor needs a slightly different config, we do not use the peer group for that neighbor. The idea is a group with all the required BGP settings, with neighbors added to the group so they inherit the settings.
*Using a BGP peer group, one update is generated for the whole peer group instead of individual updates, which optimizes update generation. It also makes the configuration simpler.
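A peer-group sketch for iBGP neighbors (the group name, AS and addresses are assumptions):
router bgp 100
 neighbor IBGP-PEERS peer-group
 neighbor IBGP-PEERS remote-as 100
 neighbor IBGP-PEERS update-source Loopback0
 neighbor IBGP-PEERS send-community
 neighbor 10.0.0.2 peer-group IBGP-PEERS
 neighbor 10.0.0.3 peer-group IBGP-PEERS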
 
*BGP route reflectors - eliminate the need for a full iBGP mesh (similar in spirit to OSPF DR/BDR election); peering is only needed with the RR.
*When an RR gets an update from a client it sends it to the other RRs and to its clients.
*RRs modify the split horizon rule; the BGP cluster ID is used for loop prevention.
*An RR does not modify the next-hop attribute.
*Route reflectors modify the split horizon rule so that routes learned through iBGP can be forwarded to other iBGP neighbors; only the route reflector may do this.
*If a client has iBGP sessions with multiple route reflectors, it will receive two copies of all routes. This can create routing loops; to avoid this, each route reflector and its clients form a cluster, identified by a cluster ID that is unique within the AS.
*Whenever a route is reflected, the route reflector adds its cluster ID to the cluster-list attribute. If the route is for some reason reflected back to a route reflector and it recognizes its own cluster ID in the cluster list, it will not forward it.
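A minimal route reflector sketch (AS 100, cluster ID 1 and the client addresses are assumptions):
router bgp 100
 bgp cluster-id 1
 neighbor 10.0.0.2 remote-as 100
 neighbor 10.0.0.2 route-reflector-client
 neighbor 10.0.0.3 remote-as 100
 neighbor 10.0.0.3 route-reflector-client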
 
*The BGP Link Bandwidth feature is used to enable multipath load balancing over external links with unequal bandwidth capacity. This feature is enabled under an IPv4 or VPNv4 address family session by entering the bgp dmzlink-bw command. It supports iBGP and eBGP multipath load balancing, and eiBGP multipath load balancing in Multiprotocol Label Switching (MPLS) Virtual Private Networks (VPNs). When this feature is enabled, routes learned from a directly connected external neighbor are propagated through the internal BGP (iBGP) network with the bandwidth of the source external link.
 
*The link bandwidth extended community indicates the preference of an autonomous system exit link in terms of bandwidth. This extended community is applied to external links between directly connected eBGP peers by entering the neighbor dmzlink-bw command. The link bandwidth extended community attribute is propagated to iBGP peers when extended community exchange is enabled with the neighbor send-community command.
 
*It should be configured in conjunction with the maximum-paths command:
bgp dmzlink-bw
neighbor ip-address dmzlink-bw
neighbor ip-address send-community [both | extended | standard]
 
*Aggregation with the as-set keyword - normal aggregation with the summary-only keyword advertises only the summary prefix and suppresses all the specific routes, so the router performing the aggregation includes only its own AS when sending the update.
*When aggregation with as-set is used, the summary prefix carries the set of all ASes of the component routes (an AS_SET) in its AS path; this prevents routing loops.
 
*Attribute-map - can be used to modify the communities attached to the aggregate on the aggregating router (for example to none). When a router sends a prefix with a community such as no-export attached to the router performing aggregation, the aggregate inherits the community, which can cause issues when the aggregate prefix is propagated. To avoid this we can change the community to none using the attribute-map option (aggregate-address x.x.x.x x.x.x.x as-set summary-only attribute-map <map-name>).
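An aggregation sketch combining as-set, summary-only and an attribute-map that clears inherited communities (the prefix, AS and map name are assumptions):
route-map STRIP-COMM permit 10
 set community none
router bgp 100
 aggregate-address 172.16.0.0 255.255.0.0 as-set summary-only attribute-map STRIP-COMM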
 
*BGP backdoor - used to change the AD of an eBGP-learned route from 20 to 200 so that the IGP-learned route is preferred over eBGP.
*The command is added on the router that is learning the prefix from both routing protocols.
 
router bgp <AS>
network x.x.x.x mask <mask> backdoor
 
= OSPF =
 
*OSPF packet types - Hello, DBD, LSR, LSU, LSAck
*Each interface participating in OSPF sends hellos to 224.0.0.5
*For two routers to form a neighborship - same area, same hello and dead intervals, same subnet mask, and the authentication must match.
*OSPF states - Down, Init, Two-way, Exstart (master/slave negotiation), Exchange (DBDs contain an entry per link or network, with link type, advertising router, sequence number, and cost of the link). If a router does not have up-to-date information for an LSA it sends an LSR (Loading state); the neighbor router replies with an updated LSU and the requesting router adds the new entry to its LSDB. Once all routers have identical LSDBs, the routers are in the Full state.
 
*Updates and requests to the DR and BDR are sent to 224.0.0.6
*For the broadcast network type, each OSPF-speaking router forms a full adjacency with the DR and BDR and stays in the two-way state with the other (DROTHER) routers.
 
*sh ip ospf database summary (prefix) gives information about the type 3 inter-area routes learned via the ABR.
*Type 3 LSAs are called summary LSAs; this does not mean the network prefixes are summarized when propagated by the ABR - it means the topology information is summarized.
*Each LSA in the LSDB contains a sequence number; each LSA is reflooded every 30 minutes, and each time an LSA is flooded its sequence number is incremented by one.
*Point-to-point - T1/E1; neighbors are discovered automatically; hellos are sent to the multicast address 224.0.0.5; no DR/BDR election, as there are only two routers.
*Multi-access - DR and BDR election; if the DR fails the BDR becomes DR and a new BDR is elected.
 
*If a new router is added with the highest priority it will not preempt the existing DR and BDR; a new election starts only if the DR or BDR goes down.
*A router configured with ip ospf priority 0 never becomes DR or BDR (it stays a DROTHER).
 
*Stub area - all routers in the area must agree on the stub flag; type 4 and type 5 LSAs are not allowed, and the ABR generates a default route into the stub area to reach external destinations.
To configure a stub area - area x stub
 
*Totally stubby area - removes type 3, 4 and 5 LSAs, and the ABR generates an inter-area default route; totally stubby is configured on the ABR of the area.
*To configure totally stubby - on the ABR use area x stub no-summary; the other routers in the area are configured with area x stub.
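A stub/totally stubby sketch for area 1 (the area number and process ID are assumptions):
! on every router in area 1
router ospf 1
 area 1 stub
! additionally, on the ABR only, to make the area totally stubby
router ospf 1
 area 1 stub no-summary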
 
*NSSA area - designed to keep the stub characteristics while still allowing external routes. The ASBR generates type 7 LSAs inside the NSSA with the P-bit set to 1, and the ABR translates type 7 into type 5 and propagates it into the rest of the OSPF domain; all routers in the area must agree on the NSSA flag. The ABR does not generate a default route automatically, so if another external AS is connected to other areas, the NSSA will not have information about those external routes; in that case we need to generate a default route manually.
 
*NSSA totally stubby area - removes type 3, 4 and 5 LSAs, still allows type 7 LSAs, and the ABR generates a default route. Note that it is not necessary for every ABR to be configured as totally stubby NSSA; an ABR can still run plain NSSA for that area in its OSPF process.
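An NSSA sketch for area 2 (the area number and process ID are assumptions):
! on every router in area 2
router ospf 1
 area 2 nssa
! on the ABR, to inject a default route into the NSSA
router ospf 1
 area 2 nssa default-information-originate
! or, on the ABR, for a totally stubby NSSA (the default route is then generated automatically)
router ospf 1
 area 2 nssa no-summary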
 
*Order of preference of OSPF routes- O,OIA,E1,E2,N1,N2.
 
*When the ABR does the LSA translation from type 7 to type 5, we can look at the external network using sh ip ospf database external. There are two fields, Advertising Router and Forwarding Address: the Advertising Router will be the ABR doing the translation and the Forwarding Address will be the address of the ASBR.
*If the forwarding address field is 0.0.0.0, traffic is forwarded to the router that originated the route.
 
*If we have multiple ABRs in an NSSA, the ABR with the highest router ID performs the type 7 to type 5 translation. This does not mean all traffic flows through that ABR, because the forwarding address field carries the information needed to reach the ASBR for the external destination.
 
*If we want to suppress the forwarding address on the ABR while translating from type 7 to type 5, we can use the command:
area <id> nssa translate type7 suppress-fa
 
Note - in an LSA lookup, if the forwarding address is 0.0.0.0 the router advertising the LSA is announcing that it should itself be used to reach the destination.
 
*E1 and E2 routes - for E1 routes the external cost is added to the cost of the links the packet traverses; if we have multiple ASBRs we should mark the external routes as type E1.
*If we have multiple ASBRs, the default metric to reach the external network would be the same from both of them; in that case each OSPF-speaking router uses the forward metric to the ASBR to select the best path. If the forward metric is also the same, the decision is based on the router ID of the ASBR.
 
*That can be verified by:
sh ip ospf database external XXXX.
 
*E2 - external cost only; appropriate when there is a single ASBR
 
Note - the ABR has information for all of its connected areas, so when it generates type 3 LSAs the topology information is summarized before being propagated from one area to another.
 
*Loop prevention mechanism in OSPF - only the ABR accepts and processes type 3 LSAs, and only if they come from the backbone area.
 
 
area X filter-list prefix {in|out}
 
*Good news here - this command applies after all summarization has been done and filters the routing information from being used for type 3 LSA generation. It applies to all three types of prefixes: intra-area routes, inter-area routes, and summaries generated as a result of the area X range command. All information is learned from the router's RIB. It is used to filter specific prefixes from type 3 LSAs.
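A sketch of area filter-list usage on an ABR (the prefix-list name and the filtered prefix are assumptions):
ip prefix-list AREA1-OUT deny 10.1.1.0/24
ip prefix-list AREA1-OUT permit 0.0.0.0/0 le 32
router ospf 1
 area 1 filter-list prefix AREA1-OUT out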
 
*LSA type 5 filtering - this LSA is originated by an ASBR (a router redistributing external routes) and flooded through the whole OSPF autonomous system. Important - you may filter the redistributed routes by using the distribute-list out command configured under the protocol that is the source of the redistribution, or simply by applying filtering with your redistribution (e.g. a route-map).
 
*The key thing to remember is that non-local route filtering for OSPF is only available at ABRs and ASBRs.
*A distribute-list out on the ABR/ASBR will filter type 5 LSAs from being propagated.
*We can verify using:
sh ip ospf database external x.x.x.x
 
*Distribute-list in - filters routes from the local routing table, but the LSAs are still propagated to neighbor routers.
 
*If we have an NSSA area and want to filter type 5 LSAs on the ABR, we can filter on the forwarding address using a distribute-list on the ABR (the forwarding address is copied from the type 7 LSA when the ABR regenerates the type 5 LSA from it).
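A sketch of filtering redistributed (type 5) routes with distribute-list out on the redistributing router (the ACL number, prefix and source protocol are assumptions):
access-list 10 deny 192.168.50.0 0.0.0.255
access-list 10 permit any
router ospf 1
 redistribute eigrp 10 subnets
 distribute-list 10 out eigrp 10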
 
OSPF Network Types:
1. Point-to-point - e.g. T1/E1 serial links; there are only two routers, so there is no DR/BDR election; hello/dead timers are 10/40.
2. Broadcast - like Ethernet; has broadcast capability; there is DR and BDR election; hello/dead timers are 10/40.
3. Point-to-multipoint broadcast - has broadcast capability; no DR/BDR election; hello/dead timers are 30/120. In a hub-and-spoke topology the hub forms adjacencies
with the spokes; the spokes do not form adjacencies with each other as there is no direct Layer 2 connection, so when the hub receives an update from a spoke it sets the next hop to itself while propagating the update.
4. Point-to-multipoint non-broadcast - no broadcast capability; hellos are sent as unicast and are not sent unless neighbors are defined manually.
As there is no broadcast capability, hellos are sent as unicast and there is no DR/BDR election; hello/dead timers are 30/120; special next-hop processing.
Non-broadcast is the default network type on a multipoint Frame Relay interface, e.g. a main interface.
5. Non-broadcast network (NBMA) - the default network type for a Frame Relay network; there is no broadcast capability; hellos are sent as unicast and neighbors need to be defined manually; hello/dead timers are 30/120; there is DR and BDR election.
NBMA - neighbors need to be defined manually; there is election of DR and BDR; full mesh or partial mesh. In NBMA, if there is DR/BDR election, all routers should be fully meshed, or the DR/BDR should be statically chosen (via priority) on routers that have full adjacencies to all other routers.
For a non-broadcast network make sure the hub is chosen as DR and define the neighbors manually so OSPF updates are sent as unicast.
 
Note - for broadcast and non-broadcast networks, the DR does not change the next hop when propagating LSAs to the DROTHER routers, so on a broadcast segment this is fine, but for a non-broadcast Frame Relay network we need to manually define the Layer 3 to Layer 2 resolution to reach that neighbor.
In the case of point-to-point links (e.g. HDLC) there is only one device at the other end, so a Layer 3 to Layer 2 mapping is not required.
 
6. In OSPF, loopbacks are advertised as stub hosts (/32) with the network type loopback. If the mask of the loopback is /24 and we want to advertise it as /24 into the OSPF domain, we need to change the network type.
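A sketch of advertising a loopback with its configured mask (the address is a placeholder):
interface Loopback0
 ip address 10.255.255.1 255.255.255.0
 ip ospf network point-to-point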
 
 
*By adjusting the hello/dead timers you can make non-compatible OSPF network types appear as neighbors via the - show ip ospf neighbor - but they won't become adjacent with each other. OSPF network types that use a DR (broadcast and non-broadcast) can neighbor with each other and function properly. Likewise OSPF network types (point-to-point and point-to-multipoint) that do not use a DR can neighbor with each other and function properly. But if you mix DR types with non-DR types they will not function properly (i.e. not fully adjacent). You should see in the OSPF database Adv Router is not-reachable messages when you've mixed DR and non-DR types.
 
*Here is what will work:
Broadcast to Broadcast
Non-Broadcast to Non-Broadcast
Point-to-Point to Point-to-Point
Point-to-Multipoint to Point-to-Multipoint
Broadcast to Non-Broadcast (adjust hello/dead timers)
Point-to-Point to Point-to-Multipoint (adjust hello/dead timers)
 
*Command lines:
1. sh ip ospf interface brief
2. sh ip route ospf
3. sh ip ospf border-routers
4. sh ip ospf database summary x.x.x.x - type 3
5. sh ip ospf database external x.x.x.x - type 5
6. sh ip ospf database router x.x.x.x - type 1
 
*Summarization can occur on the ABR and the ASBR
 
*The ABR uses the area range command
*When the ABR/ASBR performs summarization it generates a null route for the summary; if a specific prefix becomes unreachable and the ABR receives traffic for that prefix, it will drop the traffic. If we instead want a default route to forward that traffic, we can use the no discard-route [internal | external] command to remove the null route from the routing table.
 
ASBR - summary-address x.x.x.x <mask>
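A summarization sketch (the prefixes, area and process ID are assumptions):
! on the ABR
router ospf 1
 area 1 range 172.16.0.0 255.255.0.0
! on the ASBR
router ospf 1
 summary-address 192.168.0.0 255.255.0.0
 no discard-route external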
 
*RFC 2328 - the OSPFv2 specification; read it to learn OSPF
 
;Virtual links
 
*All areas in an Open Shortest Path First (OSPF) autonomous system must be physically connected to the backbone area (Area 0). In some cases, where this is not possible, you can use a virtual link to connect to the backbone through a non-backbone area. You can also use virtual links to connect two parts of a partitioned backbone through a non-backbone area. The area through which you configure the virtual link, known as a transit area, must have full routing information. The transit area cannot be a stub area.
 
*The transit area cannot be a stub area, because routers in the stub area do not have routes for external destinations. Because data is sent natively, if a packet destined for an external destination is sent into a stub area which is also a transit area, then the packet is not routed correctly. The routers in the stub area do not have routes for specific external destinations.
 
*We can also use a GRE tunnel between the non-backbone area and the backbone area and run area 0 over the tunnel interface, but there is GRE overhead. With a virtual link, only OSPF packets are tunneled and data traffic is sent natively, as in a normal area connected to the backbone.
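A virtual link sketch across transit area 1 (the process ID, transit area and the far-end ABR router ID 10.0.0.2 are assumptions; the equivalent command, pointing at this router's ID, is needed on the other ABR):
router ospf 1
 area 1 virtual-link 10.0.0.2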
 
= EIGRP =
 
* EIGRP runs on IP protocol 88, OSPF on 89
* EIGRP is a hybrid protocol and has some properties of distance vector and some of link state.
* Distance vector - it only knows what its directly connected neighbors are advertising; link-state-like - because it forms adjacencies.
* In order to form an adjacency, the EIGRP AS number must be the same between neighbors.
* EIGRP multicast address - 224.0.0.10
* EIGRP, like BGP, will only advertise a route it is going to install in its routing table.
* EIGRP, like classful protocols, performs automatic summarization by default, so we need to disable automatic summarization (no auto-summary)
* EIGRP does split horizon; in the case of DMVPN we need to disable split horizon so that routes learned on the tunnel interface from one spoke can be advertised to the other spokes out of the same tunnel interface.
* The passive-interface command works slightly differently in EIGRP: it stops sending multicast/unicast hellos to neighbors and thus prevents adjacencies from forming.
* Issuing a neighbor statement in EIGRP on a link means the router stops listening to the multicast address, so the neighbor must also be specified manually on the other side to form the adjacency.
* Timers in EIGRP do not need to match to form an adjacency.
* EIGRP - the metric is calculated from bandwidth, delay, reliability, load, and MTU.
* The minimum bandwidth along the path and the total delay (plus the highest load and lowest reliability, if those K values are enabled) are used when calculating the composite metric.
* Feasible distance (FD) is the best metric along the path to the destination, including the metric to reach the advertising neighbor; the route with the best FD is the successor.
* Advertised distance (AD) - the total metric along the path as advertised by the upstream router.
* A router is a feasible successor if its AD < the FD of the successor.
* The feasibility condition is used for loop avoidance. Split horizon rule - never advertise a route out of the interface on which it was learned.
* Only feasible successors are candidates for unequal-cost load balancing.
* Load balancing across unequal-cost paths is done in EIGRP through the variance multiplier.
* EIGRP is the only routing protocol that supports load balancing across unequal-cost paths, unlike RIP, OSPF, and IS-IS.
* If the FD of a path <= variance x FD of the successor, the path is chosen for unequal-cost load balancing (see the sketch after this list).
* EIGRP traffic engineering can be achieved more easily by modifying the delay value instead of the bandwidth.
* EIGRP commands:
sh ip eigrp neighbors
sh ip eigrp neighbors detail
sh ip eigrp topology
sh ip route eigrp
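A variance sketch for unequal-cost load balancing (the AS number and multiplier are assumptions):
router eigrp 100
 variance 2
 maximum-paths 4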
 
* With equal-cost load balancing the traffic is distributed based on CEF. To turn off CEF switching on an interface use no ip route-cache cef.
* SIA - Stuck In Active: if a router sends a query for a destination network and, because of a network flap or some other condition, a neighbor takes too long to respond, the route is considered to be in the SIA state.
* We can tune the amount of time the router waits before putting a route into the SIA state with the timers active-time command.
* To check which routers have not yet replied to queries, issue sh ip eigrp topology active; neighbors still waiting to reply are marked with "r".
* EIGRP performs auto-summarization for a network when crossing a major network boundary
* Split horizon should only be disabled on the hub site in a hub-and-spoke network:
no ip split-horizon eigrp x
 
* The EIGRP router ID helps in loop prevention for external routes: if a router receives a route whose originating router ID equals its own router ID, it discards the route.
* EIGRP provides faster convergence because it does not need to rerun the DUAL computation when there is a feasible successor for the path. If the router does not have a feasible successor, it sends a query to its neighbor routers, which further propagate the query to their neighbors. If the router does not receive replies to all of its queries before the timer expires, it marks the route as Stuck In Active and resets the neighbor relationships that did not answer in time.
* In OSPF, if the primary path goes down, LSAs have to be flooded and the SPF algorithm is run again.
* There are ways to bound the query domain; you can use any of the following, alone or in combination:
1) Using summary routes - ip summary-address eigrp <as> <network> <mask> [ad]
If RouterA sends a query message to RouterB and summarization is in use, RouterB will only have the summary route in its EIGRP topology table, not the exact prefix matched by the query, and will therefore send a "network unknown" reply back to RouterA. This stops the query process immediately at RouterB, only one hop away.
 
2) Using Stub -
router eigrp 1
eigrp stub 'arguments'
 
The default arguments are connected and summary, which means the stub will advertise connected and summary routes only.
A router informs its neighbors of its stub status while the neighbor adjacency is being formed.
 
Stub routers tell their neighbors not to send them any queries. Since no queries are sent, this is extremely effective. However, it is limited in where you can use it: only on non-transit paths and in star (hub-and-spoke) topologies.
 
3) Filtering the prefix
Please note that an EIGRP router will propagate a query received from a neighbor only if it has an exact match for the route in its topology table; if it does not have the exact route, it sends a "route unknown" reply to its neighbor and the query is not propagated any further.
 
4) Different AS domains
Different EIGRP AS numbers. EIGRP processes run independently from each other, and queries from one system don't leak into another. However, if redistribution is configured between two processes, a behavior similar to query leaking is observed.
 
* Both IGRP and EIGRP use an Autonomous System (AS) number and only routers using the same AS number can exchange routing information using that protocol. When routing information is propagated between IGRP and EIGRP, redistribution has to be manually configured because IGRP and EIGRP use different AS numbers. However, redistribution occurs automatically when both IGRP and EIGRP use the same AS number
 
= MPLS =
== Configure ==
 
vlan 1000
private-vlan primary
 
 
vlan 1013
private-vlan isolated
 
vlan 1000
private-vlan association 1012,1013
 
 
== Configure ports ==
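A hedged sketch of typical host and promiscuous port settings for the private VLANs defined above (the interface numbers are assumptions):
interface GigabitEthernet1/0/10
 switchport mode private-vlan host
 switchport private-vlan host-association 1000 1013
interface GigabitEthernet1/0/24
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 1000 1012,1013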
= 6500 Architecture =
 
* Chassis - 6503/6503-E, 6504-E, 6506/6506-E, 6509, 6513 (13-slot chassis)
* Cisco has introduced the new E-series chassis.
* The first generation switching fabric was delivered by the switch fabric modules (WS-C6500-SFM and WS-C6500-SFM2), each providing a total switching capacity of 256 Gbps.
* More recently, with the introduction of the Supervisor Engine 720, the crossbar switch fabric has been integrated into the Supervisor Engine 720 baseboard itself, eliminating the need for a standalone switch fabric module.
 
* The capacity of the new integrated crossbar switch fabric on the Supervisor Engine 720 has been increased from 256 Gbps to 720 Gbps.
* The Supervisor Engine 720-3B and Supervisor Engine 720-3BXL also maintain the same fabric capacity of 720 Gbps.
 
* 6509 - Sup cards in slots 5 and 6; supported supervisors - Sup32 and Sup720
* 6513 - 13 slots; Sup cards in slots 7 and 8; Sup32 and Sup720
 
* SUP32 - this supervisor engine provides an integrated PFC3B and MSFC2a by default; supports 6700 series line cards.
* SUP720-3B - same backplane capacity; incorporates the newer PFC3B for additional functionality (mainly MPLS support in hardware).
* SUP720-3BXL - incorporates the new PFC3BXL; functionally identical to the Supervisor Engine 720-3B, but differs in its capacity for supporting routes and NetFlow entries.
* Sup2T - incorporates the MSFC5 (control plane functions) and PFC4 (hardware-accelerated data plane functions) cards, plus a 2 Tbps switch fabric.
* The PFC4 supports additional features such as Cisco TrustSec (CTS) and Virtual Private LAN Service (VPLS).
* The 2 Tbps switch fabric provides 26 dedicated 20 Gbps or 40 Gbps channels to support the new 6513-E chassis.
* Modules supported with SUP2T:
All new 6900 series modules
All new 6800 series modules (again, WS-X6816-GBIC is not one of those)
Those 6700 series modules that are equipped with either a CFC or a DFC4
Some 6100 series modules
 
* The control plane functions are mainly performed by the route processor on the MSFC3 itself; this includes running the routing protocols, address resolution, maintaining SVIs, etc.
* The switch processor looks after switching functions - building the Layer 2 CAM tables and running all Layer 2 protocols (Spanning Tree, VTP, etc.).
* The MSFC maintains the routing table but does not participate in forwarding packets; it builds the CEF table, which is pushed down to the PFC and DFCs.
 
* The PFC is a daughter card that sits on the supervisor base board and contains the ASICs that are used to accelerate Layer 2 and Layer 3 switching in hardware.
 
* Layer 2 functions - MAC-based forwarding based on the CAM table; Layer 3 functions - forwarding packets using a Layer 3 lookup.
* Classic line cards support a connection to the 32-Gbps shared bus but do not have any connections into the crossbar switch fabric.
* Classic line cards are supported by all generations of the supervisor engines, from the Supervisor Engine 1 through to the Supervisor Engine 720-3BXL
* Modes in SUP720:
RPR - state information is not in sync; switchover takes 2-4 minutes; traffic is disrupted; I/O modules are reloaded.
RPR+ - state is partially initialized and additional information is needed to bring the system into sync; switchover time is 30 to 60 seconds; I/O modules are not reloaded.
SSO - fully synchronized
 
* To check the redundancy status:
show redundancy
 
* To set the redundancy mode:
redundancy
keepalive-enable
mode sso
main-cpu
auto-sync running-config
 
 
* Sups supporting VSS:
VS-S720-10G-3C
VS-S720-10G-3CXL
Sup2T
 
*Stacking - VSS has a single control plane (one chassis acts as master), while vPC keeps two independent control planes.
 
= Nexus Architecture =
 
* Independent control and data planes; high availability - dual Sups, power redundancy, line card redundancy
 
* Chassis - 7009, 7010, 7018
7009 - 9 slots - Sups in slots 1 and 2; supports 5 fabric channels, each providing 46 Gbps of backplane capacity, for a total of 5 x 46 = 230 Gbps per slot
7010 - 10 slots - Sups in slots 5 and 6; supports 5 fabric channels, each providing 46 Gbps of backplane capacity, for a total of 5 x 46 = 230 Gbps per slot
7018 - 18 slots - Sups in slots 9 and 10; supports 5 fabric channels, each providing 46 Gbps of backplane capacity, for a total of 5 x 46 = 230 Gbps per slot
 
Sups supported - SUP1, which includes 4 VDCs including the default VDC; on the default VDC you can allocate resources and perform data plane functions as well.
SUP2 - 4+1 VDCs - the extra one is an admin VDC, used only for allocating resources; it does not pass data.
SUP2E - 8+1 VDCs - requires an additional licence to add the extra 4 VDCs.
 
* Line cards supported - M and F series I/O module
 
* The initial series of line cards launched by cisco for Nexus 7k series switches were M1 and F1.
* M1 series line cards are basically used for all the major Layer 3 operations like MPLS, OTV, routing, etc.; the F1 series line cards are basically Layer 2 cards and are used for FEX, FabricPath, FCoE, etc.
* If there are only F1 cards in your chassis, you cannot achieve Layer 3 routing.
* You need an M1 card installed in the chassis so that the F1 card can send traffic to the M1 card for proxy routing.
* The fabric capacity of an M1 line card is 80 Gbps.
* Since F1 line cards don't have L3 functionality, they are cheaper and provide a fabric capacity of 230 Gbps.
* Later Cisco released the M2 and F2 series of line cards.
* An F2 series line card can also do basic Layer 3 functions, but cannot be used for OTV or MPLS.
* An M2 line card's fabric capacity is 240 Gbps while F2 series line cards have a fabric capacity of 480 Gbps.
 
* There are two series of Fabric modules, FAB1 and FAB2.
 
* Each FAB1 has a maximum throughput of 46Gbps per slot meaning the total per slot bandwidth available when chassis is running on full capacity, i.e. there are five FAB1s in a single chassis would be 230Gbps.
* Each FAB2 has a maximum throughput of 110Gbps/slot meaning the total per slot bandwidth available when there are five FAB2s in a single chassis would be 550Gbps.
 
* These are the FAB module capacity, however, the actual throughput from a line card is really dependent on type of line card being used and the fabric connection of the linecard being used.
* You can mix all cards in the same VDC EXCEPT the F2 card.
* The F2 card has to be in its own VDC.
* You can't mix F2 cards with M1/M2 and F1 cards in the same VDC.
* As per Cisco, it is a hardware limitation and it creates forwarding issues.
 
 
 
 
* M & M1XL series cards are used for Layer 3 routing functions, creation of SVIs, FEX, OTV, TrustSec - example - M132XP
* F series - Layer 2 functions, FabricPath, vPC+, FCoE - F132XP, F248XP
 
* The currently shipping I/O modules do not leverage the full bandwidth; the maximum is 80 Gbps for a 10 Gig module
* In an ideal design we should have a pair of M1 and F1 series modules per VDC
* Depending on the line cards we have shared mode vs dedicated mode
Shared mode - all the ports in a port group share the bandwidth
Dedicated mode - the first port in the port group gets the entire bandwidth and the rest of the ports are disabled
Example - the 32-port 10 Gig I/O module N7K-M132XP-12 has a backplane capacity of 80 Gbps
 
* Each port group has 10 Gbps of bandwidth that can be used in shared mode or dedicated mode
* A port group is a combination of contiguous ports in odd and even numbering.
* A 1 Gig module requires 1 fabric channel, i.e. 46 Gbps, and 2 fabric modules for N+1 redundancy
* A 10 Gig module requires 2 fabric channels, and 3 for N+1 redundancy
 
* VoQs are virtual output queues - called virtual because they reside on the ingress I/O module but represent egress bandwidth capacity.
* VoQs are managed by the central arbiter.
 
* Nexus 5000 & 5500 - mainly used for Layer 2 only (access layer)

5000 - 5010, 5020
5500 - 5548; Layer 2 only, but supports a Layer 3 card as well.
 
* Nexus 2k - acts as a remote line card for the 7k and 5k.
* Once the downlink ports from the 7k or 5k are connected and the fex feature is enabled, the parent switch will automatically discover the FEX.
* We need to configure the uplink ports on the parent switch with switchport mode fex-fabric and a fex associate number.
* Once the feature is enabled and the ports and cables are connected, the FEX starts pulling its software image from the parent switch.
* Once the FEX is online you can see its port numbers on the parent switch as Ethernet<fex-associate-number>/1/x.
 
Note - the downlink ports on the parent switch need to be configured with switchport mode fex-fabric and a fex associate number; no configuration is required on the FEX uplink ports themselves. A configuration sketch follows.
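A minimal FEX sketch on the parent switch (the FEX number 101 and the interface are assumptions):
feature fex
interface Ethernet1/1
 switchport mode fex-fabric
 fex associate 101
Once the FEX comes online, its host ports show up on the parent as Ethernet101/1/x.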
 
* The Nexus 2k does not support local switching - if two hosts in the same VLAN connected to the 2k try to communicate, the communication happens through the parent switch.
* These FEX ports are pinned to the uplinks connected to the parent switch. All management is done from the parent switch.
 
;Pinning
* Two types of pinning - static pinning & dynamic pinning
* Issue with static pinning - once the uplink between the Nexus 2k and the parent switch fails, all the pinned FEX ports have to be manually moved to another uplink to make them operational, while with dynamic pinning they are automatically redistributed
 
* Nexus 5k - supports static pinning and vPC when we connect a Nexus 2k.
* Nexus 7k - not all line cards support FEX, and only port channels are supported when we connect a Nexus 2k to a 7k
 
* All FEX ports are considered edge ports from an STP point of view and BPDU guard is enabled on them.
* CFS - Cisco Fabric Services is used to sync configuration and control state between chassis.
* The management interface provides out-of-band connectivity, as it sits in a separate management VRF.
 
;VDC
* A VDC is a virtual device context, used for virtualization of the hardware (both control plane and data plane)
* Allocating resources to a VDC - you can allocate M1, F1 and M2 cards, but F2 cards must stay in their own VDC.
* VDC 1 is the default VDC - used to create/delete/suspend other VDCs, allocate resources, system-wide QoS, Ethanalyzer, and NX-OS upgrades across all the VDCs.
* From the default VDC we can use the switchto command to move to another VDC and switchback to return to the default VDC.
 
* Creating an Admin VDC:
Enter the system admin-vdc command after bootup.
The default VDC becomes the admin VDC.
All the nonglobal configuration in the default VDC is lost after you enter this command.
This option is recommended for existing deployments where the default VDC is used only for administration and does not pass any traffic.

You can change the default VDC to the admin VDC with the system admin-vdc migrate <new vdc name> command.
After entering this command, the nonglobal configuration on the default VDC is migrated to the new VDC.
This option is recommended for existing deployments where the default VDC is used for production traffic whose downtime must be minimized.
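A minimal sketch of creating a non-default VDC and moving into it (the VDC name and interface range are assumptions):
vdc CORE
 allocate interface ethernet 1/9-16
switchto vdc CORE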
 
* The CMP port is present on SUP1 - it provides console access to the Sup and runs its own kickstart and system image, separate from the chassis.
 
* A non-default VDC has two separate user roles:
vdc-admin - has read/write access to the VDC
vdc-operator - read-only access to the VDC.
 
* VDC high availability policy - based on single Sup or dual Sup
 
== Bridge Assurance and Network Ports ==
 
* Cisco NX-OS contains additional features to promote the stability of the network by protecting STP from bridging loops.
* Bridge assurance works in conjunction with Rapid-PVST BPDUs, and is enabled globally by default in NX-OS.
* Bridge assurance causes the switch to send BPDUs on all operational ports that carry a port type setting of "network", including alternate and backup ports for each hello time period.
* If a neighbor port stops receiving BPDUs, the port is moved into the blocking state.
* If the blocked port begins receiving BPDUs again, it is removed from bridge assurance blocking, and goes through normal Rapid-PVST transition.
* This bidirectional hello mechanism helps prevent looping conditions caused by unidirectional links or a malfunctioning switch.
 
* Bridge assurance works in conjunction with the spanning-tree port type command.
* The default port type for all ports in the switch is "normal" for backward compatibility with devices that do not yet support bridge assurance; therefore, even though bridge assurance is enabled globally, it is not active by default on these ports.
* The port must be configured to a spanning tree port type of "network" for bridge assurance to function on that port.
* Both ends of a point-to-point Rapid-PVST connection must have the switches enabled for bridge assurance, and have the connecting ports set to type "network" for bridge assurance to function properly.
* This can be accomplished on two switches running NX-OS, with bridge assurance on by default, and ports configured as type "network" as shown below.
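A sketch of the port configuration referred to above, applied on both ends of the link (the interface number is an assumption; bridge assurance itself is on globally by default):
interface Ethernet1/1
 switchport
 switchport mode trunk
 spanning-tree port type network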
 
 
Cisco Nexus 7009 -- Sups in slot 1 and slot 2
Cisco Nexus 7010 -- Sups in slot 5 and slot 6
Cisco Nexus 7018 -- Sups in slot 9 and slot 10
Line card capacity differs between the different modules...
 
* Two types of line cards are available:
 
1) M series:
Layer 3 cards--svi, ospf, otv, Can be layer 2, Trust Sec
Fex
 
2) F Series :
Layer 2 cards only
F2 SUPPORT fabric Path, VPC+, FCOE
 
* Cisco Nexus 5k: used mainly as Layer 2 switches
5000--5020 and 5010
5500--5548 and 5596
 
* Nexus 2k: Remote line card
 
 
switch(config-vsan-db)# vsan <number> interface vfc <number>
switch(config-vsan-db)# exit
 
 
= F5 Training =
 
LTM - How the BIG-IP processes traffic
 
 
Node - represents an IP address
Pool member - a combination of IP address and port number; in other words, a pool member is the application server to which the F5 will direct traffic
Pool - a collection of pool members.

Virtual server - a combination of a virtual IP and port, also known as a listener; we associate the virtual server with a pool.
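A hedged tmsh sketch of building a pool and a virtual server from these objects (the names and addresses are assumptions):
create ltm pool web_pool members add { 10.1.10.11:80 10.1.10.12:80 } monitor http
create ltm virtual vs_web destination 203.0.113.10:80 ip-protocol tcp pool web_pool profiles add { http }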
 
= Load balancing methods =
Static - round robin, ratio
Dynamic - LFOPD (least connections, fastest, observed, predictive, dynamic ratio)
 
 
 
Least connections - load balancing is based on the connection counts; if the connection counts are equal it falls back to round robin
 
 
Fastest - based on the number of Layer 7 requests pending on each member.
 
Observed - a ratio load balancing method where the ratio is assigned by the BIG-IP: it dynamically checks the connection counts and assigns request ratios accordingly.
 
Predictive - similar to observed but assigns the ratios more aggressively, based on average connection counts.
 
 
Load balancing can be done by pool member or by node.
 
 
Priority group activation - lets you configure backup sets for existing pool members; the BIG-IP will use the higher-priority pool members first.
 
Fallback host is only used for HTTP requests: if none of the pool members are available, the BIG-IP will redirect the client request to the fallback host.
 
--------------------------------------------------------------------------------------
 
Monitors check the status of nodes and pool members; if a pool member's response time is poor or it is not responding, the BIG-IP stops sending requests to that member.

Monitor types:

Address check - the BIG-IP sends an ICMP echo request and waits for the reply; if there is no reply it considers the node down and does not send further traffic to it.

Service check - checks the TCP port the server is listening on; if there is no response, the member is considered down.

Content check - verifies that the server responds with the right content; for example, for HTTP a GET request is sent and the response content is checked.

Interactive check - for example, a test of an FTP connection: once the connection is open, the username and password are sent, then a get /file request; once the file is received the connection is closed.

F5 recommends timeout = 3n + 1, where n is the monitor interval (frequency), when setting an HTTP monitor (see the tmsh sketch below).

Customization of monitors

Assigning monitors to nodes and pool members
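A hedged tmsh sketch of an HTTP monitor following the 3n + 1 guideline (interval 5, timeout 16); the monitor name, send/receive strings and pool name are assumptions:

create ltm monitor http http_mon defaults-from http interval 5 timeout 16 send "GET / HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n" recv "200 OK"
modify ltm pool web_pool monitor http_mon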
 
 
-------------------------------
 
Profiles - define traffic behaviour for a virtual server.

A profile contains settings for how traffic is processed through a virtual server. If the BIG-IP load balances every request for a certain application independently, it can break the client's session; to avoid this we use a persistence profile so that returning requests from the same client are sent to the same server.

Persistence profile - configured so the BIG-IP knows that a returning client's request must be sent to the same server; persistence is typically based on the source IP address or an HTTP cookie.
 
SSL termination -
 
 
FTP profile
 
 
All virtual servers have a Layer 4 profile: TCP, UDP, or FastL4.
 
 
Profile types - service profiles, persistence profiles, protocol profiles, SSL profiles, authentication profiles, and other profiles.
 
 
Persistence types:
----------------------------------------------------------------

Source address persistence - keeps track of the source IP address; the administrator can set a netmask in the persistence record so that all clients within the same mask are assigned to the same pool member.

Limitation - less useful if the client addresses are being NATed (many clients appear as a single source address).


Cookie persistence - only works with the HTTP protocol.

Three modes: insert, rewrite, passive.

Insert mode - the BIG-IP creates a special cookie in the HTTP response to the client.
Rewrite mode - the pool member creates a blank cookie and the BIG-IP rewrites it with the special cookie.
Passive mode - the pool member creates the special cookie and the BIG-IP lets it pass through unchanged.
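A hedged tmsh sketch of insert-mode cookie persistence (the profile and virtual server names are assumptions):

create ltm persistence cookie cookie_insert method insert
modify ltm virtual vs_web persist replace-all-with { cookie_insert }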
 
-------------------------------------------------------------------------------
 
 
SSL Profile
 
SSL is the Secure Sockets Layer.

For a website that uses HTTPS we need an SSL profile, since the traffic is NATed for the source clients and the web application uses the HTTPS protocol. With SSL termination the BIG-IP can decrypt the traffic and send it on to a pool member.


The BIG-IP contains SSL acceleration hardware, so encryption and key exchange are done in hardware, and it provides centralized certificate management.
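A hedged tmsh sketch of SSL termination (the profile name and the certificate/key names are assumptions; the certificate and key must already be installed on the BIG-IP):

create ltm profile client-ssl clientssl_site defaults-from clientssl cert site.crt key site.key
modify ltm virtual vs_web profiles add { clientssl_site }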
 
 
 
----------------------------------------------------------------------------------------
iRules:

An iRule is a script that directs traffic through the BIG-IP, based on the TCL command language. iRules give control over inbound and outbound traffic passing through the BIG-IP.

An iRule consists of a name, events, conditions and actions.
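A minimal iRule sketch showing the event/condition/action structure (the pool names are assumptions):

when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/images" } {
        pool image_pool
    } else {
        pool web_pool
    }
}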
 
 
= Multicasting =
 
Ranges
 
224.0.0.0/4 - 224.0.0.0 through 239.255.255.255

Link-local addresses - 224.0.0.0/24

Source-specific multicast - 232.0.0.0/8

Administratively scoped - 239.0.0.0/8
 
 
The multicast control plane works differently from unicast routing: it needs to know who the sender of the multicast is, for which group, and who the receivers are.

Multicast data plane - performs the RPF check (was the traffic received on the correct interface?) and builds the multicast forwarding state.

Multicast is source-based routing.
 
IGMP - hosts on the LAN signal the router that they want to join a multicast group.

Two kinds of join: (*,G) - join any source that is generating a multicast stream for that group - supported by IGMPv1 and IGMPv2.
(S,G) - join a particular source sending to the multicast group - IGMPv3 supports both (S,G) and (*,G).

IGMP is enabled automatically when IP PIM (dense mode, sparse mode or sparse-dense mode) is enabled on the interface.

By default IGMP version 2 is enabled.

The ip igmp join-group <address> command can be used on a router for testing, to see whether multicast traffic for a particular group is reaching that router.

The ip igmp static-group command can be used to manually program membership for a particular multicast group instead of relying on IGMP query/report messages for that group (see the IOS sketch below).
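A hedged IOS sketch tying these commands together (the interface and the group 239.1.1.1 are assumptions):

ip multicast-routing
!
interface GigabitEthernet0/1
 ip pim sparse-dense-mode
 ip igmp join-group 239.1.1.1

With the join-group configured, pinging 239.1.1.1 from elsewhere in the network should get a reply from this router, which confirms the multicast feed reaches it.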
 
PIM - used to signal routers to build the multicast tree; the tree can run from the sender to the receiver, or from the sender to the rendezvous point (RP) and on to the receiver.

PIM version 1 or 2; by default it is PIM version 2. In version 2 the RP information is already encoded in the PIM packet, and PIMv2 has a field for the BSR.
 
Dense mode - implicit join: multicast traffic is flooded across the entire network unless a router reports that it does not want a particular stream (flood-and-prune behaviour).
Neighbor discovery uses multicast address 224.0.0.13, the same as in sparse mode.

Note: if we have a (*,G) entry we know about the receivers, and if we have an (S,G) entry we know about the sender as well.

Two ways to generate multicast traffic for testing: ping a multicast address or use IP SLA.
In PIM dense mode, the RPF neighbor information is used to send messages back toward the source; the message can be a PIM prune or a graft. When the multicast source floods traffic for a group, every multicast-enabled router installs the (S,G) and (*,G) entries even if it has no interested receivers.

So in dense mode every router needs to install the (*,G) and (S,G) entries, since we cannot have an (S,G) entry without the corresponding (*,G) entry; if the source is active, every router has to maintain that multicast state.

A graft message for an (S,G) entry un-prunes multicast traffic that was previously pruned.

State refresh keeps pruned links in the pruned state, so the flood-and-prune cycle does not repeat when the prune times out.
 
Sparse mode - uses explicit join: traffic is not sent anywhere unless someone asks to join the multicast group, and it uses an RP as the reference point. If we are using source-specific multicast we do not need an RP; for group-specific (*,G) joins we do. Sparse mode uses both shared trees and shortest-path trees.

The RP needs to learn about the receivers and the senders. The DR on the source LAN segment sends (S,G) register messages to the RP, and the RP in turn replies with a register-stop. Receivers on a LAN segment send IGMP joins, which the last-hop router converts into a PIM (*,G) join toward the RP to form the RPT: every device from the receiver to the RP installs a (*,G) entry, and every device from the source to the RP installs an (S,G) entry. Once the RP knows about the sender and the receiver, it sends an (S,G) join back toward the source, the source starts sending the multicast traffic to the RP, and the RP forwards it to the receivers. It is then up to the last-hop router on the receiver side to optimize, if it wants, by joining the SPT directly toward the source and bypassing the RP (see the IOS sketch below).
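A hedged IOS sketch of basic sparse mode with a statically assigned RP (the addresses and interface are assumptions):

ip multicast-routing
ip pim rp-address 10.255.255.1
!
interface GigabitEthernet0/0
 ip pim sparse-mode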
 
Note - when we debug, only process-switched traffic is shown; to debug data-plane traffic we need to disable CEF on the interface (no ip route cache). If we change the unicast routing it also changes the multicast routing (RPF), and to influence the multicast RPF decision without changing unicast routing we can use the ip mroute command.
 
 
Source-based tree (SPT) - the tree is built along the shortest path from the receiver back to the sender.
Shared tree (RPT) - the tree runs from the sender to the RP, and from the RP to the receivers.
 
To check the RP configured on each transit router: sh ip pim rp mapping
The RP can be assigned statically (ip pim rp-address) or dynamically (Auto-RP or BSR).
 
Auto-RP - uses two multicast groups: candidate RPs announce themselves on 224.0.1.39 to the mapping agents, and
the mapping agent chooses the RP and advertises the RP-to-group mappings to the rest of the routers on 224.0.1.40 (see the IOS sketch below).
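A hedged IOS sketch of Auto-RP (Loopback0 and the TTL scope are assumptions); the first command goes on the candidate RP, the second on the mapping agent:

ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery Loopback0 scope 16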
 
To stay on the shared tree rather than switching to the SPT: ip pim spt-threshold infinity
 
 
Sparse-dense mode - any group for which we have an RP assigned uses sparse mode; all other groups use dense mode.
 
The RPF check keeps the multicast data plane loop-free: when a multicast packet arrives, the router looks up the source in the unicast routing table; if the packet arrived on the interface that unicast routing would use to reach the source, the RPF check passes, otherwise it fails.
 
Once the multicast routing table is populated, the router always prefers (S,G) entries over (*,G) entries. Each entry has an incoming interface and an OIL (outgoing interface list); if the RPF check passes, the multicast traffic is sent out all interfaces in the OIL.
 
On a multicast router, sh ip igmp groups shows which multicast groups are active on each interface and which receivers have joined the group.
 
To determine which router is the IGMP querier on a segment: sh ip igmp interface e0
 
We can manually tune the query interval and the query max response time:
query interval - ip igmp query-interval 120 (default 60 sec)
response time - ip igmp query-max-response-time 20 (default 10 sec)
 
The IOS command to set the IGMP version is ip igmp version 1 | 2 | 3
 
 
Test commands for IGMP
 
ip igmp join-group <group-address>

ip igmp static-group <group-address>
 
For sparse mode we need to assign an RP: ip pim rp-address x.x.x.x
 
To check whether there are any RP mappings: sh ip pim rp mapping
 
To check multicast packet counters: sh ip mroute count
 
In sparse mode there is an SPT switchover (switching from the shared tree to the shortest-path tree).
 
The SPT threshold can be set in global configuration mode on the DR (the multicast router receiving the IGMP joins): ip pim spt-threshold <value>, where the value is the traffic rate of the multicast feed (in kbps) at which the router switches over to the SPT.
 
If the RPF check is failing against the unicast routing table, we can still make the router accept the multicast on that interface by adding a static mroute: ip mroute <source-prefix> <mask> <RPF next-hop address>
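For example (the source prefix and RPF next hop are assumptions):

ip mroute 10.1.1.0 255.255.255.0 192.168.12.2

This tells the router to accept multicast from sources in 10.1.1.0/24 on the interface toward 192.168.12.2, regardless of what the unicast routing table says.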
 
 
= Security =