BGP OSPF Questions

= 6500 Architecture =
 
* Chassis - 6503/6503-E, 6504-E, 6506/6506-E, 6509, 6513 (13-slot chassis)
* Cisco has introduced the new E-series chassis.
* The first-generation switching fabric was delivered by the Switch Fabric Modules (WS-C6500-SFM and WS-C6500-SFM2), each providing a total switching capacity of 256 Gbps.
* More recently, with the introduction of the Supervisor Engine 720, the crossbar switch fabric has been integrated into the Supervisor Engine 720 baseboard itself, eliminating the need for a standalone switch fabric module.
 
* The capacity of the new integrated crossbar switch fabric on the Supervisor Engine 720 has been increased from 256 Gbps to 720 Gbps.
* The Supervisor Engine 720-3B and Supervisor Engine 720-3BXL also maintain the same fabric capacity size of 720 Gbps.
 
* 6509 - Sup cards go in slots 5 and 6; supported supervisors: Sup32 and Sup720
* 6513 - 13 slots; Sup cards go in slots 7 and 8; Sup32 and Sup720
 
* Sup32 - this supervisor engine provides an integrated PFC3B and MSFC2a by default; supports 6700 series line cards.
* Sup720-3B - same backplane capacity; it incorporates the new PFC3B for additional functionality (mainly hardware support for MPLS).
* Sup720-3BXL - incorporates the new PFC3BXL; it is functionally identical to the Supervisor Engine 720-3B but differs in its capacity for supporting routes and NetFlow entries.
* Sup2T - incorporates the MSFC5 (control plane functions) and PFC4 (hardware-accelerated data plane functions) cards, with a 2 Tbps switch fabric.
* The PFC4 supports additional features such as Cisco TrustSec (CTS) and Virtual Private LAN Service (VPLS).
* The 2 Tbps switch fabric provides 26 dedicated 20 Gbps or 40 Gbps channels to support the new 6513-E chassis.
* Sup2T supports:
** All new 6900 series modules
** All new 6800 series modules (again, WS-X6816-GBIC is not one of those)
** Those 6700 series modules that are equipped with either a CFC or DFC4
** Some 6100 series modules
 
* The control plane functions are mainly performed by the route processor on the MSFC3 itself; this includes running the routing protocols, address resolution, maintaining SVIs, etc.
* The switch processor looks after switching functions: building the Layer 2 CAM tables and running all Layer 2 protocols (Spanning Tree, VTP, etc.).
* MSFC - maintains the routing table but does not participate in forwarding packets; it builds the CEF table, which is pushed down to the PFC and DFCs.

* The PFC is a daughter card that sits on the supervisor baseboard and contains the ASICs that are used to accelerate Layer 2 and Layer 3 switching in hardware.

* Layer 2 functions - MAC-based forwarding using the CAM table; Layer 3 functions - forwarding packets using a Layer 3 lookup.
* Classic line cards support a connection to the 32-Gbps shared bus but do not have any connections into the crossbar switch fabric.
* Classic line cards are supported by all generations of the supervisor engines, from the Supervisor Engine 1 through to the Supervisor Engine 720-3BXL
* Redundancy modes in Sup720:
RPR - state information is not in sync; switchover takes 2-4 minutes, traffic is disrupted, and I/O modules are reloaded.
RPR+ - state is partially initialized; additional information is needed to bring the system into sync. Switchover time is 30 to 60 seconds and I/O modules are not reloaded.
SSO - fully synchronized.
 
* To check the redundancy status:
show redundancy
 
* To set the redundancy mode:
redundancy
keepalive-enable
mode sso
main-cpu
auto-sync running-config
 
* Sups supporting VSS (a minimal conversion sketch follows below):
VS-S720-10G-3C
VS-S720-10G-3CXL
Sup2T
* Stacking - VSS has a single control plane on the master, while vPC has two independent control planes.
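Below is a minimal VSS conversion sketch (IOS), assuming virtual switch domain 100, VSL port-channel 10, and a VSL link on TenGigabitEthernet 5/4 - the domain ID, port-channel number, and interface are assumptions, not taken from the notes above; switch 2 is configured the same way with "switch 2" and its own VSL port-channel:
 switch virtual domain 100
  switch 1
 interface port-channel 10
  switch virtual link 1
  no shutdown
 interface tenGigabitEthernet 5/4
  channel-group 10 mode on
  no shutdown
 end
 switch convert mode virtual
After both chassis reload in virtual mode, show switch virtual role can be used to confirm which chassis became active.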
 
= Nexus Architecture =
 
* Independent control and data planes; high availability - dual Sups, power supply redundancy, line card redundancy.
 
* Chassis - 7009, 7010, 7018:
7009 - 9 slots; Sups in slots 1 and 2; support for 5 fabric channels, each providing 46 Gbps of backplane capacity, for a total of 5 x 46 = 230 Gbps per-slot bandwidth.
7010 - 10 slots; Sups in slots 5 and 6; support for 5 fabric channels, each providing 46 Gbps of backplane capacity, for a total of 5 x 46 = 230 Gbps per-slot bandwidth.
7018 - 18 slots; Sups in slots 9 and 10; support for 5 fabric channels, each providing 46 Gbps of backplane capacity, for a total of 5 x 46 = 230 Gbps per-slot bandwidth.

* Supervisors supported:
SUP1 - includes 4 VDCs, including the default VDC; on the default VDC you can allocate resources and perform data plane functions as well.
SUP2 - 4+1 VDCs; the extra one is an admin VDC used only for allocating resources, it does not pass data.
SUP2E - 8+1 VDCs; requires an additional license to add the extra 4 VDCs.
 
* Line cards supported - M and F series I/O modules.

* The initial series of line cards launched by Cisco for Nexus 7k series switches were M1 and F1.
* M1 series line cards are basically used for all major Layer 3 operations like MPLS, OTV, routing, etc.; the F1 series line cards are basically Layer 2 cards and are used for FEX, FabricPath, FCoE, etc.
* If there are only F1 cards in your chassis, you cannot achieve Layer 3 routing.
* You need an M1 card installed in the chassis so that the F1 card can send traffic to the M1 card for proxy routing.
* The fabric capacity of an M1 line card is 80 Gbps.
* Since F1 line cards don't have L3 functionality, they are cheaper and provide a fabric capacity of 230 Gbps.
* Later, Cisco released the M2 and F2 series of line cards.
* An F2 series line card can also do basic Layer 3 functions; however, it cannot be used for OTV or MPLS.
* An M2 line card's fabric capacity is 240 Gbps, while F2 series line cards have a fabric capacity of 480 Gbps.
 
* There are two series of fabric modules, FAB1 and FAB2.

* Each FAB1 has a maximum throughput of 46 Gbps per slot, meaning the total per-slot bandwidth available when the chassis is running at full capacity (i.e., there are five FAB1s in a single chassis) is 230 Gbps.
* Each FAB2 has a maximum throughput of 110 Gbps per slot, meaning the total per-slot bandwidth available when there are five FAB2s in a single chassis is 550 Gbps.

* These are the fabric module capacities; however, the actual throughput from a line card depends on the type of line card being used and its fabric connection.
* You can mix all cards in the same VDC EXCEPT the F2 card.
* The F2 card has to be in its own VDC.
* You can't mix F2 cards with M1/M2 and F1 in the same VDC.
* As per Cisco, this is a hardware limitation and it creates forwarding issues.
 
* M and M1XL series are used for Layer 3 routing functions, creation of SVIs, FEX, OTV, TrustSec - example: M132XP.
* F series - Layer 2 functions, FabricPath, vPC+, FCoE - examples: F132XP, F248XP.
 
* The current shipping I/O modules do not leverage the full bandwidth; the maximum is 80 Gbps for the 10 Gig module.
* In an ideal design we should have a pair of M1 and F1 series modules per VDC.
* Depending on the line card, we have shared mode vs. dedicated mode (a configuration sketch follows the example below):
Shared mode - all the ports in a port group share the bandwidth.
Dedicated mode - the first port in the port group gets the entire bandwidth and the rest of the ports are disabled.
Example - the 32-port 10 Gig I/O module N7K-M132XP-12 has a backplane capacity of 80 Gbps.
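A minimal NX-OS sketch of choosing the mode per port on such a module, assuming it sits in slot 1 (slot and port numbers are assumptions):
 interface ethernet 1/1
  rate-mode dedicated
 interface ethernet 1/5
  rate-mode shared
Putting the first port of a port group into dedicated mode takes the remaining ports of that group out of service; they typically need to be administratively shut down first.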
 
* Each port group has 10 Gbps of bandwidth that can be used in shared mode or dedicated mode.
* A port group is a combination of contiguous ports in odd and even numbering.
* A 1 Gig module requires 1 fabric module (i.e., 46 Gbps) and 2 fabric modules for N+1 redundancy.
* A 10 Gig module requires 2 fabric modules and 3 for N+1 redundancy.
 
* VoQs are virtual output queues; they are called virtual because they reside on the ingress I/O module but represent egress bandwidth capacity.
* VoQs are managed by the central arbiter.
 
* Nexus 5000 & 5500 - mainly used for Layer 2 only (access layer).
5000 - 5010, 5020
5500 - 5548; Layer 2 only, but supports a Layer 3 card as well.
 
* Nexus 2k - acts as a remote line card for the 7k and 5k.
* Once we have connected the downlink ports from the 7k or 5k and enabled the FEX feature, the parent switch will automatically discover the FEX.
* We need to configure the parent-switch ports facing the FEX with switchport mode fex-fabric and a fex associate number.
* Once the feature is enabled and the ports and cables are connected, the FEX starts pulling its image from its parent switch.
* Once the FEX is online, you can see its ports on the parent switch as interface ethernet <fex-associate-number>/1/x.

Note - the ports on the parent switch facing the FEX need to be configured with switchport mode fex-fabric and a fex associate number; no configuration is required on the FEX uplink ports themselves (see the sketch below).
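A minimal NX-OS sketch on the parent switch, assuming the FEX is given ID 101 and is cabled to ethernet 1/1 (the FEX ID and interface are assumptions):
 feature fex
 interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 101
  no shutdown
 show fex
 show interface fex-fabric
Once the FEX comes online, its host ports appear on the parent switch as ethernet 101/1/x.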
 
* Nexus 2k does not support local switching; if two hosts in the same VLAN connected to the 2k try to communicate, the communication goes through the parent switch.
* These FEX ports are pinned to the uplinks connected to the parent switch. All management is done from the parent switch.
 
;Pinning
* Two types of pinning - static pinning and dynamic pinning (a sketch follows below).
* Issue with static pinning - once an uplink between the Nexus 2k and the parent switch fails, all the FEX ports pinned to it need to be manually moved to another uplink to become operational, while with dynamic pinning they are automatically redistributed.
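A minimal static-pinning sketch on the parent switch, assuming FEX 101 should spread its host ports across two fabric uplinks (the FEX ID and link count are assumptions):
 fex 101
  pinning max-links 2
  description rack1-fex
 show fex detail
Changing pinning max-links is disruptive to the FEX, so it should be set before hosts go into production.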
 
* Nexus 5k - supports static pinning and vPC when we connect a Nexus 2k.
* Nexus 7k - not all line cards support FEX, and only port channels are supported when we connect a Nexus 2k to a 7k.
 
* All the FEX ports are considered edge ports from an STP point of view, and BPDU Guard is enabled on them.
* CFS - Cisco Fabric Services is used to synchronize configuration and control information between chassis (a sketch follows below).
* The management interface provides out-of-band connectivity, as it sits in a separate management VRF.
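A minimal sketch of enabling CFS distribution over IPv4 and checking its status (assuming IP-based distribution between the chassis is wanted):
 cfs ipv4 distribute
 show cfs status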
 
;VDC
* VDC is a virtual device context, used for virtualization of the hardware (both control plane and data plane).
* Resource allocation in a VDC - you can allocate M1, F1, and M2 cards to any VDC, but F2 cards only to a VDC of their own.
* VDC 1 is the default VDC - used to create/delete/suspend other VDCs, allocate resources, configure system-wide QoS, run Ethanalyzer, and perform NX-OS upgrades across all the VDCs.
* From the default VDC we can use the switchto vdc command to move to another VDC and switchback to return to the default VDC (see the sketch below).
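A minimal sketch of creating a VDC from the default VDC and allocating ports to it, assuming a VDC named Prod and an M1 module in slot 3 (both are assumptions; depending on the module, interfaces may have to be allocated in complete port groups):
 vdc Prod
  allocate interface ethernet 3/1-8
 end
 switchto vdc Prod
 switchback
 show vdc membership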
 
* Creating an Admin VDC:
Enter the system admin-vdc command after bootup.
The default VDC becomes the admin VDC.
All the non-global configuration in the default VDC is lost after you enter this command.
This option is recommended for existing deployments where the default VDC is used only for administration and does not pass any traffic.
 
You can change the default VDC to the admin VDC with the system admin-vdc migrate <new vdc name> command.
After entering this command, the non-global configuration of the default VDC is migrated to the new VDC.
This option is recommended for existing deployments where the default VDC is used for production traffic whose downtime must be minimized. (A short sketch of both options follows.)
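A minimal sketch of the two options above, assuming the production configuration should end up in a VDC named prod (the name is an assumption):
 ! option 1 - the default VDC becomes the admin VDC, non-global config is lost
 system admin-vdc
 ! option 2 - migrate the non-global config of the default VDC into a new VDC named prod
 system admin-vdc migrate prod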
 
* The CMP port is present on SUP1 - it provides console access to the Sup and runs its own kickstart and system image, separate from the chassis.

* A non-default VDC has two separate user roles (a sketch follows this list):
* vdc-admin - read/write access to the VDC
* vdc-operator - read-only access to the VDC
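A minimal sketch, run inside the non-default VDC, of creating one user per role (the usernames and password are assumptions):
 username alice password Str0ngPass role vdc-admin
 username bob password Str0ngPass role vdc-operator
 show user-account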
 
* The VDC high-availability policy is configured per VDC and depends on whether the chassis has a single Sup or dual Sups.
 
== Bridge Assurance and Network Ports ==
 
* Cisco NX-OS contains additional features to promote the stability of the network by protecting STP from bridging loops.
* Bridge assurance works in conjunction with Rapid-PVST BPDUs, and is enabled globally by default in NX-OS.
* Bridge assurance causes the switch to send BPDUs on all operational ports that carry a port type setting of "network", including alternate and backup ports for each hello time period.
* If a neighbor port stops receiving BPDUs, the port is moved into the blocking state.
* If the blocked port begins receiving BPDUs again, it is removed from bridge assurance blocking, and goes through normal Rapid-PVST transition.
* This bidirectional hello mechanism helps prevent looping conditions caused by unidirectional links or a malfunctioning switch.
 
* Bridge assurance works in conjunction with the spanning-tree port type command.
* The default port type for all ports in the switch is "normal" for backward compatibility with devices that do not yet support bridge assurance; therefore, even though bridge assurance is enabled globally, it is not active by default on these ports.
* The port must be configured to a spanning tree port type of "network" for bridge assurance to function on that port.
* Both ends of a point-to-point Rapid-PVST connection must have the switches enabled for bridge assurance, and have the connecting ports set to type "network" for bridge assurance to function properly.
* This can be accomplished on two switches running NX-OS, with bridge assurance on by default, and ports configured as type "network" as shown below.
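A minimal sketch for both ends of the link, assuming the point-to-point connection is on ethernet 1/1 (the interface is an assumption); the global command is shown only for clarity, since bridge assurance is on by default:
 spanning-tree bridge assurance
 interface ethernet 1/1
  spanning-tree port type network
Alternatively, spanning-tree port type network default can be used globally to make "network" the default port type, in which case edge/host ports must be explicitly set back to type edge.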
 
 
* Cisco Nexus 7009 - Sups in slot 1 and slot 2
* Cisco Nexus 7010 - Sups in slot 5 and slot 6
* Cisco Nexus 7018 - Sups in slot 9 and slot 10
* Line card capacity differs between modules.
 
* Two types of line cards are available:

1) M series:
Layer 3 cards - SVIs, OSPF, OTV, TrustSec; can also do Layer 2
FEX

2) F series:
Layer 2 cards only
F2 supports FabricPath, vPC+, FCoE
 
* Cisco Nexus 5k: used mainly as Layer 2 switches
5000 - 5020 and 5010
5500 - 5548 and 5596

* Nexus 2k: remote line card
 
 
switch(config-vsan-db)# vsan <number> interface vfc <number>
switch(config-vsan-db)# exit
 
 
= F5 Training =