Clarification of Nexus VPC

After a recent switch deployment, I'm seeing some very strange Layer 2 forwarding problems.  I'm not sure whether this is due to a configuration mistake on my part.  I'd like some clarification on vPC, and I hope you can help me.

It is my understanding that when you have a series of double-sided vPCs between, say, a pair of N7700s and several pairs of Nexus 9K or 5K switches, all of the vPC domain IDs must be different.  However, a contractor told me that each vPC number - associated with the EtherChannel uplink to the Nexus 7Ks - must also be unique.  A diagram illustrating my dilemma is attached.

Question - am I OK to reuse port-channel number 3 with vPC 3 for all the leaf pairs, or should they be different?

Hello

What you show is perfectly acceptable. The two vPC domains at the Nexus 9000 layer, vPC domains 117 and 118 in your diagram, have no knowledge of each other, so the same vPC and port-channel numbers can be reused.

As you said, you obviously need unique numbering at the Nexus 7000 layer.
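As a rough sketch (the domain and channel numbers are taken from the diagram; interface details, VLANs and everything else are illustrative), both leaf pairs can use port-channel 3 / vPC 3 for their uplinks while the 7K side keeps one unique pair of numbers per leaf pair:

```
! Leaf pair A (both members)
vpc domain 117
interface port-channel3
  switchport mode trunk
  vpc 3

! Leaf pair B (both members) - reusing the same numbers is fine,
! since the two domains have no knowledge of each other
vpc domain 118
interface port-channel3
  switchport mode trunk
  vpc 3

! Nexus 7700 pair - one unique port-channel/vPC per leaf pair
interface port-channel117
  vpc 117
interface port-channel118
  vpc 118
```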

Regards

Tags: Cisco DataCenter

Similar Questions

  • Pre-configuration of Nexus vPC

    Is it possible to configure ports for vPC on an N5K before the vPC peer is installed?

    Yes, this should be doable.

    As soon as the vPC configuration is in place (global or per-interface), the vPCs will simply be inactive/down and won't become operational until the peer is also configured and the vPC comes up.
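    A minimal sketch of pre-staging the configuration before the peer exists (all numbers and addresses are illustrative):

    ```
    feature vpc
    vpc domain 10
      peer-keepalive destination 10.0.0.2 source 10.0.0.1
    interface port-channel1
      switchport mode trunk
      vpc peer-link
    interface port-channel20
      switchport mode trunk
      vpc 20
    ! "show vpc" will report the peer link and vPC 20 as down
    ! until the peer switch is configured and reachable.
    ```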

    Kind regards

    Sunil.

  • S4048-ON - MLAG Question

    Coming from the Cisco world, I wanted to put two S4048s into a VSS-like mode.  Dell touts MLAG via its VLT feature, which as far as I can tell is analogous to Cisco Nexus vPC, i.e. with separate control/management planes.  Is it possible to get VSS-like capabilities instead?  The reason I ask is that I'm looking for high uptime.  If I only get L2 capabilities out of VLT, then I'd run VRRP between the switches, but I am concerned about the convergence time.  I have not messed with VRRP a lot, but I was pretty happy with HSRPv2 convergence.  Should I expect poor convergence with VLT + VRRP, or should I consider going with a stacking configuration instead?  Also, I've used Cisco enough to have run into numerous feature caveats.  Are there any configuration caveats I should be aware of when using VLT or stacking?

    Well, I answered my own question after going through the additional VLT documents.  What I'm looking for is "peer routing", which removes the need for VRRP.  Both switches will actively forward packets instead of passing traffic across the VLTi.  There shouldn't be convergence problems as a result.  This is similar to Cisco VSS AFAIK, except the control planes are separate on the Dell side.

    I am still confused about single-homed devices, though; see my post above.  I guess I can lab this up, but it is not clear whether these devices will be a problem in an equal-cost routing scenario.

  • Nexus 5600 HSRP design question for a VLAN stretched between 2 vPC domains

    For our new data center network, I have 4 Nexus 5672UPs across two data centers. Between the data centers is a redundant 2x10Gb fiber vPC. I have configured two vPC domains, one per data center. I've read that HSRP within a vPC domain is active/active, but I wonder what the right way is to configure HSRP for the stretched VLANs, given that they span two different vPC domains?

    If you need FHRP isolation between sites, this can be achieved by configuring different HSRP authentication at each site, so that HSRP hellos from the other site fail authentication and each site can act as its own active/standby pair. Due to the HW architecture on the Nexus 5600, multicast control-plane packets are punted to the CPU, bypassing any PACL or MAC ACL. So with a PACL you will not be able to filter the HSRP hellos, ARP, BPDUs, etc. that need to go to the CPU, because there is a predefined ACL that redirects control traffic to the CPU, and that ACL overrides the user-configured ACL. It is advisable to configure "no ip arp gratuitous hsrp duplicate" to suppress unnecessary GARPs at each site in this design as well. Note that 4-way HSRP is supported only on the latest NX-OS releases; see also CSCuy89705.
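    A hedged sketch of the authentication-based isolation described above (the VLAN, group number, key strings and addresses are illustrative):

    ```
    ! Site A pair - key string differs from site B, so hellos that
    ! cross the DCI fail authentication and are ignored, leaving
    ! each site active/standby locally
    interface Vlan100
      hsrp 100
        authentication md5 key-string SiteA-Key
        ip 10.1.100.1

    ! Global command to suppress the gratuitous ARPs generated
    ! when duplicate HSRP sources are seen
    no ip arp gratuitous hsrp duplicate
    ```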

    Another solution is to run FabricPath DCI with Anycast HSRP, which allows all the 5600s to act as active default gateways; refer to page 22 of the Cisco FabricPath best practices guide.

    -Jeffords Tyler

  • Nexus 5k assistance with vPC and routing

    Hello guys,

    We are trying to implement a new solution for one of our customers, who have purchased a pair of Nexus 5596UP devices.

    We have the topology attached in JPEG format. They want to use the pair of 5Ks for both LAN and WAN connectivity.

    Background

    The customer wants a vPC configuration between the pair of Nexus 5Ks because at some point they will want to buy FEX modules and connect servers directly via vPC, in which case they will need vPC (the L3 vPC VLANs terminate on the 5Ks using HSRP).

    Questions

    1. Can I have the same VLAN with an SVI built on each Nexus and allow the VLAN across the peer link in order to build iBGP and eBGP peerings per the diagram? Will this work?

    2. Is it possible to build a Layer 3 link from each Nexus to the remote PE device and then configure other SVIs on each Nexus, allowed across the peer link? Would this configuration work, and would traffic pass across the peer link for iBGP connectivity?

    3. Or can I go with question 1 above and use a separate (non-vPC) port-channel between the two Nexus 5Ks, trunking the VLANs everywhere?

    What is the best design for this kind of solution?

    The alternative is to connect the Layer 2 switch to the two Nexus 5Ks without a port-channel and let spanning tree handle the loop. In that case I'd have to build another trunk between the 5Ks, or could simply allow the VLAN across the vPC peer link.

    Thank you very much in advance.

    Hello

    Do the 5Ks have the Layer 3 daughter cards installed? The 5K supports BGP, but the maximum number of BGP routes you can have is 8000.

    HTH

  • Nexus 9396 vPC with VMware ESXi 5.5 standard switch

    We have just upgraded our head office switch to a Nexus 9396, which is dual-connected to UCS 6248 FIs and also individually to some Dell R710s with dual 10Gb Ethernet cards. When connecting to these standalone servers with a vPC EtherChannel, the management IP address flaps and some hosts can communicate while others cannot. If I leave one port disabled, it works fine. However, as soon as both are enabled, connectivity breaks. These ESXi servers are on a Standard license, so distributed switches are out of the question. I did activate an eval license and set up a vDS using enhanced LACP, and that works fine; it just doesn't work as a standard EtherChannel with a standard virtual switch. Connectivity from the 6248s also works without any problems running LACP. Here is the configuration:

    SW01 Nexus:

    interface Ethernet1/5
      description ESX vPC Member
      switchport mode trunk
      channel-group 202
      no shutdown

    interface port-channel202
      description ESX
      switchport mode trunk
      vpc 202

    SW02 Nexus:

    interface Ethernet1/5
      description ESX vPC Member
      switchport mode trunk
      channel-group 202
      no shutdown

    interface port-channel202
      description ESX
      switchport mode trunk
      vpc 202

    The ESXi configuration is a standard vSwitch with "Route based on IP hash" and both adapters active. Am I missing something? Is this configuration not supported? Any help/advice would be greatly appreciated.

    Thank you!

    What happens if you do not use a port-channel, just the two 10Gig links (1/5 on each switch)?

    I think the ESX host is using the NICs for HA and can only use a single link at a time. Once one fails, it uses the other.  I think I remember having the same problem with ESX hosts a couple of years ago.

    Can you test the redundancy with both physical ports and port-channels?

    HTH

  • Does the Nexus 3524 support the vPC feature?

    Hi, I read that the Nexus 3524 does not support the vPC feature; can someone confirm this for me?

    Thanks in advance.

    The Nexus 3524 does have vPC support.  You need to enable it; it is not enabled by default:

    switch(config)# feature vpc

  • vPC membership - Nexus 5000

    Hello guys,

    I have a small conceptual question on vPC. Is it possible to add more than 2 devices to one vPC domain?

    I want to add 4 Nexus 5000 devices to a vPC domain and build a vPC across all 4 devices.

    I found nothing about this on the internet, which is why I ask.

    Thanks in advance!

    Only 2 devices are allowed in a vPC domain.

    http://www.Cisco.com/c/en/us/products/collateral/switches/nexus-5000-SER...

  • Dual-homed Nexus 2348UPQ / max ports for vPC links

    Hi people,

    The Cisco Nexus 2348UPQ has 24 x 10Gb uplink interfaces. Supposing at the distribution layer we have a pair of Cisco Nexus 5548UPs, how many ports can we bundle on the 2348 for the dual-homed vPC links?

    Thank you very much!!!

    Hi Chris,

    From "Using a Fabric Extender with Cisco Nexus 5000 and Cisco Nexus 6000 Series Switches":

    http://www.Cisco.com/c/en/us/TD/docs/switches/Datacenter/nexus2000/HW/installation/guide/nexus_2000_hig/overview.html#pgfId-1415984

    -If you decide to use breakout cables instead of the 40G uplinks, please note the following limitation:

    On the Cisco Nexus 2348TQ and Nexus 2348UPQ FEX, if a port-channel is used to connect a parent switch with a Fabric Extender device, the port-channel can have up to 8 ports.

    Nexus 2348 FEX devices have a total of 6 x 40 Gigabit Ethernet uplink ports to the parent switch. If these are used with native 40G uplink ports on a parent switch, there is no limitation: all 6 ports can be used in either a single-homed or dual-homed configuration. You can also use the 40 Gigabit Ethernet uplink ports on the N2348 Fabric Extender with 10 Gigabit Ethernet ports on the parent switch when the appropriate cabling is used. A maximum of 8 ports can be added to the port-channel between the parent switch and the Fabric Extender. If it's a dual-homed configuration, i.e. vPC for the Fabric Extender, only 4 ports per switch are allowed in the port-channel.

    Cisco Nexus 5600 Series NX-OS Layer 2 Switching Configuration Guide, Release 7.x - Configuring the Fabric Extender

    http://www.Cisco.com/c/en/us/TD/docs/switches/Datacenter/nexus5600/SW/Layer2/7x/b_5600_Layer2_Config_7x/b_6k_Layer2_Config_7x_chapter_01110.html

    -SFP support and cables are listed in the Cisco Nexus 2300 Platform Fabric Extender data sheet (http://www.cisco.com/c/en/us/products/collateral/switches/nexus-2000-series-fabric-extenders/datasheet-c78-731663.html).

    I'd also be mindful of this bug:

    https://Tools.Cisco.com/bugsearch/bug/CSCuu88175/?reffering_site=dumpcr

    "If it's a dual-homed configuration, vPC for the FEX, only 4 ports per switch are permitted in the port-channel"

    HTH,

    Qiese Sa'di

  • Multiple vPC peer links between two Nexus 9Ks

    Hello

    My question is quite simple: is it possible to configure two (or more) vPC peer links connecting the two switches in a vPC domain? The goal is to set up a vPC domain where two peer links each carry different VLANs, instead of having a single peer link for all VLANs (see attached image).

    Thanks in advance

    It's not supported.

    If you are concerned about bandwidth, add more member links to the peer link instead.

    Thank you

    Madhu

  • Nexus 5k vPC keep-alives - port-channel necessary?

    I'm setting up 2 x 5548UPs for Layer 3 connectivity and was wondering about the vPC configuration. What is the best practice? Do I have to configure a Layer 3 port-channel for the peer-keepalive using dedicated interfaces (for example, e1/1-2), or can I just use the mgmt 0 port (via copper) alongside the Layer 3 daughter card?

    Also, what are the L1 and L2 ports on the Layer 3 daughter card for?  I know that at some point they were not usable...

    Thank you

    Bobby Grewal

    Hi Bobby,

    You can use the mgmt0 port for the peer-keepalive.  The main reason for the keepalive is to determine what kind of failure has occurred if the vPC peer-link goes down.  If the keepalives are still up, we know the peer switch is alive, and the secondary switch must shut down its vPC ports until the peer link comes back online.  If the keepalives are also down, we know the peer switch is gone and the remaining switch must take over forwarding the traffic.
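    A minimal sketch of the mgmt0-based peer-keepalive (addresses are illustrative; mgmt0 sits in the dedicated management VRF by default):

    ```
    vpc domain 5
      peer-keepalive destination 192.168.10.2 source 192.168.10.1 vrf management
    ! Keeping the keepalive in the management VRF over mgmt0 means
    ! it shares no fate with the data-plane peer link.
    ```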

    Offhand, I'm not sure about the L1 and L2 ports.  I'll look into it, but maybe someone else has the answer handy.

    Chad

  • Configure ports on Nexus 5000/2000 for server NIC teaming

    Hello

    I have two Nexus 5000s and two Nexus 2000s.  The 5000s are vPC peers.  I would like to connect my server's teamed NICs to a port on each 2000 and have both be active.  Is this possible?  What are the steps?  I thought it should be this:

    CPR_NEXUS_5K01(config)# int eth100/1/32

    CPR_NEXUS_5K01(config-if)# channel-group 32

    CPR_NEXUS_5K01(config)# int eth101/1/32

    CPR_NEXUS_5K01(config-if)# channel-group 32

    for the two Nexus 2000s.

    Thank you, Jerry

  • Nexus 7000 L3 proxy routing

    Hello

    I am proposing the Nexus 7000 as the backbone switch for my client.

    But I do not understand why L3 proxy routing must be used in a mixed M & F2 configuration.

    The F2 module also has full L3 capability, so why doesn't the F2 module support L3 routing when mixed with M-series modules?

    Could you tell me why this happens and explain the Nexus 7000 architecture?

    Thank you

    Yun.

    M2 + F2e in the same VDC works as of 6.2(2), in which case the F2e module falls back to classic L2 forwarding mode, leaving all L3 decisions to the M2 module, so you still need L3 proxy routing.

    I think the reasoning behind this is that the M2 L3 forwarding engine is much more capable than the F2e L3 forwarding engine.  For example, the M2 engine can do OTV, which the F2e cannot.  It makes sense to defer L3 decisions to the more capable card.

    What I find strange is that although the F2e line card has an integrated L3 forwarding engine, I cannot configure L3 IP addresses directly on F2e ports.  Creating a VLAN SVI and setting the F2e port to access mode works, but if I only need a single L3 point-to-point link between the Nexus 7K and another device, and I have vPC configured, then vPC goes into a Type 2 inconsistent state because the VLAN and/or SVI is not present on the peer switch.
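    As a sketch of the SVI workaround described (the VLAN number and addressing are illustrative):

    ```
    ! Workaround for no routed ports on F2e: put the F2e port in
    ! access mode and route via an SVI
    vlan 999
    interface Vlan999
      ip address 192.0.2.1/30
    interface Ethernet4/1
      switchport
      switchport access vlan 999
    ! In a vPC domain this raises a Type-2 inconsistency unless
    ! VLAN 999 and the SVI also exist on the peer switch.
    ```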

  • NEXUS7K vPC question

    Guys, please correct me if I'm wrong. I have 2 x Nexus 7Ks and let's say one 3750 switch. I need to have it connected with an active vPC, plus inter-VLAN routing for VLAN 10 on the two Nexus switches, as follows:

    Nexus SW1 -- 2x10G -- Nexus SW2
       (Gi0/3) \        / (Gi0/2)
            3750 Switch3
                 |
              VLAN 10

    Requirements on the two Nexus SWs

    ------------------------------------------------------------

    (1) enable vPC on both Nexus switches

    (2) create vPC domain 8 on both switches

    (3) use the management interface of both switches to configure the vPC peer-keepalive

    (4) configure the two 10G links on both sides into port-channel 5, enable trunking and spanning-tree port type network

    (5) enable vPC peer-link on port-channel 5 on both sides

    (6) create VLAN 10 on Nexus SW1 <by doing this, shouldn't VLAN 10 be created on Nexus SW2 by default?>

    (7) create the VLAN 10 interface and assign an IP address <is there anything I need to add here other than this? Also, will the VLAN interface be added automatically on the other switch with the same IP address?>

    (8) create port-channel 7, assign Gi0/3 and Gi0/2 and enable trunking on both

    (9) assign vPC 101 to port-channel 7 on both sides

    Requirements on the 3750 SW

    ------------------------------------------------------------

    (1) create VLAN 10

    (2) assign the access interfaces to VLAN 10

    (3) enable trunking on Gi0/3 and Gi0/2

    (4) create port-channel 7 and add the two links

    -NOW, assuming everything is configured correctly, none of the links between the switches should be blocked by STP, and VLAN 10 traffic should be forwarded by both Nexus switches?

    Hello

    Most of the steps you outlined are correct, although a few comments:

    (3) use the management interface of both switches to configure the vPC peer-keepalive

    A point to note here is that if you have dual supervisor engines (SEs) in your Nexus 7Ks, then you should connect the management interfaces of both the active and standby SEs on both N7Ks to the same local network. That way you will always have vPC peer-keepalive connectivity regardless of which SE is active.

    (6) create VLAN 10 on Nexus SW1 <by doing this, shouldn't VLAN 10 be created on Nexus SW2 by default?>

    VLANs are not created on the second switch unless you use switch profiles, i.e., config-sync, and this feature is not supported on the Nexus 7K.

    (7) create the VLAN 10 interface and assign an IP address <is there anything I need to add here other than this? Also, will the VLAN interface be added automatically on the other switch with the same IP address?>

    I guess the obvious thing to add is a First Hop Redundancy Protocol such as HSRP. Note that when you use HSRP in conjunction with vPC, while the control plane continues to operate active/standby, from a data-plane perspective both routers are capable of forwarding traffic in the VLAN, i.e., active/active.
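    As a sketch of that pairing (addresses are illustrative), each Nexus gets its own SVI address plus the shared HSRP virtual gateway:

    ```
    feature hsrp

    ! Nexus SW1
    interface Vlan10
      ip address 10.0.10.2/24
      hsrp 10
        ip 10.0.10.1

    ! Nexus SW2 - same group and virtual IP, its own interface IP
    interface Vlan10
      ip address 10.0.10.3/24
      hsrp 10
        ip 10.0.10.1
    ```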

    With regard to the SVI being created automatically, as per the note on point 6 above, the SVI will not be created, since there is no config-sync feature on the Nexus 7K.

    -NOW, assuming everything is configured correctly, none of the links between the switches should be blocked by STP, and VLAN 10 traffic should be forwarded by both Nexus switches?

    Correct. You should probably also follow spanning-tree best practices, such as ensuring the root bridge is located on one of the Nexus 7Ks, the backup root is the second Nexus 7K, etc.
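    A sketch of that spanning-tree placement (the priority values are conventional choices, not requirements):

    ```
    ! Nexus SW1 - primary root for VLAN 10
    spanning-tree vlan 10 priority 4096

    ! Nexus SW2 - backup root
    spanning-tree vlan 10 priority 8192
    ```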

    This and much more is covered in the Design and Configuration Guide: Best Practices for Virtual Port Channels (vPC) on Cisco Nexus 7000 Series Switches, available on CCO. It is a very good reference and well worth a look.

    Regards

  • Connecting 5520 WLCs to Nexus 7706s

    I "inherited" a bunch of hardware that the customer wants me to use for a local wireless network. The interesting bit is connecting the 5520 WLCs to the Nexus 7706s.

    Ideally, since I have two WLCs and two Nexuses, I would like to connect one port of each WLC to each Nexus, but it is complicated by the fact that the Nexus is running vPC, not VSS, and only speaks LACP, while the WLC only understands static LAG.

    It has been suggested that if I set up the Nexus (Nexii?) side to match the WLC's static LAG, it will work, but I want to be reasonably sure before taking the risk of exposing myself to ridicule when a customer network fails.

    So, in a nutshell: can I (and if so, how) connect two 5520 WLCs to a pair of Nexus 7706s such that traffic arriving at either Nexus can reach the WLC and, critically, get BACK to the source using only L2 features? Or, if that's not possible, how do I do it with routing instead, without making a rod for my own back?

    Thanks for any help

    Jim

    Hello Jim,

    According to Cisco TAC, the topology I tried was invalid, it seems. Per their suggestion, a LAG-configured WLC can only be connected to a single upstream switch :(

    Please find attached the physical topology that was recommended to me.
