UCS to NEXUS

Hello

Is it possible to connect a UCS solution to a pair of Nexus 5500s using native Ethernet for LAN traffic and native FC for SAN traffic? I would then link the EMC VNX arrays to the Nexus 5500s via native FC, and the LAN core via native Ethernet. In other words, I would use the Nexus 5500 as a consolidated LAN/SAN device, but WITHOUT using FCoE as the consolidation protocol. Is this design OK? Any caveats or things to keep in mind?

In addition, in this scenario, would connecting the VNX arrays to the Nexus via FCoE instead of native FC add any benefit?

Thank you

This will work. We do it all the time in our lab. The 5500 replaces the need for an MDS or Brocade FC switch, and it works perfectly. You can use the 5500 as an NPIV switch, or you can enable FC switching mode on the UCS fabric interconnects and build an E or TE port between the 5500 and the UCS FIs.  We also support F-port channel trunking between the 5500 and the UCS FI, so you can do just about every FC connectivity option between Nexus and UCS boxes.
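
For illustration only (not part of the reply above), a minimal N5K-side sketch of the NPIV / F-port-channel-trunk option described here, with the FI left in FC end-host (NPV) mode. The VSAN, port-channel and interface numbers are hypothetical, and the matching FC port channel and trunking must also be enabled on the UCS Manager side.

  ! Nexus 5500 side - FI stays in FC end-host (NPV) mode
  feature npiv
  feature fport-channel-trunk
  vsan database
    vsan 10
  interface san-port-channel 1
    channel mode active
    switchport mode F
    switchport trunk mode on
    switchport trunk allowed vsan 10
  interface fc2/1
    switchport mode F
    channel-group 1 force
    no shutdown
  interface fc2/2
    switchport mode F
    channel-group 1 force
    no shutdown
  vsan database
    vsan 10 interface san-port-channel 1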

I can't comment on the VNX and its FCoE capabilities.

Louis

Tags: Cisco DataCenter

Similar Questions

  • Trouble seeing UCS vHBAs on Nexus 5k

    I have UCS > 6100 (A/B) > Nexus 5k (A/B) > EMC VNX 5300.  I have two VSANs set up, and my switch can see two ports on my VNX 5300 with show flogi. However, my 5ks don't see the vHBAs on the UCS system. I have two vHBAs configured, one on fabric A and the other on fabric B. Right now the Nexus is connected to the 6100 as a port channel, and the port channel is configured as a trunk. This makes me think that this is where the VSAN communication is failing. Does the connection between the 6100 and the Nexus need to be something more than a simple trunk?

    Dennis

    Dennis,

    I hope that you are using the Fibre Channel uplinks to your N5K and not Eth/FCoE (there is no FCoE multihop yet).

    Assuming that you have your 6100s with FC uplinks to your N5Ks and the N5Ks have NPIV enabled, you should see the FLOGIs from your host vHBAs.

    From the 6100 CLI you can also 'connect nxos' and 'show npv flogi-table', which will show you the FC sessions between your 6100 and N5K - that is where I would look for your problem.

    All of your SAN traffic would travel over the 6100's Fibre Channel module, up to the N5K's Fibre Channel module, and then to your array.  You can also trunk and port-channel your FC SAN uplinks.  Just confirm that this is what you did, and that you are not using the Ethernet uplinks to your N5K expecting that to work.
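
    As a quick reference (not from the post above), the verification commands mentioned here are sketched below; device names and prompts are hypothetical.

      ! On the N5K (NPIV core switch) - each server vHBA pwwn should appear here
      show npiv status
      show flogi database
      ! On the 6100 fabric interconnect, from the UCS Manager CLI
      connect nxos
      show npv status
      show npv flogi-table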

    Can you provide a diagram or detail the exact topology (Eth, FC, etc.)?

    Kind regards

    Robert

  • Connect UCS to Nexus

    Hello

    I am adding UCS FIs to our network, connecting them to a Nexus switch. Our network has several isolated (private) VLANs and normal VLANs, all of which should be trunked to the FI. On the switch port, should I configure a normal trunk or a private-vlan trunk?

    Thank you

    Hello

    In general, all the VLANs defined in a UCS domain are trunked on any northbound links; see

    http://www.Cisco.com/c/en/us/support/docs/servers-unified-computing/UCS-...

    However, there is also the notion of disjoint VLANs; see

    http://www.Cisco.com/c/en/us/solutions/collateral/data-center-virtualiza...
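
    As a rough illustration only (not from the reply above), a regular trunk toward one FI uplink on the upstream Nexus might look like the sketch below, with hypothetical interface and VLAN numbers. Whether you need a private-vlan trunk instead depends on how the isolated VLANs are carried, which the two links above discuss.

      ! Upstream Nexus port-channel toward one FI (numbers hypothetical)
      interface port-channel 11
        description to-UCS-FI-A
        switchport mode trunk
        switchport trunk allowed vlan 10,20,100-110
        spanning-tree port type edge trunk
      interface Ethernet1/1
        channel-group 11 mode active
        no shutdown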

    Walter.

  • Please tell me how to connect UCS Mini and a Nexus N5k (N5K-C5548UP-B-S32) with FCoE

    I have a UCS 5108 blade server chassis (with two B200 M3 blades and two FI 6324s inside) connected to a Nexus N5k switch with 4x10GE links.

    I want to set up FCoE (vFC) over the aggregated port channels from the chassis on both fabrics.

    All the instructions I've seen mention that UCS Mini needs to be in FC end-host mode, but UCSM version 3.0 does not support this mode. What should I do?

    I set it up following the instructions at www.cisco.com/c/en/us/support/docs/switches/nexus-5000-series-switches/116248-configure-fcoe-00.html

    But after the setup, I get an error on the fabric: FCoE or FC uplink is down on VSAN 500;

    And the Nexus shows: VSAN 500 is down (waiting for flogi)

    When I run 'show interface vfc 1' on the Nexus CLI, I don't see VSAN 500 under "Trunk vsans (up)", but under "Trunk vsans (initializing) (500)".

    Unfortunately, I can't find technical notes for UCS Mini (with UCSM v3.0), so maybe you can point me to guides or suggestions for this?
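
    For reference (a sketch, not an answer from the thread), the Nexus-side configuration the linked FCoE guide walks through looks roughly like the block below, with hypothetical VLAN/VSAN, interface and port-channel numbers. A VSAN stuck in "initializing" generally means no FLOGI has arrived over that vfc yet, so the UCS Mini side also needs a matching FCoE uplink with the same VSAN/FCoE VLAN.

      ! Nexus 5548 side (numbers hypothetical)
      feature fcoe
      feature npiv
      feature lacp
      vsan database
        vsan 500
      vlan 500
        fcoe vsan 500
      interface port-channel 20
        switchport mode trunk
        switchport trunk allowed vlan 1,500
      interface ethernet 1/1-2
        channel-group 20 mode active
        no shutdown
      interface vfc 20
        bind interface port-channel 20
        switchport trunk allowed vsan 500
        no shutdown
      vsan database
        vsan 500 interface vfc 20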

    Thank you in advance!

    Same question here.

    TIA

  • ESX/ESXi DRS/HA/FT with UCS M71KR - Nexus 1000V

    Hi all

    I have a doubt about the configuration of ESX/ESXi in conjunction with UCS and the N1k.

    If I use port channels (MAC-pinning based, due to the UCS hardware) on the N1k and I have only two NICs (M71KR) per blade, can I properly configure FT/HA or DRS?

    With a port channel, is VMware aware of NIC HA? Might I get some redundancy warning, such as a note about the management interface, or something like that?

    any idea of?

    TNX

    Dan

    Dan,

    These work with DRS, FT and HA. You will be able to configure a port channel with MAC pinning and still use the DRS, FT and HA features.

    To create the port channel, you must add both network interface cards to the DVS; from a VMware perspective, it is then aware that you have NIC redundancy.
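
    For illustration (not from the reply above), an uplink port-profile with MAC pinning on the N1kV could look like this minimal sketch; the profile name, VLANs and system VLANs are hypothetical, and both vmnics are attached to this uplink so vCenter sees NIC redundancy.

      port-profile type ethernet system-uplink
        vmware port-group
        switchport mode trunk
        switchport trunk allowed vlan 10,20,30
        channel-group auto mode on mac-pinning
        no shutdown
        system vlan 10,20
        state enabled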

    Louis

  • VTP on UCS?

    Hi all

    I need to create about 100 VLANs on the uplink between the UCS fabric interconnect cluster and the Nexus 7K, and I cannot find a way to get UCS to learn VLANs using VTP or anything like that. I have prepared a list of commands to run in the CLI, but that is still a lot of work, because after each create vlan command I need to add an 'exit' command to return to the previous mode, otherwise the next 'create vlan' command won't work - different from the Catalyst switches.

    Does the UCS blade system support VTP? What is the best way to create that many VLANs?

    Thank you

    UCS does not support VTP at this point.

    I suggest you build a little script in Notepad and paste it into the fabric interconnect CLI.
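
    For example, a paste-ready UCS Manager CLI pattern might look like the sketch below (VLAN names and IDs are hypothetical); repeat the create/exit pair for each VLAN and finish with a single commit-buffer to save them all.

      scope eth-uplink
      create vlan vlan0100 100
      exit
      create vlan vlan0101 101
      exit
      create vlan vlan0102 102
      exit
      commit-buffer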

  • VCE vBlock, LSI_SCSI: Reset to device \Device\RaidPort0 ERROR in Windows VM event logs

    Hello, I have been having a problem for some time and I can't seem to figure this out. Basically, a VM freezes but comes back after 30 seconds.
    I found the problem was present on all datastores, and it even showed boot sector errors for guests (SAN boot). After changing round robin to fixed path on the datastores to use a specific Fibre Channel switch, the LSI_SCSI errors were gone on all datastores and VMs, etc... everything worked normally.
    We have a VCE vBlock setup with UCS (blades), Nexus 5ks and a VNX.

    So basically, we were troubleshooting the Fibre Channel paths from the UCS to the VNX. We changed ports on the VNX SPs, but side A was always bad / B good... We compared the Nexus configs for the two: identical configs/features apart from specifics such as VSANs, etc. We examined the UCS configs, but they are fundamentally the same as the setup on side B; everything looked good. Nothing appeared faulty.
    I noticed Tx errors on the I/O module ports and CRC errors on ports on the Nexus. I thought it might be a bad fiber, so I replaced it... The problem was still present. Any ideas? Thanks in advance!

    Data Center:
    UCS: 5108 chassis, 2104XP I/O modules (2.2(3g)), B200 M2 blades, 6120XP fabric interconnects (5.2(3)N2(2.23g))

    ESXi 5.5 Update 2

    2 Nexus 5ks

    VNX 5500

    I found the culprit: the SFPs. The SFPs used throughout the entire uplink path were incompatible and FUBAR. Mismatched speeds, single-mode where it should have been multimode, etc. Just an all-around nightmare. It was EVERYTHING from the VNX to the IOMs - simply ridiculous. After gutting the SFPs and the fiber, replacing them, and re-cabling back to the UCS chassis, everything was great. Honestly, I don't know how it worked at all with that configuration. Anyway, if you want something done you have to do it yourself.
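
    For anyone chasing similar symptoms (a sketch, not from the original post): the NX-OS commands below - available on the Nexus directly, and on the FI via 'connect nxos' - surface mismatched transceivers and CRC/Tx error counters. Interface numbers are hypothetical.

      ! per suspect link
      show interface ethernet 1/1 transceiver details
      show interface ethernet 1/1 counters errors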

  • Design - Rack Edge or Edge VLAN question

    I have Cisco UCS and Nexus 7k gear to design around, so I am using this design guide:

    https://www.VMware.com/files/PDF/products/NSX/VMware-NSX-on-Cisco-n7kucs-design-guide.PDF

    However, it is not totally clear on how the physical-to-virtual connections must be deployed.  Looking at this guide (page 11), it seems that 5 VLANs must be trunked to each host (including the edge VLANs), which would negate the need for a separate edge cluster (or rack).  However, the same guide also speaks of a mgmt and edge cluster, and there is even a diagram (pg 13) that shows what looks to me like an edge host.  Since the mgmt, edge and compute clusters all share the same distributed switch, it seems that this design is indicating that there is no need for a separate edge cluster.   Does this sound right to you?

    While the document proposes trunking the edge VLANs to all hosts, the edge VLANs can simply be ignored and remain unused on the compute hosts, thus attaching the edge elements only to the edge VLANs living on the edge/management cluster.  That accomplishes the goal of the edge cluster.  The edge vs. no-edge decision isn't so much about which VLANs are connected, but more about how you plan to implement NSX and its components.

    Brad Hedlund does a good job talking through the design decisions specific to the N7K, which the Cisco design document here doesn't cover, and it will help you decide whether you need/want an edge cluster: http://bradhedlund.com/2015/02/06/going-over-the-edge-with-your-vmware-nsx-and-cisco-nexus/

  • Nexus 1000v, UCS, and Microsoft Network Load Balancing

    Hi all

    I have a client that is implementing a new Exchange 2010 environment. They have a requirement to configure load balancing for the Client Access servers. The environment consists of VMware vSphere running on top of Cisco UCS blades with the Nexus 1000v dvSwitch.

    Everything I've read so far indicates that I must do the following:

    1. Configure MS load balancing in multicast mode (selecting the IGMP protocol option).

    2. Create a static ARP entry on the router for the cluster's virtual address on the server subnet (see the sketch after this list).

    3. (maybe) Configure a static MAC table entry on the router for the server subnet.

    4. (maybe) Disable IGMP snooping on the appropriate VLAN on the Nexus 1000v.
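
    A minimal IOS sketch of steps 2 and 3, assuming a hypothetical cluster IP of 10.1.10.50 on VLAN 10 (in IGMP multicast mode the NLB cluster MAC is 01:00:5e:7f:xx:yy, built from the last two octets of the cluster IP):

      ! static ARP for the NLB virtual IP (step 2)
      arp 10.1.10.50 0100.5e7f.0a32 arpa
      ! optional static MAC entry toward the server-facing port (step 3)
      mac address-table static 0100.5e7f.0a32 vlan 10 interface GigabitEthernet1/0/10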

    My questions are:

    1. Is anyone successfully running a similar configuration?

    2. Are there steps missing from the list above, or steps I shouldn't do?

    3. If I disable IGMP snooping on the Nexus 1000v, should I also disable it on the UCS fabric interconnects and the router?

    Thanks a lot for your time.

    Aaron

    Aaron,

    The steps you have above are correct; you need steps 1-4 for this to operate correctly.  Normally people will create a separate VLAN for their NLB interfaces/subnet, to prevent unnecessary mcast flooding of frames within the network.

    To answer your questions

    (1) I have seen multiple customers run this configuration

    (2) the steps you have are correct

    (3) You can't toggle IGMP snooping on UCS.  It is enabled by default and is not a configurable option.  There is no need to change anything within UCS regarding MS NLB with the above procedure.  FYI - the ability to disable/enable IGMP snooping on UCS is scheduled for an upcoming 2.1 release.


    This is the correct method until we have the option of configuring static multicast MAC entries on
    the Nexus 1000v.  If this is a feature you'd like, please open a TAC case and request that bug CSCtb93725 be linked to your SR.

    This will give more "push" to our development team to prioritize this request.

    Hopefully some other customers can share their experience.

    Regards,

    Robert

  • UCS - Nexus VSAN questions

    Hello

    I need some advice on setting up a SAN between a UCS and Nexus configuration. I have checked a large number of documents, but none of them make the correct configuration clear (to me) to understand and apply.

    First of all, I have an err-disable on the Nexus (L2 VLAN problem) and on the UCS (SFP incompatibility).

    I have 2 twinax cables connected between the UCS 6248 and the Nexus 5548UP.

    What should I configure on the UCS side? FC or FCoE?

    What should I configure on the Nexus? FC or vFC?

    Can I use a twinax cable, or do I need an SFP module with fiber cabling?

    Should I use FC switch mode or NPV mode?

    Should the switchport be E or F?

    Or do you have a link that explains the process to follow?

    Regards

    Eric,

    Did you have a chance to look at the PDF that I provided in my previous reply? It has the list of SFPs supported on the FI; the SFP part number can be found on the SFP itself.

    . / Afonso

  • VXLAN on UCS: IGMP with Catalyst 3750, Nexus 5548, Nexus 1000V

    Hello team,

    My lab consists of a Catalyst 3750 with SVIs acting as the router, Nexus 5548s in a vPC setup, UCS in end-host mode, and a Nexus 1000V with the segmentation feature (VXLAN) enabled.

    I have two different VLANs for VXLAN (140, 141) to demonstrate connectivity across L3.

    Hosts with a VMKernel on VLAN 140 join the multicast group fine.

    Hosts with a VMKernel on VLAN 141 do not join the multicast group.  As a result, VMs on these hosts cannot ping virtual machines on the hosts on VLAN 140, and they can't even ping each other.

    I turned on debug ip igmp on the L3 switch, and the output indicates a timeout while it is waiting for a report from VLAN 141:

    Oct 15 08:57:34.201: IGMP(0): Send v2 general Query on Vlan140
    Oct 15 08:57:34.201: IGMP(0): Set report delay time to 3.6 seconds for 224.0.1.40 on Vlan140
    Oct 15 08:57:36.886: IGMP(0): Received v2 Report on Vlan140 from 172.16.66.2 for 239.1.1.1
    Oct 15 08:57:36.886: IGMP(0): Received Group record for group 239.1.1.1, mode 2 from 172.16.66.2 for 0 sources
    Oct 15 08:57:36.886: IGMP(0): Updating EXCLUDE group timer for 239.1.1.1
    Oct 15 08:57:36.886: IGMP(0): MRT Add/Update Vlan140 for (*,239.1.1.1) by 0
    Oct 15 08:57:38.270: IGMP(0): Send v2 Report for 224.0.1.40 on Vlan140
    Oct 15 08:57:38.270: IGMP(0): Received v2 Report on Vlan140 from 172.16.66.1 for 224.0.1.40
    Oct 15 08:57:38.270: IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.66.1 for 0 sources
    Oct 15 08:57:38.270: IGMP(0): Updating EXCLUDE group timer for 224.0.1.40
    Oct 15 08:57:38.270: IGMP(0): MRT Add/Update Vlan140 for (*,224.0.1.40) by 0
    Oct 15 08:57:51.464: IGMP(0): Send v2 general Query on Vlan141   <----- it just hangs here until timeout and goes back to Vlan140
    Oct 15 08:58:35.107: IGMP(0): Send v2 general Query on Vlan140
    Oct 15 08:58:35.107: IGMP(0): Set report delay time to 0.3 seconds for 224.0.1.40 on Vlan140
    Oct 15 08:58:35.686: IGMP(0): Received v2 Report on Vlan140 from 172.16.66.2 for 239.1.1.1
    Oct 15 08:58:35.686: IGMP(0): Received Group record for group 239.1.1.1, mode 2 from 172.16.66.2 for 0 sources
    Oct 15 08:58:35.686: IGMP(0): Updating EXCLUDE group timer for 239.1.1.1
    Oct 15 08:58:35.686: IGMP(0): MRT Add/Update Vlan140 for (*,239.1.1.1) by 0

    If I do a show ip igmp interface, I can see that there are no joins for VLAN 141:

    Vlan140 is up, line protocol is up
      Internet address is 172.16.66.1/26
      IGMP is enabled on interface
      Current IGMP host version is 2
      Current IGMP router version is 2
      IGMP query interval is 60 seconds
      IGMP configured query interval is 60 seconds
      IGMP querier timeout is 120 seconds
      IGMP configured querier timeout is 120 seconds
      IGMP max query response time is 10 seconds
      Last member query count is 2
      Last member query response interval is 1000 ms
      Inbound IGMP access group is not set
      IGMP activity: 2 joins, 0 leaves
      Multicast routing is enabled on interface
      Multicast TTL threshold is 0
      Multicast designated router (DR) is 172.16.66.1 (this system)
      IGMP querying router is 172.16.66.1 (this system)
      Multicast groups joined by this system (number of users):
          224.0.1.40(1)

    Vlan141 is up, line protocol is up
      Internet address is 172.16.66.65/26
      IGMP is enabled on interface
      Current IGMP host version is 2
      Current IGMP router version is 2
      IGMP query interval is 60 seconds
      IGMP configured query interval is 60 seconds
      IGMP querier timeout is 120 seconds
      IGMP configured querier timeout is 120 seconds
      IGMP max query response time is 10 seconds
      Last member query count is 2
      Last member query response interval is 1000 ms
      Inbound IGMP access group is not set
      IGMP activity: 0 joins, 0 leaves
      Multicast routing is enabled on interface
      Multicast TTL threshold is 0
      Multicast designated router (DR) is 172.16.66.65 (this system)
      IGMP querying router is 172.16.66.65 (this system)
      No multicast groups joined by this system

    Is there a way to check why the hosts on VLAN 141 are not joining successfully?  The port-profile configuration on the 1000V for the VLAN 140 and VLAN 141 uplinks and vmkernels is identical, except for the different VLAN numbers.

    Thank you

    Trevor

    Hi Trevor,

    One quick thing to check would be the IGMP config for both VLANs.

    Where did you configure the querier for VLANs 140 and 141?

    Does the VXLAN transport cross any routers? If so, you would need multicast routing enabled.
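
    As a hedged add-on (not from the reply above), these snooping checks can be run along the path - on the 5548s and on the 1000V VSM - to see whether reports from VLAN 141 are being learned anywhere; VLAN numbers are as in the post.

      show ip igmp snooping vlan 141
      show ip igmp snooping querier vlan 141
      show ip igmp snooping groups vlan 141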

    Thank you!

    . / Afonso

  • UCS/Nexus and FCoE Question

    Hi all:

    Can we connect an FCoE target directly to a fabric interconnect or to a Nexus 5xxx FCF in NPV mode, with no connection to an FC switch for fabric services, and still have the initiator see the target and be able to provision storage from it?

    Thus, the scenario would be the following:

    a rack server with a CNA connected to a Nexus 5xxx FCF in NPV mode

    OR

    a blade server with a CNA connected to a UCS fabric interconnect (which is also an FCF) in end-host (NPV) mode

    AND

    In the above cases, there is NO CONNECTION between the FCF and an FC switch.

    Will I be able to have the CNA (FCoE initiator) see the storage array (FCoE target)?

    Thank you

    Hello

    Yes, you are right. For direct-attached storage to work, you need the FI in FC switch mode. Your explanation is good: you cannot do direct-attached storage in FC end-host/NPV mode. With the FI in NPV mode, it can only forward FC requests to an NPIV switch and cannot process any FC requests such as FLOGI, etc...

    In addition, in UCS versions below 2.1 you needed a SAN switch connected to an FI running in switch mode, with the storage directly connected to it, which could push the zoning database to the FI. As of 2.1, you can have direct-attached storage without the need for any upstream SAN switch, as we now have zoning capabilities in UCS.

    In short, for any direct-attached storage, you must have the FI in FC switch mode.

    -Ganesh

  • Configuring a Nexus 5k for SAN switching between UCS 6100s and NetApp storage

    We are trying to get our UCS environment configured to SAN boot from LUNs on our NetApp storage. We have a pair of Nexus 5Ks with the Enterprise/SAN license on them and the 6-port FC module. What do we need to do to configure the Nexus 5000 to operate as the SAN switch between the storage target and the UCS environment?

    Do you see the 6120 WWPNs on the N5Ks? What is the output of 'show flogi database' and 'show interface brief'?

    Have you mapped the FC ports on the 6120 to the proper VSAN?

    Is NPIV enabled?
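
    For illustration only (VSAN, ports and WWPNs are hypothetical), the N5K-side pieces those questions point at look roughly like this sketch, with the 6120s left in end-host (NPV) mode:

      feature npiv
      vsan database
        vsan 100
        vsan 100 interface fc2/1
        vsan 100 interface fc2/3
      ! fc2/1 toward a 6120 FC uplink, fc2/3 toward a NetApp target port
      interface fc2/1
        switchport mode F
        no shutdown
      interface fc2/3
        switchport mode F
        no shutdown
      zone name esx01-hba-a-netapp vsan 100
        member pwwn 20:00:00:25:b5:00:00:0a
        member pwwn 50:0a:09:81:00:00:00:01
      zoneset name fabric-a vsan 100
        member esx01-hba-a-netapp
      zoneset activate name fabric-a vsan 100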

  • UCS environment vSphere 5.1 upgrade to vSphere 6 with Nexus 1000v

    Hi, I've struggled to get help from TAC and through internet research on how to upgrade our UCS environment, and as a last resort I thought I would try a post on the forum.

    My environment is a UCS chassis environment with dual fabric interconnects, and Nexus 1110s hosting an HA pair of 1000v VSMs, running vSphere 5.1.  We have updated all our equipment (blades, C-Series and UCS Manager) to the firmware versions supported by Cisco for vSphere 6.

    We have even upgraded our Nexus 1000v to 5.2(1)SV3(1.5a), which is a version that supports vSphere 6.

    To save us some processing and licensing cost, and because performance on our previous vCenter server was lacking and unreliable, I looked at migrating to the vCenter 6 virtual appliance.  Nowhere can I find information that advises on how to begin the VMware upgrade process on UCS when the 1000v is involved.  I would have thought that since it is a virtual machine, if you have all of your versions upgraded to ones that support 6, it would go smoothly.

    A response I got from TAC was that we had to move all of our VMs onto a standard switch to upgrade vCenter, but given we are already on a 1000v version supported for vSphere 6, that left me confused. Nor would I get the opportunity to rework our environment - being a hospital - with that kind of downtime and outage windows for more than 200 machines.

    Can anyone provide some tips, or has anyone else tried a similar upgrade path?

    Regards.

    It seems that you have already upgraded your N1k components (were the VEMs upgraded to match?).

    Is your question more about how to upgrade/migrate from one vCenter server to another?

    If you import your vCenter database into your new vCenter, there shouldn't be much downtime, as the VSM/VEM will still see the vCenter and the N1k dVS.  If you change the vCenter server name/IP but import the old vCenter DB, there are a few steps: you will need to ensure that the VSM's SVS connection points to the new vCenter IP address.
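
    For reference (a sketch, not part of the reply above), repointing the VSM's SVS connection at a new vCenter IP looks roughly like this; the connection name and IP address are hypothetical.

      ! on the VSM
      svs connection vcenter
        no connect
        remote ip address 192.0.2.50
        connect
      ! then verify
      show svs connections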

    If you try to create a new, additional vCenter in parallel, then you will have downtime problems, as the port-profiles/network programming the guest VMs currently have will lose their 'backing' info if you attempt to migrate, because you would have to move the NICs to a standard or generic dVS before moving the hosts to the new vCenter.

    If you are already on vCenter 6, I believe you can vMotion from one host to another and change the vSwitch/dVS/port profile used as you go.

    We really need more detail on how you intend to migrate from one vCenter to the VCSA 6.0.

    Thank you

    Kirk...

  • Migrate UCS FC uplinks from direct-attached storage to Nexus 5K

    Hello

    Currently the UCS FIs are connected directly to the EMC storage using the FC expansion module (FC switch mode).

    I need to migrate these FC uplinks to the N5Ks, which will be connected to the storage (without downtime).

    Is this possible?

    I have two FIs. Can I disconnect FI B from the storage --> change its mode to end-host mode (NPV) and connect it to the N5K,

    then do the same on FI A?

    Since these two FIs are redundant and VMware ESX has built-in multipathing software, I shouldn't lose the connection to the storage at any time during this migration.

    Am I wrong?

    Regards

    Hello

    Changing the FC mode from switching to end-host (NPV) takes effect on both fabric interconnects at the same time and requires the switches to be reloaded. After the reload, you will need to reconfigure the FC settings as well.
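
    As an illustration only - and please verify the exact scope/command names against the UCS Manager CLI guide for your release, since this is from memory - the mode change is roughly the sketch below and, as noted, reboots both FIs, so plan a maintenance window.

      scope fc-uplink
      set fc-switching-mode end-host
      commit-buffer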

    . / Afonso
