ESX uplinks on a UCS blade

After installing ESX on a UCS blade (adapter type is Palo, number of NICs is 2), I sometimes see that the uplinks are vmnic2 and vmnic3 (output of esxcfg-nics -l).

Whereas the output of esxcfg-vswitch -l shows the uplink as vmnic0, and it also generates a warning saying there is no such device as vmnic0.

This does not happen every time. Does anyone know why this might be happening?

I have seen this happen when the NICs are not the first devices in the vNIC/vHBA placement. By default the HBAs are sometimes placed first; ESX doesn't like that and will enumerate the NICs as vmnic2 and vmnic3. If you move the NICs above the HBA cards, they come up as vmnic0 and vmnic1. I always reorder the placement to put my NICs before my storage.
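
A quick way to check this is to compare the two views the question mentions; a minimal sketch (the exact output columns vary by ESX version):

# List the physical NICs as the VMkernel enumerated them; with the NICs
# placed before the HBAs in the vNIC/vHBA placement they should show as
# vmnic0/vmnic1.
esxcfg-nics -l

# List vSwitch uplinks; a vmnic0 reference here with no matching entry in
# esxcfg-nics -l reproduces the warning described above.
esxcfg-vswitch -l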

Louis

Tags: Cisco DataCenter

Similar Questions

  • ESX/ESXi DRS/HA/FT mapping - UCS M71KR - Nexus 1000V

    Hi all

    I have a question about configuring ESX/ESXi in conjunction with UCS and the N1K.

    If I use a port channel (MAC-pinning based, due to the UCS hardware) on the N1K and I have only two NICs (M71KR), can I properly configure FT/HA or DRS?

    With a port channel, is VMware aware of the NIC redundancy? Might I get a redundancy warning on the management interface or something like that?

    Any ideas?

    TNX

    Dan

    Dan,

    This works with DRS, FT and HA. You can configure a port channel with MAC-pinning and still use the DRS, FT and HA features.

    To create the port channel, you must add both NICs to the DVS; from VMware's perspective it is then aware that you have NIC redundancy.
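
    For reference, a minimal N1K uplink port-profile sketch with MAC pinning (the profile name, VLANs and system VLAN are illustrative, not from this thread):

    port-profile type ethernet ucs-uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10,20
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 10
      state enabled

    With mac-pinning, each VEM pins vEths to one of the two NICs and repins them to the survivor on failure, which is what lets HA/FT/DRS treat the pair as redundant.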

    Louis

  • ESX hardware choice - blades vs. boxes, Intel vs. AMD FTW

    Guys, I wanted to get your opinion on hardware.  We currently have 5 DL380 G5s (they are becoming dogs, BTW) and I need to prepare for growth in 2009-2010.  The key points:

    • I have only about 10U left in my current cabinet at the co-lo, and for recurring cost reasons I don't want to take on another cabinet in the near future.

    • The goal is to fill this 10U with the best performance per U at a reasonable cost.

    • Everything is NFS, so no worries about iSCSI HBAs etc.

    • Blade architecture seems very appealing due to its density.  I can put in an IBM BladeCenter that would pack all the redundancy, management & switching into a tight 9U package, colocating 10 blades I think.  But what about performance?

    • What about those big Dell PowerEdge R905 rack servers, with AMD Opteron & 48 GB of RAM?  Are they a better option for VMware than blades?

    • How much more powerful are AMD chips at running ESX than Intel?  I'm a big AMD guy, I run 'em at home, and we all know the architecture is cleaner than Intel's FSB.

    p.s. The DL380s are configured this way: 2 x quad-core Xeon @ 2.33 GHz, 32 GB of RAM

    Time to break out your hardware religion!

    Cheers,

    Bart

    I'm working at a site with a range of hardware: the older installations are on PE6950s, there is a VDI installation on R905s, and a new migration environment being built on HP c7000 with BL685c G5 (full memory, Cisco/Brocade). All hosts are AMD, either 82xx or 83xx (dual or quad core). We are working on migrating the server virtual machines from the 6950s to the newly built HP blades (which are lightning quick).

    The VDI load on the R905s bottlenecks readily on vCPU scheduling without the users ever complaining about performance, which gives us 128 VDI sessions per R905 (quad quad-cores, 64 GB); we move to a connection broker shortly.

    On the server side we average 40 guests per 6950 and are memory limited (8 cores, 32 GB of RAM).

    We are hoping to reach 90 guests per BL685c, but are only in the build and test phase, with no complaints to date, in particular given the minimal cabling required. My calculations suggest that 96 GB will be the sweet spot for installed memory on the BL685s, although they can support 128 GB at great expense. This translates into a theoretical capacity of 600-700 VMs per chassis running on 128 cores, which is just huge for such a small piece of rack space.

    We went with AMD because at the time of the decision (many moons ago) their roadmap showed they would be stable on the new Barcelona platform for some time, whereas Intel was about to release a new socket that would likely break vMotion compatibility; the wisdom here is definitely to check the roadmaps of the CPU and server vendors before committing.

    Also, don't neglect the benefits of the remote management integrated with your blades; it's yet another extra expense on individual servers.

  • UCS 1.4(2b), N1K and uplinks

    Hi all

    I have a UCS cluster connected in vPC mode to a Nexus 7010.

    VMware ESXi 4.1U1 on the UCS blades, N1K with vPC-HM port channels; the blade NIC is the M71KR.

    The question is: what happens if both uplinks of one fabric interconnect fail? I mean cut fibres or something like that. Will the N1K port channel still show two active links?

    Maybe the redundancy is achieved by re-ARPing to find a MAC address (I've got UCS in switch mode), but I'm not sure. I also saw a feature in the new version for link-state tracking to handle a full uplink failure (how does it work?).

    Last question about redundancy... what happens if a UCS IOM resets? Will I see some traffic disruption?

    TNX

    Dan

    Dan,

    Just to be clear, let me confirm the following:

    - UCS in switch mode

    - M71KR adapters

    - N1K using MAC pinning (I guess)

    - Upstream connectivity from each FI is a vPC to a pair of N7Ks.

    In this case, the N1K has no visibility into the UCS uplinks.  All your VEM hosts see are two uplinks per host (one to each fabric interconnect).  If one of the two uplinks on an FI fails, traffic will re-pin to the remaining uplink on that FI.  If BOTH uplinks on an FI fail, then UCS will bring down the corresponding server links (link-down), and traffic should be re-routed by the VEM to an uplink on the other FI.  You can change this behavior (for local switching only), but the default action of UCSM is to bring down the corresponding server links if there are no uplinks available on an FI.   Make sense?

    Now, in the latest version of N1K (1.4) there is a new feature called Network State Tracking (NST) for use with vPC-HM (such as MAC pinning).  This feature tests connectivity on a VLAN by sending a probe packet and expecting to receive it back on another uplink group.   If you have a VLAN whose reachability SHOULD be tracked, you can track it with NST.  If the network becomes unavailable, you can choose to bring down the uplink and re-route traffic to another uplink.  This is useful for detecting failures beyond the first hop (which would be the fabric interconnects), such as a failure somewhere at your N7K level or beyond.
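
    From memory, the 1.4 NST configuration looks roughly like the following; treat every command here as an assumption and verify the exact syntax against the setup guide linked below:

    ! recalled from memory, not from this thread; verify before use
    n1000v(config)# track network-state enable
    n1000v(config)# track network-state interval 5
    n1000v(config)# track network-state vlan 10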

    Setup Guide: http://www.cisco.com/en/US/partner/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4/interface/configuration/guide/n1000v_if_5portchannel.html#wp1284321

    White paper: https://communities.Cisco.com/docs/doc-20657

    Command reference: http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4/command/reference/n1000v_cmds_t.html#wp1295608

    For your last question about a failure/reset of an IOM: the adapters on the corresponding blades will lose connectivity on that fabric.  This is where host-level redundancy comes into play to reroute traffic.  In the case of your N1K VEM hosts, they would simply re-route traffic via the path through the functional IOM of the chassis.

    Another point to consider is that the M71KR and M81KR adapters support Fabric Failover.  This is failover at the adapter level if there is a failure of any device between the adapter and the uplink (like the IOM or FI).  Fabric Failover is a configurable adapter option that re-routes traffic within the adapter's Menlo ASIC to the "other" fabric, such that the host will NOT see either of its two ports go down.  Without Fabric Failover, a failure of an IOM or FI would be seen by the adapter and that port would go down.  FF simply adds a level of redundancy in the adapter without having to rely on any host OS teaming/failover.  The M51KR, M61KR and M72KR adapters do NOT support this feature.
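
    As a side note, a minimal UCSM CLI sketch of enabling Fabric Failover on a vNIC (the org, service profile and vNIC names are made up for illustration; "a-b" means fabric A primary with failover to B):

    UCS-A# scope org /
    UCS-A /org # scope service-profile ESX-HOST-1
    UCS-A /org/service-profile # scope vnic eth0
    UCS-A /org/service-profile/vnic # set fabric a-b
    UCS-A /org/service-profile/vnic # commit-buffer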

    Kind regards

    Robert

  • No uplinks in a dvSwitch created by UCS Manager in vCSA 6.0, 6.0u1b, 6.0u2

    I am trying to configure UCS passthrough vNICs in vCenter Server Appliance 6.0u1b. I had:

    1. updated my UCS domain to version 2.2(6f);
    2. installed VMware-VCSA-all-6.0.0-3343019.iso;
    3. installed ESXi 5.1 with the latest updates and cisco-vem-v151-5.1-1.1.1.1.zip;
    4. moved this ESXi 5.1 host into vCenter;
    5. exported the vCenter extension from UCS Manager;
    6. added the extension to vCenter successfully;
    7. created the vCenter, Datacenter, folder and DVS in UCS Manager;

    After a few moments UCS created this DVS in vCenter. When I try to add the ESXi 5.1 host to this DVS, there are no uplinks:

    [screenshot: 60.png]

    vCenter is then unable to bind the ESXi vmnics to anything at all. In addition, I tried to add an ESXi host to a DVS without uplinks and obviously it worked. For comparison, here is a screenshot of vCenter Server Appliance 5.5 showing the same uplinks window. The servers are B200 M4 with VIC 1340 adapters in a chassis with 2208XP I/O modules. Is it possible to set up vNIC passthrough with vCSA 6.0u1b and ESXi 5.1/5.5 on Cisco UCS blade servers?

    This is a bug of the VMware vSphere Web Client 6.0, 6.0u1b, 6.0u2 and maybe later versions. It cannot find, or just cannot display, the uplinks in a UCS-associated distributed switch, so I could not add the host to the switch via the vSphere Web Client. The uplinks in the switch are actually fine, so this bug has several workarounds. A host can be added to the UCS-related distributed switch through:

    1. the VMware vSphere client application. It will automatically assign the first free uplink in the switch, and DirectPath I/O will work;
    2. host profiles. In this case I can assign any vmnic to any uplink I want. This can be done in the vSphere Web Client as well as in the vSphere client application;
    3. the VMware console client or SDK. I have not tried this, but I guess it will work fine.

    Success. [Screenshots of the distributed switch and the virtual machine omitted.]

  • FEX uplinks and server-side ports

    I am very new to the UCS environment and have to get up to speed fairly quickly. I decided to start with the chassis, but have a few questions.

    I hope someone can clarify how server-side ports are assigned to uplink ports on the 2204XP and 2208XP.

    My understanding is that the server-side ports are assigned to the uplinks based on the number of uplinks configured on the FEX.

    Once the FEX is initialized, the port assignments are hardcoded.

    Assume 2 FIs (6200 series) and 2 FEXes (2204).

    Each FEX has 2 uplinks to its respective FI.

    8 half-width blades in the chassis.

    My understanding is:

    Uplink port 1 receives blades (1, 3, 5, 7) and uplink 2 blades (2, 4, 6, 8)

    Question: which ports are active for each blade, and on which FEX?

    For example, blade 1 assigned to uplink 1 on FEX A with uplink 1 on FEX B as failover

    Blade 3 assigned to uplink 1 on FEX B with uplink 1 on FEX A as failover

    Thanks in advance, people...

    Hello and welcome to the community,

    "" My understanding is that the side ports server are assigned to the uplinks based on the number of uplink configured at the END.".

    Server ports are assigned to uplink ports either automatically (load balanced across the available uplink interfaces) or via manual pinning (hard-coded LAN pin groups).

    2 FIs and 2 2204 IOMs with 2 cables on each IOM: "Uplink port 1 receives blades (1, 3, 5, 7) and uplink 2 blades (2, 4, 6, 8)" < this is correct; odd blades go through odd ports and even blades through even ports >

    Which ports are active for each blade and on which FEX?

    In this case with a 2204, the IOM provides 16 internal ports, 2 for each blade, and you can check which one is active with the command "show platform software woodside m" from the FEX shell, which you reach with the following commands:

    UCS-A# connect iom <chassis-id>

    fex-1# show platform software woodside m

    You can also check the vNIC itself in the "Servers" tab by going to the Service Profile > Network, then double-clicking the vNIC, and you will see the configuration:

    For example: blade 1 assigned to uplink 1 on FEX A with uplink 1 on FEX B as failover

    Blade 3 assigned to uplink 1 on FEX B with uplink 1 on FEX A as failover

    This will be the case if you have the "chassis/FEX discovery policy" with its "link grouping preference" set to "None" instead of "Port Channel"; if it is "Port Channel", traffic won't fail over to the other IOM until all the links fail on the same I/O module, see below.

    NOTE: this option is valid only if your hardware is ALL second generation; in other words, it does not work with 61xx fabric interconnects or the 2104 IOM.
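
    If you want to check or change that preference from the CLI rather than the GUI, it looks roughly like this (a sketch from memory; verify the exact tokens with "?" completion on your version):

    UCS-A# scope org /
    UCS-A /org # scope chassis-disc-policy
    UCS-A /org/chassis-disc-policy # set link-aggregation-pref port-channel
    UCS-A /org/chassis-disc-policy # commit-buffer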

    I hope this helps; if it does, please rate it.

    -Kenny

    Note added about the limitation of the port-channel feature between the IOM and the fabric. Message was edited by: Kenny Perez

  • Basic UCS networking

    Hi guys

    I'm new to UCS and I need help getting network connectivity up and running for my UCS blade servers.

    I have a Nexus 5K connected to a 6248 FI.

    The Nexus port config is:

    interface Ethernet1/8
    description Port-channel to fabric interconnect A
    switchport mode trunk
    switchport trunk native vlan 2025
    spanning-tree port type edge trunk

    I set up a unified uplink port on the FI so I have FC and Ethernet via the same port.

    FI port config:

    interface Ethernet1/32
    description U: UnifiedUplink
    pinning border
    switchport mode trunk
    switchport trunk native vlan 2025
    switchport trunk allowed vlan 1,2025-2026,2276,4048
    udld disable
    no shutdown

    The vNIC on my server looks like this:

    interface Vethernet833
    description server 1/4, VNIC 1-fabric-A
    switchport mode trunk
    no pinning server sticky
    pinning server pinning-failure link-down
    no cdp enable
    switchport trunk allowed vlan 2276
    bind interface port-channel1290 channel 833
    service-policy type queuing input default-in-policy
    no shutdown

    Now, the thing is, I put an IP address on the server and it cannot ping the default gateway.
    I have followed guides online on how to configure things but can't for the life of me figure out why it does not work!
    Any pointers would be greatly appreciated.
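
    A couple of NX-OS checks on the FI can narrow this kind of problem down (a sketch; the VLAN number is taken from the configs above):

    UCS-A# connect nxos
    # confirm the vEth is actually pinned to the border port channel
    UCS-A(nxos)# show pinning server-interfaces
    # confirm the server MAC is being learned on VLAN 2276
    UCS-A(nxos)# show mac address-table vlan 2276

    One thing worth checking in this exact situation: the vEth allows only VLAN 2276 and does not mark it native, so if the server OS is not tagging its traffic, the frames never land in VLAN 2276 and the gateway will be unreachable.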

    Thank you

    AK

    Please mark your question as answered once you have it sorted, so future users can see the solution.

    -Kenny

  • B200 M2 blade server showing an amber light

    Hello

    I am facing an issue with one of our Cisco UCS B200 M2 blade servers; it is showing an amber light.

    We have a chassis with 4 B200 M2 servers. All the other servers are working properly, but the server in bay 3 is showing amber.

    So far we have reseated it and updated the firmware, tested with the service profile, tried it in a different bay, and changed the CNA card.

    But no results. After all that we are now suspecting the motherboard. The server is 2 months old and under warranty. Waiting for a solution.

    Thanks in advance

    Thank you

    Sunil

    Sunil,

    The power fluctuation may be due to a defective motherboard or voltage sensor.  I suggest you open a TAC case, and they may decide to replace the blade (motherboard only) for you.

    Connect to the UCSM CLI using Telnet or SSH and enable logging in your SSH client to capture the following output

    (Where X = chassis # and Y = blade #)

    UCS-A# connect local-mgmt

    UCS-A(local-mgmt)# show tech-support chassis X cimc Y

    (capture the output, then exit back to the main CLI)

    UCS-A# connect cimc X/Y

    [ help ]# sensors

    Attach the outputs to the TAC case when you open it, for a quicker resolution.

    Kind regards

    Robert

  • Management VLAN with the 1000V and UCS

    If I want to use VLAN 10 for management, do I configure the following:

    1000V - VM vEthernet port profile configured with VLAN 10, Ethernet uplink profile includes VLAN 10

    UCS - vNIC in the service profile includes VLAN 10, and the UCS uplink trunk includes VLAN 10

    Upstream switch: include VLAN 10 in the trunk port.

    Ok?

    Now, can I then use this management VLAN for everything? I mean for 1000V management, vSphere management, FI management and switch management? Or should 1000V management be different from the others?

    Hi Atle,

    Yes, what you mentioned is right in terms of the activities you need to perform. However, I would add a few points:

    (1) Define the VLAN on UCS first; unless you do this, you won't be able to add it to the vNIC. Furthermore, once you have defined the VLAN it will automatically get added to the trunk list on the uplink ports (unless you have disjoint L2 configured). See the CLI sketch below.

    (2) You can have the same VLAN for all management, unless there is some traffic that you would not want certain devices to see.
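
    A minimal UCSM CLI sketch for point (1), with an illustrative VLAN name and ID:

    UCS-A# scope eth-uplink
    UCS-A /eth-uplink # create vlan MGMT 10
    UCS-A /eth-uplink/vlan # exit
    UCS-A /eth-uplink # commit-buffer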

    ./Afonso

  • Shutting down an Ethernet uplink port causes the appliance port to go down

    Dear all,

    I want to confirm whether this kind of network traffic is switched internally within the FI.

    I disabled the uplink port and found that the appliance port went down as well.

    Must appliance-port traffic be routed out of an uplink port and then back to the UCS server?

    Thank you for the clarification.

    Best regards

    Dennis Dai

    Expected behavior.  By default, if there are no uplinks online & forwarding, then all southbound links (server links and appliance ports) will be brought down as well.

    See the explanation from Abi here for more information on how to change this behavior.

    https://supportforums.Cisco.com/thread/2187144?TSTART=0

    In short, you need to change the network control policy of the appliance port under the Appliances tab.
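
    If you prefer the CLI, the knob lives in a network control policy; roughly as follows (a sketch from memory, policy name illustrative; verify the exact tokens on your version):

    UCS-A# scope org /
    UCS-A /org # create nw-ctrl-policy appliance-ncp
    UCS-A /org/nw-ctrl-policy # set uplink-fail-action warning
    UCS-A /org/nw-ctrl-policy # commit-buffer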

    Kind regards

    Robert

  • Doesn't UCS Mini support a 2-chassis configuration?

    Hello, I found in the data sheet that the UCS 6324 supports up to 15 servers: 8 in the chassis and 7 direct-attached UCS C-Series.

    http://www.Cisco.com/c/en/us/products/collateral/servers-unified-computi...

    "The Cisco UCS 6324 Fabric Interconnect embeds connectivity into the Cisco UCS 5108 Blade Server Chassis to provide a smaller domain of up to 15 servers (8 blade servers and up to 7 direct-connect rack servers)."

    But I have seen in a Cisco presentation that the UCS Mini can support up to 2 5108 chassis (one with the FI 6324 and the other with 2208XP, 2204XP or 2104XP fabric extenders), but I can't find the related documents.

    Could you please confirm this point? Is there any documentation?

    Thank you very much and best regards,

    Raquel

    Real! FCS + 3 months?

    However, I have seen a lot of customers with the classic UCS configuration and 2 chassis. Cisco offered, and still offers, very attractive bundles, which makes such a small entry configuration attractive.

  • UCS B200 M4 on UCS Mini

    Hello

    Is the Cisco UCS B200 M4 supported in the UCS Mini solution?

    When I check the UCS B200 M4 data sheet below, it says "the UCS B200 M4 Blade Server mounts in a Cisco UCS 5100 Series blade server chassis or UCS Mini blade server chassis."

    http://www.Cisco.com/c/dam/en/us/products/collateral/servers-unified-com...

    But I cannot configure the UCS B200 M4 in CCW when I choose the UCS Mini chassis.

    Kind regards

    Frédéric

    Just received information that it will be supported with the 3.0(2) release, expected Q2CY15.

  • VTP on UCS?

    Hi all

    I need to create about 100 VLANs on the uplinks from the UCS fabric interconnect cluster to a Nexus 7K, and I cannot find a way to get UCS to learn VLANs using VTP or anything like that. I have prepared a list of commands to run in the CLI, but that's still a lot of work, because after each "create vlan" command I need to add an "exit" command to return to the previous mode, otherwise the next "create vlan" command won't work, unlike on Catalyst switches.

    Does the UCS blade system support VTP? What is the best way to create that many VLANs?

    Thank you

    UCS does not support VTP at this point.

    I suggest you make a little script in Notepad and paste it into the fabric interconnect CLI.
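
    A sketch of such a script (any POSIX shell; the VLAN range and names are illustrative), whose output you paste into the FI CLI:

    # emit UCSM CLI commands for VLANs 101-200
    echo "scope eth-uplink"
    for i in $(seq 101 200); do
      echo "create vlan vlan$i $i"
      echo "exit"    # drop back to the eth-uplink scope, as noted above
    done
    echo "commit-buffer"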

  • Network design for a blade chassis configuration

    What would be best practice for a new VMware environment consisting of an HP blade chassis with 4 Cisco 3020 switches and 2 Brocade switches?

    The blades are half-height, so each has a total of 4 physical NICs.  I was planning to EtherChannel the 4 fiber Ethernet uplink ports per switch.  All 3020 switches will feed a single core switch.  The Brocades will each feed two SAN switches.

    Yes, that's what you do on the blade switches to make sure that all the downlink ports go down if all uplinks fail.
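
    On the CBS3020 that feature is link-state tracking; a minimal IOS sketch (the port ranges are assumptions, adjust to your chassis):

    ! bring the internal (server-facing) ports down when the uplink channel fails
    link state track 1
    interface Port-channel1
     link state group 1 upstream
    interface range GigabitEthernet0/1 - 16
     link state group 1 downstream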

    On the core switch, you configure the channel as usual; there is nothing special that needs to be done because of the link-state group on the blade switches. If the core switch has several modules/slots, spread the uplinks from each blade switch across modules so that a module can fail without interrupting production.

    André

  • Blade servers - what benefits do they bring in VI?

    Hi all:

    I keep hearing about the advantages of blades, and I gather a lot of you use them for your VI deployments. Can you point me to the advantages and perhaps disadvantages of blade servers, and say whether they have been cost-effective for your environment?

    Thank you

    J

    I've been running ESX on 4 Sun X6250 blades (in a pair of Sun 6000 chassis) for almost a year now.

    (1) Admin complexity - well, now I have 2 Sun 6000s to manage which I didn't have before.

    (2) Robustness - the Sun hardware is infinitely more stable than the IBM 4-ways it replaced (0 HW problems vs. plenty); don't even get me started on Dell.

    (3) Cost - 4 blades + 2 chassis << 4 stand-alone servers (add the SPARC blades and the savings made the decision obvious).

    (4) Cables/switches - the Sun 6000s have dedicated ports and PCI ports for each blade, so I have the same number of cables, except for the management interfaces, where a single UI for the 6000 also provides the console management interface for the blades.

    Gotchas:

    (A) The Sun X6250s have no serial port; I didn't even think to check this until I went to add the port to my monitoring system (Orion). It was a big DUH-OH, and Orion is still sitting on an old IBM.

    (B) With only 2 Ethernet ports per blade, the VI Client interface shares ports with the teamed ports that serve the virtual machines. No real problem, EXCEPT that when building the ESX server I have to start on a non-teamed port, then add a teamed port for the guests, then move the VI Client interface over to the trunked port, then remove the non-teamed one, trunk it and team it (see the sketch after this list).

    (C) Power - whereas I can pull out an Intel or SPARC server, re-arrange the rack and pop in a newer model, one blade center takes 6 220V power supplies; we had to run new power to those 2 grids.
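
    The port shuffle described in (B) looks roughly like this with the classic ESX service-console CLI (the vSwitch and vmnic names are assumptions):

    # add the trunked/teamed NIC as a second uplink to the build-time vSwitch
    esxcfg-vswitch -L vmnic1 vSwitch0
    # once connectivity is confirmed over vmnic1, remove the original uplink
    esxcfg-vswitch -U vmnic0 vSwitch0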
