iSCSI MPIO (Multipath) with Nexus 1000v

Has anyone out there gotten iSCSI MPIO working successfully with the Nexus 1000v? I followed Cisco's guide to the best of my knowledge and tried a number of other configurations without success - vSphere always displays the same number of paths as it shows targets.

The Cisco document reads as follows:

Before you begin the procedures in this section, you must know or do the following:

•You have already configured the host with a port channel that includes two or more physical NICs.

•You have already created the VMkernel NICs used to access the external SAN storage.

•A VMkernel NIC can be pinned or assigned to a physical NIC.

•A physical NIC can have multiple VMkernel NICs pinned or assigned to it.

What does "a VMkernel NIC can be pinned or assigned to a physical NIC" mean in the context of the Nexus 1000v? I know how to pin to a physical NIC with a standard vDS, but how does it work with the 1000v? The only thing related to "pinning" I could find in the 1000v was port-channel sub-groups. I tried creating a port channel with manual sub-groups, assigning sub-group-id values to each uplink, then assigning a pinning id to my two VMkernel port-profiles (and directly to the vEthernet ports as well), but that doesn't seem to work for me.
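For anyone comparing notes, here is a minimal sketch of the manual sub-group / pinning-id approach described above (profile names, VLAN, interfaces, and IDs are examples, not the actual config):

port-profile type ethernet iSCSI-Uplinks
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100
  channel-group auto mode on sub-group manual
  no shutdown
  state enabled

interface Ethernet3/2
  sub-group-id 0
interface Ethernet3/3
  sub-group-id 1

port-profile type vethernet iSCSI-VMK-A
  vmware port-group
  switchport mode access
  switchport access vlan 100
  pinning id 0
  no shutdown
  state enabled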

I can ping both iSCSI VMkernel ports from the upstream switch and from inside the VSM, so I know Layer 3 connectivity is there. A strange thing, however, is that I see only one of the two VMkernel-related MAC addresses on the upstream switch. Both addresses show up inside the VSM.

What am I missing here?

Just to close the loop in case someone stumbles across this thread.

It turned out to be a bug in the Cisco Nexus 1000v. The bug is only relevant to ESX hosts that have been upgraded to 4.0u2 (and now 4.1). The short-term workaround is to rev back to 4.0u1. The medium-term fix will be integrated into a maintenance release for the Nexus 1000V.

Our code implementation for retrieving the iSCSI multipath information was incorrect, but 4.0U1 allowed it. 4.0U2 no longer tolerates our poor implementation.

For iSCSI multipath with the N1KV, remain on 4.0U1 until we have a maintenance release for the Nexus 1000V.

Tags: VMware

Similar Questions

  • Load balancing with VMware and the Nexus 1000v

    With VMware vDS or vSS, I see many designs use an active/standby approach for NIC teaming on the uplinks, i.e. one vmnic is active on fabric A and one vmnic is standby on fabric B. This setting is configured in vSphere.

    See this article: http://bradhedlund.com/2010/09/15/vmware-10ge-qos-designs-cisco-ucs-nexus/

    Is it correct that we can implement a similar scheme with the 1000V? All the networking is in the 1000V config, and as far as I know we can only configure the uplinks in these 3 modes:

    1. LACP
    2. vPC-Host Mode
    3. vPC-Host Mode MAC pinning

    and they are all 'active' based.

    Post edited by: Atle Dale

    Yes.  All uplinks are used.  Each VM virtual interface is pinned to one of the uplinks.  If one uplink goes down, all interfaces pinned to it get dynamically re-pinned to the remaining uplinks.  A MAC address will only be seen on a single uplink at a time.  This is how MAC pinning prevents STP loops.
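    For reference, a MAC-pinning uplink on the 1000v is just an Ethernet port-profile along these lines (the name and VLANs are examples):

    port-profile type ethernet System-Uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10-20
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled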

    Robert

  • UCS environment vSphere 5.1 upgrade to vSphere 6 with Nexus 1000v

    Hi, I've had trouble getting help from TAC and from internet research on how to upgrade our UCS environment, and as a last resort I thought I would try a post on the forum.

    My environment is a UCS chassis environment with dual fabric interconnects and Nexus 1110 appliances hosting an HA pair of 1000v VSMs, running vSphere 5.1.  We have updated all our equipment (blades, C-Series, and UCS Manager) to the firmware versions Cisco supports for vSphere 6.

    We have even upgraded our Nexus 1000v to 5.2(1)SV3(1.5a), which is a version that supports vSphere 6.

    To save us some processing and licensing cost, and because our previous vCenter server was underperforming and unreliable, I looked at migrating to the vCenter 6 virtual appliance.  Nowhere can I find information that advises on how to begin the VMware upgrade process when the 1000v is incorporated.  I would have thought that, since it is a virtual machine, if you have everything upgraded to versions that support 6 it would all go smoothly.

    One response I got from TAC was that we had to move all of our VMs onto a standard switch to upgrade vCenter, but given we are already on a 1000v version supported for vSphere 6, that left me confused. Nor would I get the opportunity to rework our environment, being a hospital, with that kind of downtime and outage window for more than 200 machines.

    Can anyone provide some tips, or has anyone else tried a similar upgrade path?

    Greetings.

    It seems that you have already upgraded your N1k components (are the VEMs upgraded to match?).

    Is your question more about how to upgrade/migrate from one vCenter server to another?

    If you import your vCenter database into your new vCenter, there shouldn't be much downtime, as the VSM/VEM will still see vCenter and the N1k dVS.  If you change the vCenter server name/IP but import the old vCenter DB, there are a few extra steps; you will need to ensure that the VSM SVS connection points at the new vCenter IP address.
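    For reference, re-pointing the VSM's SVS connection at the new vCenter IP is a small change on the VSM, sketched below (the connection name and address are examples, not your actual values):

    n1kv# configure terminal
    n1kv(config)# svs connection vcenter          ! use your existing connection name
    n1kv(config-svs-conn)# no connect             ! disconnect before changing the target
    n1kv(config-svs-conn)# remote ip address 192.0.2.50    ! new vCenter/VCSA address
    n1kv(config-svs-conn)# connect
    n1kv# show svs connections                    ! verify the connection comes back up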

    If you try to create a new additional vCenter in parallel, then you will have downtime problems, as the port-profile/network assignments the guest VMs currently have will lose their 'backing' info when you attempt to migrate, because you have to move the NICs to a standard or generic dVS before moving the hosts to the new vCenter.

    If you are already on vCenter 6, I believe you can vMotion from one host to another regardless of the vSwitch/dVS/port-profile in use.

    We really need more detail on how you plan to migrate from one vCenter to the VCSA 6.0.

    Thank you

    Kirk...

  • The Nexus 1000V loop prevention

    Hello

    I wonder if there is a mechanism I can use to protect the network against an L2 loop created on the virtual server side in a VMware environment with the Nexus 1000V.

    I know the Nexus 1000V can prevent loops on the external links, but there is no information about features that can prevent a loop caused by bridging set up in the OS of a VMware virtual server.

    Thank you in advance for an answer.

    Regards

    Lukas

    Hi Lukas.

    To avoid loops, the N1KV does not forward traffic between physical NICs, and it also silently drops traffic between vNICs that are bridged by the guest operating system.

    http://www.Cisco.com/en/us/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html#wp9000156

    No explicit configuration is needed on the N1KV.

    Padma

  • [Nexus 1000v] VEM cannot be added to the VSM

    Hi all

    In my lab, I have a problem with the Nexus 1000V where the VEM cannot be added to the VSM.

    + The VSM is already installed on ESX1 (standalone or HA) and you can see:

    Cisco_N1KV# show module

    Mod  Ports  Module-Type                       Model               Status
    ---  -----  --------------------------------  ------------------  ------------
    1    0      Virtual Supervisor Module         Nexus1000V          active *

    Mod  Sw                Hw
    ---  ----------------  ------------------------------------------------
    1    4.2(1)SV1(4a)     0.0

    Mod  MAC-Address(es)                         Serial-Num
    ---  --------------------------------------  ----------
    1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA

    Mod  Server-IP        Server-UUID                           Server-Name
    ---  ---------------  ------------------------------------  -------------------
    1    10.4.110.123     NA                                    NA

    + On ESX2, the VEM is installed:

    [root@ESX2 ~]# vem status

    VEM modules are loaded

    Switch Name    Num Ports   Used Ports  Configured Ports  MTU     Uplinks
    vSwitch0       128         3           128               1500    vmnic0

    VEM Agent (vemdpa) is running

    [root@ESX2 ~]#

    Any advice on how to get this working?

    Thank you very much

    Doan,

    Need more information.

    Was the host added to the 1000v DVS via vCenter successfully?

    If so, there is probably a problem with your control VLAN communication between the VSM and VEM.  Start there and ensure that the VLAN has been created on all intermediate switches and is allowed on every trunk end to end.
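    A few commands commonly used to check that control path end to end are sketched below (the VLAN ID and prompts are placeholders):

    ! On the VSM - confirm the domain ID and the control/packet VLANs
    n1kv# show svs domain
    n1kv# show module          ! the VEM should appear here once control traffic flows

    # On the ESXi host - confirm the VEM sees the same domain and VLAN settings
    ~ # vemcmd show card

    ! On each upstream/intermediate switch - confirm the control VLAN exists and is trunked
    switch# show vlan id 260
    switch# show interface trunk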

    If you're still stuck, paste the running config of your VSM.

    Kind regards

    Robert

  • Migration of 4.1 hosts/guests from Nexus 1000V to a 5.1 VDS using PowerCLI

    Are there scripts out there that will migrate a host, and the guests on that host, from an ESXi 4.1 Virtual Center with a Nexus 1000v switch to a new Virtual Center 5.1 with a VMware VDS?

    I don't want to upgrade the hosts to 5.1, or the guests, only move them from the 4.1 to the 5.1 Virtual Center and upgrade at a later date.

    Maybe Gabe's post on migrating a distributed vSwitch to a new vCenter can help?

  • Nexus 1000v

    Hey guys,

    Hope this is the right place to post.

    I'm currently working on a legacy design to put in an ESXi 5 with Nexus 1000v solution, with a Cisco UCS 5180 back-end.

    I have a few questions about what people do in the real world with this type of setup:

    In order to use the Nexus 1000v, do I need vCenter?  The reason I ask is that it was not included on the initial kit list.  I would say Virtual Center is needed for HA and vMotion, but the client wants to know if we can use what we have for now and license and implement vCenter at a later date.

    I've done some reading about the Nexus 1000v, as I've never designed a solution with it before.  Cisco recommends 2 VSMs be implemented for redundancy.  Is this correct?  Do I need a license for each VSM?  I also assume that to meet this best practice I need HA and vMotion and, therefore, vCenter?

    The virtual machines for the VSMs, can they sit inside the cluster?

    Thanks in advance for any help.

    In order to use the Nexus 1000v, do I need vCenter?

    Yes - the Nexus 1000v is a distributed virtual switch and it requires vCenter.

    I've done some reading about the Nexus 1000v, as I've never designed a solution with it before.  Cisco recommends 2 VSMs be implemented for redundancy.  Is this correct?  Do I need a license for each VSM?  I also assume that to meet this best practice I need HA and vMotion and, therefore, vCenter?

    I'm not sure about the VSMs, but yes, HA and vMotion are required as part of best practices.

    The virtual machines for the VSMs, can they sit inside the cluster?

    Yes, they can exist within the cluster.

  • VXLAN on UCS: IGMP with Catalyst 3750, 5548 Nexus, Nexus 1000V

    Hello team,

    My lab consists of a Catalyst 3750 with SVIs acting as the router, Nexus 5548s in a vPC setup, UCS in end-host mode, and the Nexus 1000V with the segmentation feature (VXLAN) enabled.

    I have two different VLANs for VXLAN transport (140, 141) to demonstrate connectivity across L3.

    Hosts with a VMkernel on VLAN 140 join the multicast group fine.

    Hosts with a VMkernel on VLAN 141 do not join the multicast group.  As a result, VMs on these hosts cannot ping VMs on hosts on VLAN 140, and they can't even ping each other.

    I turned on debug ip igmp on the L3 switch, and the output indicates a timeout while it is waiting for a report from VLAN 141:

    Oct 15 08:57:34.201: IGMP(0): Send v2 general Query on Vlan140
    Oct 15 08:57:34.201: IGMP(0): Set report delay time to 3.6 seconds for 224.0.1.40 on Vlan140
    Oct 15 08:57:36.886: IGMP(0): Received v2 Report on Vlan140 from 172.16.66.2 for 239.1.1.1
    Oct 15 08:57:36.886: IGMP(0): Received Group record for group 239.1.1.1, mode 2 from 172.16.66.2 for 0 sources
    Oct 15 08:57:36.886: IGMP(0): Updating EXCLUDE group timer for 239.1.1.1
    Oct 15 08:57:36.886: IGMP(0): MRT Add/Update Vlan140 for (*,239.1.1.1) by 0
    Oct 15 08:57:38.270: IGMP(0): Send v2 Report for 224.0.1.40 on Vlan140
    Oct 15 08:57:38.270: IGMP(0): Received v2 Report on Vlan140 from 172.16.66.1 for 224.0.1.40
    Oct 15 08:57:38.270: IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 172.16.66.1 for 0 sources
    Oct 15 08:57:38.270: IGMP(0): Updating EXCLUDE group timer for 224.0.1.40
    Oct 15 08:57:38.270: IGMP(0): MRT Add/Update Vlan140 for (*,224.0.1.40) by 0
    Oct 15 08:57:51.464: IGMP(0): Send v2 general Query on Vlan141    <----- it just hangs here until timeout and goes back to Vlan140
    Oct 15 08:58:35.107: IGMP(0): Send v2 general Query on Vlan140
    Oct 15 08:58:35.107: IGMP(0): Set report delay time to 0.3 seconds for 224.0.1.40 on Vlan140
    Oct 15 08:58:35.686: IGMP(0): Received v2 Report on Vlan140 from 172.16.66.2 for 239.1.1.1
    Oct 15 08:58:35.686: IGMP(0): Received Group record for group 239.1.1.1, mode 2 from 172.16.66.2 for 0 sources
    Oct 15 08:58:35.686: IGMP(0): Updating EXCLUDE group timer for 239.1.1.1
    Oct 15 08:58:35.686: IGMP(0): MRT Add/Update Vlan140 for (*,239.1.1.1) by 0

    If I do a show ip igmp interface, I see that there are no joins for VLAN 141:

    Vlan140 is up, line protocol is up
      Internet address is 172.16.66.1/26
      IGMP is enabled on interface
      Current IGMP host version is 2
      Current IGMP router version is 2
      IGMP query interval is 60 seconds
      Configured IGMP query interval is 60 seconds
      IGMP querier timeout is 120 seconds
      Configured IGMP querier timeout is 120 seconds
      IGMP max query response time is 10 seconds
      Last member query count is 2
      Last member query response interval is 1000 ms
      Inbound IGMP access group is not set
      IGMP activity: 2 joins, 0 leaves
      Multicast routing is enabled on interface
      Multicast TTL threshold is 0
      Multicast designated router (DR) is 172.16.66.1 (this system)
      IGMP querying router is 172.16.66.1 (this system)
      Multicast groups joined by this system (number of users):
          224.0.1.40(1)
    Vlan141 is up, line protocol is up
      Internet address is 172.16.66.65/26
      IGMP is enabled on interface
      Current IGMP host version is 2
      Current IGMP router version is 2
      IGMP query interval is 60 seconds
      Configured IGMP query interval is 60 seconds
      IGMP querier timeout is 120 seconds
      Configured IGMP querier timeout is 120 seconds
      IGMP max query response time is 10 seconds
      Last member query count is 2
      Last member query response interval is 1000 ms
      Inbound IGMP access group is not set
      IGMP activity: 0 joins, 0 leaves
      Multicast routing is enabled on interface
      Multicast TTL threshold is 0
      Multicast designated router (DR) is 172.16.66.65 (this system)
      IGMP querying router is 172.16.66.65 (this system)
      No multicast groups joined by this system

    Is there a way to check why the hosts on VLAN 141 are not joining successfully?  The port-profile configuration on the 1000V for the VLAN 140 and VLAN 141 uplinks and vmkernels is identical, except for the different VLAN numbers.

    Thank you

    Trevor

    Hi Trevor,

    One quick thing to check would be the IGMP config for both VLANs.

    Where did you configure the IGMP querier for VLANs 140 and 141?

    Does the VXLAN transport cross any routers? If so, you would need multicast routing enabled.
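    If the Catalyst 3750 SVIs are meant to be the queriers, a minimal sketch is to enable multicast routing and PIM on both SVIs (addresses match the output above; the PIM mode is an assumption):

    ip multicast-routing distributed
    !
    interface Vlan140
     ip pim sparse-dense-mode
    !
    interface Vlan141
     ip pim sparse-dense-mode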

    Thank you!

    . / Afonso

  • Nexus 1000v VSM compatibility with older versions of VEM?

    Hello everyone.

    I would like to upgrade our Nexus 1000v VSM from 4.2(1)SV1(5.1) to 4.2(1)SV2(2.1a) because we are moving from ESXi 5.0 Update 3 to ESXi 5.5 in the near future. I was not able to find a compatibility list for the new version when it comes to VEM versions, so I was wondering if the new VSM supports the older VEM versions we are running, so I don't have to upgrade everything at once. I know it supports both of our ESXi versions.

    Best regards

    Pete

    As you found in the documentation, moving from the 1.5 release to the latest code is supported from a VSM perspective.  What is not documented is the detail on the VEM side.  In general, the VSM is backward compatible with older VEMs (to a degree, and that degree is not published).  Although it is not documented (AFAIK), the verbal understanding is that the VEM can be a version or two behind, but you should try to minimize the time that you run in this configuration.

    If you plan to run mixed VEM versions while getting your hosts upgraded (totally fine, that's how I do mine), it is best to move to the upgraded VEM version as you upgrade the hypervisor.  Since you are going from ESXi 5.0 to 5.5, you can create an ISO that contains the Cisco VIBs, your favorite async driver (if any), and the ESXi 5.5 image all bundled together, so the upgrade for a given host happens all at once.  You probably already have this technique down cold, but the links generated by the Cisco tool below will show you how to proceed.  It also gives some handy URLs to share with each person performing a role in this upgrade.  Here is the link:

    Nexus 1000V and ESX upgrade utility
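    If it helps, a quick way to confirm what each piece is running during a mixed-version window is sketched below (prompts and hostnames are illustrative):

    ! On the VSM - the Sw column of "show module" lists the VEM version each host module is running
    n1kv# show module

    # On each ESXi host - shows the installed VEM build and the VSM build it pairs with
    ~ # vem version
    ~ # vemcmd show version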

    PS - one other thing: take offline clones of your VSMs.  Even though they are fairly easy to recover, having a true clean clone will save some secret sauce that you might otherwise lose in a failure scenario.  Just power off a VSM, then right-click and clone.  Power that VSM back on and fail over the HA pair, then take the other one down and get a clone of it.  So as a safety measure for this upgrade, get your clones now from the current 1.5 VSMs, then some time after your upgrade take some offline clones saved from the new version.

  • Change the maximum number of ports on a Nexus 1000v vDS online with no disruption?

    Hello

    Can I change the maximum number of ports on a Nexus 1000v vDS online with no disruption?


    I'm not sure whether this link applies, as it only mentions vSphere 4.x:

    VMware KB: Increase in the maximum number of vNetwork Distributed Switch (vDS) ports in vSphere 4.x

    I have ESXi and vCenter 5.1.

    Thank you
    Saami

    There is no downtime when you change the "vmware max-ports" quantity on a port-profile. It can be done during production.

    You can also create a new port-profile with a test virtual machine and change the "vmware max-ports" there first if you want the warm and fuzzies.
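    For reference, the change itself is a one-liner on the VSM, as in this sketch (the port-profile name and port count are examples):

    n1kv# configure terminal
    n1kv(config)# port-profile type vethernet VM-Data    ! example vEthernet port-profile
    n1kv(config-port-prof)# vmware max-ports 256          ! raise the port count in place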

  • Upgrade to vCenter 4.0 with Cisco Nexus 1000v installed

    Hi all

    We have vCenter 4.0 and ESX 4.0 servers and we want to upgrade to version 4.1. We also have the Nexus 1000v installed on the ESX servers and in vCenter. I found a VMware KB, http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1024641, but only the ESX server upgrade is explained in that KB; what about vCenter?  Our vCenter 4.0 is installed on Windows 2003 64-bit with SQL 2005 64-bit.

    Can we upgrade vCenter in place with the Nexus 1000v plugin installed without problems? And how do we proceed? What are the effects on the Nexus 1000v plugin installed on the vCenter server during the upgrade?

    Nexus 1000v version 4.0(4)SV1(3a) has been installed on the ESX servers.

    Regards

    Mecyon,

    When upgrading vSphere 4.0 -> 4.1 you must update the VEM software (.vib) also.  The plugin for vCenter won't change, and nothing changes on your VSM(s).  The only thing you need to update is the VEM on each host.  See the matrix posted above for the .vib to install once your host has been updated.
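    For reference, a manual VEM update on a classic ESX 4.x host usually looks something like this sketch (the bundle file name below is an example; use the bundle the compatibility matrix points you to):

    # Copy the VEM offline bundle to the host, then from the ESX 4.x service console:
    esxupdate --bundle=VEM410-example-release.zip update    # example file name, not the real one
    # Verify the VEM is loaded and healthy afterwards:
    vem status
    vemcmd show version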

    Note: after this upgrade you will be able to update either your vSphere host software or the 1000v independently (without having to update the other).  1000v VEM dependencies have been removed since vSphere 4.0 Update 2.

    Kind regards

    Robert

  • Design/implementation of Nexus 1000V

    Hi team,

    I have a premium partner who is an ATP on Data Center Unified Computing. He has posted this question, I hope you can help me to provide the resolution.

    I have questions about nexus 1KV design/implementation:

    -How to migrate virtual switches other than vSwitch0 (each ESX server has 3 vSwitches and the VSM installation wizard will only migrate vSwitch0)? For example, other vSwitches with other VLANs... please tell me how.
    -With VUM (VMware Update Manager), can the VEM modules be installed on the ESX servers, or must the VEM be installed manually on each ESX server?
    -Assuming VUM installs all the VEM modules, is the VEM (vib package) version automatically compatible with the existing VMware build?
    -Is it necessary to create the PACKET/CONTROL port groups on ALL the ESX servers before migrating to the Nexus 1000v, or is the VEM installation alone enough?
    -According to the Cisco manuals the VSM can participate in vMotion, but how? What is the recommendation? When the primary VSM is being moved, does the secondary VSM take control? What happens to connectivity for all virtual machines?
    -When there are two clusters in one VMware vCenter, how should the VSM be installed/configured?
    -For high availability, what is the best Nexus design choice, considering the VMware features (FT, DRS, vMotion, clusters)?
    -How to migrate an existing iSCSI VMkernel port group to the Nexus? What are the steps? The Cisco manual "Migration from VMware vSwitch to Cisco Nexus 1000V" shows how to generate the port profile, but
    how is the iSCSI target created (IP address, username/password)? Where is it defined?
    -Assuming the VEM licenses are not enough for all the ESX servers, what will happen to the connectivity of the virtual machines on hosts without VEM licenses? Can they work with VMware vSwitches?

    I have to install the Nexus 1000V in a VMware platform with VDI, with multiple ESX servers, 3 vSwitches on each ESX server, several virtual machines running, two clusters defined with vMotion and DRS active, and central iSCSI storage.

    I have several Cisco manuals on the Nexus, but I see they focus mainly on installation topics; migration options are not covered extensively. Do you have "success stories" or customer experiences of implementations involving migration to the Nexus?

    Thank you in advance.

    Jojo Santos

    Cisco partner Helpline presales

    Thanks for the questions, Jojo, but this type of 1000v question is better suited for the Nexus 1000v forum:

    https://www.myciscocommunity.com/Community/products/nexus1000v

    Answers inline.  I suggest you go through the Getting Started Guides to acquire a solid understanding of the basic concepts & operations prior to deployment.

    jojsanto wrote:

    Hi Team,

    I have a premium partner who is an ATP on Data Center Unified Computing. He posted this question, hopefully you can help me provide resolution.

    I have questions about nexus 1KV design/implementation:

    -How to migrate virtual switches other than vSwitch0 (each ESX server has 3 vSwitches and the VSM installation wizard only migrates vSwitch0)? For example, other vSwitches with other VLANs... please tell me how...

    [Robert] After your initial installation you can easily migrate all VMs within the same vSwitch Port Group at the same time using the Network Migration Wizard.  Simply go to Home - Inventory - Networking, right click on the 1000v DVS and select "Migrate Virtual Machine Networking..."   Follow the wizard to select your Source (vSwitch Port Groups) & Destination DVS Port Profiles

    -With VUM (VMware Update Manager) is it possible to install the VEM modules on the ESX servers, or must the VEM be installed manually on each ESX server?

    [Robert] As per the Getting Started & Installation guides, you can use either VUM or manual installation method for VEM software install.

    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_3/getting_started/configuration/guide/n1000v_gsg.html

    -Supposing VUM installs all the VEM modules, is the VEM version (vib package) automatically compatible with the existing VMware build?

    [Robert] Yes.  Assuming VMware has added all the latest VEM software to their online repository, VUM will be able to pull down & install the correct one automatically.


    -Is it necessary to create the PACKET/MANAGEMENT/CONTROL port groups on ALL ESX servers before migrating to the Nexus 1000v, or is the VEM installation alone enough?

    [Robert] If you're planning on keeping the 1000v VSM on vSwitches (rather than migrating itself to the 1000v) then you'll need the Control/Mgmt/Packet port groups on each host you ever plan on running/hosting the VSM on.  If you create the VSM port group on the 1000v DVS, then they will automatically exist on all hosts that are part of the DVS.

    -According to the Cisco manuals the VSM can participate in vMotion, but how? What is the recommendation? When the primary VSM is being moved, does the secondary VSM take control? What happens to connectivity for all virtual machines?

    [Robert] Since a VMotion does not really impact connectivity for a significant amount of time, the VSM can be easily VMotioned around even if its a single Standalone deployment.  Just like you can vMotion vCenter (which manages the actual task) you can also Vmotion a standalone or redundant VSM without problems.  No special considerations here other than usual VMotion pre-reqs.

    -When there are two clusters in one VMware vCenter, how must the VSM be installed/configured?

    [Robert] No different.  The only consideration that changes "how" you install a VSM is a vCenter with multiple DataCenters. VEM hosts can only connect to a VSM that resides within the same DC.  Different clusters are not a problem.

    -For high availability, which is the best Nexus design choice, considering the VMware features (FT, DRS, vMotion, clusters)?

    [Robert] There are multiple "Best Practice" designs which discuss this in great detail.  I've attached a draft doc on this thread. A public one will be available in the coming month. One point to consider is that you do not need FT.  FT is still maturing, and since you can deploy redundant VSMs at no additional cost, there's no need for it.  For DRS you'll want to create a DRS rule to avoid ever hosting the Primary & Secondary VSM on the same host.

    -How to migrate an existing iSCSI VMkernel port group to the Nexus? What are the steps? The Cisco manual "Migration from VMware vSwitch to Cisco Nexus 1000V" shows how to generate the port-profile, but
    how is the iSCSI target created (IP address, user/password)? Where is it defined?

    [Robert] You can migrate any VMKernel port from vCenter by selecting a host, go to the Networking Configuration - DVS and select Manage Virtual Adapters - Migrate Existing Virtual Adapter. Then follow the wizard.  Before you do so, create the corresponding vEth Port Profile on your 1000v, assign appropriate VLAN etc.  All VMKernel IPs are set within vCenter, 1000v is Layer 2 only, we don't assign Layer 3 addresses to virtual ports (other than Mgmt).  All the rest of the iSCSI configuration is done via vCenter - Storage Adapters as usual (for Targets, CHAP Authentication etc)
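    As an illustration, the vEthernet port-profile created ahead of migrating an iSCSI VMkernel port might look like this sketch (the name and VLAN are examples; marking it as a system VLAN is a common practice for storage vmkernels, not something required by the steps above):

    port-profile type vethernet iSCSI-VMK
      vmware port-group
      switchport mode access
      switchport access vlan 300
      no shutdown
      system vlan 300
      state enabled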

    -Supposing the VEM licenses are not enough for all the ESX servers, what will happen to the connectivity of the virtual machines on hosts without VEM licenses? Can they operate with VMware vSwitches?

    [Robert] When a VEM comes online with the DVS, if there are not enough available licenses to license EVERY socket, the VEM will show as unlicensed.  Without a license, the virtual ports will not come up.  You should closely watch your licenses using "show license usage" and "show license usage " for detailed allocation information.  At any time a VEM can still utilize a vSwitch - with or without 1000v licenses, assuming you still have adapters attached to the vSwitches as uplinks.

    I must install the Nexus 1000V in a VMware platform with VDI, with several ESX servers, 3 vSwitches on each ESX server, several virtual machines running, two clusters defined with vMotion and DRS active, and central iSCSI storage.

    I have several Cisco manuals about the Nexus, but I see they focus mainly on installation topics; the options for migration are not covered extensively. Do you have "success stories" or customer experiences of implementations with migrations to the Nexus?

    [Robert] Have a good look around the Nexus 1000v community Forum.   Lots of stories and information you may find helpful.

    Good luck!

  • Remove the "system vlan" entries from a Nexus 1000V port-profile

    We have a Dell M1000e blade chassis with a number of M605 blade servers running ESXi 5.0 and using the Nexus 1000V for networking.  We use 10 G Ethernet on fabrics B and C, for a total of four 10 G NICs per server.  We do not use the 1 G NICs on fabric A.  We currently use one NIC from each of fabrics B and C for virtual machine traffic, and the other NIC in each fabric for management/vMotion/iSCSI traffic.  We currently use EqualLogic PS6010 iSCSI arrays and have two port groups configured with iSCSI bindings (one to physical NIC vmnic3 and one to physical NIC vmnic5).

    We have added an EMC VNX 5300 unified array to our installation and have configured three extra VLANs on our network - two for iSCSI and one for NFS.  We added vEthernet port-profiles for the three new VLANs, but when we added the new vmk# ports on some of the ESXi servers, they couldn't ping anything.   We opened a TAC case with Cisco and it was determined that only a single iSCSI-bound port group can be bound to a given physical uplink at a time.

    We decided that we would temporarily add the new VLANs to the allowed VLAN list on the physical switch trunk ports currently used only for VM traffic. We need to delete the new VLANs from the current Ethernet port-profile, but we are facing a problem.

    The current Nexus 1000V port-profile that must be changed is:

    port-profile type ethernet DenverMgmtSanUplinks
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 2306-2308,2311-2315
      channel-group auto mode passive
      no shutdown
      system vlan 2306-2308,2311-2315
      description MGMT SAN UPLINKS
      state enabled

    We must remove VLANs 2313-2315 from the "system vlan" list in order to remove them from the "switchport trunk allowed vlan" list.

    However, when we try to do so, we get an error that the port-profile is currently in use:

    vsm21a# conf t
    Enter configuration commands, one per line.  End with CNTL/Z.
    vsm21a(config)# port-profile type ethernet DenverMgmtSanUplinks
    vsm21a(config-port-prof)# system vlan 2306-2308,2311-2312
    ERROR: Cannot delete system VLAN, port-profile in use by interface Po2

    We have 6 ESXi servers connected to this Nexus 1000V.  Originally they were VEMs 3-8, but apparently when we did a firmware update they re-registered as VEMs 9-14, and the old 6 VEMs, and their associated port-channels, are orphaned.

    For example, if we look at port-channel 2 in more detail, we see it is related to orphaned VEM 3 and has no operational ports associated with it:

    vsm21a(config-port-prof)# show run int port-channel 2

    !Command: show running-config interface port-channel2
    !Time: Thu Apr 26 18:59:06 2013

    version 4.2(1)SV2(1.1)

    interface port-channel2
      inherit port-profile DenverMgmtSanUplinks
      vem 3

    vsm21a(config-port-prof)# show int port-channel 2
    port-channel2 is down (No operational members)
      Hardware: Port-Channel, address: 0000.0000.0000 (bia 0000.0000.0000)
      MTU 1500 bytes, BW 100000 Kbit, DLY 10 usec,
        reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA
      Port mode is trunk
      auto-duplex, 10 Gb/s
      Beacon is turned off
      Input flow-control is off, output flow-control is off
      Switchport monitor is off
      Members in this channel: Eth3/4, Eth3/6
      Last clearing of "show interface" counters never
      102 interface resets

    We can probably remove port-channel 2, but we assume the error message about the port-profile being in use will cascade to the other port-channels.  We can delete the other orphaned port-channels 4, 6, 8, 10 and 12, as they are associated with the orphaned VEMs, but we expect we will then also get errors on port-channels 13, 15, 17, 19, 21 and 23, which are associated with the active VEMs.

    We are looking to see if there is an easy way to fix this on the VSM, or if we need to break out one of the physical uplinks on each server, connect it to a vSS or vDS, and migrate all the vmkernel ports off the Nexus 1000V so we can clean up the VLAN numbering.

    You will not be able to remove the system VLANs until nothing is using this port-profile. We are very protective of any VLAN that is designated on the "system vlan" command line.

    You must clean up the old port-channels and the old VEMs. You can safely do "no interface port-channel X" and "no vem X" on the ones that are no longer used.
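    Based on the module and port-channel numbers in the output above, the cleanup is roughly as follows (a sketch; repeat for each orphaned VEM and its port-channel):

    vsm21a# configure terminal
    vsm21a(config)# no interface port-channel 2   ! orphaned port-channel inherited by old VEM 3
    vsm21a(config)# no vem 3                      ! remove the stale module slot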

    What you can do is create a new uplink port-profile with the settings you want, then flip the interfaces over to the new port-profile. It is generally easier to create a new one than to attempt to clean up an old port-profile and its system VLANs.

    I would do the following steps:

    Create a new port-profile with the settings you want

    Put the host in maintenance mode if possible

    Pull one NIC out of the old N1Kv eth port-profile

    Add that NIC to the new N1Kv eth port-profile

    Pull the second NIC out of the old eth port-profile

    Add the second NIC to the new port-profile

    You will get some duplicate packet error messages, but it should work.

    The other option is to remove the host from the N1Kv and re-add it using the new eth port-profile.

    Another option is to leave it alone. Unless it really bothers you, no VMs will be able to use those VLANs unless you create a vEth port-profile on them.

    Louis

  • Configuring network DMZ, internal using Nexus 1000v

    Hello people, this is my first post in the forums.

    I am trying to build a profile for my customer with the following configuration;

    4 x ESXi hosts on DL380 G7s, each with 12 GB of RAM, 2 x 6-core X5650 CPUs, and 8 x 1 Gb NICs

    2 x LeftHand iSCSI SANs.

    The hardware components and several design features are out of my control; they were already decided and I can't change them, nor can I add additional equipment. Here are my constraints:

    (1) The solution will use shared Cisco network switches for internal, external, and iSCSI traffic.

    (2) The solution uses a single cluster with all four hosts in that cluster.

    (3) I will install and configure a Nexus 1000v in the environment (something I'm not keen on, simply because I have never done it before). The customer was sold on the concept of a cheap, shared hardware solution because they were told that using an N1Kv would solve all the security problems.

    Before I learned that I would have to use an N1Kv, my solution looked like the attached JPG. The solution used four distributed virtual switches, and examples of how they were going to be configured are attached. Details and IP addresses are examples.

    My questions are:

    (1) What procedure should I use to set up the environment? Should I build the dvSwitches as described and then export them to the N1Kv?

    (2) How should I document this solution? Normally my description would have a section explaining each switch, how it is configured, vital details, port groups, etc. But all of this is removed and replaced with uplink port-profiles or something, is it not?

    (3) Should I be aiming to use a different physical switch per dvSwitch, or can I lump them all together and create different port groups? Is that safe, and is there a standard? Yes, I have read the white papers on the DMZ and the Nexus 1000v.

    (4) is my configuration safe and effective? Are there ways to improve it?

    All other comments and suggestions are welcome.

    Hello and welcome to the forums,

    (1) What procedure should I use to set up the environment? Should I build the dvSwitches as described and then export them to the N1Kv?

    The N1KV replaces the dvSwitch, but there is only ONE N1KV where there would be many dvSwitches; the N1KV would use all the same uplinks globally.

    (2) How should I document this solution? Normally my description would have a section explaining each switch, how it is configured, vital details, port groups, etc. But all of this is removed and replaced with uplink port-profiles or something, is it not?

    If you use the N1KV, you uplink the pSwitches to the N1KV.

    If you use a dvSwitch/vSwitch, you uplink the pSwitches to the individual dvSwitch/vSwitch in use.

    (3) Should I be aiming to use a different physical switch per dvSwitch, or can I lump them all together and create different port groups? Is that safe, and is there a standard? Yes, I have read the white papers on the DMZ and the Nexus 1000v.

    No standard, and yes, in many cases it can be considered secure. If your existing physical network relies on VLANs and trusts the Layer 2 pSwitches, then you can do the exact same thing in the virtual environment and be as secure as your physical environment.

    However, if you need separation at the pSwitch layer, then you must maintain separate vSwitches to get that same separation. Take a look at this post http://www.virtualizationpractice.com/blog/?p=4284 on the subject.
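    To make the VLAN-based approach concrete, separation on the N1KV comes down to separate vEthernet port-profiles on their own VLANs, along the lines of this sketch (all names and VLAN IDs are illustrative):

    vlan 10
      name INTERNAL
    vlan 20
      name DMZ
    port-profile type vethernet Internal-VMs
      vmware port-group
      switchport mode access
      switchport access vlan 10
      no shutdown
      state enabled
    port-profile type vethernet DMZ-VMs
      vmware port-group
      switchport mode access
      switchport access vlan 20
      no shutdown
      state enabled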

    (4) is my configuration safe and effective? Are there ways to improve it?

    There are always ways to improve. I would start by looking into defense in depth at the vNIC and edge layers within your vNetwork.

    Best regards
    Edward L. Haletky VMware communities user moderator, VMware vExpert 2009, 2010


  • iSCSI MPIO + Jumbos

    I see lots of information about iSCSI MPIO and jumbo frame support with the software iSCSI initiator in vSphere 4.0. I don't see a lot of information about iSCSI HBAs and MPIO + jumbo support. Am I missing something? Do the HBAs have access to the same improvements?

    Thank you

    HBAs already supported these features in ESX 3.5.

    And multipathing also existed in ESX 3.x.

    The "new" storage stack also accepts new 3rd-party modules (Enterprise Plus edition only) that provide vendor-native multipathing.

    Right now there are the EMC PowerPath/VE module products (I suppose those could also be used for iSCSI) and a beta version of the Dell EqualLogic MPIO module.

    As for jumbo frames, vSphere now supports them for software iSCSI and NFS as well, so you can enable and use them.
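    For the software initiator on ESX 4.0, the jumbo-frame pieces are set from the service console, roughly like this sketch (vSwitch name, port group, and addresses are examples; the vmknic has to be created with the MTU rather than changed afterwards):

    esxcfg-vswitch -m 9000 vSwitch1                                 # raise the vSwitch MTU
    esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 -m 9000 iSCSI1   # create the VMkernel NIC with MTU 9000
    esxcfg-vmknic -l                                                # verify the MTU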

    Also, 10 Gb cards are now supported.

    André
