Replacement of a failed ESXi host on Nexus 1000v VEM

I was curious how others accomplish the replacement of a failed ESXi host that is running the Nexus 1000v VEM. I did this procedure once and it seemed endless. The goal is to make the swap transparent to the Nexus 1000v (same VEM number, just a different VMware UUID):

-Migrate the VMs back to standard vSwitches

-Remove the host from the distributed switch via vCenter (right-click the host and delete)

-Physically swap the blade in the chassis

-Find the UUID of the new host (via esxcfg-info)

-Pre-provision the new VEM in the 1000v with this UUID

-Add the replacement host to the 1000v, where it assumes the VEM number that was just pre-provisioned, then migrate

Ben,

Note - 1000v related issues are better posted in the "Server Network" forum.  This forum is specific to UCS.

https://supportforums.Cisco.com/community/NetPro/data-center/server-network?view=discussions

The procedure that you are using is the right one.  Another method is to remove the host from the 1000v gracefully, then issue a "no vem X", which removes the VEM record from the VSM.  Swap your hosts, then add the new one back to the 1000v.  Is there a reason that you need the same UUID?
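The graceful-removal approach above can be sketched as the following VSM session. This is an illustrative fragment, not a verified transcript: the module number (3) and the host UUID are placeholders, and prompts vary by release.

```
! On the VSM, after the host has been removed from the DVS in vCenter:
n1000v# show module                  ! note the module (VEM) number of the old host
n1000v# configure terminal
n1000v(config)# no vem 3             ! purge the stale VEM record from the VSM
! Optionally pre-provision the replacement host so it comes up with the same number:
n1000v(config)# vem 3
n1000v(config-vem-slot)# host vmware id 12345678-abcd-abcd-abcd-123456789abc
```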

Kind regards

Robert

Tags: Cisco DataCenter

Similar Questions

  • Nexus 1000v VSM compatibility with older versions of VEM?

    Hello everyone.

    I would like to upgrade our Nexus 1000v VSM from 4.2(1)SV1(5.1) to 4.2(1)SV2(2.1a) because we are moving from ESXi 5.0 Update 3 to ESXi 5.5 in the near future. I was not able to find a compatibility list for the new version when it comes to VEM versions, so I was wondering if the new VSM supports the older VEM versions we are running, so that I don't have to upgrade everything at once. I know that it supports both of our ESXi versions.

    Best regards

    Pete

    From what you found in the documentation, moving from the 1.5 release to the latest code is supported from a VSM perspective.  What is not documented is the fine print on the VEM side.  In general, the VSM is backward compatible with older VEMs (to a degree, and the degree is not published).  Although it is not documented (AFAIK), the verbal understanding is that the VEM can be a version or two behind, but you should try to minimize the time you run in that configuration.
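    During a mixed-version window, "show module" on the VSM lists the software version per module, which makes the skew easy to audit. A hypothetical sketch (module numbers and versions are illustrative, not from the poster's setup):

    ```
    n1000v# show module
    Mod  Sw                Hw
    ---  ----------------  --------------------
    1    4.2(1)SV2(2.1a)   0.0                   <- upgraded VSM
    3    4.2(1)SV1(5.1)    VMware ESXi 5.0.0     <- older VEM, a release behind
    4    4.2(1)SV2(2.1a)   VMware ESXi 5.5.0     <- host already upgraded
    ```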

    If you plan to run mixed VEM versions while getting your hosts upgraded (totally fine, that's how I do mine), it is better to move to the newer VEM version as you upgrade the hypervisor.  Since you are going from ESXi 5.0 to 5.5, you can create an ISO that contains the Cisco VIBs, your favorite async driver (if any), and the ESXi 5.5 image all bundled together, so the upgrade for a given host happens all at once.  You probably already have this technique down cold, but the links generated by the Cisco tool below will show you how to proceed.  It also gives some handy URLs to share with each person performing functions in this upgrade.  Here is the link:

    Nexus 1000V and ESX upgrade utility

    PS - the new thing is to take offline clones of your VSMs.  Even though they are fairly easy to rebuild, having a true pristine clone will save some secret sauce that you might otherwise lose in a failure scenario.  Just power off a VSM, then right-click and clone.  Power that VSM back on and let the HA pair fail over, then take down the other one and get a clone of it.  So as a safety measure for this upgrade, get your offline clones of the current 1.5 VSMs now, then some time after your upgrade take some offline clones of the new version as well.

  • Cisco Nexus 1000V Series Virtual Switch Module placement in the Cisco Unified Computing System

    Hi all
    I read a Cisco article entitled "Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers" http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html

    A lot of excellent information, but the section that intrigues me has to do with the placement of the VSM module in UCS. The article lists 4 options in order of preference, but does not provide details or the reasoning underlying the recommendations. The options are the following:

    ============================================================================================================================================================
    Option 1: VSM external to the Cisco Unified Computing System on the Cisco Nexus 1010

    In this scenario, management of the virtual environment is accomplished in a manner identical to existing non-virtualized environments. With multiple VSM instances on the Nexus 1010, multiple vCenter data centers can be supported.
    ============================================================================================================================================================

    Option 2: VSM outside the Cisco Unified Computing System on the Cisco Nexus 1000V Series VEM

    This model centralizes management of the virtual infrastructure, and has proved to be very stable...
    ============================================================================================================================================================

    Option 3: VSM Outside the Cisco Unified Computing System on the VMware vSwitch

    This model isolates the managed devices, and it migrates readily to the Cisco Nexus 1010 Virtual Services Appliance model. A possible concern here is the management and operational model of the network links between the VSM and VEM devices.
    ============================================================================================================================================================

    Option 4: VSM Inside the Cisco Unified Computing System on the VMware vSwitch

    This model has also been stable in test deployments. A possible concern here is the management and operational model of the network links between the VSM and VEM devices, and the fact that the switching infrastructure is duplicated inside your Cisco Unified Computing System.
    ============================================================================================================================================================

    As a beginner with both the Nexus 1000V and UCS, I hope someone can help me understand the configuration of these options and, equally important, provide a more detailed explanation of each of the options and the reasoning behind the preferences (advantages and disadvantages).

    Thank you
    Pradeep

    No, they are different products. vASA will be a virtual version of our ASA appliance.

    The ASA is a full-featured firewall.

  • VM-FEX and Nexus 1000v relationship

    Hello

    I'm new to the world of virtualization and I need to know what the relationship is between the Cisco Nexus 1000v and Cisco VM-FEX, and when to use VM-FEX versus the Nexus 1000v.

    Regards

    Ahmed,

    The Nexus 1000v is a distributed switch that allows you to manage your VEMs; think of this relationship as a supervisor-to-linecard relationship.

    VM-FEX gives you the ability to bypass the software vSwitch running on each ESXi host (the VEM, for example).

    With VM-FEX, you see the virtual machines as if they were directly connected to the parent switch (N7K/5K, for example), making the parent handle the switching (because there is no longer a vSwitch in the middle).

    This is a big topic that could be discussed at length and is difficult to summarize in a few lines. Did you read something in particular? Any questions or doubts we can help clarify?

    -Kenny

  • Cisco AAA login authentication to RADIUS (MS IAS) on Nexus 1000v

    Hey

    I have a switch that I'll be adding to my RADIUS server for login.

    On an IOS switch, I would need to do the following:

    Change the attribute number to '1'.
    Set the attribute format to "String".
    Type "shell:priv-lvl=15" in the attribute value field.

    But what should I put in the "shell" value so this will work on a Nexus 1000v?

    shell:roles="network-admin"

    (or replace network-admin with whatever role you want to assign to the user)
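    Putting the pieces together, a minimal sketch of both sides. This is an assumption-laden example: the server IP 10.1.1.10, the shared key, and the group name IAS-GROUP are placeholders, and the IAS side is returned as a Cisco vendor-specific attribute (cisco-av-pair).

    ```
    ! On the MS IAS policy, return the vendor-specific attribute:
    !   cisco-av-pair = shell:roles="network-admin"
    !
    ! On the Nexus 1000v, point login authentication at the RADIUS server:
    n1000v(config)# radius-server host 10.1.1.10 key MySharedSecret authentication accounting
    n1000v(config)# aaa group server radius IAS-GROUP
    n1000v(config-radius)# server 10.1.1.10
    n1000v(config)# aaa authentication login default group IAS-GROUP
    ```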

  • Design/implementation of Nexus 1000V

    Hi team,

    I have a premium partner who is an ATP on Data Center Unified Computing. He has posted this question; I hope you can help me provide a resolution.

    I have questions about nexus 1KV design/implementation:

    -How do I migrate virtual switches other than vswitch0 (each ESX server has 3 vswitches and the VSM installation wizard will only migrate vswitch0)? For example, other vswitches with other VLANs... please tell me how...
    -With VUM (VMware Update Manager), can the VEM modules be installed on the ESX servers, or must the VEM be installed manually on each ESX server?
    -Assuming VUM installs all the VEM modules, is the VEM version (vib package) automatically compatible with the running VMware build?
    -Is it necessary to create the PACKET-MANAGEMENT-CONTROL port groups on ALL the ESX servers before migrating to the Nexus 1000, or is the VEM installation alone enough?
    -According to the Cisco manuals the VSM can participate in VMotion, but how? What is the recommendation? When the primary VSM is moved, does the secondary VSM take control? What happens to connectivity for all the virtual machines?
    -When there are two clusters in one VMware vCenter, how should the VSM be installed/configured?
    -For high availability, which are the best Nexus design choices, considering the VMware features (FT, DRS, VMotion, Cluster)?
    -How do I migrate an existing iSCSI kernel port group to the Nexus? What are the steps? The Cisco manual "Migration from VMware vSwitch to Cisco Nexus 1000V" shows how to generate the port profile, but
    how do I create the iSCSI target (IP address, username/password)? Where is it defined?
    -Assuming the VEM licenses are not enough for all the ESX servers, what will happen to the connectivity of the virtual machines on hosts without VEM licenses? Can they work with VMware vswitches?

    I have to install the Nexus 1000V on a VMware platform with VDI, with multiple ESX servers, 3 vswitches on each ESX server, several virtual machines running, two clusters defined with vMotion and DRS active, and central iSCSI storage.

    I have several Cisco manuals on the Nexus, but I see they focus on installation topics; the migration options are not covered extensively. Do you have 'success stories' or customer experiences of implementations with migration to the Nexus?

    Thank you in advance.

    Jojo Santos

    Cisco partner Helpline presales

    Thanks for the questions Jojo, but this type of 1000v question is better suited to the Nexus 1000v forum:

    https://www.myciscocommunity.com/Community/products/nexus1000v

    Answers inline.  I suggest you go through the Getting Started Guides to acquire a solid understanding of basic concepts & operations prior to deployment.

    jojsanto wrote:

    Hi Team,

    I have a premium partner who is an ATP on Data Center Unified Computing. He posted this question, hopefully you can help me provide resolution.

    I have questions about nexus 1KV design/implementation:

    -How do I migrate virtual switches distinct from vswitch0 (each ESX server has 3 vswitches and the VSM installation wizard only migrates vswitch0)? For example other vswitches with other VLANs.. please tell me how...

    [Robert] After your initial installation you can easily migrate all VMs within the same vSwitch Port Group at the same time using the Network Migration Wizard.  Simply go to Home - Inventory - Networking, right click on the 1000v DVS and select "Migrate Virtual Machine Networking..."   Follow the wizard to select your Source (vSwitch Port Groups) & Destination DVS Port Profiles

    -With VUM (VMware Update Manager) is it possible to install the VEM modules on the ESX servers, or must the VEM be installed manually on each ESX server?

    [Robert] As per the Getting Started & Installation guides, you can use either VUM or manual installation method for VEM software install.

    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_3/getting_started/configuration/guide/n1000v_gsg.html

    -Supposing VUM installs all the VEM modules, is the VEM version (vib package) automatically compatible with the existing VMware build?

    [Robert] Yes.  Assuming VMware has added all the latest VEM software to their online repository, VUM will be able to pull down & install the correct one automatically.


    -Is it necessary to create the PACKET-MANAGEMENT-CONTROL port groups on ALL the ESX servers before migrating to the Nexus 1000, or is the VEM installation alone enough?

    [Robert] If you're planning on keeping the 1000v VSM on vSwitches (rather than migrating itself to the 1000v) then you'll need the Control/Mgmt/Packet port groups on each host you ever plan on running/hosting the VSM on.  If you create the VSM port group on the 1000v DVS, then they will automatically exist on all hosts that are part of the DVS.
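    If the VSM's own port groups are moved onto the 1000v DVS as described, the usual practice is to mark them as system VLANs so the ports forward before the VSM itself is up. A hypothetical vethernet profile (the profile name and VLAN 100 are placeholders):

    ```
    port-profile type vethernet n1kv-control
      vmware port-group
      switchport mode access
      switchport access vlan 100
      system vlan 100        ! forwards even before the VSM has programmed the VEM
      no shutdown
      state enabled
    ```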

    -According to the Cisco manuals the VSM can participate in VMotion, but how? What is the recommendation? When the primary VSM is moved, does the secondary VSM take control? What happens to connectivity for all the virtual machines?

    [Robert] Since a VMotion does not really impact connectivity for a significant amount of time, the VSM can be easily VMotioned around even if it's a single standalone deployment.  Just like you can vMotion vCenter (which manages the actual task), you can also VMotion a standalone or redundant VSM without problems.  No special considerations here other than the usual VMotion pre-reqs.

    -When there are two clusters in one VMware vCenter, how should the VSM be installed/configured?

    [Robert] No different.  The only consideration that changes "how" you install a VSM is a vCenter with multiple DataCenters. VEM hosts can only connect to a VSM that resides within the same DC.  Different clusters are not a problem.

    -For High Availability concepts, which are the best Nexus design choices, considering the VMware features (FT, DRS, VMotion, Cluster)?

    [Robert] There are multiple "Best Practice" designs which discuss this in great detail.  I've attached a draft doc on this thread. A public one will be available in the coming month. One point to consider is that you do not need FT.  FT is still maturing, and since you can deploy redundant VSMs at no additional cost, there's no need for it.  For DRS you'll want to create a DRS rule to avoid ever hosting the Primary & Secondary VSM on the same host.

    -How do I migrate an existing iSCSI VMkernel port group to the Nexus? What are the steps? The Cisco manual "Migration from VMware vSwitch to Cisco Nexus 1000V" shows how to generate the port-profile, but
    how do I create the iSCSI target (IP address, user/password)? Where is it defined?

    [Robert] You can migrate any VMKernel port from vCenter by selecting a host, go to the Networking Configuration - DVS and select Manage Virtual Adapters - Migrate Existing Virtual Adapter. Then follow the wizard.  Before you do so, create the corresponding vEth Port Profile on your 1000v, assign appropriate VLAN etc.  All VMKernel IPs are set within vCenter, 1000v is Layer 2 only, we don't assign Layer 3 addresses to virtual ports (other than Mgmt).  All the rest of the iSCSI configuration is done via vCenter - Storage Adapters as usual (for Targets, CHAP Authentication etc)
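    A rough sketch of the vEth port-profile to create before migrating the iSCSI VMkernel port, per the answer above (the profile name and VLAN 200 are placeholders):

    ```
    port-profile type vethernet iSCSI-vmk
      vmware port-group
      switchport mode access
      switchport access vlan 200
      system vlan 200        ! often made a system vlan so storage comes up early in boot
      no shutdown
      state enabled
    ```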

    -Supposing the VEM licenses are not enough for all the ESX servers, what will happen to the connectivity of the virtual machines on hosts without VEM licenses? Can they operate with VMware vswitches?

    [Robert] When a VEM comes online with the DVS, if there are not enough available licenses to license EVERY socket, the VEM will show as unlicensed.  Without a license, the virtual ports will not come up.  You should closely watch your licenses using "show license usage" and "show license usage <license-name>" for detailed allocation information.  At any time a VEM can still utilize a vSwitch - with or without 1000v licenses, assuming you still have adapters attached to the vSwitches as uplinks.

    I must install the Nexus 1000V on a VMware platform with VDI, with several ESX servers, 3 vswitches on each ESX server, several virtual machines running, two clusters defined with vMotion and DRS active, and central iSCSI storage.

    I have several Cisco manuals about the Nexus, but I see a special focus on installation topics; the migration options are not an extensive item. Do you have "success stories" or customer experiences of implementations with migrations to the Nexus?

    [Robert] Have a good look around the Nexus 1000v community Forum.   Lots of stories and information you may find helpful.

    Good luck!

  • UCS environment vSphere 5.1 upgrade to vSphere 6 with Nexus 1000v

    Hi, I've had no luck getting help from TAC or from internet research on how to upgrade our UCS environment, and as a last resort I thought I would try a post on the forum.

    My environment is a dual-fabric UCS chassis environment with fabric interconnects and Nexus 1110s carrying an HA pair of 1000v VSMs, running vSphere 5.1.  We have updated all our equipment (blades, C-Series, and UCS Manager) to the Cisco-supported firmware versions for vSphere 6.

    We have even upgraded our Nexus 1000v to 5.2(1)SV3(1.5a), which is a supported version for vSphere 6.

    To save us some processing and licensing cost, and because performance on our previous vCenter server was lacking and unreliable, I have looked at migrating to the vCenter 6 virtual appliance.  Nowhere can I find information advising how to begin the VMware upgrade process when the 1000v is involved.  I would have thought that since the VSM is a virtual machine, if all of your versions are upgraded to supported versions for 6, it would go smoothly.

    One response I got from TAC was that we had to move all of our VMs onto a standard switch to upgrade vCenter, but given we are already on a 1000v version supported for vSphere 6, that left me confused; nor would I get the opportunity to rework our environment, being a hospital, with that kind of downtime and outage windows for more than 200 machines.

    Can anyone provide some tips, or has anyone else tried a similar upgrade path?

    Regards.

    It seems that you have already upgraded your N1k components (are the VEMs upgraded to match?).

    Is your question more about how to upgrade/migrate from one vCenter server to another?

    If you import your vCenter database into the new vCenter, there shouldn't be much downtime, as the VSM/VEM will still see the vCenter and the N1k dVS.  If you change the vCenter server name/IP but import the old vCenter DB, there are a few extra steps; you will need to ensure that the VSM's SVS connection corresponds to the new vCenter IP address.
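    For reference, re-pointing the SVS connection at a new vCenter IP is done under the connection on the VSM. A sketch with placeholder names/addresses:

    ```
    n1000v# configure terminal
    n1000v(config)# svs connection vcenter
    n1000v(config-svs-conn)# no connect
    n1000v(config-svs-conn)# remote ip address 10.10.10.50   ! new vCenter address
    n1000v(config-svs-conn)# connect
    ```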

    If you try to create a new additional vCenter in parallel, then you will have downtime problems, as the port-profile/network assignments the guest VMs currently have will lose their 'backing' info if you attempt to migrate, because you would have to move the NICs to a standard or generic dVS before moving the hosts to the new vCenter.

    If you are already on vCenter 6, I believe you can vMotion from one host to another even across the vSwitch/dVS/port-profiles in use.

    We really need more detail on how you plan to migrate from one vCenter to the 6.0 appliance.

    Thank you

    Kirk...

  • [Nexus 1000v] VEM cannot be added to VSM

    Hi all

    In my lab, I have some problems with the Nexus 1000V: the VEM cannot be added to the VSM.

    + The VSM is already installed on ESX 1 (standalone or HA) and you can see:

    Cisco_N1KV# show module

    Mod  Ports  Module-Type                       Model              Status
    ---  -----  --------------------------------  -----------------  ------------
    1    0      Virtual Supervisor Module         Nexus1000V         active *

    Mod  Sw                Hw
    ---  ----------------  ------------------------------------------------
    1    4.2(1)SV1(4a)     0.0

    Mod  MAC-Address(es)                          Serial-Num
    ---  ---------------------------------------  ----------
    1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA

    Mod  Server-IP        Server-UUID                           Server-Name
    ---  ---------------  ------------------------------------  -------------------
    1    10.4.110.123     NA                                    NA

    + On ESX2, the VEM is installed:

    [[email protected] ~]# vem status

    VEM modules are loaded

    Switch Name    Num Ports   Used Ports  Configured Ports  MTU     Uplinks
    vSwitch0       128         3           128               1500    vmnic0

    VEM Agent (vemdpa) is running

    [[email protected] ~]#

    Any advice on making this work would be appreciated.

    Thank you very much

    Doan,

    Need more information.

    Was the host added to the 1000v DVS via vCenter successfully?

    If so, there is probably a problem with your control VLAN communication between the VSM and the VEM.  Start there and ensure that the VLAN has been created on all intermediate switches and that it is allowed on every trunk end-to-end.
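    A few commands that help verify VSM-to-VEM control-plane connectivity in a case like this (a sketch; outputs omitted):

    ```
    ! On the VSM:
    n1000v# show svs domain           ! domain ID and control/packet VLANs the VSM expects
    n1000v# show module vem mapping   ! which VEMs the VSM has heard from

    ! On the ESX host:
    ~ # vemcmd show card              ! the VEM's domain ID and control VLAN
    ~ # vemcmd show port              ! confirm the uplink carrying the control VLAN is up
    ```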

    If you're still stuck, paste your config running of your VSM.

    Kind regards

    Robert

  • Nexus 1000v by cluster?

    How many Nexus 1000vs should I deploy? I have a cluster with 8 hosts running a pair of Nexus 1000v VSMs. If I set up a new cluster, do I need to deploy a new pair of VSMs for the new cluster?

    The VSMs carry control traffic only, not data. The VEM is installed on the host machines - that's what handles the switching.

  • Removing "system vlan" from a Nexus 1000V port-profile

    We have a Dell M1000e blade chassis with a number of M605 blade servers running ESXi 5.0, using the Nexus 1000V for networking.  We use 10 G Ethernet on fabrics B and C, for a total of 4 10 G NICs per server.  We do not use the 1 G NICs on fabric A.  We currently use one NIC from each of fabrics B and C for virtual machine traffic and the other NIC in each fabric for management/vMotion/iSCSI VM traffic.  We currently use EqualLogic PS6010 iSCSI arrays and have two port groups configured with iSCSI connections (one on physical NIC vmnic3 and one on physical NIC vmnic5).

    We have added an EMC VNX 5300 unified array to our installation and have configured three extra VLANs on our network - two for iSCSI and the other for NFS.  We added vEthernet port-profiles for the three new VLANs, but when we added the new vmk# ports on some of the ESXi servers, they couldn't ping anything.   We opened a TAC case with Cisco and it was determined that only a single iSCSI port group can be bound to a given physical uplink at a time.

    We decided that we would temporarily add the new VLANs to the allowed VLAN list on the physical switch trunk ports currently used only for VM traffic. We need to delete the new VLANs from the current ethernet port-profile but are facing a problem.

    The current Nexus 1000V port-profile that must be changed is:

    port-profile type ethernet DenverMgmtSanUplinks
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 2306-2308, 2311-2315
      channel-group auto mode passive
      no shutdown
      system vlan 2306-2308, 2311-2315
      description MGMT SAN UPLINKS
      state enabled

    We need to remove VLANs 2313-2315 from the "system vlan" list in order to remove them from the "switchport trunk allowed vlan" list.

    However, when we try to do so, we get an error that the port-profile is currently in use:

    vsm21a# conf t
    Enter configuration commands, one per line.  End with CNTL/Z.
    vsm21a(config)# port-profile type ethernet DenverMgmtSanUplinks
    vsm21a(config-port-prof)# system vlan 2306-2308, 2311-2312
    ERROR: Cannot delete system VLAN, port-profile in use by interface Po2

    We have 6 ESXi servers connected to this Nexus 1000V.  Originally they were VEMs 3-8, but apparently when we did a firmware update they were re-added as VEMs 9-14, and the old 6 VEMs, and their associated port channels, are orphaned.

    For example, if we look at port-channel 2 in more detail, we see the orphaned VEM 3 tied to it and that it has no ports associated with it:

    vsm21a(config-port-prof)# sho run int port-channel 2

    ! Command: show running-config interface port-channel2
    ! Time: Thu Apr 26 18:59:06 2013

    version 4.2(1)SV2(1.1)

    interface port-channel2
      inherit port-profile DenverMgmtSanUplinks
      vem 3

    vsm21a(config-port-prof)# sho int port-channel 2

    port-channel2 is down (no operational members)
      Hardware: Port-Channel, address: 0000.0000.0000 (bia 0000.0000.0000)
      MTU 1500 bytes, BW 100000 Kbit, DLY 10 usec,
      reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA
      Port mode is trunk
      auto-duplex, 10 Gb/s
      Beacon is turned off
      Input flow-control is off, output flow-control is off
      Switchport monitor is off
      Members in this channel: Eth3/4, Eth3/6
      Last clearing of "show interface" counters never
      102 interface resets

    We can probably remove port-channel 2, but we assume the error message about the port-profile being in use will cascade to the other port channels.  We can delete the other orphaned port-channels 4, 6, 8, 10, and 12, as they are associated with the orphaned VEMs, but we expect we will then also get errors on port-channels 13, 15, 17, 19, 21, and 23, which are associated with the active VEMs.

    We are looking to see if there is an easy way to fix this on the VSM, or if we need to break one of the physical uplinks on each server, connect it to a vSS or vDS, and migrate everything off so the Nexus 1000V vmkernel ports can have the VLANs cleaned up.

    You will not be able to remove the system VLANs while anything is using that port-profile. We are very protective of any VLAN that is designated on the system vlan command line.

    You must clean up the old port channels and the old VEMs. You can safely do "no interface port-channel X" and "no vem X" on devices which are no longer in use.
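    Based on the numbering in the thread, the cleanup might look like this on the VSM (repeat for each orphaned channel and module):

    ```
    vsm21a# configure terminal
    vsm21a(config)# no interface port-channel 2   ! orphaned channel of old VEM 3
    vsm21a(config)# no vem 3                      ! remove the stale VEM record
    ! likewise for port-channels 4, 6, 8, 10, 12 and the other orphaned VEMs
    ```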

    What you can do is create a new uplink port-profile with the settings you want, then move the interfaces into the new port-profile. It is generally easier to create a new one than to attempt to clean up an old port-profile with system VLANs.

    I would take the following steps:

    Create a new port-profile with the settings you want

    Put the host in maintenance mode if possible

    Pull one NIC out of the old N1Kv eth port-profile

    Add that NIC to the new N1Kv eth port-profile

    Pull the second NIC out of the old eth port-profile

    Add the second NIC to the new port-profile

    You will get some duplicate packets and error messages, but it should work.

    The other option is to remove the host from the N1Kv and add it back using the new eth port-profile.

    Another option is to leave it alone. Unless it really bothers you, no VMs will be able to use those VLANs unless you create a veth port-profile on them.

    Louis

  • ESXi 5 and Nexus 1000v

    Hello

    I have an ESXi 5 host with only one NIC and I am migrating from the VSS to the Nexus 1000v. I installed the Nexus VEM correctly, configured primary and secondary VSMs, and set up the uplink port groups all according to the Cisco guides. When I try to add the host under the DVS, I first have to migrate vmnic0 to the appropriate uplink port group, and it then asks me to migrate the management port (I think it is vmk0). Whether I create a port group on the Nexus to migrate the management port to, or do not migrate it at all, I always lose connectivity to the ESXi host.

    Can someone please share the configs of the Nexus 1000v and how to properly migrate vmnic0 and vmk0 (with a single physical network adapter) so that I do not lose connectivity?

    Thanks in advance.

    Remi

    The control VLAN is 152 and the packet VLAN is 153.

    You can make them the same VLAN. We have supported using the same VLAN for control and packet for several years now.
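    For reference, the control/packet VLANs live under svs-domain on the VSM; collapsing them onto a single VLAN as suggested would look roughly like this (the domain ID is a placeholder):

    ```
    n1000v(config)# svs-domain
    n1000v(config-svs-domain)# domain id 100
    n1000v(config-svs-domain)# control vlan 152
    n1000v(config-svs-domain)# packet vlan 152    ! same VLAN as control is supported
    ```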

    Louis

  • Upgrading vCenter 4.0 to 4.1 with Cisco Nexus 1000v installed

    Hi all

    We have vCenter 4.0 and ESX 4.0 servers and we want to upgrade to version 4.1, but we also have the Nexus 1000v installed on the ESX servers and on vCenter. I found a VMware KB, http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1024641 , but only the ESX server upgrade is explained in this KB; what about vCenter?  Our vCenter 4.0 is installed on Windows 2003 64-bit with 64-bit SQL 2005.

    Can we upgrade vCenter in place with the Nexus 1000v plugin installed without problems? And how should we proceed? What are the effects on the Nexus1000v plugin installed on the vCenter server during the update?

    Nexus1000v 4.0(4)SV1(3a) is installed on the ESX servers.

    Regards

    Mecyon,

    Upgrading vSphere 4.0 -> 4.1 you must update the VEM software (.vib) also.  The plugin for vCenter won't change, nor will anything on your VSM(s).  The only thing you should update is the VEM on each host.  See the matrix previously posted above for the .vib to install once your host has been updated.

    Note: after this upgrade, you will be able to update your vSphere host software and your 1000v independently (without having to update the other).  The 1000v VEM dependencies have been removed since vSphere 4.0 Update 2.

    Kind regards

    Robert

  • Configuring DMZ and internal networks using Nexus 1000v

    Hello people, this is my first post in the forums.

    I am trying to build a design for my customer with the following configuration:

    4 x ESXi hosts on DL380 G7, each with 12 GB of RAM, 2 x 6-core X5650 CPUs, 8 x 1 Gb NICs

    2 x LeftHand iSCSI SANs.

    The hardware components and several design features are out of my control; they have already been decided and I can't change them, nor can I add additional equipment. Here are my constraints:

    (1) The solution will use shared Cisco network switches for internal, external, and iSCSI traffic.

    (2) The solution uses a single cluster with each of the four hosts within that cluster.

    (3) I must install and configure a Nexus 1000v in the environment (something I'm not keen on, simply because I have never done it before). The customer was sold on the concept of a cheap, shared-hardware solution because they were told that using a N1Kv would solve all the security problems.

    Before I learned that I would have to use a N1Kv, my solution looked like the attached JPG. The solution used four distributed virtual switches, and examples of how they were going to be configured are attached. Details and IP addresses are examples.

    My questions are:

    (1) What procedure should I use to set up the environment? Should I build the dvSwitches as described and then port them to the N1Kv?

    (2) How should I document this solution? Generally in my description I would have a section explaining each switch, how it is configured, vital details, port groups, etc. But all of this is removed and replaced with uplink ports or something, is it not?

    (3) Should I be aiming to use a different switch per dvSwitch, or can I lump them together and create different port groups? Is that safe? Is there a standard? Yes, I have read the white papers on the DMZ and the Nexus 1000v.

    (4) Is my configuration safe and effective? Are there ways to improve it?

    All other comments and suggestions are welcome.

    Hello and welcome to the forums,

    (1) What procedure should I use to set up the environment? Should I build the dvSwitches as described and then port them to the N1Kv?

    The N1KV replaces the dvSwitches, but where there would be many dvSwitches there is only ONE N1KV; the N1KV would use the same uplinks globally.

    (2) How should I document this solution? Generally in my description I would have a section explaining each switch, how it is configured, vital details, port groups, etc. But all of this is removed and replaced with uplink ports or something, is it not?

    If you use the N1KV, you uplink the pSwitch to the N1KV.

    If you use a dvSwitch/vSwitch, you uplink the pSwitches to the individual dvSwitch/vSwitch in use.

    (3) Should I be aiming to use a different switch per dvSwitch, or can I lump them together and create different port groups? Is that safe? Is there a standard? Yes, I have read the white papers on the DMZ and the Nexus 1000v.

    There is no standard, and yes, in many cases it can be considered secure. If your existing physical network relies on VLANs and trusts the Layer 2 pSwitches, then you can do the exact same thing in the virtual environment and be as safe as your physical environment.

    However, if you need separation at the pSwitch layer, then you must maintain separate vSwitches for that same separation. Take a look at this post on the subject: http://www.virtualizationpractice.com/blog/?p=4284

    (4) is my configuration safe and effective? Are there ways to improve it?

    There are always ways to improve. I would start by looking into defense in depth at the vNIC and edge layers within your vNetwork.

    Best regards
    Edward L. Haletky VMware communities user moderator, VMware vExpert 2009, 2010

    Now available: "VMware vSphere(TM) and Virtual Infrastructure Security" (http://www.astroarch.com/wiki/index.php/VMware_Virtual_Infrastructure_Security)

    Also available: "VMWare ESX Server in the Enterprise" (http://www.astroarch.com/wiki/index.php/VMWare_ESX_Server_in_the_Enterprise)

    Blogs: Virtualization Practice (http://www.virtualizationpractice.com) | Blue Gears (http://www.astroarch.com/blog) | TechTarget (http://itknowledgeexchange.techtarget.com/virtualization-pro/) | Network World (http://www.networkworld.com/community/haletky)

    Podcast: Virtualization Security Round Table Podcast (http://www.astroarch.com/wiki/index.php/Virtualization_Security_Round_Table_Podcast) | Twitter: Texiwill (http://www.twitter.com/Texiwill)

  • Nexus 1000v deployment issues

    I'm working on deploying the Cisco Nexus 1000v to our ESX cluster. I have read the Cisco "Getting Started Guide" and the "Installation Guide", but those guides generalize your environment and obviously do not answer questions specific to our architecture.

    This comment in the Cisco "Getting Started Guide" makes it sound like you cannot uplink to multiple switches from an individual ESX host:

    "The server administrator must assign no more than one uplink on the same VLAN without port channels. Assigning more than one uplink on the same host is not supported in the following cases:"

    A port profile without port channels.

    Port profiles that share one or more VLANs.

    Given this comment, is it possible to port-channel two vmnics on the ESX host side and have each one go to a separate upstream switch? I am trying to create redundancy for the ESX host using two switches, but this comment makes it sound like I need the port channel on the ESX side to carry the VLANs for both interfaces. How do you then handle each link on the upstream switch side? I don't think you can add a port channel on that side of the uplink, as the port channel protocol will not negotiate properly and will show one side down on the ESX/VEM side.

    Am I overcomplicating this? Thank you.
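    The quoted rule ("no more than one uplink on the same VLAN without port channels") can be sketched as a quick validation check. This is a hypothetical model for illustration only, not a Cisco tool; the data structure and function name are my own.

    ```python
    # Hypothetical model of the N1KV uplink rule quoted above: two uplinks on
    # the same host may not carry the same VLAN unless they belong to the same
    # port channel. Illustrative only; not an actual N1KV validation routine.

    def find_vlan_conflicts(uplinks):
        """uplinks: list of dicts with 'name', 'vlans' (a set), and
        'channel' (port-channel name, or None if stand-alone).
        Returns pairs of uplink names that violate the rule."""
        conflicts = []
        for i, a in enumerate(uplinks):
            for b in uplinks[i + 1:]:
                shared = a["vlans"] & b["vlans"]
                same_channel = a["channel"] is not None and a["channel"] == b["channel"]
                if shared and not same_channel:
                    conflicts.append((a["name"], b["name"]))
        return conflicts

    # Two stand-alone uplinks trunking the same VLANs violate the rule;
    # putting both into one port channel (e.g. "Po5") would clear it.
    uplinks = [
        {"name": "vmnic1", "vlans": {2, 10}, "channel": None},
        {"name": "vmnic2", "vlans": {2, 10}, "channel": None},
    ]
    print(find_vlan_conflicts(uplinks))  # [('vmnic1', 'vmnic2')]
    ```
    
    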

    You cannot use a regular port channel, but it is possible to port-channel to different upstream switches using vPC host mode with MAC pinning. On the upstream switches, make sure the ports are configured identically: same speed, switchport config, VLANs, etc. (but no port channel).

    On the VSM, create a single Ethernet-type port profile with the following channel-group command:

    port-profile type ethernet Uplink-VPC
      vmware port-group
      switchport mode trunk
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 2,10
      state enabled

    What that will do is create a port channel on the N1KV side only. Your ESX host will get redundancy, but your load-balancing algorithm will be a simple round robin of the VMs. If you want to pin specific traffic to a particular link, you can add the "pinning id" command to your vEthernet-type port profiles.
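    The behavior described above, where each virtual interface is pinned to exactly one uplink sub-group (SGID) and VMs are spread round-robin across the uplinks, can be sketched as follows. The round-robin assignment and all names here are illustrative assumptions, not the actual VEM algorithm.

    ```python
    from itertools import cycle

    # Illustrative sketch of mac-pinning: each virtual interface (veth) is
    # pinned to exactly one uplink sub-group (SGID). The assignment below is
    # a simple round robin, approximating the per-VM balancing described in
    # the answer; it is not the real VEM pinning logic.

    def pin_veths(veths, sgids):
        assignment = {}
        rr = cycle(sgids)           # cycle through sub-group IDs in order
        for veth in veths:
            assignment[veth] = next(rr)
        return assignment

    pins = pin_veths(["vm1-3.eth0", "linux-4.eth0"], sgids=[1, 2])
    print(pins)  # {'vm1-3.eth0': 1, 'linux-4.eth0': 2}
    ```

    This matches the example output shown later in the thread: vm1-3 lands on SGID 1 (vmnic1) and linux-4 on SGID 2 (vmnic2).
    
    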

    To see the pinning, you can run:

    module vem x execute vemcmd show port

    n1000v# module vem 5 execute vemcmd show port

      LTL  VSM Port  Admin Link State PC-LTL SGID Vem Port

       18  Eth5/2    UP    UP   FWD   305    1    vmnic1
       19  Eth5/3    UP    UP   FWD   305    2    vmnic2
       49  Veth1     UP    UP   FWD   0      1    vm1-3.eth0
       50  Veth3     UP    UP   FWD   0      2    linux-4.eth0
      305  Po5       UP    UP   FWD   0

    The key is the SGID column. vmnic1 is SGID 1 and vmnic2 is SGID 2. VM vm1-3 is pinned to SGID 1 and linux-4 is pinned to SGID 2.

    You can kill a link and the traffic should fail over.
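    If you want to check the pinning programmatically, output in that style can be parsed with a few lines of script. This is a rough sketch under the assumption that the columns are LTL, VSM Port, Admin, Link, State, PC-LTL, SGID, and Vem Port, as in the sample above; real `vemcmd` output may vary by version.

    ```python
    # Rough parser for `vemcmd show port` style output. Assumes the column
    # layout from the sample in this thread; only rows that list a VEM port
    # (vmnic or VM vNIC) are kept. Not an official Cisco tool.

    SAMPLE = """\
      LTL  VSM Port  Admin Link State PC-LTL SGID Vem Port
       18  Eth5/2    UP    UP   FWD   305    1    vmnic1
       19  Eth5/3    UP    UP   FWD   305    2    vmnic2
       49  Veth1     UP    UP   FWD   0      1    vm1-3.eth0
       50  Veth3     UP    UP   FWD   0      2    linux-4.eth0
    """

    def sgid_map(output):
        """Return {vem_port: sgid} for every row that has a Vem Port column."""
        mapping = {}
        for line in output.splitlines()[1:]:      # skip the header row
            fields = line.split()
            if len(fields) == 8:                  # rows with all 8 columns
                mapping[fields[7]] = int(fields[6])
        return mapping

    print(sgid_map(SAMPLE))
    # {'vmnic1': 1, 'vmnic2': 2, 'vm1-3.eth0': 1, 'linux-4.eth0': 2}
    ```
    
    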

    Louis

  • Installation of Nexus 1000V

    Hello

    I tried to install the Nexus 1000V VEM using the CLI on a remote ESXi 4.0 U1 server.

    I am using the following command:

    C:\Program Files (x86)\VMware\VMware vSphere CLI\bin>vihostupdate.pl -i -b "c:\Program Files (x86)\VMware\VMware vSphere CLI\cisco-vem-v100-4.0.4.1.1.27-0.4.2.zip" --server SCSC-sesx002

    Enter the user name: root

    Enter the password:

    Please wait, patch installation is in progress ...

    There was an error resolving dependencies.

    Requested VIB cross_cisco-vem-v100-esx_4.0.4.1.1.27-0.4.2 conflicts with the host.

    No VIB provides 'vmknexus1kvapi-0-4' (required by cross_cisco-vem-v100-esx_4.0.4.1.1.27-0.4.2).

    But as you can see, there is an error message.

    Is there a mismatch between the Nexus version and ESXi 4?

    I am using ESXi version 4.0 U1, build 208167, in evaluation mode.

    Does anyone have an idea why this happens?

    Thank you

    Concerning

    Jürgen

    Jurgen,

    You are using the wrong VEM module for the ESX kernel build that you are running. Take a look at this link for VEM module compatibility:

    http://www.Cisco.com/en/us/docs/switches/Datacenter/nexus1000/SW/4_0_4_s_v_1_2/compatibility/information/n1000v_compatibility.html

    For ESX kernel build 208167, you should install VIB 4.0.4.1.2.0.80-1.9.179.
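    A build-to-VIB pairing like this can be kept in a small lookup so the right VEM package is picked before installation. Only the 208167 pairing below comes from this thread; the structure and function name are illustrative, and real deployments should consult Cisco's N1KV compatibility matrix.

    ```python
    # Minimal compatibility-lookup sketch. The single entry below (ESX build
    # 208167 -> VEM VIB 4.0.4.1.2.0.80-1.9.179) is the pairing stated in this
    # thread; extend the table from Cisco's published compatibility matrix.

    VEM_FOR_ESX_BUILD = {
        208167: "4.0.4.1.2.0.80-1.9.179",
    }

    def required_vem(esx_build):
        """Return the VEM VIB version required for a given ESX kernel build."""
        try:
            return VEM_FOR_ESX_BUILD[esx_build]
        except KeyError:
            raise LookupError(f"no known VEM VIB for ESX build {esx_build}")

    print(required_vem(208167))  # 4.0.4.1.2.0.80-1.9.179
    ```
    
    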

    It also looks like you are trying to install a 1.1 VEM module. I strongly suggest downloading the N1KV package from cisco.com and getting N1KV version 1.2.

    Louis

Maybe you are looking for

  • Cannot answer security questions required to reset PW

    This is so crazy.  I was told that my account may have been hacked, so I should update my password.  I go to the password website and it asks me two security questions that I never answered, and it tells me that my answers are wrong.  What do I do, since I can't rese

  • Satellite 1800: Fault pin plastic LCD display!

    "Look at the LCD display screen. If you can see a small image on the screen, backlight fluorescent display screen can be disabled. Light of a flashlight directly to the screen will make it much easier to see if or not there is a screen image actually

  • HP Pavilion P7-1459 Windows 7 drivers - want to dual boot

    I need assistance in obtaining 64 bit drivers Windows 7 for my HP Pavilion P7 1459 in the driver download page is available as Windows 8 and 8.1. I'd like to partition the hard drive and double to start my machine (windows & and windows 8.1) I have a

  • HP pavilion 17 laptop PC: spill soda on computer.

    About a month ago I spilled accidentily one soda on my computer, I took the battery and the wrong to put the computer inside, two hours later, I turned it back, there well worked, off, he left upside down and in the morning, he'll say it is on, but t

  • KB2724197 failed with the error code "WindowsUpdate_FFFFFFFE" "WindowsUpdate_dt000"

    I have tried to install the update and it fails every time. When I shut down the computer it hangs; it tells me not to cut the power, but I come back the next morning and the message is still there