Nexus 1000v and vmkernel ports

What is the best practice for VMkernel ports on the Nexus 1000v? Is it a good idea to put all VM and VMkernel ports on the Nexus 1000v switch/dvs, or must some VMkernel ports, such as management, stay on a standard switch? If something happens to the 1000v, all management and VMs would become unreachable.

any tips?

Yep, that's correct. System port-profiles don't require any communication between the VEM and the VSM.
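
For reference, here is a minimal sketch of that idea; the profile names and VLAN IDs are placeholders, not from this thread. The point is that VLANs carrying management/VMkernel traffic are marked as system VLANs on both the uplink and the vEthernet port-profiles, so the VEM keeps forwarding them even if it cannot reach the VSM.

port-profile type ethernet SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  no shutdown
  system vlan 10,20
  state enabled

port-profile type vethernet ESX-MGMT
  vmware port-group
  switchport mode access
  switchport access vlan 10
  no shutdown
  system vlan 10
  state enabled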

Tags: VMware

Similar Questions

  • Nexus 1000v and vMotion/VM traffic?

    Hello

    We are moving our production VM servers to a new data center.

    There is a desire to use the 2 x 10 Gb ports on each ESX host as the Nexus 1000v uplinks (EtherChannel) for both vMotion and VM traffic.

    I don't know if that is even possible/supported - dedicating the Nexus 1000v uplink ports to vMotion and VM traffic at the same time? I think each host will need a unique IP address for vMotion to be able to operate.

    Best regards

    / Pete

    Info:

    vCenter/ESXi - 5.0 Update 3

    Yes - the vMotion requirements still apply whether you use a standard virtual switch or a Distributed Virtual Switch such as the Nexus 1000v: you should have a dedicated vMotion network, and each vMotion VMkernel port will need a unique IP address on the vMotion subnet.
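
    As a rough illustration (the profile name and VLAN ID below are placeholders, not from this post), the vMotion network would typically be a vEthernet port-profile on the 1000v, and each host's vMotion vmk port attached to it still needs its own IP on that subnet:

    port-profile type vethernet VMOTION
      vmware port-group
      switchport mode access
      switchport access vlan 30
      no shutdown
      state enabled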

  • Question about Nexus 1000v licensing and the grace period for an unlicensed ESXi host

    Hello

    I have a license for 12 CPUs, which covers 3 ESXi hosts with 4 processors each.

    So if I add another ESXi host, will it be supported for 60 days?

    I'm pretty sure but wanted confirmation

    Thank you

    What version are you using? With 2.1 they moved to an Essential (free) and Advanced licensing model, so depending on the features you need you may be able to upgrade and continue without additional licenses.

    If needed, you can convert your current licenses for VSG; contact your Cisco engineer.

    http://blogs.Cisco.com/tag/nexus-1000V/

  • Nexus 1000v and vSwitch best practices

    I am working on the design of our Nexus 1000v vDS for use on HP BL490 G6 servers. The 8 vmnics are allocated as follows:

    vmnic0,1: ESXi management, VSM control/packet, VSM management

    vmnic2, 3: vMotion

    vmnic4, 5: iSCSI, FT, Clustering heartbeats

    vmnic6,7: server data and client VM traffic

    Should I migrate all the vmnics to the 1000v vDS, or should I leave vmnic0,1 on a regular vSwitch and migrate the others to the vDS? If I migrate all the vmnics, at the very least I would designate the vmnic0,1 VLANs as system VLANs so that traffic could flow until the VSM could be reached. My inclination is to migrate all the vmnics, but I've seen comments elsewhere in forums that the VSM-related networks, and possibly the ESX(i) service console, are better left off of the vDS.

    Thoughts?

    Here is a best-practices how-to guide specific to the 1000v on HP VC that might be useful.

    Cheers,

    Robert

  • Nexus 1000v and mgmt0

    The Nexus worked when the VSM was on a separate ESXi host from the VEMs/dvSwitches.  When I manually installed the VEM on the same host as the VSM, the VSM lost physical network connectivity, and therefore the distributed switches did too.

    The ESXi host the VSM runs on has 2 NICs: one NIC for control, management and packet via the dvSwitch, and the other NIC connected to the vSwitch.

    I think that if I can get the VSM's mgmt0 onto the vSwitch, things will return to normal.  How can I assign that interface to the vSwitch?

    I would create new control, management and packet port groups on the standard vSwitch if possible, then move the VSM's network connections to those vSwitch port groups, which should restore network connectivity.

    Once that works, make sure that you create control, management and packet port-profiles back on the 1000v for the VSM if you want to move it. Make sure that the system VLANs are defined in those port-profiles and in the uplink port-profile as well. Once those are created, you can migrate the VSM onto the VEM.
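
    As a rough sketch (the VLAN numbers below are placeholders), the port-profiles for hosting the VSM on its own VEM would look something like this, with the control, management and packet VLANs marked as system VLANs; the same VLANs must also be listed as system VLANs on the uplink port-profile:

    port-profile type vethernet VSM-Control
      vmware port-group
      switchport mode access
      switchport access vlan 100
      no shutdown
      system vlan 100
      state enabled

    port-profile type vethernet VSM-Mgmt
      vmware port-group
      switchport mode access
      switchport access vlan 101
      no shutdown
      system vlan 101
      state enabled

    The packet port-profile follows the same pattern on its own VLAN (or the same VLAN as control).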

    Louis

  • Removing a "system VLAN" from a Nexus 1000V port-profile

    We have a Dell M1000e blade chassis with a number of M605 ESXi 5.0 blade servers using the Nexus 1000V for networking.  We use 10 G Ethernet on fabrics B and C, for a total of four 10 G NICs per server.  We do not use the 1 G NICs on fabric A.  We currently use one NIC from each of fabrics B and C for virtual machine traffic, and the other NIC in each fabric for management/vMotion/iSCSI traffic.  We currently use EqualLogic PS6010 iSCSI arrays and have two port groups configured with iSCSI bindings (one on physical NIC vmnic3 and one on physical NIC vmnic5).

    We have added an EMC VNX 5300 unified array at our facility and have configured three extra VLANs on our network - two for iSCSI and one for NFS.  We added vEthernet port-profiles for the three new VLANs, but when we added the new vmk# ports on some of the ESXi servers, they couldn't ping anything.  We opened a TAC case with Cisco and it was determined that only a single iSCSI-bound port group can be bound to a physical uplink at a time.

    We decided that we would temporarily add the new VLANs to the list of allowed VLANs on the physical switch trunk ports currently used only for VM traffic. We need to delete the new VLANs from the current ethernet port-profile, but we are running into a problem.

    The current Nexus 1000V port-profile that needs to be changed is:

    port-profile type ethernet DenverMgmtSanUplinks
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 2306-2308, 2311-2315
      channel-group auto mode passive
      no shutdown
      system vlan 2306-2308, 2311-2315
      description MGMT SAN UPLINKS
      state enabled

    We need to remove VLANs 2313-2315 from the "system vlan" list in order to remove them from the "switchport trunk allowed vlan" list.

    However, when we try to do that, we get an error that the port-profile is currently in use:

    vsm21a# conf t
    Enter configuration commands, one per line.  End with CNTL/Z.
    vsm21a(config)# port-profile type ethernet DenverMgmtSanUplinks
    vsm21a(config-port-prof)# system vlan 2306-2308, 2311-2312
    ERROR: Cannot delete system VLAN, port-profile in use by interface Po2

    We have 6 ESXi servers connected to this Nexus 1000V.  Originally they were VEM 3-8, but apparently when we did a firmware update they re-registered as VEM 9-14, and the old 6 VEMs, and their associated port-channels, are now orphans.

    For example, if we look at port-channel 2 in more detail, we see it is tied to orphaned VEM 3 and has no member ports associated with it:

    vsm21a(config-port-prof)# show run int port-channel 2

    !Command: show running-config interface port-channel2
    !Time: Thu Apr 26 18:59:06 2013

    version 4.2(1)SV2(1.1)

    interface port-channel2
      inherit port-profile DenverMgmtSanUplinks
      vem 3

    vsm21a(config-port-prof)# show int port-channel 2
    port-channel2 is down (No operational members)
      Hardware: Port-Channel, address: 0000.0000.0000 (bia 0000.0000.0000)
      MTU 1500 bytes, BW 100000 Kbit, DLY 10 usec,
      reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA
      Port mode is trunk
      auto-duplex, 10 Gb/s
      Beacon is turned off
      Input flow-control is off, output flow-control is off
      Switchport monitor is off
      Members in this channel: Eth3/4, Eth3/6
      Last clearing of "show interface" counters never
      102 interface resets

    We can probably remove port-channel 2, but we assume the error message about the port-profile being in use will cascade to the other port-channels.  We can delete the other orphaned port-channels 4, 6, 8, 10 and 12, as they are associated with the orphaned VEMs, but we expect we will then get errors on port-channels 13, 15, 17, 19, 21 and 23, which are associated with the active VEMs.

    We are looking to see if there is an easy way to fix this on the VSM, or if we need to break off one of the physical uplinks on each server, connect it to a vSS or vDS, and migrate all of the VMkernel ports off of the Nexus 1000V so we can clean up the VLANs.

    You will not be able to remove the system VLAN until nothing is using this port-profile. We are very protective of any VLAN that is designated with the "system vlan" command.

    You do need to clean up the old port-channels and the old VEMs. You can safely do "no interface port-channel x" and "no vem x" for the ones that are no longer used.

    What you can do is create a new uplink port-profile with the settings you want, then flip the interfaces over to the new port-profile. It is generally easier to create a new one than to attempt to clean up the system VLANs on the old port-profile.

    I would take the following steps (a rough CLI sketch follows the list):

    Create a new port-profile with the settings you want

    Put the host into maintenance mode if possible

    Pull one NIC out of the old N1Kv eth port-profile

    Add that NIC to the new N1Kv eth port-profile

    Pull the second NIC out of the old eth port-profile

    Add the second NIC to the new port-profile

    You may get some duplicate packets and error messages, but it should work.
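
    As a rough sketch of that approach (the new profile name is made up, and the commands should be checked against your own config before use), you would create the replacement uplink profile with only the VLANs you want to keep, move the uplinks over, and then remove the orphaned objects:

    port-profile type ethernet DenverMgmtSanUplinks-NEW
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 2306-2308, 2311-2312
      channel-group auto mode passive
      no shutdown
      system vlan 2306-2308, 2311-2312
      state enabled

    no interface port-channel 2
    no vem 3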

    The other option is to remove the host from the N1Kv and re-add it using the new eth port-profile.

    Another option is to just leave it as is. Unless it really bothers you, no VMs will be able to use those VLANs unless you create a veth port-profile on them.

    Louis

  • ESXi 5 and Nexus 1000v

    Hello

    I have an ESXi 5 host with only one NIC, and I am migrating from the VSS to the Nexus 1000v. I installed the Nexus VEM correctly, set up primary and secondary VSMs, and configured the uplink port groups all according to the Cisco guides. When I try to add the host to the DVS, I first have to migrate vmnic0 to the appropriate uplink port group, and it then asks me to migrate the management port (I think it is vmk0). Whether I create a port group on the Nexus to migrate the management port to, or do not migrate it at all, I always lose connectivity to the ESXi host.

    Can someone please share Nexus 1000v configs and how to properly migrate vmnic0 and vmk0 (with a single physical NIC) so that I do not lose connectivity?

    Thanks in advance.

    Remi

    Control is VLAN 152 and packet is 153.

    You can make them the same VLAN. We have supported using the same VLAN for control and packet for several years now.
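
    As an illustrative sketch only (the port-profile names are assumptions; the VLAN numbers are the ones mentioned above), collapsing packet onto the control VLAN just means both vEthernet port-profiles point at VLAN 152 and carry it as a system VLAN:

    port-profile type vethernet VSM-Control
      vmware port-group
      switchport mode access
      switchport access vlan 152
      no shutdown
      system vlan 152
      state enabled

    port-profile type vethernet VSM-Packet
      vmware port-group
      switchport mode access
      switchport access vlan 152
      no shutdown
      system vlan 152
      state enabled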

    Louis

  • VM - FEX and Nexus 1000v relationship

    Hello

    I'm new to the world of virtualization and I need to know: what is the relationship between the Cisco Nexus 1000v and Cisco VM-FEX, and when should I use VM-FEX versus the Nexus 1000v?

    Regards

    Ahmed,

    The Nexus 1000v is a distributed switch that lets you manage your VEMs; think of this relationship as a supervisor-to-linecard relationship.

    VM-FEX gives you the ability to bypass the vSwitch running on each ESXi host (the VEM, for example).

    With VM-FEX, you see the virtual machines as if they were directly connected to the parent switch (an N7K/5K, for example), moving management up to the parent (because there is no longer a vSwitch in the middle).

    This is a big topic that is hard to summarize in a few lines. Have you been reading anything in particular? Any questions or doubts we can help clarify?

    -Kenny

  • ASA 1000V and ASA 5500

    I hope someone can help me to answer this question:

    Currently we have redundant FWSMs and are considering a migration to standalone ASA 5500 series firewalls. However, we have a full VMware environment and are looking at the Nexus 1000V. I understand the Nexus 1000V architecture and implementation, and I understand that the ASA 1000V is designed for cloud environments. But I have a question about the ASA 1000V.

    Is it possible for an ASA 5500 series firewall to be replaced by an ASA 1000V? Basically, can an ASA 1000V be a standalone firewall solution, or is an ASA 5500 still necessary?

    Is there a datasheet anywhere that compares the ASA 1000V and ASA 5500 series?

    Thanks for your help.

    -Joe

    It depends on what you are using the ASA 5500 series for today. If you use the ASA 5500 for remote-access VPN and AnyConnect VPN, you will not be able to rely on the first release of the ASA 1000V for that yet.

    Here's the Q & A on ASA1000V which includes more information:

    http://www.Cisco.com/en/us/partner/prod/collateral/vpndevc/ps6032/ps6094/ps12233/qa_c67-688050.html

    Hope that answers your question.

  • Nexus 1000v VSM compatibility with older versions of VEM?

    Hello everyone.

    I would like to upgrade our Nexus 1000v VSM from 4.2(1)SV1(5.1) to 4.2(1)SV2(2.1a), because we are moving from ESXi 5.0 Update 3 to ESXi 5.5 in the near future. I was not able to find a compatibility list for the new version when it comes to VEM versions, so I was wondering if the new VSM supports the older VEM version we are running, so that I don't have to upgrade everything at once. I know it supports both of the ESXi versions we run.

    Best regards

    Pete

    As you found in the documentation, moving from the 1.5 release to the latest code is supported from a VSM perspective.  What is not documented is the fine print on the VEM side.  In general, the VSM is backward compatible with older VEMs (to a degree, and that degree is not published).  Although it is not documented (AFAIK), the verbal understanding is that the VEM can be a version or two behind, but you should try to minimize the time that you run in this configuration.

    If you plan to run mixed VEM versions while getting your hosts upgraded (totally fine, that's how I do mine), it is better to move to the upgraded VEM version as you upgrade the hypervisor.  Since you are going from ESXi 5.0 to 5.5, you can create an ISO that contains the Cisco VIBs, your favorite async drivers (if any), and the ESXi 5.5 image all bundled together, so the upgrade for a given host happens all in one shot.  You probably already have this technique down cold, but the links generated by the Cisco tool below will show you how to proceed.  It also gives some handy URLs to share with each person performing a role in this upgrade.  Here is the link:

    Nexus 1000V and ESX upgrade utility

    PS - the other new thing I do is take offline clones of your VSMs.  Even though they are fairly easy to rebuild, having a true clone will preserve some secret sauce that you might otherwise lose in a failure scenario.  Just power off a VSM, then right-click and clone it.  Power that VSM back on and let the HA pair fail back over, then take down the other one and get a clone of it too.  So, as a safety measure for this upgrade, get clones now of the current 1.5 VSMs, then some time after your upgrade take some fresh offline clones of the new version.

  • Definition of VLAN ID on vmkernel ports on dVS - Nexus 1000

    I noticed that when adding VMkernel ports to a host on the dVS, I'm not presented with an option to enter a VLAN ID. Is that because the VLAN is defined at the port-group level on the Nexus? Do I need to create these ports on a vSwitch first to set the VLAN and then migrate them to the dVS?

    Thank you

    JD

    The VLAN ID is controlled by the port-profile on the 1000v VSM. You don't have to specify it.
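
    For example (the profile name and VLAN below are illustrative, not from this thread), the VLAN lives in the vEthernet port-profile on the VSM that backs the vSphere port group; a vmk port attached to that port group simply inherits it, so there is nothing to enter on the VMware side:

    port-profile type vethernet VMK-NFS
      vmware port-group
      switchport mode access
      switchport access vlan 200
      no shutdown
      state enabled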

  • Configuration of the channel of port on nexus 1000V

    Hello

    I'm new to the Nexus 1000V. The setup is as follows: a UCS chassis with 4 full-width blades connected to two 6248UP Fabric Interconnects.

    Each FI uplinks to an N5K (no vPC).

    Is there a configuration template for the Nexus 1000v? How should the port-channel be configured?

    Thank you.

    Hello

    We recommend using MAC pinning ("channel-group auto mode on mac-pinning") when the N1KV is used on UCS blades.
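
    A minimal uplink port-profile using mac-pinning would look something like the following; the profile name and VLAN list are placeholders for your environment:

    port-profile type ethernet UCS-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10-20
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled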

    The following doc provides a good overview of best practices:

    Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers

    http://www.Cisco.com/en/us/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html

    HTH

    Padma

  • Can the maximum number of ports on a Nexus 1000v vDS be changed online with no disruption?

    Hello

    Can the maximum number of ports on a Nexus 1000v vDS be changed online with no disruption?


    I'm not sure whether that is what this link says or not:

    VMware KB: Increasing the maximum number of vNetwork Distributed Switch (vDS) ports in vSphere 4.x

    I have ESXi and vCenter 5.1.

    Thank you
    Saami

    There is no downtime when you change the "vmware max-ports" quantity on a port-profile. It can be done in production.

    You can also create a new port-profile with a test virtual machine and change "vmware max-ports" there first if you want the warm fuzzies.
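
    For example (the profile name below is hypothetical), the change is just an edit to the port-profile on the VSM:

    vsm# configure terminal
    vsm(config)# port-profile type vethernet VM-Data
    vsm(config-port-prof)# vmware max-ports 256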

  • Design of Nexus 1000v - COS and vCenter

    Hello everyone. This is my first foray into the 1000V, so I'm hoping someone can give an opinion on whether my high-level design thoughts are on the right track.

    • Host servers have 4 x 10 Gb and 2 x 1 Gb ports

    • ESXi 4.1, Enterprise Plus and a 1000v license for every host

    • We will be using the 1010 appliances, so no VSM VMs on the hosts

    • The 10 Gb ports will be allocated to the 1000v vDS

      • Port-profiles created for vMotion and all required VM VLANs

      • No requirement for IP storage

    • The 1 Gb ports go on a standard vSwitch for the management VMkernel port

      • The thinking is that if vCenter is down we cannot manage the vDS (I assume this applies to the 1000v just as it does to the VMware vDS?), so it is best to have the hosts on a network that we can always access and change if necessary

      • The 1010 appliances are going to sit on this network (or be routable to/from it)

    • vCenter will be installed on a physical server

      • No vCenter Heartbeat, so only one instance running

      • Because of the 1000v's dependence on vCenter for port configuration, it is probably not a good idea to have vCenter as a virtual machine connected to a vDS port for its VM traffic?

      • The physical vCenter will have connectivity to the management VMkernel port on every host and to the 1010 appliances

      • As an alternative, I guess we could put a VM port group on the 1 Gb connections for vCenter, but that starts complicating the design and management


        Does this all look OK, or is it too conservative? Should we instead look at adding the 1 Gb ports as 1000v uplinks as well, with the management VMkernel on a separate port-profile, and run a virtualized vCenter (which is what I would normally do for new deployments)?

        Thanks in advance for any comments,

        Steve

        Hi Steve,

        Disclaimer - I work for Cisco and I'm pro network consolidation.

        There have been a lot of similar questions posted on the Cisco 1000v community already, with plenty of suggestions available: https://www.myciscocommunity.com/community/products/nexus1000v?view=discussions

        Here's a recent post similar to yours.

        https://www.myciscocommunity.com/thread/17624?TSTART=0

        Regarding some of your design considerations, many of them come down to your level of comfort & expertise.  Keeping vCenter on a physical host allows you to keep it out of your virtual environment, but you lose the advantages of better host resource utilization and VMotion, which make host maintenance easier and limit downtime.  With vCenter as a virtual machine running on the DVS, you would want the port-profile the VC is assigned to configured with a "system" VLAN.  This will ensure that your VC's network connectivity is ALWAYS forwarded, even if the VSMs are offline and the VEM hosting the VC is rebooted.  This added protection removes much of the risk of running the VC on the DVS it manages.
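
        As an illustrative sketch only (the profile name and VLAN are assumptions), that protection is simply the "system vlan" statement on the port-profile backing the vCenter VM's port group; the same VLAN also has to be listed as a system VLAN on the uplink port-profile carrying it:

        port-profile type vethernet VCENTER-VM
          vmware port-group
          switchport mode access
          switchport access vlan 50
          no shutdown
          system vlan 50
          state enabled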

        Regarding your VSM availability, if both 1010s go down then yes, you will not be able to make configuration changes on the VSM.  Once up and properly configured, there shouldn't be many situations where both VSMs are offline and you have an urgent need for an immediate configuration change.  VEM hosts can survive very well in headless mode (no VSM present), assuming they are not rebooted before finding the VSM again.  Additional protection for important virtual interfaces carrying management, storage and control IP traffic comes from the use of "system VLANs" as mentioned above.

        With each adapter used comes additional management and the need for additional upstream switch ports.  With 4 x 10 G adapters in each host, I would find it difficult to justify also utilizing the 1 G connections, unless you opt for "out of band" management connections on a vSwitch.  If you are comfortable with the performance of all your virtual interfaces, including management & VMotion, on the DVS, you can better pool your resources by using only your 10 G adapters as uplinks.  By running everything on the 1000v, you can easily apply QoS & limit bandwidth usage per port-profile - with the just-released 1.4 version of the 1000v this is incredibly easy to set up.  Certainly worth a look.  See my post here for some of the new features in the 1000v: https://www.myciscocommunity.com/thread/17120?tstart=0

        I hope this helps.

        Kind regards

        Robert

      • Nexus 1000v, UCS, and Microsoft Network Load Balancing

        Hi all

        I have a client implementing a new Exchange 2010 environment. They have a requirement to configure load balancing for the Client Access servers. The environment consists of VMware vSphere running on top of Cisco UCS blades with the Nexus 1000v dvSwitch.

        Everything I've read so far indicates that I must do the following:

        1. Configure MS NLB in multicast mode (selecting the IGMP option).

        2. Create a static ARP entry on the router for the cluster's virtual address on the server subnet.

        3. (maybe) Configure a static MAC table entry on the router for the server subnet.

        4. (maybe) Disable IGMP snooping on the appropriate VLAN in the Nexus 1000v.

        My questions are:

        1. Is anyone successfully running a similar configuration?

        2. Are there steps missing from the list above, or steps I shouldn't do?

        3. If I disable IGMP snooping on the Nexus 1000v, should I also disable it on the UCS Fabric Interconnects and the router?

        Thanks a lot for your time,

        Aaron

        Aaron,

        The steps you have above are correct; you need steps 1-4 for this to operate correctly.  Normally people will create a separate VLAN/subnet for their NLB interfaces, to prevent unnecessary flooding of mcast frames within the network.

        To answer your questions

        (1) I have seen multiple customers run this configuration.

        (2) The steps you have are correct.

        (3) You can't toggle IGMP snooping on UCS.  It is enabled by default and is not a configurable option.  There is no need to change anything within UCS regarding MS NLB with the above procedure.  FYI - the ability to disable/enable IGMP snooping on UCS is planned for an upcoming 2.1 release.


        This is the correct method until we have the option of configuring static multicast MAC entries on the Nexus 1000v.  If this is a feature you'd like, please open a TAC case and ask for bug CSCtb93725 to be linked to your SR.

        This will give more "push" to our development team to prioritize this request.
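
        If you do end up disabling IGMP snooping for the NLB VLAN on the 1000v (step 4 above), it is a per-VLAN toggle along these lines, with VLAN 100 standing in for your NLB VLAN:

        vsm# configure terminal
        vsm(config)# vlan 100
        vsm(config-vlan)# no ip igmp snooping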

        Hopefully some other customers can share their experience.

        Regards,

        Robert
