Activation of vMotion

For some reason, when I try to migrate (vMotion only) a machine from one host to another, "Change Host" is grayed out and under it it says:

"vMotion is not enabled on the virtual machine's host. To migrate the virtual machine, enable vMotion on the host or power off the virtual machine."

I think that everything is set up for vMotion. My cluster has "Turn on VMware HA" checked as well as "Turn on VMware DRS" checked. I have my two hosts added to the cluster. What should I check first?

OK... Uncheck "Enable vMotion" on the VMkernel port, click OK, then go back and turn it back on.
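If toggling the checkbox doesn't clear the error, the same setting can also be flipped from the host console; a minimal sketch, assuming the vMotion VMkernel port is vmk0 (this is the same vim-cmd path that comes up in the threads below):

vmware-vim-cmd hostsvc/vmotion/vnic_set vmk0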

Tags: VMware

Similar Questions

• Activation of vMotion on ESX 4.1

Good morning.

I am looking for some advice and help on activating vMotion on our 2-node ESX 4.1 cluster. We have recently upgraded to the Enterprise version of vSphere, which now gives us vMotion capability. I have tried to configure things, but because of my lack of experience with this I am a bit stuck.

I have activated the license that now allows me to configure vMotion.

I created a new VMkernel port, assigned an unused vmnic, left the VLAN ID empty, selected "Use this port group for VMotion", and put in the IP address and subnet mask, but left the default gateway blank. Everything seemed to proceed without problems, but I see that the NIC does not show any "Observed IP ranges". Also, I cannot ping the IP newly assigned to the VMkernel port. This, of course, is not letting me use vMotion.

My question is: do I need to enable the NIC somehow? If so, can someone help me do it?

Does the ESX host need a restart? Is there anything else I should do to complete the vMotion configuration so I can use it?

    Help much appreciated.

    Thank you.

> My question is: do I need to enable the NIC somehow? If so, can someone help me do it?

Check the other side - the physical switch. Is the port enabled on the switch? Is the port in the right VLAN?

> Does the ESX host need a restart? Is there anything else I should do to complete the vMotion configuration so I can use it?

No, a restart is unnecessary here.

You can check the network connectivity of the VMkernel interface with the vmkping command from the service console.
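For example, from the service console of one host, assuming 192.168.10.12 is the vMotion VMkernel IP of the other host (a placeholder address):

vmkping 192.168.10.12

If that fails, the problem is in the VMkernel network path (cabling, switch port, VLAN) rather than in the vMotion configuration itself.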

    ---

    MCSA, MCTS Hyper-V, VCP 3/4, VMware vExpert

    http://blog.vadmin.ru

• Activation of VMotion on a dvSwitch vmk with PowerCLI - problems

    Hello

I'm trying to enable vMotion on a dvSwitch vmk via PowerCLI on a host with the following command:

Set-VMHostNetworkAdapter -VMHost xxx.xxx.xxx.xxx -VirtualNic "vmk1" -VMotionEnabled $true

    I get the following error:

Set-VMHostNetworkAdapter : Cannot bind parameter 'VirtualNic'. Cannot convert
the value "vmk1" of type 'System.String' to type
'VMware.VimAutomation.ViCore.Types.V1.Host.Networking.Nic.HostVirtualNic'.
At line:1 char:56
+ Set-VMHostNetworkAdapter -VMHost xxx.xxx.xxx.xxx -VirtualNic <<<< "vmk1" -VMotionEnabled $true
+ CategoryInfo : InvalidArgument: (:) [Set-VMHostNetworkAdapter], ParameterBindingException
+ FullyQualifiedErrorId : CannotConvertArgumentNoMessage, VMware.VimAutomation.ViCore.Cmdlets.Commands.Host.SetVMHostNetworkAdapter

I also tried LucD's function:

Get-DistributedSwitchNetworkAdapter -VMHost xxxx.xxx.xxx.xxx -PortGroup "vMotion" -DistributedSwitch "dv1"

    I get the following error:

Get-DistributedSwitchNetworkAdapter : Cannot process argument transformation on
parameter 'VMHost'. Cannot convert value "xxx.xxx.xxx.xxx" of type 'System.String'
to type 'VMware.VimAutomation.ViCore.Types.V1.Inventory.VMHost'.
At line:1 char:44
+ Get-DistributedSwitchNetworkAdapter -VMHost <<<< xxx.xxx.xxx.xxx -PortGroup 'vMotion' -DistributedSwitch 'dv1'
+ CategoryInfo : InvalidData: (:) [Get-DistributedSwitchNetworkAdapter], ParameterBindingArgumentTransformationException
+ FullyQualifiedErrorId : ParameterArgumentTransformationError, Get-DistributedSwitchNetworkAdapter


Does anyone have any ideas?

I guess you took the function from the DistributedSwitch module included with the book?

The VMHost parameter must be a VMHost object.

    Try it as

$esx = Get-VMHost xxxx.xxx.xxx.xxx

Get-DistributedSwitchNetworkAdapter -VMHost $esx -PortGroup "vMotion" -DistributedSwitch "dv1"
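Your original Set-VMHostNetworkAdapter error is the same kind of type mismatch: -VirtualNic expects a HostVirtualNic object rather than the string "vmk1". A sketch along the same lines (host address placeholder as in your example):

$esx = Get-VMHost xxx.xxx.xxx.xxx
$vmk = Get-VMHostNetworkAdapter -VMHost $esx -Name "vmk1"
Set-VMHostNetworkAdapter -VirtualNic $vmk -VMotionEnabled $true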

• vimsh - how to correctly enable VMotion

Hello - could anyone tell me what I'm doing wrong? I enter the following command at the command line and it works fine to enable vMotion; however, it does not work in my script, which is shown here in full:

%post

cat > /tmp/ESXpostconfig.sh << EOF

#!/bin/bash

#esxcfg-vswitch -L vmnic1 vSwitch0

# Add prod switches vSwitch1 and vSwitch2 and configure uplinks, plus an additional service console uplink

esxcfg-vswitch -L vmnic2 vSwitch0

esxcfg-vswitch -a vSwitch1

esxcfg-vswitch -L vmnic1 vSwitch1

esxcfg-vswitch -L vmnic3 vSwitch1

esxcfg-vswitch -a vSwitch2

esxcfg-vswitch -L vmnic4 vSwitch2

esxcfg-vswitch -L vmnic5 vSwitch2

# Add port groups and VLANs to vSwitch1 and vSwitch2

esxcfg-vswitch -A linux_network vSwitch1

esxcfg-vswitch -v 401 -p linux_network vSwitch1

esxcfg-vswitch -A windows_network vSwitch2

esxcfg-vswitch -v 402 -p windows_network vSwitch2

# Configure NTP

echo "Configuring NTP"

echo "restrict kod nomodify notrap noquery nopeer" > /etc/ntp.conf

echo "restrict 127.0.0.1" >> /etc/ntp.conf

echo "server ntp.brown.edu" >> /etc/ntp.conf

echo "server ntp2.brown.edu" >> /etc/ntp.conf

echo "driftfile /var/lib/ntp/drift" >> /etc/ntp.conf

echo "server ntp.brown.edu" > /etc/ntp/step-tickers

echo "server ntp2.brown.edu" >> /etc/ntp/step-tickers

# Additional NTP configuration steps

esxcfg-firewall -e ntpClient

chkconfig --level 345 ntpd on

/sbin/hwclock --systohc --utc

service ntpd restart

# Set up authentication with AD

esxcfg-auth --enablead --addomain=ad.brown.edu --addc=(sorry, cannot display this info)

service mgmt-vmware restart

# Add users

# (sorry, cannot display this)

# Enable root login

perl -p -i.old -e 's/PermitRootLogin no/PermitRootLogin yes/g' /etc/ssh/sshd_config

service sshd restart

# Unload the VMFS-2 module

vmkload_mod -u vmfs2

# Add vmotion kernel port to vSwitch0, configure the VMkernel IP, add the uplink for redundancy

#esxcfg-vswitch -A VMotion vSwitch0

#esxcfg-vmknic -a -i 128.148.176.228 -n 255.255.255.0 VMotion

#esxcfg-vswitch -L vmnic1 vSwitch0

#esxcfg-route 128.148.176.1

# Enable VMotion

vimsh -n -e "hostsvc/vmotion/vnic_set vmk0"

EOF

    Help!

I use a slightly different ks file, taken from the ESX Deployment Appliance (EDA):

service mgmt-vmware restart

echo "Waiting for hostd to accept connections..."

sleep 180

# wait until hostd accepts connections (the exact loop condition was garbled in the original post; this is one plausible reconstruction)
while ! vmware-vim-cmd internalsvc/refresh_network > /dev/null 2>&1; do

sleep 5

echo "Waiting for hostd to accept connections..."

done

echo "Configuring VMotion"

vmware-vim-cmd hostsvc/vmotion/vnic_set vmk0

vmware-vim-cmd internalsvc/refresh_network
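For what it's worth, on newer ESX/ESXi releases the same call is exposed through vim-cmd rather than vmware-vim-cmd; a sketch, assuming vmk0 is still the vMotion interface:

vim-cmd hostsvc/vmotion/vnic_set vmk0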

  • Long Distance vMotion

Hi guys,

Please help me gain some knowledge here, because I'm really very confused. I understand (or think I understand) vMotion within the same physical site, i.e. all hosts connected to the same VLAN for vMotion and all hosts seeing the same storage LUNs... vMotion should then be child's play...

But I really struggle with the concept of long-distance vMotion... at another physical site the same VLAN may not exist, and indeed the same storage LUN may not exist... can someone please (please) explain how this concept works, or how such a vMotion operation can be implemented? vMotion to me means being able to move a virtual machine from one host to another without any downtime for the virtual machine, which is why I am struggling to understand the concept of long-distance vMotion with different networks and storage at different sites.

    Confused Dryv

so effectively you're saying we need to meet the pre-reqs that apply to creating a vMSC in order to get long-distance vMotion to work?

Yes, all the fundamental cluster requirements for vMSC apply here as well. It's the same stretched cluster with a 'cooler' name that implies support for distances longer than a metropolitan area.

-What is vMotion without shared storage? Does this imply that you don't need shared storage for vMotion operations? Is that even possible? I hear the term 'shared-nothing vMotion' thrown around a lot!

vSphere 5.1 introduces enhanced ('shared-nothing') vMotion, which is basically a storage vMotion and a compute vMotion at the same time over the network. You don't need shared storage between the source and destination hosts, but you still need layer 2 connectivity for the VM networks and vmkernel connectivity. Of course, it will take much longer than a regular vMotion with shared storage, because the entire VM disks must be copied. See here for more information:

    http://wahlnetwork.com/2012/11/21/leveraging-VMwares-enhanced-shared-nothing-VMotion/

    so, if I told you:

    "yes we can just virtual computers vmotion from one site to the other.

    and the underlying environment has been configured as follows:

-Single vCenter for all sites

-Clusters per site (vMotion/DRS/HA within a cluster only)

-No VLANs stretched between sites

-Routing in place across all sites

-No stretched SAN or SAN replication between sites

    What would you say to me?

Is there any way (is it even remotely possible) that "we can just vMotion virtual machines from one site to the other"?

In this scenario you technically can, through the above-mentioned enhanced vMotion. But without stretched VLANs your VM will not be reachable from the other site, unless you do some fancy magic at the physical network layer, or decide to manually re-IP the virtual machine, or use network virtualization such as NSX.

However, any type of vMotion between hosts is only officially supported if the vMotion vmkernel interfaces are in the same layer 2 broadcast domain. Apparently, you can get support approval from VMware for kludges like this: http://cumulusnetworks.com/blog/routed-vmotion-why/.

This will change with vSphere 6, however, which officially supports routed vMotion traffic.

Also note that in this scenario there is no HA or DRS between sites, since your clusters are separate. But generally the idea underlying vMSC, or its long-distance sibling, is to have a single stretched cluster.

I recommend you read some of Ivan's posts that I linked in my earlier post; he takes some good looks at the whole subject from a networking perspective.

• Error when trying to move a VM from one ESX host to another with VMotion

I want to move a VM from one ESX host to another ESX host with VMotion, manually, using the migration wizard, but when I try to run it I get the error shown in the attached photo. Note that I have the VMotion option enabled on all the ESX hosts.

Many thanks for your help.

Regards

Hello,

Take a look at the configuration of the VM you are trying to migrate; if it has serial ports, remove them.

Regards.

    -

  • Network configuration for ISCSI and VMotion

    Hello

I have an ESX host configured with iSCSI storage and am currently working out the best way to allocate my NICs. I have a vSwitch with four VMkernel ports and two NICs:

    http://communities.VMware.com/message/1428385#1428385

I also have an additional vSwitch for VMotion:

vSwitch3

-VMkernel (VMotion)

-Service Console 2

-vmnic6

-vmnic7

vmnic6 and vmnic7 are both on the SAN.

After adding the new VMkernel port and enabling vMotion, I was wondering why it did not show up as an additional path to the storage (I'd like to know, even if that is really another question). So I ran "esxcli swiscsi nic list -d vmhba33" and, sure enough, only the first four VMKs were listed.

Why is the new VMkernel port not automatically bound to vmhba33?

Would that be a bad idea?

Cheers

Just to play devil's advocate, why shouldn't VMotion and SAN traffic be on the same link, though?

iSCSI traffic MUST have low latency and no errors.

VMotion can create traffic spikes that could cause problems for the iSCSI traffic.

Any idea why it does not automatically bind, though?

Can you vmkping each of the EQL IPs?

Did you add each vmkernel interface to the iSCSI initiator, with a command like this?

esxcli swiscsi nic add -n vmk0 -d vmhba34
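If not, after adding it you can re-run the list command quoted above to confirm the binding; the new vmk should then appear (vmhba number as in the add command; adjust to your setup):

esxcli swiscsi nic list -d vmhba34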

    André

  • 3i Embedded and vMotion

How do people set up their 3i Embedded servers for NIC redundancy in a 6-NIC system?

In the past, on traditional ESX, I configured things as follows so as not to consume more than half the NICs for management and VMotion:

• Assign 2 NICs to vSwitch0 (or vswif0 in ESX); for argument's sake let's say vmnic0 and vmnic3

• Have 2 VMkernel port groups on vSwitch0, one for the service console (the management connection in 3i) and one for VMotion

• I would then override the default switch failover order as follows:

  • vmnic0 is the active NIC for the service console; vmnic3 is set to failover only

  • vmnic3 is the active NIC for VMotion; vmnic0 is set to failover only

This means that, between the 2 NICs, the service console/management interface and VMotion each use their own NIC unless there is a problem with one of them, in which case they share a NIC for as long as the problem persists.

Done this way in a 6-NIC system, that leaves 4 NICs available for guest traffic.

However, when I tried to do this in 3i, I HAD to put in a gateway for the VMotion interface, or HA would fail to set up with various errors, such as being unable to resolve the other hosts or being unable to ping the default isolation address.

Now, while things work, I had to set the VMotion VMkernel gateway as the default gateway, so it appears as the default gateway for the management interface as well, which is a little disconcerting.

I did some research and found references to using das.allowNetworkX and das.allowVmotionNetworks to control HA traffic, but I can't find anything that says whether they would affect my current issue.

It has been suggested to create separate vSwitches for the service console and VMotion traffic, but for resilience that would take 4 of my 6 NICs, leaving only 2 for guest traffic, which is a little limiting.

    Welcome to the VMTN Forums,

    Could you try the following:

Set das.allowVmotionNetworks to false. Add your default gateway to your management network, so it can be pinged from the console. That should work.
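As HA advanced options on the cluster, that would look something like the following sketch (the option names are the ones mentioned in this thread; the port group name for das.allowNetwork0 is illustrative and must match your actual management port group):

das.allowVmotionNetworks = false
das.allowNetwork0 = Service Console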

    Duncan

    Blogs: http://www.yellow-bricks.com

    If you find this information useful, please give points to "correct" or "useful".

  • SQL, vmotion and RDM

    Hello

I am a beginner and I am trying to create a virtual machine for a SQL 2005 database using RDM LUNs.

I created my VM on a LUN that is shared between the two ESX hosts (for vMotion).

Now I would like to add 2 RDM LUNs: the first on a RAID 5 LUN for the database, the second on a RAID 1 LUN for the logs.

But once I add the RDMs to my VM and try to migrate the virtual machine, the compatibility field says: "Unable to migrate from esx1 to esx2. Virtual disk is a mapped direct-access LUN that is not accessible."

Do you have any suggestions?

    Sorry for my English

    Hello

LUNs that are configured for RDM must be visible to all ESX servers participating in the vMotion activity. They must also have the same LUN numbering.

    for example

ESX 1:

LUN 0 = VMFS datastore

LUN 1 = RDM

ESX 2:

LUN 0 = VMFS datastore

LUN 1 = RDM
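To compare what each host actually sees, you can list the storage paths from each host's service console; a sketch (the exact command and output format vary by ESX version):

esxcfg-mpath -l

The LUN number shown for the RDM LUN must be the same on both hosts.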

    http://blog.laspina.ca/

• vCenter 5.1 HA question

I am on 5.1.0, build 880146. I've just set up 2 hosts with HA configured and wanted to do a proof of concept for the customer. I left a test VM running on host 1 and rebooted the host. As expected, because the server rebooted, the VM restarted on host 2. I then tried to simulate a switch failure by pulling the network cables from host 1 to the switch. I have a two-vswitch configuration. Each host has 4 NICs.

vswitch 0 - vnic0 is the active mgmt port with vnic2 as standby; vnic2 is active for vmotion with vnic0 as standby

vswitch 1 - vm network with vnic1 and vnic3 both active

    Same configuration of network on both hosts.

When I pull the cables, nothing happens, even after 2 minutes. The connections on vswitch 1 show as disconnected. I have datastore heartbeating set to select one of the cluster's datastores, of which I have 2.

I have tried reconfiguring HA and disabling/re-enabling it.

    Am I missing something here?

I guess that pswitch1 and pswitch2 are used for the uplinks on vSwitch0?

That's right: as long as the management network is working (i.e. heartbeats are exchanged, election traffic flows, the gateway/isolation address is reachable), HA does not kick in. In most cases it is not desirable for HA to get involved when only part of the network is down. If it is just a cabling problem and the virtual machines are still running without any problems on the VM network, you wouldn't usually want downtime, would you? This is why the default isolation response is set to "Leave powered on". In addition to network heartbeats, VMware has added storage heartbeats to avoid false positives.

IMO, with a highly available network configuration (i.e. several NICs connected to different physical switches), "Leave powered on" as the HA isolation response is the preferred choice.

    André

• ESX, number of NICs, NETIOC or traditional approach

    Hi all

    I have a question of design on the network of an ESX environment configuration.

In short, we must decide how many NICs per server we need.

    It's an ESX cluster for a Cloud Computing environment (hosting).

I told my boss that 8 NICs per server would be appropriate (4-port Ethernet cards).

He said, however, that I am crazy in the coconut with so many NICs per host, because of the complex network management/cabling, and that 2 NICs should be sufficient, or at most 4.

We don't know yet whether we'll use NETIOC or the traditional approach with multiple vSwitches to separate the network flows.

This is what I had in mind when not using NETIOC:

VM NETWORK:

vSwitch1 - eth0 (active) = physical NIC 0_port0

eth1 (standby) = physical NIC 1_port0

VMOTION NETWORK:

vSwitch2 - eth2 (active) = physical NIC 0_port1

eth3 (standby) = physical NIC 1_port1

IP STORAGE NETWORK:

vSwitch3 - eth4 (active) = physical NIC 2_port0

eth5 (standby) = physical NIC 3_port0

FAULT TOLERANCE NETWORK:

vSwitch4 - eth6 (active) = physical NIC 2_port1

eth7 (standby) = physical NIC 3_port1

    Is it really that crazy to have 8 adapters per ESX host?

If so, is 6 acceptable?

I think 6 would work if we combine vMotion and IP storage on the same vSwitch, or vMotion and fault tolerance on the same vSwitch.

He thinks that 4 is an absolute maximum.

Somehow, I don't think it's a good idea to combine vMotion/IP storage/fault tolerance on the same network adapter. I think that if we only get 4 adapters per host, we will forget IP storage and keep everything storage-related connected over Fibre Channel.

But maybe I am being too NIC-greedy here?

Currently we do not use fault tolerance, but I think there will be demand for it in the future. It may therefore be overkill to allocate separate physical NICs for it. It might be better to combine it with the VMOTION flow?

    That's what I had in mind when using NETIOC:

1 virtual switch, with shares for load balancing, and with load-based teaming enabled.

NETWORK FLOW SHARES:

VMOTION  20  |  VSWITCH1  |  eth0_ACTIVE
MGMT     10  |            |  eth1_ACTIVE
NFS      20  |            |  eth2_ACTIVE
FT       10  |            |  eth3_ACTIVE
VM       40  |            |  eth4_ACTIVE
             |            |  eth5_ACTIVE
             |            |  eth6_STANDBY
             |            |  eth7_STANDBY

NFS may need a higher share value. We don't know yet what kind of NFS datastores will be used. There will also be Fibre Channel-connected datastores.

In the NETIOC scenario, if 8 physical NICs really are too many, I guess we could make do with 6. But 6 also seems like a minimum to me in this situation, or could we get away with 4?

Also, about NETIOC: given that this kind of thing is still pretty new (since 4.1), does anyone here have experience with the NETIOC feature on a distributed switch?

I would get at least 6 NICs per server; better to have one or two NICs too many that you don't use at the beginning but keep as an option, than to end up blocked, unable to implement a feature (e.g. fault tolerance) because you have no free NICs left.

Otherwise, they should just go with 2x 10GB NICs and distribute the flows with NETIOC. That would greatly simplify cabling and management, and give more bandwidth.

    Well...

    Any input would be greatly appreciated.

Regarding best practices, you will need 8. vMotion and your management network (you had left that out) can share a pair, with the pair configured active/standby for management and standby/active for vMotion. FT, VM network and IP storage should each be isolated and redundant. Depending on your I/O load, you could condense down to four network adapters, with FT/VMs and IP storage sharing the same NICs but on different VLANs.

• Host entering maintenance mode in a DRS-enabled cluster does NOT trigger automatic vMotion

I'm running 5.5 U2.

I have a cluster with DRS enabled and set to fully automated, but when I put a host into maintenance mode the VMs do NOT vMotion to another host in the cluster automatically. I have to manually vMotion each virtual machine to a manually selected host. There are a few VMs with DRS disabled on them because they must ONLY run on the host they are assigned to (virtual machine replication), but all the other VMs are set to "Default (automated)". Could it be because I only have two hosts in the cluster?

(Screenshots: Capture.JPG, Capture1.JPG)

    Try this.

    Hope it will solve your problem.

    http://www.vmwareminds.com/troubleshooting-maintenance-mode-process-get-stuck-at-2/

  • VMotion and Active Directory

    Hi all

First post here. I have read through the communities here and searched Google to find the answer to my VMware/AD question, but have not found a definitive answer:

Most say that it is NOT recommended to use snapshots on an AD VM, as AD may be damaged. If that is the case, should AD VMs also not be VMotioned, because when you VMotion, you take a snapshot?

What recommendations/experiences with VMotion and AD do you all have?

    Thank you

    VEEI

VMotion does not take a snapshot of the disk. It takes a bitmap of RAM, sends that to the target ESX server, copies over the RAM that changed during the transfer, and then hands execution over.

I don't take snapshots of my AD VMs for any reason, except for testing the snapshot component itself in a test environment. You wouldn't use snapshots on production AD; it is simply not good, in my opinion.

I wouldn't ban snapshots outright, though, so I don't switch to one of the independent disk modes.

• Storage vMotion with SyncRep on EqualLogic SANs

    Hello

We recently bought 2 EqualLogic SANs (PS6510ES) and turned on synchronous replication between them.

We use ESXi 5.5 and followed the best practices guide to disable delayed TCP ACK and set the login timeout value to 60.

Everything seemed to work fine until we tried Storage vMotion/migration and cloning of VMs on the SANs.

When we did a storage migration between 2 volumes on the EQL SAN (the target volume having SyncRep active), the target volume would go out of sync and the Storage vMotion would then sometimes hang.

The worst part is that when such a problem occurred, all other volumes on the EQL stopped responding as well. The volumes would not respond until we rebooted the corresponding ESXi host (NOTE: simply shutting down/powering off the host won't help; the EQL SAN only returns to normal after a reboot of the ESXi host).

If we pause/turn off SyncRep before the Storage vMotion, the problem does not happen.

    Is this normal or not?

In addition, whenever the problem occurred, the write latency of the corresponding volume would spike and we would receive alerts similar to the following:

iSCSI connection to target 'xx.xx.xx.xx:3260, iqn.2001-05.com.equallogic:xxxxx-test-volume' from initiator 'xx.xx.xx.xx:xxxxx, eqlinitiatorsyncrep' failed for the following reason: initiator disconnected from target during login.

    Thanks in advance.

    Hi Joerg,

What is an SR #? I opened a support case on June 24 and am still waiting for Dell to contact me.

    Ryan

• 1000V VSM losing contact with VEMs after vMotion

    Hello

We have installed the 1000v VSMs as virtual machines on HP blade servers with Virtual Connect modules.

This worked well and we migrated most of the VMs to the 1000v. Even HA between the primary and backup VSMs worked.

Now we see that if we live-migrate the active VSM, it loses all contact with the VEMs, while the backup VSM only takes over if we power off the active one. The same thing happened when the customer's Trend Deep Security Manager did an update to the VSM VM.

We have the 1000v deployed in L3 mode using the control interface, and the VEMs have their own vmkernel IP address, which in this case is on the same IP subnet as the VSM control interface.

We have the 1000v VSM control interface and management interface configured as system VLANs.

1000V version SV2(2.1) on VMware 5.1.

Any ideas? This kind of thing is usually a system VLAN issue.

Config excerpts below.

    Thank you

    Pat

vlan 60

name mgmt   <--- ssh to VSM

vlan 66

name VEM-mgmt   <--- VSM to VEM

port-profile type vethernet n1kv_NEXUS_L3_CONTROL

capability l3control

vmware port-group

switchport mode access

switchport access vlan 66

no shutdown

system vlan 66

state enabled

port-profile type vethernet n1kv_VLAN60

vmware port-group

switchport mode access

switchport access vlan 60

no shutdown

system vlan 60

state enabled

port-profile type ethernet n1kv_UPLINK

vmware port-group

switchport mode trunk

switchport trunk allowed vlan 4,7,9,18,54,57-66,68,71,94-95

service-policy type queuing output uplink-policy-map

channel-group auto mode on mac-pinning

no shutdown

system vlan 60,66

max-ports 32

state enabled

svs-domain

domain id 10

control vlan 1

packet vlan 1

svs mode L3 interface control0

svs connection vcenter

protocol vmware-vim

remote ip address 172.30.60.10 port 80

vmware dvs uuid "f7 29 21 50 82 15 50 7f-52 20 2d 6c f1 6d 89 a7" datacenter-name Gatwick

admin user n1kUser

max-ports 30000

connect

The following is from when it is working:

show svs domain

SVS domain config:

Domain id: 10

Control vlan: NA

Packet vlan: NA

L2/L3 control mode: L3

L3 control interface: control0

Status: Config push to VC successful.

Control type multicast: No

show svs connections

connection vcenter:

    IP address: 172.30.60.10

    remote port: 80

    Protocol: vmware-vim https

certificate: default

    datacenter name: Gatwick

    Admin: n1kUser (user)

    Max ports: 30000

DVS uuid: f7 29 21 50 82 15 50 7f-52 20 2d 6c f1 6d 89 a7

    config status: enabled

    operational status: connected

    Sync status: complete

    version: VMware vCenter Server 5.1.0 build-1064983

    VC-uuid: 8C96DBA4-C1AA-490E-86E4-50B76DFA31AC

Yes, the problem is that any port-profile with "capability l3control" is supposed to work only with vmk interfaces.

You are trying to put a VM interface on a port-profile with l3control. We used to allow this, but for some reason it was removed.

Create another port-profile just for control0 and see if that fixes the problem; a sketch follows. Make it identical except for the "capability l3control" line.
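A minimal sketch of such a port-profile, copied from the n1kv_NEXUS_L3_CONTROL profile above minus the l3control line (the profile name is made up; adjust the VLAN to wherever control0 lives):

port-profile type vethernet n1kv_CTRL0
  vmware port-group
  switchport mode access
  switchport access vlan 66
  no shutdown
  system vlan 66
  state enabled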

Your version is fine. I'll have to play with this in my lab. I'm kind of confused as to why your VSM2 gets stuck while booting when you vMotion VSM1. That shouldn't happen.

    Louis
