EVC / Proposed migration

Hello

I am planning a migration of some virtual machines and want to clarify something about EVC compatibility.

My existing servers use Intel 53XX / 54XX processors and will be decommissioned after the migration has taken place. My new servers use Intel 5500 processors. Everything will be on VI3.5, and a single vCenter 2.5 server will manage both clusters.

I propose to use Storage VMotion to migrate the virtual machine disks between the old and new datastores, and to use VMotion, if possible, to migrate the virtual machines between clusters.

My proposed plan is:

Upgrade the existing cluster to ESX 3.5 Update 4.

Create a new cluster with ESX 3.5 Update 4.

Enable EVC on the new cluster with the baseline set to the Intel Core 2 level, which is compatible with the older cluster's processors.

Migrate the VM disks with Storage VMotion and the VMs themselves with VMotion between the old and new clusters.

Decommission the old cluster.

Has anyone used this approach? Does it work, and are there any gotchas I should be aware of?

Thank you

If you consider any comments useful, please award points.

---

MCSA, MCTS, VCP, VMware vExpert 2009

http://blog.vadmin.ru

Tags: VMware

Similar Questions

  • iSCSI to fibre and 5.5 to 6 migration

    So I have a project going on where I am moving from iSCSI-connected storage to fibre-connected storage, using 2 datastores.  I am adding 2 new hosts and removing the 4 old hosts. I have 8 hosts that I need to change, but I don't have the downtime to shut everything down, migrate, and power back up, so I am trying to find the best possible scenario for this initiative. So far I have 3 hosts on fibre, but the other 5 are not.  What I'm running into is that I have DRS off, so I have to manually manage my VMs so I'm not overtaxing any one host. So I thought I'd make a temp cluster to hold my 4 old hosts that I do not plan to upgrade or reuse; this way I can keep my more critical data/VMs there while I prep and move all my other VMs to the new fibre-connected 6.0.0 U1 hosts. But I don't know if it's the right way to address the issue.  And I'm sure I missed something while typing this up, but I'll try to find the best way to describe what I have and what I'm trying to accomplish. Looking for some thoughts or ideas, as I have never done this kind of move before. Thanks for your time.

    I did this a couple of times in the past, and what you describe should work just fine. If you don't have a lot of cluster settings you want to keep (i.e. settings you can easily reconfigure on a new cluster), simply create a new DRS/HA cluster with the new hosts, migrate virtual machines to free up a host on the old cluster, remove/reinstall that old host into the new cluster, and then continue with the remaining virtual machine migrations. If the hosts in the new cluster are CPU-compatible (i.e. share the same EVC mode), the migration can be done without interruption of service.

    A few additional thoughts:

    • If you run hot-add based VM backups, ensure that the backup appliances are available on both clusters with access to the appropriate datastores.
    • Depending on the storage system, you may benefit (I/O performance) from distributing the virtual machines across more than just 2 datastores.

    André

  • Migration... explained.

    Hi, back again for some advice...

    I'm looking for someone who can try to explain the three migration methods vCenter offers for a VM.

    Below, I have tried to summarize my understanding; I'm looking for confirmation as to whether it's correct or not.

    Change virtual machine host - vMotion

    This moves the virtual machine's resources, i.e. CPU and memory, to another host but leaves the VMDK file in the original datastore (which the new host must have access to). This is the one I'm unsure about; I don't understand how you can move a virtual machine without moving its VMDK file.

    Change virtual machine datastore - Storage vMotion

    This changes the datastore where the virtual machine's VMDK file is stored, leaving the VM's resources such as CPU and memory on the original host. Once again I'm still a little confused, as with the previous method, about not moving the VMDK files.

    Change VM host and datastore - vMotion & Storage vMotion

    Changes both the host and the datastore where the virtual machine resides.

    Can anyone clear this up for me? Can you move a virtual machine to another host without moving its VMDK file? If my summary is correct, are you relocating it to another host only to make use of the new host's CPU and memory, etc.?

    Looking online, I think the answer is that when you move a virtual machine to another host, you move the configuration and memory state that tell the host how many resources the virtual machine should receive in terms of CPU and memory, and this has nothing to do with the VMDK hard disk file...

    Thanks in advance!

    You understand it correctly.

    For vMotion, it is necessary that the two hosts — source and target — have access to the same shared datastore (for example on a separate NAS/iSCSI/FC storage system), so none of the files need to be copied.

    Storage vMotion only moves the virtual machine files to another datastore that is accessible from the same host. It is like moving a piece of paper from the left hand to the right. It is completely transparent to the guest OS. Changing both the host and the datastore is more complex, but it works at the hypervisor/ESXi layer and is still completely transparent to the guest OS.

    Before a migration begins, a validation process checks whether sufficient resources are available on the target host and/or the target datastore.
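
    For readers who script these operations, here is a minimal pyVmomi (Python) sketch contrasting the two operations. It assumes a reachable vCenter; the names "vm01", "esx02.example.com" and "datastore2" are placeholders, so treat it as an illustration rather than this thread's own procedure:

     import ssl
     from pyVim.connect import SmartConnect
     from pyVim.task import WaitForTask
     from pyVmomi import vim

     ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
     si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
     content = si.RetrieveContent()

     def find(vimtype, name):
         """Return the first inventory object of the given type with the given name."""
         view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
         try:
             return next(o for o in view.view if o.name == name)
         finally:
             view.DestroyView()

     vm = find(vim.VirtualMachine, "vm01")

     # vMotion: change only the host; the VMDKs stay on the shared datastore.
     WaitForTask(vm.MigrateVM_Task(host=find(vim.HostSystem, "esx02.example.com"),
                                   priority=vim.VirtualMachine.MovePriority.defaultPriority))

     # Storage vMotion: change only the datastore; the VM keeps running on its host.
     WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=find(vim.Datastore, "datastore2"))))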

    André

  • DB migration from Windows to Solaris using RMAN backup

    Hi all

    Current database version: 10.2.0.3
    OS version: Windows 2003 Server with SP2

    Proposed database version: 10.2.0.3
    OS version: Solaris 5.8 or higher

    We want a proposal for migrating a DB from a Windows to a Solaris box; the only available backup is an RMAN backup.
    I know we can do it using expdp/impdp or RMAN transportable tablespaces.

    I would like to know if there is another method to migrate cross-platform using an RMAN backup.

    Thanks in advance.

    http://www.databasejournal.com/features/Oracle/article.php/3600931/Oracle-10G-transportable-tablespace-enhancements-cross-platform-conversion-capabilities.htm

    http://download.Oracle.com/docs/CD/B19306_01/backup.102/b14191/dbxptrn.htm

  • EVC level after Migration from ESX4.0 to ESXi5.0

    Hello!

    I have a problem (complaint) with the EVC level of my new ESXi 5.0 cluster under vCenter 5. The EVC level is set to 'Sandy Bridge' and the four hosts in the cluster are ProLiant DL380p Gen8 servers with 16-core E5-2650 processors. It's key to highlight that the ESXi 5 platform, including the vCenter server, is all new hardware.

    Our old cluster was ESX4.0 running on "two" DL380 G5 servers with a (physical) vCenter 4 server.

    Using the 'Add host' option, the OLD 4.0 hosts were imported into the vCenter 5 environment without a problem, and vMotion worked like a dream for all servers from old (4) to new (5).

    It's key to note that the two VMware infrastructures use the same back-end (SAN) datastores.

    At this point, once vMotioned, the servers still had their former EVC levels. That's the default behavior, and everything looked good.

    So we power-cycled the virtual machines. And then, on their return...

    the EVC level changed... to "Westmere"?

    I have tried everything, but I can't get any of the vMotioned servers to the 'Sandy Bridge' EVC level.

    If I create a new server from a template or from scratch, it starts with a 'Sandy Bridge' EVC level, but servers vMotioned from the ESX4 infrastructure now only go as high as "Westmere".

    I had a good look around the forums etc., but nothing helps.

    I appreciate that the EVC level is only used for "Enhanced vMotion" and that performance shouldn't differ much unless you use encryption - but we do use encryption.

    EVCLevel.jpg

    Any help or ideas appreciated

    The EVC mode is turned on, so I suspect there may be some inherited entries in the VMX files that affect it. Can you upgrade the virtual hardware to the latest version supported by your hosts and test this again (including resetting the CPUID mask)?

    If this also fails, or you are already on the correct hardware version, can you post the VMX file of an affected VM and the VMX file of a newly created VM (with the same virtual hardware version)?

    Another option to test whether the VMX file is the cause is to unregister the virtual machine and create a new virtual machine, but attach the existing disks - then power it on and check the EVC mode. But it might be easier to upgrade the virtual hardware, reset the CPUID masks, remove the incriminated entries from the VMX, and power it back on.
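
    To compare the two VMX files, a plain-Python sketch like the following (file names are hypothetical) prints every key that differs, which quickly surfaces inherited cpuid.* mask entries, if any:

     def parse_vmx(path):
         """Read a VMX file into a dict of key -> value."""
         entries = {}
         with open(path, encoding="utf-8", errors="replace") as f:
             for line in f:
                 line = line.strip()
                 if line and not line.startswith("#") and "=" in line:
                     key, _, value = line.partition("=")
                     entries[key.strip()] = value.strip().strip('"')
         return entries

     affected = parse_vmx("affected-vm.vmx")   # VM stuck at "Westmere"
     fresh = parse_vmx("fresh-vm.vmx")         # freshly created "Sandy Bridge" VM

     for key in sorted(set(affected) | set(fresh)):
         if affected.get(key) != fresh.get(key):
             print(f"{key}: affected={affected.get(key)!r} fresh={fresh.get(key)!r}")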

    EDIT: Of course, upgrade the VMware Tools before the virtual hardware.

    Cheers,

    Jon

    Post edited by: jrmunday

  • Live migration between an Intel "Haswell" Cluster and an Intel 'Sandy Bridge' Cluster

    Hi all

    Need your help on this. As per our details below, we have two clusters, ClusterA (EVC mode is Intel Haswell) and ClusterB (EVC mode is Intel Sandy Bridge). The current problem we are facing is that we cannot live-migrate VMs from ClusterA to ClusterB, but it succeeds once we stop the virtual machine and migrate it to B. Live migration of VMs from ClusterB to ClusterA works without issue. My question is: is there a way we can live-migrate VMs from ClusterA to B without changing the EVC mode on the two clusters?

    Host details:

    Cluster A:
    CPU: Intel Xeon CPU E5-2698 v3 @ 2.30 GHz
    RAM: 256 GB
    EVC: Intel "Haswell" generation


    Cluster B:

    CPU: Intel Xeon CPU E5-2670 @ 2.60 GHz

    RAM: 256 GB

    EVC: Intel 'Sandy Bridge' generation

    If a processor can support EVC level LN, it can also support levels LN-1 down to L0. For example, a processor that supports the Intel 'Sandy Bridge' generation EVC baseline has an EVC level of L4. Therefore, it can also support EVC levels L0, L1, L2 and L3. However, it cannot support EVC level L5, which corresponds to the Intel 'Ivy Bridge' generation. Intel EVC baselines are listed in table 1.1.

    Ref: VMware KB: Enhanced vMotion Compatibility (EVC) processor support

    Thus, virtual machines running in your ClusterB (EVC mode Intel Sandy Bridge) should be able to move freely (vMotion) from B to ClusterA (EVC mode Intel Haswell) and from A back to B, provided the virtual machine was powered on while in Cluster B and does not go through a power cycle while in Cluster A. If you power it off and on again, it will pick up the new EVC mode; otherwise it will continue to run with its original EVC mode.

    But if you take the case of the virtual machines powered on within your ClusterA, those cannot be vMotioned to Cluster B, because they are running with the higher EVC mode and cannot be dropped to a lower EVC level by live migration; you can only migrate them cold (power off and move).

    Ideally, if you want vMotion compatibility between the two clusters, you should set the EVC mode of both clusters to Intel Sandy Bridge. But since you ask for a solution that does not change the EVC mode at all, the proposed approach would be:

    Power off the virtual machines in your Cluster A phase by phase, cold-migrate them to Cluster B, power them on there and let these virtual machines pick up Cluster B's EVC mode, then move them back into Cluster A - in short, bring all the VMs onto Cluster B's EVC mode. But you will still face a challenge: if a VM goes through a power cycle while in Cluster A, it will pick up the Haswell EVC mode and you will lose vMotion compatibility between clusters for that virtual machine again.

    So, what you're asking can be done using the method above, but it would be better to have both clusters on the same EVC mode so that you don't have to worry about power cycles.

    Note: You need to use the vSphere Web Client.
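
    As an alternative to clicking through the client, the phase-by-phase cold migration can also be scripted. A hedged pyVmomi sketch (the connection and find() helper from the first sketch earlier on this page are assumed; the VM and host names are placeholders):

     from pyVim.task import WaitForTask
     from pyVmomi import vim

     def cold_migrate(vm, target_host):
         """Power off, move to a host in the other cluster, power back on."""
         if vm.runtime.powerState == vim.VirtualMachine.PowerState.poweredOn:
             WaitForTask(vm.PowerOffVM_Task())  # consider a guest OS shutdown first
         WaitForTask(vm.MigrateVM_Task(host=target_host,
                                       priority=vim.VirtualMachine.MovePriority.defaultPriority))
         WaitForTask(vm.PowerOnVM_Task())       # the VM now picks up Cluster B's EVC baseline

     cold_migrate(find(vim.VirtualMachine, "app01"), find(vim.HostSystem, "esxb1.example.com"))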

  • Impact of the vPC Migration?

    Hello

    I am preparing for the migration of a back-to-back inter-data-center vPC architecture from 1 Gbps to 10 Gbps links.  Currently, the LAG is a 2 Gbps aggregate and I need to push it to 20 Gbps.  The connections between these two data centers use dark fiber and span a single STP domain (for each associated VLAN).  I have attached a diagram which illustrates this topology in detail, including the spanning-tree protocol (Rapid PVST+) port roles and vPC roles, etc.  The diagram also illustrates my proposed implementation method.

    I am very concerned about the kind of convergence I will experience during this migration.  We have some very important systems that use this connectivity, and I can't afford much impact.  The subtleties of BPDU forwarding across vPC trunks (and how they differ from normal trunks) complicate things.

    What are your thoughts on this plan?  What do you think the impact of this migration would be?  (I will obviously request a window to accommodate such a change, but I would like to be able to prepare the business for what lies ahead.)

    Thanks in advance for your thoughts.  It is much appreciated.

    Hi, Jeffrey.

    The procedure you are following is correct, and you should not see a major impact on the network. Even at step 7, there will be no impact on the network.

    Check the compatibility of the 1 G and 10 G modules in the same VDC, and whether any OS upgrade is required, before you plan the activity or migration.

    Thank you

    Aouizerate

  • DMVPN spoke issues after migrating a dual ISR2 3925 hub to ASR-1001X

    Hello world

    After migrating our dual-hub DMVPN solution from ISR2 3925 routers to the ASR-1001X (running asr1001x-universalk9.03.12.03.S.154-2.S3-std.SPA.bin), we started to have problems with spoke tunnels flapping (going up and down) and sometimes never coming up.

    Running 'show dmvpn' on a spoke, it is stuck in NHRP state towards our hub. To work around the problem, we run 'shutdown' and then 'no shutdown' on the spoke's tunnel interface so that the DMVPN actually comes up. Running "clear crypto session" on the spoke also often resolves the problem. So it seems the issue has something to do with IPsec.

    When the problem has occurred, with debug crypto ipsec, crypto isakmp, crypto socket and crypto engine enabled, the following can be seen on the hub:

     Jun 25 10:01:41 SUMMERT: ISAKMP:(46580):Sending NOTIFY DPD/R_U_THERE protocol 1 spi 140130067548488, message ID = 629121681 Jun 25 10:01:41 SUMMERT: ISAKMP:(46580): seq. no 0x64B2238C Jun 25 10:01:41 SUMMERT: ISAKMP:(46580): sending packet to  my_port 500 peer_port 500 (I) QM_IDLE Jun 25 10:01:41 SUMMERT: ISAKMP:(46580):Sending an IKE IPv4 Packet. Jun 25 10:01:41 SUMMERT: ISAKMP:(46580):purging node 629121681 Jun 25 10:01:41 SUMMERT: ISAKMP:(46580):Input = IKE_MESG_FROM_TIMER, IKE_TIMER_IM_ALIVE Jun 25 10:01:41 SUMMERT: ISAKMP:(46580):Old State = IKE_P1_COMPLETE New State = IKE_P1_COMPLETE Jun 25 10:01:41 SUMMERT: ISAKMP (46580): received packet from  dport 500 sport 500 ISP1-DMVPN (I) QM_IDLE Jun 25 10:01:41 SUMMERT: ISAKMP: set new node 3442686097 to QM_IDLE Jun 25 10:01:41 SUMMERT: ISAKMP:(46580): processing HASH payload. message ID = 3442686097 Jun 25 10:01:41 SUMMERT: ISAKMP:(46580): processing NOTIFY DPD/R_U_THERE_ACK protocol 1 spi 0, message ID = 3442686097, sa = 0x7F72986867D0 Jun 25 10:01:41 SUMMERT: ISAKMP:(46580): DPD/R_U_THERE_ACK received from peer , sequence 0x64B2238C Jun 25 10:01:41 SUMMERT: ISAKMP:(46580):deleting node 3442686097 error FALSE reason "Informational (in) state 1" Jun 25 10:01:41 SUMMERT: ISAKMP:(46580):Input = IKE_MESG_FROM_PEER, IKE_INFO_NOTIFY Jun 25 10:01:41 SUMMERT: ISAKMP:(46580):Old State = IKE_P1_COMPLETE New State = IKE_P1_COMPLETE Jun 25 10:01:42 SUMMERT: IPSEC: delete incomplete sa: 0x7F729923A438 Jun 25 10:01:42 SUMMERT: IPSEC(send_delete_notify_kmi): not sending KEY_ENGINE_DELETE_SAS Jun 25 10:01:42 SUMMERT: ISAKMP:(46580):purging node 1111296046 Jun 25 10:01:44 SUMMERT: ISAKMP (46580): received packet from  dport 500 sport 500 ISP1-DMVPN (I) QM_IDLE Jun 25 10:01:44 SUMMERT: ISAKMP: set new node 928225319 to QM_IDLE Jun 25 10:01:44 SUMMERT: ISAKMP:(46580): processing HASH payload. message ID = 928225319 Jun 25 10:01:44 SUMMERT: ISAKMP:(46580): processing SA payload. message ID = 928225319 Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Checking IPSec proposal 1 Jun 25 10:01:44 SUMMERT: ISAKMP: transform 1, ESP_AES Jun 25 10:01:44 SUMMERT: ISAKMP: attributes in transform: Jun 25 10:01:44 SUMMERT: ISAKMP: encaps is 2 (Transport) Jun 25 10:01:44 SUMMERT: ISAKMP: SA life type in seconds Jun 25 10:01:44 SUMMERT: ISAKMP: SA life duration (basic) of 3600 Jun 25 10:01:44 SUMMERT: ISAKMP: SA life type in kilobytes Jun 25 10:01:44 SUMMERT: ISAKMP: SA life duration (VPI) of 0x0 0x46 0x50 0x0 Jun 25 10:01:44 SUMMERT: ISAKMP: authenticator is HMAC-SHA Jun 25 10:01:44 SUMMERT: ISAKMP: key length is 256 Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):atts are acceptable. Jun 25 10:01:44 SUMMERT: CRYPTO_SS(TUNNEL SEC): Active open, socket info: local  /255.255.255.255/0, remote  /255.255.255.255/0, prot 47, ifc Tu3300 Jun 25 10:01:44 SUMMERT: IPSEC(recalculate_mtu): reset sadb_root 7F7292E64990 mtu to 1500 Jun 25 10:01:44 SUMMERT: CRYPTO_SS(TUNNEL SEC): Sending Socket Ready message Jun 25 10:01:44 SUMMERT: ISAKMP:(46580): processing NONCE payload. message ID = 928225319 Jun 25 10:01:44 SUMMERT: ISAKMP:(46580): processing ID payload. message ID = 928225319 Jun 25 10:01:44 SUMMERT: ISAKMP:(46580): processing ID payload. 
message ID = 928225319 Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):QM Responder gets spi Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Node 928225319, Input = IKE_MESG_FROM_PEER, IKE_QM_EXCH Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Old State = IKE_QM_READY New State = IKE_QM_SPI_STARVE Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Node 928225319, Input = IKE_MESG_INTERNAL, IKE_GOT_SPI Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Old State = IKE_QM_SPI_STARVE New State = IKE_QM_IPSEC_INSTALL_AWAIT Jun 25 10:01:44 SUMMERT: IPSEC(crypto_ipsec_sa_find_ident_head): reconnecting with the same proxies and peer  Jun 25 10:01:44 SUMMERT: IPSEC(crypto_ipsec_update_ident_tunnel_decap_oce): updating profile-shared Tunnel3300 ident 7F7298B2BF80 with lookup_oce 7F7296BF5440 Jun 25 10:01:44 SUMMERT: IPSEC(create_sa): sa created, (sa) sa_dest= , sa_proto= 50, sa_spi= 0x14F40C56(351538262), sa_trans= esp-aes 256 esp-sha-hmac , sa_conn_id= 27873 sa_lifetime(k/sec)= (4608000/3600), (identity) local= :0, remote= :0, local_proxy= /255.255.255.255/47/0, remote_proxy= /255.255.255.255/47/0 Jun 25 10:01:44 SUMMERT: IPSEC(create_sa): sa created, (sa) sa_dest= , sa_proto= 50, sa_spi= 0x3B4731D7(994521559), sa_trans= esp-aes 256 esp-sha-hmac , sa_conn_id= 27874 sa_lifetime(k/sec)= (4608000/3600), (identity) local= :0, remote= :0, local_proxy= /255.255.255.255/47/0, remote_proxy= /255.255.255.255/47/0 Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Received IPSec Install callback... proceeding with the negotiation Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Successfully installed IPSEC SA (SPI:0x14F40C56) on Tunnel3300 Jun 25 10:01:44 SUMMERT: ISAKMP:(46580): sending packet to  my_port 500 peer_port 500 (I) QM_IDLE Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Sending an IKE IPv4 Packet. Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Node 928225319, Input = IKE_MESG_FROM_IPSEC, IPSEC_INSTALL_DONE Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Old State = IKE_QM_IPSEC_INSTALL_AWAIT New State = IKE_QM_R_QM2 Jun 25 10:01:44 SUMMERT: ISAKMP (46580): received packet from  dport 500 sport 500 ISP1-DMVPN (I) QM_IDLE Jun 25 10:01:44 SUMMERT: ISAKMP: set new node 1979798297 to QM_IDLE Jun 25 10:01:44 SUMMERT: ISAKMP:(46580): processing HASH payload. message ID = 1979798297 Jun 25 10:01:44 SUMMERT: ISAKMP:(46580): processing NOTIFY PROPOSAL_NOT_CHOSEN protocol 3 spi 351538262, message ID = 1979798297, sa = 0x7F72986867D0 Jun 25 10:01:44 SUMMERT: ISAKMP:(46580): deleting spi 351538262 message ID = 928225319 Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):deleting node 928225319 error TRUE reason "Delete Larval" Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):peer does not do paranoid keepalives. Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Enqueued KEY_MGR_DELETE_SAS for IPSEC SA (SPI:0x3B4731D7) Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):deleting node 1979798297 error FALSE reason "Informational (in) state 1" Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Input = IKE_MESG_FROM_PEER, IKE_INFO_NOTIFY Jun 25 10:01:44 SUMMERT: ISAKMP:(46580):Old State = IKE_P1_COMPLETE New State = IKE_P1_COMPLETE Jun 25 10:01:44 SUMMERT: IPSEC: delete incomplete sa: 0x7F729923A340 Jun 25 10:01:44 SUMMERT: IPSEC(key_engine_delete_sas): delete SA with spi 0x3B4731D7 proto 50 for  Jun 25 10:01:44 SUMMERT: IPSEC(update_current_outbound_sa): updated peer  current outbound sa to SPI 0 Jun 25 10:01:44 SUMMERT: IPSEC(send_delete_notify_kmi): not sending KEY_ENGINE_DELETE_SAS Jun 25 10:01:44 SUMMERT: CRYPTO_SS(TUNNEL SEC): Sending request for CRYPTO SS CLOSE SOCKET

     #sh pl ha qf ac fe ipsec data drop
     ------------------------------------------------------------------------
     Drop Type  Name                                   Packets
     ------------------------------------------------------------------------
     3          IN_US_V4_PKT_FOUND_IPSEC_NOT_ENABLED   127672
     19         IN_OCT_ANTI_REPLAY_FAIL                13346
     20         IN_UNEXP_OCT_EXCEPTION                 4224
     33         OUT_V4_PKT_HIT_IKE_START_SP            1930
     62         IN_OCT_MAC_EXCEPTION                   9

     #sh plat hard qfp act stat drop | e _0_
     -------------------------------------------------------------------------
     Global Drop Stats                      Packets       Octets
     -------------------------------------------------------------------------
     Disabled                               1             82
     IpFragErr                              170536        246635169
     IpTtlExceeded                          4072          343853
     IpsecIkeIndicate                       1930          269694
     IpsecInput                             145256        30071488
     Ipv4Acl                                2251965       215240194
     Ipv4Martian                            6248          692010
     Ipv4NoAdj                              43188         7627131
     Ipv4NoRoute                            278           27913
     Ipv4Unclassified                       6             378
     MplsNoRoute                            790           69130
     MplsUnclassified                       1             60
     ReassTimeout                           63            10156
     ServiceWireHdrErr                      2684          585112

    In addition, after running "logging dmvpn rate-limit 20" on the hub:

     %DMVPN-3-DMVPN_NHRP_ERROR: Tunnel292: NHRP Encap Error for Resolution Request , Reason: protocol generic error (7) on (Tunnel:  NBMA: )

    On both spokes, the following can be seen in the debugs as well:

     *Jun 25 09:17:26.884: ISAKMP:(1032): sitting IDLE. Starting QM immediately (QM_IDLE ) *Jun 25 09:17:26.884: ISAKMP:(1032):beginning Quick Mode exchange, M-ID of 1599359281 *Jun 25 09:17:26.884: ISAKMP:(1032):QM Initiator gets spi *Jun 25 09:17:26.884: ISAKMP:(1032): sending packet to  my_port 500 peer_port 500 (R) QM_IDLE *Jun 25 09:17:26.884: ISAKMP:(1032):Sending an IKE IPv4 Packet. *Jun 25 09:17:26.884: ISAKMP:(1032):Node 1599359281, Input = IKE_MESG_INTERNAL, IKE_INIT_QM *Jun 25 09:17:26.884: ISAKMP:(1032):Old State = IKE_QM_READY New State = IKE_QM_I_QM1 *Jun 25 09:17:26.940: ISAKMP (1032): received packet from  dport 500 sport 500 Global (R) QM_IDLE *Jun 25 09:17:26.940: ISAKMP:(1032): processing HASH payload. message ID = 1599359281 *Jun 25 09:17:26.940: ISAKMP:(1032): processing SA payload. message ID = 1599359281 *Jun 25 09:17:26.940: ISAKMP:(1032):Checking IPSec proposal 1 *Jun 25 09:17:26.940: ISAKMP: transform 1, ESP_AES *Jun 25 09:17:26.940: ISAKMP: attributes in transform: *Jun 25 09:17:26.940: ISAKMP: encaps is 2 (Transport) *Jun 25 09:17:26.940: ISAKMP: SA life type in seconds *Jun 25 09:17:26.940: ISAKMP: SA life duration (basic) of 3600 *Jun 25 09:17:26.940: ISAKMP: SA life type in kilobytes *Jun 25 09:17:26.940: ISAKMP: SA life duration (VPI) of 0x0 0x46 0x50 0x0 *Jun 25 09:17:26.940: ISAKMP: authenticator is HMAC-SHA *Jun 25 09:17:26.940: ISAKMP: key length is 256 *Jun 25 09:17:26.940: ISAKMP:(1032):atts are acceptable. *Jun 25 09:17:26.940: IPSEC(ipsec_process_proposal): proxy identities not supported *Jun 25 09:17:26.940: ISAKMP:(1032): IPSec policy invalidated proposal with error 32 *Jun 25 09:17:26.940: ISAKMP:(1032): phase 2 SA policy not acceptable! (local  remote ) *Jun 25 09:17:26.940: ISAKMP: set new node -1745931191 to QM_IDLE *Jun 25 09:17:26.940: ISAKMP:(1032):Sending NOTIFY PROPOSAL_NOT_CHOSEN protocol 3 spi 834718720, message ID = 2549036105 *Jun 25 09:17:26.940: ISAKMP:(1032): sending packet to  my_port 500 peer_port 500 (R) QM_IDLE *Jun 25 09:17:26.940: ISAKMP:(1032):Sending an IKE IPv4 Packet. *Jun 25 09:17:26.940: ISAKMP:(1032):purging node -1745931191 *Jun 25 09:17:26.940: ISAKMP:(1032):deleting node 1599359281 error TRUE reason "QM rejected" *Jun 25 09:17:26.940: ISAKMP:(1032):Node 1599359281, Input = IKE_MESG_FROM_PEER, IKE_QM_EXCH *Jun 25 09:17:26.940: ISAKMP:(1032):Old State = IKE_QM_I_QM1 New State = IKE_QM_I_QM1 *Jun 25 09:17:34.068: ISAKMP (1032): received packet from  dport 500 sport 500 Global (R) QM_IDLE *Jun 25 09:17:34.068: ISAKMP: set new node 1021264821 to QM_IDLE *Jun 25 09:17:34.072: ISAKMP:(1032): processing HASH payload. message ID = 1021264821 *Jun 25 09:17:34.072: ISAKMP:(1032): processing NOTIFY DPD/R_U_THERE protocol 1 spi 0, message ID = 1021264821, sa = 0x32741028 *Jun 25 09:17:34.072: ISAKMP:(1032):deleting node 1021264821 error FALSE reason "Informational (in) state 1" *Jun 25 09:17:34.072: ISAKMP:(1032):Input = IKE_MESG_FROM_PEER, IKE_INFO_NOTIFY *Jun 25 09:17:34.072: ISAKMP:(1032):Old State = IKE_P1_COMPLETE New State = IKE_P1_COMPLETE *Jun 25 09:17:34.072: ISAKMP:(1032):DPD/R_U_THERE received from peer , sequence 0x64B2279D *Jun 25 09:17:34.072: ISAKMP: set new node 716440334 to QM_IDLE *Jun 25 09:17:34.072: ISAKMP:(1032):Sending NOTIFY DPD/R_U_THERE_ACK protocol 1 spi 834719464, message ID = 716440334 *Jun 25 09:17:34.072: ISAKMP:(1032): seq. no 0x64B2279D *Jun 25 09:17:34.072: ISAKMP:(1032): sending packet to  my_port 500 peer_port 500 (R) QM_IDLE *Jun 25 09:17:34.072: ISAKMP:(1032):Sending an IKE IPv4 Packet. 
*Jun 25 09:17:34.072: ISAKMP:(1032):purging node 716440334 *Jun 25 09:17:34.072: ISAKMP:(1032):Input = IKE_MESG_FROM_PEER, IKE_MESG_KEEP_ALIVE *Jun 25 09:17:34.072: ISAKMP:(1032):Old State = IKE_P1_COMPLETE New State = IKE_P1_COMPLETE *Jun 25 09:17:35.356: ISAKMP:(1032):purging node 206299144

    Obviously something seems to be wrong with Phase 2 not coming up. But why does it come up after clearing the crypto session, or after shutting and re-enabling the tunnel interface on the spoke?

    Very weird. Also, looking at the hub debug messages, it seems the crypto is associated with the wrong tunnel interface, Tu3300, when it should be Tu2010. Normal, or a bug?

    The configuration of the hub looks like this:

     crypto keyring ISP1-DMVPN vrf ISP1-DMVPN pre-shared-key address 0.0.0.0 0.0.0.0 key  crypto isakmp policy 10 encr aes authentication pre-share crypto isakmp keepalive 10 3 periodic crypto isakmp nat keepalive 10 crypto isakmp profile ISP1-DMVPN keyring ISP1-DMVPN match identity address 0.0.0.0 ISP1-DMVPN keepalive 10 retry 3 crypto ipsec transform-set AES256-MD5 esp-aes 256 esp-md5-hmac mode tunnel crypto ipsec transform-set AES256-SHA-TRANSPORT esp-aes 256 esp-sha-hmac mode transport crypto ipsec profile ISP1-DMVPN set transform-set AES256-SHA AES256-SHA-TRANSPORT set isakmp-profile ISP1-DMVPN vrf definition ISP1-DMVPN description DMVPN-Outside-ISP1 rd 65527:10 ! address-family ipv4 exit-address-family ! ! interface TenGigabitEthernet0/0/0 no ip address ! interface TenGigabitEthernet0/0/0.71 description VPN;ISP1-DMVPN;Outside;VLAN71 encapsulation dot1Q 71 vrf forwarding ISP1-DMVPN ip address  255.255.255.128 no ip proxy-arp ip access-group acl_ISP1-DMVPN_IN in ! ip route vrf ISP1-DMVPN 0.0.0.0 0.0.0.0  name ISP1;Default ip access-list extended acl_ISP1-DMVPN_IN permit icmp any any permit esp any host  permit gre any host  permit udp any host  eq isakmp permit udp any host  eq non500-isakmp deny ip any any vrf definition 2010  description CUSTA - Customer A  rd 65527:2010 route-target export 65527:2010 route-target import 65527:2010 ! address-family ipv4 exit-address-family ! ! interface Tunnel2010 description CUSTA;DMVPN;Failover-secondary vrf forwarding 2010 ip address 10.97.0.34 255.255.255.240 no ip redirects ip mtu 1380 ip nhrp map multicast dynamic ip nhrp network-id 2010 ip nhrp holdtime 120 ip nhrp server-only ip nhrp max-send 1000 every 10 ip tcp adjust-mss 1340 tunnel source TenGigabitEthernet0/0/0.71 tunnel mode gre multipoint tunnel key 2010 tunnel vrf ISP1-DMVPN tunnel protection ipsec profile ISP1-DMVPN shared router bgp 65527 ! address-family ipv4 vrf 2010 redistribute connected metric 10 redistribute static metric 15 neighbor 10.97.0.39 remote-as 65028 neighbor 10.97.0.39 description spokerouter;Tunnel1 neighbor 10.97.0.39 update-source Tunnel2010 neighbor 10.97.0.39 activate neighbor 10.97.0.39 soft-reconfiguration inbound neighbor 10.97.0.39 prefix-list EXPORT-IVPN-VRF2010 out neighbor 10.97.0.39 route-map AllVRF-LocalPref-80 in neighbor 10.97.0.39 maximum-prefix 5000 80 default-information originate exit-address-family

    The spoke configuration looks like this:

     crypto keyring DMVPN01 pre-shared-key address 0.0.0.0 0.0.0.0 key  crypto isakmp policy 10 encr aes authentication pre-share crypto isakmp invalid-spi-recovery crypto isakmp profile DMVPN01 keyring DMVPN01 match identity address 0.0.0.0 keepalive 10 retry 3 crypto ipsec transform-set AES256-SHA esp-aes 256 esp-sha-hmac mode tunnel crypto ipsec transform-set AES256-SHA-TRANSPORT esp-aes 256 esp-sha-hmac mode transport crypto ipsec profile DMVPN01 set transform-set AES256-SHA-TRANSPORT set isakmp-profile DMVPN01 vrf definition inside rd 65028:1 route-target export 65028:1 route-target import 65028:1 ! address-family ipv4 exit-address-family ! interface Tunnel1 description DMVPN to HUB vrf forwarding inside ip address 10.97.0.39 255.255.255.240 no ip redirects ip mtu 1380 ip nhrp map 10.97.0.33  ip nhrp map multicast  ip nhrp map 10.97.0.34  ip nhrp map multicast  ip nhrp network-id 1 ip nhrp holdtime 120 ip nhrp nhs 10.97.0.33 ip nhrp nhs 10.97.0.34 ip nhrp registration no-unique ip nhrp registration timeout 60 ip tcp adjust-mss 1340 tunnel source GigabitEthernet0/0 tunnel mode gre multipoint tunnel key 2010 tunnel protection ipsec profile DMVPN01 shared router bgp 65028 ! address-family ipv4 vrf inside bgp router-id 172.28.5.137 network 10.97.20.128 mask 255.255.255.128 network 10.97.21.0 mask 255.255.255.0 network 10.97.22.0 mask 255.255.255.0 network 10.97.23.0 mask 255.255.255.0 network 172.28.5.137 mask 255.255.255.255 neighbor 10.97.0.33 remote-as 65527 neighbor 10.97.0.33 description HUB1;Tunnel2010 neighbor 10.97.0.33 update-source Tunnel1 neighbor 10.97.0.33 timers 10 30 neighbor 10.97.0.33 activate neighbor 10.97.0.33 send-community both neighbor 10.97.0.33 soft-reconfiguration inbound neighbor 10.97.0.33 prefix-list IROUTE-EXPORT out neighbor 10.97.0.33 maximum-prefix 5000 80 neighbor 10.97.0.34 remote-as 65527 neighbor 10.97.0.34 description HUB2;tunnel2010 neighbor 10.97.0.34 update-source Tunnel1 neighbor 10.97.0.34 timers 10 30 neighbor 10.97.0.34 activate neighbor 10.97.0.34 send-community both neighbor 10.97.0.34 soft-reconfiguration inbound neighbor 10.97.0.34 prefix-list IROUTE-EXPORT out neighbor 10.97.0.34 route-map AllVRF-LocalPref-80 in neighbor 10.97.0.34 maximum-prefix 5000 80 exit-address-family 

    If more information is needed, please say so.

    Any help or advice would be greatly appreciated!

    Thank you!

    It is possible that you are hitting this bug - a phase 2 negotiation failure:

    https://Tools.Cisco.com/bugsearch/bug/CSCup72039/?reffering_site=dumpcr

    [Too little detail to say with certainty:]

    M.

  • Regarding vSphere EVC and vMotion

    Hi all

    I have several questions about vMotion and EVC; can anyone help?

    1. If I mix several types of CPU (from the same CPU vendor) in a vSphere cluster without EVC enabled, what will happen? Can I vMotion VMs between the hosts?

    2. Can I vMotion a VM from a cluster with older CPUs to a cluster with more recent CPUs? Assume that EVC is not enabled on either cluster.

    3. Can I vMotion a VM from a cluster with older CPUs to a cluster with more recent CPUs? Assume that EVC is not enabled on the source cluster, but enabled on the destination cluster.

    4. Can I vMotion a VM from a cluster with older CPUs to a cluster with more recent CPUs? Assume that EVC is enabled on the source cluster, but not on the destination cluster.

    5. Can I vMotion a virtual machine from a stand-alone ESXi host into an EVC-enabled cluster? Assume that the CPUs are the same.

    6. Per the KB "EVC and CPU compatibility FAQ" linked below: if we disable EVC on a cluster, virtual machines can only be migrated to hosts of the same or a newer CPU generation.

    Why are virtual machines only able to move to other ESX/ESXi hosts that are of the same CPU generation or newer? This is beyond my understanding.

    VMware KB: EVC and CPU compatibility FAQ

    What is the impact on vSphere cluster features when I turn off EVC mode?

    Turning off EVC mode on a cluster affects vSphere cluster features in the following ways:

    • vSphere HA (High Availability): vSphere HA is not affected, because virtual machines go through a power cycle during a failover when they start on a new host. This lets the VM pick up the new CPU ID and allows it to start without problems.
    • vSphere DRS (Distributed Resource Scheduler):
      • vMotion: virtual machines are only able to move to other ESX/ESXi hosts of the same CPU generation or newer. If DRS is configured as fully automated, this can result in virtual machines eventually migrating to the hosts in the cluster with the newest CPU generation, from which they cannot be moved via vMotion to hosts with an older processor generation.
      • Storage vMotion: virtual machines can still be moved with Storage vMotion while EVC mode is disabled on a cluster.
    • Swapfile location: not affected.

    Thank you.

    1. If I mix several types of CPU (from the same CPU vendor) in a vSphere cluster without EVC enabled, what will happen? Can I vMotion VMs between the hosts?

    (a) You can perfectly well vMotion virtual machines between hosts with identical processors within the cluster.

    (b) For vMotions between different hosts, it depends on CPU instruction set parity and direction, i.e. older generation -> newer generation or vice versa.

    Consider the following: when a VM is powered on, it is presented with the CPU instruction sets of the physical host where it was powered on. It keeps these settings until the virtual machine is powered off, regardless of which host it is migrated to in the meantime. Therefore, as long as a host provides these CPU instruction sets, the VM can be vMotioned to it. Additional instruction sets on newer hosts are irrelevant for this and are simply hidden away from the guest. So old -> new should be fine, but new -> old most likely will not work, because the virtual machine runs with additional CPU instruction sets that are simply not supported by the older physical processor.

    2. Can I vMotion a VM from a cluster with older CPUs to a cluster with more recent CPUs? Assume that EVC is not enabled on either cluster.

    Yes, generally vMotion will work from older CPU generations to the most recent ones, as long as the common CPU instruction sets are supported by the destination host's CPU and enabled in the BIOS.

    3. Can I vMotion a VM from a cluster with older CPUs to a cluster with more recent CPUs? Assume that EVC is not enabled on the source cluster, but enabled on the destination cluster.

    4. Can I vMotion a VM from a cluster with older CPUs to a cluster with more recent CPUs? Assume that EVC is enabled on the source cluster, but not on the destination cluster.

    Yes, the same general vMotion requirements as explained above apply. Whether EVC is enabled on one, both or neither cluster does not matter (provided, of course, that a configured EVC level does not invert which hardware is effectively 'new' and 'old').

    5. Can I vMotion a virtual machine from a stand-alone ESXi host into an EVC-enabled cluster? Assume that the CPUs are the same.

    Yes, the same general vMotion requirements as explained above apply. A stand-alone host is treated exactly the same as a cluster without EVC enabled.
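
    When reasoning through cases like questions 1-5, it can help to query the actual EVC data. Here is a hedged pyVmomi sketch (assuming a connected "content" object as in the first sketch earlier on this page) that prints each cluster's configured EVC baseline next to each host's maximum supported one:

     from pyVmomi import vim

     def report_evc(content):
         """Print each cluster's EVC baseline and every host's max EVC capability."""
         view = content.viewManager.CreateContainerView(
             content.rootFolder, [vim.ClusterComputeResource], True)
         try:
             for cluster in view.view:
                 print(f"{cluster.name}: EVC mode = {cluster.summary.currentEVCModeKey or 'disabled'}")
                 for host in cluster.host:
                     print(f"  {host.name}: max EVC = {host.summary.maxEVCModeKey}")
         finally:
             view.DestroyView()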

  • Server Migration problem

    Hello

    I'm planning on migrating a number of servers between 2 clusters in the same data center. At the present time the two clusters do not have access to the same vMotion network, so vMotion is not possible; that is fine and will be resolved shortly. However, I also get another error message which reads as follows:

    The virtual machine requires hardware features that are not supported or disabled on the target host:

    * General incompatibilities

    If possible, use a cluster with Enhanced vMotion Compatibility (EVC) enabled; see KB article 1003212.


    CPUID details: incompatibility at level 0x1, register 'ecx'

    Host bits: 0110:0010:1001:1000:0010:0010:0000:0011

    Required: x0xx:x01x:1001:10xx:xxxx:xx1x:xxxx:x011

    I'm assuming it's because the processors on the hosts are slightly different between the two clusters; all hosts in the two clusters are IBM HS23 blades, but...

    Cluster 1 - Intel(R) Xeon CPU E5-2670 0 @ 2.60 GHz

    Cluster 2 - Intel(R) Xeon CPU E5-2650 v2 @ 2.60 GHz


    Can someone please advise if there is anything I can do to get rid of this error? If I power the VMs off first, I can migrate them and they work without problems; however, I really want to avoid downtime if possible.


    I'm assuming the solution may involve EVC (currently disabled on both clusters), but I would like some confidence that enabling it will cause no problems for connectivity etc.


    Thank you



    That's true: you need downtime to enable EVC on an existing cluster.

    But that's why I'm suggesting a different solution.

    (1) Create a new cluster and enable EVC on it (see the sketch after these steps).

    - Please find out which Intel EVC baseline would be appropriate for the two CPU models you plan to mix. [Let me give you a simple example: if I had two AMD hosts, one with an AMD Opteron Gen 1 CPU and the other with an AMD Opteron Gen 2 CPU, I would create an empty cluster with EVC enabled and the EVC baseline set to AMD Opteron Generation 1, power down all VMs running on both hosts, put the hosts into maintenance mode, drag them into the cluster, take them out of maintenance mode and begin to power on the virtual machines.]

    - You have these two processors in the hosts of your future cluster. I don't know offhand which Intel EVC baseline would be appropriate for them; please try to find that out first.

    Cluster 1 - Intel(R) Xeon CPU E5-2670 0 @ 2.60 GHz

    Cluster 2 - Intel(R) Xeon CPU E5-2650 v2 @ 2.60 GHz

    (2) Now free up one host each from clusters 1 and 2; these will be your initial test environment. Start bringing virtual machines into this third cluster. While doing this you may need cold migration from both clusters, but more likely you will be able to bring virtual machines from at least one of your clusters using vMotion.

    (3) If things work out well for you, plan to free up all the remaining hosts so they can join this cluster, and get rid of the leftover empty clusters.
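
    For step (1), creating the EVC-enabled cluster can also be scripted. A hedged pyVmomi sketch; note that the ClusterEVCManager API used here exists from vSphere 6.0 onwards, and the cluster name and EVC mode key below are placeholders (pick the baseline both CPU models support):

     from pyVim.task import WaitForTask
     from pyVmomi import vim

     def create_evc_cluster(datacenter, name, evc_mode_key):
         """Create an empty cluster, then configure the given EVC baseline on it."""
         cluster = datacenter.hostFolder.CreateClusterEx(name, vim.cluster.ConfigSpecEx())
         WaitForTask(cluster.EvcManager().ConfigureEvcMode_Task(evcModeKey=evc_mode_key))
         return cluster

     # e.g. create_evc_cluster(dc, "Cluster-EVC", "intel-sandybridge")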

  • Serious EVC question in a DRS-enabled cluster after adding Dell M630 blade servers

    Hi guys,

    I hope you will be able to help me with this strange issue I am experiencing right now.

    A couple of details first on our environment:

    • vCenter 5.0 U3
    • ESXi 5.0.0 - 2509828
    • 6-host DRS-enabled cluster
    • 4 hosts are Dell M620 blades with E5-26XX v2 processors
    • 2 hosts are Dell M630 blades with E5-26XX v2 processors
    • EVC mode is currently set to "Westmere"
    • All ESXi hosts are fully patched using VUM

    Before adding the new M630 blades to the cluster, there weren't any issues with DRS or EVC beyond resource constraints, and across the 4 M620 blades we were able to migrate virtual machines back and forth correctly.

    I was able to add the M630 blades to the cluster without problems, and no errors or warnings were issued. A week or so later, I noticed DRS was no longer migrating as desired on the cluster. On further inspection I noticed that quite a large number of virtual machines showed N/A beside EVC mode, and I was not able to migrate them to any host in the cluster.

    Thinking it could be a bug of sorts, I powered off all the virtual machines in the cluster and ran a script that resets the CPUID to default. In addition, as a precaution, I restarted the vCenter server and made sure the database was healthy. The hosts synchronize time with an NTP source and it matches perfectly across hosts.

    After powering off all the VMs, I rebooted the M630 servers and inspected the BIOS settings to ensure that virtualization and NX/XD are enabled. With all virtual machines powered off, I was able to change the EVC level to any available setting without warnings.

    When everything was powered back on, I noticed strange behavior in the cluster.

    Virtual machines running on the M620 blades show the right EVC level and I am able to migrate them to all servers in the cluster. Once they migrate to the M630 blades, the EVC status changes to N/A and I am unable to migrate them anywhere, not even to the other M630 blade in the cluster.

    The error I am shown during the validation step is:

    The operation is not allowed in the current connection state of the host. Host CPU is incompatible with the virtual machine's requirements at CPUID level 0x1, register 'ecx'. Host bits: 0000:0010:1001:1000:0010:0010:0000:0011. Required: x110:x01x:11x1:1xx0:xx11:xx1x:xxxx:xx11. Mismatch detected for these features: * General incompatibilities; see KB article 1993 for possible solutions.

    Digging deeper in my investigation, I came across this KB: 1034926

    I checked the M620 and M630 user guides, and the parameters mentioned in the KB should be enabled by default. Unfortunately, I have no way to check the BIOS settings, because I cannot put the M630 blades into maintenance mode without powering off the virtual machines running on them, and another maintenance window one day after the previous one is going to be frowned upon. In addition, the error is not the same as the one in the KB, and my goal is to collect as much information as possible before entering another maintenance window requiring downtime.

    I hope you will be able to provide new perspectives or open my eyes to things I may have missed.

    Thanks in advance.

    The processor in this system is an E5-2660 v3, not an E5-26xx v2.  ESXi 5.0 doesn't know how to use the FlexMigration features of this processor.  I think the first version of ESXi to support this processor is ESXi 5.1 U2.

    Unfortunately, I do not see a workaround for ESXi 5.0.

  • EVC Mode change

    Hello
    I'm downgrading the EVC mode to Westmere to accommodate an older generation of host we are adding to the cluster.

    To do this, I currently have two clusters created: one in production, with two hosts using Sandy Bridge.

    The second cluster has the vCenter server inside it and is at the Westmere generation.

    I am aware of the procedure found here: VMware KB: Enabling EVC on a cluster when vCenter Server is running in a virtual machine

    My question is, rather than vMotioning all the machines off each host: is it possible to shut down all virtual machines on a host, set the mode, disconnect the host, move it to the Westmere-enabled cluster, and then power on all the virtual machines?

    I know I'll still have to disconnect and reconnect my vCenter server, but I would like to avoid having a large number of machines to vMotion if I can find some downtime to stop them all and have them move with the host.

    Is this idea possible?

    So if I understand correctly, what you really want is for the VMs to move with the host machine; then when you add the host to the new cluster and power them on, they will run under the new EVC mode, and you will essentially have re-registered them in bulk - is that what you wanted?

    If so, I have done what you describe a bunch of times before (especially during migrations between vCenters where the host and all its virtual machines need to move). What I would recommend, however, keeps the VMs powered on: disconnect the host and then remove it from vCenter. Once it is removed from the cluster and vCenter, connect to the host through the thick client and power off all the virtual machines. Then add the host to vCenter, into the cluster you want, power on the VMs, and you will effectively have re-registered these VMs in bulk.

    Alternatively, you can power off the virtual machines and keep track of which ones you want to migrate, put the host in maintenance mode, move it to the cluster you want, take it out of maintenance mode, cold-migrate the virtual machines and then power them on. I think this option will be a little easier (a scripted sketch follows below).

    In any case, disconnecting a host and removing it from vCenter will take the virtual machines with it, which sounds like what you want to do...
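
    The maintenance-mode variant above can also be scripted. A hedged pyVmomi sketch (objects looked up as in the earlier sketches on this page; power off the VMs first, since no DRS evacuation happens here):

     from pyVim.task import WaitForTask

     def move_host_to_cluster(host, target_cluster):
         """Maintenance-mode a host, move it into another cluster, then bring it back out."""
         WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
         WaitForTask(target_cluster.MoveInto_Task(host=[host]))
         WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))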

  • Migrating virtual machines from XenServer

    Hello

    How can I migrate VMs from XenServer to VMware ESXi?
    VMware Converter is one option.
    Is there another option, maybe an offline conversion, e.g. for domain controllers?

    Thank you

    Dennis

    I think exporting the VM from XenServer and importing it into vSphere will not work due to the different virtual disk formats... but take a look at the solution proposed in this blog post: Did you restart?: convert XenServer XVA to VMDK of VMWare ESXi 5.1

    Regarding V2V, just follow the VMware Converter recommendations and everything should work: VMware KB: Best practices for using and troubleshooting VMware Converter

  • Migration from 5.0 to 5.5

    Hi all

    I have what is probably a very stupid question, but I don't have much experience with VMware, so please excuse me...  I recently inherited a vSphere 5.0 estate which consists of two Dell R710s connected to a Dell EqualLogic SAN via iSCSI.  I have been asked to replace the R710s, as they are old.  What I would like to do is buy a few new Dell R720 servers to replace them, while at the same time upgrading the environment to the latest version (5.5), but I'm not sure of the best upgrade path to take or how to migrate the virtual machines over to the new 5.5 servers.  So ultimately, I want to replace the old R710 / 5.0 servers with a new set of R720 / 5.5 servers that still use the same EqualLogic storage.  Any help or advice anyone can give me would be greatly appreciated.

    With the licenses and (shared) SAN storage you have, the migration should not require much downtime. After upgrading the storage system, you can either upgrade the vCenter Server or install a new one, depending on your environment (for example the supported SQL version, vSphere features such as vDS, permissions, etc.). Once the new vCenter Server is in place, you can then install the new ESXi hosts and add them to vCenter (in evaluation mode, because the CPU licenses are still tied to the old hosts). Depending on whether the new hosts can be added to the same cluster (EVC-compatible mode), you may even be able to vMotion the VMs to the new hosts without interruption of service; otherwise you will have to create a new cluster for the new hosts and migrate the virtual machines while powered off. Once the 'old' hosts are empty, remove them from vCenter - which will free up the CPU licenses - and assign the licenses to the new hosts. As a last step, you may consider upgrading the VMware Tools and virtual hardware (virtual machine compatibility) version on the virtual machines, which will likely require a reboot.

    ATTENTION: Do not upgrade the virtual HW version (virtual machine compatibility) of the vCenter Server VM to HW version 10 (5.5 compatibility). If you do, you will no longer be able to manage the vCenter Server VM using the Windows-based vSphere Client!

    André

  • EVC mode disabled / does vMotion work?

    We have 2 ESXi clusters in our vSphere installation; vSphere is version/build 5.5 U1, currently the most recent available.

    Cluster 1 has Intel Xeon E7540 CPUs.

    Cluster 2 is brand new and uses the recently released Intel Xeon E7-4860 v2 processors.

    EVC is currently disabled on both clusters.

    All the systems are listed in the VMware compatibility guide for our version of vSphere.

    I expected this setup to be able to live vMotion virtual machines from the old cluster to the new one without the use of EVC on either, but I assumed that migration back to cluster 1 would require cluster 2 to be set to the appropriate Intel EVC baseline (Nehalem in this case).

    It turns out that I am able to live-migrate VMs in both directions with no apparent problems (so far), with EVC mode disabled on both clusters.

    Is this expected behavior, or is ESXi/vCenter somehow misreading CPU compatibility (perhaps because the E7-4860 v2 is a very recent model), or is there some other issue that could lead to problems with these migrations?

    I expected this setup to be able to live vMotion virtual machines from the old cluster to the new one without the use of EVC on either, but I assumed that migration back to cluster 1 would require cluster 2 to be set to the appropriate Intel EVC baseline (Nehalem in this case).

    It turns out that I am able to live-migrate VMs in both directions with no apparent problems (so far), with EVC mode disabled on both clusters.

    Is this expected behavior, or is ESXi/vCenter somehow misreading CPU compatibility (perhaps because the E7-4860 v2 is a very recent model), or is there some other issue that could lead to problems with these migrations?

    This is normal, unless you have power-cycled the VMs on the new hosts.

    If a virtual machine was powered on on an old host and is migrated to the new cluster, this does not change the CPU feature level the virtual machine is currently running with. It will continue to run with the old CPU features on the new host, and it is therefore able to migrate between old and new hosts without problems or additional aids such as EVC.

    However, if you now stop a migrated virtual machine and power it on again on the new hosts, the virtual hardware will apply the new host's CPU capabilities (or the cluster's EVC level, but you are not applying any), and you will not be able to migrate the virtual machine back to the hosts of the old cluster.
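
    This captured-at-power-on baseline is visible through the API (vSphere 5.1 and later) as runtime.minRequiredEVCModeKey. A hedged pyVmomi sketch (connected "content" as in the earlier sketches on this page) to list what each running VM picked up:

     from pyVmomi import vim

     def show_vm_baselines(content):
         """List the EVC baseline each powered-on VM captured at power-on."""
         view = content.viewManager.CreateContainerView(
             content.rootFolder, [vim.VirtualMachine], True)
         try:
             for vm in view.view:
                 if vm.runtime.powerState == vim.VirtualMachine.PowerState.poweredOn:
                     print(f"{vm.name} on {vm.runtime.host.name}: {vm.runtime.minRequiredEVCModeKey}")
         finally:
             view.DestroyView()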
