Cannot migrate VMs on iSCSI SAN. Fails at the 10% mark. Can vmkping all addresses

Hello everyone

I cannot migrate virtual machines on the iSCSI SAN. The migration fails at the 10% mark, yet I can vmkping all addresses.

I recently installed a new VM environment and I am seeing the above. My configuration is:

2 x HP DL380, each with 1 CPU, 16 GB of RAM, and 500 GB of shared space on an iSCSI SAN.

The network is set up as follows:

Each host has 6 network ports that are load balanced and fault tolerant across 3 vSwitches. vSwitch0 uses 2 ports trunked to a Cisco 4510. On this switch I use VLAN tagging to reach servers that perform different functions, and it works fine, i.e. VLAN 5 is for management, VLAN 31 is for secure servers, VLAN 37 is for internal web servers.

vSwitch1 carries a vmkernel port for the iSCSI SAN connection. The switch ports are in VLAN 200, so no VLAN tagging is done there. We have separated our SAN network from our usual networks.

vSwitch2 is used for backup purposes and is set up in a similar manner to vSwitch0: the ports are trunked and backup traffic is VLAN tagged to reach the correct servers.

I am able to ping between the VLANs where allowed. I can also vmkping all of the SAN IP addresses of the two hosts.

What I can't do is live-migrate guests between the hosts. It always fails at 10% with the following error:

Migrate virtual machine iws3xx-pop1.mns.tlg.private: A general error occurred: The VMotion failed because the ESX hosts were not able to connect over the VMotion network. Please check your VMotion network settings and physical network configuration.

I looked in the vmkernel log and I see the following:

Oct 9 15:05:38 esxn1-pop1 vmkernel: <7>fn_scroll_back not implemented <7>fn_scroll_back not implemented <7>fn_scroll_back not implemented <7>fn_scroll_back not implemented 7:05:04:55.236 cpu2:6534)Migrate: vm 6535: 2094: Setting VMOTION info: ts = 1255097135856641 Source, src ip = <172.31.207.21>

Oct 9 15:05:38 esxn1-pop1 vmkernel: dest ip = <172.31.207.22> Dest wid = 6447 using SHARED swap

Oct 9 15:05:38 esxn1-pop1 vmkernel: 7:05:04:55.236 cpu2:6534)Tcpip_Vmk: 1107: Affinitizing 172.31.207.21 to world 6899, success

Oct 9 15:05:38 esxn1-pop1 vmkernel: 7:05:04:55.236 cpu2:6534)VMotion: 1807: 1255097135856641 S: Set ip address '172.31.207.21' worldlet affinity to send world ID 6899

Oct 9 15:06:53 esxn1-pop1 vmkernel: 7:05:06:10.234 cpu3:6899)WARNING: MigrateNet: 668: 1255097135856641 S: Connect to <172.31.207.22>:8000 failed: Timeout

Oct 9 15:06:53 esxn1-pop1 vmkernel: 7:05:06:10.234 cpu3:6899)WARNING: Migrate: 295: 1255097135856641 S: Failed: The ESX hosts failed to connect over the VMotion network (0xbad010d) @0x0

Oct 9 15:06:53 esxn1-pop1 vmkernel: 7:05:06:10.245 cpu1:6538)WARNING: Migrate: 3267: 1255097135856641 S: Migration considered a failure by the VMX. It is most likely a timeout, but check the VMX log for the true error.

The vmx log says:

Oct 09 16:07:38.856: vmx| [msg.checkpoint.migration.nodata] The VMotion failed because the destination ESX host did not receive any data from the source ESX host on the VMotion network. Please check your VMotion network settings and physical network configuration and ensure that they are correct.
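Based on the log lines above, vmkping only proves ICMP reachability on the vmkernel interface; the failing step is a TCP connection to port 8000 on the destination host. A minimal troubleshooting sketch from the source host's service console, assuming netcat is available there (it may not be on older ESX builds):

```shell
# ICMP over the vmkernel stack -- this already works per the post above
vmkping 172.31.207.22

# The migration itself needs TCP 8000 between the hosts; probe it with
# netcat (its presence on the service console is an assumption)
nc -z -w 5 172.31.207.22 8000 && echo "vmotion port reachable" || echo "vmotion port blocked"
```

A blocked port here usually points at an ACL or firewall between the vMotion VLANs rather than at the vmkernel configuration itself.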

Can anyone help with this, please?

No, iSCSI traffic will only work if the vmkernel and service console ports are on the same vSwitch, and possibly on the same VLAN, depending on how your VLANs are set up.

Tags: VMware

Similar Questions

  • Cannot migrate VM between ESXi 5.5 versions: the product version on the destination host does not support one or more CPU features

    We receive an error message that I am trying to make sense of. I understand the notions of CPU compatibility, CPUID, masking and so on quite well, after working on a competitor's hypervisor. However, this error seems wrong (or needs better wording) based on my understanding:

    Here is the message:

    ----

    A general error occurred: The product version on the destination host does not support one or more CPU features currently in use by the virtual machine.

    Problematic CPUID level 0x1 'ecx' register features are indicated with a '1' bit: x00x:xxx0:xx0x:x000:x011:xx00:00xx:11xx

    ----


    This happened during a migration between two ESXi hosts with identical physical processors - in this case, Westmere X5650.


    The source host is on ESXi build 2068190 (5.5). The destination host is on ESXi build 1474528.


    The bits it seems to complain about are:


    DTES64

    Monitor/MWait

    Cx16

    PDCM

    The punch line seems to be: "the product version of the destination host". However, I have trouble believing that support for these features was added to ESXi between the two builds. If it was, VMware has certainly been silent about it.

    For what it's worth, we explicitly set the CPUID masks in our virtual machines (no, EVC is not an option for us at this time). This is the mask:

    CPUID.1.EAX = "00000000000000100000011001010001"

    CPUID.1.ECX = "00000010100110001110001000111111"

    CPUID.1.EDX = "10001111111010111111101111111111"

    CPUID.80000001.ECX = "00000000000000000000000000000001"

    CPUID.80000001.EDX = "00101000000100000000100000000000"

    cpuid.d.EAX = "00000000000000000000000000000000"

    cpuid.d.ECX = "00000000000000000000000000000000"

    cpuid.d.EDX = "00000000000000000000000000000000"

    Note that the bits the message complains about (2, 3, 14, 15) *are* in fact forced to '1' in our cpuid.1.ecx mask. That means (if I understand correctly) that ESXi will not start the virtual machine unless the host processor supports the feature. And in our case, the virtual machine happily starts on nodes with both builds. It simply will not migrate between them.
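    As a sanity check of the bit numbering (a small shell sketch, not an ESXi tool; bit 0 is taken as the rightmost character of the mask string), the cpuid.1.ecx mask above can be probed like this:

```shell
mask='00000010100110001110001000111111'    # cpuid.1.ecx from the .vmx above
for bit in 2 3 14 15; do
  pos=$(( ${#mask} - bit ))                # 1-indexed position from the left
  echo "ecx bit $bit = $(printf '%s' "$mask" | cut -c "$pos")"
done
# each line prints "... = 1"
```

    All four bits print 1, which matches the observation that the mask forces them on.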

    So, long story short: is this an ESXi bug? Is the error message misleading? Am I misunderstanding something?

    Thank you


    Matt

    To use the VM feature masks in this way, you must replace all the 1s in your masks with dashes ('-'):

    CPUID.1.EAX = "00000000000000-000000--00-0-000-"

    CPUID.1.ECX = "000000-0-00--000---000-000------"

    CPUID.1.EDX = "-000-------0-0-------0----------"

    CPUID.80000001.ECX = "0000000000000000000000000000000-"

    CPUID.80000001.EDX = "00-0-000000-00000000-00000000000"

    cpuid.d.EAX = "00000000000000000000000000000000".

    cpuid.d.ECX = "00000000000000000000000000000000".

    cpuid.d.EDX = "00000000000000000000000000000000".

    The zeros disable the features that are not available on your Westmere hosts, and the dashes leave the other features alone.  The problem with your original masks was that they forced on certain features that would normally have been off.
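    The conversion described here is mechanical: every forced-on '1' becomes a "leave it alone" dash. A one-liner sketch (plain shell, applied to the cpuid.1.ecx mask from the question):

```shell
mask='00000010100110001110001000111111'   # original "force on" mask
echo "$mask" | tr '1' '-'                 # turn forced bits into don't-cares
# prints 000000-0-00--000---000-000------
```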

  • Live iSCSI SAN upgrade on a production network (Dell MD3000i, PowerConnect & ESXi 4)

    Hi all

    At the moment, my 2 ESXi 4 hosts connect directly to my Dell MD3000i iSCSI SAN with no switch in between; all of the virtual machines live on the SAN and run on both hosts.

    If I want to put a switch between the two and add one more iSCSI SAN, so that the two SANs can be accessed/shared by 3 ESXi 4 hosts in total, how should I approach this?

    any idea or guideline would be greatly appreciated.

    Do I have to plan downtime, or can I re-cable and change the IPs on the existing iSCSI SAN manually, one by one, without damaging/altering the operation of the virtual machines on the SAN? Or do I have to stop the virtual machines, terminate the iSCSI sessions manually, and then reconnect the hosts to the SAN?

    Thank you

    AWT

    The standard installation on the MD3000i is for one iSCSI port on each controller to be on one subnet and one on each controller to be on another.

    There have been people running successfully on one subnet, but problems have been reported too.   I wish we were further along on our vSphere upgrade; I'm not yet at a point where I can test things like VMotion.

  • Config network initial ESX with iSCSI SAN

    Hi all

    I want to install 2 ESX 3.5 servers which will be connected to an EqualLogic iSCSI SAN.

    The SAN is on a VLAN, 10.x.x.200 with a 255.255.255.224 mask.  This VLAN is not routable, has no DNS servers, etc.

    What I am trying to understand is: for the initial setup of ESX, when I set the network config (console), should I enter the IP address for that VLAN? Example A:

    IP address: 10.x.x.201

    Gateway: 255.255.255.224

    Primary DNS: blank

    Secondary DNS: blank

    Or, for example B, should I use our 'public' addressing:

    IP Address: 129.x.x.201

    Gateway: 255.255.255.0

    Primary DNS:129.x.x.1

    Secondary DNS: 129.x.x.2

    I know that with the VIC I can add vSwitches etc. later, but at least for the initial installation I want the configuration that provides the smoothest operation. Thanks for any ideas you can provide!

    Chad

    Hello and welcome to the forums.

    What I am trying to understand is: for the initial setup of ESX, when I set the network config (console), should I enter the IP address for that VLAN? Example A:

    Use example B (129.x.x.x) for the Service Console (management functions), and after the system is up, add another vSwitch used to connect to the SAN VLAN.
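    The "add another vSwitch later" step can be sketched from the ESX 3.5 service console; the vSwitch/port group names, the vmnic number, and the 10.x.x.201 placeholder address are assumptions for this environment, not verified values:

```shell
esxcfg-vswitch -a vSwitch1                  # create a new vSwitch for the SAN VLAN
esxcfg-vswitch -L vmnic2 vSwitch1           # uplink a NIC cabled into the SAN VLAN
esxcfg-vswitch -A "iSCSI" vSwitch1          # add a port group named iSCSI
esxcfg-vmknic -a -i 10.x.x.201 -n 255.255.255.224 "iSCSI"   # vmkernel port on the SAN subnet
```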

    Good luck!

  • iSCSI fail to add the data store - hostdatastoresystem.queryvmfsdatastorecreateoptions

    Hey,

    All of my iSCSI LUNs have a problem being mounted by my hosts. This is a new host with a new iSCSI SAN. I tested it in dev with 5.5 and all went well. The only difference is the hardware that dev and production run on. Both use the software iSCSI adapter, but the production environment has an Intel PRO/1000 with TOE support. Not sure if this is important enough to warrant mention, but I'd rather mention it up front.

    The setup is as follows:

    The iSCSI target is on the same subnet.

    The software iSCSI adapter has two NICs, each on separate vSwitches dedicated to the software iSCSI adapter. Both show "Compliant" and "Active".

    The paths are set to Round Robin (VMware) for multipathing.

    hanging-iscsi-3.png

    hanging-iscsi.png

    hanging-iscsi-2.png

    The fix was to remove all the LUNs and recreate them with a block size of 512.

  • Pavilion dv6t-7000: BIOS failure (Caps Lock flashes twice a second)

    My laptop model is a Pavilion dv6t-7000. In recent days it has had a problem. When I turned off my laptop, the next day I found it would not start: a black screen, the Caps Lock light blinking twice, and the F12 (Wi-Fi) key red. Nothing happens. I searched and learned that this indicates a BIOS failure. I tried a few things and nothing worked. At last, when I removed the CMOS battery and put it back, after 1 or 2 minutes it started fine. Then again I turned off the computer, and the following day the same problem returned. I kept working around it by removing and reinserting the CMOS battery for 4 or 5 days, but today this trick does not work. I have tried many methods, like Windows key + B, holding down the power button for 1 minute, flashing the UEFI, etc., to fix the BIOS failure; nothing worked at all. What is the problem?  Is this really a BIOS failure? How can I solve it? Is it a dangerous problem? Is there a chance my laptop is completely dead?

    Hello
    Beep codes or LED lights flashing at system start can indicate a hardware fault or a BIOS problem. You say you have tried all the troubleshooting steps.

    I would like you to run the hardware check to make sure everything is working. Please follow the guide below for instructions and report back:

    http://support.HP.com/us-en/document/c03467259#AbT1

    Test using the UEFI diagnostics and run the Quick Test.

    Thank you
    James

  • Planning the migration from DAS to iSCSI SAN

    Hello

    We have a physical server running ESX 3.5 with 6 VMs (1 Windows SBS 2003 and 5 Linux) on local SAS disks.

    It's a Dell with 2 x quad-core clocked at 2.66 GHz.

    Now we are adding a second physical server (Dell, 2 x quad-core at 2.5 GHz) and an iSCSI SAN. We would like to install ESXi on the second server and move all VMs onto iSCSI storage, so that if the first server goes down we can manually bring up the other within a few minutes to run all the virtual machines (we are a small company and at present cannot afford the Enterprise/VMotion license).

    I am asking for advice because we would also like to virtualize an old physical Win2003 terminal server (with 10-15 users, mainly working with MS Office applications) using the second new physical server.

    This is the first time we have installed a SAN (it's a Dell MD3000i with 10 x 146 GB SAS disks, plus 5 x 1 TB SATA disks for backup and snapshots), and we want to get the best performance out of it. Reading other posts, I understand that the disks are very important for a virtualized terminal server and for getting good performance out of it.

    So, with this type of SAN and our workload, should we define a single RAID 5 disk group with 9 disks and 1 hot spare, with one large virtual disk on it holding one or more VMFS datastores?

    or is it better to create 2 disk groups:

    - 1 with two disks in RAID 1 for the terminal server VM

    - 1 with seven disks in RAID 5 for all the other VMs

    with a global hot spare disk?

    Any other general suggestions for the migration we will do?

    Thanks in advance

    Guido

    Yes, that's what I would do.

    -KjB

  • iSCSI SAN design review

    Hi all

    I intend to implement shared storage in our VMware infrastructure, and we'll buy a Dell or NetApp iSCSI SAN.

    Could someone please review the attached SAN design? Please note that we do not have vCenter in the environment. We also have not yet finalized the iSCSI initiator choice, whether to use a software initiator or an iSCSI HBA. Is it worth investing in a hardware initiator? Please suggest.

    Kind regards

    Nithin

    To use several iSCSI vmks, you cannot assign both network adapters to the iSCSI vmks as active.

    You can only have one active NIC per vmk; the other network adapter on the vSwitch must be moved down to "Unused". Then you can port-bind the vmks to the software iSCSI HBA adapter. It is the only way to achieve real multipathing, load balancing and failover with multiple iSCSI vmks.

    Here's a KB on how to do it: http://kb.vmware.com/kb/2045040
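    The port binding the KB describes can be sketched from the host's CLI on ESXi 5.x; vmhba33, vmk1 and vmk2 are assumed names here, so check yours with the list commands first:

```shell
esxcli iscsi adapter list                            # find the software iSCSI adapter (e.g. vmhba33)
esxcli iscsi networkportal add -A vmhba33 -n vmk1    # bind the first iSCSI vmkernel port
esxcli iscsi networkportal add -A vmhba33 -n vmk2    # bind the second iSCSI vmkernel port
esxcli iscsi networkportal list -A vmhba33           # verify both vmks are bound
```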

    But if you Google it, there are quite a few detailed tutorials with pictures, etc.

    Tim

  • FC to iSCSI SAN using ESXi 5.1

    Hi all

    We have 4 hosts with ESX 5.1 installed and use vCenter to manage them with an Enterprise license. The hosts began life as 3.5 and we have upgraded them since then. They are due to be replaced in 6 months.

    A key element is that the VMKernel management network port uses two physical NICs for redundancy.

    All servers are multihomed, with FC HBAs on an FC fabric. We are replacing the FC SAN with an iSCSI-only SAN.

    My question is: what is the best way to implement this? I created a vSwitch with two VMKernel ports on it. Each VMKernel port is set up entirely separately, with its own active NIC and the other as failover. This is all done on all hosts.

    However, I was planning a dedicated iSCSI VLAN with just the SAN on it. This seems to be a problem, because I cannot put a specific default gateway on the port groups: since they are VMKernel ports, they will only use the default gateway that I set on the management network, which will not work!

    vSwitch0

    Management

    vmk1:192.168.200.102

    vSwitch6

    iSCSI2

    vmk3: 192.168.30.162

    iSCSI

    vmk2: 192.168.30.161

    Is there only a single routing table shared by all VMKernel ports? If so, how can I deal with this? I'd rather not have iSCSI on my routed network, but according to online guides I get this far and cannot route to iSCSI interfaces I set up outside their own subnet. Is that normal?

    Any advice much appreciated!

    Thanks - Steve

    The highly recommended best practice is to configure a separate VLAN with its own IP network used only for iSCSI, possibly even with dedicated layer-2 switches used only for iSCSI, depending on the type and quality of the switches.

    stevehoot wrote:

    Just to be 100% sure: what I have done is correct, and ESXi does not allow an iSCSI VMK to have its own default gateway?

    It is not really that the storage network cannot have a default gateway, but rather that the ESXi operating system (vmkernel) has one IP stack with an internal routing table shared across the different functions such as management, vMotion, storage and others.

    This means that you have one common default gateway per host, not one per function, and in practice that "common" default gateway will generally be on the management network.
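    The single vmkernel routing table described here can be inspected from the host's CLI, and on 5.1 and later a static route can be added per destination subnet (the subnet and gateway below are placeholders, and non-routed iSCSI remains the recommended design):

```shell
esxcfg-route -l                       # list the shared vmkernel routing table
esxcli network ip route ipv4 list     # same information via esxcli (5.1+)
# A static route is per-host and shared by every vmk port, e.g.:
# esxcli network ip route ipv4 add -n 192.168.40.0/24 -g 192.168.30.1
```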

  • ESXi 4.1 or ESXi 5.0 with iSCSI SAN

    Hello

    I'm trying to establish an environment that is / will be entirely iSCSI SAN on the backend.  I know from a lot of reading that the best practice is to configure the ESXi host with:

    VMKernel > vmnic > physical network adapter > switch port > port on SAN (1:1 config - no teaming of vmnics)

    Environment:

    ESXi 4.1 & 5.0

    Force10 switches

    EqualLogic iSCSI SAN (PS4000 & 6000)

    Here's my real question (playing the Devil's advocate):

    Why shouldn't I team 3-4 vmnics on a vSwitch with multiple VMKernel ports?

    Given my environment, can someone point me to a technical document that explains what will / could happen if I set up the ESXi environment that way?

    Thank you

    BDAboy22

    So, basically, we want the storage stack to take full responsibility for failover instead of the network, rather than having the network and the SAN both trying to resolve the failure (which could lead to longer session recovery times).   Do I understand that correctly?

    Yes, that's right: we want the vSphere storage stack to manage path selection and failover, and by connecting each vmknic directly to a physical vmnic we sort of "simulate" how FC cards behave and also bypass the network failover mechanisms. As you probably know, you also need to "bind" the software iSCSI adapter to the two vmknics (GUI in 5.0, esxcli in 4.x).
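    The 4.x binding step mentioned above can be sketched as follows; vmhba33, vmk1 and vmk2 are assumed names for this environment:

```shell
esxcli swiscsi nic add -n vmk1 -d vmhba33    # bind the first vmkernel NIC to the software iSCSI HBA
esxcli swiscsi nic add -n vmk2 -d vmhba33    # bind the second vmkernel NIC
esxcli swiscsi nic list -d vmhba33           # verify the binding
```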

  • iSCSI SAN Connectivity

    I'm kinda new to this, so yes, please be gentle.

    We are migrating some of our less critical servers to vSphere. Because the servers are at a colo, we'll install ESXi on our server and use the colo-hosted storage via iSCSI SAN. The colo provided us with a block of IP addresses that we can use for SAN connectivity.

    SAN connectivity is on its own dedicated NIC (1, for testing purposes) and physical switch.

    Management network
    192.168.72/24

    SAN IP block
    172.26.11.0/26
    172.26.11.1 - GW

    The target IPs
    172.31.3.105
    172.31.3.109

    I created a virtual switch for iSCSI and tied a physical network adapter to it. I then added the software iSCSI adapter, loaded the target in dynamic discovery, and bound the NIC to the software iSCSI adapter.

    I then added a route for 172.31.3.0/24 to go via 172.26.11.1.

    When I rescan for new storage, I just get a blank. If I go back into the software adapter, the targets are now listed on the static discovery tab. The colo is saying that their HDS does not see any requests.

    So I built a Windows virtual machine loaded on this host (using an Openfiler iSCSI target on the management network) and installed the Microsoft iSCSI initiator. Using that software, I am able to connect to the colo SAN network from inside the virtual machine.

    What am I missing? Why can I not connect the host to the SAN network? Any help will be much appreciated.

    Bob

    http://pubs.VMware.com/vSphere-50/topic/com.VMware.vSphere.storage.doc_50/GUID-0D31125F-DC9D-475B-BC3D-A3E131251642.html

    (Physical network adapters must be on the same subnet as the iSCSI storage system that they connect to.)
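    The quoted rule can be checked numerically: with a vmk address from the colo's 172.26.11.0/26 block and targets at 172.31.3.x, the host and the targets are on different subnets, which is exactly the unsupported case. A small sketch (plain shell, not an ESXi tool) that applies a dotted-quad netmask to two addresses:

```shell
# Do two IPv4 addresses share a subnet under the given netmask?
same_subnet() {
  A=$1 B=$2 M=$3
  IFS=.
  set -- $A; a1=$1 a2=$2 a3=$3 a4=$4
  set -- $B; b1=$1 b2=$2 b3=$3 b4=$4
  set -- $M; m1=$1 m2=$2 m3=$3 m4=$4
  unset IFS
  [ $((a1 & m1)) -eq $((b1 & m1)) ] && [ $((a2 & m2)) -eq $((b2 & m2)) ] &&
  [ $((a3 & m3)) -eq $((b3 & m3)) ] && [ $((a4 & m4)) -eq $((b4 & m4)) ]
}

# vmk address from the 172.26.11.0/26 block vs. the first target
same_subnet 172.26.11.10 172.31.3.105 255.255.255.192 \
  && echo "same subnet" || echo "different subnets"
# prints "different subnets"
```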

    / Rubeck

  • How can I move virtual machines from a local datastore to a new iSCSI SAN?

    Hello guys,

    Now that we have a new iSCSI SAN in place, how can I transfer all the virtual machines from the local datastore to the new SAN? Can I stop the VMs on the host and then copy the datastore files with all the info onto the new volume?

    Thank you

    Pesinet

    Alternatively, you can use VMware Converter, or cold migration / Storage VMotion if you have the appropriate licenses.
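    The "stop the VM and copy the files" approach from the question can be sketched from a classic ESX service console; the datastore and VM names below are placeholders, not real paths from this environment:

```shell
vmware-cmd /vmfs/volumes/local-ds/myvm/myvm.vmx stop        # power the VM off first
mkdir /vmfs/volumes/san-ds/myvm
vmkfstools -i /vmfs/volumes/local-ds/myvm/myvm.vmdk \
           /vmfs/volumes/san-ds/myvm/myvm.vmdk              # clone the disk to the SAN datastore
cp /vmfs/volumes/local-ds/myvm/*.vmx /vmfs/volumes/san-ds/myvm/   # copy the config file
vmware-cmd -s register /vmfs/volumes/san-ds/myvm/myvm.vmx   # re-register from the new location
```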

    Duncan

    VMware communities user moderator | VCP | VCDX

    -

  • How to connect to iSCSI SAN without compromising security

    Hello:

    How do we enable server OSes (VMs or physical hosts) to connect to and mount iSCSI LUNs without compromising the security of our ESX hosts?  We have a few Microsoft servers that need to use iSCSI initiators to mount LUNs for MSCS.   We cannot use the ESX initiators because VMware doesn't support iSCSI virtual storage with MSCS.  We have already read all the documentation and spoken with VMware support, so we know that our only option is to use the iSCSI initiators in the Microsoft servers to connect to the LUNs.

    Our concern is security.  If we let the servers use their iSCSI initiators to connect to the SAN, then won't they also have access to our service consoles and vmkernels via the iSCSI network?  ESX requires a service console port and a vmkernel port on the iSCSI network for each ESX box that you want to use the ESX initiator on.  We struggle to understand how to connect any machine (virtual or physical) to the iSCSI network to mount LUNs without exposing our service consoles and vmkernels.  I know that the best practice is to remove VMs from this network for this exact reason, but of course many organizations also have physical servers (UNIX, Windows) that need to access their iSCSI SAN.  How do people handle this?  How much of a security problem is it?  Is there a way to secure the service console and vmkernel ports while allowing non-ESX hosts access to the SAN?  I know that many of you face this exact situation in your organizations; please help.  Obviously, it cannot be assumed that nobody uses their iSCSI SAN for anything except the ESX hosts.  Thank you very much.

    James

    Hello

    Check out this blog

    Use of firewalls is certainly a step in the right direction for that. If you can't have separate iSCSI networks, then you will need to isolate non-ESX/VCB iSCSI nodes using other mechanisms. I would certainly opt for firewalls, or reduce the redundancy to just 2 NICs per network instead of 4 to a single network.

    Does anyone have any other suggestions? Surely many ESX users share their iSCSI SAN with a lot of different systems and operating systems. Thanks again.

    They do, but they do not secure their iSCSI networks between their ESX VMs and other physical systems. You have asked a very important question: how to connect to an iSCSI SAN without compromising security. The options currently are:

    1. Physically isolate

    2. Isolate using firewall

    Given that ESX speaks in clear text and does not support IPsec for iSCSI, you have very limited options available. The firewall you use and the iSCSI load you send through it will determine whether there is any latency. Yes, it is an extra cost, but so is an independent network of switches/ports/etc.

    Best regards
    Edward L. Haletky
    VMware communities user moderator, VMware vExpert 2009
    ====
    Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
    Blue Gears and SearchVMware Pro articles - Top virtualization security links - Virtualization Security Round Table Podcast

  • Could not start Automatic Updates. Getting an error that Windows cannot check for updates, D0000034, after a failed Java update

    OP: Action Center... PC problem: 1 important message

    After a failed attempt to install the Java update, I received this message from Action Center... Windows cannot check for updates. So I went to the dialog box and retried checking for Windows updates, which failed again with the code D0000034. I also did a system restore to a point before the attempted Java update installation, but it was no help. Can you help me?

    Hello Irfan,

    Thanks again for your response. Since your last message, I have discovered through my system status (Control Panel) that my product ID and Windows 7 activation status are listed as not available. So I followed the instructions from Microsoft help and support to activate.  Once again, I got the error code D0000034, just as with the update problem. I was given the option to phone, but received no phone number. I bought the Windows 7 DVD and had it installed on a PC that was custom built for me.  On the Microsoft help and support web site, I was asked for the product ID, which according to them is a 20-digit number.  The only thing I have is the product key, which is 25 alphanumeric characters.  I tried entering that and it does not pass.  This is why I say I cannot pay for help, since I do not have the product ID.  Maybe this isn't your area of expertise, Irfan, but I was hoping... I reinstalled Windows last April when I got in trouble and probably did not re-activate. What do you think?  If you can get me back on track, I would be very grateful. Thank you once again for being a big help!

    wandrinstar

    Please ignore the above request. I have since re-activated Windows, and the problem was solved with telephone technical assistance. Thank you again for your help, Irfan. Also thanks to all the people answering the phones at Microsoft. The experience I had was truly a pleasure; I wasn't expecting such a courteous and professional attitude.

    Keep up the good work,

    wandrinstar

  • Design of network for VMware/iSCSI SAN

    I am currently redesigning our business network to account for the shift from stand-alone servers to an EqualLogic/VMware environment. We will use iSCSI to connect the virtual machines to the SAN.

    My question is this: what should a proper network design look like for this kind of deployment? I've listed my current hardware below; this is what I have to work with. Given that I can't set the MTU value per port / per VLAN on the 3750/2960, should I dedicate a switch to iSCSI?

    Equipment available:

    Core switch/router:

    WS-C3750G-24TS-1U

    Stacked switches:

    WS-C2960S-48TS-L access switch

    WS-C2960S-48TS-L access switch

    WS-C2960S-48LPS-L voice switch

    WS-C2960S-24TS-L server switch (possible dedicated iSCSI)

    Unused stacked switches:

    2 x Dell PowerConnect 6224

    Servers:

    Dell R710 with quad NIC

    Dell R610 with quad NIC

    Storage:

    EqualLogic PS4100 with two controllers, 2 x 1 GbE each = 4 GbE for iSCSI

    Best regards

    Markus

    Given that the EqualLogic has two controllers, you will have to use a pair of cross-connected switches for redundancy. You are going to need some NICs as well: 1 for the SC, 1 for vMotion, 2 for iSCSI (cross-connected), and probably 2 for production traffic.

    Sent from the Cisco Technical Support iPad App
