Is Storage vMotion possible with ESXi Standard edition or not?

Hi all

We have two VMware deployments, each with ESXi Standard edition licenses installed. One site has vCenter Server Foundation edition while the other has vCenter Server Standard edition.

The problem is that the site with vCenter Standard and vSphere 5 Standard doesn't allow Storage vMotion (see image below).

No Storage VMotion.jpg

While, on the other hand, the site with vSphere 5 Standard and vCenter Foundation allows Storage vMotion (see image below).

Storage VMotion.jpg

How can the same vSphere Standard edition have different features in different places?

I had logged a support call and found the solution... vSphere 5.0 and 5.1 provide quite different feature sets with the same Standard license. Take a look at the links below for the options available with vSphere 5.0 and 5.1.

1. Compare vSphere 5.0 editions

Compare the editions of VMware vSphere for cloud computing, server and data center virtualization

2. Compare vSphere 5.1 editions

Compare VMware vSphere editions: managed virtualization & cloud computing

As a test, I used the same Standard edition key on a 5.1 server, and it worked fine, with the Storage vMotion option available :-)

Tags: VMware

Similar Questions

  • Storage vMotion test for 5.5 certification: Storage vMotion fails with "Could not lock file" on the virtual machine during Storage vMotion

    As far as I can tell, this is a known issue from the ESXi 5.5 Update 2 release notes; does anyone know of a solution yet? This is an excerpt from the release notes:

    • Attempts to perform live storage vMotion of virtual machines with RDM disks can fail
      Storage vMotion of virtual machines with RDM disks can fail, and such virtual machines may be left in a powered-off state. Attempts to power on the virtual machine fail with the following error:

      Cannot lock the file

      Workaround: None.

    For the record, VMware support said the fix will be in Update 3; it is a known intermittent problem. The migration finally succeeded for me on the 9th attempt.

  • Storage vMotion directly between ESXi hosts

    Hi all

    This is my first time on the boards; I hope this is the right place for this topic.

    I'm having problems with Storage vMotion. It's not a setup we would normally use, but it is a necessity in the current scenario.

    We have a customer site with 2 x ESXi 5.1 hosts. There is no budget for a SAN with HBAs installed, and this is only a one-off requirement: I have to migrate virtual machines from one host to the other, but with no external storage. I was hoping Storage vMotion could go directly from one local datastore to another.

    I enabled software iSCSI on both ESXi hosts.

    Created a separate vSwitch with a VMkernel port on both physical hosts.

    The two VMkernel ports are on a separate subnet, 172.16.1.*, and the production vSwitch is on 192.168.5.*.

    The NICs allocated for the 172.16.1.* network are both connected to the same physical switch.

    I can ping the VMkernel addresses, and vmkping between the VMkernel ports works.

    Rescanning for storage shows nothing.

    I have used software iSCSI before, but it was a long time ago, so I'm probably rusty. Dynamic discovery normally points at the SAN to pick up storage, but here it only finds the local storage on each host itself.

    In this scenario, I entered the IP address of the ESX2 VMkernel port (on 172.16.1.*) on ESX1, since I have no external storage. I suspect that is where I'm going wrong, but I don't know what else to do to see the local datastores.

    Any ideas to try would be very appreciated.

    Thank you

    Sorted by using the 5.1 Shared-Nothing vMotion feature.
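
    For reference, here is a minimal PowerCLI sketch of that kind of combined host + datastore move. All names are hypothetical, and it assumes a vCenter/PowerCLI version that supports shared-nothing vMotion for the VM's power state:

    # Hypothetical names; both hosts and their local datastores are managed by the same vCenter.
    $vm = Get-VM -Name "FileServer01"
    $destHost = Get-VMHost -Name "esxi02.lab.local"
    $destDatastore = Get-Datastore -Name "esxi02-local"

    # Move compute and storage in a single operation (shared-nothing vMotion).
    Move-VM -VM $vm -Destination $destHost -Datastore $destDatastore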

  • Storage vMotion fails with the error below

    I'm trying to run a Storage vMotion of a virtual machine. It fails with the error below.

    This method is disabled by 'moref = 17705'

    Method #1 - manually remove the stale entry from the VCDB.

    • Stop the vCenter Server service.
    • Connect to the SQL instance.
    • Run the following SQL against the VCDB.
    • SELECT * FROM VPX_VM WHERE NAME LIKE '%VM_Name%';
    • Note the VM ID in the first column.
    • Validate the stale entry:

    SELECT * FROM VPX_DISABLED_METHODS WHERE ENTITY_MO_ID_VAL = 'vm-ID';

    • Remove the stale entry:

    DELETE FROM VPX_DISABLED_METHODS WHERE ENTITY_MO_ID_VAL = 'vm-ID';

    • Start the vCenter Server service.
    • Retry Storage vMotion.

    Method #2 - if this does not work, re-register the *.vmx file, which will fix the problem (a rough PowerCLI equivalent is sketched after the steps below).

    Note: this requires downtime for the server.

    • Power off the virtual machine.
    • Unregister the VM from the inventory.
    • Browse to the datastore path and re-register the VMX.
    • Power on the virtual machine.
    • Retry Storage vMotion.
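
    A rough PowerCLI equivalent of those manual steps (just a sketch; the VM name is hypothetical and the VMX path is read from the VM itself):

    # Power off and unregister the VM, keeping its files on the datastore.
    $vm = Get-VM -Name "ProblemVM"
    Stop-VM -VM $vm -Confirm:$false
    $vmxPath = $vm.ExtensionData.Config.Files.VmPathName   # e.g. "[datastore1] ProblemVM/ProblemVM.vmx"
    $esx = $vm.VMHost
    Remove-VM -VM $vm -Confirm:$false                       # without -DeletePermanently this only unregisters

    # Re-register the VMX, power the VM back on, then retry Storage vMotion.
    $vm = New-VM -VMFilePath $vmxPath -VMHost $esx
    Start-VM -VM $vm
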
  • Advice on external storage for use with ESXi 3.5 (free)

    I'm running ESXi 3.5 (free) on a server with internal storage. I would like to buy external storage and move my VMs to it. The virtual machines I have running on the server are not used very often, so I don't want to spend a lot on this. Can anyone recommend an external SAN/NAS storage device that would work with ESXi 3.5 (free)? I'm looking for maybe 500 GB - 1 TB connected to the network via Ethernet.

    The lower-priced Iomega devices (the ~$300 model) are marketed for workgroup use, so they are not really high performance. If you found the performance unacceptable, you'd have no way to improve the system.

    I would try using your existing NAS's NFS features first. If that doesn't work out, you can always go for something with more oomph.

    If it is a test lab, the Iomega should be fine. If it is an environment whose size and performance requirements may grow over time, you might want to try building a more powerful Openfiler SAN/NAS.

    First try your existing infrastructure.

  • What happens to ESXi logs when the vMA is not available?

    I'm running vMA 4.1 and have set it up as a syslog server for my ESXi 4.1 hosts.

    Does anyone know what happens to the ESXi logs when the vMA is not available, for example during a reboot of the vMA?

    Hello.

    Logs would continue to be written locally while the vMA was unavailable. Make sure that you also set Syslog.Local.DatastorePath, so that you don't lose anything. See KB 1033696 for more information.
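
    If it helps, a minimal PowerCLI sketch for checking and setting that option (the host name and datastore path are hypothetical; this assumes a PowerCLI version with the Get-/Set-AdvancedSetting cmdlets, and the exact value format is described in the KB):

    # Point the host's local log path at a datastore so nothing is lost while the syslog server is down.
    $esx = Get-VMHost -Name "esx01.lab.local"
    Get-AdvancedSetting -Entity $esx -Name "Syslog.Local.DatastorePath" |
        Set-AdvancedSetting -Value "[datastore1] /systemlogs" -Confirm:$false   # example value; see KB 1033696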

    Good luck!

  • Storage vMotion completes in zero seconds

    Having a problem where, after starting a Storage vMotion on a virtual machine, it completes in 0 seconds. I've tried several times with the same results:

    http://monosnap.com/image/503fc909e4b0ca82e9f15382.png

    Any suggestions would be greatly appreciated. I'm moving from one storage array to another; other machines move just fine, but for a few machines I have to run the task several times before the migration finally starts.

    You're not trying to go from thick to thin on the same datastore, are you?

    Sent from my iPhone

  • vMotion and Storage vMotion question

    I have everything set up: UCS 5108, ESXi hosts, VMs, but I don't have a SAN connection yet... My question is whether I can enable vMotion. Let's say I have a cluster in vCenter, and within that cluster I have two ESXi hosts, with a Win2008 VM up and running on one of them. Can I migrate (using vMotion) that Win2008 VM to the other ESXi host without a SAN connection?

    And what about Storage vMotion from one ESXi host's hard drive to another ESXi host's hard drive? Is this possible?

    Note: I have no SAN connection in this configuration.

    One of the requirements for vMotion is shared storage that is visible to both the source and destination ESXi hosts (SAN, iSCSI or NFS).

    http://www.VMware.com/PDF/vSphere4/R41/vsp_41_dc_admin_guide.PDF

    pages 210 to 215

    You can use Openfiler as an iSCSI SAN for lab tests.

    http://www.Openfiler.com/community/download/
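
    Once an iSCSI target (Openfiler or anything else) is available, a minimal PowerCLI sketch for presenting it to a host could look like this (not part of the original reply; the host name and target IP are made up):

    # Enable software iSCSI, add a dynamic (send targets) entry and rescan.
    $esx = Get-VMHost -Name "esxi01.lab.local"
    Get-VMHostStorage -VMHost $esx | Set-VMHostStorage -SoftwareIScsiEnabled $true
    $hba = Get-VMHostHba -VMHost $esx -Type IScsi | Where-Object { $_.Model -match "Software" }
    New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.1.50" -Type Send   # iSCSI target IP (example)
    Get-VMHostStorage -VMHost $esx -RescanAllHba -RescanVmfs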

    Thank you

    Zouheir

  • Is it possible to use Storage vMotion with VMFS and virtual RDM disks?

    Hello!

    Can the Storage vMotion tool be used to move a server that has both VMFS disks and virtual RDM drives attached?

    Both SANs are EMC, and they are reachable from VMware ESX and VirtualCenter.

    Thank you!

    For storage on the new SAN, again both hosts will need access to the storage. You could then cold migrate the virtual machine. First, you will need to shut down the virtual machine and remove the RDM mapping. Once the virtual machine is on the new datastore, re-create the RDM mapping (both hosts will need access to the backend LUN).
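
    A hedged PowerCLI sketch of that cold-migration sequence (all names and the device path are hypothetical placeholders, not values from this thread):

    # Shut the VM down and detach the virtual RDM pointer (the backing LUN is untouched).
    $vm = Get-VM -Name "MyServer"
    Stop-VM -VM $vm -Confirm:$false
    Get-HardDisk -VM $vm -DiskType RawVirtual | Remove-HardDisk -Confirm:$false

    # Cold migrate the remaining VMFS disks to a datastore on the new SAN.
    Move-VM -VM $vm -Datastore (Get-Datastore -Name "NewEMC-DS01")

    # Re-create the virtual RDM against the same LUN (both hosts must see it), then power on.
    $rdmLun = "/vmfs/devices/disks/naa.xxxx"   # placeholder: the LUN's device path
    New-HardDisk -VM $vm -DiskType RawVirtual -DeviceName $rdmLun
    Start-VM -VM $vm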

  • Multi-NIC vMotion with ESXi/vCenter 4.1

    We are running ESXi and vCenter 4.1, and after taking the 5.5 class, with my exam coming up in a few weeks, I have been actively trying to improve our environment. Before I started studying and learning more about VMware, we were in pretty bad shape: mismatched hardware (AMD and Intel CPUs, different Intel CPU generations, different amounts of RAM and CPU), mismatched hypervisor versions (ESXi and ESX), no redundancy, no vMotion, and TONS of snapshots used as backups.

    In the two weeks since my course, I have eliminated all snapshots (running a daily vCheck to keep an eye on the health of the environment), migrated to 5 similar hosts (similar memory/CPU configurations) that we had sitting unused, connected all 6 NIC ports to 2 x Cisco 3560G switches (bringing the second switch into play), upgraded ESX to ESXi 4.1 and patched all hosts with Update Manager (which nobody had been using), created host profiles and compliance checks for the cluster and hosts, activated DRS and HA, set up a couple of vApps for STM systems... the list is long.

    I still have a lot to learn, but now I'm a bit confused about one thing...

    We use a Fibre Channel SAN, and one next step is to get our second Fibre Channel switch hooked up for redundancy and, I guess, multipathing (?). I have a couple of questions...

    1. Setting up the second fibre switch would give me multiple paths to the datastores, correct?

    2. Can I create a separate vMotion network in our configuration when using FC SAN? Does any vMotion traffic flow through the vSwitches, or does it stay behind the FC switch?

    - I know that with iSCSI you want to create a separate vSwitch and set up multi-NIC vMotion.

    3. When configuring redundant management interfaces, do I need to create two vSwitches, each with a VMkernel management port with its own IP address, or just create one vSwitch with a single VMkernel port and two NICs assigned to it (two different physical NICs connected to 2 physical switches)?

    - We will most likely use VST if we can get the trunk ports to pass default-VLAN traffic, so I think it is still acceptable to create separate vSwitches for management, vMotion (if needed, given the FC) and the VM port groups? The designs I see online usually use only one vSwitch for VST with multiple port groups.

    That's all I can think of for now... just some things that need clarifying... I guess I still need a vMotion vSwitch (allocating 2 of the 6 network adapters to it) because some traffic would pass over it, but I think most vMotion and all Storage vMotion traffic would stay behind the FC switch.

    Thanks for any help!

    Regarding the topic of the thread: Multi-NIC vMotion was introduced with vSphere 5.x and is not available in earlier versions.

    1.) Multipathing is not related to the number of FC switches, only to the number of initiators and targets. However, using several FC switches increases availability thanks to the redundancy.

    2.) You need to differentiate here. vMotion is a live migration of a VM to another host, i.e. it only migrates the running workload, and it only uses the network. Storage vMotion, on the other hand, generally uses the storage connections - i.e. FC in your case - to migrate the virtual machine's files/folders.

    3.) Redundancy for management traffic can be achieved in several ways. The easiest is to simply assign multiple uplinks (vmnics) to the management vSwitch. A single 'Management Network' port group will then do, and redundancy is handled by the vSwitch failover.

    From a design point of view, you can use multiple vSwitches for different traffic types, or combine them on one vSwitch by configuring the failover policies (Active/Standby/Unused) per port group, for example - see the sketch below.
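
    A small PowerCLI sketch of the "combine on one vSwitch" option, with one port group active on vmnic0/standby on vmnic1 and another the other way around (the host, vSwitch and port group names are hypothetical):

    $esx = Get-VMHost -Name "esxi01.lab.local"
    $vsw = Get-VirtualSwitch -VMHost $esx -Name "vSwitch0"

    # Management: active on vmnic0, standby on vmnic1.
    Get-VirtualPortGroup -VirtualSwitch $vsw -Name "Management Network" |
        Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic1

    # vMotion: the mirror image, active on vmnic1, standby on vmnic0.
    Get-VirtualPortGroup -VirtualSwitch $vsw -Name "vMotion" |
        Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic0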

    André

  • Storage vMotion from ESXi 5.5 to 6.0 does not work for some VMs

    In a test cluster with 2 nodes and no shared storage (local storage only), vMotion from the 5.5 host to the 6.0 host does not work.

    Before the upgrade, I moved all the VMs to one host.

    Then I upgraded the other host to 6.0 and tried to move all the VMs to the upgraded 6.0 host (moving host + storage).

    Of 20 VMs, 5 were moved to the new host.

    All the others always get the error below.

    The VMs are all hardware version 10 (but some of them were upgraded from 8 or 9).

    Guest OSes are Windows 2008, 2012 R2 and Linux. Some with vmxnet3, some with e1000.

    The machines can be moved when they are powered off.

    After that powered-off migration, Storage vMotion from 6.0 to 5.5 and back from 5.5 to 6.0 works again.

    Failed to copy one or more of the virtual machine's disks. See the virtual machine's log for more details.

    Failed to set up disks on the destination host.

    vMotion migration [-1408040447:1427943699552189] failed to read stream keepalive: Connection closed by remote host, possibly due to timeout

    Failed while waiting for data. Error 195887167. Connection closed by remote host, possibly due to timeout.

    VM log:

    2015-04-02T03:01:34.299Z | VMX | I120: VMXVmdb_SetMigrationHostLogState: hostlog state transits to emigrating for migrate 'from' mid 1427943699552189
    2015-04-02T03:01:34.307Z | VMX | I120: MigratePlatformInitMigration: DiskOp file set to /vmfs/volumes/526d0605-8317b046-f37e-0025906cd27e/test1/test1-diskOp.tmp
    2015-04-02T03:01:34.311Z | VMX | A115: ConfigDB: Setting migration.vmxDisabled = "TRUE"
    2015-04-02T03:01:34.326Z | VMX | I120: MigrateWaitForData: waiting for data.
    2015-04-02T03:01:34.326Z | VMX | I120: MigrateSetState: Transitioning from state 8 to 9.
    2015-04-02T03:01:34.467Z | VMX | I120: MigrateRPC_RetrieveMessages: Informed of a new user message, but cannot process messages in state 4. Leaving the message queued.
    2015-04-02T03:01:34.467Z | VMX | I120: MigrateSetState: Transitioning from state 9 to 10.
    2015-04-02T03:01:34.468Z | VMX | I120: MigrateShouldPrepareDestination: Remote host does not support an explicit migration destination prepare step.
    2015-04-02T03:01:34.469Z | VMX | I120: MigrateBusMemPrealloc: BusMem preallocation complete.
    2015-04-02T03:01:34.469Z | Worker #0 | I120: SVMotion_RemoteInitRPC: Completed.
    2015-04-02T03:01:34.471Z | Worker #0 | W110: SVMotion_DiskSetupRPC: Expected related disk not found for file: /vmfs/volumes/526d0605-8317b046-f37e-0025906cd27e/test1/test1-ctk.vmdk
    2015-04-02T03:01:34.471Z | Worker #0 | W110: MigrateRPCHandleRPCWork: DiskSetup RPC callback returned unsuccessful state 4. Failing RPC.
    2015-04-02T03:01:34.493Z | VMX | I120: VMXVmdb_SetMigrationHostLogState: hostlog state transits to failure for migrate 'from' mid 1427943699552189
    2015-04-02T03:01:34.502Z | VMX | I120: MigrateSetStateFinished: type = 2 new state = 12
    2015-04-02T03:01:34.502Z | VMX | I120: MigrateSetState: Transitioning from state 10 to 12.
    2015-04-02T03:01:34.502Z | VMX | I120: Migrate: Caching migration error message list:
    2015-04-02T03:01:34.502Z | VMX | I120: [msg.migrate.waitdata.platform] Failed waiting for data. Error bad003f. Connection closed by remote host, possibly due to timeout.
    2015-04-02T03:01:34.502Z | VMX | I120: [vob.vmotion.stream.keepalive.read.fail] vMotion migration [ac130201:1427943699552189] failed to read stream keepalive: Connection closed by remote host, possibly due to timeout
    2015-04-02T03:01:34.502Z | VMX | I120: Migrate: Cleaning up migration state.
    2015-04-02T03:01:34.502Z | VMX | I120: SVMotion_Cleanup: Cleaning up XvMotion state.
    2015-04-02T03:01:34.502Z | VMX | I120: Closing all disks in the virtual machine.
    2015-04-02T03:01:34.504Z | VMX | I120: Migrate: Final status reported through Vigor.
    2015-04-02T03:01:34.504Z | VMX | I120: MigrateSetState: Transitioning from state 12 to 0.
    2015-04-02T03:01:34.505Z | VMX | I120: Migrate: Final status reported through VMDB.
    2015-04-02T03:01:34.505Z | VMX | I120: Module Migrate power on failed.
    2015-04-02T03:01:34.505Z | VMX | I120: VMX_PowerOn: ModuleTable_PowerOn = 0
    2015-04-02T03:01:34.505Z | VMX | I120: SVMotion_PowerOff: Not running Storage vMotion. Nothing to do
    2015-04-02T03:01:34.507Z | VMX | A115: ConfigDB: Setting replay.filename = ""
    2015-04-02T03:01:34.507Z | VMX | I120: Vix: [291569 mainDispatch.c:1188]: VMAutomationPowerOff: Powering off.
    2015-04-02T03:01:34.507Z | VMX | W110: /vmfs/volumes/526d0605-8317b046-f37e-0025906cd27e/test1/test1.vmx: Cannot remove symlink /var/run/vmware/root_0/1427943694053879_291569/configFile: No such file or directory
    2015-04-02T03:01:34.507Z | VMX | I120: WORKER: asyncOps = 4 maxActiveOps = 1 maxPending = 0 maxCompleted = 0
    2015-04-02T03:01:34.529Z | VMX | I120: Vix: [291569 mainDispatch.c:4292]: VMAutomation_ReportPowerOpFinished: statevar = 1, newAppState = 1873, success = 1 additionalError = 0
    2015-04-02T03:01:34.529Z | VMX | I120: Vix: [291569 mainDispatch.c:4292]: VMAutomation_ReportPowerOpFinished: statevar = 0, newAppState = 1870, success = 1 additionalError = 0
    2015-04-02T03:01:34.529Z | VMX | I120: Transitioned vmx/execState/val to poweredOff
    2015-04-02T03:01:34.529Z | VMX | I120: Vix: [291569 mainDispatch.c:4292]: VMAutomation_ReportPowerOpFinished: statevar = 0, newAppState = 1870, success = 0 additionalError = 0
    2015-04-02T03:01:34.529Z | VMX | I120: Vix: [291569 mainDispatch.c:4331]: Error VIX_E_FAIL in VMAutomation_ReportPowerOpFinished(): Unknown error
    2015-04-02T03:01:34.529Z | VMX | I120: Vix: [291569 mainDispatch.c:4292]: VMAutomation_ReportPowerOpFinished: statevar = 0, newAppState = 1870, success = 1 additionalError = 0

    Any ideas on how to solve this? Maybe it's a bug in vSphere 6.0.

    Given that some machines did work, and that after the powered-off move vMotion works in both directions between 6.0 and 5.5, I don't think it's a configuration error.

    Hello

    It seems there could be a bug, or the hosts have difficulty negotiating the disk transfer over the network. Storage vMotions are carried out via the management network (a hard-coded limitation) - are you sure the management networks are not overloaded and that they are running at the expected speed? Does this also happen with machines that have changed block tracking disabled? That looks like it could be the problem.
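
    If changed block tracking looks like the culprit (the log complains about a missing test1-ctk.vmdk), one assumed workaround is to switch CBT off on a test VM before retrying; a minimal PowerCLI sketch:

    # "test1" is the VM name from the quoted log; the approach itself is an assumption, not a confirmed fix.
    $vm = Get-VM -Name "test1"
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.ChangeTrackingEnabled = $false
    $vm.ExtensionData.ReconfigVM($spec)
    # A power cycle (or creating and deleting a snapshot) is usually needed before the -ctk.vmdk files disappear.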

    Also, are you using the 'old' client or the Web client? It is possible that some vSphere 6.0 features were left out of the old thick client.

    Below are the errors:

    2015-04-02T03:01:34.468Z | VMX | I120: MigrateShouldPrepareDestination: Remote host does not support an explicit migration destination prepare step.

    2015-04-02T03:01:34.471Z | Worker #0 | W110: SVMotion_DiskSetupRPC: Expected related disk not found for file: /vmfs/volumes/526d0605-8317b046-f37e-0025906cd27e/test1/test1-ctk.vmdk

    Thanks in advance for the answers, and good luck!

  • Two clusters - Storage vMotion with or without shared storage?

    If I have an existing cluster (vSphere 5.1) where all 5 hosts are connected to the FC storage, and I then create a new cluster (managed by the same vCenter) with its own storage, and the hosts of each cluster do NOT have access to the other cluster's storage:

    Can I Storage vMotion a virtual machine from one cluster to the other?

    Are there any caveats? Does it require Enterprise or Enterprise Plus? Do the servers have to be in the same cluster, or is simply being managed by the same vCenter sufficient?

    Hello

    It is possible - check this: VMware vSphere 5.1

  • Storage vMotion from local storage to SAN storage in ESXi 5.0

    Hello

    I have a couple of ESXi 5.0 hosts with the free license, and all the VMDKs are located on local storage. I want to move all the VMs from local storage to the SAN datastore, but a local datastore cannot be presented to several ESXi hosts as shared storage. Is there a way to use Storage vMotion to migrate all the VMDKs to the central SAN without interruption?

    I have a vCenter that manages all the hosts, but only for normal administrative tasks. How can I turn the local datastore into shared storage that can be presented to multiple hosts?

    Hello

    Do you have the vSphere licenses?
    If so, you can add the ESXi hosts to vCenter, present the SAN storage to the ESXi hosts, and perform a Storage vMotion (migrate datastore) from local storage to SAN storage.

    You will need vCenter because the Storage vMotion (migrate) function is a vCenter feature; it is not available on standalone ESXi.

    See the documentation here - Migrating Virtual Machines with Storage vMotion: vSphere Documentation Center

    If you can accept a service interruption, you can use cold migration instead: vSphere Documentation Center
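
    A minimal PowerCLI sketch of that Storage vMotion, assuming the hosts are already in vCenter and the SAN datastore is presented to them (names are hypothetical):

    $vm = Get-VM -Name "AppServer01"
    $sanDatastore = Get-Datastore -Name "SAN-Datastore01"

    # Migrate the VM's files to the SAN datastore while it keeps running.
    Move-VM -VM $vm -Datastore $sanDatastore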

    Thank you

    Bayu

  • Is hot migration possible with VMware ESXi 5.1.0 without a management server?

    Is hot migration possible with VMware ESXi 5.1.0 without a management server?
    I'm wondering about a future scenario where a system administrator has to build a Win2008 + DC server on local storage; then, once the physical box is integrated into the classified network, the box is connected to the classified SAN storage. Is it possible to hot-migrate the Win2008 + DC server from local storage to SAN storage at that point, with only the free version of VMware ESXi 5.1.0?

    Welcome to the community - I'm guessing you want to do this while the virtual machine is running. That is not possible without a management server, also known as vCenter, but all is not lost: you would be able to move the VM from local storage to the SAN with the VM powered off, using the client's storage management features.

  • Storage vMotion with eager-zeroed thick

    Hello

    In version 5.0, in the GUI, when you want to Storage vMotion a virtual machine, you have three choices for the disk format: thin / lazy-zeroed thick / eager-zeroed thick.

    We have a lot of virtual machines in thin / lazy-zeroed thick format and want to convert them to eager-zeroed thick without service interruption.

    In PowerShell, Storage vMotion is done with the Move-VM cmdlet. It has a "DatastoreStorageFormat" parameter which, according to the documentation, only accepts Thin/Thick.

    After a few tries, moving a thin VM with the cmdlet and "Thick" as the setting creates a VMDK that is lazy-zeroed thick.

    I also tried the "EagerZeroVirtualDisk_Task" API, but ran into a problem with a lock on the file. It is not really clear in the documentation, but I think the virtual machine must be powered off in order to use that call.

    Is there a way in PowerCLI to do a Storage vMotion and (re)create the VMDKs as eager-zeroed thick?

    Try it like this

    # Get the VM, the destination datastore and the VM's hard disks.
    $vm = Get-VM -Name MyVM
    $destinationds = Get-Datastore -Name MyTargetDS
    $HardDisks = @(Get-HardDisk -VM $vm)

    # Relocate spec pointing at the destination datastore.
    $spec = New-Object VMware.Vim.VirtualMachineRelocateSpec
    $spec.Datastore = $destinationds.Extensiondata.MoRef

    # For each disk, request an eager-zeroed (eagerlyScrub) flat backing on the destination.
    1..$HardDisks.Count | %{
        $newFilename = "[" + $destinationds.Name + "]" + ($HardDisks[$_ - 1].Filename.Split(']')[1])
        $objBackinginfo = New-Object VMware.Vim.VirtualDiskFlatVer2BackingInfo
        $objBackinginfo.DiskMode = "persistent"
        $objBackinginfo.eagerlyScrub = $true
        $objBackinginfo.FileName = $newFilename
        $objDisk = New-Object VMware.Vim.VirtualMachineRelocateSpecDiskLocator
        $objDisk.DiskID = $HardDisks[$_ - 1].Id.Split('/')[1]
        $objDisk.DataStore = $destinationds.Extensiondata.MoRef
        $objDisk.diskBackingInfo = $objBackinginfo
        $spec.Disk += $objDisk
    }

    # Start the Storage vMotion through the RelocateVM_Task API call.
    $vm.ExtensionData.RelocateVM_Task($spec, "defaultPriority")
    
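
    A quick way to verify the result after the relocate task finishes (using the same hypothetical VM name as above):

    Get-HardDisk -VM (Get-VM -Name MyVM) | Select-Object Name, Filename, StorageFormat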
