Storage migration with RDMs

I am migrating VM images to a new storage infrastructure. Things work really well, but I came across a few images that were set up many moons ago with RDMs. The kicker is that they are in physical compatibility mode, so I can't just migrate them and convert them to a VMDK. Any thoughts on an approach to migrate them? The only approach that comes to mind is to use VMware Converter. That would require some downtime, so I'd have to postpone it. I would have a lot more flexibility on when I can migrate images if I could use Storage vMotion, since Converter requires downtime for the images. At least only a small number fall into this category, but they are among the images with the largest disks attached - like 500 GB - 1 TB...

Buck

Buck - I realize this post is old, but I came across it and thought I would fill you in on how to migrate RDM LUNs in physical compatibility mode.  I have read so many posts saying it's impossible, but it's actually much easier than you think; the virtual machine just has to be powered off.  Here's how:

(1) log into the ESX console as root

(2) go to your virtual machine's folder: /vmfs/volumes/vmfs-volume-friendlyname/vm-folder.

(3) locate the pointer to your disk file. This is not the raw disk file itself; it's servername_1.vmdk, which is only a few KB. Inside, it does nothing but point to the real disk file, which in this case will be servername_1-rdmp.vmdk.  It is the file servername_1-rdmp.vmdk that maps the I/O to the real LUN, which is still represented by ESX as something like vml.xxxxxxxxxxxxxxx and can be found in /vmfs/devices/disks.
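
For reference, here is a minimal sketch of what steps (2) and (3) look like on the console; the volume, folder and file names below are examples, not your actual names:

cd /vmfs/volumes/vmfs-volume-friendlyname/vm-folder    # step (2): the VM's folder on the VMFS volume
cat servername_1.vmdk                                  # step (3): the small descriptor; it names the servername_1-rdmp.vmdk mapping file
ls -l /vmfs/devices/disks/ | grep vml                  # the vml.xxxx entries are how ESX represents the raw LUNs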

The source that you want to migrate (with this method it is actually a copy, which is good, because the source is always left intact) can be a virtual disk, a virtual RDM or a physical RDM; it doesn't matter, you will always use servername_x.vmdk as the source file in the command string you're about to see.

but before I continue: if you want to migrate a physical compatibility mode LUN to a new LUN in physical compatibility mode, be sure you have configured the new LUN so that ESX can see it.  If you are converting a virtual RDM to a physical mode RDM, the same applies.  If you are converting it to a virtual disk, just be sure you have a VMFS volume with enough space to accommodate the new disk.

(4) here it is:

To migrate and convert a physical RDM to a virtual disk:

vmkfstools -i servername_1.vmdk /vmfs/volumes/vm-folder/servername-newdisk_1.vmdk

To migrate and convert a physical RDM to a virtual RDM, there are two methods:

- in the VI client, edit the VM's settings and, with the VM powered off, remove the mapped raw disk, then re-add a new hard disk by selecting the same disk, making sure you select virtual compatibility mode; power on, log in, and you will see all your data and mount points are the same.

- from the ESX prompt, in the virtual machine's folder on the VMFS volume:

vmkfstools -i servername_1.vmdk -d rdm:/vmfs/devices/disks/vml.xxxxxxxxxxxxxxxxxxxxxxxx /vmfs/volumes/vm-folder/servername_1-rdm.vmdk

To migrate a physical RDM to a new physical RDM:

vmkfstools -i servername_1.vmdk -d rdmp:/vmfs/devices/disks/vml.xxxxxxxxxxxxxxxxxxxx /vmfs/volumes/vm-folder/servername_1-rdmp.vmdk

Command strings EXPLAINED:

vmkfstools -i is the import/clone command

the servername_x.vmdk is the source; the x is a placeholder for the unit number of the disk. In my example I use 1, which means it is the second disk on the virtual machine.  the first disk file is servername.vmdk, which is generally the system disk.

the -d is the disk type for the destination you are converting to

the rdm(p) is the part of the disk type saying the destination disk is a raw disk in virtual or physical mode (the p stands for physical, of course)

/vmfs/volumes/vm-folder/servername_1-rdm.vmdk tells it where to store the raw device mapping file. I chose to put it in the virtual machine's folder, as you can see; you can put it on any VMFS filesystem, but I like to keep things neat and clean.
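
As a quick sanity check after the clone (the paths are the example names from above), list the destination folder; the new descriptor is only a few KB, while the -rdm/-rdmp mapping file reports the full size of the LUN:

ls -lh /vmfs/volumes/vm-folder/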

Hope you find this useful, and don't let anyone tell you that you cannot migrate a physical RDM.  That's false information.  This is how it's done.

Tags: VMware

Similar Questions

  • VMotion VM with RDM failure

    I am facing a problem where I cannot vMotion/migrate a VM with an RDM. Other virtual machines with RDMs can vMotion, but I have a problem with one virtual machine. The error is:

    Incompatible device backing specified for device '33'.

    Copy the Virtual Machine configuration

    Check the compatibility mode, and see the bottom of the KB article below for more information.

    VMware KB: Difference between physical compatibility RDMs and virtual compatibility RDMs

  • Storage vMotion - with RDMs? Or will they be ignored most of the time?

    Hi all - I have a storage vMotion to do, 5 hosts in the cluster. Five or six virtual machines with RDMs are pinned to specific hosts for DRS purposes, but I want to move the OS drives (the VMDKs) from one SAN to another. Each virtual machine has a C:\ drive on the datastore; all the other disks on the virtual machine are physical RDMs. ALL I need to move is the C:\ (VMDK) in the storage vMotion process.

    Everything is connected and working today on the ESX hosts in the cluster; all hosts are zoned to see the RDMs.

    My question is, when we go to perform the storage vMotion, how do we avoid a failure? The RDMs are my concern; in the past, I think I remember a machine failing to migrate when it had RDMs attached.

    Any comments or thoughts would be really helpful.

    Nick

    You can also move the RDM pointers with the datastore 'change' wizard. Just be sure to select "same as source" for the target disk format. If you choose thick or thin, the RDM will be converted into a VMDK, but with "same format" only the RDM pointer is moved to the new datastore.

    -Andreas

  • Adding new storage fails with "Operation timed out" with RDM LUNs on the host. Pls help

    Hello

    Setup:

    4 ESX 3.5 hosts, each with 2 HBA adapters, connected to an HP MSA1000.

    Question:

    I set up MSCS SQL 2000 with RDMs across hosts. MSCS fails over without problems, and performance-wise it is not a problem at all either. But the issue is this: while the MSCS SQL resource is owned by the active virtual machine running on host1, I can browse storage, add LUNs, and format new LUNs on that host with no problems.

    If I want to add new storage on the other host, "host2", I receive the error "Operation timed out". Is it because the RDM LUN is busy serving the active virtual machine that the host does not allow any change such as adding new storage?

    The hosts can see these LUNs. I can add a datastore or do a rescan as long as the LUNs are not presented to the host as RDMs. Also, I can browse the datastores, see the Add Storage wizard when adding a new datastore, and do a rescan, but only on the host where the active node is running. Suppose the RDM LUNs are presented to host1 and host2 and the SQL VM is running on host1: I can only browse the datastores, rescan, and add new storage on that host. I can't do the same thing with host2.

    But if I remove these RDM LUNs from host2, host3 and host4, I can do a rescan on them, add other LUNs besides the RDMs presented to the virtual machine, and see the Add Storage wizard.

    I googled the error and found that it may be related to the vCenter server or DNS.

    Best regards

    Hussain Al Sayed

    Post edited by: habibalby

    I had issues similar to these two problems, where I couldn't add a datastore as it would time out, and I also got long startup times. I also use RDMs for MSCS across a pair of boxes. Here's a quote from my previous post in which I found a response that helped me. Changing the SCSI retry time improved my boot time and allowed me to add datastores again without the time-out issue.

    Response to previous Post:

    I did some research and I think I've made some progress. I found that I was getting lots of SCSI errors in the VMkernel log. I did some more research and found that I could change the SCSI retry value from 80 to 10, and it has done wonders for my reboot time. Now, instead of taking 20 minutes to start, it takes less than 5 minutes. Much better. I made the change under the host's Advanced Settings -> SCSI -> SCSI retries. 80 is the default and 10 was suggested as a good value. It helped, and I will keep an eye on what side effects it may have, but so far it has helped with startup times.
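
    For anyone who prefers the console, a hedged sketch of the same change; the post does not name the exact advanced option, so Scsi.ConflictRetries (the Scsi option that defaults to 80 on ESX 3.5) is my assumption here:

    esxcfg-advcfg -g /Scsi/ConflictRetries     # show the current value (assumed default of 80)
    esxcfg-advcfg -s 10 /Scsi/ConflictRetries  # lower the retry count to the suggested value of 10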

    http://communities.VMware.com//thread/203122?TSTART=0

  • Storage vMotion with RDMs

    I tried to storage vMotion 3 servers which contained RDMs. The RDMs were in virtual compatibility mode. I migrated 2 servers that were powered off and all was fine until I tried the 3rd server, which was powered on, and it reported an error saying that the block size was too small for the size of the disk.

    I thought that when a server with RDMs was storage vMotioned only the pointer file was actually moved and the RDM remained in its original state. It seems that the 2 servers I storage vMotioned now have virtual disks and the RDMs were dropped. I haven't powered the servers on yet to see if the data is still there, as I don't know what the status of the servers was before the storage vMotion, since they had been turned off. vCenter is 4.1 build 258902 and the ESX servers are 4.1.0, 260247.

    Dr

    When you storage vMotion a virtual mode RDM, it converts it to a virtual disk, and when the destination VMFS block size is too small for the size of the RDM (with a 1 MB block size, anything over 256 GB), you get this error. The data on the original RDM LUN will still be there. Why not power it on, since it won't hurt.

  • MSCS machines with RDMs, how to change the datastore?


    Hello everyone

    I'm having a problem moving 2 machines with RDM disks.
    We have 2 ESXi 4.1 hosts with vCenter, but no license for vMotion.

    These 2 machines make up an MSCS (Microsoft) cluster (a file server) with multiple (5) RDM disks,
    for a total of 8.9 TB. Of course, for use with the MS cluster, the RDMs are in physical compatibility mode.

    The problem is that the local vmdk files (and also the rdm.vmdk pointer files) for these 2 machines are on the local datastores (one machine on each host) and I need to move them to the shared storage. But if I power off a machine and try to change datastore with the RDMs attached, I get an error because the VMFS of the shared datastore (block size = 1 MB) does not support such large files (up to 2 TB).

    I read a lot of articles
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1031041
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1026256
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1002511
    and a few other community discussions.

    According to this article
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005241

    I tried to power off the machine, remove the RDMs, migrate to another datastore, and then reattach the RDMs.
    There is a big problem with that last step, because I don't know how to re-add the RDMs manually (vmkfstools?).
    I could add them as existing disks, but they were on local storage, and I don't want any remaining references to files on the local storage...
    According to the VMware documentation for MSCS, the disks of the second cluster node can be added simply by adding an existing disk and selecting the one used by the
    existing node. That is no problem with shared storage, but one host cannot read the local storage of another host...

    Any suggestion on how to migrate a virtual machine with RDMs to shared storage? Thank you

    It is possible to disable LUN filtering in order to see an RDM that is already assigned:

    http://www.yellow-bricks.com/2010/08/11/storage-filters/

    The option to temporarily disable in vCenter is config.vpxd.filter.rdmFilter
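
    If it helps, a hedged sketch of the two pieces involved; the setting key comes from the link above, while the vml ID, datastore and file names are placeholders:

    # In vCenter: Administration > vCenter Server Settings > Advanced Settings, add:
    #   config.vpxd.filter.rdmFilter = false   (set it back to true when done)
    # To recreate a physical-mode RDM pointer by hand from the console:
    vmkfstools -z /vmfs/devices/disks/vml.xxxxxxxxxxxxxxxx /vmfs/volumes/shared-datastore/vm-folder/servername_1-rdmp.vmdk
    # (use vmkfstools -r instead of -z for a virtual-mode RDM; then add the pointer to the VM as an existing disk)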

    Andrea

  • Poor performance with RDMs only when added to the MSCS CAB cluster

    Hey guys, thanks for taking a peek at this question... I'm looking for some ideas.

    I have 2 virtual machines running Windows 2008 R2.  They have MSCS set up and working as a CAB (cluster across boxes).  I am using a VNX 7500, fibre channel, and physical RDM disks.  It's on an 8-node ESXi 5.1 build 1117900 setup.  Functionally, everything works fine.

    The problem is that performance is very poor only on the RDMs that have been added to the MSCS cluster.  I can take the same disk, remove it from the cluster, run IOmeter, and it's fast.  I add it to the MSCS cluster, leaving it in Available Storage, and it drops to 1/15th of the IOPS.  Remove the disk from MSCS and it goes back to performing as usual.

    I tried different SCSI controllers (LSI Logic SAS vs. Paravirtual) and it doesn't seem to make a difference.  I have physical MSCS clusters that do not seem to show this kind of performance problem, so I wonder if there isn't something goofy with the MSCS configuration on the virtual machines.

    I've already implemented the fix in this article: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1016106

    I saw this poor performance issue on bus rescans until I marked the RDMs perennially reserved, but that has not been a problem since I implemented the fix.
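
    For reference, the perennially-reserved flag from that KB can be set per device from the ESXi 5.x shell; the naa ID below is a placeholder for your RDM LUN:

    esxcli storage core device setconfig -d naa.xxxxxxxxxxxxxxxx --perennially-reserved=true
    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx      # confirm "Is Perennially Reserved: true"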

    Any help or suggestions are appreciated...

    Thank you

    -dave

    Recently, I upgraded my ESXi environment from 5.0 to 5.1 and have a bunch of systems with RDMs. I had a problem as well. What I found was that during the upgrade it changed all the path policies for the RDM LUNs to Round Robin. This caused a huge performance issue in my environment. I changed all the paths to MRU and it solved the problem.
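
    If anyone needs the command-line equivalent, a minimal sketch on ESXi 5.x; the naa device ID is a placeholder, so check your own IDs with the list command first:

    esxcli storage nmp device list                                                  # shows the current Path Selection Policy per device
    esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_MRU   # switch one RDM LUN from Round Robin to MRU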

  • Is hot migration possible with VMware ESXi 5.1.0 without an ESX management server?

    Is hot migration possible with VMware ESXi 5.1.0 without an ESX management server?
    I'm asking about a future scenario where a system administrator must build a Win2008 + DC server on local storage,
    then, once the physical box is integrated into the classified network, the box is connected to the classified SAN storage.
    Is it possible to hot-migrate the Win2008 + DC server from local storage to SAN storage at that point, with only the
    free version of VMware ESXi 5.1.0?

    Welcome to the community - I guess you are looking to do this while the virtual machine is running. This is not possible without a management server, also called vCenter, but all is not lost, because you would be able to move the VM from local storage to the SAN with the VM powered off, using the client's storage management features.

  • What is 'Cold migration with file relocation'?

    Hello

    I just read http://kb.vmware.com/kb/1005241 on migrating virtual machines with attached RDMs. The document states that the following is true when you are doing a cold migration with file relocation.

    When you cold migrate a VM with an RDM attached to it, the content of the raw LUN mapped by the RDM is copied into a new destination .vmdk file, effectively converting or cloning a raw LUN into a virtual disk. This also applies when the virtual machine is not moving between ESX hosts. In this process, your original raw LUN is left intact. However, the virtual machine no longer reads from or writes to it. Instead, the newly created virtual disk is used.
    I just tested this and cold migrated a VM with a vRDM attached to another datastore. However, the vRDM was not converted to a virtual disk, but remained a vRDM. Is 'Cold migration with file relocation' something different from right-clicking the VM in vCenter, choosing 'Migrate', then 'Change datastore' or 'Change host and datastore'?
    Thank you
    Michael

    Hello

    Glad to hear that it worked for you. Yes, the document isn't very descriptive about the process; the only reason I know how to do it is previous experience.

    Gregg

  • Configuration of MPIO with RDM?

    Hello!  I just finished enabling round-robin MPIO on all our volumes / ESX servers (HUGE reduction in disk latency!) but I can't find a way to do it for the RDMs that the virtual machines are using.  As they do not appear in the storage view, I don't know how to do this.  I can see the connections on the vmhba interface, and I see that they are 'Active' but not 'Active (I/O)'.

    Can someone point me in the right direction?   Thank you!

    You can either do it from the adapter view (as opposed to the storage view; right-click on the LUN), or use esxcfg-mpath from the command line.
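
    A minimal sketch from the ESX console, for anyone who wants to check the RDM paths there; the output lists each LUN's paths, their state, and the current policy:

    esxcfg-mpath -l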

    -Matt

    VCP, vExpert, Unix Geek

  • Should MSSQL server cluster VMs be on the same host or on different hosts with RDMs?

    Could someone advise me on how to place the SQL cluster VMs with RDMs on ESXi hosts?

    What is the best practice for placing the SQL VMs on ESXi hosts?

    Affinity or anti-affinity...?

    Your valuable answers are appreciated.

    Depends entirely on the use case.

    • 2 MS SQL nodes on the same host for software availability
    • 2 nodes on different hosts for hardware availability

    I'd say MS clusters on the same host (in the era of HA and vSMP FT) are redundant with VMware's own features and represent added management overhead. MS clusters across hosts do provide something in addition to what VMware vSphere alone can provide!

  • SRM 5 with NetApp SRA, FCoE site to iSCSI site, with RDMs

    Hi team, could someone please help verify whether this scenario works?

    We have a PROD site running VMware vCenter 5.0, ESXi 5.0, SRM 5.0, and a NetApp 3210 filer on ONTAP 8.1. The VMs are Windows Server 2008, with MS SQL 2008 as the main DB. The database files are on virtual RDMs. Only FCoE at this site. AD is a physical Server 2003. The rest are virtual machines.

    We propose to deploy SRM for the DR site. The DR site is synchronized with the NetApp SRA and SnapMirror, controlled by the SRM 5.0 console. The recovery site has no FCoE; only iSCSI and NFS licenses are available. Still RDMs, but using iSCSI. vCenter 5.0, ESXi 5.0, NetApp 2040 on ONTAP 8.1.

    My questions are:

    1. Does VMware SRM support DR failover in this case with different protocols, PROD being FCoE and DR being iSCSI?

    2. Does SRM support failover with RDMs in our case?

    3. Does the DR site AD server need to be installed fresh as a best practice, or will we have to use SRM to fail the production AD over to the DR site? What is recommended?

    4. Can the physical AD be kept as it is, as is common, and still be supported by SRM?

    5. At the recovery site, what is the best practice? Use NFS for datastores and iSCSI for the RDMs? Or use iSCSI LUNs for both datastores and RDMs to simplify?


    Thank you!

    1. Yes
    2. Yes
    3. I would recommend, if possible, having an AD domain controller already running at the DR site - it may be a virtual machine
    4. Yes, the PROD site domain controller can be physical
    5. I'd keep it all iSCSI, just to be consistent.
  • SRM with RDM, PROD using FCoE and DR using iSCSI

    Hello team, I am planning an SRM deployment for a client and am hoping to get your advice. The production site uses vCenter 5.0, NetApp 3210, SRM 5, ESX 5.0, with FCoE for the RDMs accessed by the MS SQL server. The DR site has only an iSCSI license, no FCoE, no CNA cards. Will SRM still work with the RDM disks holding the MS SQL data LUNs? Advice please, thank you!

    Welcome to the community - there should be no problem with presenting the RDMs from an iSCSI SAN.

  • There is a problem with the storage on datastore path '[path to disk]'

    I have two sites connected using a VPN connection over a WAN link. At each site I have an ESXi host with several VMs, and a vSphere Replication appliance (v 5.1.1.0 Build 1079383) running at each site. At each site there is also a vCenter server with the VR appliance registered and connected to the other site. All the virtual machines are configured to replicate to the other site.

    In my initial configuration of the VR appliance (first install), the remote site had 2 vNICs used for management, and I figured that was my problem with replication (see kb2040302). So I have since changed my remote-site host configuration to have only a single management vNIC (it did not solve the problem). I then unregistered, removed, and reinstalled the VR appliance with no change to the problem.

    What is going on:

    1. I am able to configure replication between sites without a problem.

    2. The first replication occurs between the sites and reports success.

    3. Replication of the remote VM to the main site continues to work properly and all is well for days; it keeps working.

    4. Replication of the main site's VM to the remote site fails on the second and all subsequent attempts, reproducing the error: There is a problem with the storage on datastore path '[datastorename]'.

    (Attached screenshot: ReplicationError.JPG)

    I followed the advice of kb2040302 as best I can. But I must confess that I could use some more detailed instructions on how, and at which site, I should check the logs, and how to check my management vNIC configuration.

    I'm looking for someone to provide additional troubleshooting steps in good detail.

    Thank you


    Resolution:

    The problem was caused by a security setting on the NFS server datastore. The VR appliance could not delete the hard disk (.vmdk) files.

    Specifically, my problem was with permissions on the Synology NAS under the "File Station" application.

  • SRM 5.1 & NetApp - VM with RDM and MSCS

    Hello

    I am implementing SRM 5.1 with NetApp (with SnapMirror) and I have several questions:

    - Replication of the volumes through the array manager runs successfully

    - How is replication of virtual machines with RDMs handled? Can it also be done through the array manager (with SnapMirror), or is it better to do it with vSphere Replication? Can it be achieved with vSphere Replication at all?

    - Are there any tips for replicating virtual machines that are in a Microsoft cluster?

    Thanks for all!

    I'm not a NetApp person, but this should be covered in the SRA documentation.

    Another point is that MSCS RDMs cannot be configured with the Round Robin path policy (which is usually the default for NetApp LUNs). You must reconfigure those LUNs to MRU.
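
    A hedged sketch for changing this on an ESXi 5.x host; the SATP name is an assumption (NetApp arrays with ALUA typically claim VMW_SATP_ALUA), so verify it with esxcli storage nmp device list first:

    esxcli storage nmp satp set --satp=VMW_SATP_ALUA --default-psp=VMW_PSP_MRU   # default PSP for newly claimed LUNs
    # existing LUNs keep their current policy; switch them individually with esxcli storage nmp device set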

    Edit: I just did a quick search on Google and found this PDF that looks like what you need:

    http://www.NetApp.com/us/media/tr-4064.PDF

    Michael.
