Slow RDM mapping with PowerCLI

We have a process where we feed a list of about 15 devices into a foreach loop to add them to a virtual machine (see the code snippet).  The process works well; however, there is a delay of about 1 minute between each device mapping, judging by the "Recent Tasks" view in the client...  Any ideas on what could be causing this delay between maps, and/or ways to speed it up?  Thanks.

# Add RDM disks
foreach ($rdmID in $rdmIDs) {
    Write-Host "Mapping" $rdmID.DeviceId "to" $vmName
    $deviceName = (Get-ScsiLun -VmHost $esxhost -LunType disk |
        Where-Object {$_.CanonicalName -match $rdmID.DeviceId}).ConsoleDeviceName
    New-HardDisk -VM $vmName -DiskType RawPhysical -DeviceName $deviceName | Out-Null
} # End addition of RDM disks

If your CSV file uses a column header, try something like this:

Import-Csv C:\names.csv -UseCulture | % {"naa." + $_.DeviceId}
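One likely contributor to the delay, though the thread doesn't confirm it: Get-ScsiLun enumerates every LUN on the host on each pass through the loop, and New-HardDisk blocks until each reconfigure task finishes. A minimal sketch of both optimizations, assuming the same $rdmIDs, $esxhost and $vmName variables as the snippet above:

# Fetch the LUN list once, outside the loop (assumes the LUN inventory
# does not change while the loop runs)
$allLuns = Get-ScsiLun -VmHost $esxhost -LunType disk

foreach ($rdmID in $rdmIDs) {
    $deviceName = ($allLuns |
        Where-Object {$_.CanonicalName -match $rdmID.DeviceId}).ConsoleDeviceName
    # -RunAsync queues the reconfigure task instead of waiting for it;
    # vCenter serializes the tasks against the same VM
    New-HardDisk -VM $vmName -DiskType RawPhysical -DeviceName $deviceName -RunAsync | Out-Null
}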

Tags: VMware

Similar Questions

  • Single RDM mapping file vs. several RDM mapping files?

    Hello

    When configuring Microsoft Cluster nodes in a cluster-across-boxes scenario (two ESXi hosts, a two-node MSCS cluster, with one node on each ESXi host), I understand that physical mode RDM (pass-through RDM) is the way to go.  I also usually place the C: drive (eager zeroed thick) of each cluster node on a different VMFS volume to ensure two distinct failure domains.  So, for example, the C: drive of cluster node 1 (VM1) might be on Datastore1 and the C: drive of cluster node 2 (VM2) might be on Datastore2.  Similarly, I have always created separate RDM mapping files for each virtual machine as well, pointing to the same raw LUN.  For example, I would put the RDM mapping for the shared quorum LUN on Datastore1 for VM1, and create a new RDM mapping file pointing to the same quorum LUN on Datastore2 for VM2.  That way, a failure of Datastore1 affects only VM1, and a failure of Datastore2 affects only VM2.  I have done this for years (probably since ESX 2.5) and MSCS works very well in this configuration.

    However, I was re-reading the official VMware "Setup for Failover Clustering and Microsoft Cluster Service" documentation and noticed that VMware states in several places that "a single, shared RDM mapping file for each clustered disk is required."  In other words, they ask me to select 'existing hard disk' on VM2 and point to the RDM mapping file that I created on VM1, rather than creating a fresh new RDM mapping file for VM2.  This seems "less safe", because a failure of Datastore1, or corruption of the RDM mapping file, would take down VM2 as well.  A separate RDM mapping file for VM2, on a separate datastore, would give the two virtual machines two distinct failure domains.

    Does anyone know why VMware requires a single shared RDM mapping file?  Are there specific problems caused by having two separate RDM mapping files?

    Thank you

    Bill

    It seems that VMware has posted a KB that explicitly states that a single RDM pointer/mapping file must be used across all nodes of the cluster.  So I guess that settles my question!

    Multiple RDM pointer files after storage migration (2131011)

    http://KB.VMware.com/kb/2131011

    Bill

  • New-HardDisk RDM - selecting an existing RDM mapping file at creation

    Hi all

    I am trying to automate the creation of new RDM disks for virtual machines.

    We store the RDM mapping files separately on a named datastore; here is the sequence of commands.

    1. Because I can't find a PowerCLI method to create the RDM mapping file, I use vmkfstools as below:

    vmkfstools -z /vmfs/devices/disks/naa.XXXX01 /vmfs/volumes/clustername/SharedDisk/hostname/disk1.vmdk

    2. Now that we have the RDM mapping file, I would like to create a new physical RDM disk:

    New-HardDisk -Controller $controller -DiskType RawPhysical -DeviceName "/vmfs/devices/disks/naa.XXXX01" -VM $vm

    However, this creates a new mapping file. I don't see how I can use the one I created above.

    Am I missing something? Attaching a disk using an existing RDM mapping file is trivial in the GUI.

    No, you're not missing anything.

    This scenario is not possible with the New-HardDisk cmdlet, I'm afraid.

    You can use a script that uses the ReconfigVM method.

    Through the Device.Backing property, you can specify the location of the VMDK file.
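    A minimal sketch of that ReconfigVM approach, assuming the pointer VMDK already exists (created with vmkfstools as above) and that $pointerPath, $controllerKey and $unitNumber have been looked up beforehand; because the file already exists, FileOperation is left unset so the disk is attached rather than created:

    $vmView = Get-VM $vm | Get-View

    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.DeviceChange = @(New-Object VMware.Vim.VirtualDeviceConfigSpec)
    $spec.DeviceChange[0].Operation = "add"   # no FileOperation: reuse the existing pointer file

    $disk = New-Object VMware.Vim.VirtualDisk
    $disk.Key = -100                          # temporary negative key; vSphere assigns the real one
    $disk.ControllerKey = $controllerKey      # assumption: key of an existing SCSI controller
    $disk.UnitNumber = $unitNumber            # assumption: a free unit number on that controller
    $disk.Backing = New-Object VMware.Vim.VirtualDiskRawDiskMappingVer1BackingInfo
    $disk.Backing.FileName = $pointerPath     # e.g. "[clustername] SharedDisk/hostname/disk1.vmdk"
    $disk.Backing.CompatibilityMode = "physicalMode"
    $disk.Backing.DiskMode = ""
    $spec.DeviceChange[0].Device = $disk

    $vmView.ReconfigVM_Task($spec)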

  • RDM: delete and recreate the RDM mapping file and reattach the disk

    Hello

    I wrote the following script and it works fine, except that it does not create the RDM mapping file.

    If I comment out the following lines, it works fine.

    $fileMgr.DeleteDatastoreFile_Task($name, $datacenter)

    $spec.deviceChange[0].FileOperation = "create"

    Here is the code below:

    function Replace-VM-RDM {
        Param ($scsiid, $rdmname, $vmname, $filename)

        <#
        Convert the SCSI address to VMware's format
        #>
        $scsicontroller = $null
        $scsiid_split = $null
        $scsiid_split = $scsiid.Split(":")
        $scsicontroller = $scsiid_split[0]
        # The SCSI controller ID shown in the VMware GUI is one higher than the real controller id
        $scsicontroller = [int]$scsicontroller + 1
        # VMware expects a controller key with a 4-character id
        $scsicontroller = $scsicontroller.ToString() + "000"
        $scsicontroller

        # SCSI logical unit number
        $scsilun = $null
        # The SCSI LUN ID shown in the VMware GUI is one higher than the real lun id
        $scsilun = [int]($scsiid_split[1]) #+ 1

        ###
        $vm = Get-VM -Name $vmname | Get-View

        if (!($filename)) {
            $RDMFile = $rdmname.Split(" ")[0] + "_RDM.vmdk"
            $filename = ($vm.Config.Files.VmPathName).Replace("$vmname.vmx", "$RDMFile")
        }

        $esx = Get-View $vm.Runtime.Host

        <#
        Get the CanonicalName of the RDM LUN
        #>
        $rdmCanonicalName = $null
        $rdmCanonicalName = ($esx.Config.StorageDevice.ScsiLun | where {$_.DisplayName -eq $rdmname}).CanonicalName
        $rdmDevicePath = ($esx.Config.StorageDevice.ScsiLun | where {$_.DisplayName -eq $rdmname}).DevicePath

        foreach ($dev in $vm.Config.Hardware.Device) {
            if (($dev.GetType()).Name -eq "VirtualDisk") {
                if (($dev.Backing.CompatibilityMode -eq "physicalMode") -or
                    ($dev.Backing.CompatibilityMode -eq "virtualMode")) {
                    if (($dev.ControllerKey -eq "$scsicontroller") -and ($dev.UnitNumber -eq "$scsilun")) {
                        # Remove the hard disk
                        $hd = Get-HardDisk $vm.Name | where {$_.Filename -eq $dev.Backing.FileName}
                        $hd | Remove-HardDisk -Confirm:$false
                        Write-Host "File name:" $dev.Backing.FileName
                        Write-Host "Disk mode:" $dev.Backing.DiskMode
                        $dev.Backing.DeviceName
                        $dev.Backing.LunUuid
                        $DevKey = $dev.Key
                        $CapacityInKB = $dev.CapacityInKB
                        $fileMgr = Get-View (Get-View ServiceInstance).Content.FileManager
                        $datacenter = (Get-View (Get-VM $VMname | Get-Datacenter).Id).MoRef
                        foreach ($disk in $vm.LayoutEx.Disk) {
                            if ($disk.Key -eq $dev.Key) {
                                foreach ($chain in $disk.Chain) {
                                    foreach ($file in $chain.FileKey) {
                                        $name = $vm.LayoutEx.File[$file].Name
                                        $fileMgr.DeleteDatastoreFile_Task($name, $datacenter)
                                    }
                                }
                                continue
                            }
                        }
                    }
                }
                elseif (($dev.ControllerKey -eq "$scsicontroller") -and ($dev.UnitNumber -eq "$scsilun")) {
                    Write-Host "Selected SCSI address [$scsiid] is not an RDM"
                }
            }
        }

        #$hd1 = New-HardDisk -VM $vmname -DeviceName $rdmDevicePath -DiskType RawPhysical   # this line works

        $spec = $null
        $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
        $spec.DeviceChange = @()
        $spec.DeviceChange += New-Object VMware.Vim.VirtualDeviceConfigSpec
        $spec.DeviceChange[0].Device = New-Object VMware.Vim.VirtualDisk
        $spec.DeviceChange[0].Device.CapacityInKB = $CapacityInKB
        $spec.DeviceChange[0].Device.Backing = New-Object VMware.Vim.VirtualDiskRawDiskMappingVer1BackingInfo
        $spec.DeviceChange[0].Device.Backing.FileName = $filename
        $spec.DeviceChange[0].Device.Backing.CompatibilityMode = "physicalMode"
        $spec.DeviceChange[0].Device.Backing.DiskMode = ""
        $spec.DeviceChange[0].Device.Backing.LunUuid = ($esx.Config.StorageDevice.ScsiLun | where {$_.DisplayName -eq $rdmname}).Uuid
        $spec.DeviceChange[0].Device.Connectable = New-Object VMware.Vim.VirtualDeviceConnectInfo
        $spec.DeviceChange[0].Device.Connectable.StartConnected = $true
        $spec.DeviceChange[0].Device.Connectable.AllowGuestControl = $false
        $spec.DeviceChange[0].Device.Connectable.Connected = $true
        # Use the next unused key for the hard disk
        $spec.DeviceChange[0].Device.Key = $DevKey + 1
        # UnitNumber 7 is reserved for the SCSI controller - skip past it
        if ($scsilun -eq 6) { $scsilun = $scsilun + 1 }
        # Use the next unit number for the hard disk
        $spec.DeviceChange[0].Device.UnitNumber = $scsilun
        # Device key of the SCSI controller
        $spec.DeviceChange[0].Device.ControllerKey = $scsicontroller
        # Create the vmdk file
        $spec.DeviceChange[0].FileOperation = "create"
        $spec.DeviceChange[0].Operation = "add"

        $vm = Get-View (Get-VM $VMname).Id
        $vm.ReconfigVM_Task($spec)
    }

    Replace-VM-RDM $scsiid $rdmname $vmname $filename

    I got it working; it just needed some reorganizing, and the DeviceName was wrong.

    Here's the working script.

    function Replace-VM-RDM {
        Param ($scsiid, $rdmname, $vmname, $filename)

        <#
        Convert the SCSI address to VMware's format
        #>
        $scsicontroller = $null
        $scsiid_split = $null
        $scsiid_split = $scsiid.Split(":")
        $scsicontroller = $scsiid_split[0]
        # The SCSI controller ID shown in the VMware GUI is one higher than the real controller id
        $scsicontroller = [int]$scsicontroller + 1
        # VMware expects a controller key with a 4-character id
        $scsicontroller = $scsicontroller.ToString() + "000"
        $scsicontroller

        # SCSI logical unit number
        $scsilun = $null
        # The SCSI LUN ID shown in the VMware GUI is one higher than the real lun id
        $scsilun = [int]($scsiid_split[1]) #+ 1

        ###
        $vm = Get-VM -Name $vmname | Get-View

        if (!($filename)) {
            $RDMFile = $rdmname.Split(" ")[0] + "_RDM.vmdk"
            $filename = ($vm.Config.Files.VmPathName).Replace("$vmname.vmx", "$RDMFile")
        }

        $esx = Get-View $vm.Runtime.Host

        <#
        Get the CanonicalName of the RDM LUN
        #>
        $rdmCanonicalName = $null
        $rdmCanonicalName = ($esx.Config.StorageDevice.ScsiLun | where {$_.DisplayName -eq $rdmname}).CanonicalName
        $rdmDevicePath = ($esx.Config.StorageDevice.ScsiLun | where {$_.DisplayName -eq $rdmname}).DevicePath

        foreach ($dev in $vm.Config.Hardware.Device) {
            if (($dev.GetType()).Name -eq "VirtualDisk") {
                if (($dev.Backing.CompatibilityMode -eq "physicalMode") -or
                    ($dev.Backing.CompatibilityMode -eq "virtualMode")) {
                    if (($dev.ControllerKey -eq "$scsicontroller") -and ($dev.UnitNumber -eq "$scsilun")) {
                        # Remove the hard disk
                        $hd = Get-HardDisk $vm.Name | where {$_.Filename -eq $dev.Backing.FileName}
                        $hd | Remove-HardDisk -Confirm:$false -DeletePermanently
                        Write-Host "File name:" $dev.Backing.FileName
                        Write-Host "Disk mode:" $dev.Backing.DiskMode
                        $dev.Backing.DeviceName
                        $dev.Backing.LunUuid
                        $DevKey = $dev.Key
                        $CapacityInKB = $dev.CapacityInKB
                        <# $fileMgr = Get-View (Get-View ServiceInstance).Content.FileManager
                        $datacenter = (Get-View (Get-VM $VMname | Get-Datacenter).Id).MoRef
                        foreach ($disk in $vm.LayoutEx.Disk) {
                            if ($disk.Key -eq $dev.Key) {
                                foreach ($chain in $disk.Chain) {
                                    foreach ($file in $chain.FileKey) {
                                        $name = $vm.LayoutEx.File[$file].Name
                                        $fileMgr.DeleteDatastoreFile_Task($name, $datacenter)
                                    }
                                }
                                continue
                            }
                        } #>
                    }
                }
                elseif (($dev.ControllerKey -eq "$scsicontroller") -and ($dev.UnitNumber -eq "$scsilun")) {
                    Write-Host "Selected SCSI address [$scsiid] is not an RDM"
                }
            }
        }

        #$hd1 = New-HardDisk -VM $vmname -DeviceName $rdmDevicePath -DiskType RawPhysical   # this line works

        $spec = $null
        $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
        $spec.DeviceChange = New-Object VMware.Vim.VirtualDeviceConfigSpec[] (1)
        $spec.DeviceChange[0] = New-Object VMware.Vim.VirtualDeviceConfigSpec
        $spec.DeviceChange[0].Operation = "add"
        # Create the vmdk file
        $spec.DeviceChange[0].FileOperation = "create"
        $spec.DeviceChange[0].Device = New-Object VMware.Vim.VirtualDisk
        $spec.DeviceChange[0].Device.Key = -100
        $spec.DeviceChange[0].Device.Backing = New-Object VMware.Vim.VirtualDiskRawDiskMappingVer1BackingInfo
        $spec.DeviceChange[0].Device.Backing.FileName = "$filename"
        $spec.DeviceChange[0].Device.Backing.DeviceName = ($esx.Config.StorageDevice.ScsiLun | where {$_.DisplayName -eq $rdmname}).DevicePath
        $spec.DeviceChange[0].Device.Backing.CompatibilityMode = "physicalMode"
        $spec.DeviceChange[0].Device.Backing.DiskMode = ""
        $spec.DeviceChange[0].Device.Connectable = New-Object VMware.Vim.VirtualDeviceConnectInfo
        $spec.DeviceChange[0].Device.Connectable.StartConnected = $true
        $spec.DeviceChange[0].Device.Connectable.AllowGuestControl = $false
        $spec.DeviceChange[0].Device.Connectable.Connected = $true
        # Device key of the SCSI controller
        $spec.DeviceChange[0].Device.ControllerKey = [int]$scsicontroller
        # UnitNumber 7 is reserved for the SCSI controller - skip past it
        if ($scsilun -eq 6) { $scsilun = $scsilun + 1 }
        # Use the next unit number for the hard disk
        $spec.DeviceChange[0].Device.UnitNumber = [int]$scsilun
        $spec.DeviceChange[0].Device.CapacityInKB = [int]$CapacityInKB

        $vm = Get-View (Get-VM $VMname).Id
        $vm.ReconfigVM($spec)
    }

    Replace-VM-RDM $scsiid $rdmname $vmname $filename

  • OK to delete the RDM mapping file?

    I need to cold-migrate some virtual machines with virtual mode RDMs to new servers. I'll present the RDMs with different LUN numbers on the target hosts. My co-worker suggests that when removing the RDM before the cold migration, you must not delete the mapping file, otherwise it will cause data corruption, so he chooses the 'remove from virtual machine' option (not 'remove and delete').

    My understanding is that the RDM file is just a mapping file and is safe to delete. My thinking is that deleting it saves space and speeds up the migration.

    Can RDM files simply be deleted without data corruption? Most VMware articles suggest they can, and that they're just mapping files.

    When the virtual machine lands on the new hosts, a new virtual RDM will be added once the LUN is presented to the new hosts.

    Yes, you can remove the RDM with the 'delete from disk' option before the cold migration.

    It won't cause any data corruption, because all the data is on the LUN itself.

    The RDM file contains only the mapping information for the LUN.

    Once you have completed the VM migration, you can add the RDM LUN back to the VM; just make sure that you attach the correct RDM LUN to the VM.

    See this KB for more information:

    VMware KB: Switching between physical and virtual compatibility modes in ESX/ESXi for a raw device mapping
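    In PowerCLI, the two removal options correspond to Remove-HardDisk with and without -DeletePermanently; a small sketch, assuming $vmName and that the RDM is "Hard disk 2":

    $rdm = Get-HardDisk -VM (Get-VM $vmName) | Where-Object {$_.Name -eq "Hard disk 2"}

    # 'Remove from virtual machine' - keeps the pointer VMDK on the datastore
    $rdm | Remove-HardDisk -Confirm:$false

    # 'Remove from virtual machine and delete files from disk' - also deletes
    # the pointer VMDK; the data on the raw LUN is untouched either way
    # $rdm | Remove-HardDisk -DeletePermanently -Confirm:$false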

  • How to find all the controller:target IDs used for RDM mappings on a virtual machine

    I want to get a list of all the target IDs used for RDM mappings on each controller.  Then, next time, I can use one of the remaining target IDs for the RDM disk mapping in my perl script.

    I checked findVMsWithRDMs.pl, but it only shows the target ID used for each RDM disk. I use ESXi 5.5 servers and the SDK for Perl.

    Thank you

    Celine

    You will have to list all the devices in config.hardware.device, and then find those with a raw disk backing type (to select the RDMs).  Then look at the disk's unitNumber.  Then use its controllerKey value to find the controller in config.hardware.device with the matching key, and get that controller's busNumber.  The SCSI ID is then busNumber:unitNumber.

    You'll probably want to collect all the SCSI IDs that are in use (using a hash keyed on busNumber:unitNumber, for example).  Then you can quickly find the first available one.  That's how I generally find a free SCSI ID when adding a new disk (including RDMs).
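    The same walk can be expressed in PowerCLI as well as perl; a sketch of the device enumeration described above ($vmName is an assumption):

    $vmView = Get-VM $vmName | Get-View
    $devices = $vmView.Config.Hardware.Device

    # Map each SCSI controller key to its bus number
    $busByKey = @{}
    $devices | Where-Object {$_ -is [VMware.Vim.VirtualSCSIController]} |
        ForEach-Object { $busByKey[$_.Key] = $_.BusNumber }

    # Record busNumber:unitNumber for every disk with a raw disk mapping backing
    $usedIds = @{}
    $devices |
        Where-Object {$_.Backing -is [VMware.Vim.VirtualDiskRawDiskMappingVer1BackingInfo]} |
        ForEach-Object {
            $usedIds["{0}:{1}" -f $busByKey[$_.ControllerKey], $_.UnitNumber] = $true
        }

    $usedIds.Keys   # SCSI IDs currently used by RDMs; pick a free one for the next disk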

  • Cleaning up RDM mapping files

    I just want to get clarification.  I removed some RDMs from a virtual machine tonight, but instead of 'remove and delete', I just went with removing the RDM.  I'm now left with some of the mapping files and was curious to know whether it's OK to just delete them at this point?

    http://monosnap.com/image/Hc4AzgXHXET9PBy79UYUZmaRn.png

    Hello

    Deleting the leftover RDM mapping files will not cause any problem. When you add the RDM back, the mapping file will be created again. So it is safe.

    Regards

    Mohammed

  • RDM mapping file size?

    So I've attached a LUN to a virtual machine that lives in a 180 GB datastore.

    The LUN is 558 GB in size... and it has created a mapping file stored with the virtual machine on the datastore.

    When I browse the datastore, it says the vmdk file is 558 GB... larger than the actual datastore?

    Is that a mistake, or is that not the actual size of the mapping vmdk file?

    Probably it is just a file that reports the size of the LUN it points to...?  Which would make sense, because otherwise I don't understand what space would be freed by deleting a mapping vmdk file from the datastore.

    How much space do the actual mapping files themselves take up in the datastore?

    Advice appreciated.

    With an RDM attached to a virtual machine, you will see a VMDK file as big as the LUN on the VMFS datastore (in the vSphere client), but that space is not actually allocated from the VMFS datastore.

    Logging in to the console, you will see two files for the mounted RDM:

    VirtualDisk-rdmp.vmdk - size 0 bytes
    VirtualDisk.vmdk - size 64 bytes

  • How to properly map an NTFS disk via RDM

    Hello!

    I am facing the following task - I need to map an NTFS disk from P6300 EVA storage.

    The zone was created for the host, the EVA host entry was created, and a disk was presented to it. After that, I mapped this disk via "Add HDD - RDM - ...". That was done successfully.

    The virtual server (Windows 2012 x64) sees this disk, but I can't access it - the file system shows as RAW, not NTFS. I cannot format this disk - it holds a lot of databases, which are in use.

    If I try to open the disk I get the error "No access. The required resource is busy."

    So, how can I properly map an NTFS disk via RDM?

    NTFS (leaving aside CSV and the applications that support it) is not a clustered file system and was never designed for simultaneous access from several sources. Uncontrolled access like this is likely to cause data corruption.

    The physical SQL host probably holds SCSI locks as well, which completely block access; fortunately Windows seems smart enough not to try to force-override the SCSI lock.

    So yes, unless you stop the other host or remove the LUN from it, you cannot access the disk. A separate copy/snapshot of the LUN should work fine, though.

  • Re-mapping an MSCS quorum disk RDM

    I have a problem with an Exchange 2003 MSCS cluster on VMware.  VM1 (active), VM2 (passive).

    VM2 powered off and does not come back up, with the error:

    Virtual disk 'X' is a direct access mapped LUN which is not accessible

    I've referenced this KB: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1016210

    and contacted VMware support.

    We discovered that the vml identifier in the RDM mapping file is completely different from the vml on all 4 hosts for that RDM - in fact, the vml in the RDM mapping file does not exist on any of the hosts.

    VM2 is currently offline and its RDM mapping has been deleted, but they advised the following steps:

    1. Power off both nodes.

    2. On node 1, note the SCSI controller that is used for the RDM (disk 2).

    3. Remove the RDM from the virtual machine and delete the disk (this removes only the pointer file to the RDM; it will not erase the data).

    4. Add the RDM to node 01, using the same SCSI controller (which I think is SCSI 1:1) and physical bus sharing.

    5. Add the RDM to node 02, but this time add the LUN as an existing disk and use the pointer file created in step 4.

    6. Power on node 1.

    7. Power on node 2.

    I can see the logic, but I'm worried because this RDM is the quorum drive for the cluster, so if there are any issues/differences when VM1 starts up with the new mapping, then the cluster will not start and we will lose Exchange. (The other resources in the cluster are NetApp iSCSI LUNs connected with SnapDrive from inside the virtual machine.)

    Has anybody done something similar?

    Would taking a copy of the VM directory before doing any unmap or remove on VM1 provide a roll-back option?

    Any advice appreciated,

    GAV

    I have done something similar, and the process they describe is what you need to do.

    I have also done it without deleting the pointer on the first VM, but deleting the pointer won't matter.

    The disk signature is on the raw LUN itself, not in the pointer.
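    For what it's worth, steps 4 and 5 can also be scripted; a hedged PowerCLI sketch, assuming $node1/$node2 are the two cluster VMs and $deviceName is the quorum LUN's console device path (the cmdlets are standard, the names are illustrative):

    # Step 4: create a fresh RDM (and pointer file) on node 1
    $rdm = New-HardDisk -VM $node1 -DiskType RawPhysical -DeviceName $deviceName

    # Make sure the controller it landed on uses physical bus sharing
    Get-ScsiController -HardDisk $rdm | Set-ScsiController -BusSharingMode Physical

    # Step 5: attach the pointer file created above to node 2 as an existing disk
    New-HardDisk -VM $node2 -DiskPath $rdm.Filename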

  • Will renaming the RDM mapping datastore impact the VMs?

    Hello

    We have 5 ESX 4.1 hosts and a vCenter 4.1. We have configured an NFS datastore (NetApp), presented it to all 5 ESX hosts, and all the virtual machines run on the NFS datastore. We have also configured a SAN datastore named 'RDM MAP' for the RDM mapping files; for the other VMs we used RDM LUNs, and all the RDM LUN mapping files are stored in the 'RDM MAP' datastore. I want to rename 'RDM MAP' to 'RDM_MAP_Datastore'. If I rename the mapping datastore, will it affect my running virtual machines?

    PramodKhalate wrote:

    Yes, I rename the datastore in vCenter... not at the storage level...

    No problem then.

    I've done this several times.
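    For reference, the rename itself is a one-liner in PowerCLI (a sketch using the names from the question; as noted above, this renames the label in vCenter, not anything at the storage level):

    Get-Datastore -Name 'RDM MAP' | Set-Datastore -Name 'RDM_MAP_Datastore'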

  • RAW Device Mapping (RDM) Questions

    The scenario (requirements):

    1 VM that will act as a file share.  It will need an operating system drive and a 200 GB data drive.  It is possible that it may require expansion in the future.

    3 VMs that will act as XenApp servers.  They will require 80 GB of space each (as a single disk).

    We wonder if RDM would be a good option for any or all of these servers.  We have used only VMFS up to this point, but are interested in the benefits of RDM.

    Questions:

    1. I've read that RDM improves I/O on systems requiring high I/O (e.g., SQL), but I don't see any advantage for a plain file share, right?  How about the XenApp servers?

    2. An RDM must be added as a full LUN, correct?  In other words, when I create a LUN on the storage (Dell MD3000i), it will be presented to the ESX host, and when I add it to the virtual machine my only choice is to map the whole LUN as an RDM?  Ideally, I would like to create one whole disk group on the MD3000i and carve it up.  So I should create 4 LUNs (1 @ 200 GB and 3 @ 80 GB)?

    3. How would the RDM file-share data drive be expanded, if I have to?

    Thank you.

    1. I've read that RDM improves I/O on systems requiring high I/O (e.g., SQL), but I don't see any advantage for a plain file share, right? How about the XenApp servers? - Meh, I would go with VMFS for the file server; no real benefit from an RDM here. I can't really speak to the XenApp systems, but unless the app vendor says otherwise I would stick with VMFS.

    2. An RDM must be added as a full LUN, correct? - Full LUN, yes.

    3. How would the RDM file-share data drive be expanded, if I have to? - IIRC you expand the LUN on the storage, power off the guest, and re-create the RDM mapping. Start the virtual machine and the guest should see the new space.
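    A hedged PowerCLI sketch of that re-create step, after the LUN has been grown on the array ($vmName and $canonicalName are assumptions; only the pointer file is deleted, so the data on the raw LUN survives):

    $vm = Get-VM $vmName

    # Drop the old pointer file; the data on the raw LUN is untouched
    Get-HardDisk -VM $vm -DiskType RawPhysical |
        Where-Object {$_.ScsiCanonicalName -eq $canonicalName} |
        Remove-HardDisk -DeletePermanently -Confirm:$false

    # Rescan so the host sees the new LUN size, then re-create the mapping
    $esxHost = Get-VMHost -VM $vm
    Get-VMHostStorage -VMHost $esxHost -RescanAllHba | Out-Null
    $deviceName = (Get-ScsiLun -VmHost $esxHost -CanonicalName $canonicalName).ConsoleDeviceName
    New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName $deviceName | Out-Null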

  • Virtual RDM: mapping file size in the VM's VMFS folder

    Hello

    I just migrated an iSCSI LUN, previously managed by the MS iSCSI initiator (running inside a W2K03 virtual machine), to the ESX software iSCSI initiator, preserving the data.

    I used RDM (raw device mapping) and I had to use 'virtual' mode, because 'physical' hung the VM at OS startup.

    The iSCSI LUN is 1 TB, with about 60 GB used.

    I noticed that the mapping file on the VMFS volume for the virtual machine is about 2 GB: will it grow when more space is used on the iSCSI LUN? And if it grows, is there a rule to calculate by how much?

    I'm worried I might risk running out of space on the VMFS volume because of the RDM mapping file...

    Thank you

    Guido

    It's just a pointer to the original RDM disk.  I have a few MS Clusters in my setup; for an 18 GB RDM LUN it creates a pointer of about 100 MB.

    Thank you

  • RDM missing after reboot.

    Hi all

    Due to a hardware migration, we had to shut down all our ESXi 5.1 hosts (in maintenance mode) and their VMs (configured with RDMs). After the hardware migration, we powered all the ESXi hosts back on.

    This is where the problem starts: when we try to power on the VM (Linux OS) that is mapped with the RDM, it throws an error that the mapped storage LUN is missing, and we cannot power on the VM.

    To work around it, we removed the RDM (Hard disk 2) from the VM and powered the machine on successfully.  But after that, when we try to map the RDM manually in vCenter, the RDM option for the virtual machine appears to be greyed out.

    The logical unit number that should be mapped is visible to ESXi, but we are not able to map it. We have rescanned, detached and re-attached the LUN, but the RDM option is still greyed out, and that is why we cannot create the RDM for this virtual machine.

    Can someone please guide us to get this problem resolved...!

    Thanks in advance...!

    Kind regards

    Subash.

    When your storage came back up, could you see the drives?

  • Storage vMotion RDM to VMDK

    When I try to Storage vMotion a virtual machine with a physical RDM attached to it, I want to convert it to a VMDK, but I do not see the option. I did svMotion the VMX and the OS VMDK file; it also moved the RDM mapping file. How do I convert it to a VMDK?

    Remove it, re-add it as a virtual RDM, and then you can Storage vMotion it to a VMDK.

    VMware Converter should work too.

    Here's how to convert your physical RDM to a virtual RDM if you need to:

    http://sparrowangelstechnology.blogspot.com/2012/09/convert-your-physical-RDM-RDMP-to.html

    Here's how to convert your virtual RDM to a VMDK using Storage vMotion:

    http://sparrowangelstechnology.blogspot.com/2012/09/SVMotion-RDM-to-VMDK-virtual-disk.html
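    A hedged PowerCLI sketch of that remove/re-add/svMotion sequence ($vmName, $deviceName and $targetDatastore are assumptions; passing -DiskStorageFormat to Move-VM is what converts the virtual RDM to a flat VMDK during the move):

    $vm = Get-VM $vmName

    # Remove the physical-mode RDM (pointer only; the LUN data stays)
    Get-HardDisk -VM $vm -DiskType RawPhysical | Remove-HardDisk -Confirm:$false

    # Re-add the same LUN in virtual compatibility mode
    New-HardDisk -VM $vm -DiskType RawVirtual -DeviceName $deviceName | Out-Null

    # Storage vMotion with an explicit disk format converts the virtual RDM to a VMDK
    Move-VM -VM $vm -Datastore (Get-Datastore $targetDatastore) -DiskStorageFormat Thick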
