Replicated LUNs and RDM

Hello

We have 2 nodes and 2 storage arrays in the same place. Storage 1 is replicated to storage 2. Initially we created 4 LUNs on storage 1 and added them as RDMs to the 2 nodes. If storage 1 goes down or is disconnected, we need to add the 2 replicated LUNs from storage 2 to the 2 nodes. When the system runs on storage 1, the LUN numbers are 100 and 101; on storage 2, the LUN numbers are 110 and 111. We need to do this using PowerCLI.

Any help would be greatly appreciated.

LucD, I have read your amazing solutions/responses; would you please help me? Thank you.

My mistake - I was thinking of the VIProperties module; there is already a lunID property for ScsiLun in there.

You can use the following instead:

New-VIProperty -Name lunID -ObjectType ScsiLun -Value {
    param($lun)

    [int](Select-String ":L(?<lunID>\d+)$" -InputObject $lun.RuntimeName).Matches[0].Groups['lunID'].Value
} -Force
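
To answer the original question, a rough sketch that builds on the lunID property defined above; the node VM names and LUN numbers below are placeholders for your environment, and the LUNs are assumed to have just been presented from storage 2:

$lunIds = 110, 111                      # replicated LUN numbers on storage 2
$vms    = Get-VM -Name 'node1', 'node2' # placeholder names for the 2 nodes

foreach ($vm in $vms) {
    # Rescan so the newly presented LUNs become visible to the host
    Get-VMHostStorage -VMHost $vm.VMHost -RescanAllHba | Out-Null

    # Find the SCSI LUNs whose LUN number matches the replicated IDs
    $luns = Get-ScsiLun -VmHost $vm.VMHost -LunType disk |
            Where-Object { $lunIds -contains $_.lunID }

    foreach ($lun in $luns) {
        # Attach each replicated LUN as a physical-mode RDM
        New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName $lun.ConsoleDeviceName
    }
}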

Tags: VMware

Similar Questions

  • I have a VM on SRDF-replicated LUNs and can't seem to get the R2 volume mounted

    I am doing a DR test of a virtual machine that sits on SRDF-based storage. We split the SRDF pair and try to access the R2 LUN, but we cannot see the file system. I can see the LUN in the storage adapter's LUN list per HBA, but it does not appear in the VMFS volumes. I rescanned several times for new LUNs and VMFS volumes, and it still does not appear. The R2 appears read-write from the SAN's perspective... Anyone have any ideas?

    If you present a copy or clone of an existing LUN to another ESX host, then you have 2 options. You can set LVM.DisallowSnapshotLUN = 0 and then rescan; you will see the LUN.

    If you present the cloned LUN to the same ESX host, use LVM.EnableResignature = 1, which will resignature the cloned LUN on the next rescan. Do NOT use the DisallowSnapshotLUN setting on the same ESX host, as it will probably damage your data.

    When you have finished, set LVM back to the default settings, i.e. EnableResignature = 0 and DisallowSnapshotLUN = 1. These settings are under Host > Configuration > Advanced Settings.
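
    If you prefer to flip these settings from PowerCLI rather than the UI, a hedged sketch using the classic Set-VMHostAdvancedConfiguration cmdlet (the host name is a placeholder):

    $esx = Get-VMHost -Name 'esx01.example.com'

    # Allow resignaturing of the cloned LUN, then rescan
    Set-VMHostAdvancedConfiguration -VMHost $esx -Name 'LVM.EnableResignature' -Value 1
    Get-VMHostStorage -VMHost $esx -RescanAllHba

    # Afterwards, revert to the defaults mentioned above
    Set-VMHostAdvancedConfiguration -VMHost $esx -Name 'LVM.EnableResignature' -Value 0
    Set-VMHostAdvancedConfiguration -VMHost $esx -Name 'LVM.DisallowSnapshotLUN' -Value 1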

  • SRM protected LUN and RecoverPoint replicated LUN

    We run VMware SRM 5.0 and RecoverPoint 3.4 SP2, on ESXi 5.0 and vCenter 5.0. The answers we get from VMware SRM support and EMC RP support conflict with each other.  Our SRM failover failed because the replicated DR LUNs were not detected during the HBA rescan.  VMware support said that the replicated LUN does not have to be added to the VMware storage group on our (DR) VNX5300; the LUN is added to the RP storage group, and SRM/RP will present the LUN to VMware as part of the SRM failover process.  EMC tells us that the replica LUN must be present in both the VMware and the RP storage groups.

    The failover process did not finish until we added the replicated LUNs to the VMware storage group.  However, we have previously tested other SRM-protected LUNs and this has never been necessary.  Can anyone provide specific details on the RecoverPoint requirement to add replicated LUNs to the VMware storage group?

    Thank you

    EMC is correct!  The replication target LUNs must be in a storage group with your ESXi hosts on the VNX for a test to run successfully.  I usually create a storage group for a VMware cluster, add the replication target LUNs, and leave them there.  Of course, the storage group for all RecoverPoint data, journal, and repository LUNs must also exist with all the RPAs (RecoverPoint Appliances).

  • SRM 5 - NFS/VMFS and RDM

    3 x vSphere 5 Update 1 hosts

    SRM 5

    NetApp: PROD = FAS3240 cluster (8.0.2), DR = single FAS2040

    Our virtual cluster uses NFS datastores. We hired an engineer to come in and deploy SnapManager for SQL and SharePoint. During this deployment they set up our servers (SQL and SharePoint farm) with 4 RDMs. During the installation we spun up a VMFS (iSCSI) datastore to store the RDM mapping files on. Everything was fine.

    Recently we began a deployment of SRM 5. I was told that RDMs are managed natively by SRM during the recovery process. It turns out it is and it isn't. With guests on NFS datastores and RDM mappings on a separate datastore, we were not able to add a guest to a protection group without seeing errors, because the RDMs come back with "not replicated" status, even though we can see the SnapMirror relationship for the datastore AND the RDM mapping in the array manager view. From what I've read, for SRM to properly manage a guest with RDMs, the RDMs must be on the SAME datastore as the guest, along with the RDM mapping files. Since SnapDrive will not allow you to store those mapping files on an NFS datastore, we could not accomplish a fully automated SRM recovery without custom scripts driving SnapDrive and SQL CLI commands.

    So here's my question. What are the pros and cons of running a mixed set of datastores? Through testing, I set up a VMFS (iSCSI) datastore with a test guest. I then attached 2 RDMs to it in this datastore and stored the RDM mapping files there as well. Created a protection group without problem. Recovery went through without a problem.

    Our current facility is:

    SQLSERVER1:

    Guest = NFS Datastore 1

    RDM1 = NetApp Volume 1

    RDM2 = NetApp Volume 2

    RDM3 = NetApp Volume 3

    RDM4 = NetApp Volume 4

    RDM mapping files = NetApp VMFS Volume 1

    Proposed configuration:

    SQLSERVER1

    Guest = VMFS Datastore 1

    RDM1 = VMFS Datastore 1/Qt1/Lun1

    RDM2 = VMFS Datastore 1/Qt2/Lun1

    RDM3 = VMFS Datastore 1/Qt3/Lun1

    RDM4 = VMFS Datastore 1/Qt4/Lun1

    * qt = qtree

    In this proposed example, SnapMirror would be at the qtree level versus the volume level as it is done now.

    All advice is appreciated.

    You can create a VOL/LUN (VMFS) for your guests. Storage vMotion the guest to the new VMFS datastore. If you wish, you can remap the RDM pointers to live on this same VMFS datastore. This is the approach I took for this scenario.

    Do you do your SnapMirrors at the volume level or the qtree level? Best practice is to create a single VOL per LUN.

    Proposed configuration:

    SQLSERVER1

    Guest = VMFS Datastore 1 (Storage vMotion)

    RDM1 = pointer file on VMFS Datastore 1, RDM location = /VOL1/Lun1

    RDM2 = pointer file on VMFS Datastore 1, RDM location = /VOL2/Lun1

    RDM3 = pointer file on VMFS Datastore 1, RDM location = /VOL3/Lun1

    RDM4 = pointer file on VMFS Datastore 1, RDM location = /VOL4/Lun1

  • VMFS-3 on ESXi 5 and RDM

    Hi, I have two ESXi 4.1 hosts and I just introduced an ESXi 5 host into the mix (iSCSI SAN).  All hosts can see all LUNs.  I wanted to test presenting a new LUN larger than 2 TB as a physical RDM to an existing VM, but when I try to do so, I get the message 'LUN mappings with a capacity greater than 2 TB can be stored only on VMFS5 datastores.'

    1. Should I first upgrade the ESXi 4.1 hosts?
    2. Will the upgrade convert the iSCSI VMFS-3 datastores to VMFS-5?
    3. Will it hurt anything to have VMFS-3 datastores presented to an ESXi 5 host?
    4. Just to clarify, physical RDM disks over 2 TB are now supported in ESXi 5 - correct?

    Thank you for your help.  I'm about to buy VMware Essentials, by the way.

    Should I first upgrade the ESXi 4.1 hosts?

    - No - you can do it with the existing configuration, but the virtual machine will then be limited to just the ESXi 5 host.  You will need to create a new LUN, present it to the ESXi 5 host, and create a VMFS5 datastore on it (see the PowerCLI sketch after these answers).

    Will the upgrade convert the iSCSI VMFS-3 datastores to VMFS-5?

    - No - the upgrade is a manual step that you perform on the datastore.

    Will it hurt anything to have VMFS-3 datastores presented to an ESXi 5 host?

    - There is no problem doing this.   If the virtual machine is hardware version 7, you can run it on ESXi 4 or 5.  Once you upgrade the virtual machine's hardware version, it can only run on the ESXi 5 host.

    Just to clarify, physical RDM disks over 2 TB are now supported in ESXi 5 - correct?

    - Yes - physical RDMs and datastores can exceed 2 TB.   Virtual disks and virtual RDMs are still limited to 2 TB.
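
    As promised above, a minimal PowerCLI sketch for creating the new VMFS-5 datastore, assuming PowerCLI 5.x; the host name and the LUN's canonical name are placeholders:

    $esx = Get-VMHost -Name 'esxi5.example.com'

    # Pick the newly presented LUN by its canonical name (placeholder value)
    $lun = Get-ScsiLun -VmHost $esx -LunType disk |
           Where-Object { $_.CanonicalName -eq 'naa.600000000000000000000001' }

    # Create a VMFS-5 datastore on it; VMFS-5 is required for RDM pointers to >2 TB LUNs
    New-Datastore -VMHost $esx -Name 'VMFS5-DS01' -Path $lun.CanonicalName -Vmfs -FileSystemVersion 5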

  • For the recovery site - new or replicated LUN datastores?

    I am implementing SRM 4.1, and so far the process has been very straightforward.  However I'm confused about something, and that's which datastores should be used at the recovery site.  My array replicates LUNs onto matched LUN pairs.  Should I map the recovery-site ESXi hosts to the paired LUNs that are exact replicas of the protected site's datastores?  Or should I create new LUNs and new datastores, and then use those LUNs when creating protection groups?

    I may have missed something in the docs, but I couldn't find the answer to this question after searching various sites and sources.  What have others done with their SRM configurations?  Thank you!

    SRM, in conjunction with your SAN's storage replication adapter, will do this for you during site pairing and setup.

    Basically, the only thing you need to do is zoning on the FC switches, or VLANs on the Ethernet switches, depending on whether you use FC or IP storage, so that the hosts at the DR site can reach the storage device.

    SRM together with the SRAs will take care of the rest.

    WBR

    Imants

  • Storage vMotion and RDM

    I have a question for you about Storage vMotion. I have a virtual machine with a vmdk file and an RDM. I want to Storage vMotion the vmdk file. There is also a pointer file associated with the RDM. How does this pointer file get reassociated with the RDM? I'm evacuating small LUNs and moving virtual machines to the larger LUN.

    On pages 199 and 201 of the Basic ESX Administrator's Guide, it states that if I select "Same as Source" for an RDM, only the mapping file will be migrated. Have you done this before? I just wanted to see if anyone in the field has experience with this process.

    7. If you chose to move the virtual machine configuration file and virtual disks, select a disk format, and then click Next.

    Description of the options:

    Same as Source - Uses the format of the original virtual disk. If you select this option for an RDM in physical or virtual compatibility mode, only the mapping file is migrated.

    Thin provisioned - Uses the thin format to save storage space. The thin virtual disk uses only as much storage space as it needs for its initial operations. When the virtual disk needs more space, it can grow up to its maximum allocated capacity. This option is not available for RDMs in physical compatibility mode. If you select it for an RDM in virtual compatibility mode, the RDM is converted to a virtual disk. RDMs converted to virtual disks cannot be converted back into RDMs.

    Thick - Allocates a fixed amount of disk space for the virtual disk. A virtual disk in thick format does not change its size, and from the beginning occupies the entire datastore space provisioned to it. This option is not available for RDMs in physical compatibility mode. If you select it for an RDM in virtual compatibility mode, the RDM is converted to a virtual disk. RDMs converted to virtual disks cannot be converted back into RDMs.

    Thank you

    Savoy6

    Hi Savoy6,

    When you Storage vMotion a virtual machine with an RDM disk, the pointer file is re-created in the new location and still points to the raw LUN.
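
    In PowerCLI terms, a minimal sketch (the VM and datastore names are placeholders); the mapping file moves while the raw LUN stays put:

    $vm = Get-VM -Name 'Savoy6-VM'          # placeholder VM name
    $ds = Get-Datastore -Name 'BigLUN-DS'   # placeholder target datastore

    # Storage vMotion; the RDM mapping file is re-created on the target
    # datastore and continues to point at the same raw LUN
    Move-VM -VM $vm -Datastore $ds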

    Take a look at this kb for more information.

    Hope it helps.

    Regards,

    Franck

  • Storage vMotion a VM with both vmdk and RDM disks to NFS

    I am trying to Storage vMotion a virtual machine that has both vmdk and RDM disks to an NFS datastore.

    I do NOT want to convert the RDM disks; I just want the pointer files moved along with the VMDK files.

    Is this possible?

    If I try this svmotion hot, I get an error on the RDM stating: "Virtual disk 'Hard disk 3' is a mapped direct-access LUN that is not supported on the datastore 'myEMCNFSds'."

    If I try this svmotion cold, I get no warning, but it seems that svmotion is trying to convert the RDM to a disk on the NFS datastore.

    Is your RDM in physical compatibility mode?   Check out this KB article: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1001856
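
    A quick, hedged way to check the compatibility mode from PowerCLI (the VM name is a placeholder); DiskType comes back as RawPhysical, RawVirtual, or Flat:

    Get-HardDisk -VM (Get-VM -Name 'MyVM') |
        Select-Object Name, DiskType, ScsiCanonicalName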

    If you have found this or any other post useful, please consider using the helpful/correct buttons to award points.

  • How a virtual machine sees the LUN and its storage

    I am new to this, so I apologize for the stupid question.

    If I have an Exchange CCR cluster

    1 server is physical, 1 is in a VM

    I understand that you need to map the LUN in physical compatibility mode on the virtual machine side.

    But what I do not understand is how the VM can read what is on that LUN.  I thought a virtual machine's storage had to be VMFS.

    For example, take the quorum drive: it is formatted NTFS, so how can the VM read what is on this drive?

    Thank you

    Hello

    The VMFS volume holds the metadata (the RDM pointer file), but the actual data is on the raw LUN, and the OS inside the virtual machine recognizes it as a normal drive.

    Thank you

    Samir

    PS: If you think the answer is useful, please consider awarding points.

  • Is it possible to use Storage vMotion with VMFS disks and virtual RDMs?

    Hello!

    Can you use the Storage vMotion tool to move a server that has both VMFS disks and virtual-mode RDM drives inside?

    Both SANs are EMC, and both can be reached from VMware ESX and VirtualCenter.

    Thank you!

    For storage on the new SAN, the two hosts will need access to the storage. You could then cold migrate the virtual machine. First, you will need to stop the virtual machine and remove the RDM mapping. Once the virtual machine is on the new datastore, re-create the RDM mapping (both hosts will need access to the backend LUN). A sketch of the procedure follows.
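
    A hedged PowerCLI sketch of that procedure, assuming a virtual-mode RDM and placeholder names; the raw LUN must be visible to both hosts:

    $vm = Get-VM -Name 'MyVM'
    Stop-VMGuest -VM $vm -Confirm:$false          # wait for power-off before continuing

    # Record the backing LUN, then remove the RDM mapping (the raw LUN is untouched)
    $rdm = Get-HardDisk -VM $vm -DiskType RawVirtual
    $canonical = $rdm.ScsiCanonicalName
    Remove-HardDisk -HardDisk $rdm -Confirm:$false

    # Cold-migrate the VM, then re-create the RDM mapping on the new datastore
    Move-VM -VM $vm -Datastore (Get-Datastore -Name 'NewVMFS')
    $vm  = Get-VM -Name 'MyVM'                    # refresh the VM object after the move
    $lun = Get-ScsiLun -VmHost $vm.VMHost -CanonicalName $canonical
    New-HardDisk -VM $vm -DiskType RawVirtual -DeviceName $lun.ConsoleDeviceName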

  • Problems with an external monitor with the Port Replicator III and M100

    Hey all.

    I just returned from a trip where I took my laptop. I plugged it into the port replicator, started it up, and everything was fine. Then after an hour or two, the monitor stopped receiving a signal. I repositioned the laptop and everything was good again. It then lost the signal once again, and I have not been able to get a signal via the port replicator since.

    The monitor works fine through the port on the rear of the M100.

    I checked the pins on the port replicator and the laptop; none are bent, and I tried to clean out any dust, etc. Still no signal to the monitor.

    Does anyone have ideas on how to solve this problem?

    Jamo

    Hello

    The Toshiba Mobile Extension utility is responsible for the proper working of the Port Replicator III. Try reinstalling this utility, and also check the Toshiba driver page for updates.

  • What is LUN zoning and LUN masking

    Can someone explain LUN zoning and masking, how they work, and multipathing?

    Zoning - if you want to specify that only certain hosts can access a storage device, then you would set up zoning.

    This configuration is done on the Fibre Channel switch.  iSCSI, NFS, and FCoE can also be segmented, but they would use typical TCP/IP segmentation methods such as VLANs.

    There are two types of zoning techniques: hard zoning and soft zoning.

    Soft zoning filters one device from another device.  However, if the ports are manually set up, the switch will not stop the devices from communicating.  By comparison, hard zoning prevents a port from sending traffic to another port and is safer.

    Zoning can also be set up based on the port or on the WWN (World Wide Name).    Port zoning grants access from a port on a switch to a different port on a switch.  This requires physical security around the fibre switch, because the zones could be changed simply by moving a cable on the switch.  It also makes management more of a struggle if the switches must be moved or re-cabled.  WWN zoning is configured to allow access between two WWNs, which makes management a little easier, but it is also susceptible to WWN spoofing, which could allow access to the storage device.

    Masking-

    LUN masking is a process that makes a LUN available to some hosts and unavailable to other hosts.

    LUN masking is implemented mainly at the host bus adapter (HBA) level. The security benefits of LUN masking implemented at the HBA are limited, because with many host bus adapters it is possible to forge source addresses (WWNs/MACs/IPs) and compromise access. Many storage controllers also support LUN masking. When LUN masking is implemented at the storage controller level, the controller applies access policies to the device, so it is safer. However, it is applied primarily not as a security measure in itself, but rather as protection against misbehaving servers that could corrupt disks belonging to other servers. For example, Windows servers connected to a SAN will, under certain conditions, corrupt non-Windows (Unix, Linux, and NetWare) volumes on the SAN by attempting to write Windows volume labels to them. By hiding the other LUNs from the Windows server, this can be avoided, because the Windows server does not realize the other LUNs exist.

  • Resizing a LUN and an ASM disk/diskgroup

    Happy new year to all.

    I have a little problem resizing an ASM diskgroup, and maybe someone here can help me.

    I had a LUN of size 50G and added it as an ASM disk called LUN1D1.
    Then I created a diskgroup called DATA1 (external redundancy) containing only LUN1D1.
    An ASM volume was then created on DATA1. So I have an ASM volume of size 50G and I need 100G.

    Now I have increased the size of the LUN by 50G (100G in total). I rescanned the LUN and the OS sees the new size.

    The problem is that I can't resize the diskgroup DATA1 or the disk LUN1D1.

    Oracle Database 11.2.0.3.0 x64

    KFOD reads the values from the OS, so something is wrong at the OS level and it does not recognize the new values.

    Did you follow these steps?
    http://www.Novell.com/support/kb/doc.php?id=7009660

  • 2 TB LUN and extent clarification

    I need to provision several VMs in my client environment with 2 TB vmdk files, but I also want to be able to take snapshots of these servers.

    Because of this, I hope the community can clarify a few points for me by correcting any of my points/assumptions:

    (1) VMFS datastores can be a maximum of 64 TB in size.

    (2) VMDKs can be up to 2 TB in size.

    (3) Growing a VMFS datastore and its LUN requires empty/free space at the end of the disk.

    (4) Creating a VMFS datastore larger than 2 TB requires creating multiple LUNs, presenting them to ESX(i), and then linking them together using extents.

    (5) Extending a LUN/VMFS beyond 2048 GB is bad news and can cause data corruption.

    So in my situation, it seems that I need to create a 2 TB LUN and a second LUN of, say, 300 GB, then link them together using an extent to create a VMFS datastore that spans the two LUNs with a total of 2.3 TB of space. Then I can create my 2 TB VMDK and still be able to take snapshots.

    Is there another or better way to get my result?

    Are there concerns with using extents? What is the trade-off?

    Any other comments would be welcome.

    Cheers,

    Paul

    What is your process for keeping the virtual machine's configuration files and its virtual disk files on separate datastores?

    Nothing special. When you create the VM, select the first datastore. Then, when you create the virtual disk in the wizard, just select the second datastore (a short sketch follows).
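
    A minimal PowerCLI sketch of the same idea; all names are placeholders, and note that a single VMDK tops out just under 2 TB, hence 2040 GB here:

    # VM configuration files land on the first datastore
    $vm = New-VM -Name 'BigVM' -VMHost (Get-VMHost -Name 'esx01.example.com') `
                 -Datastore (Get-Datastore -Name 'DS-Config')

    # The large virtual disk goes on the second (extent-backed) datastore
    New-HardDisk -VM $vm -CapacityGB 2040 -Datastore (Get-Datastore -Name 'DS-2TB')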

    What is the penalty or drawback to using extents?

    They're just not necessary anymore. Extents had to be used prior to ESX(i) 4.0 to resize a datastore. With ESX(i) 4.0, it is now possible to resize/grow a datastore on the fly after resizing the underlying LUN, without needing extents. With extents, you lose that option. Basically, an extent-based datastore is like a RAID 0 array.

    You can also read http://www.yellow-bricks.com/2009/03/26/resizing-your-vmfs-the-right-way-exploring-the-next-version-of-esxvcenter/

    André

  • ESXi 3.5 U4 - max LUN size and problems on HP MSA70 SAS storage

    A few questions...

    What is the maximum LUN size ESX 3.5 U4 can see/use? I ask because a colleague has a 3 TB LUN and for some reason ESX does not see it.

    Has anyone used an MSA70 with 25 x 147 GB SAS disks on ESXi U4 as above? We are having problems with it seeing the big LUN. We created the LUN via the HP Array Configuration Utility booted from the SmartStart CD, and that went fine. The utility shows that the controller sees 1 managed LUN on the disks. But once ESXi is installed, it does not see the big LUN. Is there a maximum limit?

    Thank you

    2 TB is the max.

    http://www.VMware.com/PDF/vi3_35/esx_3/r35u2/vi3_35_25_u2_config_max.PDF
