SAN storage configuration recommendations

We have an HP SAN with two fibre switches and an MSA2012FC storage enclosure with twelve 146 GB drives.  I am trying to decide how to configure storage for our VI3 environment and am seeking recommendations.

At this point, I am considering two RAID 10 arrays (six 146 GB disks each) with either one (220 GB) or two (440 GB) LUNs defined per RAID array.  The virtual machines are set up in an HA configuration, so I think two separate RAID arrays would be better than one.  This way I can split the HA pairs between the RAID arrays.  If I create a single RAID 10 array and more than one disk fails in that array, we could lose everything.  In the future we will add another MSA2000 enclosure, which will give us another twelve 146 GB drives.  At that point we can expand each RAID 10 array to twelve 146 GB drives to give us more storage space and performance.
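As a rough sanity check on those numbers (my own back-of-the-envelope sketch, not from the array documentation): RAID 10 mirrors half the spindles, so a 6-disk array of 146 GB drives gives roughly 438 GB of usable raw space, which is about what you would carve into one large LUN or two smaller ones.

    # usable capacity of one 6-disk RAID 10 array (placeholder arithmetic, before VMFS/formatting overhead)
    echo $(( 6 / 2 * 146 ))    # => 438 GB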

Does this sound like a good plan?  Any advice would be appreciated.

Hello

RAID 5 is often recommended because the array stays intact if one disk fails, and with hot spare disks it is a good configuration. With RAID 10, when a disk fails you fall back to its mirror, which also works...

I like RAID 5, and I size my disk space based on running 10-12 VMs per LUN, which is about the average number per LUN. It's all about redundancy.

Best regards

Edward L. Haletky

VMware communities user moderator

====

Author of the book "VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers," Copyright 2008 Pearson Education.

Blue gears and SearchVMware Pro Articles: http://www.astroarch.com/wiki/index.php/Blog_Roll

Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

Tags: VMware

Similar Questions

  • Oracle RAC on SAN storage

    Hello

    We are in the middle of defining a process for installing a 2-node RAC on a SAN storage device.  Oracle 11.2 on RHEL 5.

    What is the recommended implementation process for shared storage?

    We want the mirroring to stay at the storage level.

    * NFS without ASM

    * NFS with ASM

    * LUN with ASM

    * LUN without ASM

    Anything else, as appropriate.

    I went through the documentation; it gives options but does not state best practices.

    Thanks in advance, everybody, and apologies if there is already a post about this and I am repeating it.

    The only choice I would focus on is LUNs (over a fibre or InfiniBand fabric layer) with ASM.

    And no ASMLib (it is optional, not required).

    Set the LUN device properties via udev. Configure the logical devices for ASM using multipathing (MPIO).

    I would call any other method inferior.
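    As a rough illustration of that udev step (a sketch only; the WWID, ownership, and file name below are placeholders for your own values, with RHEL 5 syntax assumed):

        # /etc/udev/rules.d/99-oracle-asm.rules -- match the LUN by its scsi_id and fix ownership for ASM
        KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/%k", RESULT=="360a98000PLACEHOLDER", OWNER="oracle", GROUP="dba", MODE="0660"

        # reload the rules after editing (RHEL 5 syntax; newer releases use 'udevadm control --reload-rules')
        udevcontrol reload_rules

    With MPIO in the picture, the same ownership and permissions would typically be applied to the multipath (device-mapper) device, for example via an alias in /etc/multipath.conf, rather than to the underlying sd devices.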

  • The MSCS storage configuration

    Hi all

    I would like to create a virtual cluster using MSCS on Windows 2008 R2 to run SQL. I'm having trouble wrapping my brain around the storage configuration. I would like to have a 2-node cluster, with each cluster node running on a different ESX host.

    We use FC to an EMC CLARiiON CX3-40. I understand that I must use RDM disks.

    When I build a physical cluster, I create the LUNs and storage on the SAN for the cluster group. I then zone everything, and my storage can be added to the cluster.

    How does this work with virtual machines? It is not as if they have FC cards connecting to the SAN, so how do they access the SAN storage?

    Maybe I'm overthinking this.

    Thanks in advance for your help.

    Kind regards

    Dean

    Dean, you present the LUNs to the ESX hosts only, in the same way you present LUNs for VMFS datastores. Next, you add those LUNs to your cluster nodes as RDM disks. Once you have done that, your ESX servers will create mapping files that pass the virtual machine's I/O through to the LUN; all subsequent I/O is sent directly from the VM to the LUN via your ESX HBAs. I recommend checking your storage against the VMware hardware compatibility list for MS failover cluster support in this configuration, but it may still work fine without being supported by VMware.
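    As a minimal sketch of the RDM part from the ESX(i) shell (the device identifier, datastore, and file names below are made-up placeholders; physical compatibility mode is what cross-host MSCS clustering expects):

        # create a physical compatibility (pass-through) RDM pointer file on a shared VMFS volume
        vmkfstools -z /vmfs/devices/disks/naa.60060160PLACEHOLDER /vmfs/volumes/shared_vmfs/mscs/data_rdm.vmdk

    The resulting data_rdm.vmdk is then added to each cluster node VM as an existing disk on its own SCSI controller, with the controller's bus sharing set to physical.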

  • Oracle 11g R2 RAC on IBM with SAN storage

    Hello Experts,

    I am designing an Oracle 11gR2 RAC implementation on AIX 6.1 with IBM SAN storage. My design does not include Oracle ASM; I know it is recommended to use Oracle ASM, but it is not part of my design.

    I read the Oracle Grid Infrastructure Installation Guide and have the questions below about storage configuration:

    1. Do I need a special configuration to replace ASM?
    2. Is it compulsory to use HACMP (IBM's clustering solution) because I am not using ASM? Based on section 3.2 of the Grid Infrastructure Installation Guide (Shared Storage Configuration): [http://download.oracle.com/docs/cd/E11882_01/install.112/e17210/storage.htm] there is a section 3.2.7, "Configuring HACMP Multi-Node Disk Heartbeat (MNDHB) for Oracle Clusterware", so my question is: is it mandatory to use HACMP if I don't want to use ASM?
    3. The same installation guide mentions in section 3.2.8 that "to use raw logical volumes for Oracle Clusterware, HACMP must be installed and configured on all nodes in the cluster." What are raw logical volumes? Is it compulsory to use them, since I am not on ASM?
    4. In the installation guide, are sections 3.2.3 to 3.2.9 prerequisites for sections 3.2.10 to 3.2.15, or can I simply apply sections 3.2.10 to 3.2.15 without applying 3.2.3 to 3.2.9?

    One last Question:

    ASM is integrated with the cluster software; does it need a separate license, or does the cluster license cover ASM?

    Kind regards

    NB wrote:
    Hello Levi,

    The information provided is very useful. But what I understood from the Grid Infrastructure installation guide is that raw devices need HACMP while GPFS does not need HACMP; GPFS is a clustered file system provided by IBM.
    Your input please?

    Hello

    A raw disk device is the storage LUN (disk) mapped directly on AIX.
    A raw logical volume (RLV) is a logical volume created from the LUN (device) by AIX's LVM.

    If you use raw disk devices, you don't need HACMP, because AIX (the kernel) only receives read and write instructions from Oracle; AIX does not perform concurrency control on the raw disks, that control is managed by Oracle.

    If you use raw logical volumes (concurrent LVM), HACMP must be installed and configured on all nodes in the cluster; the logical volume is an AIX (IBM) feature, and to enable concurrent mode you install HACMP. In that case Oracle does not manage concurrency control, because Oracle does not know that there is an LVM behind the device.

    You are right that the new versions of GPFS are usable without HACMP.
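    To make the raw disk device case a little more concrete, the per-LUN preparation on AIX usually looks something like the sketch below (hdisk4 is a placeholder, and the exact attribute name depends on your multipath driver, so check with lsattr first):

        lsattr -E -l hdisk4                            # review the disk's current attributes
        chdev -l hdisk4 -a reserve_policy=no_reserve   # release SCSI reservations so every RAC node can open the LUN
        chown oracle:dba /dev/rhdisk4                  # hand the raw character device to the Oracle owner
        chmod 660 /dev/rhdisk4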

    Kind regards
    Levi Pereira

    Published by: Levi Pereira on October 8, 2011 17:26 # updated the feature info

  • If you use network storage, configure ASM disks with external redundancy groups

    Hi Experts,

    If you use network storage, configure ASM disk groups with external redundancy. Don't use Oracle ASM failure groups. Oracle failure groups consume additional CPU cycles and can behave in unpredictable ways after a disk failure. When you use external redundancy, disk failures are transparent to the database and consume no additional database CPU cycles, because the work is offloaded to the storage processors.

    Does this mean:

    • RAID 1+0 for disk group +REDO1
    • RAID 1+0 for disk group +REDO2
    • RAID 5 for disk group +DATA
    • RAID 5 for disk group +FRA

    Is this the suggested and recommended best practice for Oracle on VMware?


    Thank you and best regards,

    IVW


    Hello

    You can check the storage analysis as well...

    http://www.Dell.com/downloads/global/solutions/tradeoffs_RAID5_RAID10.PDF

    and this Oracle discussion:

    https://asktom.Oracle.com/pls/asktom/f?p=100:11:P11_QUESTION_ID:359617936136
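    As a rough sketch of the "external redundancy" side of the question above (the disk group names and device paths are invented placeholders), the disk groups are created so that ASM does no mirroring of its own and relies entirely on the array's RAID:

        # run as the Grid Infrastructure owner; paths are placeholders for your multipath devices
        echo "CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/mapper/data_lun1', '/dev/mapper/data_lun2';" | sqlplus -s "/ as sysasm"
        echo "CREATE DISKGROUP REDO1 EXTERNAL REDUNDANCY DISK '/dev/mapper/redo1_lun1';" | sqlplus -s "/ as sysasm"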

  • ESXi 5.1 vMotion migration of a VM with 2 vmdks: one on SAN storage and the other on local storage

    Hi all:

    I have a two-host cluster that uses shared SAN storage where my VM files are stored. The host-to-storage path is an aggregation of two 1 Gbps links, so the effective throughput is 2 Gbps.

    Now I need to install a virtual machine that will host our proxy server (Squid). A proxy server needs a lot of I/O throughput for its on-disk cache. So I am considering deploying it using two VMDKs: one with the operating system and software, say the vm.vmdk file, and the other for the disk cache, say the vm-1.vmdk file.

    My idea is to put the vm.vmdk file on the SAN storage, like my other VMs, and to put the vm-1.vmdk file on local storage, so it does not use the 2 Gbps link. Say the SAN storage is named "san_storage", host A's local storage is named "a_local_storage" and host B's local storage is named "b_local_storage". Then vm.vmdk is at "san_storage/vm/vm.vmdk" and is shared by both hosts A and B. Say the VM starts running on host A; the vm-1.vmdk file is then "a_local_storage/vm/vm-1.vmdk".

    I would like to know whether there is a way, by script or otherwise, to change the virtual machine's configuration so that when I migrate it from host A to host B, the VM's second disk is disconnected from "a_local_storage/vm/vm-1.vmdk" and linked to "b_local_storage/vm/vm-1.vmdk" when the VM starts on host B.

    I don't want to copy vm-1.vmdk from one host to the other, because the proxy server can simply fill its cache again.

    Is this possible to do? Or does anyone have another solution?

    I use ESXi 5.1.0 build 799733 on both hosts, with a license that allows vMotion, and I use vCenter 5.1.0.5300 build 947940 to manage them.

    Regards,

    Lucas Brasilino

    You can use PowerCLI to edit the .vmx configuration file (offline).
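    As a rough shell-side sketch of the same idea (the VM must be powered off, and the numeric VM ID below is a placeholder; the paths reuse the datastore names from the question):

        # repoint the second disk at the copy on host B's local datastore, then reload the config
        sed 's|a_local_storage/vm/vm-1.vmdk|b_local_storage/vm/vm-1.vmdk|' /vmfs/volumes/san_storage/vm/vm.vmx > /tmp/vm.vmx && mv /tmp/vm.vmx /vmfs/volumes/san_storage/vm/vm.vmx
        vim-cmd vmsvc/getallvms    # look up the VM's numeric ID
        vim-cmd vmsvc/reload 42    # placeholder ID; makes ESXi re-read the edited .vmx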

  • vmkping packet loss to SAN storage

    Hello

    Today, one of my ESXi 4 hosts (USB-booted, no local physical storage) has a problem.

    The host dtx200 shows as (inactive).

    Screen Shot 2011-08-20 at 4.37.42 PM.png

    I SSH to the host and try to vmkping the SAN storage.

    When I do a vmkping, I get this:

    Screen Shot 2011-08-20 at 4.47.45 PM.png

    But when I vmkping the iSCSI SAN storage (Dell MD3200i), I get packet loss.

    Screen Shot 2011-08-20 at 4.43.34 PM.png

    I get packet loss, and I can't connect to the Dell MD3200i SAN storage, so I have no datastore on this host.

    On a different ESXi 4 host with a local hard drive, I don't have this problem.

    I hope someone can help.  Thank you!

    Choong Leng

    Have you double-checked the physical connections (cables) from the host to the switch, and the switch port settings?

    Do you use jumbo frames? If so, how have you configured the MTU size on the physical switch?

    André
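    If jumbo frames are in use, one quick way to check the end-to-end MTU from the ESXi shell (the IP below is a placeholder for the MD3200i's iSCSI portal):

        esxcfg-vmknic -l                     # confirm the MTU configured on the iSCSI vmkernel port
        vmkping -d -s 8972 192.168.130.101   # don't-fragment ping sized for a 9000 byte MTU
        vmkping -d -s 1472 192.168.130.101   # the same test sized for a standard 1500 byte MTU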

  • iSCSI SAN storage

    Hello

    I am trying to build a practice lab for the VCP certification test.  I use Workstation 7.1.4 with ESXi installed on the same host.  Can anyone recommend a good free iSCSI SAN storage virtual appliance?  I went through the marketplace, but wanted to get a recommendation from the experts.

    Thank you

    Kim

    There are various products that you can use, e.g. Openfiler, StarWind, Open-E, or the VSA from HP. For the last one, HP offers a demo version especially for VMware Player/Workstation.

    André.

  • SAN storage for a single-instance DB on Windows

    Hello

    We plan to install Oracle 11gR2 (11.2.0.3), single instance (no RAC), on Windows Server 2008 R2. SAN storage has been attached and 4 LUNs created:

    R: LUN for the database and data files
    B: LUN for the redo logs
    C: LUN for the control files
    D: LUN for the archive log files

    If I remember well, the DBCA asks for the location of the data files and the archive logs at creation time. So the storage locations for the redo log files and control files must be configured later, if I'm not mistaken?

    Thanks for your comments!

    Published by: user545194 on October 9, 2012 08:12

    Yes - you can reconfigure the locations after database creation if necessary - or at any time, actually!

    http://docs.Oracle.com/CD/E11882_01/server.112/e25494/dfiles005.htm#i1006457

    HTH
    Srini

  • Is it possible to use GPFS or something else to build shared SAN storage for multiple ESX and ESXi hosts?

    We have a GPFS license and SAN storage. I am trying to create shared storage for multiple ESX and ESXi hosts so they can share existing virtual machines. We tried NFS once; it is a little slow and consumes too much LAN bandwidth.

    Can anyone help answer this? Thank you very much in advance!

    It depends on your storage array.

    You must connect all hosts to the same SAN, then follow the ESXi configuration guide and the specific documentation for your storage array (for sharing LUNs across multiple hosts).

    André
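    Once zoning and LUN masking present the same LUNs to every host, each host only needs a rescan for the shared VMFS volume to appear; a minimal sketch from the ESX(i) shell (the adapter name is a placeholder, and the exact command varies by version):

        esxcfg-rescan vmhba1      # rescan this FC adapter for newly presented LUNs (repeat per adapter)
        esxcfg-scsidevs -m        # list VMFS volumes and the devices backing them
        # on ESXi 5.x and later the equivalent rescan is: esxcli storage core adapter rescan --all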

  • Slower performance reported on new SAN storage

    Hi all

    A new SAN storage array was added to the environment to replace the old SAN storage. After presenting the new datastores and migrating the virtual machines, users report slow performance. What might need to be done to the VM file systems to solve this problem... THX.

    Hi Anthony

    Run a block alignment, which requires a restart of the VM (it may not be required on newer operating systems).

    If this is helpful, please mark it as useful or correct... Thank you.

  • vSAN 6.0 - default Virtual SAN storage policy shows "not applicable" on VMs

    My VMs on my vSAN datastore all show "not applicable".

    The weird part is that SSD caching still works. How do I know? Because my SQLIO performance tests show hundreds of MB/s, and my backing disks are only single SAS 6 Gb/s drives. There is no way a spinning drive could do that on its own.

    Now the default Virtual SAN storage policy also shows as non-compliant, but I can't seem to find the cause.

    Where should I look?

    I'm confused... lol

    Begin by restarting the profile-driven storage daemon in vCenter.

    If you are using the appliance, connect with SSH and run "/etc/init.d/vmware-sps restart".

    (Or restart the whole appliance.)

  • Can VSAN hosts access FC SAN storage at the same time?

    Soon we will invest in new ESXi hosts and we want to design them to be VSAN-capable. But since VSAN is a fairly new technology, we also want the same hosts to act like our old hosts and have datastores in our SAN (FC-attached SAN storage) environment.

    Can VSAN hosts access VSAN and FC SAN storage at the same time?

    What advantages and disadvantages do you see with this design?

    Yes, there is nothing to stop an ESXi host from both participating in a VSAN cluster and also having access to other supported storage protocols such as NFS, FC, FCoE or iSCSI.

    However, you will not be able to use the LUNs or volumes presented over NFS, FC, FCoE or iSCSI for VSAN storage.

    VSAN requires local disks to build the VSAN data store.

    I guess the only concern would be the additional management that comes with having many different types of storage, but this would be true even if VSAN weren't in the picture.

  • vSphere 5 Storage DRS across 2 separate SAN storage arrays

    Hi, would it be possible to set up Storage DRS across 2 SAN storage arrays?

    To add to that, yes, it shouldn't be a problem. You can use Storage DRS across multiple SANs, as long as all hosts can see the LUNs. Keep in mind, however, that Storage DRS is not particularly aware of the disks behind the datastores you join, so ideally you want to keep them all fairly similar - I would not, for example, use Storage DRS in a cluster with both SATA and SAS disks - there could be a significant performance impact depending on which LUN a given server's disks ended up on...

    Storage DRS will separate disks if it must - if you have several virtual disks assigned to a server, they could potentially end up on different LUNs in a Storage DRS cluster. Really, this isn't a bad thing if all of the LUNs are at the same performance level, just something to be aware of...

  • Migration to different SAN storage

    Hello

    I need to change the storage array in my environment and want to verify the migration path. Is it as simple as:

    1. connect the new storage array to the existing servers;

    2. create the new datastore in the existing VMware cluster;

    3. move the VMs from the old datastore to the new one (Storage vMotion, or offline);

    4. remove the old datastore;

    5. disconnect the old storage from the servers;

    Are there any problems with migrating a vSphere 4.0 cluster in this way?

    Thank you

    Hello

    Once you connect the new SAN storage, make sure that path redundancy and multipathing, with the correct path selection for each LUN, are in place to ensure availability, and that active/active paths are available.

    Also, it is better to test that all the components of the new SAN storage have correct redundancy, such as power, storage connections and array mapping, by turning each component off and on; then try your migration.
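    A couple of quick checks along those lines before moving any VMs (a sketch; the output and device names will differ per environment):

        esxcfg-mpath -b     # list each LUN with all of its paths and their state
        esxcfg-mpath -l     # detailed path listing, including which path is active for each LUN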
