Storage design on ESXi 5

Hi all

I need to design a new infrastructure for one of my clients. At the moment the customer does not use any virtualization.

I will implement VMware vSphere in the new infrastructure.

I would like to have your opinion on the storage part.

I want to use external SAS storage shared between 2 ESXi hosts to take advantage of HA and for future provisioning. I want to set up RAID 10.

Which option do you think will give better performance on the RAID 10 array:

(A) I create 2 LUNs with 1 datastore on each: one datastore dedicated to the virtual machines' system disks and one for the data disks.

(B) I create 1 LUN with 1 datastore where I mix the system and data drives of all virtual machines.

Initially, I will have around 8 VMs running Windows 2008 R2 with a mixture of roles (AD, Exchange, DB, ...).

Thank you all for your comments on this subject,

Regards

What size are the disks in each mirror?

How many disks do you have?

Without more information:

My pick: 2 LUNs, with 1 datastore each, shared.

Tags: VMware

Similar Questions

  • Access NAS storage of ESXi Server using VMKernel Port?

    Hello

    I would like to know the best procedure to connect to NAS storage from an ESXi 5 host. I am able to connect to the NAS storage following the screenshots in the link below.

    http://www.tintri.com/blog/2011/11/connecting-vSphere-to-NFS-the-easy-way/

    But I have seen configurations in the past where a dedicated VMkernel port is assigned to the NAS and all storage traffic goes over the 10 G network. I tried to do the same and created a VMkernel port for NAS. But I don't think the traffic is going over it, as it is not used at all.

    I allowed the ESXi server's IP address on the NAS box to grant access. I couldn't use the VMkernel IP address on the NAS side. How can I make the NFS storage traffic use this dedicated VMkernel IP?

    I want to implement the best way to pass NAS traffic over the 10 G network. Also, how can I test whether the storage traffic crosses the VMkernel port?

    Suggestions needed.

    Virtualinfra is right that you must use a different subnet for NFS traffic. You masked the IP range, so I can't see what you use, but make sure that the vmkernel port and the storage share a single subnet that is not the management subnet. If traffic isn't using the VMkernel port, you are either on a subnet that routes out the default gateway of the management port, or you are using the same subnet as management.

    I wrote a few articles on the use of NFS with vSphere: http://wahlnetwork.com/2012/04/19/nfs-on-vsphere-a-few-misconceptions/

    Also, make sure that the vmnic can carry traffic on that subnet, for example when VLANs are in use.
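
    A minimal sketch of that setup from the ESXi shell, assuming a dedicated port group and subnet for NFS already exist; the port group name, IP addresses, and export path below are placeholders:

        # Create a VMkernel port for NFS on its own port group and subnet
        esxcfg-vmknic -a -i 10.10.20.11 -n 255.255.255.0 NFS-PG
        # Mount the NFS export as a datastore (NAS address and share are examples)
        esxcfg-nas -a -o 10.10.20.50 -s /vol/nfs_export NFS-DS
        # Confirm the mount and check which VMkernel interface owns the NFS subnet
        esxcfg-nas -l
        esxcli network ip interface ipv4 get

    To confirm the traffic path, watch the network view in esxtop while generating storage I/O; the NFS traffic should show up on the vmk interface in the storage subnet, not on the management vmk.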

  • Basic questions about the storage on ESXi

    Hello

    We are about to implement a VMware ESXi environment and I would like to get some clarification on the storage in ESXi before proceeding.

    Hardware: 3 hosts and an HP EVA4000 FC SAN with 4 TB.

    We run file, mail, web, directory, application, and database servers.

    Some of these services require quick I/O and some require data to be 'seen' by at least 2 virtual servers for HA.

    Can I use RDM disks for the HA/clustering servers and for those that need fast I/O, or can I use VMDKs?

    If I present the entire 4 TB SAN to ESXi, can some virtual servers then use parts of it as RDM disks while other servers use other parts as VMDKs? (Do we have virtual HBAs in ESXi?)

    Thank you

    Yes. But you should treat the VMDKs as ordinary hard drives; in that case, you must install a cluster file system on them.
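
    For the RDM part, a raw LUN can also be mapped from the ESXi shell with vmkfstools; this is just a sketch, and the device name and paths are placeholders:

        # Create a physical-compatibility RDM pointer for a raw LUN on an existing VMFS datastore
        vmkfstools -z /vmfs/devices/disks/naa.600601601234 /vmfs/volumes/datastore1/cluster-vm/quorum-rdm.vmdk

    The -z flag creates a physical-compatibility (pass-through) RDM, commonly used for clustering; -r would create a virtual-compatibility RDM instead.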

    ---

    VMware vExpert 2009

    http://blog.vadmin.ru

  • Local storage for ESXi - USB/SD or a traditional hard drive

    We will be replacing our ESX hosts soon with new hardware, and I'm on the fence about using a 4 GB USB/SD drive or traditional hard drives in a RAID. Obviously, the SD drive has its advantages, namely speed and reliability, but is it reliable enough to give up redundant hard drives?

    As someone mentioned previously, we spend much of our time worrying about our redundant hard disks, expecting them to fail. Often the failure is related to the moving parts in the drive. SD drives have the advantage of no moving parts to wear out or fail. But how good are they?

    The dealer I spoke with has sold them for 3 years and never heard of a failure. It's $62 for 4 GB, so it's not a bottom-of-the-line unit. It is designed to go in an HP DL380 G7, so again, I doubt it is a lower-end model.

    I'm leaning toward the SD drive rather than traditional drives, but I wanted to get feedback from others who may have used this configuration before. I'm looking for the successes and failures (and the reasons for the failures).

    4 GB is enough for the installation (ESXi 4.1), and I don't need any more local storage since the VMs are on shared storage; helper VMs such as AppSpeed and esXpress I can also install on the shared storage (in our current datacenter these are installed on local storage). Or should I worry about this size with respect to logging or something else?

    Thanks in advance, please discuss.

    Eduard Tieseler

    I have had about 100 ESX hosts running from USB - the only issues I ever had were with the original green USB keys that HP provided in the early days of ESXi on USB.

    I like the approach of USB for several reasons:

    1. No local physical disk required; fewer moving parts = fewer hardware problems.

    2. If you have local physical disks, you can use these as the dumping grounds for the ISOs etc.

    3. If rebuilding a server, I can remove the old USB key and plug in a new one - and recover simply by putting the OLD USB key back.

    4. I can pre-build a USB key and duplicate it as a method of building servers in bulk (not tested on 4.1u1 though).

    5. I've had many, many hard drive failures. USB keys (except the green HP ones) have never failed on me.

    In addition, your syslog concern isn't really a problem - best practice is to point servers at a centralized syslog box in any case (I would do that even if I were not using USB/SD).
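
    On ESXi 5.x this can be set from the shell, as in the sketch below (the syslog hostname is an example); on 4.1 the equivalent setting is Syslog.Remote.Hostname under the host's Advanced Settings:

        # Point the host at a central syslog server and reload the logging daemon (ESXi 5.x syntax)
        esxcli system syslog config set --loghost='udp://syslog.example.local:514'
        esxcli system syslog reload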

    Good luck - let us know what you decide.

  • Access SMB storage using an ESXi host and manage it with MS Server 2008

    Hi there, I know the subject line can be confusing, but I didn't know how to explain what I need in such a short sentence.

    In any case, I have an SMB storage device, a Promise VTrak M610P, which is attached to a 1U HP blade server through dual Ultra 320 SCSI host interface channels. I can't configure the VTrak as NFS; the only way to access the logical drives is to connect it to a server via the SCSI channels and use the operating system to share the drives.

    I would use a server blade with dual Ultra 320 SCSI host interface channels and connect the VTrak to it. Then I would install the ESXi 5.5 hypervisor on this blade. What I want to know is whether ESXi will recognize the logical drives, and whether, if I create a virtual machine with MS Server 2008, the virtual machine will detect the logical drives so that I can share them. I hope I'm making sense; if I'm not, please let me know what is unclear and I will do my best to explain it better.

    Thank you

    Andy

    So it turns out that there seems to be something wrong with the vSphere Client when you add a hard disk (virtual disk) larger than 4 TB to a virtual machine. The VMware KB article "Value of range error message when you add more than 4 TB capacity discs in vSphere Client" describes this; if you encounter this problem, add the hard drive via the vSphere CLI, PowerCLI, or vmkfstools. So this seems to be a known issue in the vSphere Client. What I ended up doing was using the client to create the hard disk and add it to the virtual machine (the disk size is 5.45 TB); when I got the out-of-range error on DiskCapControl, I just clicked OK and then finalized the creation of the hard drive on the virtual machine. Once the process completed, I selected the virtual machine and saw that it had indeed added a new 5.45 TB HDD even though it had complained about it. I powered up the virtual machine with Windows Server 2008 R2 installed and was able to create a new disk under Windows and set it up as a shared drive on the network. Looks like the vSphere Client needs to be updated by VMware to correct this bug, if it is a bug as I think. Thank you for the help, vervoort!
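
    As a sketch of the CLI route that the KB article suggests, the large disk can be created directly with vmkfstools and then attached to the VM; the datastore and file names here are placeholders:

        # Create a ~5.45 TB thin-provisioned virtual disk on a VMFS-5 datastore
        vmkfstools -c 5580g -d thin /vmfs/volumes/datastore1/fileserver/data01.vmdk

    Note that virtual disks larger than 2 TB require ESXi 5.5 and a VMFS-5 datastore.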

  • IBM FlashCopy storage - ESXi 5.0

    Hello

    I am really looking for an answer to my questions about FlashCopy in a VMware environment; can someone help?

    We have two environments, production and DR.

    We assigned a 200 GB LUN to production, formatted it as VMFS, and then allocated it to the virtual machines as VMDK files (around 5 volumes including the C drive).

    Now we have the FlashCopy available in the DR environment, and my question is:

    If we rescan the DR environment and then allow resignaturing, will VMware still recognize the 200 GB FlashCopy LUN as a VMFS volume, and can we get all the VMs from this datastore? Is there any issue with the potentially duplicate volume identity coming from the IBM storage?

    I have the same questions as the discussion below, but that discussion got no response, so I am asking again in a new discussion.

    https://communities.VMware.com/message/586050#586050

    Yes, VMware will keep all data on the volume and mount it... the datastore copy can get a new label (snap-...), and you can rename the datastore later. Before you mount the datastore copy, set LVM.EnableResignature = 1 in the advanced settings on the ESXi hosts.

    After mounting the datastore, you can browse it and register the virtual machines... and on the IBM side, the source and target volumes of the FlashCopy have different IDs.
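
    A sketch of that flow from the ESXi 5.0 shell; the volume label is a placeholder, and the per-volume esxcli commands are an alternative to the global LVM.EnableResignature setting:

        # Option 1: enable automatic resignaturing globally
        esxcli system settings advanced set -o /LVM/EnableResignature -i 1
        # Option 2: resignature only the replicated volume
        esxcli storage vmfs snapshot list
        esxcli storage vmfs snapshot resignature -l PROD_DS01

    After a rescan, the resignatured copy shows up with a "snap-xxxxxxxx-<label>" name and can be renamed from there.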

  • Dell VRTX + vCenter 5.5 + shared storage: ESXi host import issue in vCenter. Help, please!

    All,

    I have two VRTX chassis, to be used for lab purposes, that I am currently configuring.

    The VRTX chassis each feature 4 blades with a shared storage infrastructure. Each blade has ESXi 5.5 installed.

    I configured the shared storage on the VRTX, and all the blades can access it just fine.

    The issue I'm currently facing is when adding the ESXi hosts to vCenter for management.

    Adding the first host goes without a hitch. However, adding any later host fails, because vCenter finds that the datastore attached to the hosts has the same identifier.

    The error message is (see the attached screenshot): "Datastore 'Main Shared-storage' conflicts with an existing datastore in the datacenter that has the same URL (ds://vmfs/volumes/xxxxx/), but is backed by different physical storage."

    Does anyone know how to fix this?

    Thank you.

    Thanks for the reply.

    I think that I have found a workaround.

    First of all, this link does not address my particular issue.

    See, it's a brand-new vCenter appliance installation and configuration. Only one of the four hosts to be added has actually been added.

    Still, the problem is that each ESXi host is a VRTX blade (M620) that has access to the datastore created on the VRTX shared storage.

    Basically, each host is mounting the same shared datastore (the only datastore created on the shared storage), which works very well except that vCenter complains when you import the hosts.

    In any case, my resolution was as follows:

    - Add the first host with the datastore attached and mounted

    - Remove the datastore and detach the shared controller from the other hosts before adding them to vCenter

    - Re-attach the shared controller and mount the datastore via vCenter once the hosts have been added

    -Re-configure each host for vSphere HA if necessary
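
    For anyone hitting the same thing, the mount state and UUID of the shared volume can also be checked per host from the ESXi shell; a sketch, with the label being an example:

        # List VMFS volumes with their UUIDs to confirm every blade sees the same volume identity
        esxcli storage filesystem list
        # If the volume shows as unmounted on a host, mount it by label
        esxcli storage filesystem mount -l Main-Shared-storage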

    Thank you.

  • iSCSI storage problem ESXi 5 u1

    Hello

    We have six ESXi 5 U1 servers connected to a Dell EqualLogic 65xx series storage unit. On the ESXi systems we use the software iSCSI initiator, and the PSP is set to Round Robin. We are investigating an issue where, randomly throughout the day, we receive the following warnings:

    vCenter:

    Failed write command to write-quiesced partition naa.6090a08890b27eeab8cee499fb01a0f6:1

    VMkernel:

    WARNING: ScsiDeviceIO: 3075: Write command to write-quiesced partition naa.6090a08890b27eeab8cee499fb01a0f6:1 failed

    Fil3: 13359: Max retries (10) exceeded for caller Fil3_FileIO (status 'IO was aborted by VMFS via a virt-reset on the device')

    BC: 1858: Failed to write (uncached) object '.iormstats.sf': Maximum kernel-level retries exceeded

    Also, we see a lot of messages indicating loss of connectivity to the iSCSI volumes:

    Lost access to volume 4f58ac5f-ef525322-XXXXXXXXXXXXXXX (LUN_NAME) due to connectivity issues. Recovery attempt is in progress and the outcome will be reported shortly.

    This event is generated by all six hosts at random time intervals and against different iSCSI target LUNs. At the same time, we notice a very high CPU spike to 100% on the host that generates the message.

    One thing we found through vmkping tests is that the ESXi hosts are configured for jumbo frames (9000 MTU on the vmk, vSwitch, and adapters), but the storage network does not allow jumbo frames.

    Could this be an indication of serious IP fragmentation on iSCSI? How can we measure this?

    conraddel wrote:

    One thing we found through vmkping tests is that the ESXi hosts are configured for jumbo frames (9000 MTU on the vmk, vSwitch, and adapters), but the storage network does not allow jumbo frames.

    Could this be an indication of serious IP fragmentation on iSCSI? How can we measure this?

    Do you know if the SAN side is configured for jumbo frames? If not, then it should probably not be a problem, since the TCP connection will negotiate the MSS with the other side and your hosts should fall back to default-size frames.

    The worst situation would be if both the ESXi hosts and the SAN support jumbo frames, but the switches between them do not. Because the switches are "invisible" to the MSS negotiation, problems don't happen until you actually start sending larger frames, and those are silently dropped by the switches, i.e. there is no IP fragmentation.

    Have you also checked with vmkping what kind of end-to-end connectivity you have? The -d and -s options are very important to get right: http://rickardnobel.se/archives/992
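
    A sketch of that test from the ESXi shell, with the storage IP as a placeholder. With a 9000-byte MTU, the largest ICMP payload that fits unfragmented is 8972 bytes (9000 minus 20 bytes of IP header and 8 bytes of ICMP header):

        # -d sets the don't-fragment bit, -s sets the ICMP payload size
        vmkping -d -s 8972 10.10.30.50

    If this fails while a plain vmkping to the same address succeeds, some device in the path is dropping jumbo frames.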

  • Design storage with VAAI

    Hey everyone, I have a storage design question.

    Back when we did not have a SAN with VAAI, SCSI reservations were a limiting factor in all design considerations.  We built small clusters and presented a smaller number of LUNs - with those LUNs shared amongst the smallest possible number of hosts - and ran fewer VMs per LUN.

    Now, however, we have a new SAN with VAAI capability.  We currently have 7 clusters, which all have their own set of LUNs/datastores.

    Is there a reason (SCSI reservations no longer being a problem) why I should not present all LUNs to the hosts in all clusters? I know there is a limit of 255 LUNs, but I'm not worried about hitting that limit. Will this impact performance in some way? Is there service console memory overhead?  Without SCSI reservations, our main limiting factor (to me) is queue depth, which is per host.  So if I can increase the number of hosts that can see a single datastore, I should also be able to increase the number of VMs on that datastore, as long as the virtual machine load is distributed over several hosts.  Is this correct?

    In addition, most of our clusters are built the way they are because of software license restrictions on migration, so it often happens that we have to move a VM between clusters - I used to do this cold - but now I use a transition datastore that is presented to all hosts irrespective of cluster.  Even though this means no interruption of service, there are still several steps to move the virtual machine (2 Storage vMotions and a vMotion).  It would be very convenient if all hosts could see all datastores - but I don't want to break any design best practices.

    Thank you and appreciate any input!

    Every environment's circumstances are different, and if you find that a recommendation is not appropriate for yours, another design choice may be the more appropriate solution for you.

    I'm not entirely sure SCSI reservations are no longer a problem, but their impact has decreased considerably with hardware-assisted locking.  As far as I know, there is no additional hypervisor overhead from additional datastores.  Performance should not be affected in any way, except when doing things like storage rescans or updates, which can take longer.  The HA master election process in vSphere 5 also scans the datastores connected to each host, which can add a small overhead (almost irrelevant though)...

    * Edit: I missed some valuable pieces in the original post, which made my response less useful. *
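
    For anyone wanting to check whether their array actually offloads the locking, VAAI primitive support can be queried per device from the ESXi shell; the device identifier below is a placeholder:

        # The output lists ATS (hardware-assisted locking), Clone, Zero, and Delete status for the device
        esxcli storage core device vaai status get -d naa.600601601234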

  • Storage and ESXi 4.1 configuration

    I'm new to ESXi 4.1 and have been researching configurations. As for the VMs, I'm looking at hosting an SBS 2008 server (Exchange) and 2 Windows 2008 servers. We are looking at a Dell PowerEdge T710 with 16 GB of RAM, dual Xeon E5645 processors, and 6 x 1 TB 7.2k drives in a RAID 10 configuration.

    The SBS and one of the 2008 servers are just for Exchange and some local databases. The remaining 2008 server is for a scanning solution and will use most of the storage.

    From what I've read in the forums, I have some questions:

    1. Will the 1 TB disks give good enough performance, or should we invest in faster disks?

    2. If I create the virtual machines and want to share storage, should I create another virtual machine and install Openfiler to manage the remaining disk space? Or could I attach a raw disk or a datastore disk to one of the 2008 servers and share it from there?

    3. I also realize that ESXi has a 2 TB datastore limit. Does that mean I can create 2 volumes on the RAID array and have one of 1 TB and the other of 2 TB?

    You can thin provision a drive. Add a large thin-provisioned drive and it will grow in size as you add files. You can also keep adding additional 100 GB disks, or you can easily grow a drive later if necessary.

    I would just add disk space as you need it. Use Windows to share files, rather than adding complexity to your environment.
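
    A sketch of both operations from the ESXi shell (paths and sizes are examples):

        # Create a 500 GB thin-provisioned disk that only consumes space as the guest writes data
        vmkfstools -c 500g -d thin /vmfs/volumes/datastore1/scanserver/data.vmdk
        # Later, with the VM powered off, grow the same disk to 800 GB;
        # the guest OS must then extend its partition
        vmkfstools -X 800g /vmfs/volumes/datastore1/scanserver/data.vmdk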

  • SSD for storage of ESXi swap with overcommitment of memory

    Hello

    Has anyone tried using SSDs for ESXi swap storage while overcommitting memory?

    Regards

    check this: http://www.techhead.co.uk/vmware-esxi-vswapping-with-sandforce-ssds it might be interesting...

  • Add storage to ESXi 4

    I have a Dell 2950 with dual quad-core CPUs.  I have a RAID 10 array of six 2 TB SATA drives.  When I installed ESXi, it created a single datastore of 1.45 TB.

    How do I create additional datastores to take advantage of the remaining 4 TB of storage in my RAID array?  Do I do this in the RAID BIOS?  I thought I was creating a single RAID 10 array...

    TIA,

    Miles

    You are welcome.

    Please consider marking the answer that helped you solve the problem as correct. This is not only to award points, but more to help others with similar problems find an answer more quickly.

    Thank you

    André

  • Upgrading a standalone ESX 4.0 host with local storage to ESXi 4.1 on USB

    We have a stand-alone host with a single RAID array that is local to the server, at a site remote from ours.  From the research I've done, you need to reformat the RAID array if you install ESXi 4.1 on this host.  Since there is no shared storage, the only option would be to copy the data from one server to another, or maybe install ESXi on a USB key.

    It seems to work fairly well in a test environment.  Are there caveats that I may have missed when I take a system that boots ESX 4.0 locally and change it to boot ESXi 4.1 from a USB key?  In my test environment, I found that you cannot install ESXi 4.1 on a USB key if the system had ESXi 4.1 installed on the local disk.  It ended up giving me an error when it discovered the other ESXi installation.

    Thoughts?

    Matt

    As you may have noticed, a new ESXi installation will wipe the installation partition.  If you have just a single array, then you'll want to go the USB route.  If the datastore was on a LUN or a separate array, then you'd be OK to install ESXi over ESX.  As long as the boot order is set correctly, you'll be fine.

    In my test environment, I found that you cannot install ESXi 4.1 on a USB key if the system had ESXi 4.1 installed on the local disk. It ended up giving me an error when it discovered the other ESXi installation.

    What kind of system did you use for testing?

    Dave

    VMware communities user moderator

    Now available - vSphere Quick Start Guide

    Do you have a system or a PCI device working with VMDirectPath?  Submit your specifications to the unofficial VMDirectPath HCL.

  • Shared storage on ESXi host

    Hello

    I'm having a bit of a problem configuring shared storage for vMotion and I'd appreciate any help you can provide.

    I am trying to set up the following configuration:

    • 2 ESXi 4.1 hosts (say, A and B) which will run VMs.

    • 1 ESXi 4.1 host (say, C) acting as a shared datastore to keep the VM configs.

    • 1 Windows Server 2008 server running vCenter.

    In the end, I'd like to be able to create a shared datastore on C which A and B can read/write, so I can perform vMotion between A and B.

    Is it possible to implement this config with a regular ESXi host running on C? Currently all the ESXi hosts are bare-bones PCs with IDE drives. In other words, I wonder if I can convert a regular ESXi server running on an ordinary PC into a shared storage host, or do I need a dedicated NAS in place of C for this?

    In short, I would like to vMotion between the two (i.e., ESXi hosts A and B) and keep the configuration files on a separate host, preferably another plain ESXi host with an ordinary IDE drive.

    I am very new to all this so I'd appreciate pointers.

    Kind regards

    J

    Hello and welcome to the forums

    I don't know the hardware of ESXi hosts A and B.

    But in a scenario where you want iSCSI storage and vMotion,

    you will need at least 2 additional NICs for each ESXi host (vMotion requires Gigabit).

    If you want host C to provide the shared data storage, you can:

    - Use a virtual machine that does the job (worst)

    - Build a dedicated iSCSI box with OpenFiler (http://www.openfiler.com/) or FreeNAS (http://freenas.org/downloads) (better)

    And do some additional reading on this forum about the use of these products.

    - Buy a separate storage box (best)

    If box C has an IDE drive, the performance will not be good.
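
    Once an iSCSI target (OpenFiler/FreeNAS or a dedicated box) is running, hosts A and B need the software iSCSI initiator enabled and pointed at it. A sketch for ESXi 4.1, where the target IP and adapter name are placeholders:

        # Enable the software iSCSI initiator
        esxcfg-swiscsi -e
        # Add the target as a dynamic discovery address (vmhba33 and the IP are examples)
        vmkiscsi-tool -D -a 192.168.1.100 vmhba33
        # Rescan so the new LUN appears, then create a VMFS datastore on it
        esxcfg-rescan vmhba33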

    I don't know if you're going to run a production environment or just a lab.

    In the case of production, I recommend you check the compatibility list, to avoid support issues:

    http://www.VMware.com/resources/compatibility/search.php?q=VMware%20compatibility%20List&AQ=0&AQI=G10&aql=&OQ=VMware%20compat&gs_rfai=

    Paul Grevink

    Twitter: @PaulGrevink http://twitter.com/PaulGrevink

    If you find this information useful, please award points for "correct" or "helpful".

  • Storage limit on ESXi

    Hello everyone

    I have installed the free version of ESXi a number of times on a Dell PowerEdge 2900 server with a 6 TB RAID 5 array.  The RAID uses a PERC controller.  It is a new server with no data on the hard drives.

    After each installation, I tried to create a new "data store" with the Virtual Infrastructure Client (VIC).  In each case, the maximum available space for the datastore was 372.5 GB.  In fact, the VI Client showed the "capacity" of the server was 6 TB, but showed that only 372.5 GB of it was "available".

    What is the maximum size a "data store" can be in ESXi?  And if the answer is 2 TB (per the Configuration Maximums doc), what should I check to understand why I cannot allocate more than 372.5 GB to an ESXi datastore?

    Thank you!

    Hello and welcome to the VMware ESXi community forum.

    You already have two valid responses (limit of 2 terabytes).

    You must use your PERC configuration (BIOS tool, Dell management solution) to carve several volumes out of your disks. So instead of a single 6 TB volume, you can create three 2 TB volumes and use them as separate datastores.
