Help with SAN storage design for a Lab Manager training environment

I plan to use Lab Manager for a training environment and could use some input.

The environment will consist of:

-8 ESX hosts (each with 72 GB+ of RAM)
-30 users connected simultaneously, each accessing a configuration that will contain about 10 virtual machines, which means up to 300 VMs.

-A SAN containing a total of 36 disks

My question is this: how should I carve up the SAN?  Should I:

(A) Create 10 LUNs, one for each of the 10 base VM disks, and then create 30 more LUNs so that each of the 30 deployed configurations has its own LUN for its deltas, separate from the base disks?

(B) Create 10 LUNs, one for each of the 10 base VM disks, with the deltas of the 30 corresponding configurations stored on the same LUN as their base disk?

(C) Create maybe 2 to 5 LUNs for the 10 base VMs, and 10 LUNs for the deltas of the base disks?

(D) None of the above? (Please give me an idea.)
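
As a rough sketch of the arithmetic involved (assuming the 300 deployed VMs above and a purely made-up per-VM delta size), the three layouts spread the delta disks roughly like this:

    # Rough arithmetic for the three LUN layouts above (delta size is a made-up placeholder).
    BASE_DISKS = 10           # base/template VM disks
    CONFIGS = 30              # simultaneously deployed configurations
    VMS_PER_CONFIG = 10       # linked-clone VMs per configuration
    DELTA_GB_PER_VM = 4       # assumed average delta growth per running VM

    total_vms = CONFIGS * VMS_PER_CONFIG                 # 300 deployed VMs

    deltas_per_lun = {
        "A (10 base LUNs + 30 delta LUNs)": total_vms / 30,             # 10 deltas per delta LUN
        "B (10 LUNs, deltas with their base)": total_vms / BASE_DISKS,  # 30 deltas per LUN
        "C (2-5 base LUNs + 10 delta LUNs)": total_vms / 10,            # 30 deltas per delta LUN
    }
    for layout, n in deltas_per_lun.items():
        print("Option %s: ~%d deltas/LUN, ~%d GB of delta growth per LUN"
              % (layout, n, n * DELTA_GB_PER_VM))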

Here is everything you need to know:

http://KB.VMware.com/kb/1006694/

And please award the points in this thread if it helps.

Kind regards

EvilOne

Tags: VMware

Similar Questions

  • SAN storage for a single-instance DB on Windows

    Hello

    We plan to install Oracle 11gR2 (11.2.0.3) EE (no RAC) on Windows Server 2008 R2. SAN storage has been attached
    and 4 LUNs have been created.

    A: LUN for the database / data files
    B: LUN for the redo logs
    C: LUN for the control files
    D: LUN for the archive log files

    If I remember correctly, the DBCA asks for the location of the data files and of the archive logs at creation time. So the storage location for
    the redo log files and control files must be configured later, if I'm not mistaken?

    Thanks for your comments!

    Published by: user545194 on October 9, 2012 08:12

    Yes - you can reconfigure the location if necessary after creating the database - or anytime, actually!

    http://docs.Oracle.com/CD/E11882_01/server.112/e25494/dfiles005.htm#i1006457

    HTH
    Srini

  • Provisioning for Lab Manager templates

    Hello

    We are starting to create templates for our Lab Manager implementation, and I'm not sure whether to use thin or thick provisioning. It would be preferable to use thin to save disk space, but I'm not familiar enough with LM (and don't have any best-practices docs) to know what consequences this may have. Can someone advise me on this point - is thin provisioning OK to use, and does it cause performance issues?

    Thank you

    Although not supported, people have found that this should work, save a ton of disk space, and not suffer a drop in performance.   You will need to do the conversion while the virtual machines in the tree are not running.  An enterprising soul might even script the conversion process to convert all the base disks in an installation to thin-provisioned disks in an automated way.

    Because it's not supported, make sure you have a backup of the original monolithic disk before you do!
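
    A minimal sketch of what such an automated conversion could look like, assuming it is run in the ESX(i) shell with the tree's VMs powered off; the datastore path is a placeholder, and the vmkfstools clone only produces a new thin copy - swapping it in for the original disk and verifying the chain is still up to you:

        # Hypothetical automation sketch: clone every base VMDK under a directory to a
        # thin-provisioned copy with vmkfstools. Run it in the ESX(i) shell with the
        # VMs powered off; the datastore path below is a placeholder.
        import os
        import subprocess

        BASE_DIR = "/vmfs/volumes/lm-datastore/templates"   # placeholder path

        for root, _dirs, files in os.walk(BASE_DIR):
            for name in files:
                # Only descriptor files: skip -flat/-delta extents.
                if not name.endswith(".vmdk") or "-flat" in name or "-delta" in name:
                    continue
                src = os.path.join(root, name)
                dst = os.path.join(root, name[:-5] + "-thin.vmdk")
                # vmkfstools -i clones a virtual disk; -d thin makes the clone thin-provisioned.
                subprocess.check_call(["vmkfstools", "-i", src, dst, "-d", "thin"])
                print("cloned %s -> %s" % (src, dst))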

  • Lab Manager 4.0 Upgrade Plus new vSphere Env Migration

    Current environment

    -Lab Manager 3.0.1

    -VirtualCenter 2.5 U2

    -ESX hosts running ESX 3.5 U2

    -All datastores on a SAN

    We will be building a new, separate Lab Manager 4.0.1 server from scratch that connects to a vCenter Server 4 U1.

    The plan is to migrate users from the old LM 3.0 installation to the new LM 4.0/vSphere 4 environment, which will also have all new ESX hosts.  We will not be moving the original hosts over to the new environment.

    Is there a way to copy or move the existing VM chains to the new, separate environment?

    Or would we need to plan to export the templates to a file server, then import them into the new LM 4.0 environment and have users start from scratch with those templates?

    Any info or suggestions would be greatly appreciated.  Thank you.

    > Concern: are datastore mappings in LM deleted once all the hosts are removed?

    No; virtual machines are stored in the database and mapped to the datastore itself (not to the host).  As long as the UUID/NFS path remains the same across hosts, you should be fine.

    > Concern: LM 3 is not "supported" with VC 4, but can it at least see and connect to it?

    It's REALLY unpredictable.  It may not let you connect at all, or it may connect and then say the hosts are not supported.  I've seen some customers get rejected outright and others keep running in the unsupported config anyway (where vCenter was upgraded in the background and LM was not touched); they just never re-entered the address info and clicked 'OK'.

    > Concern: assuming LM 3 can at least connect to VC 4, will it be able to reach the resource pools containing the ESX 4 hosts?

    I think so; as long as the Lab Manager database has the right IP/username/password for vCenter... that should be enough for Lab Manager 3 to talk to vCenter 4 and use the new vCenter, without having to go back through the Initialization Wizard.

    Kind regards

    Jonathan

    VMware vExpert 2009

    NOTE: If your question or problem has been resolved, please mark this thread as answered and award points accordingly.

  • Question about Lab Manager and ESX Resource Pools

    Hello everyone

    I was wondering if I could get some feedback from members of the community.  We use Lab Manager very heavily in our support organization and it has proved to be a valuable tool.  Recently we partnered with the technical training department and started hosting online seminars and classes for them using Lab Manager.  Last week we had a fairly large class of users (approx. 45), each having to deploy 2 rather beefy VMs that were very resource-intensive (JBoss, SQL, Mail to name a few).  My colleague and I went through a lot of testing and planning to make sure our infrastructure could handle the load while giving our users the same speed and reliability they are used to.

    Our Lab Manager ESX server pool is made up of the following:

    5 HP DL380 G5 servers with 2 x quad-core 2.8 GHz processors (8 cores each) and 32 GB of RAM each

    2 HP DL580 G5 servers with 4 x quad-core 2.8 GHz processors (16 cores each) and 128 GB of RAM each

    When the class was running and everyone had deployed their 2 machines, we noticed that for some reason everything had landed on ESX2 and ESX3 (the 380s).  Then came the ESX alarms: 90% memory usage, then more than 100%, bringing on disk paging.  CPU usage was at an all-time high.

    I looked at the two 580s and there were about 4 virtual machines deployed on them.

    So my question is...

    How does Lab Manager decide where to launch a virtual machine?  It makes no sense that 2 servers were driven into the red zone and nearly overloaded while the 2 most powerful servers in my farm sat nearly dormant.  I've noticed this sometimes in the past, but never this bad.  Normally we have about 45-50 machines deployed at any given time and it seems to spread them out properly.

    This training group has access to the same LUNs and ESX servers as any other organization we have.

    We anticipate hosting several training sessions that may be larger than this one, and would like to know that the virtual machines will be distributed properly.

    I would like to hear your opinions on this.

    First I'll answer your question, then take a guess at what is happening.

    As indicated in Ilya's response, LM distributes VMs differently between resource pools with DRS enabled and resource pools without DRS.

    LM places the VMs itself when DRS is not used.

    When DRS is used, we use DRS admission control to select the host on which to place the virtual machines.

    When DRS is turned off, LM uses its own placement routine:

    For each virtual machine, LM filters out all managed servers that cannot run that virtual machine. The complete list of reasons is:

    • The managed server does not have enough memory or VM quota to run the virtual machine.

    • The host is not connected to the datastore where the virtual machine resides.

    • The virtual machine has active (suspended) state, and the CPU of the managed server on which that state was captured is not compatible with the managed server currently being considered.

    • The virtual machine has a 64-bit guest OS and the managed server is not 64-bit capable.

    • The virtual machine has more processors than the managed server.

    • The managed server is unreachable or is set to "prohibit deployments". Unreachable may mean that the lm-agent is not responding or that the host is not answering ping requests (this is visible on the managed servers list page).

    Once we have the complete list of eligible servers, we place the virtual machine on the host with the smallest (MemoryReservation% + CPUReservation%). By default we do not reserve CPU on LM VMs, so this placement will be driven largely by memory on the hosts if you have not changed the CPU reservation settings on your virtual machines.
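
    Not LM's actual code - just a minimal sketch of the placement heuristic described above, with simplified host/VM fields and the suspended-state CPU check left out:

        # Sketch of the placement routine described above (simplified; not LM's actual code).
        from dataclasses import dataclass

        @dataclass
        class Host:
            name: str
            reachable: bool
            allow_deployments: bool
            free_memory_mb: int
            connected_datastores: set
            is_64bit: bool
            cpu_count: int
            mem_reservation_pct: float   # reserved memory as % of capacity
            cpu_reservation_pct: float   # reserved CPU as % of capacity

        @dataclass
        class VM:
            memory_mb: int
            datastore: str
            needs_64bit: bool
            cpu_count: int

        def eligible(host, vm):
            """Filter step: drop managed servers that cannot run this VM."""
            return (host.reachable and host.allow_deployments
                    and host.free_memory_mb >= vm.memory_mb
                    and vm.datastore in host.connected_datastores
                    and (host.is_64bit or not vm.needs_64bit)
                    and host.cpu_count >= vm.cpu_count)

        def place(hosts, vm):
            """Placement step: the eligible host with the smallest
            (memory reservation % + CPU reservation %) wins."""
            candidates = [h for h in hosts if eligible(h, vm)]
            if not candidates:
                raise RuntimeError("no eligible host for this VM")
            return min(candidates, key=lambda h: h.mem_reservation_pct + h.cpu_reservation_pct)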

    For fenced deployments in LM3, we force all virtual machines to go onto the same host, so if any virtual machine has a special datastore / CPU-type requirement, all the virtual machines are forced onto that host. If there are conflicting requirements between different virtual machines in the same configuration, the deployment will fail. Fenced VMs also have another condition to check that is rarely hit in practice: the number of fenced networks available on the managed server. By default this is 20, and you would need more than 20 different fenced configurations running on a managed server before you hit it.  (LM4 allows cross-host fencing, so this does not apply in LM4.)

    To take a blind guess at your distribution problem: if you are using saved state and fencing, it is probably the processor type.  Processor type can be checked on the host properties page.

    If you are not, I would check that the images can deploy on the 580s by disabling the 380s and trying to deploy them.

    If you were using DRS (which you're not), it could be due to deploying virtual machines faster than VC's admission-control algorithm can keep up with.  An easy fix for that is to turn off DRS on the Lab Manager cluster.

    We have load-tested the product repeatedly, internally, in QA and in the different labs we run (LM has been used in the VMworld Labs for the past couple of years and in our own internal training and demo systems), so no need to worry about that.  I assure you that the first 'real' performance problem you encounter will almost certainly be due to an overload of your storage array (too many IOPS)... and for that you have to distribute content across datastores and use multipathing techniques to balance the load.

    Kind regards

    Steven

    Another thread on this: http://communities.vmware.com/message/1258902.

  • I can't power on a virtual machine (with linked disks) after adding new Lab Manager hosts

    ENVIRONMENT

    -Lab Manager 3 using VC 3.5

    -Initially 2 hosts in a cluster, managed by LM

    -Added new cluster with 2 new hosts in VC

    -Both clusters pointing to the same NFS storage

    PROBLEM

    -When trying to power on a machine in the new cluster, I get a "requested file not found" error

    RESULTS

    -This seems to happen only with virtual machines that have parent virtual disks; most of our VMs are linked clones

    -Looking at the log file (please see attachment), we found the following:

    This is the location of the new virtual machine on the new host:

    /vmfs/volumes/fabdf38d-beff047d/LMW2K3/2909/002909-T_XP_SP3_32b.VMDK

    Looking at the virtual disk, it points to the parent disk, but the parent does not seem to be there:

    /vmfs/volumes/6586c772-53eee1bb/LMW2K3/2904/002904-T_XP_SP3_32b.VMDK

    Please note the volume IDs are different, and the new machine cannot reach its parent disk.

    SOLUTIONS?

    Thank you

    Jose

    You should immediately stop using the new hosts, otherwise creating new virtual machines can corrupt the VMDK chains. Try http://kb.vmware.com/kb/1005930.

  • Dedicated vCenter for Lab Manager?

    Do you need a dedicated vCenter server for Lab Manager, or can you use an existing one and create a separate datacenter for the Lab Manager hosts?

    No, you do not need a separate vCenter for Lab Manager - you can create a new, separate datacenter or even a distinct cluster within an existing datacenter.

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful.

  • Storage for Lab Manager

    My client needs storage for Lab Manager running on 100 servers. Is NAS or SAN recommended for running Lab Manager, and what kind of I/O should you expect from it?

    I have done a fair bit with LM storage in recent years.

    I have a 600-machine development/test LM environment, using 2 Sun 7210 storage appliances (no longer available) over NFS shares. We have 2 x 1G links serving vmkernel traffic, plus more links for VM network, management, etc.  The appliances are connected with 4 x 1G links each.

    I don't know what the current LM/ESX constraints are, but whenever we revisit the 'block vs. NAS' decision we end up selecting NFS (our Sun boxes can present iSCSI just as easily).

    As Jon says, what storage you need will certainly depend on your VMs and load.  NOTE: my experience is *NOT* in a production environment where I am squeezing performance out of virtual machines.  It's more about functionality and "adequate" performance (where "adequate" is a measure based on subjective user experience).

    Some things we have learned over the years:

    • IOPS are king, and most of them are writes.  (As I write this, a few heavy read operations are pushing me down to about 65 percent writes; I have seen >80% writes sustained for days.)
    • Latency does not seem as important - VMs run about the same at 10 ms NFS latency and 250 ms NFS latency.  When latencies grow above 750 ms they become noticeable to users, and above a second (1000 ms) you will begin to see an impact on the guest OSes, especially Solaris x86.
    • Normal operations don't care about bandwidth - machines that are just running don't even fill a 1G link
    • Deployment operations *could* start to fill up the bandwidth, maybe
    • Beware IO storms.  People often think of the deployment/boot IO storm, but 400 RedHat machines all kicking off their 'updatedb' cron job at 04:00 can produce quite an IO storm as well.  NOTE: IMHO if you buy storage to handle this kind of thing, you're overpaying for storage.  I've spread my cron jobs through the night and now can't even see them on the storage performance graphs.
    • There is *NOT* a good correlation between guest OS IOPS as shown by, for example, iostat and the NFS IOPS presented to the storage system.  I'm guessing that ESX does a lot of caching.

    Most storage vendors I spoke to last year did not seem to get it.  They think in terms of bandwidth and accelerating it with read cache, both unimportant in my environment - I had to push really hard to get vendors to think in the right terms, and some of them really seemed out of their depth there.

    I think this mindset is important: think about IOPS the way you think about CPU: how much should I give to each machine?  How much should I oversubscribe?

    Like CPU, a physical machine has a lot more IO available than it needs, so you can consolidate - but by how much?  A physical machine with a pair of 15k drives in RAID1 has about 200 IOPS available, and the drives spend most of their time idle (a bit like the CPU).

    How hard will the machines push, and when?  Right now I'm averaging about 9 IOPS per VM, which in my opinion is quite low.  My storage system can burst higher on demand, but I have enough machines that the load is fairly constant.
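
    As a back-of-the-envelope sketch of that budgeting idea, using the ~200 IOPS per RAID1 pair and ~9 IOPS per VM figures above plus an assumed headroom factor:

        # Back-of-the-envelope IOPS budgeting, treating IOPS like CPU capacity.
        # All numbers are assumptions pulled from the discussion above.
        IOPS_PER_RAID1_PAIR = 200      # roughly what a pair of 15k drives in RAID1 delivers
        AVG_IOPS_PER_VM = 9            # observed average per running VM
        HEADROOM = 0.7                 # assumed: keep ~30% spare for IO storms

        budget = int(IOPS_PER_RAID1_PAIR * HEADROOM)
        vms_per_pair = budget // AVG_IOPS_PER_VM
        print("IOPS budget per RAID1 pair: %d" % budget)         # 140
        print("VMs that budget supports:   %d" % vms_per_pair)   # 15

        # Scaling up: spindle pairs needed for a 600-VM environment at that average.
        TOTAL_VMS = 600
        pairs_needed = -(-TOTAL_VMS // vms_per_pair)             # ceiling division -> 40
        print("RAID1 pairs for %d VMs: %d" % (TOTAL_VMS, pairs_needed))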

    In my quest for the perfect LM storage, I found the usual: good, cheap, fast - pick any two.  The way we use LM (lots and lots of shared templates) we would really like a single shared namespace (i.e. 1 datastore). I have yet to find the 'perfect' storage - the Suns are very cost-effective for the performance and space, but their management takes too much of my time and my 72xx units are not expandable; the 74xx might be a different experience.  I am currently looking to add some SSD storage, but the ideal configuration would be automated migration between tiers (for example, EMC FAST), and if I bought everything from EMC I would triple my cost per VM.

    I have talked to people with a different usage pattern who are quite happy across several datastores.  If you can spread your load across many datastores then Isilon looks like interesting storage.

    So, in summary:

    • Watch IOPS 1st, 2nd and 3rd, then latency, then (maybe) bandwidth
    • Budget and manage IOPS capacity the way you do CPU
    • You cannot use guest-OS IOPS to size shared-storage IOPS

    I'm hoping to have a better storage solution in the next 6 months or so.

    --

    Dewey

  • Is it possible to use GPFS or another setup to build shared SAN storage for multiple ESX and ESXi hosts?

    We have a GPFS license and SAN storage. I am trying to create shared storage for multiple ESX and ESXi hosts so they can share existing virtual machines. We tried NFS once; it was a little slow and consumed too much LAN bandwidth.

    Can anyone help answer this? Thank you very much in advance!

    It depends on your storage array.

    You must connect all hosts to the same SAN, then follow the ESXi configuration guide and the specific documentation for your storage array (for sharing LUNs across multiple hosts).

    André

  • Can VSAN hosts access FC SAN storage at the same time?

    Soon we will invest in new ESXi hosts and we want to design them to be VSAN-capable. But since VSAN is a fairly new technology, we also want the same hosts to act like our old hosts and have datastores in our SAN (FC-attached SAN storage) environment.

    Can VSAN hosts access both VSAN and FC SAN storage at the same time?

    What advantages and disadvantages do you see with this design?

    Yes, there is nothing to stop an ESXi host both participating in a VSAN cluster and also accessing other supported storage protocols such as NFS, FC, FCoE or iSCSI.

    However, you will not be able to use LUNs or volumes presented over NFS, FC, FCoE or iSCSI for VSAN storage.

    VSAN requires local disks to build the VSAN data store.

    I guess the only concern would be the additional management that comes with having many different types of storage, but that would be true even if VSAN weren't in the picture.

  • iSCSI SAN storage

    Hello

    I am trying to build a practice lab for the VCP certification test.  I am using Workstation 7.1.4 with ESXi installed on the same host.  Can anyone recommend a good 'free' iSCSI SAN storage virtual appliance?  I went through the marketplace, but wanted to get a recommendation from the experts.

    Thank you

    Kim

    There are various products that you can use, e.g. Openfiler, StarWind, Open-E, or the VSA from HP. For the last one, HP offers a demo version specifically for VMware Player/Workstation.

    André.

  • Shutdown for Lab Manager

    We are conducting a data center power-down and I would like to confirm my shutdown procedure for Lab Manager:

    Shutdown order:

    1. Shut down the LM VMs (a scripted sketch of this step follows below)
    2. Shut down the virtual routers (for fenced configurations) via VC
    3. Shut down the VMwareLM-ServiceVM (one per distributed switch) via VC
    4. Shut down the LM server (a VM, but not one running on the LM-managed ESX host cluster)
    5. Shut down the LM-managed ESX hosts
    6. Shut down the VC server

    Boot order:

    1. Power on the VC server
    2. Power on the LM-managed ESX hosts
    3. Power on the LM server
    4. Power on the LM VMs, virtual routers & VMwareLM-ServiceVM via VC

    My questions:

    Are the steps in the proper order?

    Do I have to undeploy my configurations, or can they just be left in a powered-off state? I would like to keep the configurations deployed, to retain their external IP addresses if possible.

    If the SAN is also going down, just extend the shutdown process: ESX hosts, then SAN switches, then SAN storage; start up in the reverse order.
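
    For step 1 of the shutdown order, a minimal pyVmomi sketch of gracefully shutting down every powered-on VM through VC; the hostname and credentials are placeholders, and in practice you would scope the view to the Lab Manager folder rather than the whole inventory:

        # Hypothetical helper for step 1: gracefully shut down every powered-on VM via
        # vCenter using pyVmomi. Hostname and credentials are placeholders.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()   # lab shortcut; use verified certs in production
        si = SmartConnect(host="vcenter.example.local", user="administrator",
                          pwd="secret", sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            for vm in view.view:
                if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
                    print("shutting down guest: %s" % vm.name)
                    vm.ShutdownGuest()           # requires VMware Tools in the guest
            view.DestroyView()
        finally:
            Disconnect(si)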

  • Migration to different SAN storage

    Hello

    I need to change the storage array in my environment and want to sanity-check the migration path. Is it as simple as:

    1. Connect the new storage array to the existing servers;

    2. Create the new datastore in the existing VMware cluster;

    3. Move the VMs from the old datastore to the new one (Storage vMotion or offline; see the sketch below);

    4. Remove the old datastore;

    5. Disconnect the old storage from the servers;

    Are there any problems with migrating a vSphere 4.0 cluster in this way?

    Thank you

    Hello

    Once you connect the new SAN storage, make sure that path redundancy and multipathing (with the correct path selection policy for each LUN) are configured to ensure availability and that active/active paths are available.

    Also, it is better to test that the redundancy of all the new SAN components is correct - power, storage connections and array mapping - by turning each component off and on, and only then attempt your migration.
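
    For step 3, a minimal pyVmomi sketch of a Storage vMotion to the new datastore; the VM name, datastore name and credentials are placeholders, and the same move can of course be done from the vSphere client:

        # Hypothetical sketch of step 3: Storage vMotion one VM to the new datastore
        # with pyVmomi. VM name, datastore name and credentials are placeholders.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        def find_by_name(content, vimtype, name):
            """Return the first managed object of the given type with the given name."""
            view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
            try:
                return next(obj for obj in view.view if obj.name == name)
            finally:
                view.DestroyView()

        ctx = ssl._create_unverified_context()   # lab shortcut; verify certs in production
        si = SmartConnect(host="vcenter.example.local", user="administrator",
                          pwd="secret", sslContext=ctx)
        try:
            content = si.RetrieveContent()
            vm = find_by_name(content, vim.VirtualMachine, "app-server-01")
            new_ds = find_by_name(content, vim.Datastore, "new-san-datastore")
            # Relocate only the storage; the VM stays on its current host.
            task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=new_ds))
            print("relocation task started: %s" % task.info.key)
        finally:
            Disconnect(si)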

  • SAN storage configuration recommendations

    We have an HP SAN with two fibre switches and an MSA2012FC storage array with 12 x 146 GB drives.  I am trying to decide how to configure the storage for our VI3 environment and am looking for recommendations.

    At this point, I am considering two RAID 10 arrays (6 x 146 GB disks each) with 1 (220 GB) or 2 (440 GB) LUNs defined per RAID array.  The virtual machines are configured in HA pairs, so I think it would be better to have two separate RAID arrays rather than one.  That way I can split the HA pairs between the RAID arrays.  If I create a single RAID 10 array and more than one disk fails in that array, we could lose everything.  In the future we will add another MSA2000 enclosure, which will give us another 12 x 146 GB drives.  At that point we can expand each RAID 10 array to 12 x 146 GB drives to give us more storage space and performance.

    Does this sound like a good plan?  Any advice would be appreciated.

    Hello

    RAID 5 is recommended because the array can stay up if 1 disk fails. With spare disks, this is a good configuration. With RAID 10, a disk fails and you fail over to its mirror, so that works as well...

    I like RAID 5, and I have sized my disk space based on running 10-12 VMs per LUN, which is about the average number per LUN. It's all about redundancy.
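
    As a quick capacity comparison of the layouts being discussed (approximate raw GB for 12 x 146 GB drives, ignoring formatting overhead):

        # Quick usable-capacity comparison for 12 x 146 GB drives (approximate raw GB).
        DISKS, SIZE_GB = 12, 146

        # Two RAID 10 arrays of 6 disks each: half of each array holds mirror copies.
        raid10_usable = 2 * (6 // 2) * SIZE_GB              # 2 x 438 GB = 876 GB
        # One RAID 5 array across 11 disks plus 1 hot spare: one disk's worth of parity.
        raid5_usable = (DISKS - 1 - 1) * SIZE_GB            # 10 x 146 GB = 1460 GB

        print("2 x RAID 10 (6 disks each): ~%d GB usable; survives one failure per mirror pair" % raid10_usable)
        print("RAID 5 (11 disks + 1 spare): ~%d GB usable; survives one failure per array" % raid5_usable)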

    Best regards

    Edward L. Haletky

    VMware communities user moderator

    ====

    Author of the book "VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers", Copyright 2008 Pearson Education.

    Blue gears and SearchVMware Pro Articles: http://www.astroarch.com/wiki/index.php/Blog_Roll

    Top Virtualization Security links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

  • NAS storage solution possible?  Promise VTRAK 15100 (U320 SCSI) as storage for ESXi/VirtualCenter?

    I have several Promise VTrak units (12110 and 15100) that I would like to use as storage for a proof-of-concept ESXi project with a few blade enclosures and 1U servers in a 'data center'-type configuration...  The Promise VTraks are 12-to-15-bay SATA drive enclosures with 2 x U320 SCSI channels on the back...

    I would like to install ESXi on the blades and 1U servers, use the Promise VTraks as the back-end storage, and set up a test environment for deploying web servers and DB servers.   I see a lot of people using SAN solutions - BUT I don't want to just throw these VTraks away.

    Is this possible?  I don't see them in the HCL, but as we all know, that doesn't mean it isn't "possible"...

    THANKS TO ALL THOSE WHO CAN PROVIDE INSIGHT.

    So, your options...

    1. This is probably the best option in terms of performance. It will take a bit of work to get the Linux drivers sorted if they are not already in the bare-metal OpenFiler package.

    2. This option might work; however, "multi-hop" storage (i.e. ESX -> W2K3 -> OpenFiler -> storage) will probably not give you the best performance, and there are a lot of "moving parts" to complicate things. I would try to avoid it if possible, and if you want to stick with Windows serving the VTrak disks, then look at StarWind from Rocket Division as an alternative to the OpenFiler/Linux option #1.

    3. NFS from W2K3/W2K8 is also possible. There have been a few anecdotal reports of people trying it and finding the performance lacking, but you may find that it is "good enough" for your needs.

    Of all the options, #1 is likely to provide you with the best performance, and you could do it with OpenFiler, Windows + StarWind, OpenSolaris, or some other Linux distribution of your choice...
