Provisioning for Lab Manager templates

Hello

We are beginning to create templates for our Lab Manager implementation and I'm not sure whether to use thin or thick provisioning. Thin would be preferable to save disk space, but I'm not familiar enough with LM (and don't have any best-practices docs) to know what consequences this may have. Can someone advise me on this point - is it OK to use, and are there any performance issues?

Thank you

Although not supported, people have found that this works, saves a ton of disk space, and does not suffer a drop in performance.   You will need to do the conversion while the virtual machines in the tree are not running.  An enterprising soul might even script the process of converting all of a base installation's disks to thin-provisioned disks in an automated way; a rough sketch of such a pass follows below.

Because it's not supported, make sure you have a backup of the original monolithic disk before you do!
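A rough sketch of what that automated conversion could look like is below. This is not a supported procedure: the vmkfstools invocation is real, but the datastore path and the directory walk are illustrative assumptions, and the affected VMs must be powered off first.

    import subprocess
    from pathlib import Path

    # Illustrative sketch only: clone every monolithic VMDK under a datastore
    # into a thin-provisioned copy with vmkfstools. Paths are made up; run this
    # on the ESX host (or over SSH) with the affected VMs powered off, and keep
    # the backups mentioned above until the thin copies are verified.
    DATASTORE = Path("/vmfs/volumes/lm-datastore1")

    for vmdk in sorted(DATASTORE.glob("*/*.vmdk")):
        name = vmdk.name
        if "-flat" in name or "-delta" in name or name.endswith("-thin.vmdk"):
            continue  # only descriptor files for base disks; skip extents/deltas
        thin = vmdk.with_name(vmdk.stem + "-thin.vmdk")
        # "vmkfstools -i <src> -d thin <dst>" clones a virtual disk in thin format
        subprocess.run(["vmkfstools", "-i", str(vmdk), "-d", "thin", str(thin)],
                       check=True)
        print(f"converted {name} -> {thin.name}")
        # After verifying that the thin copy boots, the original can be removed
        # and the thin disk renamed into place (left as a manual step here).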

Tags: VMware

Similar Questions

  • Using Server 2008 R2 64-bit templates in Lab Manager 2.5

    I created a template with Windows Server 2008 R2 64-bit, with VMware Tools and LM Tools installed and enabled.  But when I use the template in a configuration, LM Tools does not seem to work: the machine is not renamed and is now waiting for activation.  Also, I get an error when the virtual machine tries to run autoconfig.vbs at login.

    I understand that LM 2.5 does not support Server 2008, but is anyone aware of any workaround I can put in place so this template is available to be used?  I really don't want to have to activate after each configuration deployment, since we have only a limited number of Server 2008 activations available.

    All suggestions are well appreciated.

    -Greg

    Greg,

    Lab Manager 2.5 does not have the code for Windows 2008 R2 support.  Even if you have a template, it will never customize, since LM Tools does not even know what Windows 2008 R2 is.

    The Windows 2008 R2 activation prompt is due to hardware changes when the template is deployed (new SID, UUID, NIC MAC addresses) and is somewhat expected.

    I don't know of a workaround either, and I don't have a Lab Manager 2.5 kit around to tinker with since it's already past end of support.

    Kind regards

    Jonathan

    B.SC., RHCT, VMware vExpert 2009

    NOTE: If your question or problem has been resolved, please mark this thread as answered and award points accordingly.

  • Installing the XP operating system in a Lab Manager template... no hard drive detected?

    I am currently trying to build a Windows XP virtual machine template in Lab Manager. I deploy the template, mount the .iso image file, and start the installation, but then the XP installer reports that no hard drive was detected on the computer. I checked the properties of the machine, and there is obviously a hard drive in place. It is a BusLogic parallel SCSI disk. Is there a specific reason this installer doesn't see the virtual disk? I have not really found anything in the help or documentation that talks about OS-specific installation issues. Any help is appreciated, thanks in advance.

    -va

    To install Windows XP, you must have a driver for the VMware disk controller. You can consult this article to download one:

    http://KB.VMware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1000863&sliceId=1&docTypeID=DT_KB_1_1&dialogID=32908933&StateID=0%200%2029476991

    If that one doesn't work, then you should file a support request and ask for the right driver to be sent to you.

  • Help with SAN storage design for a Lab Manager training environment

    I plan to use Lab Manager for a training environment and could use a little input.

    The environment will consist of:

    - 8 ESX hosts (each with 72 GB+ RAM)
    - 30 users simultaneously connected, each accessing a configuration that will contain about 10 virtual machines, which means up to 300 VMs.

    - a 60 TB SAN containing a total of 36 disks

    My question is this: how should I carve up the SAN?  Should I:

    (A) Create 10 LUNs, one for each of the 10 base VM disks, and then create 30 more LUNs so each of the 30 deployments has its own LUN, separate from its configuration deltas?

    (B) Create 10 LUNs, one for each of the 10 base VM disks, with each base disk's 30 corresponding deltas stored in the same LUN as that base disk?

    (C) Create maybe 2 or 5 LUNs for the 10 base VMs and 10 LUNs for the base disks' deltas?

    (D) None of the above (please give me an idea).

    Here is everything you need to know:

    http://KB.VMware.com/kb/1006694/

    And assign points to this thread if you can.

    Kind regards

    EvilOne

  • Lab Manager templates

    If I update my templates, will that affect the linked clones that were created from the templates before the update?

    Thank you

    No, linked clones are not based on the live template disks; they use a frozen copy stored behind the scenes on the datastore instead.

  • Dedicated vCenter for Lab Manager?

    Do you need a dedicated vCenter Server for use with Lab Manager, or can you use an existing one and create a separate datacenter for the Lab Manager hosts?

    No, you do not need a separate vCenter for Lab Manager - you can create a new separate datacenter, or even a distinct group within an existing datacenter.

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful.

  • Question about Lab Manager and ESX Resource Pools

    Hello everyone,

    I was wondering if I could get feedback from some members of the community.  We use Lab Manager very heavily in our support organizations and it has proved to be a valuable tool.  Recently, we collaborated with the technical training department and started hosting online seminars and classes for them using Lab Manager.  Last week we had a fairly large class of users (approx. 45), each of whom had to deploy 2 rather beefy VMs that were very resource-intensive (JBoss, SQL, Mail to name a few).  My colleague and I went through a lot of testing and planning to ensure that our infrastructure could handle the load while still giving our users the speed and reliability they are used to.

    Our Lab Manager ESX server pool is made up of the following:

    5 HP DL380 G5 servers, each with 2 x quad-core 2.8 GHz processors (8 cores) and 32 GB of RAM

    2 HP DL580 G5 servers, each with 4 x quad-core 2.8 GHz processors (16 cores) and 128 GB of RAM

    When the class was running and everyone had deployed their 2 machines, we noticed that for some reason everything was deploying on ESX2 and ESX3 (the 380s).  Then came the ESX alarms: memory usage at 90%, then over 100%, bringing on disk paging.  CPU usage was at an all-time high.

    I looked at the two 580s and there were about 4 virtual machines deployed on them.

    So my question is...

    How does Lab Manager decide where to launch a virtual machine?  It makes no sense that 2 servers were pushed into the red zone and almost overloaded while the 2 most powerful servers in my farm were nearly dormant.  I have noticed this occasionally in the past, but never this bad.  Normally we have about 45-50 machines deployed at any given time and it seems to spread them out properly.

    This training group has access to the same LUNs and ESX servers as any other organization we have.

    We are planning to host several training sessions that may be even larger than this one, and we would like to know that the virtual machines will be distributed properly.

    I would like to know your opinion on this.

    First I will answer your question, then take a guess at what is happening.

    As indicated in Ilya's response, LM distributes VMs differently between resource pools with DRS enabled and resource pools without DRS enabled.

    LM places the VMs itself when DRS is not used.

    When DRS is used, we rely on DRS admission control to select the host on which to place the virtual machines.

    When DRS is turned off, LM uses its own placement routine:

    For each virtual machine, LM filters out all managed servers that cannot run the virtual machine. The complete list of criteria is:

    • The managed server does not have enough memory or quota to run the virtual machine.

    • The host is not connected to the datastore that holds the virtual machine.

    • The virtual machine has saved state (it is suspended) and the CPU of the managed server on which that state was captured is not compatible with the managed server currently being considered.

    • The virtual machine has a 64-bit guest OS and the managed server is not 64-bit capable.

    • The virtual machine has more processors than the managed server.

    • The managed server is not reachable or is set to "prohibit deployments." Not reachable may mean that the lm-agent is not answering or the host is not responding to ping queries (this is visible on the managed servers list page).

    Once we have the complete list of eligible servers, we place the virtual machine on the host with the smallest (MemoryReservation % + CPUReservation %). By default we do not reserve CPU on LM VMs, so this placement will be driven largely by memory on the host computers if you have not changed the CPU reservation settings on your virtual machines.
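    A rough sketch of that selection step, purely for illustration (the data structures and field names below are made up, not LM's actual code, and the saved-state CPU compatibility check is omitted):

        from dataclasses import dataclass

        # Illustrative sketch of the placement routine described above: filter out
        # managed servers that cannot run the VM, then pick the eligible host with
        # the smallest (MemoryReservation% + CPUReservation%).

        @dataclass
        class Host:
            name: str
            free_memory_mb: int
            datastores: set
            is_64bit: bool
            cpu_count: int
            reachable: bool = True
            prohibit_deployments: bool = False
            mem_reservation_pct: float = 0.0
            cpu_reservation_pct: float = 0.0  # usually 0 for LM VMs by default

        @dataclass
        class VM:
            memory_mb: int
            datastore: str
            needs_64bit: bool
            cpu_count: int

        def pick_host(vm: VM, hosts: list) -> Host:
            def eligible(h: Host) -> bool:
                return (h.free_memory_mb >= vm.memory_mb
                        and vm.datastore in h.datastores
                        and (not vm.needs_64bit or h.is_64bit)
                        and vm.cpu_count <= h.cpu_count
                        and h.reachable
                        and not h.prohibit_deployments)

            candidates = [h for h in hosts if eligible(h)]
            if not candidates:
                raise RuntimeError("no managed server can run this VM")
            # CPU reservations default to 0, so memory usually decides placement.
            return min(candidates,
                       key=lambda h: h.mem_reservation_pct + h.cpu_reservation_pct)

        hosts = [Host("esx2", 4096, {"lun1"}, True, 8, mem_reservation_pct=85.0),
                 Host("esx6", 65536, {"lun1"}, True, 16, mem_reservation_pct=10.0)]
        print(pick_host(VM(2048, "lun1", False, 2), hosts).name)  # -> esx6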

    For fenced deployments prior to LM 4, we force all virtual machines onto the same host, so if any virtual machine has a special datastore / CPU-type requirement, all of the virtual machines are forced onto that host. If there are conflicting requirements between different virtual machines in the same configuration, the deployment will fail. Fenced VMs also have another condition to check that is rarely hit in practice: the number of fenced networks available on the managed server. By default this is 20, and you would need more than 20 different fenced configurations running on a managed server before you hit it.  (LM4 allows fences to span hosts, so this does not apply in LM4.)

    To take a blind guess at your distribution problem - if you use saved state and fencing, it's probably the processor type.  The processor type can be checked on the host properties page.

    If you are not, I would check that the images can deploy on the 580s by disabling the 380s and trying to deploy them.

    If you were using DRS (which you're not), it could be due to deploying virtual machines too fast for VC's admission control algorithm.  An easy workaround for this is to turn off DRS on the Lab Manager cluster.

    We load-test internally repeatedly, and QA tests the product in the different labs we run (LM has been used for the VMworld labs for the past couple of years and for internal training and demonstration systems implemented by our SE organization), so no need to worry about that.  I assure you that the first 'real' performance problem you encounter will almost certainly be due to an overload of your storage array (too many IOPS)... and for that you have to distribute content across datastores and use multipathing techniques to balance the load.

    Kind regards

    Steven

    Another thread on this: http://communities.vmware.com/message/1258902.

  • Storage for Lab Manager

    My client needs storage for Lab Manager running on 100 servers. Is NAS or SAN recommended for running Lab Manager, and what kind of I/O should be expected for it?

    I have done a fair bit with LM storage in recent years.

    I have a 600-machine development/test environment on LM, using 2 Sun 7210 storage appliances (no longer available) over NFS shares. We have 2 x 1 G links serving vmkernel traffic, as well as more links for vmnetwork, management, etc.  The appliances are connected with 4 x 1 G links each.

    I don't know what the current LM/ESX constraints are, but whenever we revisit the 'block vs. NAS' decision we end up selecting NFS (our Sun boxes can present iSCSI just as easily).

    As Jon says, what storage you need will certainly depend on your VMs and load.  NOTE: my experience is *NOT* in a production environment where I am squeezing performance out of virtual machines.  It's more about functionality and "adequate" performance (where "adequate" is a measure based on subjective user experience).

    Some things we have learned over the years:

    • IOPS are king, and most of them are writes.  (As I write this, a few heavy read operations are pushing me down to about 65 percent writes. I have seen > 80% writes sustained for days.)
    • Latency does not seem as important - VMs run about the same at 10 ms NFS latency and at 250 ms NFS latency.  When latencies grow above 750 ms they become noticeable to users, and if they go over a second (1000 ms) you will begin to see the impact on the guest OSes, especially Solaris x86.
    • Normal operations don't care about bandwidth - machines that are just running don't even fill a 1 G link
    • Deployment operations *could* start filling up the bandwidth, maybe
    • Beware of IO storms.  People often think of the deployment/boot IO storm, but 400 RedHat machines all kicking off their 'updatedb' cron job at 04:00 can produce quite an IO storm as well.  NOTE: IMHO if you buy storage to handle this kind of thing, you're overpaying for storage.  I've spread my cron jobs through the night and can no longer see them on the storage performance graphs.
    • There is *NOT* a good correlation between guest OS IOPS as shown by, for example, iostat and the NFS IOPS presented to the storage system.  I'm guessing that ESX does a lot of caching.

    Most storage vendors I spoke to last year don't seem to get it.  They think in terms of bandwidth and accelerating it with read cache, which is unimportant in my environment - I had to push really hard to get vendors to think in the right terms, and some of them really seem out of their depth there.

    I think this mindset is important: consider IOPS the way you consider CPU: how much should I give each machine?  How much should I oversubscribe?

    Like CPU, a physical machine has a lot more IO available than it needs, so you can consolidate - the question is by how much.  A physical machine with a pair of 15k drives in RAID1 has about 200 IOPS available, and the drives spend most of their time idle (a bit like the CPU).

    Which machines will burst, and when?  Right now I average about 9 IOPS per VM, which in my opinion is quite small.  My storage system can go higher on demand, but I have enough machines that the load is fairly constant.
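    As a crude illustration of budgeting IOPS the way you budget CPU (the 200 IOPS per 15k RAID1 pair, the ~9 IOPS per VM average, and the 600-machine environment are the figures above; the arithmetic framing itself is just an example):

        # Back-of-the-envelope IOPS budgeting, treating IOPS like CPU oversubscription.
        vm_count        = 600   # VMs in the environment (figure from the post above)
        avg_iops_per_vm = 9     # observed steady-state average per VM (as above)
        pair_15k_iops   = 200   # roughly what a 15k RAID1 pair delivers (as above)

        steady_demand = vm_count * avg_iops_per_vm          # 5400 IOPS sustained
        pairs_for_average = steady_demand / pair_15k_iops   # ~27 RAID1 pairs, zero headroom

        # A physical box with its own RAID1 pair effectively "reserves" 200 IOPS;
        # consolidating down to the observed 9 IOPS/VM average is ~22:1 oversubscription.
        oversubscription = pair_15k_iops / avg_iops_per_vm

        print(f"steady demand ~{steady_demand} IOPS "
              f"(~{pairs_for_average:.0f} RAID1 pairs with no burst headroom); "
              f"implicit oversubscription ~{oversubscription:.0f}:1 -- "
              f"size for IO storms, not just the average")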

    In my quest for the perfect LM storage, I found the usual: good, cheap, or fast - pick any two.  Given how we use LM (lots and lots of shared templates) we would really like a single shared namespace (i.e. 1 datastore). I have yet to find the 'perfect' storage - the Suns are very cost-effective for the performance and space, but their management takes too much of my time and my 72xx units are not expandable - the 74xx might be a different experience.  I am currently looking at adding some SSD storage, but the ideal configuration would be automated migration between tiers (for example, EMC FAST), and if I bought only EMC I would triple my cost per VM.

    I have talked to people with a different template usage pattern who are quite happy across several datastores.  If you can spread your load across many datastores then Isilon looks like interesting storage.

    So, in summary:

    • Watch IOPS first, second, and third, then latency, then (maybe) bandwidth
    • Budget and manage IOPS capacity the way you do CPU
    • You cannot use guest-reported IOPS to size shared storage IOPS

    I'm hoping to have a better storage solution in the next 6 months or so.

    --

    Dewey

  • Lab Manager network templates

    Must the address pool consist of a set of addresses on the same physical network?   Or can the virtual router handle that?

    Let's get some terms right: a network template is a specification of the network - gateway/IP/DNS/mask, etc. When it is copied into a configuration, it becomes a virtual network. When the configuration is deployed, the virtual network manifests as a new portgroup + vSwitch in vCenter. The virtual machines in this configuration that are connected to this virtual network are simply attached to this newly created portgroup. If the configuration was deployed with the option "connect virtual networks to physical networks" (CVNPN) checked, a vRouter VM is created with 2 network cards, one connected to the newly created portgroup and the other connected to the physical network portgroup. This vRouter NATs the internal virtual network IP addresses to IP addresses on the external physical network.
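    As a toy illustration of what that NAT means in practice (every address below is made up; Lab Manager assigns the real ones from its IP pools):

        # Toy example: two VMs on a deployed virtual network keep their internal
        # addresses, and the vRouter exposes each one on an external address drawn
        # from the physical network's IP pool. Inbound traffic to the external
        # address is NATed by the vRouter to the matching internal address.
        vrouter_nat = {
            "10.20.30.101": "192.168.111.10",  # external (physical pool) -> VM 1 internal
            "10.20.30.102": "192.168.111.11",  # external (physical pool) -> VM 2 internal
        }

        def forward(external_ip: str) -> str:
            """What the vRouter conceptually does for inbound connections."""
            return vrouter_nat[external_ip]

        print(forward("10.20.30.101"))  # reaches the VM that still thinks it is 192.168.111.10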

    So I must have something misconfigured.   Shouldn't I be able to ping the default gateway of the network template that I set up, from my VM?

    As you can see, there is nothing in the configuration or the vRouter that will respond to pings to that gateway, which is why you cannot ping the gateway.

    Do I have to define a route on my physical network back to my Lab Manager network?

    Lab Manager network? I'm not sure what you're talking about here. Did you mean the virtual network (aka network template)?

    Is the inside address of my virtual router the default gateway address of my Lab Manager network template?

    No, the internal address of the vRouter comes from the virtual network IP pool. But yes, ideally it should be the gateway IP of the virtual network - you can file a feature request with VMware Support for this.

    I'm sure it's just something I have configured wrong. I cannot ping the default gateway or anything outside.

    The vRouter implements a route between your virtual and physical networks, so unless you select CVNPN while deploying the configuration, you will not be able to ping anything outside.

    From your attached images, it looks like you did select CVNPN, so maybe something else is wrong in your case. I think you should open a VMware Support request to deal with this.

  • Internet access for virtual machines in Lab Manager

    Hello

    Our lab environment is set up separately from the corporate network and only some public IPs are able to access the internet. How would I set up machines in Lab Manager to access the internet?

    For example, for the IP range 64.x.x.x - would creating a network template with these IP addresses, attached to the external public physical network, give me access to the internet?

    Stan,

    You would have to create a physical network that is attached to a vSwitch or Distributed Switch that can connect to this 64.x.x.x network.

    If you configure an IP pool, Lab Manager can manage the physical network IP assignments for the virtual machines.  So if the virtual machines require Internet access, they will need access to this network (somehow).

    Kind regards

    Jonathan

    B.SC., RHCT, VMware vExpert 2009

    NOTE: If your question or problem has been resolved, please mark this thread as answered and award points accordingly.

  • Shutdown for Lab Manager

    We are conducting a data center power-down and I would like to confirm my shutdown procedure for Lab Manager:

    Shutdown order:

    1. Shut down the LM VMs
    2. Shut down the virtual routers (for fenced configurations) via VC
    3. Shut down the VMwareLM-ServiceVM (for distributed switches) via VC
    4. Shut down the LM Server (a VM, but not one running on the LM-managed ESX host cluster)
    5. Shut down the LM-managed ESX hosts
    6. Shut down the VC Server

    Boot order:

    1. Power on the VC Server
    2. Power on the LM-managed ESX hosts
    3. Power on the LM Server
    4. Power on the LM VMs, virtual routers & VMwareLM-ServiceVM via VC

    My questions:

    Are the steps in the proper order?

    Do I have to undeploy my configurations, or can they just be in a powered-off state? I would like to keep the configurations deployed, to preserve their external IP addresses if possible.

    If you are also shutting down the storage, the shutdown process extends to: ESX hosts, SAN switches, SAN storage - and the reverse order to start back up.

  • Lab Manager dependencies on vCenter and terminology

    I just got Lab Manager, along with ESX and vCenter of course.  I know very little about virtualization, as this question will prove.  What configuration activities are required on vCenter for Lab Manager to be able to build templates, set up configurations, and share library configurations?

    Also, I'm trying to get my head around the difference between a datastore, a datacenter, and a media store.  My current understanding is:

    A datastore has a one-to-many relationship with one or more hosts.  If the storage is on a specific host (a physical hard disk, for example) then the datastore has an exclusive one-to-one relationship with that single host.  Any file a host or a virtual machine can read/write can be read/written on that datastore.

    A datacenter is a logical (not brick-and-mortar) collection of hosts, datastores, and virtual machines.  Can a datacenter aggregate various datastores into a single logical storage area available to all virtual machines?  Or, another flavor of the same question: can physical storage on a physical Windows Server 2008 machine be logically configured as NFS, and can that NFS share join a datacenter?

    A media store is a special part of a datastore managed exclusively by Lab Manager for storing the ISO files used to build templates in Lab Manager.  Is a media store part of a datastore or part of a datacenter?

    Is a vCenter template the same thing as a Lab Manager template?

    Any clarification on these questions would be most helpful.

    Kind regards

    Rick

    Good questions! Welcome to the Forums as well!

    Your datastore description is pretty much right on. The exception nowadays is local host storage: new technologies have progressed to virtualizing it into shared storage through the use of Virtual Server Appliances.  Companies like HP LeftHand and Openfiler, to name a few.

    With regard to the datacenter questions:

    > Can a datacenter aggregate various datastores into a single logical storage area available to all virtual machines?

    No, it is better to keep datastores separate between HA/DRS clusters.  However, between the hosts in a cluster you will share storage for the use of DRS, HA, or Storage vMotion.

    > Can physical storage on a physical Windows Server 2008 machine be logically configured as NFS, and can that NFS share join a datacenter?

    Yes you can - use an Openfiler VSA to present both iSCSI LUNs and NFS to ESX, as long as the ESX hosts can access the virtual machine through the network.

    For your other questions, have a good read of a couple of things as well:

    http://bsmith9999.blogspot.com/2010/02/Lab-Manager-40-Setup-and-best-practices.html

    This webcast is quite informative as well:

    http://www.VMware.com/a/webcasts/details/284

    See you soon,

    Chad King

    VCP-410, Server+

    Twitter: http://twitter.com/cwjking

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful.

  • Lab Manager 4.0 Upgrade Plus new vSphere Env Migration

    Current environment:

    - Lab Manager 3.0.1

    - VirtualCenter 2.5 U2

    - ESX hosts running ESX 3.5 U2

    - All datastores on a SAN

    We will be building a new, separate Lab Manager 4.0.1 server from scratch that connects to a vCenter Server 4 U1.

    The plan is to migrate users from the old LM 3.0 installation to the new LM 4.0/vSphere 4 environment, which will also have all new ESX hosts.  We will not move the original hosts to the new VC.

    Is there a way to copy or move the existing VM chains to the new separate environment?

    Or would we need to plan to export the templates to a file server, then import them into the new LM 4.0 environment and have users start from scratch with those templates?

    Any info or suggestions would be greatly appreciated.  Thank you.

    > Concern: Are the datastore mappings in LM deleted once all the hosts are removed?

    No, the virtual machines are stored in the database and mapped to the datastore itself (not to the host).  As long as the datastore UUID/NFS path remains the same across the different hosts, you should be fine.

    > Concern: LM 3 is not "supported" with VC 4, but can it at least see and connect to it?

    It's REALLY hit-or-miss.  It may not let you connect at all, or it may connect and then say the hosts are not supported.  I've seen some clients get totally rejected, and others that kept running in the unsupported config (where vCenter was upgraded in the background and LM was not touched) as long as they did not actually change the address info and click 'OK'.

    > Concern: Assuming LM 3 can at least connect to VC 4, will it be able to reach the resource pools containing the ESX 4 hosts?

    I think so, as long as the Lab Manager database indicates that you are using the right IP/username/password for vCenter... that should be enough for Lab Manager 3 to talk to VC 4 and use the new vCenter via the Initialization Wizard.

    Kind regards

    Jonathan

    VMware vExpert 2009

    NOTE: If your question or problem has been resolved, please mark this thread as answered and award points accordingly.

  • Lab Manager Windows Server 2008

    Hello

    I have a problem with a Windows Server 2008 template. I use Lab Manager to deploy Windows 2008 and I want to do an automatic installation of Active Directory via dcpromo and an answer file. But at the first boot of Windows 2008, I land on the login screen asking me to set the administrator password to activate the administrator account. After this step, I set registry keys via a VBS script for automatic logon with the administrator account, so that the Active Directory unattended installation starts at the next boot.

    I want to skip this password step for Windows 2008; I want the account to be activated with a default password during the Lab Manager deployment. Any idea?

    Thank you

    We don't do it that way!

    We create a VM in VC, and on this machine we install Win2008.

    Next, we create a sysprep answer file (.xml) and a post script that does the dcpromo.

    After that, we run sysprep on the server and shut it down.

    Then we import the virtual machine into Lab Manager and uncheck the customize flag.

    When we deploy a virtual machine from the imported template, sysprep runs on startup and the machine becomes a domain controller.

    / MatsRob
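    A minimal sketch of the post-sysprep promotion step MatsRob describes, shown in Python purely for illustration (in practice it would be a batch or VBScript post script; the answer-file keys follow the dcpromo /unattend format, but the domain names and password below are made up and the exact keys should be checked against Microsoft's documentation for your build):

        import subprocess
        from pathlib import Path

        # Illustrative only: write a dcpromo answer file and run it unattended so
        # the freshly deployed, sysprepped VM promotes itself to a domain controller.
        answer_lines = [
            "[DCINSTALL]",
            "ReplicaOrNewDomain=Domain",
            "NewDomain=Forest",
            "NewDomainDNSName=demo.example.local",   # hypothetical domain name
            "DomainNetBiosName=DEMO",
            "InstallDNS=Yes",
            "SafeModeAdminPassword=ChangeMe123!",    # placeholder password
            "RebootOnCompletion=Yes",
        ]

        answer_file = Path(r"C:\dcpromo-unattend.txt")
        answer_file.write_text("\r\n".join(answer_lines) + "\r\n")

        # dcpromo reads the answer file and reboots the machine when it finishes.
        subprocess.run(["dcpromo", f"/unattend:{answer_file}"], check=True)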

  • New Lab Manager - IP address conflict

    Hello. I'm new to Lab Manager, and I hope to use it to provide a demo environment of a few machines in a domain for our software. After installing Lab Manager and VMware Tools, I set up a virtual machine as a new domain controller/DNS in Virtual Center, imported it into Lab Manager as a VM template, added it to a configuration, and deployed it into a workspace in fenced mode. But I get IP conflict errors. Isn't fenced mode supposed to avoid IP conflicts? I also have the original DC/DNS powered off in VirtualCenter. Thanks for your help.

    I think what you want to do is start with a template containing just the OS, and create the domain controller once you have a workspace configuration.  See: http://pubs.vmware.com/labmanager3/users/LM_Users_Guide_templates.7.15.html

    "The virtual computer model cannot be configured... a domain controller.

