2-host HA/DRS cluster

Hi all

I have a 2-host HA/DRS cluster, and it seems that DRS is not working. When I look at the performance of the two hosts, one of them consumes 42 GB of its 45 GB of memory while the other consumes only 26 GB of 45 GB. As I understand it, DRS should balance the load across the two hosts and automatically migrate VMs from one host to the other in my environment. Any ideas how to resolve this issue?

Well, first you can raise the migration threshold to its maximum (which directly affects the tolerated deviation from the target load) so that DRS will vMotion virtual machines off a host even for a slight performance gain. Second, you can look at this advanced setting to limit the number of virtual machines per host (vSphere 5.1 only):

http://www.yellow-bricks.com/2012/10/01/limit-the-amount-of-eggs-in-a-single-basket/

But I would not recommend taking either of these measures unless your users are complaining about performance. I would trust DRS to take corrective action when the need arises.
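If you prefer scripting, the per-host limit can also be applied from PowerCLI. This is a hedged, untested sketch: `vcenter.example.com`, `MyCluster` and the value `10` are placeholders, and `LimitVMsPerESXHost` is the vSphere 5.1 advanced option described in the linked article.

```powershell
# Sketch (assumes a reachable vCenter and a DRS-enabled cluster)
Connect-VIServer -Server vcenter.example.com

$cluster = Get-Cluster -Name 'MyCluster'

# Cap the number of VMs DRS will place on any single host in this cluster.
# 'MyCluster' and the value 10 are example placeholders.
New-AdvancedSetting -Entity $cluster -Type ClusterDRS `
    -Name 'LimitVMsPerESXHost' -Value 10 -Confirm:$false
```

The setting can be removed later with `Remove-AdvancedSetting` if DRS behaviour becomes too constrained.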

Tags: VMware

Similar Questions

  • Add hosts with existing virtual machines to a "greenfield" DRS-enabled cluster

    I'm currently involved in an ESXi 5.5 and vCenter project. There are two existing physical servers with redundant everything and 8 hot-swap SAS drive bays. Initially, only 4 of the 8 bays were populated with hard drives. The drives were set up, ESXi 5.0 was loaded, 4 virtual machines were created on each server, and all lived happily ever after.

    Now, I would like to upgrade these servers to 5.5... as follows:

    I populated the remaining 4 bays on each server with hard disks and created a second array of sufficient capacity (twice the capacity of the 4 original disks). I shut down the servers, changed the boot device to the new array, and installed ESXi 5.5 on it. The old array remains intact, so I can boot to ESXi 5.0 if I set the server to boot from the original array, or to ESXi 5.5 if I boot from the new one (both operating systems boot fine, are properly networked, configured in vCenter, etc.).

    When booted into 5.5, each host sees its own new array, and the original array is also listed as a second attached datastore (which is actually what I want, to make simple VM migration from the old datastore to the new one possible). Both are LSI disks, non-SSD, type VMFS5.

    Panic sets in when I boot both machines into 5.5 and the time comes to add the 5.5 hosts to a cluster (I also want to test Storage DRS and HA) and I reach the "Choose Resource Pool" step. I'm scared to death that choosing the first option, "Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the host will be deleted," will mean not only a reformatting of the new array that I want to add to the cluster, but also of the still-attached old array containing the data I want to keep. I don't want to lose the data or virtual machines on the original array, but rather migrate them into a 2-server ESXi 5.5 cluster. I was really hoping to migrate the data to the new arrays on the new hosts and then re-purpose the 2 original arrays (on both machines) as a third datastore.

    My questions:

    1. If I choose the option "Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the host will be deleted" with all the drives connected, will all my data be lost?

    2. If I pull the 4 original (5.0) disks, use that same option with only the new (5.5) arrays connected, and then reconnect the old arrays after the hosts are added to the cluster, will the re-added arrays still get sucked into the cluster and their data deleted?

    3. Is choosing the second option, "Create a new resource pool for this host's virtual machines and resource pools. This preserves the host's resource pool hierarchy," a safe option? If it works, does it matter whether my original array is attached when I add the hosts to the cluster?

    Last point: from all the documents I've read, it seems strongly suggested to add hosts that have no virtual machines deployed, which is why I'm going to great efforts to keep the new hosts as empty as possible, with single-port basic networking, while waiting to complete the configuration. Does it matter whether I migrate virtual machines to the ESXi 5.5 hosts before or after I add the hosts to the cluster?

    Any ideas or help would be greatly appreciated.

    I'd go with option C.

    I would agree that VSAN has some steep requirements, but what they were aiming for is almost enterprise-class SAN at a decent price by using SSDs as caching tiers. As you said, though, if you don't need it, I would continue with an NFS NAS solution.

  • ESXi hosts split into two groups within a DRS cluster

    Hi all

    I'll explain this as well as I can... In case I'm not being very clear, please don't hesitate to ask me more questions.  Here is the scenario:

    (1) I have an IBM BladeCenter H chassis with 7 servers (ESXi hosts) in it

    (2) I have implemented a DRS cluster with these 7 Hosts

    (3) I have vMotion and HA enabled

    (4) The issue is that blades 2, 3 and 6 can only ping each other on the vMotion network and can therefore only migrate to each other

    (5) Blades 1, 4, 5 and 7 can only ping each other and can therefore only vMotion to each other.

    (6) This problem started one day over the weekend, and when we came back to work on Monday we realized that the 3 hosts were not communicating correctly on the network.

    (7) One thing I noticed: when I disconnected all the hosts from the DRS cluster and then reconnected them, all the hosts were able to successfully rejoin the cluster, but hosts 2, 3 and 6 failed on HA.  When I then re-enabled HA, it worked on those 3 hosts.

    (8) Due to this problem, I had to remove the DRS feature from the cluster because the system could not migrate machines between hosts.

    (9) One last thing I noticed: when I vMotion a machine, it loses network connectivity.  I have to go into Edit Settings and disable the network adapter and then re-enable it.

    I hope someone can help with this problem

    Thank you all,

    Antonio.

    Basically, the Spanning Tree Protocol is used to avoid loops in a bridged network. Please take a look at, for example, http://en.wikipedia.org/wiki/Spanning_tree_protocol for details on how it works.

    However, the individual switch ports that serve as uplinks for ESXi should be configured for Spanning Tree PortFast.

    André

  • Add or remove a host in a DRS cluster while NOT in maintenance mode

    Is there a reason why adding a host with, say, 20 running virtual machines to an active DRS cluster would cause problems?  I understand a host should be in maintenance mode when being added to or removed from a DRS cluster, but this can also be done by disconnecting/removing/adding the host to the cluster.  Is there a technical reason why we should not do this?  Does it disrupt DRS or HA?

    You'll be fine.  Best practice dictates moving in and out of a cluster in maintenance mode, but...

  • Keep VM on a specific host in a DRS cluster?

    I have a DRS cluster with a handful of machines, but one of them is a newer server, so it has a higher class of processor.

    I have a virtual machine that needs as much CPU as it can get, when it needs it.  However, I don't want to set it to manual or anything like that, because if the host goes into maintenance mode or similar, I want the VM to be migrated off.  But as soon as the host comes back online, I would like that VM to move back as soon as possible.

    Is it possible to implement this type of "affinity" with a particular host?

    Ordinary DRS share handling doesn't really work here, because the virtual machine is a "bursty" CPU user.  Normally it does not use a lot of CPU, but when it does, it really does.

    I'm just curious to know if this is possible or not.  Thanks in advance!

    Russ

    No, you cannot set up this type of affinity.

    ---

    MCP, MCTS, VCP, VMware vExpert 2009

    http://blog.vadmin.ru

  • Putting a host of a DRS-enabled cluster into maintenance mode does NOT trigger automatic vMotion

    I'm running 5.5 U2

    I have a cluster with DRS enabled and set to fully automated, but when I put a host into maintenance mode the VMs do NOT vMotion to another host in the cluster automatically. I have to manually vMotion each virtual machine to a manually selected host. There are a few VMs with DRS disabled on them because they must ONLY run on the host they are assigned to (virtual machine replication), but all the other VMs are set to "Default (automated)". Could it be because I only have two hosts in the cluster?


    Try this.

    Hope it will solve your problem.

    http://www.vmwareminds.com/troubleshooting-maintenance-mode-process-get-stuck-at-2/

  • How to force a VM to use a specific host in a cluster?

    I'm using the vSphere Web Client with vSphere 5.0U2 through 5.5.

    When I create a virtual machine, I specify a host in a cluster as the compute resource.  However, when I power on the virtual machine, it uses another host.   How can I configure the virtual machine to use only a specific host in the cluster?

    Use DRS -> Virtual Machine Options to disable the automation level for that virtual machine. That will effectively anchor it to the host. You could also put the VM on local storage, or attach it to a resource (a DVD-ROM drive, say) on that host, but the DRS route is the best way to go.

  • Can Move-VM migrate a powered-off virtual machine to a DRS cluster?

    When I try to Move-VM a powered-off VM to a DRS cluster destination, it says it cannot migrate a powered-off virtual machine to a DRS cluster.   I can explicitly name the destination host and it works, but that defeats the purpose of using DRS and forces me to script the destination hosts.

    Am I doing something wrong, or should I be able to migrate a powered-off virtual machine to a DRS cluster?

    Thank you

    -MattG

    Hello, MattG-

    I get the same behavior as you when trying to move a powered-off VM to a DRS cluster, specifying the cluster as the value for the -Destination parameter of Move-VM (using PowerCLI v5.1 Rel 2 and v5.5 Rel 1).  Not quite what I expected.

    To work around the problem, without having to name a particular host as the destination, you could use Get-Random and do something like:

    Move-VM myVM0 -Destination (Get-Cluster myDestDRSCluster | Get-VMHost | ?{$_.State -eq "Connected"} | Get-Random) -RunAsync
    

    That should get the VM moved to said cluster, onto a host that is in the Connected state.  A subsequent power-on operation (whenever you eventually do that) should then see DRS power on the virtual machine on a suitable host in the cluster.

    Maybe the dev team can comment on whether this behavior is expected or desired.

    Anyway, does that work for you?

  • minimum size of DRS cluster

    What is the minimum recommended DRS cluster size? Is it true that 2 would be the technical minimum? And is there a minimum size at which it becomes more effective?

    Is it true that the 2 would be a technical minimum?

    Yes, you can start using DRS with two nodes.

    But is there a minimum size at which it becomes more effective?

    It is certainly desirable to have several hosts, but DRS actually works the same way with two hosts as it does with 32 hosts. The job of DRS is not to keep a cluster perfectly load-balanced, but to migrate workloads when necessary.

    André

  • MSCS on an HA/DRS-enabled cluster?

    Hi all

    We are setting up a 6-host HA/DRS-enabled cluster with one host acting as a standby host. Two of our virtual machines will run an MSCS cluster (with an active and a passive node). What we want is:

    1 - Normal VMs must be restarted on the standby host by HA when an ESX host fails.

    2 - Normal VMs should be vMotioned when DRS decides.

    3 - MSCS-clustered VMs should restart on the standby host via HA when an ESX host fails.

    4 - MSCS-clustered VMs should not be vMotioned automatically. They must reside on their specific hosts.

    If the 4 points above are possible, please confirm how to do it. I'm a little confused because, per the document below, MSCS is not supported on a DRS-enabled cluster.

    http://KB.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1037959

    Kind regards

    Khurram Shahzad

    According to the VMware document "Setup for Failover Clustering and Microsoft Cluster Service" for vSphere 5, there is no problem using DRS and HA together, but you must ensure that:

    1 - The two nodes never run on a single host, for both HA and DRS.
    2 - To do this, you create the rules below:

    - For HA: create a DRS host group, then create a DRS virtual machine group, then create a VM-Host affinity rule using the DRS virtual machine group and the DRS host group.
    - For DRS: create VM-VM anti-affinity rules specifying which virtual machines must be kept on different physical hosts, and turn on strict enforcement of the virtual machine affinity rules.

    Or you can also disable DRS for these 2 nodes and enable only HA, but again you must ensure for HA that the 2 nodes never run on the same ESXi host.

    See the "Setup for Failover Clustering and Microsoft Cluster Service" guide for vSphere 5 and the vsphere-esxi-vcenter-server-501-resource-management-guide.
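    The anti-affinity part of the rules above can also be created from PowerCLI. This is a hedged, untested sketch: 'MyCluster', 'mscs-node1' and 'mscs-node2' are placeholder names for your cluster and the two MSCS node VMs.

    ```powershell
    # Sketch (assumes an existing vCenter connection)
    $cluster = Get-Cluster -Name 'MyCluster'
    $nodes = Get-VM -Name 'mscs-node1','mscs-node2'

    # VM-VM anti-affinity: keep the two MSCS nodes on different physical hosts
    New-DrsRule -Cluster $cluster -Name 'mscs-separate-nodes' `
        -KeepTogether $false -VM $nodes -Enabled $true
    ```

    The same rule can be created in the client under Cluster Settings -> Rules ("Separate Virtual Machines"); the cmdlet just scripts it.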

  • Migration of VMs to other hosts in the cluster

    Hello

    I am looking for a simple script to take the virtual machines on one host and move them to various other hosts in the same cluster.

    I found this

    $sourcehostname = (Get-VMHost 'esx01').Name

    $clusterhosts = Get-Cluster "Group 1" | Get-VMHost |
        Where-Object {$_.Name -ne $sourcehostname}

    $vms = Get-VMHost $sourcehostname | Get-VM

    As I understand it, this gets all the virtual machines on the host and prepares them to be moved to the other hosts in the specified cluster.

    I just need a hand with the Move-VM part. I need to move more than one virtual machine at once, and I don't know if I need RunAsync in there.

    Yes, to vMotion multiple VMs at the same time you will need to use the RunAsync parameter of the Move-VM cmdlet.

    Note that there is a limit on the number of parallel vMotions, imposed by the characteristics of your vSphere environment.

    But that's nothing to worry about in a script; vSphere will queue them and run them for you.

    You could do something like this:

    $clusterName = 'Cluster1'
    $srcEsxName = 'esx01'

    Get-VMHost -Name $srcEsxName | Get-VM | %{
        $tgtEsx = Get-Cluster -Name $clusterName | Get-VMHost |
            where {$_.Name -ne $srcEsxName} | Get-Random
        Move-VM -VM $_ -Destination $tgtEsx -RunAsync
    }
    

    The script runs through all the VMs on the node 'esx01'.

    For each one, it selects one of the remaining ESXi hosts in the cluster at random, then vMotions the virtual machine asynchronously to its new ESXi host.

    BTW, if you have DRS enabled on that cluster in fully automated mode, you could just place the ESXi node "esx01" into maintenance mode.

    DRS will then migrate all virtual machines on that node to the other nodes.

    Get-VMHost -Name $srcEsxName | Set-VMHost -State Maintenance -Evacuate
    
  • Remove an ESX host from a cluster...

    VMware dear Experts

    Please help me. I have a cluster (HA/DRS disabled) with 2 ESX hosts as members; by mistake I added the wrong ESX host to the cluster. I want to remove it. Is there a way I can do this without any downtime?

    NEED URGENT HELP...

    concerning

    MrVMware.

    Of course, you can do this without maintenance mode:

    - Right-click the host --> Disconnect

    - Right-click the host --> Remove

    - Add the host to the correct cluster

    Furthermore, if you had done a search you would have found this: http://communities.vmware.com/thread/210959

  • Best practices for a domain controller in a DRS cluster

    Hi all

    We have our 2 domain controllers virtualized.  One is in a DRS cluster, the other is on a 3rd standalone ESX host.

    On the cluster, what is the best way to ensure that the domain controller starts first?  We lost our AC for a while last night (again), and when the cluster hosts powered back on, apparently the domain controller did not come up fully first, leading to a lot of problems with the other virtual machines.

    On the 3rd host, the second domain controller is configured to start first, with all the other virtual machines on a 45-second delay.

    If you use HA, you can set the VM's restart priority to "High". You'll find it by editing the settings of your cluster, under "VM Options".
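    The same setting can be scripted with PowerCLI's Set-VM cmdlet. A hedged, untested sketch, where 'dc01' is a placeholder for your domain controller's VM name:

    ```powershell
    # Sketch (assumes an existing vCenter connection).
    # Raise the HA restart priority so the DC is among the first VMs restarted.
    Get-VM -Name 'dc01' | Set-VM -HARestartPriority High -Confirm:$false
    ```

    Note that HA restart priority only orders restarts; it does not wait for the DC to finish booting before starting the other VMs.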

  • Possible to lock a virtual machine to a host in a cluster?

    Hello!

    We have a virtual machine that will require special hardware (to test against it). We only have that hardware in one ESX host in our cluster. Is it possible to lock this one VM to that specific host?

    Concerning

    H

    With vSphere 4.1, DRS host affinity was added (see http://www.vmware.com/files/pdf/techpaper/VMW-Whats-New-vSphere41-HA.pdf), so you can set a "must" rule to ensure that the virtual machine only runs on a specific host.

    André
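    Such a "must run on" rule can also be scripted. This is a hedged, untested sketch using cmdlets that appeared in later PowerCLI releases (6.0+); 'MyCluster', 'special-hw-vm', 'esx-hw01' and the group/rule names are all placeholders.

    ```powershell
    # Sketch (assumes an existing vCenter connection)
    $cluster = Get-Cluster -Name 'MyCluster'

    # Build a VM group and a host group for the rule
    $vmGroup   = New-DrsClusterGroup -Name 'special-hw-vms' -Cluster $cluster `
        -VM (Get-VM -Name 'special-hw-vm')
    $hostGroup = New-DrsClusterGroup -Name 'special-hw-host' -Cluster $cluster `
        -VMHost (Get-VMHost -Name 'esx-hw01')

    # 'MustRunOn' makes the rule mandatory: DRS and HA will never place the VM elsewhere
    New-DrsVMHostRule -Name 'pin-special-hw' -Cluster $cluster `
        -VMGroup $vmGroup -VMHostGroup $hostGroup -Type MustRunOn
    ```

    Keep in mind that a mandatory rule also prevents HA from restarting the VM anywhere else if that host fails; a 'ShouldRunOn' rule is the softer alternative.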

  • DRS cluster: Can you mix ESX 4.1 and ESXi 4.1?

    Not very complex, but we have a DRS cluster of 8 IBM x3850 hosts on ESX 4.1, and we will upgrade them all to ESXi 4.1 Update 1.  Will there be any issues mixing ESX and ESXi servers in a cluster?

    Yes, 100% supported!  Both are the same hypervisor, so you should have no problem running them in the same cluster.  After all, it is the best way to do a rolling upgrade, and it is how we made our own move from ESX to ESXi.
