HA admission control question

This will probably seem like a silly question.  I'm trying to set the admission control parameters on an HA cluster in vSphere 5.5.  We have a minimal vSphere implementation: one site, a single cluster, 3 hosts.  We currently have 5 virtual machines and would never run more than 12 at any time (I doubt we will ever have more than 9).

We use the admission control option to reserve a "percentage of cluster resources" as failover spare capacity.  I want to configure it so the cluster tolerates no more than one host being down at any point, or in other words, so that at least 2 of the 3 hosts are running at all times.  I should probably just select 'Host failures the cluster tolerates' and set it to 1, but I read in a document on the subject that the percentage option is preferable for performance reasons.  This is what confuses me: to tell the cluster to tolerate up to one host failure, should I choose 66% or 33%?  That is the question.  I've read a lot of literature, but I can't seem to find an answer to this simple question.  In the image below I have made an assumption and set it to 66%.  One document I read suggested it would be 33%, which is what I would have guessed, but then I noticed that with all 3 hosts in normal operation the current failover capacity is 99%, and wanting it to tolerate up to one host failure would put it at 66%.  So if I go with the percentage policy and want to allow at most one host failure, is 66% the correct setting?

[Attached image: vSphere HA.png]

Thank you very much!

Sam L. flow

Hi Sam,

Welcome to the forums! With the percentage-based admission control policy you choose how much of your cluster resources you want to reserve for use in a failover, so for your scenario, where you have 3 hosts and want to tolerate 1 host failure, you would set it to 34% (rounding 1/3 up).

Keep in mind that with this admission control policy, what counts against the percentage of cluster resources is: virtual machine overhead (the CPU & memory the ESXi host needs to run the virtual machine) plus any reservation set on the virtual machine. This is because admission control only guarantees that your VMs will restart, not that they will perform well.
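
To make the arithmetic concrete, here is a rough sketch of that calculation (plain Python, not anything VMware runs; the per-VM overhead figure is an assumption purely for illustration):

    # minimal sketch of the percentage-based admission control math,
    # assuming 3 identical hosts and tolerance of 1 host failure
    import math

    hosts = 3
    failures_to_tolerate = 1

    # reserve at least the share of cluster capacity that one host represents
    reserved_percent = math.ceil(100 * failures_to_tolerate / hosts)
    print(reserved_percent)        # 34 -> set the policy to 34% CPU and memory

    # what actually counts against that reservation: per-VM reservation + overhead
    vms = 5
    vm_reservation_mb = 0          # assumed: no reservations configured
    vm_overhead_mb = 100           # assumed per-VM memory overhead; varies by VM size
    print(vms * (vm_reservation_mb + vm_overhead_mb))   # MB counted against the reserve

So the value you enter is how much you set aside for failover (34%), not how much is left over; the 99% / 66% figures you saw are the current failover capacity, which is a different number.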

Hope that answers your question. If you are interested in reading more about HA I recommend:

http://www.yellow-bricks.com/

http://frankdenneman.nl/

Amazon.com: VMware vSphere 5.1 Clustering Deepdive eBook: Duncan Epping, Frank Denneman: Kindle Store

Cheers!

Tags: VMware

Similar Questions

  • HA Admission Control and DPM with clusters of 3 and 2 hosts

    Hello everyone,

    I have two clusters, the first with 3 hosts and the second with 2 hosts, and I have a few questions about enabling DPM:

    1. On the 3-host HA/DRS cluster, I enabled admission control with the 'percentage of cluster resources reserved as failover spare capacity' policy set to 33%. Because there are no VMs on this cluster, when I enabled DPM it put 2 hosts into standby mode. Now when I power on any virtual machine on this cluster I get the error "The operation is not allowed in the current state." It works when I disable admission control, but then I lose HA protection. So in a 3-node cluster, will I always have 2 hosts powered on when strict VMware HA admission control is enabled? Is that possible if, in the DPM options, I disable power management on 2 of the hosts, or am I wrong? And will DPM then power my third host on and off as needed (for example, bring a host out of standby mode to meet the failover requirements)?
    2. On the 2-host cluster, when I enable admission control (50%), should I disable DPM, since I need to keep both hosts powered on anyway?

    1. With HA/DRS + DPM you will only get 1 host in standby, because HA needs 2 hosts powered on.

    2. No DPM standby in a two-node cluster if HA is enabled.

  • vSphere 5.5 HA percentage based Admission Control Policy

    I've been wanting to get clarity on this for some time, and it is best described by a scenario.

    1. I have a 2-host cluster; to keep it simple, each host is configured with 32 GB of physical memory.

    2. The cluster is configured with admission control enabled, and the policy is set to percentage of cluster resources, 50% CPU / 50% memory, with the idea of reserving half the capacity to support a 1-server failure.

    3. For the sake of argument, let's say there are 10 servers in the cluster, and NONE of these servers use any kind of CPU/memory reservation.

    4. Each of these 10 servers is configured with 6 GB of memory (6 GB x 10 VMs = 60 GB of memory allocated). For the sake of argument, let's say DRS distributes 5 virtual machines per host, i.e. 30 GB per host.

    5. Now, in my cluster view, each of my hosts shows roughly 93% memory used, and I have alarms firing in vCenter shouting that memory is low.

    6. Since none of these virtual machines has any reservations (CPU or memory), they barely count against the percentage-based capacity. Those numbers hardly make a dent because only the default reservations are applied (something like 32 MHz?). The readings for current failover capacity on the cluster are 98% CPU and 99% memory, well above the 50% needed for admission control to kick in and say STOP POWERING ON VMs!

    Now that I've laid the foundation: we have allocated 60 of the 64 GB of physical memory to virtual machines, and vSphere alarms are declaring 'hey, you're out of memory'. However, a product such as Veeam ONE points out that each of these 10 VMs is really only using about 2 GB of host physical memory out of the 6 GB allocated. So 'technically' we are only using 20 GB (10 GB per host) of the 64 GB of physical memory available to the cluster.
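
    For what it's worth, here is the rough arithmetic as I understand it (a sketch only; the ~150 MB per-VM overhead is an assumed figure, and only reservations plus overhead count):

        # sketch: current memory failover capacity under the percentage policy
        # assumptions: 2 hosts x 32 GB, 10 VMs, no reservations, ~150 MB overhead per VM
        total_mem_mb = 2 * 32 * 1024
        reserved_per_vm_mb = 0 + 150          # reservation + assumed overhead
        reserved_mem_mb = 10 * reserved_per_vm_mb

        current_failover_pct = 100 * (total_mem_mb - reserved_mem_mb) / total_mem_mb
        print(round(current_failover_pct))    # ~98%, even though 60 GB is allocated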

    The million dollar question is... if I lose one of the hosts, will all of the virtual machines come back online on the surviving node?

    Scenario 1:

    Host 1: 32 GB of physical memory, 5 VMs, 30 GB of allocated memory, alarms at 93% usage

    Host 2: 32 GB of physical memory, 5 VMs, 30 GB of allocated memory, alarms at 93% usage

    Host 1 fails, HA kicks in, and it reaches a state where it cannot power on any more virtual machines because only 2 GB remain and I need to power on 30 GB worth? (I expect this is not the case; we should be able to overprovision...)

    Scenario 2:

    Host 1: 32 GB of physical memory, 5 VMs, 30 GB of memory allocated, 10 GB of physical memory in use as reported by Veeam ONE, 20 GB of physical memory remaining

    Host 2: 32 GB of physical memory, 5 VMs, 30 GB of memory allocated, 10 GB of physical memory in use as reported by Veeam ONE, 20 GB of physical memory remaining

    Host 1 fails, HA comes into play, and it restarts all 5 VMs on host 2 because host 2 has 20 GB of physical memory remaining and only 10 GB is needed?

    It seems like there should be a better way to really tell whether you can fully support a 1-node failure with the percentage-based model, given the memory allocated in your cluster. Can anyone demystify this for me and explain?

    There are several metrics that serve for capacity planning. For the record, good or bad, I look at demand and active memory. Also, if I see ballooning or swapping, I feel it's time to add a host (or just memory... it all depends on several other metrics). DRS starts to complain when the hosts are stressed. Right-sizing VMs from the beginning makes it easier to trend and plan additional capacity. There should be no need to over-allocate as much.

    I also use vROps in my environment. It is very useful for trending and planning ahead.

  • HA admission control - power on a virtual machine with no available slots?

    Hello everyone,

    Is it possible that admission control would allow a virtual machine to be powered on when there are no available slots?  I'm not talking about HA initiating a failover, but about manually powering on a virtual machine from the vSphere client...

    My current lab settings are:

    -vCSA 5.5

    -5 x test VMs

    -3 x ESXi 5.5 hosts with 2 CPUs and 4 GB of RAM each.

    -Admission control policy = host failures the cluster tolerates = 1

    -Fixed CPU slot size = 2000 MHz (yes, I know it is huge, but I am doing this to force one slot per ESXi host, since after the 'hidden' resource overhead each ESXi host only provides about 3800 MHz)

    -So the slots: 3 total, 2 used, 1 for failover, with 2 powered-on VMs.

    It is my understanding that once you run out of slots, admission control should NOT let you manually power on new virtual machines. I could be doing something wrong, but in my case I am able to power on the other 2 VMs with 0 slots available. I've attached some screenshots for reference.
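
    For reference, the slot math for this lab as I understand it (a rough sketch; the ~3800 MHz per host is the figure mentioned above):

        # sketch: slot-based admission control for the lab above
        host_cpu_mhz = 3800          # usable CPU per host after overhead (from above)
        hosts = 3
        slot_cpu_mhz = 2000          # fixed CPU slot size

        slots_per_host = host_cpu_mhz // slot_cpu_mhz                 # 1
        total_slots = hosts * slots_per_host                          # 3
        failover_slots = 1 * slots_per_host                           # 1 host failure tolerated
        used_slots = 2                                                # 2 powered-on VMs
        available_slots = total_slots - failover_slots - used_slots   # 0
        # with 0 available slots, a manual power-on should be refused
        print(total_slots, failover_slots, available_slots)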

    What could I be missing?

    Thanks in advance.

    This should not be possible and looks like a bug to me. Please file a support request and post the SR number here so that we can have engineering take a look.

  • How is admission control information passed to the hosts when vCenter is out of service?

    I'm considering a scenario where vCenter is down and in the meantime an ESXi host fails. How does admission control come into play in this scenario? Will HA restart the virtual machines without taking the admission control policy into account, or how does it work? I don't know how the information moves from vCenter to HA.

    Can anyone please help me with this? Thank you very much.

    Thanks very much for the reply. So the admission control information lives with the hosts? Where is this information stored inside the host, and can we view or change it from the host?

  • Admission control - [sanity check] asymmetric cluster w/ no VM reservations

    I've spent some time reviewing the admission control policies and am just looking for a "sanity check" that my conclusions about my environment and my calculations are accurate.  I currently manage an asymmetric cluster that includes 1 host with much more memory than the rest (host 5).  I have read from multiple sources, including Duncan and Frank's 5.1 Deepdive, that the 'host failures the cluster tolerates' policy wastes resources in this scenario in order to ensure the largest host is protected.  However, the calculation below shows that this policy lands almost dead on what the percentage policy would reserve in an N+1 scenario (again, to ensure the largest host is protected).

    The distribution is:

    Host          Host 1   Host 2   Host 3   Host 4   Host 5   Host 6   Host 7   Totals
    Memory (GB)   48       72       64       64       128      64       72       512
    % of memory   9.38     14.06    12.5     12.5     25       12.5     14.06
    CPU (GHz)     18.08    19.12    19.12    18.08    19.12    19.12    19.12    131.76
    % of CPU      13.78    14.5     14.5     13.78    14.5     14.5     14.5

    In an N+1 scenario the recommended percentages would work out to 25% for memory ((128 / 512) * 100) and 15% (rounded) for CPU ((19.12 / 131.76) * 100) to ensure the largest host is protected.  I realize we could reduce the waste by lowering these values, but that would defeat the point of HA if the largest host failed (I realize again it would need to be heavily utilized) and there were not enough resources left for the HA restarts.

    Using the current 'host failures the cluster tolerates' policy, the slot sizes are the default values (32 MHz CPU and 217 MB memory (overhead)) since there are no VM reservations.  We do use resource pool reservations, but those don't come into play in the HA slot size calculation, if I understand correctly.  We currently have 2210 slots (1540 available) in our cluster, given the hosts/resources listed.  138 slots are used by powered-on VMs and 532 are reserved for failover.  For the record, that works out to 112.73 GB ((532 * 217) / 1024) of memory, which is equivalent to 22% of the total memory in the cluster.  For CPU it works out to 16.63 GHz ((32 MHz * 532) / 1000), around 17% of the total CPU in the cluster.
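
    As a quick cross-check of those figures, the arithmetic (a sketch that just reproduces the numbers above):

        # sketch: N+1 percentages vs. what the slot policy reserves, numbers from above
        total_mem_gb, total_cpu_ghz = 512, 131.76
        largest_host_mem_gb, largest_host_cpu_ghz = 128, 19.12

        # percentage policy sized to protect the largest host (N+1)
        print(round(100 * largest_host_mem_gb / total_mem_gb))     # 25 (% memory)
        print(round(100 * largest_host_cpu_ghz / total_cpu_ghz))   # 15 (% CPU)

        # memory the slot policy is holding back (532 failover slots x 217 MB default)
        failover_mem_gb = 532 * 217 / 1024
        print(round(failover_mem_gb, 1))                           # ~112.7 GB
        print(round(100 * failover_mem_gb / total_mem_gb))         # ~22 (% of memory)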

    So if I moved to the percentage policy with best practices in mind, I would reserve a bit more memory and a bit less CPU than I do currently, but only by a few points given the percentages above.  Are there any underlying problems with this configuration that I'm missing, or is my analysis of the given configuration accurate? Or is it recommended to apply the calculated percentage policy and then lower the reserved resources to values below the largest host in the cluster, to free up available resources?  It is a 'catch 22', because we obviously want to maximize the resources available to the cluster while guaranteeing adequate failover resources.

    I wrote an article about this as well: http://www.yellow-bricks.com/2012/12/11/death-to-false-myths-admission-control-lowers-consolidation-ratio/

  • vSphere HA Admission Control Calculator

    Hello everyone,

    Today I tried to place a host in maintenance mode and received the infamous "Insufficient resources to satisfy the configured failover level for vSphere HA" error. I worked around this by temporarily disabling admission control, but it got me wondering how the cluster would tolerate the loss of one of the hosts.

    This is what my cluster looks like:
    -2 x Dell PowerEdge R720s with 96 GB of memory and 2 x Intel Xeon E5-2650 CPUs (12 cores) per host
    -ESXi 5.0 Update 1 + the latest patches
    -vSphere HA enabled
    -Admission control enabled (set to reserve 50% of cluster resources)
    -DRS enabled
    -16 Windows VMs with CPU configurations from 1 to 4 vCPUs and 1 GB to 4 GB of memory.

    I have been studying this to try to determine the cause of the problem. To do this I set up an Excel-based calculator to determine the cluster failover capacity. The calculations are based on page 22 of "vSphere Availability - ESXi 5.0".

    My post has two parts to it:

    (1) The attached calculator, which I would be very grateful if someone would review and confirm is correct (I hope it is and that others will find it useful). I protected the 'CPU' and 'Memory' sheets so they can be used as examples. Just edit the host CPU/memory details in the other sheets to match your environment. I've left my cluster's details in there as I thought people might use it as an example. If you like I can clean it up further.

    (2) A question: how will my cluster behave on the loss of a single host? If the behaviour when placing a host in maintenance mode is anything to go by, not all virtual machines will start on the surviving host because the CPU requirements cannot be met. However, I read that in an actual failover event HA will ignore the admission control policy and try to start the virtual machines on the surviving host. Can someone please confirm?

    I hope this is clear enough. If this isn't the case, I'll be happy to provide any additional information.

    Thank you very much!

    Sean

    Post edited by: STK2. Spelling and grammar corrections. Deleted the Excel document (will provide an updated doc in a separate reply).

    HA admission control uses the CPU and memory reservations of the virtual machines, not their configured CPU and memory, in the computation of failover capacity. You are probably using the latter in your calculations. Look under the 'Resource Allocation' tab of the cluster to see the reservations on your virtual machines (by default they are 0).

    Elisha

  • Admission control and resource pool reservations

    Dear guys,

    Since the reservations I have are on my resource pools (with limits) rather than on my VMs, I find it really hard to work out how many slots are used when I select the 'host failures the cluster tolerates' admission control policy.

    Normally slots are based on VM reservations, but what about reservations assigned to resource pools?

    When I check the Runtime Info on my cluster, some of the values are not clear to me:

    Total slots in cluster => ok

    Used slots => ok

    Available slots => what does this mean?

    Failover slots => what does this mean?

    For these 2 values I went to the VMware documentation, but it wasn't clear to me...

    Thanks in advance.

    Daniele

    I tested it: slots are based on the VM reservations, not on the resource pool reservations.

    Example:

    Total number of slots = 22

    Used slots = 4 (based on the number of running virtual machines)

    Available slots = 7

    Failover slots = 11 (depends on how many host failures the cluster can tolerate); if that is 1 and we have a 2-node cluster, then this is 50% of the total number of slots. This means that, in this example, you can power on only 11 VMs and no more than that.
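
    In other words (a sketch of the arithmetic, assuming 2 identical hosts):

        # sketch: how the slot counts in the example above relate to each other
        total_slots = 22
        nodes = 2
        host_failures_tolerated = 1
        failover_slots = total_slots * host_failures_tolerated // nodes    # 11
        used_slots = 4                                                     # running VMs
        available_slots = total_slots - failover_slots - used_slots        # 7
        max_powered_on_vms = total_slots - failover_slots                  # 11
        print(failover_slots, available_slots, max_powered_on_vms)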

    If I've got this wrong, please do tell me.

  • Need a Script to extract information for HA, DRS levels and admission control policy.

    Hello

    I'm looking for a CLI script to extract the HA information, admission control policy and DRS level for each cluster, in CSV format with the cluster name:

    (1) HA information, whether it is enabled or disabled, for each cluster in vCenter.

    (2) Admission control enabled or disabled.

    (3) The admission control policy, specifying the admission control policy type.

    (4) DRS automation level: manual, partially automated or fully automated.

    I need this info to keep track, as we have several clusters.

    Thank you

    vmguy

    If you want to select only the clusters whose name starts with 'Cluster', you can change the Get-Cluster command in the first line of the script to:

    Get-Cluster -Name Cluster*
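
    The script itself isn't included above, but as a rough illustration of pulling the same fields another way, here is a Python (pyVmomi) sketch; the vCenter address, credentials and output path are placeholders, and it assumes the pyVmomi module is installed:

        # rough sketch, not the script referenced above; requires pyVmomi,
        # and the vCenter host, credentials and CSV path are placeholders
        import csv
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()   # lab use only
        si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                          pwd='password', sslContext=ctx)
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)

        with open('cluster-ha-drs.csv', 'w', newline='') as f:
            writer = csv.writer(f)
            writer.writerow(['Cluster', 'HAEnabled', 'AdmissionControlEnabled',
                             'AdmissionControlPolicy', 'DrsAutomationLevel'])
            for cluster in view.view:
                das = cluster.configurationEx.dasConfig    # HA settings
                drs = cluster.configurationEx.drsConfig    # DRS settings
                policy = (type(das.admissionControlPolicy).__name__
                          if das.admissionControlPolicy else 'n/a')
                writer.writerow([cluster.name, das.enabled, das.admissionControlEnabled,
                                 policy, drs.defaultVmBehavior])

        Disconnect(si)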

  • HA Admission Control Policy - slot sizes

    Hello

    I just want to ask everyone about slot sizes in HA.

    I understand, more or less, that slot sizes are determined by the underlying physical servers and by the resources allocated to the virtual machines, or their reservations (as appropriate). But from observation I've noticed occasional 'differences', specifically between the total cluster slots and the used and available slots.

    The total number of cluster slots is not always equal to the sum of the used and available slots. It can be either more or less.

    Can someone explain this to me in more detail? I appreciate your help on this. TIA!

    In this particular case, could it be that HA was enabled later on a cluster that already had many virtual machines powered on? Or maybe the virtual machines were powered on while admission control was disabled, and admission control was then enabled again? That would make sense.

    Scott.

    -

  • HA admission control, no reservation

    Hi all

    I've been reading around trying to get my head around how HA admission control works with virtual machines that have no reservations defined.

    My understanding is that HA assigns a default value of 256 MHz CPU and 0 MB + memory overhead to VMs that do not have resource reservations. This default value becomes the slot size used to determine how many VMs the cluster can run.  The number of slots is in turn affected by the number of host failures to be tolerated, or by the percentage of cluster resources reserved as unused failover capacity.

    Where I am confused is what happens when HA assigns the default slot size (256 MHz + 0 MB + overhead MB, assuming no VM reservations) to a cluster where the VMs' actual memory usage is greater than 0 MB + overhead.  I guess there is a possibility that the HA slot allocation could far exceed the real/usable cluster resources in the event of a failover, maintenance mode, etc.

    For example:

    A cluster using 32 GB hosts with a bunch of 1 vCPU / 4 GB VMs without reservations.  According to the resource management guide, each VM would have an overhead of 165.98 MB, resulting in a fairly large number of slots per host.  Assuming the CPU resources are sufficient, a host might have something like 190 slots to play with.

    Does this mean that HA would allow 190 reservation-less 4 GB VMs per host in the cluster before admission control would prevent any further VM power-ons?  If this is correct, it seems quite possible to massively over-commit, defeating the point of setting aside a host for failover, or a percentage of resources, etc., as each host in the cluster could be running at over 100% utilisation.

    Assuming that all virtual machines must stay without reservations, how could HA admission control be configured (host failures or percentage) so that a failover does not saturate the remaining hosts?

    EDIT - this link summarises my question a little more eloquently than I could put it; note the linked article is from 2008 and the figures may be out of date.

    http://vinternals.com/2008/12/ha-slot-size-calculations-in-the-absence-of-resource-reservations/

    Cheers.

    link added

    As shown in the link you posted (and in the VMware availability guide, page 26 - http://www.vmware.com/pdf/vsphere4/r41/vsp_41_availability.pdf), HA admission control uses default values to determine the slot size if no reservations are specified on the virtual machines. You can use the HA advanced options das.vmMemoryMinMB and das.vmCpuMinMHz to increase those defaults. This will increase the slot size and reduce the number of slots available in the cluster.

    Elisha
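
    To illustrate the effect of those advanced options (a sketch using the 32 GB host and 165.98 MB overhead figures from the example above; the 2048 MB value is an arbitrary assumption):

        # sketch: effect of das.vmMemoryMinMB on memory slot size and slot count
        host_mem_mb = 32 * 1024
        reservation_mb = 0                # no VM reservations
        vm_overhead_mb = 165.98           # per-VM overhead from the example above
        das_vm_memory_min_mb = 2048       # hypothetical das.vmMemoryMinMB value

        default_slot_mb = reservation_mb + vm_overhead_mb              # ~166 MB
        raised_slot_mb = max(default_slot_mb, das_vm_memory_min_mb)    # 2048 MB

        print(int(host_mem_mb // default_slot_mb))   # ~197 slots per host
        print(int(host_mem_mb // raised_slot_mb))    # 16 slots per host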

  • MPLS TE admission control

    Hi people,

    I'm pretty new to the topic of MPLS TE, and I have a question for which I can't find an answer in my reading.

    Suppose we have the topology in the attached diagram

    IS-IS is used as the IGP; MP-BGP for the VPNs; PE to CE is eBGP.

    Each link is a 10 Mbps link, of which 6 Mbps is usable by TE.

    Each node has a master tunnel, which has a default tunnel and an LLQ tunnel as members.

    The TE metric was left at the default value (the IGP metric).

    Every node has a tunnel to every other node (full mesh of tunnels).

    All tunnel interfaces have the config below:

    interface Tunnel10
     description TE RX (Master)
     bandwidth 10000
     ip unnumbered Loopback0
     tunnel mode mpls traffic-eng
     tunnel destination 169.254.1.2
     tunnel mpls traffic-eng autoroute announce
     tunnel mpls traffic-eng exp-bundle master
     tunnel mpls traffic-eng exp-bundle member Tunnel11
     tunnel mpls traffic-eng exp-bundle member Tunnel12
    !
    interface Tunnel11
     description TE RX (default)
     bandwidth 6000
     ip unnumbered Loopback0
     load-interval 60
     tunnel mode mpls traffic-eng
     tunnel destination 169.254.1.2
     tunnel mpls traffic-eng autoroute announce
     tunnel mpls traffic-eng priority 7 7
     tunnel mpls traffic-eng bandwidth 4000
     tunnel mpls traffic-eng path-option 10 dynamic
     tunnel mpls traffic-eng record-route
     tunnel mpls traffic-eng fast-reroute
     tunnel mpls traffic-eng auto-bw frequency 300 adjustment-threshold 5 max-bw 4000 min-bw 1000
     tunnel mpls traffic-eng exp default
    !
    interface Tunnel12
     description TE RX (LLQ)
     bandwidth 4000
     ip unnumbered Loopback0
     tunnel mode mpls traffic-eng
     tunnel destination 169.254.1.2
     tunnel mpls traffic-eng autoroute announce
     tunnel mpls traffic-eng priority 7 7
     tunnel mpls traffic-eng bandwidth sub-pool 2000
     tunnel mpls traffic-eng path-option 10 dynamic
     tunnel mpls traffic-eng record-route
     tunnel mpls traffic-eng fast-reroute
     tunnel mpls traffic-eng exp 5

    My questions are:

    1. With the MPLS TE metric left at the default (equal to the IGP metric), and assuming that the path to R9 via R8 is still the best path from the IGP's point of view:

    The LSP used to get from R1 to R9 is R1-R8-R9 and all of its available bandwidth has been used (6 Mbps), so we have 4 Mbit/s left on the R8-R9 link.

    R2-R8-R9 is the best path (best IGP/TE metric).

    When RSVP signals the LSP from R2 to R9, will the LSP be built over a less preferred path in terms of metric, or will admission control result in an admission failure?

    2. Is the sub-pool bandwidth dedicated to the master tunnel interface or to Tunnel 12?

    Thank you very much in advance for your help

    Kind regards

    Mehdi

    Hi Mehdi,

    I think yes, because when RSVP signals the path it considers all possible paths; in this case the best path does not have enough bandwidth, so yes, the LSP is still set up over the suboptimal path.

    Who is King

  • HA - Admission control - number of host failures the cluster can tolerate

    I currently have a 9-host cluster with 'Number of host failures the cluster can tolerate' set to 1 and 'Allow VMs to be powered on even if they violate availability constraints' checked.

    When I look at the HA box for the cluster in my VI client, it says 'Current failover capacity' 7 and 'Configured failover capacity' 1.

    Does the first value really limit my cluster to surviving just 1 host failure, or, if 2 or 3 of my hosts died, would the 'Allow VMs to be powered on even if they violate...' setting override it? I know you can set a maximum of 4 host failures, but doesn't that eat up resources on the other hosts?

    I think I know the answer, but for some reason I keep second-guessing myself.

    Thank you!

    > Does that mean that since I have 'Allow virtual machines to be powered on even if they violate availability constraints' set, it does not matter how many hosts fail? It will try to power on as many VMs as possible? Thank you

    Exactly

  • Distributed Version Control in JDeveloper 9.0.3.5

    Hello

    I am currently using JDeveloper 9.0.3.5 (Build 1453) with Oracle E-Business Suite 11.5.10 applications for OAF development.

    From what I've googled so far, all signs point to 'NO', but is it possible to use a distributed version control system like Git or Mercurial with this version of JDeveloper? The only options I see are CVS, Rational ClearCase and Oracle9i SCM. Currently my company is not using any version control system at all. Which version control system / workflow would you suggest?

    At the time of 9.0.3, there was no such thing as Git.

    We support Git in newer versions of JDeveloper.

    For version 9, the best option would be to use the free CVS version management.

    Another option is to use Tools -> External Tools to define calls to the Git command line, passing the files you want to check in and out as parameters.

  • Admission policies

    What are the admission control policies?

    Thank you

    Prashant

    vCenter Server uses admission control policies to ensure that sufficient resources are available in a cluster to provide
    failover protection and to ensure that virtual machine resource reservations are respected.

    See http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-501-availability-guide.pdf for more information
