vCenter - DRS question

Hi guys,

Just FYI, I am new to the organization... so I'm just trying to get a handle on their environment.

I noticed an informational warning that there is an unbalanced load... vSphere DRS has been set to "Fully automated", but only some machines have been added to a DRS group. Can you confirm for me whether machines must be assigned to an appropriate group before DRS will take effect?

Also, what do you guys set your target host load to? Ours is currently set to <= 0.035.

Thank you

Stuart.

Hi Stuart,

Virtual machines do not need to be assigned to a specific group for DRS to balance the load across the hosts in a DRS cluster. DRS is enabled at the cluster level, and the cluster becomes the load-balancing domain for DRS. This means that - unless specified otherwise - all virtual machines within the cluster are valid candidates for load balancing.

It is recommended to leave the migration threshold at moderate for DRS load balancing. The value you mention is the target host load standard deviation, and it is the target DRS tries to reach. THLSD and CHLSD are pretty advanced topics; if you want to know more, I recommend picking up one of the books on vSphere clustering that Duncan Epping and I co-wrote.
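
To put some rough numbers on that (purely illustrative, not VMware's actual algorithm): DRS computes a normalized load for each host, takes the standard deviation across hosts (the current host load standard deviation, CHLSD), and treats the cluster as balanced when that value is at or below the target (THLSD) derived from the migration threshold - which is where a figure like <= 0.035 comes from. All numbers in the sketch below are made up.

    # Illustrative only -- not VMware's internal algorithm. It sketches the idea
    # behind CHLSD vs. THLSD: the cluster counts as balanced when the standard
    # deviation of per-host load is at or below the target (e.g. <= 0.035).
    from statistics import pstdev

    def host_load(vm_entitlements_mhz, host_capacity_mhz):
        # Normalized load of one host: sum of VM entitlements over host capacity.
        return sum(vm_entitlements_mhz) / host_capacity_mhz

    # Hypothetical 3-host cluster (entitlements and capacities in MHz).
    loads = [
        host_load([2000, 1500, 3000], 24000),
        host_load([1000, 800], 24000),
        host_load([4000, 2500, 2200], 24000),
    ]

    chlsd = pstdev(loads)   # current host load standard deviation
    thlsd = 0.035           # target host load standard deviation

    print(f"CHLSD = {chlsd:.3f}, THLSD = {thlsd}")
    if chlsd > thlsd:
        print("Imbalance detected; DRS would start evaluating migrations.")
    else:
        print("Cluster is considered balanced.")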

Now, back to your problem: there is an imbalance being reported and DRS is not able to balance the load. What is the exact warning?

Tags: VMware

Similar Questions

  • DRS question

    One of my DRS clusters is set to "Partially automated" and often I notice under the Summary tab that it says the load is out of balance and gives a recommendation for which VM to move for load balancing.  My question is, if I don't move it, aside from performance, would that influence anything else? Should I change the DRS config to "Fully automated"... any suggestions would be great--thanks.

    I have never had problems with moving VMs, SQL servers included.  If you are worried about it, you can keep specific VMs from being moved by changing the cluster settings: under Cluster Settings, VMware DRS, Virtual Machine Options, set the individual virtual machine to Manual.

  • Add production ESX hosts to a cluster

    Hi all

    I did some research in the admin guides and community forums, and I'm fairly sure I know what to do, but I would really appreciate a sanity check here because the changes I'm making are in a production environment:

    I have a campus that contains two ESX hosts that are managed using vSphere and connected to a SAN. vMotion works very well, and performance is very good (although the resources of the two boxes are fairly fully used). However, I recently realized that I'd neglected to set up an HA and DRS cluster.  I want to remedy that.

    I created the cluster with these specs:

    • both HA and DRS enabled
    • DRS left fully automated
    • power management left off
    • host monitoring and admission control enabled
    • default settings kept for virtual machine behavior
    • VM monitoring disabled
    • EVC enabled
    • swap file stored with the virtual machine

    I think the next step would be to add each ESX host in turn and merge its resources into the cluster. However, here are a few questions:

    • How do you assess the risk factor of doing this in a production environment (1 = perfectly safe, a proven scenario; 5 = you are out of your bloody mind, don't do it)?
    • Should I be triple-checking the SAN snapshots and planning downtime for the servers, or can this be done live without any major qualms?
    • Am I right in assuming that this will improve performance as well as provide better resilience for the campus, or should I expect a decrease in performance?

    Thank you very much in advance for your advice!

    Hey red,

    Addressing your particular situation, I would say yes to both questions.  HA admission control is there to help you.  It ensures there are enough resources in the cluster to run the current load, plus whatever the load will be after an HA event.  50% is close to the default (though it is really based on slot size) in a two-host environment when "host failures the cluster tolerates" is set to 1, and 25% in a two-host environment when the "percentage of cluster resources reserved as failover spare capacity" policy is left at its default value.

    If you have more virtual machines running than reserving 50% of your cluster resources allows for (which it sounds like you do), then you have the option of "tiering" your virtual machines and giving them restart priorities for an HA event.  To tier them, you'll want DRS enabled (it can be set to Manual automation) so you can use resource pools, and you will need to configure your virtual machine options under your HA settings.  You'll want to pay attention to the VM restart priority here.

    I suggest you take a look at Duncan Epping's blog http://www.yellow-bricks.com/ and Frank Denneman's http://frankdenneman.nl/blog. The two of them are pretty much the definitive source for HA and DRS questions and advice.

    Cheers,

    Mike

    http://VirtuallyMikeBrown.com

    https://Twitter.com/#!/VirtuallyMikeB

    http://LinkedIn.com/in/michaelbbrown

    Note: Epping and Denneman explain that the amount of resources reserved by default when you use the "host failures the cluster tolerates" policy is just enough to power on the virtual machines.  This resource reservation does not, by default, account for current, average, or future load.  If you want to manipulate this behavior, modify the memory and CPU reservations, which are the numbers used to calculate the slot size.
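
    To make that note a bit more concrete, here is a simplified sketch of how slot-size admission control behaves (not the exact vSphere algorithm; the default values and all numbers below are assumptions for illustration only):

        # Simplified illustration of HA slot-size admission control -- not the
        # exact vSphere algorithm. The slot is sized by the largest CPU and
        # memory reservation (plus memory overhead) among powered-on VMs.
        vms = [
            {"cpu_res_mhz": 500,  "mem_res_mb": 1024},
            {"cpu_res_mhz": 0,    "mem_res_mb": 0},     # no reservation -> default applies
            {"cpu_res_mhz": 2000, "mem_res_mb": 4096},
        ]

        DEFAULT_CPU_MHZ = 32    # assumed default when a VM has no CPU reservation
        MEM_OVERHEAD_MB = 100   # assumed per-VM memory overhead

        slot_cpu_mhz = max(max(vm["cpu_res_mhz"], DEFAULT_CPU_MHZ) for vm in vms)
        slot_mem_mb = max(vm["mem_res_mb"] + MEM_OVERHEAD_MB for vm in vms)

        hosts = [
            {"cpu_mhz": 24000, "mem_mb": 98304},
            {"cpu_mhz": 24000, "mem_mb": 98304},
        ]
        slots_per_host = [min(h["cpu_mhz"] // slot_cpu_mhz, h["mem_mb"] // slot_mem_mb)
                          for h in hosts]

        print(f"Slot size: {slot_cpu_mhz} MHz / {slot_mem_mb} MB")
        print(f"Slots per host: {slots_per_host}")
        # Roughly speaking, with "host failures tolerated = 1" the slots of the
        # largest host are held back, so only the remaining slots are available
        # for powering on VMs.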

    Post edited by: VirtuallyMikeB

  • How do I restart the vCenter 5.5 appliance

    I currently have two ESXi servers on VMware vSphere 4.x Essentials and am in the process of upgrading to 5.5.

    I've already updated my vCenter server to the new vCenter appliance, which I have running on one of the servers just fine.

    I have the new vSphere 5.5 client installed and it works very well also.

    My question is this:

    If I shut down the ESXi server that currently runs the vCenter appliance, how can I get the vCenter appliance restarted again when I bring the server back up?

    Is there a way to do this? I don't want to be down for more than the time needed for the maintenance I have to do on the server.

    I fear that I will get locked out and not be able to get the vCenter appliance started, as there would be no way to do it.

    I would appreciate all your comments and thank you in advance.

    The vCenter Server Appliance can be powered on from the ESXi host like any other virtual machine. Just use the vSphere Client to connect directly to the host on which the VCSA is registered and start it. Just make sure that you have both versions of the vSphere Client installed on your system, one for the ESXi 4.x hosts and the other corresponding to your version of vCenter Server.
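
    If you would rather script it than click through the vSphere Client, something along these lines should also work with pyVmomi (just a sketch; the host name, credentials, and the appliance VM name "vcsa" are placeholders, and in production you would validate the SSL certificate instead of disabling verification):

        # Sketch: connect straight to the ESXi host (not vCenter, which is still
        # down) and power on the vCenter Server Appliance VM. Placeholder names
        # throughout.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()   # lab shortcut; don't do this in prod
        si = SmartConnect(host="esxi01.example.com", user="root",
                          pwd="secret", sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            for vm in view.view:
                if vm.name == "vcsa":            # whatever your appliance VM is called
                    vm.PowerOnVM_Task()
        finally:
            Disconnect(si)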

    André

  • Serious EVC issue in a DRS-enabled cluster after adding Dell M630 blade servers

    Hi guys,

    I hope you will be able to help me with this strange issue that I'm experiencing right now.

    A couple of details first on our environment:

    • vCenter 5.0 U3
    • ESXi 5.0.0 - 2509828
    • 6-host DRS-enabled cluster
    • 4 hosts are Dell M620 blades with E5-26XX v2 processors
    • 2 hosts are Dell M630 blades with E5-26XX v2 processors
    • EVC mode is currently set to "Westmere"
    • All ESXi hosts are fully patched using Update Manager (VUM)

    Before adding the new M630 blades to the cluster we did not have any issues with DRS or EVC, apart from resource constraints, and we were able to migrate virtual machines back and forth between the 4 M620 blades correctly.

    I was able to add the M630 blades to the cluster without problems, and no errors or warnings were issued. A week or so later, I noticed more DRS deviations than desired on the cluster. On further inspection I noticed that quite a large number of virtual machines showed N/A beside EVC mode and I was not able to migrate them to any host in the cluster.

    Thinking it could be a bug of sorts, I powered off all the virtual machines in the cluster and ran a script that resets the CPUID to default. In addition, as a precaution, I restarted the vCenter server and made sure the database was working properly. The hosts synchronize their time with an NTP source and the time matches across the hosts.

    Before powering the VMs back on, I also rebooted the M630 servers and inspected the BIOS settings to ensure that virtualization and NX/XD are enabled. With all virtual machines still powered off, I was able to change the EVC level to any available setting without warnings.

    When everything was turned back on, I noticed strange behavior in the cluster.

    Virtual machines that are running on the M620 blades show the right EVC level and I am able to migrate them to all servers in the cluster. Once they migrate to the M630 blades, their EVC state changes to N/A and I am unable to migrate them anywhere, not even to the other M630 blade in the cluster.

    The error that I am shown during the verification step is:

    The operation is not allowed in the current connection state of the host. Host CPU is incompatible with the virtual machine's requirements at CPUID level 0x1, host 'ecx' register: 0000:0010:1001:1000:0010:0010:0000:0011; required: x110:x01x:11x1:1xx0:xx11:xx1x:xxxx:xx11. Mismatch detected for these features: * General incompatibilities; see KB 1993 for possible solutions.
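
    For what it's worth, the "host" and "required" strings in that message are just bit patterns for the CPUID leaf 0x1 ECX register, where "x" means the bit doesn't matter. A small sketch of how you could compare them yourself (illustrative only, using the strings from the message above):

        # Compare the host's CPUID ECX bits against the "required" pattern from
        # the vMotion compatibility error. 'x' in the pattern means "don't care".
        host_bits = "0000:0010:1001:1000:0010:0010:0000:0011".replace(":", "")
        required  = "x110:x01x:11x1:1xx0:xx11:xx1x:xxxx:xx11".replace(":", "")

        mismatches = [
            31 - i  # ECX bit number, assuming the leftmost character is bit 31
            for i, (h, r) in enumerate(zip(host_bits, required))
            if r != "x" and h != r
        ]

        print("Mismatched ECX bits:", mismatches)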

    Digging deeper in my investigation, I came across this KB: 1034926

    I checked the M620 and M630 user guides, and the parameters mentioned in the KB should be enabled by default. Unfortunately, I have no way to check the BIOS settings because I cannot put the M630 blades into maintenance mode without powering off the virtual machines running on them, and another maintenance window one day after the previous one is going to be frowned upon. In addition, the error is not the same as the one in the KB, and my goal is to collect as much information as possible before entering another maintenance window requiring downtime.

    I hope you will be able to provide some new perspectives or open my eyes to things that I may have missed.

    Thanks in advance.

    The processor in this system is an E5-2660 v3, not an E5-26xx v2.  ESXi 5.0 doesn't know how to use the FlexMigration features of this processor.  I think the first version of ESXi to support this processor is ESXi 5.1 U2.

    Unfortunately, I do not see a workaround for ESXi 5.0.

  • Strange DRS issue

    We use vCenter v5.1 U2

    We have 18 hosts in our dev cluster. We are a mixed Windows and Linux environment. Recently, we decided to start testing DRS rules that would keep all Windows guests on 5 hosts. We created a host group containing 5 hosts and then created a VM group containing all the Windows guests.

    Then we created a DRS rule that says all Windows guests must run on the "Windows Hosts" group. Within 5 minutes, every guest had migrated to the correct hosts except one.

    This particular guest (A) has a Linux counterpart (B) - there is another rule stating that these two machines must run on the same host. For some reason, every 5 minutes (indefinitely) they would both migrate to a new host, but it was never one of the 5 hosts marked for "Windows guests". Finally, I manually migrated guest A to one of the appropriate hosts, B eventually followed, and that was the end of that.

    If I migrate A back off the group, B will follow and the two will start jumping around again, but they never make it onto a suitable host. If both A and B are on a Windows host (we don't mind Linux guests being on the Windows hosts) and I migrate the Linux guest (B) off, it will come back and join the Windows guest (A) promptly. That tells me it is honoring the VM-to-host rule. But for some reason, once it is off one of the appropriate hosts it can never find its way back.

    We have two other guests (C and D), also a Windows and a Linux guest with a VM-to-VM affinity rule. They behave in the EXACT SAME WAY as A and B!

    What is going on? Is this a bug? Can someone point me to some documentation or an article that describes a similar finding? I'm having a hard time researching it.

    Thank you!

    FWIW: we are in no way resource-constrained on any of the hosts in question. All the storage and VLANs in use are presented and available to all hosts. The guests work perfectly when migrated manually to the appropriate hosts; they just can't find their way there via DRS.

    Just a follow-up to this...

    I opened a case with VMware and they set up a lab with my exact same build # and configuration, and they could not duplicate my results. For now, we just came up with the workaround of adding the second machine to the larger VM group (in my scenario, that means adding the Linux VM to the "windows servers" DRS group). I hope this issue will go away when we're finally off this build #.

  • Help with a DRS practice question

    I'm going through the self-paced review and having a hard time with one question, and I was wondering if someone can help me out with it. I have attached the question to this thread.

    Hi friend

    True

    False

    True

    True

    False

    For the last one, reference: http://frankdenneman.nl/2010/02/15/impact-of-host-local-vm-swap-on-ha-and-drs/

    Let me know if you need clarification on any point. Also, let me know if you disagree with any of them.

  • HA and fully automated DRS question

    I have a vCenter cluster enabled for HA (tolerates 1 host failure, the default) and DRS (fully automated).

    Most of my VMs have 2 or 4 GB of memory. Now I need to create a virtual machine with 32 GB of memory.

    If DRS load-balances my cluster, there will be no single host with 32 GB of memory available, even if the cluster as a whole has more than 32 GB of memory available.

    My question is (without getting into slot size calculations etc.): if the host where the 32 GB VM runs fails and HA restarts the virtual machines on the remaining hosts, will DRS balance the cluster to allow the 32 GB VM to start?

    In vSphere 4.1 and later, HA works with DRS to "defragment" hosts so that your large VM can be restarted. It is a best-effort kind of activity and is not guaranteed to work. I'm not sure whether the same is true in 4.0.
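
    A toy illustration of why that "defragmentation" is needed in the first place (all numbers made up): the cluster can have plenty of free memory in aggregate while no single surviving host has 32 GB free, so HA has to ask DRS to consolidate free space before the restart can succeed.

        # Toy example: aggregate free memory is sufficient, but no single host
        # can take the 32 GB VM, so DRS must migrate smaller VMs to open up room.
        free_mem_gb = [20, 18, 15]   # free memory on each surviving host (made up)
        vm_needs_gb = 32

        total_free = sum(free_mem_gb)
        fits_on_one_host = any(free >= vm_needs_gb for free in free_mem_gb)

        print(f"Total free: {total_free} GB, largest single host: {max(free_mem_gb)} GB free")
        if total_free >= vm_needs_gb and not fits_on_one_host:
            print("Enough capacity overall, but hosts must be 'defragmented' "
                  "(VMs migrated) before the 32 GB VM can restart.")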

    For more details, I suggest Duncan Epping's post on the subject; start with "How does HA calculate how many slots are available per host?"

    http://www.yellow-bricks.com/VMware-high-availability-deepdiv/

  • VMware ESX cluster DRS question

    Hello

    Just got a new question about VMware DRS and I need answers from you guys.

    - What happens with VMware ESX DRS if all ESX server nodes in the cluster are running at full resource utilization? Will VMware DRS migrate a virtual machine (which does not receive the amount of resources it needs / is configured for) to another ESX host?

    According to the doc - http://www.vmware.com/pdf/vi3_35/esx_3/r35u2/vi3_35_25_u2_resource_mgmt.pdf - DRS vMotions virtual machines based on the configured migration threshold setting, and it considers a virtual machine for migration if it does not receive the resources it requires.

    NUTZ

    VCP 3.5

    (Preparation for VCP 4)

    I agree with Troy and Matt - and would add that even if it tried to vMotion a virtual machine, it would be unable to, because the way you have described the cluster there are no resources available to perform the vMotion, and vMotion requires available resources to complete successfully.

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful.

  • Question on the DRS host affinity "should" rule

    Hello experts,

    When using a "should run on" rule, VMs can end up running on a host that is not in the specified DRS host group.

    I would like to know under what conditions VMs run on a host outside the specified DRS group.

    I checked the following document, but it doesn't say anything about this:

    http://pubs.VMware.com/vSphere-4-ESX-vCenter/topic/com.VMware.vSphere.ResourceManagement.doc_41/using_drs_clusters_to_manage_resources/c_vm_host_drs_rules.html

    Kind regards

    Kiyo

    Welcome to the community,

    DRS, DPM, and HA together are a complex topic and there are many things that are taken into account. "Should" (preferential) rules can be violated by DRS if necessary, and they can also be violated by HA, because HA is not aware of these rules. If DRS is set to apply at least priority-2 recommendations, it will correct these HA-based violations if possible.
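
    As a rough mental model of the "should" semantics (this is not VMware's placement code, just an illustration): the preferred host group is used when it has capacity, and the rule is quietly violated when it does not.

        # Toy placement logic illustrating a "should run on" rule: prefer hosts
        # in the DRS host group, but fall back to any host with capacity.
        def place_vm(vm_mem_gb, hosts, preferred_group):
            candidates = [h for h in hosts
                          if h["name"] in preferred_group and h["free_gb"] >= vm_mem_gb]
            if not candidates:               # "should" rule: violate if necessary
                candidates = [h for h in hosts if h["free_gb"] >= vm_mem_gb]
            return max(candidates, key=lambda h: h["free_gb"])["name"] if candidates else None

        hosts = [
            {"name": "esx1", "free_gb": 4},
            {"name": "esx2", "free_gb": 2},
            {"name": "esx3", "free_gb": 40},
        ]

        # The preferred group lacks room for a 16 GB VM, so the rule is violated.
        print(place_vm(16, hosts, preferred_group={"esx1", "esx2"}))   # -> esx3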

    For very detailed information on this complex (and interesting) topic, I can really recommend "vSphere 5.0 Clustering Technical Deepdive" by Duncan Epping and Frank Denneman.

    André

  • DRS question

    Hi all

    I have worked with vSphere for a while and have started using some of the more advanced features to get the most out of our hardware. I am trying to configure DRS resource pools so my virtual machines can use idle resources.

    For example:

    I have 2 servers, quad-core Xeon with 22 GB of RAM in each.

    They are in an HA/DRS cluster, and a DRS resource pool has been created.

    I have 4 virtual machines, each configured with 1 vCPU and 4 GB of RAM; DRS is set to automatic, so it puts 2 on each server.

    Currently, the VMs just sit at the 4 GB of RAM that was assigned to them, so there is 10 GB or more of free RAM on each server. How do I configure my pool so that the free RAM can be used by the virtual machines if necessary, but so that if a host fails and the VMs from the failed server fail over, they can take the RAM that was sitting idle?

    Cheers

    Garth

    Hello

    In your scenario, 1 ESX host is probably more than sufficient to handle the load of all your VMs, so DRS (even when set as aggressively as possible) will not need to vMotion any virtual machines.

    Let's reason it through assuming you put all 4 of the 4 GB VMs on 1 ESX host - if they all run at 80% memory utilization (rare, but possible), you are still using only 12-ish GB of memory - DRS sees 22 GB per ESX host and sees no need to migrate virtual machines.

    Resource pools are there for cases where your ESX host is constrained, to help decide which virtual machines to take resources from in order to feed the other VMs.
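
    Putting the reasoning above into numbers (a back-of-the-envelope sketch using the figures from your scenario): DRS and resource pool shares only come into play when demand approaches a host's capacity, which your workload doesn't.

        # Back-of-the-envelope check: even with all four VMs on one host at 80%
        # memory utilization, the host is nowhere near contention.
        vm_configured_gb = 4
        vm_count = 4
        utilization = 0.8
        host_mem_gb = 22

        demand_gb = vm_count * vm_configured_gb * utilization
        print(f"Worst-case demand on one host: {demand_gb:.1f} GB of {host_mem_gb} GB")
        print("Contention!" if demand_gb > host_mem_gb else
              "No contention, so DRS and resource pool shares never kick in.")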

  • How to remove a VM from inventory in vCenter when the host is down

    Hi guys,

    We had a power outage today where one of our physical ESX hosts died (we have a 2-host VM cluster).  HA/DRS unfortunately did not work, as there were insufficient resources to automatically migrate the virtual machines on the failed host over to the remaining host (a separate question that I won't address here).

    When the host went down, the virtual machines were left showing as running, but were in a disconnected state (see attachment).  I read that you can reallocate virtual machines to another host by "navigating to the datastore, right-clicking on the .vmx file, clicking Add to Inventory, and selecting the host" (ref: http://communities.VMware.com/thread/173314).

    However, when I browse to the datastore, the "Add to inventory" option is greyed out as well.  I suspect (please confirm) that this is because the virtual machine in question was still in the "running" state.  This is problematic, because in that disconnected/running state I could not shut it down from the vCenter GUI (or from the console, as the host was dead).

    Can someone tell me how - if it is possible - I can migrate the virtual machine to another host while in the situation described above?

    If this is not possible, then for future situations where a host is lost, would I be better off having the virtual machines shut down? (Would this allow me to reassign the VMs, perhaps?)

    Cheers

    The virtual machine will remain in the host's inventory and will only disappear once you remove the host from the vCenter inventory. What you would then do - at least that's what I would probably do - to be able to power on the virtual machines again is to register the virtual machines on the other host in vCenter Server (right-click the VM's .vmx file and choose "Add to inventory"). Then, after fixing the failed host and before adding it back to vCenter Server, connect directly to that host and delete the already "moved" virtual machines from its inventory.

    What you also need to do after registering the VMs on the other host is to check any VM-based backup applications (if used), because the virtual machines will receive new IDs.
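
    If you ever need to script that re-registration step (for example, for a larger number of VMs), a rough pyVmomi sketch along these lines could help. All names and paths here are placeholders; it does the same thing as the "Add to inventory" action in the client:

        # Sketch: register an orphaned .vmx on a surviving host via vCenter.
        # All names/paths are placeholders for illustration.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect

        ctx = ssl._create_unverified_context()   # lab shortcut only
        si = SmartConnect(host="vcenter.example.com",
                          user="administrator@vsphere.local",
                          pwd="secret", sslContext=ctx)
        try:
            content = si.RetrieveContent()
            datacenter = content.rootFolder.childEntity[0]   # first datacenter (assumes no folders)
            cluster = datacenter.hostFolder.childEntity[0]   # first cluster
            surviving_host = cluster.host[0]                 # pick the live host

            # Equivalent of right-clicking the .vmx in the datastore browser
            # and choosing "Add to inventory".
            task = datacenter.vmFolder.RegisterVM_Task(
                path="[datastore1] myvm/myvm.vmx",
                asTemplate=False,
                pool=cluster.resourcePool,
                host=surviving_host,
            )
            print("Register task submitted:", task)
        finally:
            Disconnect(si)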

    André

  • Locale ID / local egress question

    Dear team

    Referring to https://pubs.vmware.com/NSX-62/index.jsp?topic=%2Fcom.vmware.nsx-cross-vcenter-install.doc%2FGUID-98B1347A-2961-4E2A-B6AC-2C38FD19D127.html

    I asked this question a while back - so I'll try to clarify the essentials again.

    Locale ID and local egress, IIRC, make sense in a multi-site/multi-VC configuration where workloads can move between several locations, insofar as they are connected to a universal LS.

    What I'm not 100% grasping here is whether or not we must pay attention to site prefix "affinity" - i.e. if I move workloads on the same subnet from one site to the other and back, I can only imagine something like /32-style host routing underlying the locale ID feature. Even if that would work from an ESG (border router) routing point of view - how would you ensure that the physical routers to the north understand the fact that we sometimes have different metrics for the same subnet, or host routes?

    For regular routing we could say "who cares", but if your ESG is configured for FW'ing, this cannot be ignored, of course.

    Can you help me here?

    Kind regards

    Rik

    Hello

    I have been thinking about scenarios like this, talked to our network guys, and the only thing we came up with is operationally "pinning" subnets to an owner site.

    You can set up DRS rules for this (in a stretched cluster) and/or write some operational procedures for your staff (maybe even automate the VM placement to reduce the probability of human error).

    For example, you have subnet 192.168.aaa.0/24 (VNI 5aaa) and 192.168.bbb.0/24 (VNI 5bbb) and decide that the VMs connected to VNI 5aaa should run in site A, and the VMs connected to VNI 5bbb in site B.

    Then you announce these networks to the outside world in the following way (using OSPF or BGP or whatever):

    • the preferred route to 192.168.aaa.0/24 is via the site A border router; the non-preferred route is through site B
    • the preferred route to 192.168.bbb.0/24 is via the site B border router; the non-preferred route is via site A

    This way you have routing resiliency against a site failure, but if your operational "pin" is somehow broken (for example, a VM connected to VNI 5aaa ends up in site B), you get asymmetric traffic that will most likely be dropped by your ESGs (because they are firewalls).

    I guess there is simply no "simple and clean" solution for this; you have to compromise somehow (perhaps leave firewalling at the DFW level only, so that traffic can be as asymmetric as it happens to be).

    Maybe you should ask yourself first of all what is more important in this scenario: traffic locality, the ability to move VMs between sites, or something else.

    Another possibility is described in this great blog post: http://www.routetocloud.com/2016/02/nsx-dual-activeactive-datacenters-bcdr. Look in the section on active-active data centers with mirrored ESGs. But there is no traffic locality in that solution.

    And regarding local ingress, I stumbled onto another great post a while ago: http://networkinferno.net/ingress-optimisation-with-nsx-for-vsphere - it's not supported of course, but I hope we will see a feature like this before long.

  • DRS network

    Which network does DRS use on ESXi - the management network or the vMotion network?

    Please clarify the question!

    DRS in fact uses vMotion; the only difference is that the migrations are initiated by vCenter Server as needed.

    André

  • Firewall rules for NSX across 2 vCenters

    I have 2 vCenters, 1 in each physical site, in linked mode, with NSX (DFW component only) running on both of them. In each site, the ESX hosts in the clusters where I installed NSX are behind a firewall, so I found this doc to get the required ports:

    NSX 6.2 - VMware vSphere Documentation Center

    I now have the ports to open for the ESX hosts / NSX Manager / vCenter server on each site, i.e. rules allowing NSX Manager / vCenter / ESX hosts to communicate within site 1 only.

    I have similar firewall rules for site 2.

    My question is, do I need firewall rules to allow the site 1 NSX Manager to communicate with the vCenter and ESX hosts in site 2, and vice versa?

    Thanks for any help.

    Take a look at the appendix in the latest version of the hardening guide - it was updated with some specifics for cross-VC.  You need the primary and secondary NSX Managers to communicate for universal synchronization, both Managers to communicate with the Universal Controller Cluster (site 1), and the hosts in sites 1 and 2 to be able to communicate with the UCC, but I don't think you need your site 2 vCenter/hosts to communicate with the site 1 NSX Manager, if I read it correctly.

    NSX-v 6.2.x Security Hardening Guide (published version 1.6)
