DRS issues

Hello, I have a client who has 2 physical servers and a SAN.  They run vSphere 5.5.  I was not the one who set up the environment, but I am now responsible for it.  I have been going through things and noticed that a DRS cluster is configured.  DRS is set to fully automated.  I then noticed that about 5-10 times per day a virtual machine is migrated from one host to another.

Both servers are specced identically: Dell PowerEdge R620, two E5-2609 Xeon processors in each server, 146 GB of memory in each server.  About 17 VMs are distributed between the two servers.

I'm trying to figure out why DRS keeps moving VMs.  It is not always the same virtual machine, and it does not happen at the same time every day.

There is more than enough memory; most of the VMs are allocated around 4 GB, some 8 GB, so we are not even using half of the 146 GB in each physical server.

Then I looked at the CPU on the host at the time of the migration and it was not even at 30%.  I also watched the CPU of the virtual machine that was moved, and it is basically the same story, about 20% CPU.  So it is not a CPU problem.

Network usage seems to be low to non-existent at the time of the moves.

One thing I did notice is that there are virtual machines with 4 vCPUs assigned to them.  I know that in VMware 3 CPU co-scheduling was a problem, but VMware 5 handles it a lot better.  Could DRS be moving the virtual machines around because of CPU scheduling issues?

When I check Tasks and Events, it just shows that DRS migrated the virtual machine; it does not say why.  Is there somewhere else I should check for this info?

I'm not saying that the virtual machine being moved is a problem, because there don't seem to be any problems with it.  But it does seem excessive that DRS moves things around so much, and I would just like to understand why.

Sorry, I wrote a lot - thanks for any input

Mike

Can you tell me what your DRS migration threshold is set to?
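
In the meantime, here is a minimal PowerCLI sketch for checking it yourself and for pulling the DRS migration events; the cluster name is a placeholder, and the Tasks & Events view in the client shows the same events:

    # Hedged PowerCLI sketch; 'MyCluster' is a placeholder for your cluster name.
    # Read the DRS configuration, including the migration threshold (VmotionRate, 1-5):
    $cluster = Get-Cluster -Name 'MyCluster'
    $cluster.ExtensionData.ConfigurationEx.DrsConfig |
        Select-Object Enabled, DefaultVmBehavior, VmotionRate

    # Pull recent DRS-initiated migration events for the cluster:
    Get-VIEvent -Entity $cluster -MaxSamples 1000 |
        Where-Object { $_ -is [VMware.Vim.DrsVmMigratedEvent] } |
        Select-Object CreatedTime, @{N='VM'; E={$_.Vm.Name}}, FullFormattedMessage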

Tags: VMware

Similar Questions

  • Any way around the 512 GB cap on virtual disks? / DRS issue?

    Hi all

    My first question is this:

    I have a SAN with about 1.7 TB free, however when I go to create a virtual disk for one of my VMs it caps out at 512 GB. Is there any way around this cap, other than extending the partition within Windows using 2 VMDKs?

    My second question: when you have DRS enabled but set to manual for all your machines, is there any way to prevent it from asking which server you want to start the virtual machine on every time you power it on?  Even when I have the virtual machine created on one of the ESX servers in the cluster and power it on from that server, it always asks which server I want to power the VM on.

    (1) Yes, but not after you have created the file system on the LUN - choosing a new block size for the VMFS is a destructive operation.
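
    As a quick check, a hedged PowerCLI sketch for seeing which block size your VMFS datastores were formatted with (the block size is what determines the maximum VMDK size):

    # Hedged sketch: list VMFS datastores and their block size in MB;
    # on VMFS-3 a 2 MB block size corresponds to a 512 GB maximum file size
    Get-Datastore | Where-Object { $_.Type -eq 'VMFS' } |
        Select-Object Name, @{N='BlockSizeMB'; E={$_.ExtensionData.Info.Vmfs.BlockSizeMb}}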

    (2) It is normal, expected behavior.  It asks because the best server to power the virtual machine on may not be the current one.
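
    If the prompt gets annoying, a hedged PowerCLI sketch of one alternative (the cluster name is a placeholder): switching the cluster to partially automated lets DRS pick the host at power-on by itself while keeping migrations as recommendations only.

    # Hedged sketch; 'MyCluster' is a placeholder.
    # PartiallyAutomated = automatic initial placement, manual migration recommendations
    Set-Cluster -Cluster 'MyCluster' -DrsAutomationLevel PartiallyAutomated -Confirm:$false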

    -Matt

  • PLM licensing issues after a DRS restore on 10.5

    Shortly after upgrading to CUCM 10.5, we did a DRS recovery. After doing the recovery, none of the licenses work. I can see the .bin file, which still shows all the licenses, but they do not show up as installed under license usage.

    Tips or advice would be great. Please note, we run an offline system.

    When you restored, did you also rebuild the virtual machine that PLM is hosted on? If so, was the old MAC address defined as static, as recommended? If yes, make sure the new virtual machine uses the same MAC. If the old VM has been removed and you cannot see what the MAC was, you will need Cisco to rehost the license for you.
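
    If you do still know the old MAC, a hedged PowerCLI sketch for pinning it on the rebuilt VM (the VM name and MAC are placeholders; note that manually assigned MACs must fall in the 00:50:56:00:00:00 - 00:50:56:3F:FF:FF range):

    # Hedged sketch; 'PLM-VM' and the MAC address are placeholders
    Get-VM -Name 'PLM-VM' | Get-NetworkAdapter |
        Set-NetworkAdapter -MacAddress '00:50:56:3a:bb:cc' -Confirm:$false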

  • DRS and live migration issues with the guest operating system

    Hello

    Has anyone ever encountered problems with DRS or live migration where the guest OS in question crashes under processor peaks? This seems to happen with servers regardless of whether they are 64-bit or 32-bit; the concern is that machines using a lot of CPU or memory crash when migrated on the fly. We have 7 quad-core AMD BL465s running ESX 3.5 Update 1 with DRS and HA active, and the hosts have plenty of resources to offer.

    Thank you very much

    Rich

    Are you experiencing the problem described in this KB?

    http://KB.VMware.com/kb/1003638

    If you believe this or any other answer was helpful, please consider marking it as 'correct' or 'helpful'.

  • Unable to apply DRS resource settings on host: "The object has already been deleted or has not been completely created. This can significantly reduce the effectiveness of DRS."

    Unable to apply DRS resource settings on host. The object has already been deleted or has not been completely created. This can significantly reduce the effectiveness of DRS.

    The host is showing the above message; please help with this.

    I found the correct solution

    ESXi 5.1 U3 is the permanent fix for this issue

    U3 has the fix.
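
    A hedged PowerCLI one-liner to confirm which version and build each host is actually running:

    # Hedged sketch: check each host's ESXi version and build number
    Get-VMHost | Select-Object Name, Version, Build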

  • Serious EVC issue in a DRS cluster after adding Dell M630 blade servers

    Hi guys,

    I hope you will be able to help me with this strange issue that I am facing right now.

    First, a couple of details about our environment:

    • vCenter 5.0 U3
    • ESXi 5.0.0 - 2509828
    • 6-host DRS-enabled cluster
    • 4 hosts are Dell M620 blades with E5-26XX v2 processors
    • 2 hosts are Dell M630 blades with E5-26XX v2 processors
    • EVC mode is currently set to "Westmere"
    • All ESXi hosts are fully patched using Update Manager

    Before adding the new M630 blades to the cluster there were no issues with DRS or EVC, aside from resource constraints, and across the 4 M620 blades we were able to migrate virtual machines back and forth correctly.

    I was able to add the M630 blades to the cluster without problems and no errors or warnings were issued. A week or so later, I noticed more DRS deviations than desired on the cluster. On further inspection I noticed that quite a large number of virtual machines showed N/A beside the EVC mode and I was not able to migrate them to any host in the cluster.

    Thinking it could be a bug of sorts, I powered off all the virtual machines in the cluster and ran a script that resets the CPUID masks to default. In addition, as a precaution, I restarted the vCenter server and made sure the database was healthy. The hosts synchronize time with an NTP source and the time matches across hosts.

    Before powering the VMs back on, I rebooted the M630 servers and inspected the BIOS settings to ensure that virtualization and NX/XD are enabled. With all virtual machines still powered off, I was also able to change the EVC level to any available setting without warnings.

    When everything was powered back on, I noticed strange behavior in the cluster.

    Virtual machines running on the M620 blades show the right EVC level and I am able to migrate them to any server in the cluster. Once they migrate to the M630 blades, their EVC status changes to N/A and I am unable to migrate them anywhere, not even to the other M630 blade in the cluster.

    The error I am shown during the validation step is:

    The operation is not allowed in the current connection state of the host. Host CPU is incompatible with the virtual machine's requirements at CPUID level 0x1, host 'ecx' register bits: 0000:0010:1001:1000:0010:0010:0000:0011 required: x110:x01x:11x1:1xx0:xx11:xx1x:xxxx:xx11. Mismatch detected for these features: * General incompatibilities; see KB article 1993 for possible solutions.

    Digging deeper in my investigation, I came across this KB: 1034926

    I checked the M620 and M630 user guides and the parameters mentioned in the KB should be enabled by default. Unfortunately, I have no way to check the BIOS settings, because I cannot put the M630 blades into maintenance mode without powering off the virtual machines running on them, and another maintenance window one day after the previous one is going to be frowned upon. In addition, the error is not the same as the one in the KB, and my goal is to collect as much information as possible before requesting another maintenance window that requires downtime.

    I hope you will be able to provide some new perspective or open my eyes to things I may have missed.

    Thanks in advance.

    The processor in that system is an E5-2660 v3, not an E5-26xx v2.  ESXi 5.0 does not know how to use the FlexMigration features of this processor.  I think the first version of ESXi to support this processor is ESXi 5.1 U2.

    Unfortunately, I do not see a workaround for ESXi 5.0.
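
    For reference, a hedged PowerCLI sketch for comparing the cluster's EVC mode with the highest EVC mode each host can support (the cluster name is a placeholder):

    # Hedged sketch; 'MyCluster' is a placeholder
    $cluster = Get-Cluster -Name 'MyCluster'
    # Current EVC mode of the cluster
    $cluster.ExtensionData.Summary.CurrentEVCModeKey
    # Highest EVC mode each host can support
    Get-VMHost -Location $cluster |
        Select-Object Name, @{N='MaxEVCMode'; E={$_.ExtensionData.Summary.MaxEVCModeKey}}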

  • Strange DRS issue

    We use vCenter v5.1 U2

    We have 18 hosts in our dev cluster. We are a mixed Windows and Linux environment. Recently we decided to start testing DRS rules that would keep all Windows guests on 5 hosts. We created a host group containing the 5 hosts and then created a VM group containing all the Windows guests.

    Then we created a DRS rule that says all Windows guests must run on the "Windows Hosts" group. Within 5 minutes, every guest had migrated to the correct hosts except one.

    This particular guest (A) has a Linux counterpart (B) - there is another rule stating that these two machines must run on the same host. For some reason, every 5 minutes (indefinitely) they would both migrate to a new host, but it was never one of the 5 hosts marked for Windows guests. Finally, I manually migrated guest A to one of the appropriate hosts, B eventually followed, and that was the end of that.

    If I migrate A back off the host group, B will follow and the two will start jumping around again, but they never make it onto a suitable host. If both A and B are on a Windows host (we have no rule preventing Linux guests from being on Windows hosts) and I migrate the Linux guest (B) off, it will come back and join the Windows guest (A). That tells me DRS is honoring the affinity rule. But for some reason, once it is off one of the appropriate hosts it can never find its way back.

    We have two other guests (C and D); they are also a Windows and a Linux guest with a VM-to-VM affinity rule. They behave in the EXACT SAME WAY as A and B!

    What is going on? Is this a bug? Can someone point me to documentation or an article that describes a similar finding? I am having a hard time researching it.

    Thank you!

    FWIW: we are in no way resource constrained on any of the hosts in question. All the storage and VLANs in use are presented and available to all hosts. The guests work perfectly when migrated manually to the appropriate hosts; they just cannot find their way there via DRS.

    Just a follow-up to this...

    I opened a case with VMware and they set up a lab with my exact same build # and configuration, and they could not duplicate my results. For now, we just went with the workaround of adding the second machine to the larger VM group (in my scenario, adding the Linux VM to the "Windows servers" DRS group). I hope this issue will go away when we are finally off this build #.
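
    In case it helps anyone else, a hedged sketch of that workaround, assuming a PowerCLI version recent enough to include the DRS group cmdlets (the cluster, group, and VM names are placeholders):

    # Hedged sketch; names are placeholders and Set-DrsClusterGroup requires a newer PowerCLI
    Get-DrsClusterGroup -Cluster 'DevCluster' -Name 'Windows servers' |
        Set-DrsClusterGroup -VM (Get-VM -Name 'linux-guest-B') -Add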

  • Help with a DRS practice question

    I am going through the self-paced review and having a hard time with one question; I was wondering if someone could help me out with it. I have attached the question to this thread.

    Hi friend

    True

    False

    True

    True

    False

    Reference: http://frankdenneman.nl/2010/02/15/impact-of-host-local-vm-swap-on-ha-and-drs/

    Let me know if you need clarification on any point. Also, let me know if you disagree with any of them.

  • Adding hosts with existing virtual machines to a 'greenfield' DRS-enabled cluster

    I am currently involved in an ESXi 5.5 and vCenter project. The existing setup is 2 physical servers with everything redundant and 8 hot-swap SAS hard drive bays. Initially only 4 of the 8 bays were populated with hard drives. The hard drives were set up as an array, ESXi 5.0 was loaded, 4 virtual machines were created on each server, and all lived happily ever after.

    Now I would like to upgrade these servers to 5.5... as follows:

    I filled the remaining 4 bays on each server with hard disks and created a second array of sufficient capacity (twice the capacity of the 4 original disks). I shut down the servers, switched the boot device to the new array, and installed ESXi 5.5 on it. The old array also remains intact, so I can boot into ESXi 5.0 if I set the server to boot from the original array, or into ESXi 5.5 if I boot from the new array (both operating systems boot fine, are properly networked, configured in vCenter, etc.).

    When booted into 5.5, each host sees its own new array and also the original array, which shows up as a second attached datastore (which I actually want, to make simple VM migration from the old datastore to the new one possible); both are LSI disks, non-SSD, type VMFS5.

    Panic sets in when I boot both machines into 5.5 and the time comes to add the 5.5 hosts to a cluster (I also want to test Storage DRS and HA) and I reach the "Choose Resource Pool" step. I am scared to death that choosing the first option, "Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the host will be deleted.", will mean not only a reformatting of the new array that I want to add to the cluster, but also of the still-attached old array that holds the data I want to keep. I do not want to lose the data or virtual machines on the original array; I want to migrate them into a cluster of 2 ESXi 5.5 servers. I was really hoping to migrate the data to the new arrays on the new hosts and then repurpose the 2 original arrays (on both machines) as a third vStorage datastore.

    My questions:

    1. If I choose the option "Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the host will be deleted." with all the drives connected, will all my data be lost?

    2. If I pull the 4 original disks (5.0), use that option with only the new arrays connected (5.5), and then reconnect the old arrays after the hosts are added to the cluster, will the re-added arrays still get sucked in and the data deleted?

    3. Is choosing the second option, "Create a new resource pool for this host's virtual machines and resource pools. This preserves the host's current resource pool hierarchy.", a safe option? If this option works, does it matter whether I have my original arrays attached when adding the hosts to the cluster?

    Last point: from reading all the documents I could find, it seems strongly suggested to add hosts that have no virtual machines deployed, which is why I am going to great lengths to keep the new hosts as empty as possible, with basic 1-port networking, until the configuration is complete. Does it matter whether I migrate virtual machines (or add them as guests) to ESXi 5.5 before or after I add the hosts to the cluster?

    Any ideas or help would be greatly appreciated.

    I'd go with option C.

    VSAN, I would agree, has some steep requirements, but what they were aiming for is an almost enterprise-class SAN at a decent price by using SSDs as caching tiers; as you said, though, if you don't need it, I would keep going with an NFS NAS solution.

  • List of VMs in a DRS group

    In vSphere, you can right-click on a cluster, go to "Edit Settings > DRS Groups Manager > (select a group) > Edit" and view the list of virtual machines that are in the group.

    I am looking to get this same list with the SDK... I have been hunting for a way to do it but cannot find where in the API it might be. Any ideas on how I could get the list?

    I am happy to provide my code so far, but it just gathers simple cluster-related statistics and information; nothing related to this issue.

    It is nested in the ClusterConfigInfoEx data object.

    So you get your clusters and then look at the following:

    $cluster->{configurationEx}->{group}

    There are two types of groups, host and VM.  You can then check the type of the VM groups and their list of members.  You may also look at $cluster->{configurationEx}->{rule} if you want to see whether a rule is an affinity or anti-affinity rule.
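
    For anyone doing the same from PowerCLI rather than the Perl SDK, a minimal hedged sketch (the cluster and group names are placeholders):

    # Hedged sketch; 'MyCluster' and 'MyVmGroup' are placeholders
    $cluster = Get-Cluster -Name 'MyCluster'
    $group = $cluster.ExtensionData.ConfigurationEx.Group |
        Where-Object { $_ -is [VMware.Vim.ClusterVmGroup] -and $_.Name -eq 'MyVmGroup' }
    # The group holds VM MoRefs; resolve them back to VM names
    $group.Vm | ForEach-Object { (Get-VIObjectByVIView -MORef $_).Name }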

  • 2-host HA/DRS cluster

    Hi all

    I have a 2-host HA/DRS cluster and it seems that DRS is not working properly. When I look at the performance of the 2 hosts, one of them consumes 42 GB of its 45 GB of memory and the other consumes 26 GB of 45 GB. As far as I know, DRS should balance the load across the two hosts and automatically migrate VMs from one host to the other. Any ideas about this issue in my environment?

    Well, you can increase the migration threshold to the maximum (which has a direct impact on the target load deviation) so that DRS will vMotion virtual machines even for a slight performance gain. Second, you can also look at this advanced setting to limit the number of virtual machines per host, for vSphere 5.1 only:

    http://www.yellow-bricks.com/2012/10/01/limit-the-amount-of-eggs-in-a-single-basket/
    

    But I do not recommend taking the above measures unless your users are complaining about performance. I would trust DRS to take corrective action if the need arises.
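
    For reference, a hedged PowerCLI sketch of setting that advanced option (the cluster name and the value are placeholders, and it only applies to vSphere 5.1 as noted in the article):

    # Hedged sketch; 'MyCluster' and the value 10 are placeholders
    # vSphere 5.1 only: cap the number of VMs per host, per the linked article
    New-AdvancedSetting -Entity (Get-Cluster -Name 'MyCluster') -Type ClusterDRS -Name 'LimitVMsPerESXHost' -Value 10 -Confirm:$false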

  • DRS Reporting

    Hello

    I doubt I am the only one who has run into this so far with PowerCLI 5.0.

    With vSphere 4.1 (I think) we could start implementing host & VM groups and then use the advanced DRS capabilities to pair them with affinity or anti-affinity rules.  I found functions to create new groups and add to existing ones (thanks Arnim & Luc - http://communities.vmware.com/message/2040365 ), but I am having a hard time finding a report that could give me the contents of these advanced DRS rules (with the groups and the configuration of the associated rule) that we have in our environment.

    I had extremely bad luck with Onyx (it always crashes on me) so I'm wandering in the dark on this issue.  Thoughts?

    I created a function that should give you everything, rules and groups.

    function Get-DrsRuleAndGroup{
      param(
        [string]$ClusterName  )
    
      $vms = @{}
      $hosts = @{}
      $vmGroup =@{}
      $hostGroup = @{}
    
      $cluster = Get-Cluster -Name $ClusterName
      Get-VM -Location $cluster | %{
        $vms.Add($_.ExtensionData.MoRef,$_.Name)
      }
      Get-VMHost -Location $cluster | %{
        $hosts.Add($_.ExtensionData.MoRef,$_.Name.Split('.')[0])
      }
      $cluster.ExtensionData.ConfigurationEx.Group | %{
        if($_ -is [VMware.Vim.ClusterVmGroup]){
          $vmGroup.Add($_.Name,$_)
        }
        else{
          $hostGroup.Add($_.Name,$_)
        }
      }
    
      foreach($rule in $cluster.ExtensionData.ConfigurationEx.Rule){
        New-Object PSObject -Property @{
          Name = $rule.Name
          Enabled = $rule.Enabled
          Mandatory = $rule.Mandatory
          Type = &{
            switch($rule.GetType().Name){
            "ClusterAffinityRuleSpec" {"Affinity"}
            "ClusterAntiAffinityRuleSpec" {"AntiAffinity"}
            "ClusterVmHostRuleInfo" {"VmHostGroup"}
            }
          }
          VM = &{
            if($rule.Vm){
              [string]::Join('/',($rule.Vm | %{$vms[$_]}))
            }
          }
          VMGroup = $rule.VmGroupName
          VMGroupVM = &{
            if($rule.VmGroupName){
              [string]::Join('/',($vmGroup[$rule.VmGroupName].Vm | %{$vms[$_]}))
            }
          }
          AffineHostGroup = $rule.AffineHostGroupName
          AntiAffineHostGroup = $rule.AntiAffineHostGroupName
          HostGroup = &{
            if($rule.AffineHostGroupName -or $rule.AntiAffineHostGroupName){
              if($rule.AffineHostGroupName){$groupname = $rule.AffineHostGroupName}
              if($rule.AntiAffineHostGroupName){$groupname = $rule.AntiAffineHostGroupName}
              [string]::Join('/',($hostGroup[$groupname].Host | %{$hosts[$_]}))
            }
          }
        }
      }
    }
    
    Get-DrsRuleAndGroup -ClusterName MyCluster |
    Select Name,Enabled,Mandatory,Type,VM,VMGroup,VMGroupVM,AffineHostGroup,AntiAffineHostGroup,HostGroup | Export-Csv C:\Drs-Rules-Groups.csv -NoTypeInformation -UseCulture
    

    It is still an early version of the function, but it should provide the information you want.

  • ESXi 4.1 - DRS & vSwitch w/zero NICs

    In order to work around some IP addressing issues we came up with this solution...

    We implemented a new vSwitch network named 'VM' with ZERO physical NICs.

    We changed the NIC of each VDI VM to the 'VM' network.

    We installed and configured a Win 2008 R2 server.  Its first network adapter is connected to a vSwitch with 6 physical NICs connected to our internal network.  Its second network adapter is connected to the 'VM' network described above.

    We run DHCP on the 'VM' network.

    We configured routing of traffic between the two network adapters.

    I added static routes on the main router so that, from a different subnet, we can reach the 'VM' network.

    This configuration is running on our three hosts.  The 'VM' network on each host has a different IP subnet.  We also use DRS across the three hosts.

    I have a test VDI VM.  When I try to manually migrate this virtual machine to one of the other hosts, I get a warning that one network adapter is a "virtual intranet", which prevents a live migration.  If I shut down the virtual machine and do the migration, I still get the warning, but it allows me to proceed, and the test VDI VM works.

    My main question is: if we were to change our production VDI VMs to the scheme above (where one network adapter is on the 'VM' network, which is considered a "virtual intranet"), will DRS be able to move the VDI virtual machines to another host, or will it fail?

    Thanks in advance

    Regarding the test virtual machine migration:

    If a virtual machine has a vNIC attached to an internal vSwitch network that has no physical NICs assigned, there is an issue with vMotion, which DRS of course relies on, but it can be resolved by configuring vpxd.cfg on the vCenter server...

    http://KB.VMware.com/kb/1006701
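
    From memory (so please verify against the KB before relying on it), the same vpxd.cfg change can also be made as a vCenter advanced setting; a hedged PowerCLI sketch:

    # Hedged sketch (setting name taken from the KB above; verify before applying):
    # allow vMotion/DRS for VMs attached to a vSwitch with no physical NICs
    New-AdvancedSetting -Entity $global:DefaultVIServer -Name 'config.migrate.test.CompatibleNetworks.VMOnVirtualIntranet' -Value 'false' -Confirm:$false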

    / Rubeck

  • DRS rules and Resource Pools

    Is it possible to assign DRS rules to resource pools, instead of creating DRS groups, adding VMs to the DRS groups, and then creating a rule?
    I ask because of a problem I currently have.
    I am far from being an expert, as you will soon see, and I am just getting back to using groups.
    So here is my scenario:
    I have 4 ESX hosts in my test cluster.
    2 hosts are connected to FC storage (zFCP)
    And all 4 are connected to iSCSI
    I have a resource pool called FC and another called ISCSI
    We constantly move VMs between these 2 pools
    The highest-availability VMs stay on FC, the rest on ISCSI
    I would like to create a rule that states: if a virtual machine is in the FC resource pool it must reside on host A or B, and if it is in ISCSI it can reside on any host
    I realize there are other issues involved here, such as the datastores themselves
    but that is another matter in its own right
    We migrate the VM's datastore as well when we move the virtual machine from one RP to the other
    So the rule would always hold true
    I would like to hear your thoughts
    on this topic
    For now I just use DRS rules and groups
    Thank you

    I don't think this is possible from the DRS affinity rule point of view, as a rule has no relationship to resource pools or VM disks; resource pools seem to serve a single purpose here.

    You could probably do it with a set of PowerCLI scripts that would check the two resource pools and ensure that the virtual machines within them have their disks on the correct datastore, and if not, do a svMotion.
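
    A very rough sketch of what such a script could look like (the pool and datastore names are placeholders, and it only handles the FC side):

    # Hedged sketch; 'FC' and 'FC-Datastore' are placeholders
    $targetDatastore = Get-Datastore -Name 'FC-Datastore'
    Get-VM -Location (Get-ResourcePool -Name 'FC') | ForEach-Object {
        # Compare the VM's datastore references against the target datastore
        $vmDatastoreIds = $_.ExtensionData.Datastore | ForEach-Object { $_.ToString() }
        if ($vmDatastoreIds -notcontains $targetDatastore.Id) {
            # Storage vMotion the VM onto the FC datastore
            Move-VM -VM $_ -Datastore $targetDatastore -Confirm:$false
        }
    }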

  • Cluster with VMware ESX 3.5 & 4.0 hosts... What about DRS?

    Hi experts,

    I am starting this discussion to find out the following:

    Facts:

    Farm with VMware ESX hosts running versions 3.5 U5 and 4.0 U2

    DRS configured, but not fully automated, and no rules.

    My questions:

    What happens with DRS when deploying new virtual machines with Virtual Hardware 7?

    Will they be placed only on the ESX 4.0 U2 hosts?

    Could this cause instability for the other virtual machines?

    I raise this because I have no experience with mixed environments, and I expect DRS to be smart enough to move and power on virtual machines only on hosts that can handle their virtual hardware version. Am I being naive?

    Virtual machines with Virtual Hardware 4 can be moved throughout these 3.5/4.0 clusters, but what happens with VMs with the newer virtual hardware?

    If they did get moved onto an ESX 3.5 host, what would be the behavior of that virtual machine?

    Last, but not least: what do you guys suggest as a workaround until I have all hosts migrated?
    (Migrating all hosts will take more than a month, due to support and SLA issues, so I need a workaround.)

    Thanks in advance for your time and answers!
    Kind regards
    RaMorn

    Hello

    1. What happens with DRS when deploying new virtual machines with Virtual Hardware 7?

    2. Will they be placed only on the ESX 4.0 U2 hosts?

    3. Could this cause instability for the other virtual machines?

    1. VMs with HW v7 will be migrated only between vSphere 4.x nodes in the cluster

    2. VMs with HW v7 will be placed only on vSphere 4.x nodes

    3. Nope

    1. I raise this because I have no experience with mixed environments, and I expect DRS to be smart enough to move and power on virtual machines only on hosts that can handle their virtual hardware version. Am I being naive?

    2. Virtual machines with Virtual Hardware 4 can be moved throughout these 3.5/4.0 clusters, but what happens with VMs with the newer virtual hardware?

    3. If they did get moved onto an ESX 3.5 host, what would be the behavior of that virtual machine?

    4. Last, but not least: what do you guys suggest as a workaround until I have all hosts migrated?
    (Migrating all hosts will take more than a month, due to support and SLA issues, so I need a workaround.)

    1. Yes, DRS will be that smart

    2. See my comments above

    3. Nothing will change in the behavior of the virtual machine; only VMware Tools will be updated

    4. It all depends on how your environment is configured and how it is used - how many VMs would you deploy during the month?
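
    To see which VMs would be constrained, a hedged PowerCLI sketch listing each VM's virtual hardware version (the cluster name is a placeholder):

    # Hedged sketch; 'MyCluster' is a placeholder. VMs showing v7 can only run on the ESX 4.x hosts.
    Get-Cluster -Name 'MyCluster' | Get-VM | Select-Object Name, Version | Sort-Object Version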

    See you soon

    Artur

    Visit my blog

    Please do not forget to award points for 'helpful' or 'correct' answers.
