Nagios + vMotion/DRS

Anyone know of a Nagios plugin/module that can detect when a VMware guest virtual machine moves from one host to another due to DRS or vMotion? It would be nice if Nagios could also be configured to display a history/trend of migrations on each host for a selected time period.

Unfortunately, you aren't able to pull vMotion data or anything like that from the vSphere Guest SDK. See page 8 of:

http://www.vmware.com/support/developer/guest-sdk/guest_sdk_40.pdf
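
From outside the guest, however, vCenter's event log does record migrations (event types such as VmMigratedEvent and DrsVmMigratedEvent), and that can be wrapped in a Nagios-style check. Below is a minimal sketch using the pyVmomi library; the vCenter hostname, credentials, and lookback window are placeholders, and this is one possible approach rather than an established plugin:

    #!/usr/bin/env python
    # Nagios-style check: WARN if any VM was vMotioned (manually or by DRS) recently.
    import atexit, datetime, ssl, sys
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    VCENTER, USER, PWD = "vcenter.example.com", "monitor", "secret"  # placeholders
    LOOKBACK_MINUTES = 30

    context = ssl._create_unverified_context()  # lab only; verify certs in production
    si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=context)
    atexit.register(Disconnect, si)

    # Query the vCenter event log for migration events in the lookback window.
    begin = si.CurrentTime() - datetime.timedelta(minutes=LOOKBACK_MINUTES)
    spec = vim.event.EventFilterSpec(
        time=vim.event.EventFilterSpec.ByTime(beginTime=begin),
        eventTypeId=["VmMigratedEvent", "DrsVmMigratedEvent"],
    )
    events = si.content.eventManager.QueryEvents(spec)

    if events:
        moved = ", ".join(e.vm.name for e in events if e.vm)
        print("WARNING: %d migration(s) in last %d min: %s"
              % (len(events), LOOKBACK_MINUTES, moved))
        sys.exit(1)  # Nagios WARNING
    print("OK: no vMotion/DRS migrations in last %d minutes" % LOOKBACK_MINUTES)
    sys.exit(0)

Scheduled as a regular Nagios check (e.g. via NRPE), the per-host counts could also be emitted as performance data to get the trending you describe.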

Tags: VMware

Similar Questions

  • Are vMotion (not Storage vMotion) and DRS (Distributed Resource Scheduler) features available in a free ESXi cluster?

    Are vMotion (not Storage vMotion) and DRS (Distributed Resource Scheduler) features available in a free ESXi cluster?

    I thought ESXi was a free download, correct?

    Yes and no. The ESXi hypervisor binaries are the same for all editions (free and paid). The license key you enter determines which features are available and whether the host can be added to a vCenter Server instance.

    If you just want to test vSphere, you can sign up for a free 60-day trial (no license key required); during this period, all features of the 'Enterprise Plus' edition are available.

    André

  • VMware animations showing HA/vMotion/DRS?

    Hello

    I remember that, for a long time, the VMware web site had animations of the VMware vMotion/DRS/HA capabilities (like a presentation, but with moving blocks).

    I would like to know if anyone has seen these animations on the VMware web site and could share a link to them.

    Thanks in advance

    If you have found this or any other answer useful, please consider awarding points using the Helpful or Correct buttons.

    I can't find the original on the VMware site, but this presentation is quite similar:

    http://www.itcork.ie/ContentFiles/eventresources/Cork@it_vmware.ppt

    André

  • Testing vMotion, DRS, and HA clustering in one box

    Hi all:

    I have a decommissioned HP ProLiant server that I'm trying to use for ESX testing/learning. What I'm trying to accomplish is to install at least two ESX servers and an Openfiler iSCSI target to test vMotion, DRS, and HA. I tried nesting an ESX VM on an ESX server and I get the error 'no supported network device found'. Has anyone tried to do it all on a single box? Please help; I am new to VMware and trying to learn as much as possible.

    Ciao

    James Pearce's notes on nested ESX hosts:

    http://www.techhead.co.uk/vmware-vsphere-esx-install-configure-manage-preparing-your-test-lab

    Datto

  • Will vMotion/DRS work if one host is ESXi U2 and another U1?

    Hello

    Just to confirm: will all HA/DRS/vMotion functionality work if the ESXi 4.1 cluster is at Update 1 and one of the hosts is at Update 2?

    Thank you

    Although it is recommended to run the same version/build, it should work properly.

    André

  • How to do vMotion, DRS, and HA on VMware ESXi 4.1

    How do I do vMotion and DRS and HA on VMware ESXi 4.1?

    Thiago Rodrigues. [email protected]

    Certified ITIL Foundation V2

    Certified ManageEngine OpManager / Applications Manager

    Edit by wila: removed caps in the title per forum rules.

    Man, I don't really like repeating the same things, but here we go. There is a forum you participated in called 'good tips'; I posted a document there called the blueprint, okay?

    It has everything you need to study; based on that, you will feel the need (or not) to test certain things.

    Regarding the vCenter license, that's how it is: 60 days to test. There isn't much to do about it, other than installing another instance and then using a restore.

    If you have found this information useful, please consider awarding 'Correct' or 'Helpful' points.

  • Do we need DRS and vMotion for Fault Tolerance? Small 3-host cluster

    Hello

    I'm slightly confused by some information coming from two different directions. We are in the process of buying a vSphere Essentials Plus package and intend to build a 3-host cluster with shared storage. The vSphere Essentials Plus package comes with HA which, from my understanding, fails over automatically and restarts the virtual machines on an available server in case of server failure. Do vMotion or DRS provide something more than HA?

    We are told that vMotion & DRS move virtual machines off a FAILED server to an available one without restarting the virtual machine. I don't see how that's possible, but I wanted to check with the community first.

    vSphere 4 Essentials Plus seems ideal for small 3-host deployments at $2,600; however, the same licensing for regular vSphere Enterprise is $2,800 per CPU, which at 6 CPUs is more than $17K. So why should we spend $17K for Enterprise versus a $2,600 Essentials Plus license for such a small deployment?

    Thank you

    CA

    Yes, this is a new feature called Fault Tolerance: http://www.vmware.com/products/fault-tolerance/

    It may work over the existing links, but it is recommended to have a dedicated interface for FT.

    ---

    VMware vExpert 2009

    http://blog.vadmin.ru

  • Long Distance vMotion

    Hi guys,

    Please help me gain some knowledge here, because I'm really very confused. I understand (or think I understand) vMotion within a single physical site, i.e. all hosts connect to the same VLAN for vMotion and all hosts can see the same storage LUNs... so vMotion should be child's play...

    But I really struggle with the concept of long-distance vMotion... that is, another physical site where the same VLAN may not exist and indeed the same storage LUN may not exist... can someone please (please) explain how this concept would work or how such a vMotion operation can be implemented? vMotion to me means being able to move a virtual machine from one host to another without any downtime for the virtual machine, which is why I am struggling to understand the concept of long-distance vMotion with different networks and storage at different sites.

    Confused Dryv

    So effectively you're saying we need to meet the prerequisites that apply to the creation of a vMSC (vSphere Metro Storage Cluster) in order to get long-distance vMotion to work?

    Yes, all the fundamental cluster requirements for a vMSC apply here as well. It's the same stretched cluster with a 'cooler' name that implies support for distances longer than a metropolitan area.

    - What is vMotion without shared storage? Does this imply that you don't need shared storage for vMotion operations? Is that really possible? I hear the term 'shared-nothing vMotion' thrown around a lot!

    vSphere 5.1 introduced 'enhanced' (shared-nothing) vMotion, which is basically a Storage vMotion and a compute vMotion at the same time over the network. You don't need storage shared between the source and destination hosts, but you still need layer-2 connectivity for the VM networks and vmkernel connectivity. Of course, it will take much longer than a regular vMotion with shared storage because the entire VM disks must be copied. See here for more information:

    http://wahlnetwork.com/2012/11/21/leveraging-vmwares-enhanced-shared-nothing-vmotion/
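
    For reference, this kind of combined host-plus-datastore migration can also be driven through the vSphere API. A minimal pyVmomi sketch, assuming vm, target_host, target_pool, and target_datastore are managed objects you have already looked up:

        from pyVim.task import WaitForTask
        from pyVmomi import vim

        # Enhanced ("shared-nothing") vMotion: change host and datastore in one step.
        spec = vim.vm.RelocateSpec()
        spec.host = target_host            # destination ESXi host
        spec.pool = target_pool            # destination resource pool
        spec.datastore = target_datastore  # destination datastore (no shared storage needed)

        WaitForTask(vm.RelocateVM_Task(spec=spec))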

    So, if I told you:

    "yes we can just virtual computers vmotion from one site to the other.

    and the underlying environment was configured as follows:

    - A single vCenter for all sites

    - Clusters per site (vMotion/DRS/HA within a cluster only)

    - No VLANs stretched between sites

    - Routing in place across all sites

    - No SAN or SAN replication extending between sites

    What would you say to me?

    Is there any way (is it even remotely possible) that "we can just vMotion virtual machines from one site to the other"?

    In this scenario you technically can, through the above-mentioned enhanced shared-nothing vMotion. But without a stretched VLAN your VM will not be reachable from the other site, unless you do some fancy magic at the physical network layer, manually re-IP the virtual machine, or use network virtualization such as NSX.

    However, any type of vMotion between hosts is only officially supported if the vMotion vmkernel interfaces are in the same layer-2 broadcast domain. Apparently, you can get VMware support approval for kludges like this: http://cumulusnetworks.com/blog/routed-vmotion-why/

    This will change with vSphere 6, however, which officially supports routed vMotion traffic.

    Also note that in this scenario there is no HA or DRS between sites, since your clusters are separate. The general idea underlying vMSC, or its longer-distance sibling, is to have a single stretched cluster.

    I recommend you read some of Ivan's posts that I linked earlier; he has some good takes on the whole subject from a networking perspective.

  • Configuring vMotion in vCloud 5.1

    We are running vCloud Director 5.1. The backend cluster wasn't originally set up with vMotion because we had no dedicated network. We are now successfully running vCloud vApps and would like to add vMotion to the cluster. I read that there may be problems with vCloud after toggling DRS; I wanted to ask if anyone has seen the same thing with vMotion. Thank you

    vMotion is independent of vCloud Director.  You shouldn't have to do anything in vCloud Director at all.  Simply configure the vmkernel ports as required and off you go.

    If you have DRS but have not configured vMotion, DRS will only perform the initial placement of the virtual machine... and then nothing else (since there is no mechanism to allow automatic movement of virtual machines). A sketch of enabling vMotion on a vmkernel port follows below.
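
    Tagging an existing vmkernel interface for vMotion can be done through the host's virtual NIC manager. A minimal pyVmomi sketch, assuming host_system is a vim.HostSystem you have already retrieved and vmk1 is the interface intended for vMotion:

        # Tag vmk1 for vMotion on this host (host_system is a vim.HostSystem).
        vnic_mgr = host_system.configManager.virtualNicManager
        vnic_mgr.SelectVnicForNicType(nicType="vmotion", device="vmk1")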

  • Can you change which vmkernel port handles vMotion without going into maintenance mode?

    If you have a running host with a standard vSwitch, can you change which vmkernel port handles vMotion without putting the host in maintenance mode?

    The situation is that we have a number of blades, all running vSphere 5.1.  Each host has a single vSwitch with multiple port groups, a dedicated vmkernel port for vMotion, and another dedicated one for the management network.  We use a VLAN to separate the vMotion network.

    Our networking group will do a reconfiguration that will change the VLAN dedicated to vMotion.  What I would like to do is enable vMotion on the management network for each host and clear the check box on the "vmotion" port group. Once the new vMotion VLAN is set up by the network team, I would go back and make the changes on each blade so that "vmotion" is enabled again, and remove the vMotion function from the management port.

    I know that it is not recommended to run vMotion traffic on the management network, but it will only be short term (probably a week or two).  My biggest concern is whether I can make these changes 'live' without affecting the running machines.

    On a related note, if a host loses contact with other hosts on the vMotion network, will that pose a problem (in addition to not being able to vMotion)?  Will the loss of connectivity on only the vMotion port cause any kind of isolation response, failover, etc.?

    Basically, you can modify most of the network settings, including vMotion, without affecting the virtual machines. When you say they will reconfigure VLANs, do you know how long it will take and whether the current VLAN will remain available during that time? In that case, you would not have to implement any workaround, but could just update the VLAN on the vMotion port groups of the ESXi hosts.

    André

    PS: Regarding your question "will the loss of connectivity on only the vMotion port cause any kind of isolation response, failover, etc.?"

    No, it's the management network that is used for HA, so all you would lose is vMotion/DRS. (A scripted sketch of the VLAN update follows below.)
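
    If only the VLAN ID changes, the port-group update described above can also be scripted. A minimal pyVmomi sketch, assuming host_system is a vim.HostSystem and the vMotion port group is named "vmotion" (the VLAN ID below is hypothetical):

        # Update the VLAN ID of the "vmotion" port group on a standard vSwitch.
        net_sys = host_system.configManager.networkSystem
        for pg in net_sys.networkInfo.portgroup:
            if pg.spec.name == "vmotion":
                new_spec = pg.spec    # reuse the existing spec, change only the VLAN
                new_spec.vlanId = 42  # hypothetical new vMotion VLAN
                net_sys.UpdatePortGroup(pgName="vmotion", portgrp=new_spec)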

  • DRS rules advised for SQL mirroring?

    I'm setting up SQL mirroring using SQL 2008 R2 servers: I have a principal, a mirror, and a witness running as virtual machines in an ESXi 4.1 DRS cluster. I seem to remember reading somewhere (I can't find it now) that it is best practice to create a DRS rule to keep the SQL servers on separate hosts. Is this a good or recommended practice? If yes, what is the reasoning behind it?

    Thank you

    vMotion/DRS (live migration) may no longer work if your host has hardware problems. What might still work in such a case is HA, which restarts the VMs on other hosts, and that means downtime for the SQL cluster. Keeping the nodes on separate hosts ensures that a single host failure takes down only one of them; see the sketch below.

    André
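
    For the rule itself, a DRS VM-VM anti-affinity rule can be added to the cluster configuration through the API. A minimal pyVmomi sketch, assuming cluster, vm_principal, vm_mirror, and vm_witness are managed objects you have already looked up (the rule name is arbitrary):

        from pyVmomi import vim

        # Anti-affinity rule: keep the SQL mirroring nodes on different hosts.
        rule = vim.cluster.AntiAffinityRuleSpec(
            name="sql-mirror-separate-hosts",
            enabled=True,
            vm=[vm_principal, vm_mirror, vm_witness],
        )
        spec = vim.cluster.ConfigSpecEx(
            rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)]
        )
        cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)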

  • Physical/virtual MSCS & vMotion

    So, I've spent the last few days researching and documenting clustering options in vSphere 5.  I've come to understand the caveats and limits of each type except this:

    Do the vMotion & DRS limits apply to a physical-virtual MSCS 2008 R2 cluster?  All the documents and forums I've read clearly state that a purely virtual cluster cannot be vMotioned manually (and it doesn't seem to work when a host goes down either), but I don't see anything that says the same when a physical host is in the mix.

    Can someone clarify the situation?  Of course, I'll be testing in a lab environment, but I wanted further clarification before submitting this design to management.

    Thank you

    -Blitz

    It does not change if there is a physical box in the mix.

    The virtual SCSI adapter must still be in bus-sharing mode, which prevents vMotion.

  • MSCS on a DRS/HA-enabled cluster?

    Hi all

    We are setting up a 6-host DRS/HA-enabled cluster with one host acting as a standby host. Two of our virtual machines will run an MSCS cluster (with an active and a passive node). What we want is:

    1 - Normal VMs must be restarted on the standby host by HA when an ESX host fails.

    2 - Normal VMs should be vMotioned when DRS decides.

    3 - MSCS-clustered VMs should be restarted on the standby host by HA when an ESX host fails.

    4 - MSCS-clustered VMs should not be vMotioned automatically; they must reside on their specific hosts.

    If the above 4 points are possible, please confirm how to accomplish them. I'm a little confused because, per the document below, MSCS is not supported on a DRS-enabled cluster.

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1037959

    Kind regards

    Khurram Shahzad

    According to the VMware documentation ('Setup for Failover Clustering and Microsoft Cluster Service' for vSphere 5), there is no problem when you use DRS and HA together, but it requires the following:

    1 - The two nodes must never run on a single host, for both HA and DRS.
    2 - To do this you must create the rules below:

    - For HA: create a DRS host group, then create a DRS virtual machine group, then create a VM-Host affinity rule that ties the DRS virtual machine group to the DRS host group.
    - For DRS: create VM-VM anti-affinity rules specifying which virtual machines should be kept on different physical hosts, and turn on strict enforcement of the affinity rules in the cluster configuration (see the sketch after this list).

    Or you can disable DRS for these 2 nodes and enable only HA, but again you must ensure that HA never runs the 2 nodes on the same ESXi host.

    See the 'Setup for Failover Clustering and Microsoft Cluster Service' guide for vSphere 5 and the vSphere ESXi vCenter Server 5.0.1 Resource Management Guide.
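
    As a sketch of those groups and rules via the API, again with pyVmomi: cluster, mscs_vm, and preferred_hosts are assumed to be managed objects you have already looked up, and all names are arbitrary.

        from pyVmomi import vim

        # DRS host group + VM group + a mandatory VM-Host affinity rule.
        spec = vim.cluster.ConfigSpecEx(
            groupSpec=[
                vim.cluster.GroupSpec(
                    operation="add",
                    info=vim.cluster.HostGroup(name="mscs-hosts", host=preferred_hosts),
                ),
                vim.cluster.GroupSpec(
                    operation="add",
                    info=vim.cluster.VmGroup(name="mscs-vms", vm=[mscs_vm]),
                ),
            ],
            rulesSpec=[
                vim.cluster.RuleSpec(
                    operation="add",
                    info=vim.cluster.VmHostRuleInfo(
                        name="mscs-pin",
                        vmGroupName="mscs-vms",
                        affineHostGroupName="mscs-hosts",
                        mandatory=True,  # strict ("must run on") enforcement
                        enabled=True,
                    ),
                )
            ],
        )
        cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)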

  • Preventing a DRS loop

    In a DRS/HA cluster, suppose I have an ESX host that randomly restarts during the night.

    Is there something in the DRS settings to say: after 2 random reboots, keep the host in maintenance mode?

    Example of the problem.

    Host fails -> HA kicks in and the VMs are restarted on other nodes -> the host reboots and comes back, and DRS vMotions the VMs back -> the host fails again, and the loop repeats.

    I know I have the option to use 'partially automated' in the DRS settings, but I prefer to use full automation.

    Also, if it is caught during the day, a tech can usually put the host into maintenance mode before the loop starts again; not so much late at night.

    Suggestions/comments?

    Disabling DRS, or setting DRS to partially automated, would be the only ways to prevent this, given that DRS and HA are working as designed. - traore

  • ESXi 4.1 - DRS & vSwitch w/zero NICs

    In order to work around some IP addressing issues we came up with this solution...

    We implemented a new vSwitch network named 'VM' with ZERO physical NICs.

    We changed the NIC of each VDI VM to the 'VM' network.

    We installed and configured a Win 2008 R2 server.  Its first network adapter is connected to a vSwitch with 6 physical NICs connected to our internal network.  Its second network adapter is connected to the 'VM' network described above.

    We run DHCP on the 'VM' network.

    We configured routing of traffic between the two network adapters.

    I added static routes on the main router so that we can reach the 'VM' network from a different subnet.

    This configuration is running on our three hosts.  The 'VM' network on each host has a different IP subnet.  We also use DRS across the three hosts.

    I have a test VDI VM.  When I try to manually migrate this virtual machine to one of the other hosts, I get a warning that one network adapter is on a "virtual intranet", which prevents a live migration.  If I power off the virtual machine and do the migration, I still get the warning, but it lets me proceed, and the test VDI VM works.

    My main question is: if we were to change our production VDI VMs to the scheme above (where one network adapter is on the 'VM' network, which is considered a "virtual intranet"), will DRS be able to move the VDI virtual machines to another host, or will it fail?

    Thanks in advance

    Regarding the test virtual machine migration:

    If a virtual machine has a vNIC attached to an internal vSwitch network with no assigned physical NICs, there is indeed an issue with vMotion (which DRS of course relies on), but it can be resolved by configuring vpxd.cfg on the vCenter Server...

    http://kb.vmware.com/kb/1006701

    / Rubeck
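
    The KB boils down to one vCenter advanced setting, config.migrate.test.CompatibleNetworks.VMOnVirtualIntranet = false, which disables that particular vMotion compatibility test. If editing vpxd.cfg by hand is undesirable, the same option can be set through the API; a minimal pyVmomi sketch (si is a ServiceInstance connected to vCenter; verify the key against the KB article before applying it):

        from pyVmomi import vim

        # Disable the "VM on virtual intranet" vMotion compatibility check (KB 1006701).
        option = vim.option.OptionValue(
            key="config.migrate.test.CompatibleNetworks.VMOnVirtualIntranet",
            value="false",
        )
        si.content.setting.UpdateOptions(changedValue=[option])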
