Configuration requirements for Fault Tolerance (FT)

Hello

I would like your advice on configuring FT.

In "vSphere availability Guide", it says "VMware recommends that you have a minimum of three hosts in the cluster.  He also says his only 'recommendations '.

Then the other day, I heard that VMware has written that FT is supported even if the configuration has only 2 ESX hosts in the cluster.

But I really can't imagine how FT works with only 2 ESX hosts.

As I understand FT, the primary ESX sends FT logging traffic to the secondary, the secondary ESX replays the work and sends acknowledgements back to the primary ESX, and the primary then finally completes the work.

Once the primary ESX goes down, the secondary ESX becomes the primary, and a third ESX (the spare) becomes the new secondary. That is how I think FT works.

But if the FT cluster contains only 2 ESX hosts, then after one of them fails the cluster has only one ESX left (which is the primary?). The primary ESX can only complete work after it receives acknowledgements.

But in this situation (the FT cluster has only 2 ESX hosts and one has failed), there is no secondary ESX, which means the primary cannot receive acknowledgements and so cannot complete the work.

So here is my question: how can FT work if the FT cluster contains only 2 ESX hosts?
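To make the logging/acknowledgement flow I have in mind concrete, here is a toy model in Python (just my mental picture of the protocol, not VMware code):

```python
# Toy model of the FT logging channel: the primary records each
# non-deterministic event and holds its externally visible output
# until the secondary acknowledges receipt of the log entry.
import threading
import queue

log_channel = queue.Queue()   # primary -> secondary log entries
acks = queue.Queue()          # secondary -> primary acknowledgements

def secondary():
    while True:
        event = log_channel.get()
        if event is None:     # shutdown signal
            return
        # the secondary replays the event in lockstep, then acknowledges
        acks.put(event)

worker = threading.Thread(target=secondary)
worker.start()

for event in ["disk read", "network packet", "timer interrupt"]:
    log_channel.put(event)    # 1. the primary logs the event
    acks.get()                # 2. the primary waits for the ack
    print(f"primary releases output for: {event}")  # 3. output committed

log_channel.put(None)
worker.join()
```

Step 2 is exactly what I don't understand with a single surviving host: who sends the acknowledgement?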

The entire point of FT is to keep your VM up 100% of the time, with no downtime, by keeping a copy somewhere else that is almost ready to take over.

It does this by making a copy on a different host, but that copy is not live. Think of it as doing a vMotion of the VM that stops at 99% and then waits.

If you have only 2 hosts and one fails, then once the second host takes over, the next step is to make a new copy, which it cannot do because it doesn't have another host.

In a cluster of 3 hosts:

Host 1 runs the live VM and host 2 holds the copy.

If host 1 dies, host 2 becomes the live VM, and FT then makes a new copy on host 3.

FT always wants a copy of the live VM on another host, which is why having 3 is recommended. In a 2-host cluster it cannot make that new copy... and therefore no more FT, but your VM will still run.
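If you want to check the cluster from a script before relying on FT, here is a rough pyVmomi sketch (the vCenter address and credentials are placeholders to replace):

```python
# Count connected hosts per cluster: with fewer than 3, FT cannot
# re-create a secondary after one host failure (no spare host remains).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        live = [h for h in cluster.host
                if h.runtime.connectionState == "connected"]
        verdict = ("a spare host is available for a new secondary"
                   if len(live) >= 3 else "no spare host after one failure")
        print(f"{cluster.name}: {len(live)} connected hosts -> {verdict}")
finally:
    Disconnect(si)
```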

Tags: VMware

Similar Questions

  • What are the requirements/limitations for configuring fault tolerance?

    What do I need to do to configure it properly?

    Here are a few good links providing the requirements and details:

    http://KB.VMware.com/kb/1010601

    http://communities.VMware.com/blogs/vmroyale/2009/05/18/VMware-fault-tolerance-requirements-and-limitations

    =========================================================================

    William Lam

    VMware vExpert 2009

    Scripts for VMware ESX/ESXi and resources at: http://engineering.ucsb.edu/~duonglt/vmware/

    vGhetto script repository

    VMware Code Central - Scripts/code samples for developers and administrators

    http://Twitter.com/lamw

    If you find this information useful, please award points for "Correct" or "Helpful".

  • SAN replication for fault tolerance

    Hello everyone,

    I hope someone can point me in the right direction - it seems I do not have enough background on the subject, and the deadlines are too tight for me to explore different scenarios in depth...

    We have two data centers a few miles from each other, connected by a 100 Mbps link. Each data center will have 5 BL490 blades with ESX Standard hosting about 50 VMs. Each site has an HP EVA4400 SAN with SAN replication implemented. vCenter will be in the first data center, and the two data centers are networked.

    SAN replication is block-level, so it seems I cannot replicate just the changes; all writes will have to be replicated. This should not be a problem, because the link can support about 1.8 TB a day and the data can be buffered.

    However, I do not have a clear picture of how recovery would work in this case. We do not need instant recovery; I would say 4 hours of recovery time is acceptable. A fancy automatic DR scenario such as SRM would not easily be approved for financial reasons, but your comments are welcome.

    The current idea is the following: replicate the LUNs from the primary site to the secondary. When disaster strikes, staff power on the ESX hosts on the remote side and connect the replicated LUNs, then register the VMs and change their IP addresses.

    I understand this seems like a horribly manual process, and I'm pretty sure I have missed some obvious pitfalls here.

    Could someone let me know which direction I should go? Perhaps an article on the subject?

    This is a new installation, and we would prefer to put a basic recovery process in place and scale it later. I just need a good direction that allows for this scalability.

    Thank you very much in advance!

    I think you have a good understanding of how it should work - it is something I would like to implement as well, but using Veeam for the replication. Here is what I would do for the virtual machines' IP addresses: I like to keep the same IP addresses, because I know that not every single VM uses DNS for all of its applications, databases, and so on. So unless you are POSITIVE you have covered all the bases with your DNS, I would try to use the same IP addresses. You just need to work out the routing part before you start your virtual machines. So when you fail over or test a VM failover, make sure that if you have a VM with the IP 10.10.10.10 in datacenters A & B, the routing guarantees that none of your clients can talk to datacenter B until you make it the active side. That way you can safely test your failover.
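    If you end up scripting the registration step, a rough pyVmomi sketch for one replicated VM might look like this (assuming `folder` and `pool` are the destination vim.Folder and resource pool at the DR site, and the vmx path is a hypothetical example):

```python
# Register a VM from an already-attached replicated LUN at the DR site.
from pyVim.task import WaitForTask

WaitForTask(folder.RegisterVM_Task(
    path="[replicated-lun-01] vm01/vm01.vmx",  # hypothetical datastore path
    asTemplate=False,
    pool=pool))
# After registration you would still power the VM on and fix the
# IP addressing/routing, as described above.
```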

  • vSphere 6.0 Update 1 Fault Tolerance

    Hello everyone,

    I have a problem when I activate Fault Tolerance on dv01.

    This is my physical configuration:

    • 2 x ESXi 6.0 Update 1, build 3073146
    • 1 cluster with HA enabled
      • Admission control disabled
      • DRS disabled
    • 1 vDS with a port group configured for vMotion
    • 1 standard vSwitch for Fault Tolerance with 2 x Intel® Ethernet Converged Network Adapter X520-T2 10 Gbps

    dv01 configuration:

    • 4 vCPUs
    • 3 virtual disks stored on the first shared datastore, datastore01 (1.35 TB):
      • 136 GB, thick provisioned, eager zeroed
      • 500 GB, thick provisioned, eager zeroed
      • 600 GB, thick provisioned, eager zeroed
    • 1 VMXNET3 network adapter connected to the vDS port group
    • latest VMware Tools installed
    • Guest OS: Windows Server

    When activating Fault Tolerance, I chose ESXi02 and the second shared datastore, datastore02 (1.28 TB): the vmdk and vmx files of the secondary are created successfully, and 44 GB remain free on datastore02 afterwards.

    But when I try to power on dv01, I get this error:

    • vCenter cannot power on the secondary VM
    • and when I look at the events, the error is "not enough space for the secondary virtual machine."

    Can someone please help me fix this? When I enabled FT on the little VM02 with 1 thin vdisk it works, but for dv01 it does not.

    THX in advance

    Best regards

    Hi DZinit,

    When computing the space you need on the second datastore, you have to take the vswp file into account. It is generated at power-on and is equal to the size of the RAM allocated to the VM (minus any memory reservation).
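    As a rough check of that arithmetic in Python (the RAM figure below is a hypothetical value, since the post does not state dv01's memory size):

```python
# Space needed on the secondary datastore = disks + swap file (.vswp).
# With no memory reservation, the vswp equals the VM's full RAM size.
disks_gb = [136, 500, 600]   # dv01's three eager-zeroed thick vmdks
ram_gb = 64                  # hypothetical RAM size for dv01
reservation_gb = 0           # assumed: no memory reservation

vswp_gb = ram_gb - reservation_gb
free_after_disks_gb = 44     # free space left on datastore02 after the copy

if vswp_gb > free_after_disks_gb:
    print("not enough space for the secondary virtual machine")
else:
    print("the secondary VM fits")
```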

  • Host certificates error -> fault-tolerant VM

    Hi communities,

    I get the following error message when configuring fault tolerance for a virtual machine we have in our environment:

    The fault tolerance configuration of the entity {entityName} has an issue: the "check host certificates" flag is not set in vCenter Server

    Hi Aneetu,

    The box "check certificates of the host" is unchecked in the SSL settings for vCenter Server. You must check this box if it isn't already

  • Cannot enter Fault Tolerance mode

    Dear all,

    I have a problem enabling VMware FT. I have 2 IBM eServer HS12 blades with Intel Core 2 Duo E6305 CPUs @ 1.86 GHz.

    When I power on the FT virtual machine, it gives the following error:

    Record/replay is supported for 64-bit VMs only on some processors. Aborting record/replay. Unable to enter fault tolerance mode.

    Are there any settings I need to change on the VMware ESX host or on the VM?

    Kind regards

    Rahiz

    Rahiz,

    According to the Knowledge Base, this CPU is not supported for Fault Tolerance.

    Take a look at KB article 1008027 - Processors and guest operating systems that support VMware Fault Tolerance.

    André

  • Can you install vCenter as a virtual machine with fault tolerance?

    The benefits of having vCenter as a virtual machine are features such as HA and snapshots. Is there a problem with enabling fault tolerance on the virtual machine that is itself the vCenter server managing fault tolerance?

    I couldn't see anything about this in the documentation on fault tolerance limits.

    Well...

    vCenter lists 2 vCPUs as a requirement, and FT only works with one...
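    A quick pyVmomi sketch of that check (assuming `vm` is the vCenter VM's vim.VirtualMachine object from an existing connection):

```python
# FT in this generation only supports uniprocessor VMs, so a 2-vCPU
# vCenter VM cannot be protected without reducing it to 1 vCPU.
num_cpu = vm.config.hardware.numCPU
print("FT-eligible" if num_cpu == 1
      else f"{num_cpu} vCPUs: FT will refuse this VM")
```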

    -Matt

    VCP, vExpert, Unix Geek

  • "Unsupported virtual machine configuration for fault tolerance. The operation is not supported on the object." when activating FT

    Hello

    I am trying to activate FT on a virtual machine that, AFAIK, ticks all the boxes for FT, but I get this message when activating it. Any clue as to what might be happening, or where to look for more information? This error message is too generic...

    Thanks in advance mates.

    Did you follow KB 1019165 - Enabling fault tolerance fails at 42% with the error: Unsupported virtual machine configuration for fault tolerance?

    André

  • Windows licensing question for fault tolerance

    I am planning on 4 virtual machines running in a fault-tolerant environment.

    I'm reading that I should get 2 Windows Server 2008 R2 Enterprise licenses, because the FT copies are counted as separate servers (even though they are just redundant).

    Is this true?

    Do you mean 4 virtual machines protected with VMware Fault Tolerance (FT), or using HA to protect against an ESXi host failure?

    Windows Enterprise gets you rights to 4 virtual machine instances in total, but it is licensed per ESXi host. So if you have two ESXi hosts, you would need two copies of Windows Enterprise to protect the 4 VMs. Windows Datacenter is a much better deal if you plan to have more than 4 VMs.
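    As a worked example, assuming the 4 FT-protected VMs can run on either of the two ESXi hosts: after a failover, either host might be running all 4 Windows instances, so each host must carry one Enterprise license (4 virtual instance rights), i.e. 2 licenses in total. Beyond 4 VMs per host, Datacenter's unlimited virtual instances quickly become the cheaper option.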

  • Snapshots for fault-tolerant virtual machines

    The vSphere Availability Guide says that you cannot snapshot fault-tolerant VMs. In that case, how would one back up these virtual machines?

    If you need a snapshot for backup, you can stop FT, take the snapshot, make the backup, delete the snapshot, and re-enable FT.

    You cannot schedule this directly (in the snapshot section), but there are useful scripts that can toggle FT from the CLI; see the sketch below.
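    For example, a minimal pyVmomi sketch of that cycle (assuming `vm` is the FT-protected vim.VirtualMachine from an existing connection; error handling omitted):

```python
# Disable FT, snapshot, back up, remove the snapshot, re-enable FT.
from pyVim.task import WaitForTask

WaitForTask(vm.TurnOffFaultToleranceForVM_Task())   # removes the secondary
WaitForTask(vm.CreateSnapshot_Task(name="backup",
                                   description="pre-backup snapshot",
                                   memory=False, quiesce=True))
# ... run the backup tool against the snapshot here ...
WaitForTask(vm.snapshot.currentSnapshot.RemoveSnapshot_Task(
    removeChildren=False))
WaitForTask(vm.CreateSecondaryVM_Task())            # re-enables FT protection
```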

    André

  • Fault tolerance specifications for VMware ESXi 5

    Can someone tell me where I can find documents that show what fault tolerance supports in version 5? Are we still limited to 1 vCPU? How many virtual machines can be protected with FT?

    Hello.

    Not much has changed. I keep this post constantly updated:

    http://communities.VMware.com/blogs/vmroyale/2009/05/18/VMware-fault-tolerance-requirements-and-limitations

    Good luck!

  • WLC balancing without fault tolerance

    Hello, I need to serve 13 access points and provide load balancing between all connected APs.

    Fault tolerance is not a concern at the present time, hence my reasoning below.

    I am looking at specifying two 4402 controllers with the 12-AP license and configuring them both in a single mobility group. I would then manually specify each access point's primary controller and distribute the APs accordingly between the two controllers, for example 7 on one and 6 on the other.

    Could I ask whether this would be an acceptable method?

    Regards,

    Hi Mark,

    It is a perfectly acceptable design. If/when fault tolerance becomes a requirement, a 25-AP model can be purchased to provide failover protection for the two 12-AP WLCs.

    I hope this helps!

    Rob

  • Fault tolerance switch port guidelines

    I am setting up our lab to test fault tolerance. I plan to use a separate, isolated vSwitch for FT and a separate FT VLAN. Are there specific switch port parameters I can give to my network group? I just want to make sure the physical ports are configured correctly before I ideally start testing.

    I didn't realize your plan was to use a vSwitch dedicated only to FT traffic and nothing else.

    Otherwise, I would have recommended using load-based teaming (a new feature in 4.1) for a better distribution of traffic between the NICs on the vSwitch, and also Network I/O Control to partition the link between the different traffic types. But you are right: as long as the vSwitch is dedicated to FT, there is no need for that.

    As far as the Cisco switch port config goes, if you run both NICs to the same switch you can use EtherChannel, although a single 1 Gbit/s link is more than enough for FT traffic. Also, I recommend enabling PortFast on the ports involved, to avoid any interruption when other devices are added to the network and trigger spanning tree protocol recalculations.

    To have FT logging traffic distributed across the NIC team, configure the virtual switch through which the logging traffic flows with the "Route based on IP hash" load balancing policy. This policy uses the source and destination IP addresses of the traffic flow to determine the uplink. FT pairs on different ESX hosts use more than one source/destination address pair for FT logging. To use the Route based on IP hash policy, the physical switch ports must be configured in EtherChannel mode.
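    If you later want to apply that policy from a script instead of the client, a rough pyVmomi sketch (assuming `host` is the vim.HostSystem and the FT vSwitch is named "vSwitch1"):

```python
# Set a standard vSwitch's load balancing to "Route based on IP hash";
# the physical switch ports must already be in EtherChannel mode.
netsys = host.configManager.networkSystem
vswitch = next(v for v in netsys.networkInfo.vswitch if v.name == "vSwitch1")

spec = vswitch.spec          # assumes the default policy objects are present
spec.policy.nicTeaming.policy = "loadbalance_ip"   # IP hash policy
netsys.UpdateVirtualSwitch(vswitchName="vSwitch1", spec=spec)
```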

  • Fault tolerance limits

    I would like to implement fault tolerance for a couple of our virtual machines, but I have been reading and have a few questions.

    1. It is said that you must have 2 vmkernel NICs dedicated to fault tolerance. Can they be shared with the vMotion NICs?

    2. FT does not support SMP. Is this still the case? We run ESX 4 Update 1.

    3. FT does not support snapshots. vReplicator allows us to replicate our VMs, and this creates a snapshot during replication. Will that be possible or not?

    Thank you

    Scott

    1. You need at least 1 vmkernel NIC dedicated to FT (or a separate VLAN, if you have 10 GbE); there is a lot of traffic that will be flowing over this link.

    - You can see a design on my website: http://kendrickcoleman.com/index.php?/Tech-Blog/vsphere-host-nic-configuration.html

    2. Yes. You can't have FT with more than 1 vCPU per VM currently. But if you have some awesome Nehalem procs, you may be fine with only 1 vCPU for your critical boxes.

    3. They will fail. Snapshots are not possible on FT VMs, and as a result those backups will not work. I know it's one of the huge problems with this scenario; the best approach is to set up a script that takes the virtual machine out of FT, runs the backup, and then re-enables FT. A lot of work for the end result.

  • Network: Etherchannel vs. Fault Tolerance

    Here's my question. I'm not a network guru, but I understand that if you assign several NIC ports to a virtual switch you gain transmit bandwidth (all ports are used to send), but the same is not true on the receive side, because you need one MAC and an arbitration layer that can distribute incoming traffic. My hypothesis is that there is no way around this other than having both network adapters on the same switch and using EtherChannel trunking.

    So now we have a trade-off. If I instead attach the NICs to different switches for fault tolerance, I am limited to only 1 Gb on the receive side no matter what I do.

    If I have the odd-numbered ESX hosts going to switch A and the even-numbered ones going to switch B, it does not help much either, much like the way many shops build a 50% reserve into their ESX capacity reservations.

    So, assuming my assumptions above hold, I'm curious what strategies and practices other shops use to increase the receive pipeline beyond 1 Gb (without using 10 GbE) while avoiding excessive exposure to a switch failure event.

    Thank you!

    Yes, that's basically it.

    I attended a VMworld 2008 session given by a VMware employee who specializes in networking. He said that "Route based on originating port ID" is the method to use, which is why it is the default and the most recommended balancing policy. The IP hash method is OK, but it has specific physical switch configuration requirements (EtherChannel), which make it easy to set up improperly or maintain poorly. IP hash would allow a single VM to potentially communicate with multiple hosts over multiple physical links, but in most environments that is not really necessary.

    Yes, when you have a redundant physical switch configuration, the redundant NICs for the virtual switch should be distributed across multiple switches.
