Network: Etherchannel vs. Fault Tolerance

Here's my question. I'm not a network guru, but my understanding is that if you assign several NIC ports to a virtual switch you gain transmit bandwidth (all ports are used to send), but the same is not true on the receive side, because you need a single MAC and a layer of arbitration to distribute incoming traffic. My hypothesis is that the only way around this is to run two network adapters to the same switch and use EtherChannel trunking.

So now we have a trade-off. If I attach alternate NICs to different switches for fault tolerance, I am limited to only 1 Gb on the receive side no matter what I do.

Having my odd-numbered ESX hosts go to switch A and the even-numbered ones to switch B doesn't help much either, no matter how many shops build a 50% reserve into their ESX capacity planning.

So, assuming my assumptions above hold, I'm curious what strategies and practices other shops use to increase the receive pipe beyond 1 Gb (without using 10 GbE) while not exposing themselves to excessive risk from a switch failure event.

Thank you!

Yes, that's basically it.

I attended a VMworld 2008 session given by a VMware employee who specializes in networking. He said that "Route based on originating port ID" is the method to use, which is why it is the default and the most commonly recommended balancing policy. The IP hash method is OK, but it has specific physical switch configuration requirements (EtherChannel), which makes it easy to set up improperly or maintain poorly. IP hash would allow a single VM to potentially communicate with multiple hosts over multiple physical links, but in most environments that isn't really necessary.
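To make the trade-off concrete, here is a rough sketch of how an IP-hash style policy maps a flow to an uplink. This is purely illustrative (the exact hash ESX uses is not documented in this thread), but the consequence is the same: one source/destination pair always lands on one uplink, so a single flow never exceeds one link's bandwidth.

```python
def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Pick a vSwitch uplink for a flow, IP-hash style (illustrative only).

    XORs the two IPv4 addresses and reduces modulo the number of
    uplinks, so the same address pair always maps to the same uplink.
    """
    src = int.from_bytes(bytes(int(o) for o in src_ip.split(".")), "big")
    dst = int.from_bytes(bytes(int(o) for o in dst_ip.split(".")), "big")
    return (src ^ dst) % num_uplinks

# The same pair always maps to the same uplink...
a = ip_hash_uplink("10.0.0.5", "10.0.0.9", 2)
# ...while different pairs can spread across the team.
b = ip_hash_uplink("10.0.0.5", "10.0.0.10", 2)
```

This also illustrates why "Route based on originating port ID" is simpler: it hashes only the VM's virtual port, needing no switch-side configuration at all.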

Yes, when you have a redundant physical switch configuration, the redundant NICs for the virtual switch should be distributed across the switches.

Tags: VMware

Similar Questions

  • Networking VMotion and fault tolerance

    We will configure separate vSwitches for management, vMotion and FT. Each will have a pair of physical interfaces. I wanted to know whether it is advisable to have separate IP space for ESX management, vMotion and FT, or whether they can all be on the same VLAN. If load is a determining factor, assume heavy vMotion and FT load: potentially 192 GB per server with 40-50 virtual machines per host.

    I'm a networking newbie, so if separate IP space is the recommendation, what is the reasoning behind it?

    I recommend you set up 3 different IP subnets: the public/business IP subnet for the management network, and two separate private IP subnets for vMotion and FT. All three go on VMkernel interfaces, and from what I've read so far, putting them on the same subnet may cause problems with routing traffic through the appropriate interfaces. On top of that, it makes sense for security, as amvmware already mentioned.
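    As a purely hypothetical example (the subnets and roles below are made up for illustration, not taken from this thread), such a layout might look like:

```
Management:  10.0.10.0/24    routed, public/business network
vMotion:     192.168.20.0/24 non-routed, dedicated VLAN
FT logging:  192.168.30.0/24 non-routed, dedicated VLAN
```

    Keeping vMotion and FT on non-routed subnets ensures their VMkernel traffic can only leave via the intended interfaces.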

    André

  • Fault tolerance switch port guidelines

    I am setting up our lab to test fault tolerance. I plan to use a separate, isolated vSwitch for FT and a separate FT VLAN. Are there specific switch port settings I should give to my network group? I just want to make sure the physical ports are configured correctly before I start testing.

    I didn't realize that your plan was to use a vSwitch dedicated only to FT traffic and nothing else.

    So, I had recommended that you use Load-Based Teaming (a new feature in 4.1) for a better distribution of traffic between the NICs on the vSwitch, and also Network I/O Control to partition the link between the different traffic types. But you are right: as long as the vSwitch is dedicated to FT, there is no need for either.

    As far as the Cisco switch port config goes, if you run the two NICs to the same switch you can use EtherChannel, although a single 1 Gbit/s link is more than enough for FT traffic. I also recommend enabling PortFast on the ports involved to avoid interruptions when other devices are added to the network and trigger spanning tree protocol recalculations.

    To have FT logging traffic distributed across the members of a NIC team, configure the virtual switch through which the logging traffic flows with the "Route based on IP hash" load balancing policy. This policy uses the source and destination IP addresses of the traffic flows to determine the uplink. FT pairs on different ESX hosts use more than one source/destination address pair for FT logging. To use the Route based on IP hash policy, the physical switch ports must be configured in EtherChannel mode.
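    For reference, a minimal switch-side sketch of the EtherChannel and PortFast settings discussed above might look like the following (the interface names and channel-group number are hypothetical, and the exact commands vary by IOS version):

```
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode on   ! static EtherChannel; vSwitch NIC teams do not negotiate PAgP/LACP
 spanning-tree portfast    ! skip STP listening/learning so links come up without interruption
```

    The static `mode on` is the important part: a standard vSwitch does not speak a channel negotiation protocol, so a negotiated mode would leave the ports unbundled.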

  • NI 9852 communication with fault-tolerant CAN hardware

    Hello

    I have a cRIO-9074 and recently bought an NI 9852, a 2-port low-speed/fault-tolerant CAN module.

    It's my first attempt at CAN comms on the cRIO, and I realized that none of the devices I want to communicate with are fault tolerant; the fault-tolerant physical layer requires different termination than "normal" high-speed CAN (something I discovered immediately after I broke the seal on the box, of course!).

    Is there any way I can configure the physical layer so that this setup works reliably (albeit at < 125 kbps and with no fault tolerance)?

    Kind regards

    Chris

    I'm sorry. The low-speed/fault-tolerant and high-speed CAN transceivers use different physical signalling.

    HS CAN is recessive when CANH and CANL are both at nominally 2.5 V. It is dominant when CANH is 3.5 V and CANL is 1.5 V.

    LS CAN is recessive when CANH is 0 V and CANL is 5 V. It is dominant when CANH is 3.6 V and CANL is 1.4 V.

    You can see that they are almost the same for the dominant state but different for the recessive state. In addition, the termination scheme of the two buses is different. For an HS bus, you usually put a 120-ohm resistor between CANH and CANL at both ends of the bus. For an LS bus, you have separate termination resistors for CANH and CANL (RTH and RTL).
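    A small sketch of the voltage levels quoted above makes the incompatibility visible. The 0.9 V decision threshold here is illustrative, chosen only to separate the nominal levels in this thread, and is not taken from the ISO specs:

```python
# Nominal (CANH, CANL) line voltages in volts, as quoted in the answer above.
NOMINAL = {
    "HS": {"recessive": (2.5, 2.5), "dominant": (3.5, 1.5)},
    "LS": {"recessive": (0.0, 5.0), "dominant": (3.6, 1.4)},
}

def bus_state(canh_v: float, canl_v: float) -> str:
    """Classify a CAN bus level from its line voltages.

    Both flavours signal "dominant" by driving CANH well above CANL;
    the 0.9 V threshold is illustrative, not taken from ISO 11898.
    """
    return "dominant" if (canh_v - canl_v) > 0.9 else "recessive"

# Dominant levels are nearly identical on both buses, but the recessive
# line voltages differ completely (2.5/2.5 V vs. 0/5 V), which is why
# HS and LS/fault-tolerant transceivers cannot share a bus.
```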

  • vSphere 6.0 Update 1 fault tolerance

    Hello everyone,

    I have a problem when I activate Fault Tolerance on dv01.

    This is my physical configuration:

    • 2 x ESXi 6.0 Update 1, build 3073146
    • 1 HA cluster, active
      • Admission control disabled
      • DRS disabled
    • 1 vDS with a vMotion port group
    • 1 standard vSwitch for Fault Tolerance with 2 x Intel Ethernet Converged Network Adapter X520-T2 10 Gbps

    Dv01 Config

    • 4 vCPUs
    • 3 virtual disks stored on the first shared datastore, datastore01 (1.35 TB):
      • 136 GB thick provisioned, eager-zeroed
      • 500 GB thick provisioned, eager-zeroed
      • 600 GB thick provisioned, eager-zeroed
    • 1 VMXNET3 network adapter connected to the vDS port group
    • latest VMware Tools installed
    • Guest OS: Windows Server

    When activating fault tolerance, I chose ESXi02 and the second shared datastore, datastore02 (1.28 TB): the vmdk and vmx files for the secondary are created successfully, and 44 GB remain free on datastore02.

    But when I try to power on dv01, I get this error:

    • vCenter cannot power on the secondary VM
    • and when I look at the events, the error is "not enough space for the secondary virtual machine."

    Can someone please help me fix this? When I enabled FT on the little VM02 with 1 thin vdisk it works, but for dv01 it does not.

    Thanks in advance

    Best regards

    Hi DZinit,

    When computing the space you need on the second datastore, you have to take the .vswp file into account. This is generated at power-on and will be equal to the size of the RAM allocated to the VM.
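    A quick back-of-the-envelope sketch shows why 44 GB free can be too little. The disk sizes come from the question above; dv01's RAM size is not given in the thread, so the 64 GB figure is made up for illustration, and overhead files (vmx, logs, nvram) are ignored:

```python
def ft_secondary_space_gb(disk_sizes_gb, ram_gb, ram_reservation_gb=0.0):
    """Rough space needed on the secondary datastore for an FT VM.

    Eager-zeroed thick disks consume their full size up front, and the
    .vswp swap file is created at power-on, sized as allocated RAM minus
    any memory reservation. vmx/log/nvram overhead is ignored here.
    """
    vswp_gb = ram_gb - ram_reservation_gb
    return sum(disk_sizes_gb) + vswp_gb

# dv01's disks from the question; 64 GB RAM is a hypothetical figure.
total = ft_secondary_space_gb([136, 500, 600], 64)   # 1300 GB in total
swap_only = ft_secondary_space_gb([], 64)            # 64 GB for .vswp alone
# With only 44 GB free after the vmdks were copied, even the swap file
# alone would not fit, hence the "not enough space" error at power-on.
```

    Setting a memory reservation on the VM shrinks the .vswp file accordingly, which is one way to make the secondary fit.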

  • Xeon X3210 fault tolerance

    Hello

    I have Dell PowerEdge 840 and 860 servers, which are supposedly also compatible with ESXi and vCenter.

    I downloaded the trial version of vCenter Server and ESXi and wanted to build a cluster.

    Unfortunately, this is not possible, because the CPUs do not support fault tolerance.

    These are the hardware configurations:

    Dell PowerEdge 840 and 860

    CPU: Intel Xeon X3210 with VT enabled in the BIOS.
    RAM: 8 GB per system
    Hard drives: Dell 840: RAID 10, 4 x 320 GB
    Dell 860: RAID 1, 2 x 500 GB
    System: ESXi 4.1.0, build 381591
    vCenter 4.1, build 345043
    Network: one 100 Mb NIC for Internet traffic
    and one 1000 Mb NIC for internal network traffic.

    According to Intel, the processor above does support VT.


    Thanks for the help in advance.


    The X3210 is not supported for FT: http://KB.VMware.com/kb/1008027

  • SAN with VMware Fault Tolerance storage redundancy

    We built a new data center with the idea of using VMware Fault Tolerance to create a virtual environment with no service interruption between two data centers. The goal is to have the VMs continue to operate even with the loss of access to an entire datacenter, including its SAN storage units.

    I'm looking for a way for a vSphere 4 virtual infrastructure to write to replicated storage in two different datacenters (with low-latency connections between the data centers). We can do it for the networks and host servers, but I found nothing in the fault tolerance documentation that mentions anything other than "shared storage". We have experienced failures of individual SAN storage units (including the total loss of hundreds of virtual machines) and hope we can eliminate the storage unit as a single point of failure.

    Has anyone done this, or does anyone know how it could be done?

    Hello

    To clarify what idle-jam is talking about, you can watch this video from HP and VMware on YouTube.

    -hth

    Kind regards.

  • Fault tolerance limits

    I would like to implement fault tolerance for a couple of our virtual machines, but I have been reading and have a few questions.

    1. It is said that you must have 2 VMkernel NICs dedicated to fault tolerance. Can they be shared with the vMotion NICs?

    2. FT does not support SMP. Is this still the case? We run ESX 4 Update 1.

    3. FT does not support snapshots. vReplicator allows us to replicate our VMs, and it creates a snapshot during replication. Will that be possible or not?

    Thank you

    Scott

    1. You need at least 1 VMkernel NIC dedicated to FT (or a dedicated VLAN, if you have 10 GbE); there is a lot of traffic that will be flowing for this.

    - You can see a design on my website: http://kendrickcoleman.com/index.php?/Tech-Blog/vsphere-host-nic-configuration.html

    2. Yes. You can currently only use FT with 1-vCPU VMs. But if you have some awesome Nehalem procs, you may be fine with only 1 vCPU for your critical boxes.

    3. They will fail. Snapshots are not possible on FT VMs, and as a result backups will not work. I know it's one of the huge problems with this scenario; the best approach is to set up a script that takes a virtual machine out of FT, runs the backup, and then re-enables FT. A lot of work for the end result.

  • SAN multisite and VMware Fault Tolerance

    Hello

    My apologies if this is in the wrong section.

    I have a requirement for two main server rooms, with a SAN and ESX hosts in each room.

    In total I have, for the sake of argument, 200 VMs, with a 50-50 split, so 100 in each main room.

    I want to use VMware Fault Tolerance so that machines remain operational in the event of a major failure of either main room.

    I think I can use a multisite SAN to do this.

    Please see the attached diagram for what I want to do; I know the diagram is not correct, but I drew it quickly just to try to explain the scenario.

    What I want to know (or have help on) is:

    Is it possible to use fault tolerance in this way to have no downtime in the event of a main room failure? (The main rooms are not very far apart and use the same networks.)

    How does a multisite SAN work? Are there articles I can read to help me understand it?

    Can a multisite SAN be iSCSI or Fibre Channel?

    Thank you.

    The SAN should look like a single SAN to all the ESX hosts. It doesn't matter how you achieve that.

    > If I mirror my SAN synchronously, could I simply use FT for vCenter as well?

    It is possible for a small environment. In general vCenter requires 2 vCPUs, while FT works for uniprocessor VMs only. But do not overestimate FT; it is not a universal solution or a universal answer.

    > Also, with FT, can you specify which host in the cluster acts as the secondary?

    Directly, no. The only way is to tie the VM to a resource that is available to only 2 of the ESX hosts.

    ---

    MCSA, MCTS, VCP, VMware vExpert 2009

    http://blog.vadmin.ru

  • Fault tolerant in consolidation scenarios

    How do people handle fault tolerance in consolidation scenarios? A client may know in advance that they want to use Fault Tolerance on some servers. Obviously, using fault tolerance on a percentage of the servers has the potential to affect the number of servers required.

    The consolidation scenario does not take FT into account, AFAIK.

    You manually account for a second instance of each FT virtual machine on the ESX hosts, and you must ensure the customer is aware that there is additional CPU and network load for it.

    Kind regards

    EvilOne

    VMware vExpert 2009

    NOTE: If your question or problem has been resolved, please mark this thread as answered and awarded points accordingly.

  • Fault tolerance of "high-speed" CAN

    There is a lot of information about "low-speed CAN" being "fault tolerant." In other words, whenever CAN-H or CAN-L is shorted either to ground or to +12 V, the other, non-shorted CAN line will continue to work, though with timing problems and reduced noise immunity. But there is an absolute lack of information about high-speed CAN fault tolerance (i.e., at 125 kb/s speeds). And if high-speed CAN turns out to be as fault tolerant as low-speed, why isn't high-speed CAN also deemed "fault tolerant"?

    Can someone point me to an ISO document or other official document that speaks specifically about high-speed CAN and what happens when the CAN lines are shorted to ground or +12 V? I need an "official" source of information on this subject, not just someone's personal experience or hearsay.

    Thank you.

    ISO 11898-2:2003 should be a good source for an official answer:

    http://www.ISO.org/ISO/catalogue_detail.htm?CSNUMBER=33423

  • WLC balancing without fault tolerance

    Hello, I need 13 access points and to provide load balancing across all connected APs.

    Fault tolerance is not a concern at the present time; my reasoning is below.

    I am looking at specifying two 4402 controllers with the 12-AP license and configuring them both in a single mobility group. I would then manually specify the primary controller for each access point and distribute the APs accordingly between the two controllers, for example 7 on one and 6 on the other.

    Could I ask whether this would be an acceptable method?

    Regards

    Hi Mark,

    It is a perfectly acceptable design. If/when fault tolerance becomes a consideration, a 25-AP model can be purchased to provide failover protection for the two 12-AP WLCs.

    I hope this helps!

    Rob

  • Windows Server 2012 R2 fault tolerance

    We had a problem with fault tolerance.

    We tried to use a Windows Server 2012 R2 VM for FT, but when we tried to activate it on the virtual machine, only 1 vCPU is allowed.

    It does not accept a configuration of 2 or 4 vCPUs.

    You have probably already tried this, but are you using the Web Client?

    FT for more than 1 vCPU is something you can configure only by using the Web Client.

    Best regards

  • VSAN 6.2 "fault tolerance method" is not available

    Hello

    I installed 4 ESXi 6.0 U2 hosts, each equipped with 3 flash devices, and a VCVA 6.0 U2.

    I built the VSAN cluster without activating dedup and compression.

    But when I try to build a RAID 5 storage policy, I can't find the "fault tolerance method" parameter that enables erasure coding with RAID 5.

    I only found the "fault tolerance method" parameter after I enabled dedup and compression.

    Is this expected?

    That's surprising. I have a basic 6.0 U2 VSAN cluster where I am able to see "Fault tolerance method" with both the RAID 1 and RAID 5/6 options without activating compression and deduplication.

    Can you confirm that your disk groups were all-flash when you tried this option?

    You can log out of and back into the Web Client and check whether you are able to see it (there is a known, documented problem where some of the policies will not be visible unless you disconnect and reconnect to the user interface).

    Was it a fresh install or an upgrade? If you upgraded, did you do the on-disk format upgrade? I think the features will not appear if the disk upgrades are not done (and enabling dedup/compression automatically performs a disk upgrade).

  • Protecting two virtual machines with clustered shared disks using ESXi Fault Tolerance

    Hello everyone,

    I read the following:

    VMware vSphere with operations management: fault tolerance | United States

    The line that interests me is:

    "VMware vSphere® Fault Tolerance (FT) ensures the continuous availability of applications in the event of server failure by creating a live shadow instance of a virtual machine that is always up to date with the primary virtual machine. In the event of a hardware failure, vSphere FT automatically triggers failover, ensuring no interruption of service and preventing data loss."

    Question:

    Can FT protect two virtual machines that share disks in a cluster configuration? (A clustered SQL instance, for example.)

    > Can FT protect two virtual machines that share disks in a cluster configuration? (A clustered SQL instance, for example.)

    No, you can't, because shared-disk clustering is incompatible with FT.

    Refer to the documentation for all requirements and interoperability: fault tolerance interoperability.

Maybe you are looking for

  • HP Laserjet 2430TN wireless printing

    I have two of the above printers and need to know if there is a wireless card available for this printer so I can communicate with it wirelessly. If not, is there a way to make this printer wireless via a router or something else? Thank you!

  • Cannot delete the sync partnership

    This is the second time that this has happened after I re-installed Windows 7 to solve it the first time. When I have copied a standard file from W7 as my Pictures\Slideshows or moved to the location of Username\Downloads on a network share, it creat

  • Programs not listed in "programs and features".

    Sir, I have a dell laptop inspiron 1545... all of a sudden the programs that I installed on my win 7 32 bits do not appear in the "Programs and features" window... only 3-4 programs are visible... but infact I installed 5 GB of various programs... Wh

  • SPA122 Exchange random IP address

    SPA122 randomly changed IP address. Connected to the internet through the router and the Modem via the Ethernet Jack blue. When it is first installed - IP address was 192.168.1.116 - was supposed to be 115 according to the manual. After a few days, i

  • virus/malware reset my administrator password, how do I restore my admin account

    My Toshiba NB205 netbook running Windows 7 Starter had a virus/malware that reset my administrator privileges. No password reset disk... Unable to log on in safe mode... Impossible to activate the BIOS via ESC key during Windows st