dvUplink teaming and failover

I have a question about how dvUplink failover works...

Let's say I have this kind of configuration:

dvSwitch configured with 4 dvUplinks (call them dvu1, dvu2, dvu3, dvu4).

Group 1 with 5 hosts must use 2 of these dvUplinks as active/active (those hosts have 2 NICs mapped to dvu1 and dvu2).

Group 2 with 5 hosts needs to use the other 2 dvUplinks as active/standby (dvu3 is a 10G NIC and dvu4, on the auxiliary card, is only 1G).

The portgroup with the VMkernel interface is configured so that it has:

Active dvUplinks:

dvu1

dvu2

dvu3

Standby dvUplinks:

dvu4

On the second group of hosts (the ones with the 10G and 1G NICs), I seem to see traffic on both cards.

I expected that, having configured only dvu3 and dvu4 on these hosts, dvu4 would not be pulled in as a failover for dvu1 or dvu2, because those uplinks are not configured on this host.

Only the 2 configured dvUplink ports should be used, in active/standby as expected.

Is this the normal behavior of dvUplink ports?

Yes, this is how the teaming algorithm in ESX works.  Your policy is essentially interpreted as "try to keep 3 uplinks active".  ESX will promote a standby uplink any time it does not have 3 active uplinks, and a host with no NIC attached to an uplink port is treated the same as if there were a NIC there with its link status down.  Currently, the only way to accomplish what you really want would be to use separate vDSes for the different types of hosts.

A question for you, though: no doubt you are trying to configure it this way because you want ESX to prefer the 10G uplink on the hosts that have one.  Would an I/O scheduling scheme that reflected uplink speed meet your needs, so that, for example, if you attached a 1G and a 10G uplink active/active you would see roughly a 10:1 traffic ratio favoring the 10G?  Or do you see a need to specify teaming on a per-host rather than a per-portgroup basis?
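
For what it's worth, here is a minimal pyVmomi sketch of where the active/standby order lives. It is only an illustration: the vCenter address, credentials and portgroup name are hypothetical, error handling is omitted, and it assumes the standard vSphere distributed-switch teaming objects. The point is that the uplink order sits in the distributed portgroup's default port config, so it applies to every host attached to the vDS rather than to individual hosts:

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    # Hypothetical connection details - adjust for your environment.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the distributed portgroup by name ("vmk-portgroup" is a made-up name).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in view.view if p.name == "vmk-portgroup")

    # The teaming policy is part of the portgroup's default port configuration,
    # which is why it is defined once per portgroup rather than once per host.
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = pg.config.configVersion
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy()
    order.activeUplinkPort = ["dvu1", "dvu2", "dvu3"]   # active uplinks
    order.standbyUplinkPort = ["dvu4"]                  # standby uplink
    teaming.uplinkPortOrder = order
    port_cfg.uplinkTeamingPolicy = teaming
    spec.defaultPortConfig = port_cfg
    pg.ReconfigureDVPortgroup_Task(spec=spec)

A host whose vmnics are not mapped to dvu1/dvu2 simply shows those uplink ports as link-down, which is what triggers the standby promotion described above.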

Tags: VMware

Similar Questions

  • Teaming and failover for the uplink port group on the distributed switch

    Hello

    I have a problem with the implementation of a distributed switch, and I must be missing something!

    I have a few hosts, each with 4 physical NICs. On each host I configured 2 virtual switches (say A and B), with 2 physical adapters per vSwitch using EtherChannel. Everything works fine with EtherChannel and "route based on IP hash" on those.

    Recently, I decided to create two distributed switches and move the respective physical ports of the virtual switches to these distributed switches. Once again, I want to configure EtherChannel and "route based on IP hash". But when I open the settings for the uplink port group, the teaming and failover policies are grayed out and cannot be changed. Apparently they inherit their configuration from somewhere, but I don't know where!

    Chantal says:

    Once again, I want to configure EtherChannel and "route based on IP hash". But when I open the settings for the uplink port group, the teaming and failover policies are grayed out and cannot be changed. Apparently they inherit their configuration from somewhere, but I don't know where!

    You must set the NIC teaming policy on the actual distributed portgroups, not on the uplink port group as you might expect.
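
    To illustrate, here is a rough pyVmomi sketch only (it assumes a distributed portgroup object 'pg' obtained as in any pyVmomi inventory walk, and assumes 'loadbalance_ip' is the API value for "Route based on IP hash"): the policy is applied to the regular distributed portgroup, while the auto-created uplink portgroup just inherits and stays read-only.

    from pyVmomi import vim

    # 'pg' is the distributed portgroup carrying the VM/VMkernel traffic,
    # NOT the auto-created uplink portgroup (whose teaming settings are greyed out).
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = pg.config.configVersion
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    teaming.policy = vim.StringPolicy(value="loadbalance_ip")  # route based on IP hash
    port_cfg.uplinkTeamingPolicy = teaming
    spec.defaultPortConfig = port_cfg
    pg.ReconfigureDVPortgroup_Task(spec=spec)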

  • Guided Consolidation and converter plug missing at vSphere5

    I just upgraded to vSphere 5, and the Guided Consolidation and Converter plug-ins are both missing from the list of available plug-ins.  Have I missed something, or are they no longer available?

    I hope not because I've used both of these a little

    Hello.

    You're not missing anything; they are both no longer available.

    These announcements were made last year in the VMware vSphere 4.1 Release Notes.

    The latest stand-alone version of Converter is available to replace the VMware vCenter Converter plug-in.

    Good luck!

  • HP Flex10 and recommended vDS config for HA and failover?

    In the last two days I have experienced connectivity issues with my VMs, and I think I have narrowed it down to beacon probing when it is used as the failover detection mechanism in my port groups / VLANs. I guess the broader question is "what are the recommended failover settings for my hardware configuration"...

    My ESXi hosts run 5.0.0 U1 or 5.1.0, and I have just moved to vCenter 5.1.0a. The hosts are BL460c G7 blades in a c7000 chassis with Flex10 interconnects. Each ESXi host has a server blade profile with 4 FlexNICs: one pair for all VM data traffic (several tagged VLANs), which belongs to a vDS. The second pair is used for all management traffic and (I have reset it for the moment) is on a stand-alone vSwitch with untagged traffic.

    The uplinks from each interconnect go to an Avaya VSP9000 switch. The two VSP9ks are connected by an IST trunk with all the VLANs tagged.

    Initially, I traced the source of my dropped traffic to a virtual machine's MAC/ARP entry appearing to flip-flop between the local 10 GB link on one Avaya and the IST link to the other Avaya (i.e. the uplink of the other interconnect module was being used), which can cause a small temporary loop.

    No matter which load-balancing method I configure in the PG (sticking to those supported by VC), and regardless of whether I have the two dvUplinks marked as active, or one marked as active and the other as standby, I see occasional flapping when beacon probing is used.

    I've set up the virtual networks in Virtual Connect to use Smart Link, so I would hope that "link status only" should be enough.

    Now, while I still don't entirely understand beacon probing (I've read about it in a few posts), I was hoping it could provide a little more resilience on top of having Smart Link set up on the VC side.

    So for those of you who have the same hardware configuration, how have you configured yours?

    Long term, I would like to bring the management traffic inside the vDS as well, mainly to have more flexibility in managing bandwidth rather than having a slice of the 10 GB uplink carved out at the Virtual Connect level. I tried to move management into the vDS but failed miserably, and I think it comes down to a chicken-and-egg scenario where I performed the steps in the wrong order... I'll look into that once the fundamental network configuration problem above has been sorted.

    Thank you

    Regarding overhead... NIOC is good, but the general rule is that we offload the ESX kernel wherever we can; the more features we do in software, the more overhead there will eventually be, and if the physical hardware can do the same thing, that is better. This is why VMware now has VAAI, SR-IOV, CPU/memory offloading... If someone does not have blades and they have 10 GigE, then NIOC is the only option.

    http://pibytes.WordPress.com/2013/01/12/multi-NIC-VMotion-speed-and-performance-in-vSphere-5-x/

    Management and vMotion on the same VLAN (but different IP addresses) - it is recommended to use separate VLANs from a security perspective, and in production this is how we do it. They can share the same NIC, no problem, but the traffic should be tagged. 2 NICs with 2 GbE is good - I recently benchmarked vMotion speed on blades; you can see more info on my blog - http://pibytes.wordpress.com/2013/01/12/multi-nic-vmotion-speed-and-performance-in-vsphere-5-x/

    So a few questions:

    • I understand that you recommend keeping the management / vMotion traffic on a separate pair of FlexNICs so as not to add extra load within VMware.

    It is not mandatory that we use separate NICs... it's all based on the customer's environment. But do use separate VLANs; of course VST tagging consumes CPU cycles, but that is negligible. In that case you can use 2 NICs with 2 GbE to combine mgmt/vMotion.

    • Is there an advantage to splitting management and vMotion into different VLANs while keeping them on the same uplinks (hence VLAN tagging)?

    Security-wise and as a best practice in production we use separate VLANs, and we can share the same NIC. There are some use cases, such as when you have 100 hosts and several vMotions may be happening at the same time; in that case we use a dedicated pSwitch so that vMotion traffic will not flood the main switch, and dedicated NICs. In a small, well-balanced environment where the cluster CPU/RAM is not oversubscribed, there is very little vMotion happening, so we can share the same NICs and the same pSwitch.

    • Would we be better off using two different pairs of FlexNICs, one for management and one for vMotion, i.e. 3 pairs of FlexNICs in total (1 for all the data VLANs, one untagged for management and a third untagged for vMotion)? If so, I guess you'd assign different bandwidths in VC to keep vMotion nice and fast?

    According to the benchmarking I did, if we give 2 GbE to vMotion it is already plenty fast... and again, as I said above, we can combine them, or just give 2 NICs with 500 Mb of bandwidth for mgmt, 2 Gb for vMotion, and the rest for VM traffic. That is a good design: here we have isolated things physically and VLAN-wise as much as we need to.

    For the slow console response... check DNS resolution; it has nothing to do with bandwidth. Also check the vCenter CPU/RAM and the vCenter database CPU/RAM.

  • AX150i cluster and failover problems

    Hi all.

    I am trying to implement failover clustering on Win2003 x64 with an AX150i.

    We already have a cluster configured with another AX array, so I know this one is not working properly.

    The problem is that when I create a cluster, one node owns the disks and the other should see the drives but have no rights over them, except to wait for them to be released by the first node. But in my setup, when the first node takes ownership of the disks, the disks on the other node start to fail and become unavailable; the cluster service sees this and stops, so the second node becomes unavailable too.

    I have the latest firmware, the latest PowerPath, and the latest Microsoft iSCSI initiator installed.

    Any suggestion would be appreciated.

    Thank you

    Regards,

    Amar


  • Physical connectivity of fabric interconnects with MDS and failover - please advise

    Dear team

    We have 2 FIs, 2 MDS switches and 1 SAN.

    Current connectivity is:

    2 physical connections from FI-A direct to MDS1

    2 physical connections from FI-B direct to MDS2

    MDS1: 1 connection to the primary controller of the SAN

    MDS1: 1 connection to the secondary controller of the SAN

    MDS2: 1 connection to the primary controller of the SAN

    MDS2: 1 connection to the secondary controller of the SAN

    I hope the above connectivity is correct?

    We had looked at

    http://www.Cisco.com/en/US/prod/Collateral/ps4159/ps6409/ps5990/white_paper_c11_586100.html (as a team, we always prefer to follow Cisco standards)

    We can see the FIs as primary and subordinate; do we need the same here for the MDS, or will it be handled through the FIs?

    If not, do these 2 MDS switches work independently, or is the configuration shared between them?

    Also, whatever changes (zone creation) we make on MDS1 - I hope they will get replicated to MDS2 through the FIs?

    If MDS1 fails, will all of its configuration be available on MDS2 so that the infrastructure keeps working smoothly, and vice versa?

    Are there any additional steps to be performed to achieve this?

    What is the best way to do this? Please advise.

    Thanks and regards

    Jose

    Hi Jose.

    Physically, your connection is good.

    On the UCS side, the roles of "Primary" and "Subordinate" refer only to management of the system and to which device is actually running UCS Manager.

    Each MDS device will have a separate configuration (zoning); it is different between the 2 devices.

    The blade itself will have a connection to each side, or SAN "fabric".

    For example:

                     +---------+
                     |   SAN   |
                     +----+----+
                          |
                +---------+---------+
                |                   |
           +----+----+         +----+----+
           | MDS - A |         | MDS - B |
           +----+----+         +----+----+
                |                   |
           +----+----+         +----+----+
           | UCS - A |         | UCS - B |
           +----+----+         +----+----+
                |                   |
             VSAN100             VSAN200
                |                   |
                +------> Blade <----+

    The blade will have an HBA on Fabric A (VSAN100) and one on Fabric B (VSAN200).

    Each HBA has a different WWPN, and on the SAN array each controller will have a WWPN.

    So on MDS-A, the zoning will be:

    Blade WWPN A

    Storage primary controller WWPN A

    Storage secondary controller WWPN A

    On MDS-B, the zoning will be:

    Blade WWPN B

    Storage primary controller WWPN B

    Storage secondary controller WWPN B

    So the configuration is *not* synchronized between the two MDS devices, but each of them has visibility to one of the blade's vHBA devices.  At the blade level, the multipathing software in the operating system will handle any failover.

    On the UCS, we would usually use a port channel to the MDS.

  • Peer VPN and failover

    Is it possible to have redundancy - say HSRP - within a VPN infrastructure? In other words, could the peer IP address be an HSRP or VRRP VIP? If not, and you wanted redundancy across two VPN routers, what mechanism would be used for failover? Thank you.

    I have in fact recently been looking into this myself, and there are a few options depending on your platforms and design.

    Stateful VPN failover on 7200 and 3600 head-ends. This allows stateful failover of the IPsec tunnels between a primary and a secondary router.

    http://www.Cisco.com/en/us/products/SW/iosswrel/ps5207/products_feature_guide09186a00802d03f2.html

    IPsec failover using HSRP and reverse route injection. Stateless, IOS-based tunnel failover. Closer to what you want if you're using IOS VPN.

    http://www.Cisco.com/en/us/Partner/Tech/tk583/TK372/technologies_tech_note09186a00800942f7.shtml

    As I use ASAs at the head end and IOS at the remote end, I'm currently looking at using static virtual tunnel interfaces on the remote sites, with HSRP tracking these VTI interfaces so that failover is based on tunnel status. I'm not quite sure that HSRP can track VTI interfaces, but I assume it can.

    http://www.Cisco.com/en/us/products/SW/iosswrel/ps5207/products_feature_guide09186a008041faef.html

    The only other issue that leaves me with is how the ASA handles routing when it has several tunnels to two different endpoints. Anyone know?

  • ASA 8.3 - WebVPN and failover (Act/Stby)

    In older versions of the code, WebVPN wasn't a supported feature with failover on the ASA; however, in 8.x, and specifically 8.3, the release notes no longer list it as an unsupported feature. Does that mean WebVPN is fully supported with failover (Act/Stby) in 8.3?

    On my 8.3 Act/Stby failover pair I can see the basic "CLI" WebVPN config replicate, as you would expect, but I don't see the XML config files (used in the 8.x train) for things such as portal customization or bookmarks being synchronized by the ASA.

    When I try to view the XML-based WebVPN config using ASDM, it churns away and eventually times out when I try to browse the portal customization or bookmarks.

    Do the XML-based WebVPN config files get replicated in a failover pair?

    Or, if not, how do the contents get onto the other box?

    Thank you

    SEZ

    The following document states:

    "In Version 8.0 and later, some elements of the configuration for WebVPN (such as bookmarks and personalization) use VPN failover subsystem, which is part of Failover Stateful." You use Stateful Failover to synchronize these items among members of the failover pair. Stateless (regular) failover is not recommended for WebVPN. »

    http://www.Cisco.com/en/us/docs/security/ASA/asa83/configuration/guide/ha_overview.html#wp1078936

    If you have Stateful Failover enabled and the WebVPN portal bookmarks and customization are still not replicated to the standby, I suggest that you open a TAC case to investigate the issue.

  • ASA VPN load balancing and failover

    Hi all.

    We have two ASA 5520s configured as primary and standby units in a failover configuration, and everything works fine.

    Is it possible, with this configuration (failover), to also configure VPN load balancing/clustering?

    Thank you

    Daniele

    Hi Daniele,

    You cannot run both features, VPN load balancing and failover, on just two ASA firewalls.

    If you need to use both features, you must use at least three ASA firewalls: the first two ASAs will work as a failover pair, and a third ASA will work with them as a member of the VPN cluster. The following example uses four firewalls:

    ASA1 (FO active) - ASA2 (FO standby)
    (VPN virtual cluster master)
                  |
                  |
    (Backup VPN device)
    ASA3 (FO active) - ASA4 (FO standby)

    Kind regards

    Wajih

  • Physical mode RDM and failover

    Are there any caveats when you create a cluster using physical mode RDMs that would prevent those virtual machines from failing over to other hosts?  In an HA event, will the VM be able to connect from the new host, as long as the SAN LUN masking and zoning are correct?

    The main one you already highlighted, and it is a general HA and vSphere best practice: make sure all LUNs are presented to all hosts in the cluster.

    Otherwise, there is nothing beyond standard HA best practices (same networks, port group names, etc.). To make sure you have no problems, you could start your VM on each host in the cluster to confirm there are no issues, but that shouldn't be necessary.

  • Disk consolidation needed and an active snapshot?

    Hi guys

    I have a virtual machine with a warning that says "virtual machine disk consolidation is needed", but the VM also has an active snapshot (an old borked one, incidentally).

    So what should I do first: remove the snapshot manually, or consolidate the disks?

    Thank you very much

    Please provide some information, such as a list of the files in the virtual machine's folder on the datastore, and post (attach) the latest vmware.log file. How much free disk space do you have on the datastore?

    André

  • Consolidation and minority interest calculation

    Hi people. I'm new to HFM. I'm kind of stuck with the minority interest calculation and its consolidation. After looking at the various posts on minority interest calculation here, I'm still confused about whether we can put the minority interest calculation expression in Sub Consolidate() or whether we should include it in Sub Calculate(). Please find below the script I'm working on.

    SCRIPT:

    Sub Consolidate()

        ' Ownership percentages from the consolidation node
        Method = HS.Node.Method("")
        PCon = HS.Node.PCon("")
        POwn = HS.Node.POwn("")
        vMIN = 1 - HS.Node.POwn("")
        PMin = PCon - POwn
        Dim strAccount, i

        Set DataUnit = HS.OpenDataUnit("")
        NumItems = DataUnit.GetNumItems
        For i = 0 To NumItems - 1
            Call DataUnit.GetItem(i, strAccount, ICP, Custom1, Custom2, Custom3, Custom4, Data)

            If Method = "Holding" Then
                Call HS.Con("", PCon, "")
            End If

            If Method = "Global" Then
                If strAccount = "281100" Then
                    Call HS.Con("A#281100", PMin, "PMin")
                    'Call HS.Con("A#281000", vMIN, "")
                Else
                    Call HS.Con("", POwn, "")
                End If
            End If

            'If HS.Account.IsConsolidated(strAccount) And Data <> 0 Then
            '    If Data <> 0 Then
            '        Call HS.Con("A#" & strAccount, POwn, "")
            '        If strAccount = "281100" Then
            '            Call HS.Con("", PMin, "")
            '        End If
            '    End If
            'End If

        Next

    End Sub


    Sub Calculate()

        HS.Exp "A#281100 = A#CapitalStock - A#Investments"

    End Sub

    My doubts:

    1. Is this the correct expression for the minority interest calculation?
    2. If so, can we use it in Sub Consolidate()?
    3. Will the minority interest consolidate to its parent using PMin? If so, kindly provide the appropriate expression.

    Looking forward to some help. Thanks in advance.

    1. About the Value dimension: no doubt you must consider which other Value members you include in the If statement, as below:

    If vValueMember = "" Or vValueMember = "" Or HS.Value.IsTransCurAdj() Or vValueMember = "[Parent Adjs]" Then

    It is essential to control which Value members your code applies to, since you would not want it to apply to all of them (as in the case of [Proportion], which we excluded in the code I sent you).

    2. About parent members: if the minority interest account is a parent account, you must apply all the calculations to the base accounts (its children). HFM rules cannot write calculation results to parent dimension members (accounts, ICP, and custom dimensions). Parent dimension members always get their values from HFM aggregating their children.

    -Kostas

  • Standard virtual switch teaming and failover/failback with switch stacks

    Hello

    Quick question about standard virtual switches and NIC teaming/failover.

    I have 4 physical NICs connected to a single vSS; two are defined as active adapters (NICs A & B) and two are defined as standby adapters (NICs C & D). The two active adapters are connected to a primary switch stack using a cross-stack EtherChannel, and the two standby adapters are connected to a standby switch stack, also using a cross-stack EtherChannel. The primary and standby switch stacks are not connected to each other (in a stacking sense) but are connected to the same physical network.

    Here's my question - if NIC A fails, NIC C or D will become active. Let's say, for argument's sake, adapter C becomes active. So NICs B and C are now in play (as far as VMware is concerned). When C becomes active, the network sees adapters B and C as a team even though they span two different switch stacks - will this create duplicate MACs on the network? If yes, what is the proper way to fail over across multiple cross-stack EtherChannel teams?

    The physical switch stacks are Cisco Catalyst 3750s.

    Thank you!

    Steve

    ...why do you even bother creating EtherChannels...

    There may be special use cases where you have high traffic going out to a lot of targets with different IP addresses. Personally, I have never used EtherChannel in an ESX(i) environment to date.

    With the default policy, VMware uses a round-robin mechanism, where a VM's virtual NIC is assigned to a physical uplink when the virtual machine starts. This is usually sufficient to balance traffic. With ESXi 4.1 and the appropriate license (a vDS is required), VMware introduced a new load-based teaming policy, which can also be interesting to look at.

    And, when using an EtherChannel, why is IP hash the only balancing technique?

    See: http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf

    André
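
    As a small, hedged illustration of the policies mentioned above: assuming a distributed portgroup object 'pg' obtained with pyVmomi as in typical inventory scripts, and assuming the string values are 'loadbalance_srcid' for the default port-ID assignment, 'loadbalance_ip' for IP hash with EtherChannel, and 'loadbalance_loadbased' for the newer load-based teaming on a vDS, you can inspect what a portgroup currently uses:

    # 'pg' is a vim.dvs.DistributedVirtualPortgroup; print its current teaming settings.
    teaming = pg.config.defaultPortConfig.uplinkTeamingPolicy
    print(pg.name, "policy:", teaming.policy.value)
    print("active uplinks:", teaming.uplinkPortOrder.activeUplinkPort)
    print("standby uplinks:", teaming.uplinkPortOrder.standbyUplinkPort)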

  • Server consolidation and virtualization - where to start?

    Hi guys,

    Still working on virtualization for my college project - what is the best way to determine how to consolidate a small group of underutilized (physical) servers? The servers have different operating systems, and one or two applications are installed on each system. CPU utilization varies from 1% for some servers to 65% in one case. How do I analyze which servers can be consolidated? More importantly, how do I determine the hardware configuration required to host these virtual servers?

    Well, you need to do an "assessment". You could do it using VMware Capacity Planner, or third-party products such as PlateSpin Recon, Lanamark, 5nine, and others.

    These products let you monitor usage and create reports that will tell you what hardware fits your needs.

    Regards,

    Jose Ruelas B

    http://aservir.WordPress.com

  • Which will be the best option with Oracle 11g DB load balancing and failover?

    We use WebLogic 10.3 with MultiPools, with each data source pointing to a RAC node. The MultiPool handles the failover and load-balancing scenarios. With an 11g database we now have the ability to use SCAN, where WebLogic points to a single service and load balancing/failover happens on the DB side.

    Published by: user1122 on April 12, 2010 10:47

    Stick with MultiPools for now. SCAN will confuse WLS pools and will harm XA if you are using XA, because WLS assumes that all connections in a pool are identical. If some are bad, WLS assumes that all of them are, and may kill them all. We try to ensure that all the work in a given transaction goes to a specific RAC node, but if the pool is heterogeneous we cannot guarantee that, and depending on the version of the DBMS it will suffer performance issues at best and can corrupt data at worst...
