Multi-NIC vMotion with ESXi/vCenter 4.1

We are running ESXi and vCenter 4.1, and after taking the vSphere 5.5 class and with my exam coming up in a few weeks, I have been actively trying to improve our environment. Before I started studying and trying to learn more about VMware, we were in pretty bad shape: mismatched hardware (AMD vs. Intel CPUs, different Intel CPU generations, different amounts of RAM and CPU), mismatched hypervisor versions (ESXi and ESX), no redundancy, no vMotion, and TONS of snapshots used as backups.

In the two weeks since my course, I have eliminated all snapshots (and run a daily vCheck to monitor the health of the environment), migrated to 5 similar hosts (matching memory/CPU configurations), connected all 6 NIC ports to 2 x Cisco 3560G switches (and brought the second switch online), upgraded ESX to ESXi 4.1 and patched all hosts with Update Manager (which nobody had used before), created host profiles and checked compliance on the cluster and hosts, enabled DRS and HA, set up a couple of vApps for STM systems... the list goes on.

I still have a lot to learn, but now I'm a bit confused about one thing...

We use a Fibre Channel SAN, and once we get our second Fibre Channel switch hooked up for redundancy (and, I guess, multipathing?), I have a couple of questions...

1. Setting up the second fibre switch would give my datastores multiple paths, correct?

2. Can I create a separate vMotion network in our configuration WITHOUT using the FC? Does any (vMotion) traffic flow through the vSwitches, or does it stay behind the FC switch?

- I know that with iSCSI you would want to create a separate vSwitch and set up multi-NIC vMotion.

3. For redundant management interfaces, do I need to create two vSwitches, each with a VMkernel management port and its own IP address, or just create one vSwitch with one VMkernel port and two NICs assigned to it (two physical NICs connected to 2 different physical switches)?

- We will most likely use VST if we can get the trunk ports to pass default VLAN traffic, so I think it is still acceptable to create separate vSwitches for management, vMotion (if it's even needed, given the FC) and the VM port group? The designs I see online usually use only one vSwitch for VST with multiple port groups.

That's all I can think of for now... just a few things I need clarified. I guess I still need a vMotion vSwitch (with 2 of the 6 NICs allocated to it) because some kind of traffic would pass over it, but I think most of the vMotion and all of the Storage vMotion would stay behind the FC switch.

Thanks for any help!

Regarding the topic of the discussion: Multi-NIC vMotion was introduced with vSphere 5.x and is not available in earlier versions.

1.) Multipathing is not related to the number of FC switches, but only to the number of initiators and targets. However, using several FC switches increases availability due to redundancy.
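
As a quick sanity check once the second FC switch is cabled in, a PowerCLI sketch along these lines (the host name is a placeholder, and this assumes an existing connection to vCenter or the host) lists how many paths each FC LUN has, so you can confirm the datastores really do see multiple paths:

    # Sketch only: count the storage paths per LUN on one host (host name is a placeholder)
    $esx = Get-VMHost -Name "esx01.example.com"
    Get-ScsiLun -VMHost $esx -LunType disk | ForEach-Object {
        $paths = @(Get-ScsiLunPath -ScsiLun $_)
        "{0} : {1} path(s)" -f $_.CanonicalName, $paths.Count
    }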

2.) You need to differentiate here. vMotion is the live migration of a running VM (i.e. only its workload) to another host, and vMotion only uses the network. Storage vMotion, on the other hand, generally uses the storage connections - i.e. the FC in your case - to migrate the virtual machine's files/folders.
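
To make the distinction concrete, here is a hedged PowerCLI sketch (the VM, host and datastore names are placeholders): changing the host is a vMotion and goes over the VMkernel network, while changing the datastore is a Storage vMotion and moves the files over the storage paths (FC in your case):

    # vMotion: live-migrate the VM's running workload to another host (uses the vMotion network)
    Move-VM -VM "MyVM" -Destination (Get-VMHost -Name "esx02.example.com")

    # Storage vMotion: relocate the VM's files to another datastore (uses the storage/FC paths)
    Move-VM -VM "MyVM" -Datastore (Get-Datastore -Name "Datastore02")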

3.) Redundancy for management traffic can be achieved in several ways. The easiest is to simply assign multiple uplinks (vmnics) to the Management Network's vSwitch. So a single 'Management Network' will do, and redundancy is handled by the vSwitch's failover.
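
For example, giving the management vSwitch a second uplink could look roughly like this in PowerCLI (the host, vSwitch and vmnic names are assumptions to adapt to your hosts):

    # Assign two uplinks to the vSwitch that carries the Management Network
    $esx = Get-VMHost -Name "esx01.example.com"
    $vsw = Get-VirtualSwitch -VMHost $esx -Name "vSwitch0"
    Set-VirtualSwitch -VirtualSwitch $vsw -Nic vmnic0, vmnic1 -Confirm:$false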

From a design point of view, you can use multiple vSwitches for different traffic types, or combine them on one vSwitch by configuring the failover policies (Active/Standby/Unused) per port group, for example.
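
For instance, on a shared vSwitch the failover order of a single port group can be pinned like this (a rough sketch; the host, port group and vmnic names are placeholders):

    # Pin the vMotion port group: vmnic1 active, vmnic0 standby, vmnic2 not used at all
    $esx = Get-VMHost -Name "esx01.example.com"
    Get-VirtualPortGroup -VMHost $esx -Name "vMotion" |
        Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic0 -MakeNicUnused vmnic2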

André

Tags: VMware

Similar Questions

  • Multi-NIC vMotion question

    BACKGROUND

    I'm setting up a new vSphere 5 environment with a couple of ESXi 5 hosts to test how things work in this latest version.

    I use HP DL385 G7 servers that have 4 onboard NICs, plus a quad-port NIC card on which I will implement EtherChannel to serve my VM networks. I intend to use the 4 onboard NIC ports for vMotion and my Service Console (management).

    CONFIGURATION

    For vMotion and the Console, I built a single standard vSwitch on each host and created 4 VMkernel ports bound to my 4 physical NICs. 3 NICs are configured for vMotion, each with the other vMotion NICs defined as standby. The 4th NIC is configured for management only, with the 3 vMotion NICs as standby.  I use separate VLANs for the Console NIC and the vMotion NICs.

    My hosts are in a cluster with DRS enabled.

    All this seems to work well.

    TEST

    I have a couple of VMs on this cluster (2 hosts), so I use PuTTY to SSH into one of my hosts. I run esxtop and press N to view the network stats. Here, things seem normal.

    I take a VM and vMotion it to the host that I am monitoring. I see all the traffic land on a single vMotion NIC instead of across my 3 vMotion NICs.  This was not expected.  NOTE that this is a VM being vMotioned TO my monitored host.

    When I vMotion a VM off of the host that I am monitoring, all 3 vMotion NICs light up and are used. This was as expected.

    SUMMARY

    Multi-NIC vMotion seems to work only when migrating a virtual machine off of a host. The host that the virtual machine is migrated to still receives the virtual machine on just the one primary NIC.

    I have a call open with VMware and they are investigating whether this is normal or whether there is a problem.

    URDaddy wrote:

    Rickard,

    Mine looks the same, but note that the receive side (on the right) only receives on vmnic2

    In the picture on the right I only see MbTX/s (transmit); can you expand it so we can also see MbRX/s (receive)? The other NICs could actually be in use...
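
    To cross-check what esxtop shows, a small PowerCLI sketch like this (the host name is a placeholder) lists which VMkernel ports actually have vMotion enabled and which port group they sit on:

        # List the VMkernel adapters with vMotion enabled on one host
        $esx = Get-VMHost -Name "esx01.example.com"
        Get-VMHostNetworkAdapter -VMHost $esx -VMKernel |
            Where-Object { $_.VMotionEnabled } |
            Select-Object Name, PortGroupName, IP, Mac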

  • vMotion with ESXi 4.0

    We currently have one host running vSphere ESX. We also have another vSphere license with the intention of building a second host soon. One of the things I want to start using is vMotion and Storage vMotion, and obviously with this second host I can start doing that. My question is, can I do all this with ESXi 4.0 as well? If all I want to do is create virtual machines and vMotion them quickly between the hosts, will the free option allow me to do that? Does ESXi 4.0 support vMotion and Storage vMotion?

    What drawbacks are there with ESXi? I'm looking to save money on licensing costs and put the savings toward more SAN storage.

    Thank you very much to anyone who answers with the information!

    So yes, you can vMotion between both versions.

  • Multi-NIC vMotion in vDS

    In the article, 2 port groups are configured, with NIC 1 as active and NIC 2 as standby in the first port group and the reverse in the second port group.  How would that be different from setting up a single port group and adding NIC 1 and NIC 2 as active adapters?   What are the advantages and disadvantages of each approach?

    http://KB.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007467

    Note: I moved this question to the vMotion forum.

    Hi greenpride32

    This ensures that the VMkernel ports are active across both uplinks, by forcing the active/standby configuration.

    Otherwise, there is a chance that both VMkernel ports could choose the same uplink, eliminating the advantage of having 2 vMotion VMkernel ports.
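
    As a rough PowerCLI illustration of the pattern from that KB article (the distributed port group and uplink names below are assumptions, not a confirmed layout), the two vMotion port groups get mirrored active/standby uplinks:

        # vMotion-A: Uplink1 active, Uplink2 standby
        Get-VDPortgroup -Name "vMotion-A" | Get-VDUplinkTeamingPolicy |
            Set-VDUplinkTeamingPolicy -ActiveUplinkPort "Uplink1" -StandbyUplinkPort "Uplink2"

        # vMotion-B: the reverse, Uplink2 active and Uplink1 standby
        Get-VDPortgroup -Name "vMotion-B" | Get-VDUplinkTeamingPolicy |
            Set-VDUplinkTeamingPolicy -ActiveUplinkPort "Uplink2" -StandbyUplinkPort "Uplink1"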

  • Configuration guide for ESXi 5 (VM Network, management and vMotion) with 2 NICs

    Hello

    I have 4 NICs in blade server 7 (ESXi 5). I would like to dedicate 2 NICs to the VM Network, management and vMotion, and 2 NICs to iSCSI traffic with an EqualLogic SAN.

    I have the EqualLogic guide for configuring ESXi with it, but how do I configure the VM Network, management and vMotion with 2 NICs? My priority is excellent speed for my virtual machines, above everything else.

    Then just go with a classic (standard) vSwitch.

    The configuration depends a lot on the existing infrastructure: trunking, the physical switches for network redundancy and load balancing, 100 Mb or 1 GbE networking, the number of virtual machines, and so on. If it is a new configuration, I suggest you trunk the 2 available vmnics (for the VM network) for load balancing and better performance.
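
    A minimal PowerCLI sketch of that layout, assuming VST on trunked ports (the host name, vSwitch name, VLAN IDs, vmnics and IP addressing below are placeholders to adapt):

        # One standard vSwitch carrying both vmnics, with VLAN-tagged port groups per traffic type
        $esx = Get-VMHost -Name "esx01.example.com"
        $vsw = New-VirtualSwitch -VMHost $esx -Name "vSwitch1" -Nic vmnic0, vmnic1

        New-VirtualPortGroup -VirtualSwitch $vsw -Name "VM Network" -VLanId 10
        New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vsw -PortGroup "vMotion" `
            -IP 192.168.30.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
        Get-VirtualPortGroup -VMHost $esx -Name "vMotion" | Set-VirtualPortGroup -VLanId 30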

  • Storage vMotion possible with ESXi Std Edition or not?

    Hi all

    We have two VMware deployments, with ESXi Standard edition licenses installed at both sites. One site has vCenter Server Foundation edition while the other has vCenter Server Standard edition.

    The problem is that the site with vCenter Std and vSphere 5 Std doesn't allow Storage vMotion (see image below)

    No Storage VMotion.jpg

    While, on the other hand, the site with vSphere 5 Std and vCenter Foundation allows Storage vMotion (see image below)

    Storage VMotion.jpg

    How is it that the same vSphere Standard edition has different features in different places?

    I had logged a call and found the solution... vSphere 5.0 and 5.1 really do provide different feature sets with the same vSphere Std license. Take a look at the links below for the options available with vSphere 5.0 and 5.1.

    1. Compare vSphere 5.0 editions

    Compare VMware vSphere editions for Cloud Computing, server and data center virtualization

    2. Compare vSphere 5.1 editions

    Compare VMware vSphere editions: managed Virtualization & Cloud Computing

    For a test, I used the same Standard edition license with a 5.1 server and it worked well, with the Storage vMotion option available :-)

  • Proper installation and use of vMotion with 4 NICs

    I have a new two-server VMware ESXi 5.1.0 installation.  Each server has 4 NICs.  I currently have two NICs configured for iSCSI and two NICs set up for the management network.  What is the best way to add vMotion in this environment?  Should I include it with the iSCSI or the management network?  Does the vMotion network have to be on the same IP network as the management or iSCSI network, or can it be its own IP subnet?

    I have a second environment where each server has 8 NICs; there, vMotion is on its own network with its own NICs, hence my lack of knowledge here.

    Thank you

    Justin H.

    Welcome to the communities...

    My suggestion would be as below:

    What is the best way to add vMotion in this environment?  Should I include it with the iSCSI or the management network?

    I suggest including vMotion with the management (administrative) network traffic: create a second VMkernel port group for vMotion on the management network's vSwitch.

    For the management network -> set NIC1 as active and NIC2 as standby

    For the vMotion network -> set NIC2 as active and NIC1 as standby

    Does the vMotion network have to be on the same IP network as the management or iSCSI network, or can it be its own IP subnet?

    Best practice is to have a separate network for vMotion traffic
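
    A hedged PowerCLI sketch of that suggestion (the host name, vSwitch name, vmnic numbers and the vMotion subnet below are placeholders): add a vMotion VMkernel port group to the existing management vSwitch on its own subnet, then mirror the active/standby NIC order of the two port groups:

        $esx = Get-VMHost -Name "esx01.example.com"
        $vsw = Get-VirtualSwitch -VMHost $esx -Name "vSwitch0"   # the existing management vSwitch

        # vMotion VMkernel port on its own subnet (addresses are examples only)
        New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vsw -PortGroup "vMotion" `
            -IP 192.168.50.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true

        # Management: vmnic0 active / vmnic1 standby; vMotion: the reverse
        Get-VirtualPortGroup -VMHost $esx -Name "Management Network" | Get-NicTeamingPolicy |
            Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic1
        Get-VirtualPortGroup -VMHost $esx -Name "vMotion" | Get-NicTeamingPolicy |
            Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic0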

  • vCenter 5.5 with ESXi 5.1

    Hello

    Just to confirm, does vCenter 5.5 (Standard Server) work with ESXi 5.1?  Looking to add another host and get my feet wet with vCenter.

    Thank you
    TT

    Yes, vCenter 5.5 works with ESXi 5.1.

  • Converter keeps crashing or hanging when pushing an existing VMX and VMDK into the environment

    Trying to use Converter to push an existing VMX and VMDK file into the environment. I have tried Converter 4.3 and 5.0, and it keeps crashing or hanging. The conversion is going to a vCenter Server 5.1 with ESXi 5.1 hosts.

    Take a look at this article. You can try Converter Standalone 5.0.1 or the 5.1 beta.

    http://KB.VMware.com/kb/2033315

  • Cisco UCS B200 blade with vCenter installed

    Can't I put vCenter on one of the blades in the 5108 chassis and have it control the ESXi hosts on the other blades? If so, is that possible from a design standpoint? Do I install vCenter on a single blade alongside ESXi, or without ESXi?

    I ask because my only experience is having vCenter as a separate machine which connects over the network to control the ESXi hosts in the 5108 chassis.

    Thank you!

    As mentioned above, it is possible.

    Also, as a design best practice, to a certain extent it is preferable to have vCenter in a virtual machine to take advantage of features such as vMotion and HA, to reduce the risk of losing your vCenter because of a hardware failure, for example.

    You could have a blade for vCenter in one chassis and another blade for a vCenter backup in another chassis, for example!

    Good luck

    Please rate if useful.

  • Live VMotion with vGPU

    Is live vMotion feasible with Horizon 6.1 and vSphere 6.0, assuming that both hosts have the same graphics card?

    It depends: if you use vSGA, yes you can, but if you choose the vDGA option, unless I missed something, you can't. Take a look here: https://www.vmguru.com/2014/03/vmware-horizon-view-graphics-vsga-vs-vdga/

    And see the View 6.1 documentation: 3D Rendering for Desktops configuration

    Because vDGA and NVIDIA GRID vGPU use PCI pass-through on the ESXi host, live vMotion is not supported with them. vSGA and software 3D rendering do support live vMotion.

  • How do I know if a host is associated with a vCenter Server

    Once my ESXi 5.0 host is managed by / associated with a vCenter Server, there are some commands that can only be performed through vCenter.

    In particular, Set-HardDisk (disk size), Set-ScsiController and Set-VMReservations.

    If my vCenter is down, I would like my script (PowerCLI 5.5 R1) to exit with an error saying that these operations cannot be executed until vCenter is back up, or until the host that contains the virtual machine is no longer associated with vCenter.  I think it's safer to let the user decide how to proceed in that case rather than building it into my script.

    Is there a cmdlet to ask the host whether it is associated with a vCenter Server?  When you use the VI Client to connect to the host, you can see the link in the lower right part of the Summary tab that says "Disassociate host from vCenter Server...".

    The Get-VMHost cmdlet seems like it should have this info, but I was not able to find it.

    Thanks for your suggestions

    Maureen

    Please take a look at 'determine the vCenter Server managing a host' to see if that helps.

    André
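
    If it helps, a small PowerCLI sketch along these lines (the host name and credentials are placeholders; this assumes you connect directly to the host) reads the host's own view of which vCenter currently manages it:

        # Connect straight to the ESXi host and ask which vCenter (if any) manages it
        Connect-VIServer -Server "esx01.example.com" -Credential (Get-Credential)
        $esx = Get-VMHost -Name "esx01.example.com"
        $vcIp = $esx.ExtensionData.Summary.ManagementServerIp
        if ($vcIp) { "Host is managed by vCenter at $vcIp" }
        else       { "Host is not currently associated with a vCenter Server" }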

  • Best servers to run ESXi/vCenter?

    Howdy all,

    I want to ask, based on your personal experience, which Dell servers offer the best overall performance for running ESXi/vCenter? I'm looking to set up an isolated training environment with about 100 VMs running Windows and Linux OSes, and I have to make sure that I have the appropriate resources.

    Do I have to go with a Dell blade server, or would a high-powered Dell PowerEdge server be enough?

    Any information would be appreciated!

    First of all I recommend you consult the VMware HCL: http://www.vmware.com/resources/compatibility/search.php

    I think that tower servers are sufficient, and here are a few Dell tower servers compatible with vSphere ESXi 5.5:

  • vMotion question after applying ESXi 5.0 Update 1

    We have a simple two-node ESXi 5.0 cluster.

    - Both hosts are HP DL360 G7 servers with 6 NICs. We have 3 vSwitches in place, each with two physical NICs assigned:

    - vSwitch0 (vmnic0 & vmnic1) = VM Network and Management Network on 100.200.1.x/24

    - vSwitch1 (vmnic2 & vmnic3) = iSCSI traffic on 192.168.100.x/24
    - vSwitch2 (vmnic4 & vmnic5) = vMotion on 192.168.100.x/24

    Our production network resides on VLAN1, on the 100.200.1.x subnet. VLAN2 (i.e. the 192.168.100.x network) is used for iSCSI/vMotion traffic only. All NICs are connected to a stack of HP ProCurve gigabit switches. Naturally, both VLANs are configured on the stack.

    Since updating both hosts to ESXi 5.0 Update 1 we are no longer able to live-migrate virtual machines between hosts. We can, however, power off a virtual machine and cold-migrate it between hosts. If we disable vMotion on vSwitch2 and enable vMotion on vSwitch0, it works fine. All physical NICs are the same make and model and all use the same bnx2 driver. Other than applying U1, nothing has changed in the storage and network infrastructure. The NICs connect to ports in their relevant VLANs on the same HP ProCurve switches. We logged a call with VMware, but they treated it as a network problem and passed the ball back to us. Has anyone seen anything like this before?

    Any advice or suggestions would be welcome.

    I can't say for sure whether it is related to the network configuration, but I would definitely not run iSCSI and vMotion in the same subnet. Why don't you change the vMotion subnet to 192.168.101.x (unless it is already used for something else) and then try to vmkping the vMotion IP addresses between the two hosts?

    André
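
    In PowerCLI, re-addressing the vMotion VMkernel port onto its own subnet could look roughly like this (the host name, port group name and IP addresses are placeholders; repeat per host and then test with vmkping from the ESXi shell):

        # Example for one host: move the vMotion vmk to a 192.168.101.x address (values are placeholders)
        $esx = Get-VMHost -Name "esx01.example.com"
        Get-VMHostNetworkAdapter -VMHost $esx -VMKernel |
            Where-Object { $_.PortGroupName -eq "vMotion" } |
            Set-VMHostNetworkAdapter -IP 192.168.101.11 -SubnetMask 255.255.255.0 -Confirm:$false

        # Then, from each host's ESXi shell:  vmkping <the other host's vMotion IP>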

  • vMotion with Raw Device Mapping (RDM) issues

    Dears

    vMotion with Raw Device Mappings (RDMs) - is it possible or not?

    I tried sharing the devices and tested the server, but the vMotion did not work.

    Do I need to change the raw device configuration in VMware, or what?

    I presented the devices to all of the clustered ESXi hosts and chose the default values when adding them to the VM that needs access to these devices.

    Please advise.

    Regular vMotion with RDMs, whether in virtual or physical compatibility mode, is possible and fully supported. Storage vMotion is supported too, but the actual data of a physical mode RDM cannot be moved in the process.

    See:

    http://KB.VMware.com/kb/1005241 - Migrating virtual machines with Raw Device Mappings (RDMs)
