Best practices for vMotion

I have been Googling for ages, but is it really impossible to find a "Best Practices" document on configuring vMotion?

Regarding the network configuration there is not much to it - you need Gigabit NIC connectivity, as has already been identified. I usually use vSwitch0 as my management vSwitch and assign 2 NICs to it; this gives active/passive NICs for both the Service Console and vMotion. You must also ensure that port group (network) names are the same across hosts if you use standard vSwitches. Do not try to vMotion between AMD and Intel CPUs, and if you have different generations of CPUs from the same vendor, look at using Enhanced vMotion Compatibility (EVC).
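
For anyone scripting this rather than using the vSphere Client, here is a minimal pyVmomi sketch of the same idea: a vMotion port group on the management vSwitch plus a VMkernel NIC flagged for vMotion. The vCenter name, host name, port group name and IP addressing below are placeholders, not values from this thread.

    # Minimal sketch: add a vMotion port group and VMkernel NIC to vSwitch0 (placeholder values).
    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Locate the ESXi host by name (hypothetical name).
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esx01.example.local")
    net_sys = host.configManager.networkSystem

    # Port group for vMotion on the existing management vSwitch0.
    net_sys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name="vMotion", vlanId=0, vswitchName="vSwitch0",
        policy=vim.host.NetworkPolicy()))

    # VMkernel interface with a static IP on the vMotion subnet (placeholder addressing).
    vmk = net_sys.AddVirtualNic(portgroup="vMotion", nic=vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress="192.168.50.11", subnetMask="255.255.255.0")))

    # Mark that VMkernel interface as the one to use for vMotion on this host.
    host.configManager.vmotionSystem.SelectVnic(vmk)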

Don't forget to award some points for useful/correct messages.

Tags: VMware

Similar Questions

  • Separate management / VMotion Best Practice?

    We're moving from ESX 4.0 to ESXi 4.1.  Our servers have 4 physical Gigabit NICs.

    On ESX 4.0, we run 2 vSwitches:

    vSwitch0

    Service Console - vmnic0 active / vmnic3 standby

    VMkernel - vmnic3 active / vmnic0 standby

    (Dedicated NICs / IPs per function)

    vSwitch1

    VM port groups - vmnic1 and vmnic2 both active

    (Several VLANs for shared resources)

    With the changes in ESXi, is it still recommended to separate management from vMotion as we did with ESX?  Note that we use the same subnet for these two functions.

    Personally, I would prefer combining management and vMotion.  Wouldn't vMotion benefit from the additional NIC, especially with multiple simultaneous vMotions?  At the same time, it does not seem that management traffic would be impeded to the point of needing separation, and since we already use the same subnet for management and vMotion, security should not be an additional concern.

    Your configuration is consistent with "best practices". I prefer to keep management traffic and the vMotion VMkernel separate myself, even if it costs me some vMotion performance.
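
    If you want to script the active/standby layout from the question (Service Console active on vmnic0 with vmnic3 standby, and the vMotion VMkernel reversed), a pyVmomi sketch along these lines should do it. Host and port group names are placeholders, and everything not set in the override policy is left to inherit from the vSwitch.

        # Sketch: per-port-group active/standby NIC order overrides (placeholder names).
        import ssl
        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                          pwd="secret", sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == "esx01.example.local")
        net_sys = host.configManager.networkSystem

        def nic_order(active, standby):
            """Teaming policy that overrides the vSwitch NIC order for one port group."""
            return vim.host.NetworkPolicy(
                nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                        activeNic=active, standbyNic=standby)))

        # Management network: vmnic0 active, vmnic3 standby.
        net_sys.UpdatePortGroup(pgName="Service Console",
            portgrp=vim.host.PortGroup.Specification(
                name="Service Console", vlanId=0, vswitchName="vSwitch0",
                policy=nic_order(["vmnic0"], ["vmnic3"])))

        # vMotion VMkernel: vmnic3 active, vmnic0 standby.
        net_sys.UpdatePortGroup(pgName="VMkernel",
            portgrp=vim.host.PortGroup.Specification(
                name="VMkernel", vlanId=0, vswitchName="vSwitch0",
                policy=nic_order(["vmnic3"], ["vmnic0"])))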

    ---

    MCITP: SA + WILL, VMware vExpert, VCP 3/4

    http://blog.vadmin.ru

  • Best practices for a production vMotion network

    Our team has been looking at the network speed requirements for the vMotion network on an ESX server.  As for the details of our environment, we are looking at a cluster of 8 nodes, with each ESX server running over 25 virtual machines.  Given this information, does a 1 Gb connection for the vMotion network seem sufficient, or should we be looking more closely at provisioning a 10 Gb connection?

    Thank you very much in advance,

    Steve

    The best practice for vMotion is an isolated 1 Gb network - 10 Gb would be a stretch. The network is only used when a vMotion event happens, and if your environment is sized correctly I wouldn't expect vMotion to occur more than a few times per hour. Based on your number of virtual machines per host, I would say you will have a lot of spare capacity.

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or useful.

  • Best practices for the configuration of VMotion

    Hi guys,

    I need to try VMware vMotion. What are the steps to set it up? Could someone help me with this?

    I have tried to do a vMotion, but did not see the expected result.

    I currently have two ESX boxes (ESX 3.5 U3) with VI U4. The primary ESX host has Windows 2003 and Windows XP guests.

    I would also like to know the best practices to follow when configuring vMotion.

    Regards,

    MRM

    http://www.VMware.com/PDF/vi3_35/esx_3/R35/vi3_35_25_3_server_config.PDF

    The link above is a complete PDF on ESX configuration, including the VMkernel setup you need for vMotion.
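
    As a quick sanity check before testing, a small pyVmomi sketch like the one below (vCenter name and credentials are placeholders) can confirm that each host reports vMotion as enabled and has a VMkernel interface selected for it.

        # Sketch: report the vMotion status of every host in the inventory (placeholder vCenter).
        import ssl
        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                          pwd="secret", sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

        for host in view.view:
            net_cfg = host.configManager.vmotionSystem.netConfig
            selected = net_cfg.selectedVnic if net_cfg else None
            print("%-30s vMotion enabled: %-5s selected vmknic key: %s"
                  % (host.name, host.summary.config.vmotionEnabled, selected))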

    Duncan

    VMware communities user moderator

    -

  • Connecting a Dell MD3620i to VMware - best practices

    Hello community,

    I bought a Dell MD3620i with 2 x 10GBase-T Ethernet ports on each controller (2 x controllers).
    My VMware environment consists of 2 x ESXi hosts (each with 2 x 1GBase-T ports) and an HP LeftHand array (also 1GBase-T) for storage. The switches I have are Cisco 3750s, which only have 1GBase-T Ethernet ports.
    I'll be replacing the HP storage with the Dell storage.
    As I have never worked with Dell storage, I need your help in answering my questions:

    1. What is the best practice for connecting the VMware hosts to the Dell MD3620i?
    2. What is the process for creating a LUN?
    3. Can I create multiple LUNs on a single disk group, or is it best practice to create one LUN per disk group?
    4. How do I configure the 10GBase-T iSCSI ports to work with the 1 Gbit/s switch ports?
    5. Is it best practice to connect the Dell MD3620i directly to the VMware hosts without a switch?
    6. The old iSCSI on the HP storage is on another network. Can I use vMotion to move all the VMs from one iSCSI network to the other and then change the iSCSI IP addresses on the VMware hosts without interrupting the virtual machines?
    7. Can I combine the two iSCSI ports into a 2 Gbps interface to connect to the switch? I use two switches, so I want to connect each controller to each switch, limiting their interfaces to 2 Gbps. My question is, would the array fail over to the other controller if the Ethernet link is lost on the switch (in the case of a simple switch reboot, for example)?

    Thanks in advance!

    TCP/IP basics: a computer cannot connect to 2 different (isolated) networks (e.g. 2 directly attached cables between the server and the SAN's iSCSI ports) that share the same subnet.

    Data corruption is not likely if you share the same VLAN for all iSCSI traffic; however, performance and overall reliability would be affected.

    With an MD3620i, here are some configuration scenarios using the factory default subnets (for DAS configurations I have added 4 additional subnets):

    Single switch (not recommended because the switch becomes your single point of failure):

    Controller 0:

    iSCSI port 0: 192.168.130.101

    iSCSI port 1: 192.168.131.101

    iSCSI port 2: 192.168.132.101

    iSCSI port 4: 192.168.133.101

    Controller 1:

    iSCSI port 0: 192.168.130.102

    iSCSI port 1: 192.168.131.102

    iSCSI port 2: 192.168.132.102

    iSCSI port 4: 192.168.133.102

    Server 1:

    iSCSI NIC 0: 192.168.130.110

    iSCSI NIC 1: 192.168.131.110

    iSCSI NIC 2: 192.168.132.110

    iSCSI NIC 3: 192.168.133.110

    Server 2:

    All ports plug into the single switch (obviously).

    If you only want to use 2 NICs for iSCSI, have server 1 use subnets 130 and 131, server 2 use 132 and 133, and server 3 then uses 130 and 131 again. This distributes the I/O load across the iSCSI ports on the SAN.

    Two switches (one VLAN for all iSCSI ports on each switch is fine):

    NOTE: Do NOT link the switches together. This ensures that problems occurring on one switch do not affect the other switch.

    Controller 0:

    iSCSI port 0: 192.168.130.101 -> switch 1

    iSCSI port 1: 192.168.131.101 -> switch 2

    iSCSI port 2: 192.168.132.101 -> switch 1

    iSCSI port 4: 192.168.133.101 -> switch 2

    Controller 1:

    iSCSI port 0: 192.168.130.102 -> switch 1

    iSCSI port 1: 192.168.131.102 -> switch 2

    iSCSI port 2: 192.168.132.102 -> switch 1

    iSCSI port 4: 192.168.133.102 -> switch 2

    Server 1:

    iSCSI NIC 0: 192.168.130.110 -> switch 1

    iSCSI NIC 1: 192.168.131.110 -> switch 2

    iSCSI NIC 2: 192.168.132.110 -> switch 1

    iSCSI NIC 3: 192.168.133.110 -> switch 2

    Server 2:

    Same note as above about using only 2 NICs per server for iSCSI. In this configuration each server always uses both switches, so a switch failure should not take down your server's iSCSI connectivity.

    Four switches (or 2 VLANs on each of the 2 switches above):

    Controller 0:

    iSCSI port 0: 192.168.130.101 -> switch 1

    iSCSI port 1: 192.168.131.101 -> switch 2

    iSCSI port 2: 192.168.132.101 -> switch 3

    iSCSI port 4: 192.168.133.101 -> switch 4

    Controller 1:

    iSCSI port 0: 192.168.130.102 -> switch 1

    iSCSI port 1: 192.168.131.102 -> switch 2

    iSCSI port 2: 192.168.132.102 -> switch 3

    iSCSI port 4: 192.168.133.102 -> switch 4

    Server 1:

    iSCSI NIC 0: 192.168.130.110 -> switch 1

    iSCSI NIC 1: 192.168.131.110 -> switch 2

    iSCSI NIC 2: 192.168.132.110 -> switch 3

    iSCSI NIC 3: 192.168.133.110 -> switch 4

    Server 2:

    In this case, when using 2 NICs per server, the first server uses the first 2 switches and the second server uses the second pair of switches.

    Direct attach:

    Controller 0:

    iSCSI port 0: 192.168.130.101 -> server iSCSI NIC 1 (e.g. IP 192.168.130.110)

    iSCSI port 1: 192.168.131.101 -> server iSCSI NIC 2 (e.g. IP 192.168.131.110)

    iSCSI port 2: 192.168.132.101 -> server iSCSI NIC 3 (e.g. IP 192.168.132.110)

    iSCSI port 4: 192.168.133.101 -> server iSCSI NIC 4 (e.g. IP 192.168.133.110)

    Controller 1:

    iSCSI port 0: 192.168.134.102 -> server iSCSI NIC 5 (e.g. IP 192.168.134.110)

    iSCSI port 1: 192.168.135.102 -> server iSCSI NIC 6 (e.g. IP 192.168.135.110)

    iSCSI port 2: 192.168.136.102 -> server iSCSI NIC 7 (e.g. IP 192.168.136.110)

    iSCSI port 4: 192.168.137.102 -> server iSCSI NIC 8 (e.g. IP 192.168.137.110)

    I have just left controller 1 on the '102' IPs, using 4 additional subnets, to make future changes easier.
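
    If you later configure the ESXi software iSCSI initiator against this array, each VMkernel port (one per iSCSI subnet, each with a single active uplink and no standbys) can be bound to the software iSCSI adapter so that all paths get used. A minimal pyVmomi sketch, with the host name and vmknic names as placeholders:

        # Sketch: bind two VMkernel NICs to the software iSCSI adapter (placeholder names).
        import ssl
        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                          pwd="secret", sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == "esx01.example.local")

        # Find the software iSCSI HBA (its device name is usually something like vmhba33).
        storage = host.configManager.storageSystem
        sw_iscsi = next(hba.device for hba in storage.storageDeviceInfo.hostBusAdapter
                        if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased)

        iscsi_mgr = host.configManager.iscsiManager
        for vmk in ("vmk1", "vmk2"):          # one vmknic per iSCSI subnet/switch
            iscsi_mgr.BindVnic(iScsiHbaName=sw_iscsi, vnicDevice=vmk)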

  • Best practices to move a volume

    Hello

    What is the best practice for moving a volume from a PS4100 to a PS6100?

    The two PS arrays are not in the same group.

    Today we have a PS5000 which replicates to the PS4100. The administrator created 2 volumes on the PS4100 (because there is not enough space on the PS5000).

    We will replace the PS5000 with a new PS6100XV, and we want to move the volumes from the PS4100 to the new PS6100.

    What is the easiest way to do this with minimal downtime for these volumes?

    I think it is best to create replicas of the 2 volumes from the PS4100 to the new PS6100, take the volumes on the PS4100 offline and bring the volumes on the PS6100 online. Is that right?

    Thank you

    You can replicate as you have described.  Depending on the host operating system, it may be easier to do this at the host level.   With VMware ESX, you can move live VMs using Storage vMotion.

  • Best practices for restarting ISE nodes?

    Hello community,

    I administer an ISE installation with two nodes. (I'm not an ISE specialist; my job is simply to manage users/MAC addresses.) But now I have to move my ISE nodes from one VMware cluster to another VMware cluster.

    (Both VMware environments are connected to our company network, but they are separate environments. vMotion is not possible.)

    I want to stop ISE02, move it to our new VMware environment and start it again.

    Then I would do the same with our ISE01 node...

    Are there best practices for this (issue a stop request first, stop replication, etc.)?

    Can I really just reboot an ISE node, or do I have to consider something before I do this? Or after I have done it?

    Are there any tasks to perform after the reboot?

    Thanks for any answer!

    ISE01
    Administration, Monitoring, Policy Service
    PRI (A), SEC (M)

    ISE02
    Administration, Monitoring, Policy Service
    SEC (A), PRI (M)

    There is a lot to consider here.  If changing environments involves a change of IP addressing and IP scope, then your policies, profiles and dACLs would also change, among other things.  If this is the case, create a new ISE VM in the new environment on an evaluation license and recreate the old environment's deployment using the new environment's addressing scheme.  Then spin up a new secondary node and register it to the primary.  Once this is done, you can re-host the licenses from your old environment onto your new environment.  You can use this tool to re-host:

    https://Tools.Cisco.com/swift/LicensingUI/loadDemoLicensee?formid=3999

    If IP addressing is to stay the same, it becomes simpler.

    First and always, perform an operational and configuration backup.

    If downtime is not a problem, or if you have a maintenance window of an hour or so: just shut down both nodes, transfer them to the new environment and power them on, primary node first, of course.

    If downtime is a problem, shut down the secondary node and transfer it to the new environment.  Start the secondary node and, when it comes back up, shut down the primary node.  Once services have stopped on the primary node, promote the secondary node to primary.

    Transfer the FORMER primary node to the new environment and power it on.  It should take on the role of secondary node.  If it does not, assign that role through the GUI.

    Remember, the proper way to shut down an ISE node is:

    application stop ise

    halt

    By using these commands, the risk of database corruption decreases by 90% (and remember to always take a backup).

    Please rate useful messages and mark this question as answered if this does, in fact, answer your question.  Otherwise, feel free to post additional questions.

    Charles Moreton

  • Best practices for network configuration of vSphere with two subnets?

    So, I will be setting up 3 ESXi hosts connected to shared storage, using two different subnets. I configured the iSCSI initiators and the iSCSI targets with their own default gateway - 192.168.1.1 - through a Cisco router, and did the same with the hosts, which are configured with their own default gateway - 192.168.2.2. I don't know whether I should have a router in the middle to route traffic between the two subnets, since I use iSCSI port binding and NIC teaming. If I shouldn't use a physical router, how do I route traffic between the different subnets and use iSCSI port binding at the same time? What are the best practices for implementing a vSphere network with two subnets (ESX host network and iSCSI network)? Thank you in advance.

    The most common iSCSI setup is for traffic between the hosts and the storage not to be routed, because a router could reduce performance.

    If you have VLAN 10 (192.168.1.0/24) for iSCSI, VLAN 20 (192.168.2.0/24) for ESX management, VLAN 30 (192.168.3.0/24) for guest VMs and VLAN 40 (192.168.4.0/24) for vMotion, a deployment scenario might be something like:

    NIC1 - vSwitch 0 - MGMT VMK (192.168.2.10) active, vMotion VMK (192.168.4.10) standby

    NIC2 - vSwitch 1 - guest VM port group (VLAN 30) active

    NIC3 - vSwitch 2 - iSCSI VMK1 (192.168.1.10) active

    NIC4 - vSwitch 2 - iSCSI VMK2 (192.168.1.11) active

    NIC5 - vSwitch 1 - guest VM port group (VLAN 30) active

    NIC6 - vSwitch 0 - MGMT VMK (192.168.2.10) standby, vMotion VMK (192.168.4.10) active

    You would place your storage target on VLAN 10 with an IP address of something like 192.168.1.8, and iSCSI traffic would remain on that VLAN. The default gateway configured in ESXi would be the router on VLAN 20, with an IP address of something like 192.168.2.1. I hope that scenario helps lay out some options.
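
    For completeness, that last point - the ESXi default gateway pointing at the VLAN 20 router - can be set programmatically as well. A small pyVmomi sketch with a placeholder host name:

        # Sketch: set the host's default gateway to the management-VLAN router (placeholder host).
        import ssl
        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                          pwd="secret", sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == "esx01.example.local")

        host.configManager.networkSystem.UpdateIpRouteConfig(
            config=vim.host.IpRouteConfig(defaultGateway="192.168.2.1"))

    The iSCSI VMkernel ports stay on the 192.168.1.0/24 subnet, so their traffic never touches that gateway.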


  • Design by using NetApp's best practices

    I am preparing for my VCP5 and I am reading the new book by Scott Lowe. The book describes how traffic should be isolated: your vMotion, VMkernel, etc. But in many organizations I see NetApp arrays with some LUNs for datastores and a few LUNs for CIFS shares. I guess you can have your vMotion on a separate VLAN, but wouldn't it be safer to just configure a Windows file server VM to host your files? In the FreeNAS and Openfiler forums, they stress not to run their software in virtual machines in a production environment.   Wouldn't physical separation be better than just a VLAN? I was thinking, and correct me if I'm wrong, that hosting the CIFS shares in a virtual machine would keep the SAN, vMotion and VMkernel traffic more reliable if you have redundant switches on both sides of the VMware hosts. So if your core switches drop, your VMware environment will not drop.

    > traffic must be isolated.

    Yes, the network traffic should be split across separate networks for various reasons, including performance and security.

    > NetApps with a few LUNs for datastores and a few LUNs for CIFS shares.

    Yes, with a NetApp filer you can serve block-level storage such as FCP or iSCSI, and file-level storage such as CIFS or NFS.

    > I guess you can have your vMotion on a separate VLAN, but wouldn't it be safer to just configure a Windows file server VM to host your files?

    OK, you lost me.  Yes, you should separate vMotion traffic, both for performance and because vMotion traffic is not encrypted.

    I don't see where you're going with vMotion to a Windows file server, though.

    However, if what you are asking is why you should use the NetApp for CIFS instead of a Windows file server:

    You don't need to patch and reboot the NetApp at least once a month.

    Performance is better

    You don't need to buy a Windows license and then maintain Windows

    Snapshots.  NetApp has the best snapshots in the business.  When your Windows box is under heavy I/O, or just decides that because it's Tuesday it will remove all of your VSS snapshots, you will really wish you had a NetApp.

    > In the FreeNAS and Openfiler forums, they stress not to run their software in virtual machines in a production environment.

    Note that there are a ton of storage appliances out there running as VMs and serving NFS for shared storage, including LeftHand, and they have been stable for years.

    > Wouldn't physical separation be better than just a VLAN?

    Yes, if you have the infrastructure.  By the way, this is the first time I've seen you mention a VLAN.  Are you talking about a NetApp such as the 2020 series with two network cards, where you need to carry all traffic (management, CIFS and iSCSI) through them via VLANs?

    Like this: http://sostechblog.com/2012/01/08/netapp-fas2xxx-fas3xxx-2-nic-ethernet-scheme/

    > I was thinking, and correct me if I'm wrong, that hosting the CIFS shares in a virtual machine would keep the SAN, vMotion and VMkernel traffic more reliable

    CIFS has nothing to do with the SAN, vMotion or the VMkernel.  CIFS (SMB) is the protocol used mainly for Windows file sharing.

    > If you have redundant switches on both sides of the VMware hosts, then if your core switches drop, your VMware environment will not drop.

    You always want redundant switches.  No single point of failure is the best practice.

  • Looking for a best practices guide: complete cluster replacement

    Hello

    I have been put in charge of the complete replacement of our current ESXi 5.0 U2 hardware environment with a new cluster of servers running 5.1.

    Here are the basics:

    Currently: an HP blade server chassis with 6 hypervisors running ESXi 5.0 U2, Enterprise licensing, around 100 or so VMs running different operating systems - mainly MS 2003 R2 to 2008 R2 - with the data stored on a SAN connected over 1 Gb Ethernet.

    Planned: 7 standalone servers running as a cluster on ESXi 5.1, Enterprise licensing, with the SAN connections upgraded to 10 Gb Ethernet or fibre.  The virtual machines range in importance from 'can be restarted after hours' to 'should not be restarted, or the service interruption will cost us money'.  (Looking at live-migration options if possible, although I have my doubts it will be an option given the cluster plans.)

    I'm looking for a best practices guide (or a combination of guides) which will help me determine how best to plan the VM migration - especially in light of the fact that the new cluster will not be part of the existing one.  Also given that we are unable to upgrade the existing chassis to 5.1 beforehand (due to problems with the chassis firmware)...

    Any pointers in the right direction would be great - not looking for a handout, just signposts.

    See you soon.

    Welcome to the community - vCenter 5.1 can manage ESXi 5.0 hosts, so just add the 5.0 hosts and the new 5.1 hosts to the same vCenter and vMotion the VMs to the new hosts one at a time - unless the two environments see the same SAN, Storage vMotion will also be necessary.
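
    Once both sets of hosts are in the same vCenter, the per-VM moves can be scripted if there are many of them. A minimal pyVmomi sketch of a compute-only vMotion; the VM and host names are placeholders:

        # Sketch: live-migrate one VM to a new host (placeholder names).
        import ssl
        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                          pwd="secret", sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()

        def find(vimtype, name):
            """Return the first inventory object of the given type with the given name."""
            view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
            return next(o for o in view.view if o.name == name)

        vm = find(vim.VirtualMachine, "app-server-01")              # hypothetical VM
        new_host = find(vim.HostSystem, "esx51-01.example.local")   # hypothetical 5.1 host

        task = vm.MigrateVM_Task(host=new_host,
                                 priority=vim.VirtualMachine.MovePriority.defaultPriority)
        print("Migration task started:", task.info.key)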

  • vSphere 5 networking best practices for the use of 4 x 1 Gb NICs?

    Hello

    I'm looking for networking best practices for using 4 x 1 Gb NICs with vSphere 5. I know there are plenty of good practices for 10 Gb, but our current config only supports 1 Gb. I need to include management, vMotion, virtual machine (VM) and iSCSI traffic. If there are others you would recommend, please let me know.

    I found a diagram that resembles what I need, but it's for 10 GB. I think it works...

    vSphere 5 - 10GbE SegmentedNetworks Ent Design v0_4.jpg (I got this diagram HERE - credit goes to Paul Kelly)

    My next question is how much of a traffic load does each type of traffic put on the network, percentage-wise?

    For example, 'Management' traffic is very small, and the only time it is really used is during agent installation, when it might use 70%.

    I need the percentage of bandwidth, if possible.

    If anyone out there can help me, that would be so awesome.

    Thank you!

    -Erich

    Without knowing your environment, it would be impossible to give you an idea of the bandwidth usage.

    That said if you had about 10-15 virtual machines per host with this configuration, you should be fine.

    Sent from my iPhone

  • Best practices for moving 1 of 2 VMDKs to a different datastore

    I have several virtual machines that write a good amount of data on a daily basis.  These virtual machines have two VMDKs: one where the operating system lives and one where the data is written.  The virtual machines are currently configured to keep both in the same datastore.  There is a growing need to increase the size of the VMDK where the data is stored, so I would like to put those data disks in a separate datastore.  What is the best practice for taking an existing virtual machine and moving just one VMDK to another datastore?

    If you want to split the VMDKs (hard disks) across separate datastores, just use Storage vMotion and the "Advanced" option.
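
    For reference, the same "Advanced" behaviour can be scripted: a RelocateSpec that lists only the data disk moves just that VMDK and leaves the OS disk and the VM's home folder where they are. A minimal pyVmomi sketch with placeholder VM, datastore and disk names:

        # Sketch: Storage vMotion a single VMDK to another datastore (placeholder names).
        import ssl
        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                          pwd="secret", sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()

        def find(vimtype, name):
            view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
            return next(o for o in view.view if o.name == name)

        vm = find(vim.VirtualMachine, "db-server-01")        # hypothetical VM
        target_ds = find(vim.Datastore, "datastore-data")    # hypothetical destination datastore

        # Pick the data disk; here it is simply the device labelled "Hard disk 2".
        data_disk = next(dev for dev in vm.config.hardware.device
                         if isinstance(dev, vim.vm.device.VirtualDisk)
                         and dev.deviceInfo.label == "Hard disk 2")

        spec = vim.vm.RelocateSpec(
            disk=[vim.vm.RelocateSpec.DiskLocator(diskId=data_disk.key, datastore=target_ds)])
        task = vm.RelocateVM_Task(spec=spec)
        print("Storage vMotion task:", task.info.key)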

  • Best practices for vSphere 5 Networking

    Hi all

    Given the following environment:

    (1) 4 physical servers, each with 16 (Gigabit) network interface cards, which will run vSphere 5 Standard.

    (2) 2 stackable switches for the SAN storage

    (3) 2 EqualLogic PS4000 SANs (two controllers each)

    (4) 2 switches for virtual machine traffic

    As for networking, I intend to create vSwitches on each physical server as follows:

    1 vSwitch0 - used for iSCSI storage

    6 network adapters teamed with the IP-hash policy, multipathing to the iSCSI storage, and storage load balancing set to Round Robin (VMware).

    (VMware suggests using 2 NICs per IP storage target; I'm not sure.)

    2 vSwitch1 - used for virtual machines

    6 network adapters teamed for virtual machine traffic, with the IP-hash policy

    3 vSwitch2 - management

    2 teamed network adapters

    4 vSwitch3 - vMotion

    2 teamed network adapters

    Would you like to give me some suggestions?

    Alex, the standard set out by the storage vendor and VMware is what Dell uses for their servers; it has been tested on their equipment and published in the document, so it is what is recommended...

    That is the best practice to use for Dell servers with the model mentioned in the document.

    Hope that clarifies...
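
    Since the plan above mentions Round Robin storage load balancing, here is a minimal pyVmomi sketch of setting the VMW_PSP_RR path selection policy on a host's EqualLogic LUNs. The host name and the EQLOGIC vendor-string filter are assumptions to adjust for your environment:

        # Sketch: set Round Robin multipathing on EqualLogic LUNs (placeholder host, vendor filter assumed).
        import ssl
        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                          pwd="secret", sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == "esx01.example.local")

        storage = host.configManager.storageSystem
        rr = vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR")

        luns_by_key = {l.key: l for l in storage.storageDeviceInfo.scsiLun}
        for mp_lun in storage.storageDeviceInfo.multipathInfo.lun:
            scsi_lun = luns_by_key.get(mp_lun.lun)
            if scsi_lun is not None and scsi_lun.vendor.strip() == "EQLOGIC":
                storage.SetMultipathLunPolicy(lunId=mp_lun.id, policy=rr)
                print("Set VMW_PSP_RR on", scsi_lun.canonicalName)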

  • ESXi 4 DRS best practices

    DRS allows hosts to reach very high processor usage before moving virtual machines around. The issue is that a customer manager has read-only access to vCenter. He sees that some ESX hosts are constantly reaching CPU usage of up to 98%, with a warning on the host, and he has been asking about this.

    I'm not aware of any harmful effects on the production environment, but it is quite noticeable when viewing the cluster on the Hosts tab. Sometimes four of the sixteen hosts show a warning during production hours. The attached picture shows a typical morning, with some hosts at 98% and others at 50%.

    The cluster has failover capacity for 11 of the 16 hosts, but we have very busy periods.

    Is there anything that can be done to configure DRS differently? Could we use affinity rules to keep busy VMs on different hosts?

    Among the DRS best practices:

    (1) When deciding which hosts to group into a DRS cluster, try to choose hosts that are as homogeneous as possible in processor and memory. This ensures higher stability and predictability of performance. vMotion is not supported across hosts with incompatible processors, so with heterogeneous systems that have incompatible processors, DRS is limited in the number of opportunities it has to improve the load balance across the cluster.

    (2) When multiple ESX hosts in a DRS cluster are vMotion compatible, DRS has more choices available to better balance the load across the cluster.

    (3) Do not specify affinity rules unless you have a specific need to do so. In some cases, however, specifying affinity (or anti-affinity) rules can improve performance; see the sketch after this list.

    (4) Allocate resources to virtual machines and resource pools with care. Be aware of the impact of limits, reservations and virtual machine memory overcommitment.

    (5) Virtual machines with smaller memory sizes or fewer virtual CPUs give DRS more opportunities to migrate them to improve balance across the cluster. Virtual machines with larger memory or more virtual CPUs add more constraints on migration. Therefore, configure only as many virtual CPUs and as much memory for a virtual machine as it actually needs.
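
    On the anti-affinity question above, here is a minimal pyVmomi sketch of adding a DRS anti-affinity rule that keeps two busy VMs on different hosts; the cluster and VM names are placeholders:

        # Sketch: add a DRS anti-affinity rule for two busy VMs (placeholder names).
        import ssl
        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                          pwd="secret", sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()

        def find(vimtype, name):
            view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
            return next(o for o in view.view if o.name == name)

        cluster = find(vim.ClusterComputeResource, "ProdCluster")
        busy_vms = [find(vim.VirtualMachine, n) for n in ("sql-01", "sql-02")]

        rule = vim.cluster.AntiAffinityRuleSpec(name="separate-busy-sql", enabled=True, vm=busy_vms)
        spec = vim.cluster.ConfigSpecEx(
            rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
        task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
        print("Rule task:", task.info.key)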

  • ESXi best practices

    I'm trying to find out whether I am implementing best practices correctly on ESXi 4.1 U1. Our installation has two on-board NICs and a quad-port NIC to use for the data and management networks. We have fibre for the storage side. If I understand the latest practices, I should trunk all 6 ports and then use port groups to control which VLAN traffic goes where.

    My question is: doesn't this put excessive load on the network cards used for vMotion/VMkernel traffic? If I put the switchport into one specific VLAN, wouldn't the datagrams get inspected and dropped by the switch ASICs, rather than the ESXi host having to deal with them? I don't know whether the traffic would have a significant impact - if so, so be it - but I wanted to check before implementing it...

    Thank you
    Froggy

    I guess you are concerned about overhead on the ESXi host, because it has to deal with the VLAN tags.

    Don't worry about this. The overhead is minimal, and modern NICs also have hardware offload functions for VLAN handling that ESX will use in many cases.

    However, remember not to bond the uplinks together on the physical switch! Just create one virtual switch and add all the physical uplinks to it. ESXi will then use them all according to the active/standby/unused settings that you can configure per virtual port group.
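
    A minimal pyVmomi sketch of that layout - one standard vSwitch carrying all the trunked uplinks, with VLAN-tagged port groups on top. The host name, uplink names and VLAN IDs are placeholders:

        # Sketch: one vSwitch with all uplinks plus VLAN-tagged port groups (placeholder values).
        import ssl
        from pyVim.connect import SmartConnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                          pwd="secret", sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == "esx01.example.local")
        net_sys = host.configManager.networkSystem

        # One vSwitch bonded to all six trunked uplinks.
        uplinks = ["vmnic0", "vmnic1", "vmnic2", "vmnic3", "vmnic4", "vmnic5"]
        net_sys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vim.host.VirtualSwitch.Specification(
            numPorts=128, bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=uplinks)))

        # VLAN-tagged port groups (Virtual Switch Tagging); active/standby NIC order can be
        # overridden per port group afterwards.
        for pg_name, vlan in (("Management", 10), ("vMotion", 20), ("VM Network", 30)):
            net_sys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
                name=pg_name, vlanId=vlan, vswitchName="vSwitch1",
                policy=vim.host.NetworkPolicy()))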

    Andreas

    - VMware Front Experience blog
