vMotion VLAN on HP 2920


Hello. This may be a general networking question or an HP-specific one, but I thought I'd try everything here first.

I am trying to configure a vMotion VLAN on an HP 2920 switch in accordance with the VMware KB article 'Multiple-NIC vMotion in vSphere'.

I have the ports on the HP switch defined as untagged members of VLAN 50, with no link aggregation, in accordance with the article.

I have a vSwitch with 2 NICs dedicated to vMotion; the management and VM traffic port groups are on another vSwitch.

Everything works perfectly if I set the VLAN on the 2 vMotion vmkernel port groups to None (0). If I set VLAN 50, it stops working.

Is this expected? On my other vSwitch I have several VM port groups with different VLANs, but those connect to ports on the HP switch where the VLANs are tagged.

Much appreciated,

Scott

Everything works perfectly if I set the VLAN on the 2 vMotion vmkernel port groups to None (0). If I set VLAN 50, it stops working.

That's how it is supposed to work with the current configuration of the switch port. "Untagged" means that the physical switch port expects untagged packets and adds/removes the VLAN ID itself, so the vmkernel port group has to stay at None (0).
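
For reference, a minimal sketch of the two ways the HP side can be set up (ProCurve syntax; port numbers are hypothetical, and the ";" lines are annotations rather than CLI input):

vlan 50
   name "vMotion"
   ; untagged: the switch adds/strips the tag itself,
   ; so the vmkernel port group VLAN must stay at None (0)
   untagged 1-2
   exit

vlan 50
   name "vMotion"
   ; tagged: frames carry the 802.1Q tag on the wire,
   ; so the vmkernel port group VLAN must be set to 50
   tagged 1-2
   exit

With your current untagged ports, either leave the vmkernel VLAN at 0, or change the ports to tagged members of VLAN 50 and keep VLAN 50 on the port groups.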

André

Tags: VMware

Similar Questions

  • vMotion VLAN\Distributed vSwitch Networking question

    I am facing a strange problem and was hoping someone might be able to enlighten me. I have a vCenter cluster that contains 5 hosts. Four of the hosts use standard vSwitches, and one host uses a distributed vSwitch. The four hosts with standard vSwitches have a NIC dedicated to the vMotion network. The one host with the distributed vSwitch has two NICs in a team. I followed the instructions listed here to set up the configuration: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007467

    Three of the standard vSwitch hosts are able to ping the vMotion IP address of the host with the distributed vSwitch, while one of the standard vSwitch hosts is unable to ping it.

    If it were a problem with the distributed vSwitch or with the EtherChannel configured on the Cisco switch for the NIC team, I would expect none of the other hosts to be able to ping that IP address, but they can. I also confirmed that the vMotion VLAN is configured on all the Cisco switches and is trunked to all the hosts.

    I am at a loss to explain why one host cannot communicate with the distributed vSwitch host on the vMotion VLAN. Any help would be appreciated.

    Thank you.

    The problem I was experiencing was due to an incorrect configuration. After talking with a VMware technician, he explained that you should not use an EtherChannel configuration on the switch ports that connect to the hosts' vMotion NICs. The reason relates to the load balancing policy associated with the NIC team.

    For an EtherChannel configuration to work, the load balancing policy of the NIC team must be set to 'Route based on IP hash'. You cannot use that policy with a vMotion NIC team. The correct load balancing policy for a vMotion NIC team is 'Route based on the originating virtual port ID', because one NIC is active while the other is standby.

    After removing the EtherChannel configuration from the switch and redoing my setup according to the article referenced above, the hosts communicate properly on the vMotion network. I asked the VMware technician whether there was any indication as to why three hosts had been working while one was not. He did not have an answer and was surprised that the other three hosts worked with my initial configuration.
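
    For a standard vSwitch, a minimal sketch of checking and correcting that policy from the ESXi shell (the vSwitch name is hypothetical):

    # Show the current teaming/failover policy, including load balancing
    esxcli network vswitch standard policy failover get -v vSwitch0

    # 'iphash' is only valid with an EtherChannel; set 'portid'
    # ('Route based on the originating virtual port ID') instead
    esxcli network vswitch standard policy failover set -v vSwitch0 -l portid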

  • System vlan on Nexus 1000v

    Hi all

    I understand that a system VLAN allows traffic to keep flowing on that VLAN when the VSM is not accessible, and that system VLANs should NOT be used for normal virtual machine traffic. In my deployment of a standard vSphere environment with the N1kv, I plan to mark these VLANs as system VLANs: ESXi management, N1kv management, control & packet, vMotion, and IP storage.

    I set these VLANs as system VLANs both on the uplink port profiles and on the individual port profiles for each VLAN. Correct me if that's wrong.

    Which VLANs should be system VLANs, and which shouldn't? The vMotion VLAN? What are the disadvantages of specifying all VLANs as system VLANs? Isn't that better, since even if the VSM goes down for some reason, the VEM will still forward traffic for all virtual machines?

    Thank you

    Ming

    Ming,

    Your understanding of system VLANs is not totally accurate.  All VLANs will keep forwarding when your VSM is not accessible.  Each VEM module will continue to pass both system and non-system VLAN traffic if the VSM is offline.  Each VEM keeps its current programming, but will not accept any changes until the VSM is back online.  System VLANs behave differently in that they are always in a forwarding state: they will pass traffic even before a VEM has been programmed by the VSM.  That is why certain profiles require them - i.e. control/packet, etc.  Those VLANs must be forwarding in order for the VEM to talk to the VSM.

    As for your list of what should be a system VLAN: remove vMotion.  There is no reason for your vMotion network to be defined as a system VLAN.  All the others are correct.

    Also remember that you can only define a given VLAN as a system VLAN on ONE uplink port profile.  So if you use one uplink for 'system' type traffic and the other for 'VM data' traffic, any single VLAN should be 'allowed' on only one of the uplinks - not both.  Allowing it on both will cause problems.  The one caveat to keep in mind is that for a system VLAN to apply, it must be defined on both the vEthernet port profile and an uplink port profile.

    E.g.

    Let's say my Service Console uses VLAN 10 and my VMs also use VLAN 10 for their data traffic.  (Bad design, but it illustrates the point.)

    Setting the system VLAN in 'two places' like this allows you to treat ONLY your Service Console traffic as system traffic, while the usual security programming still applies to your 'VM data' traffic.  After a reboot, your Service Console traffic would be forwarded immediately, but your VM data would not be until the VEM had pulled its programming from the VSM.

    port-profile type vethernet dvs_ServiceConsole
      vmware port-group
      switchport mode access
      switchport access vlan 10
      no shutdown
      system vlan 10    <== defined as a system VLAN
      state enabled

    port-profile type vethernet dvs_VM_Data_VLAN10
      vmware port-group
      switchport mode access
      switchport access vlan 10    <== no system vlan
      no shutdown
      state enabled

    port-profile type ethernet system-uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10, 3001-3002
      channel-group auto mode active
      no shutdown
      system vlan 10, 3001-3002    <== system vlan 10
      state enabled

    Hope this clears up your understanding.

    Kind regards

    Robert

  • vMotion works only in a dedicated vSwitch, not one shared with management

    Okay, I am reconfiguring the network on all 3 of my hosts. I had vMotion (via VLAN 50) on a dedicated vSwitch using uplink vmnic0, and vMotion between hosts worked well. It looked like this...

    1.jpg

    I want to move vMotion to a vSwitch shared with the management network. So I removed vmnic0 from vSwitch2 and added it to vSwitch4. Then I created a new port group for vMotion with a new IP address. See below.

    2.jpg

    In the vSwitch4 properties, the management network has vmnic7 as active and vmnic0 as standby. vMotion is set the opposite way. The vSwitch is configured to load balance on the originating virtual port ID.

    vMotion does not work between hosts; it gets stuck at 14%. I get a timeout between 10.10.50.70 and 10.10.50.71 (another host).

    I believe my physical switch is configured correctly: I have two ports accepting traffic on both the management and vMotion VLANs. If I swap the vmnic assignment around (so that management uses vmnic0 as active and vmnic7 as standby), I can still ping the management IP. Also, if I move vmnic0 back to vSwitch2, vMotion starts working again, so I guess I'm missing something on the ESXi side.

    What am I missing?

    Thanks in advance

    OK, problem solved. An engineer took over my screen in a WebEx session and started troubleshooting.

    First, he ran vmkping from the new vMotion IP address to the other host, which failed. Then he ran 'esxcfg-vmknic -l' and noticed that the old vMotion port group was still enabled. In the GUI he disabled it by going into the vSwitch2 properties > vMotion port group and unchecking the vMotion option. He ran vmkping again, but it still failed. Back in the GUI, he removed vSwitch2 completely. He ran vmkping and it worked.

    He assumed the problem was that vMotion was trying to communicate through the old vSwitch, despite me removing its adapters and him disabling vMotion on that vSwitch's port group. There was no point in me keeping the old vSwitches, except as a fallback in case vMotion failed. It seems I should have been braver and simply removed them when I created the new vSwitch.
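
    For reference, a minimal sketch of the checks described above from the ESXi shell (the vmk number is hypothetical, and the -I option of vmkping requires ESXi 5.1 or later):

    # List all vmkernel interfaces, their port groups and IP addresses
    esxcfg-vmknic -l

    # Ping the other host's vMotion address, forcing the outgoing
    # vmkernel interface so a stale port group cannot answer instead
    vmkping -I vmk1 10.10.50.71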

  • Can you change which vmkernel port handles vMotion without going into maintenance mode?

    If you have a running host with a standard vSwitch, can you change which vmkernel port handles vMotion without putting the host in maintenance mode?

    The situation: we have a number of blades, all running vSphere 5.1.  Each host has a single vSwitch with multiple port groups, including a dedicated vmkernel port for vMotion and another dedicated one for the management network.  We use a VLAN to separate the vMotion network.

    Our networking group will be doing a reconfiguration that changes the VLAN dedicated to vMotion.  What I would like to do is enable vMotion on the management vmkernel port of each host and clear the check box on the 'vmotion' port group. Once the new vMotion VLAN has been set up by the network team, I would go back to each blade, re-enable vMotion on the 'vmotion' port group, and remove the vMotion function from the management port.

    I know it is not recommended to run vMotion traffic on the management network, but this would only be short term (probably a week or two).  My biggest concern is whether I can make these changes 'live' without affecting the running machines.

    On a related note, if a host loses contact with the other hosts on the vMotion network, will that pose a problem (apart from not being able to vMotion)?  Will loss of connectivity on only the vMotion port cause any kind of isolation response, failover, etc.?

    Basically, you can modify most of the network settings - including vMotion - without affecting the virtual machines. When you say they are reconfiguring the VLANs, do you know how long it will take and whether the current VLAN will remain available during that time? If so, you would not have to implement any workaround at all, but could simply update the VLAN ID on the vMotion port groups of the ESXi hosts.
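
    As a minimal sketch, that VLAN update can also be done per host from the ESXi shell for a standard vSwitch (the port group name and new VLAN ID here are hypothetical):

    # Change the VLAN ID assigned to the vMotion port group
    esxcli network vswitch standard portgroup set -p vMotion -v 60

    # Confirm the new VLAN assignment
    esxcli network vswitch standard portgroup list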

    André

    PS: Regarding your question 'will loss of connectivity on only the vMotion port cause any kind of isolation response, failover, etc.?'

    No, it's the management network that is used for HA, so all you would lose is vMotion/DRS.

  • Single-NIC or multi-NIC vMotion network

    Here is our current design for our upcoming vSphere 5.1 deployment.

    There has been a good bit of internal discussion on whether to use a single 10 GbE NIC for vMotion or two 10 GbE NICs.

    Most of the debate has been around 'isolating' the vMotion traffic and keeping it as localized as possible.

    We have all vMotion traffic on a separate VLAN, vlan127, as you can see in our design.

    The big question is where exactly the vMotion traffic will go. Which switches/links will it actually traverse?

    Is this correct?

    1. If we go with a single vMotion NIC, then once a vMotion begins, traffic will flow between the host losing the virtual machine and the host gaining it. In this scenario, traffic will cross one BNT switch. This leads to two conclusions:

      1. Traffic never goes as far as the Juniper core.
      2. vlan127 (vMotion) does not need to be part of the trunk from the BNTs up to the Juniper core.
    2. If we go with two vMotion NICs, then both 10 GbE adapters might be involved in a vMotion. This means vMotion traffic between two ESXi hosts could hit one BNT switch, traverse the stack connections (two 10 GbE links between the BNTs), and reach the other host via its 10 GbE NIC. This also leads to two conclusions:
      1. Traffic never goes as far as the Juniper core. It stays isolated on a single BNT switch or moves between BNT switches through the two 10 GbE stack connections.
      2. vlan127 (vMotion) does not need to be part of the trunk from the BNTs up to the Juniper core.

    Design.png

    vMotion traffic is just unicast IP traffic (well, apart from a certain bug) between the vmkernel ports of the ESXi hosts configured for vMotion, ideally isolated in a non-routed layer 2 broadcast domain (VLAN). Simple as that. Where the traffic physically flows depends on which physical NICs are configured as uplinks for the respective vmkernel ports. The path between the two hosts obviously depends on the layer 2 switching/STP infrastructure, which in your case would be just the blade chassis switches.

    Multi-NIC vMotion essentially establishes several independent streams between different IP/MAC address pairs belonging to the same hosts. Consider the following:

    Hosts A and B each have vmk1, using physical vmnic1, connected to physical pSwitch1, and vmk2, using vmnic2, connected to pSwitch2. The two pSwitches trunk the vMotion VLAN directly between them.

    If both hosts have only vmk1 enabled for vMotion, traffic will never go beyond pSwitch1. If host B instead has only vmk2 enabled for vMotion, or fails over to its other uplink, the traffic will pass through both pSwitches.

    Now, if you enable both vmkernel interfaces for vMotion, it is difficult to say how the hosts decide which vmk talks to which. You may end up going through both pSwitches for both streams, or you may be lucky and end up with source and destination interfaces that reside on the same pSwitch. I don't know how ESXi decides the pairings; this article seems to suggest it is done deterministically, so that in a configuration like this the same-numbered vmks would connect to each other:

    http://www.yellow-bricks.com/2011/12/14/multi-NIC-VMotion-how-does-it-work/

    Whatever the case, unless you need hosts on different chassis switches, connected only through your core, to be able to vMotion between each other, there is no need at all to tag the vMotion VLAN on your links between the chassis and core switches.

    As you can see, your multi-NIC vMotion question is completely unrelated to that.

    If we go with a single vMotion NIC, then once a vMotion begins, traffic will flow between the host losing the virtual machine and the host gaining it. In this scenario, traffic will cross one BNT switch. This leads to two conclusions:

    1. Traffic never goes as far as the Juniper core.
    2. vlan127 (vMotion) does not need to be part of the trunk from the BNTs up to the Juniper core.

    1. Yes.

    2. Yes.

    The traffic *could* cross both BNT switches, per what I explained above.

    If we go with two vMotion NICs, then both 10 GbE adapters might be involved in a vMotion. This means vMotion traffic between two ESXi hosts could hit one BNT switch, traverse the stack connections (two 10 GbE links between the BNTs), and reach the other host via its 10 GbE NIC. This also leads to two conclusions:

    1. Traffic never goes as far as the Juniper core. It stays isolated on a single BNT switch or moves between BNT switches through the two 10 GbE stack connections.
    2. vlan127 (vMotion) does not need to be part of the trunk from the BNTs up to the Juniper core.

    1. Yes.

    2. Yes.

    Personally, I'd go with multi-NIC vMotion and use NIOC with shares in your configuration.
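
    As a minimal sketch, enabling vMotion on two vmkernel interfaces from the ESXi shell looks like this on ESXi 5.1 and later (the vmk numbers are hypothetical):

    # Tag both vmkernel interfaces for vMotion
    esxcli network ip interface tag add -i vmk1 -t VMotion
    esxcli network ip interface tag add -i vmk2 -t VMotion

    # Verify which tags an interface carries
    esxcli network ip interface tag get -i vmk1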

  • Can 2 ESXi clusters use the same network for vMotion?

    I have 2 clusters in the same data center.  The first is an ESXi 4.1 cluster with 8 hosts and approx. 120 VMs.  The other is an ESXi 4.1 cluster with 6 hosts and approx. 100 VMs.

    On the servers in the first cluster, I have the management interfaces on VLAN 5 and the vMotion interfaces on VLAN 6 (separate VLANs, as recommended).  The servers in the second cluster were set up with both the management and vMotion interfaces on VLAN 7.  I want to correct this by moving vMotion to a different VLAN.

    Is there any reason I should not use VLAN 6 for vMotion for both clusters?  Or would it be better to have each cluster on its own vMotion VLAN?

    Thank you.

    Yes, you can share it. We have 9 clusters across two different vCenters, and they all use the same VLAN for vMotion.

  • Problems with vMotion on ESXi

    Hello.

    I have a problem with vMotion migration between hosts.

    It gives me the following error:

    General system error: migration failed when copying data

    and it fails at 10%.

    HW configuration:

    M1000e blade enclosure

    5 x M610 blade servers, each with 1 additional network card for fabric B

    2 x I/O switch modules for fabric A

    2 x I/O switch modules for fabric B

    There wasn't budget for 6 I/O modules, so we have a configuration with only 4 I/O switch modules.

    Fabric B is dedicated to iSCSI.

    Fabric A is then used for VM, management, and vMotion traffic.

    vmnic0 + vmnic1 (fabric A) are configured as a NIC team on the virtual switch in an active/standby configuration:

    vmnic0 active and vmnic1 standby.

    On the vMotion vmkernel port on the same virtual switch, I have reversed the active/standby order: vmnic0 standby and vmnic1 active (I also tried using the same active/standby order for the vmkernel port as the virtual switch setting, but the same error occurs).

    VM/management traffic therefore runs primarily on vmnic0 and vMotion primarily on vmnic1.

    vSwitch0 on host1:

    VM Network, VLAN 168

    vMotion, VLAN 168, IP: 192.168.168.201

    Management Network, VLAN 168, IP: 192.168.168.145

    vSwitch0 on host2:

    VM Network, VLAN 168

    vMotion, VLAN 168, IP: 192.168.168.202

    Management Network, VLAN 168, IP: 192.168.168.146

    Tried the following without success:

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1013150

    Offline migration works very well.

    And it seems to be an intermittent problem, because last night an online vMotion worked, but today it errors out again.

    Any good suggestions on this issue?

    Hi Michael,

    vMotion does not touch the storage network (as long as the datastore is accessible by both hosts). Only the vmkernel ports with IPs .201 and .202 are involved.

    Check that only these two vmkernel ports have vMotion enabled. Are both servers' vmnic0 connected to the same physical switch? Try to catch the error message in the logs.

    Good luck

    Franck

  • Separate physical network for VMotion?

    In my design for ESXi 3.5 on HP blades, I have defined 2 pNICs for the management network and 2 pNICs for VMotion. These go to separate Cisco 3120 blade switches. I had specified an external switch stack to uplink the VMotion switches, with the management network switches going to another management stack. The client wants to reduce costs by sharing one external stack for both management and VMotion traffic, segregating them through VLANs and making the VMotion VLAN non-routable. Are there any pitfalls with this?

    Keep in mind, this is a secure site. IW has also always said that VLANs should not serve as a security separation because of the possibility of VLAN hopping. What are the risks here? Keep in mind that it is a sensitive, defense-related network, so I try to separate the networks as much as possible.

    Your ideas are welcome

    If you have the hardware (switches) to separate the network infrastructure, then I would do it, for pure performance reasons alone.

    A physical firewall allows us to block everything entering our management and VMotion networks.

    They are both VLANs behind the firewall, but we can still allow privileged access from administrator workstations or a management server to reduce the footprint. This method lets us manage the network with specific exceptions.

    It's pure risk vs. cost. If you think there is a good chance of someone VLAN hopping on your internal network, then physical separation is the best bet. If it's a low risk, then just segment it with VLANs and use access lists and switch port settings to reduce the risk of VLAN hopping, as sketched below.
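
    As a sketch of the kind of switch port settings meant here (Cisco IOS syntax; interface and VLAN numbers are hypothetical), the usual VLAN hopping mitigations are to pin host ports to a single VLAN, disable DTP negotiation, and keep an unused native VLAN on trunks:

    interface GigabitEthernet0/10
     ! Host-facing port: fixed access port on one VLAN
     switchport mode access
     switchport access vlan 200
     ! Disable DTP so the port can never negotiate itself into a trunk
     switchport nonegotiate

    interface GigabitEthernet0/48
     ! Uplink trunk: allow only the needed VLANs, unused native VLAN
     switchport mode trunk
     switchport trunk allowed vlan 200,201
     switchport trunk native vlan 999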

    Hope this helps you decide.

  • Firewall between ESX and vCenter vLAN & Production vLAN

    Hello

    Scenario:

    2 ESX hosts with 6 pNICs each: 2 for the Service Console & VMotion vLAN, 2 for the DMZ vLAN, and 2 for the Production vLAN.

    There are 2 pSwitches stacked in cluster mode, carrying 4 VLANs:

    1. vLAN1 - Production

    2. vLAN2 - DMZ

    3. vLAN3 - Service Console

    4. vLAN4 - VMotion

    Connectivity is fine, no problems, and all the VLANs work very well. The Service Console and VMotion NICs fail over to each other on a pSwitch or pNIC failure.

    Requirements:

    The Service Console is connected to vLAN3, which is the 172.16.20.0/24 network, on vSwitch0, which contains 2 pNICs & 3 port groups: the Service Console PortGroup, VMotion PortGroup & vCenter PortGroup. In the vCenter PortGroup I want to place the VirtualCenter VM, and I will also put the firewall VM there.

    Currently, VirtualCenter sits in the vCenter PortGroup at 172.16.20.55, and communication to the ESX hosts works fine.

    How do I connect to VirtualCenter & the ESX hosts while I am sitting in the Production vLAN? I added a static route on my PC for 172.16.20.0 via 128.104.145.149 (this is the pSwitch IP address), and I can connect without any problems. Of course, that does not protect the ESX farm and VirtualCenter.

    I want to secure the connection between the Production vLAN & the Service Console/VMotion vLAN, and get rid of the static route on the admin computers.

    Workaround options:

    1. A physical MS ISA Server with 2 NICs, one connected to the vCenter network & one to the Production vLAN, opening only the required ports.

    2. A physical firewall with 2 NICs, one connected to the vCenter network & one to the Production vLAN, opening the required ports.

    3. A virtual firewall (SmoothWall or ISA Server) with 2 vNICs, one connected to the vCenter PortGroup & one to the Production vLAN, opening the required ports.

    Please take a look at the attached diagram & share any advice.

    Best regards

    Hussain Al Sayed

    Hello

    On your diagram, I would change your colors: orange traditionally implies a DMZ, not green, but that is up to you. I use Smoothwall for exactly this purpose.

    Network <-> pNIC1 <-> vSwitch1 <-> vFW (smoothwall) <-> DMZ Network
    ....................................................<-> Green Network
    

    Your front firewall then controls access to everything. You can also use 'two' firewalls if you want to have only Red<->Green networks on each: on the first, the Red network is the outside and Green is the DMZ; on the second, Red is the DMZ and the ESX hosts are Green.

    To grant access to your ESX hosts from a system outside the firewall, you would have to enable and redirect port 443 to the appropriate location. In fact, I wouldn't do that. Instead, create a virtual machine or physical box inside the firewall and use a VPN (the OpenVPN Smoothwall addon) to reach the internal location, or create a pinhole that allows RDP access to that host/VM and then use the VIC from within the 'green network'. You must put pinholes in your firewall to grant the access you need, so a VPN works much better; you want to limit the number of pinholes you use.

    What you describe is quite feasible, but not without pinholes and proper routing through the firewall.

    Best regards

    Edward L. Haletky

    VMware communities user moderator

    ====

    Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

    Blue gears and SearchVMware Pro Articles: http://www.astroarch.com/wiki/index.php/Blog_Roll

    Top Virtualization Security links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

  • vSphere vSwitch configuration issue

    I'm currently building a multi-tenant environment with two ESXi hosts in a data center. A SAN will come eventually, so I want to start my base build with that in mind; when the SAN arrives, I want to just plug it in and not have to re-architect my network.

    I use a 24-port Cisco 2960-S switch; one of my ESXi hosts has 4 NICs and the other has 8. I do know that I will NOT be using the VMware vDistributed Switch.

    From conversations I've had, I am told I shouldn't attempt any kind of link aggregation on the Cisco end of things and should simply let VMware handle all the load balancing/failover. I'm open to arguments on this point; I don't take a position either way.

    Since it is a multi-tenant environment, I am curious about vSwitch/port group design with security in mind.

    #1: Can I attach all the NICs to a single vSwitch and use several port groups, one per VLAN? Would that be a safe method of keeping traffic between the VLANs segmented?

    #2: Or instead, create a vSwitch per VLAN and attach NICs to the appropriate vSwitch? (Looks like a waste of physical NICs to me.)

    #3: Should I allow all VLANs or just specify the ones I want to pass through? (With security in mind, my thought is to specify them.)

    Most traffic will stay within a VLAN or go to the WAN; maybe 5% of the traffic will be routed between VLANs (I have read that routing between vSwitches would have to happen on my firewall).

    That routing will hit my SonicWall firewall, which has 6 network interfaces. I thought I would use two NICs on the SonicWall, assigned to the different VLANs on my different vSwitches or port groups (depending on which method I go with). I know the SonicWall configuration side well enough, but I want all traffic between networks on the Cisco 2960-S to stay on that switch.

    OK,

    So, since you want the multi-tenant VMs isolated with no communication between them, you have two options. (Option 1) Take advantage of your physical switching/VLANs/firewall to create the separation and control. (Option 2) Use PVLANs.

    If you want more information on PVLANs, let me know and I'll send you a link to another post I did which covers them in detail.

    With that said, here are some VERY quick Visio drawings just to illustrate what you want to do.

    Diagram1

    Diagram2

    Diagram3

    OK, moving forward with your setup: your life/configuration will be much easier if you can get 6-8 NICs into the first host, which currently has 4; but if you cannot, let's talk about how to do what you want anyway.

    Let's start with your 8-NIC ESXi host just to lay out the overall idea, and then address the host with 4 NICs.

    So, you will want to create 3 vSwitches:

    1.) One for management and vMotion; this vSwitch will have 2 vmnics assigned for all external communication.

    2.) One for iSCSI, with 2 vmnics assigned for multipathing.

    3.) One for the VM network and all the other networks you will need for isolation/multi-tenancy, with 4 NICs for virtual machine traffic.  Feel free to borrow NICs from this vSwitch if you need them for other purposes.

    Now, since every vmnic on all of these vSwitches will carry several VLANs, you will need to make the corresponding ports on the physical switch trunks and tag all the VLANs that could pass over those NICs. For example, management (VLAN 10) and vMotion (VLAN 20) traffic will go over vmnic0 and/or vmnic1, so the ports those NICs connect to must be trunks with VLANs 10 and 20 tagged on them; see the sketch just below. If you have other questions on this topic, let me know.
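
    A minimal sketch of such a trunk port on the 2960-S (IOS syntax; the interface number is hypothetical):

    interface GigabitEthernet0/1
     ! Uplink for vmnic0/vmnic1: trunk carrying management and vMotion
     switchport mode trunk
     switchport trunk allowed vlan 10,20
     ! Bring the port up quickly; ESXi vSwitches do not participate in STP
     spanning-tree portfast trunk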

    Ditto for the iSCSI vSwitch; however, for iSCSI most people put the physical switch ports in access mode, which essentially assigns a single VLAN to everything connected to the port by default, so you don't need to tag a VLAN on those ports.

    Now for the VM networks.  Each of them will have its own network and VLAN.  By defining a separate VLAN and network, you can point each one at a different gateway, which will be your SonicWall.  On the SonicWall you can create routes and/or firewall rules to prohibit traffic between the networks, etc.  If you have any questions on this, let me know.

    Now, to get your NAS talking to your virtual machines natively for backups, without going through a datastore, the VMs where you want this should have two virtual network adapters: one on whatever normal network you want, and another on the NAS backup network, which routes to the NAS.

    Now, for the ESXi host with 4 NICs you will have more or less the same layout; the only difference is that vSwitch0 will do more work, since it will carry all your VM networks plus vMotion and ESXi management.  So you have to tag all of those VLANs on the physical switch ports those NICs connect to.

    If you are planning to use NFS to your NAS instead of iSCSI, you will be limited to only 1 Gb of throughput per link and would probably want to look at 10 Gb NICs in that case.  Anyway, if that's the direction you want, let me know, or don't hesitate to ask.

    Also, it would really be a good idea to get a second switch so you do not have a single point of failure.  Currently, if that one switch dies, everything is down, and you don't want that.  Once you get another switch, you would simply split all the redundant vmnics across the two switches: vmnic0 to switch1, vmnic1 to switch2, and so on.

    I hope this has helped; let us know if you have any questions.  None of this is set in stone; it's just a quick sketch to give you some ideas on how you might want to configure everything.

  • Which vSphere components must be on the same subnet?

    Of the vSphere components, which need to be on the same subnet/VLAN?

    For example:

    vCenter Server and ESXi hosts?

    Should separate clusters of ESXi hosts be on separate VLANs/subnets? What is the limit on the number of ESXi hosts I should have on one VLAN to avoid too much broadcast traffic?

    vCenter Server and its associated SQL Server?

    VMware Update Manager and vCenter Server?

    VMware Update Manager and the ESXi hosts it updates?

    There is no technical requirement that vSphere components be on the same IP subnet. You can have every component on a separate, routed (or even firewalled, provided you implement sufficient rules) subnet without problems. Whether that makes sense, however, is a completely different question.

    vCenter Server and ESXi hosts?

    It does not really matter, but it makes the most sense for them to be on the same subnet.

    Should separate clusters of ESXi hosts be on separate VLANs/subnets? What is the limit on the number of ESXi hosts I should have on one VLAN to avoid too much broadcast traffic?

    If they are managed by the same vCenter, or serve similar business purposes, I would put them on the same subnet. I would only consider an extra subnet for things like DMZ clusters, which should be separated from everything else as much as possible. A per-cluster vMotion VLAN might be a fair point in a normal scenario too.

    ESXi hosts really do not generate significant broadcast traffic, apart from a few ARP requests here and there. This isn't Windows, after all, and even if it were, the days when you had to worry about broadcast storms with 'only' a few hundred or thousand systems on a broadcast domain are long gone with the advent of Fast and Gigabit Ethernet.

    vCenter Server and its associated SQL Server?

    VMware Update Manager and vCenter Server?

    Again, it does not really matter, but I would put them on the same subnet unless I had a very good reason not to.

    VMware Update Manager and the ESXi hosts it updates?

    As long as they are not connected via a snail-paced WAN link, which would make staging updates take hours, anything is fine.

  • Installing a second switch to provide redundancy

    Hey people.

    VMware ESX 4.0 with Update 4 installed, vMotion licensed as well.

    Four HOST machines.

    Switches - HP Procurve 2910al - 48G

    EMC CLARiiON AX4-5i SAN array

    This network configuration was inherited when I came on board here, not my fault. The switches mentioned above I bought just for this project.

    I currently have one HP ProCurve switch.  The switch has two VLANs defined:

    One for VMkernel/VMotion (VLAN 200)

    One for the Service Console (VLAN 201)

    Currently, the Service Console VLAN (201) also extends to a second, older switch, with four connections going to the aforementioned switch and four more to the second switch.

    What I want to do is add a second switch for redundancy on the VLAN 200 network. Would it be as simple as installing a new switch, creating another VLAN 200 on it, and then moving half of the connections (currently two coming from each HOST, so one of each) to switch B?

    Do I have to change the vSwitch properties on the NIC Teaming tab to 'Route based on IP hash', or leave it alone (it is currently 'Route based on the originating virtual port ID')?

    Both switches would still be trunked to our main switch, so there is a path joining the configured VLANs.

    Thanks for the help!

    James

    Welcome to the community - yes, it would be that easy. As long as you do not change the VLANs, you won't need to change anything within the VMware environment.

    If anything, I'd say that separating your vMotion and VMkernel traffic onto separate VLANs would better follow best practice - but since it all works well, I would wait until you make the physical network change.
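
    A minimal sketch of what switch B would need, in ProCurve syntax (port numbers are hypothetical, untagged host ports are assumed, and the ";" lines are annotations rather than CLI input):

    vlan 200
       name "VMkernel-VMotion"
       ; host-facing ports, one NIC from each of the four hosts
       untagged 1-4
       ; uplink to the main switch, carrying the VLAN tagged
       tagged 48
       exit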

  • Mapping VCD networks back to vSphere networks

    Hello

    So, I'm new to vCloud Director (VCD) and trying to understand best practices on how to map VCD networks back to vSphere networks.

    I'll start with the external network.

    In my vSphere network I already have a distributed switch with several production port groups.  This distributed switch is backed by 2 NICs from each host.  The NICs connect to pSwitch trunk ports that allow traffic on all VLANs.

    To set up my VCD external network: should I start with a new distributed switch with port groups dedicated to VCD?  If so, I guess I would need to dedicate NICs to this new DVS?  And I guess this new 'external network' should have no access to my production network?

    So here is what I am thinking:

    1. A new DVS with a pNIC connected to my Cisco switch.

    2. Place a single VLAN on that pSwitch port, going out through my firewall to the internet only.

    3. Create the port group on this new distributed switch.

    4. Create my VCD external network to use this port group, which has access only to the VLAN that goes out to the internet.  No connectivity to my vSphere production network?

    Again, I am new to VCD, so feel free to offer a completely different solution.  I'm trying to figure out how the vSphere networking should be set up to support these VCD networks.


    Thanks in advance,
    Ian

    Let me start by asking what the purpose of this cloud is.  Is it to run systems for yourself, or for others (such as other companies)?

    Here is a quick setup which I think works for many people, or at least serves as a starting point to get an idea of how it all goes together. I used it on a small private cloud, and it worked well for our needs, so it may help by giving you a start.

    In vCenter:

    A single DVS.

    4 uplink NICs on the DVS.

    dvPortGroup named 'VMotion' for VMkernel/vMotion traffic, VLAN 11.

    dvPortGroup named 'Management' for management, VLAN 10.

    dvPortGroup named 'InternetDirect' for internet-only traffic, VLAN 12.

    dvPortGroup named 'VMNetwork' for normal network traffic, VLAN 13.

    VLAN 14 is also configured on the physical switch, for private use by VCNI traffic between fenced vApps.

    In vCloud:

    2 x provider networks set up in VCD:

    'Internet only' - pointed at the DVS dvPortGroup named 'InternetDirect'.

    'Normal network' - pointed at the dvPortGroup named 'VMNetwork'.

    1 x network pool:

    'Standard network pool' - configured to use VCNI on VLAN 14.

    For each org (we had a few, but they were all fairly simple):

    added 2 org networks, pointed at the two provider networks.

    Unless I am forgetting something, that's all we had to do to have a 'normal' network and an internet-only network, and to allow vApps to be fenced with private network segments.  A few of the VLANs we used could have been consolidated, but we liked knowing that ESXi-to-ESXi traffic was segmented with its own VLAN, among other considerations.

  • One large flat vSwitch vs. several vSwitches

    Experts,

    What is the advantage of having one large flat vSwitch vs. several vSwitches for all types of virtual machine traffic (management, vMotion, VM network)? Any improvement in performance?

    What is the advantage of having the VM network and management on two different VLANs?  Improved performance, or security?

    Our current configuration has one vSwitch with all eight physical NICs assigned:

    vSwitch0 <- vmnic0, 1, 2, 3, 4, 5, 6, 7

    |

    Management Network (VLAN 100) <- vmnic0 (active), vmnic7 (standby)

    |

    vMotion (VLAN 150) <- vmnic7 (active), vmnic0 (standby)

    |

    VM Network (VLAN 100) <- vmnic1, 2, 3, 4, 5, 6 (all active)

    We will be setting up a new VMware (5.0) environment, and I would like to configure two vSwitches as follows:

    vSwitch0 <- two physical NICs assigned (vmnic0, vmnic7)

    |

    Management Network (VLAN 100) <- vmnic0 (active), vmnic7 (standby)

    |

    vMotion (VLAN 150) <- vmnic7 (active), vmnic0 (standby)

    Allowing only VLAN 100 and 150 traffic on vmnic0 and vmnic7.

    vSwitch1 <- remaining physical NICs assigned (vmnic1, 2, 3, 4, 5, 6)

    |

    VM Network (VLAN 101) <- vmnic1, 2, 3, 4, 5, 6 (all active)

    Allowing only VLAN 101 traffic on the above NICs.

    I am thinking of proposing the configuration above.  If I go with this setup, do I get an improvement in performance, or other benefits? Please explain.

    sansaran wrote:

    I am thinking of proposing the configuration above.  If I go with this setup, do I get an improvement in performance, or other benefits?

    I don't think there should be any difference in performance between the two setup options you have. The only advantage of creating two vSwitches would be a simpler configuration: less risk of errors when setting up lots of hosts, and it is easier to quickly verify that the configuration is correct.

    A small advantage of creating one large vSwitch is that you could (if you wish) experiment with multi-NIC vMotion in ESXi 5 and assign several vmnics to vMotion, and it would be easy to give the vmnics 'back' to the virtual machines later if wanted.
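
    For reference, a minimal sketch of pinning that per-port-group active/standby order from the ESXi shell, using the names and NICs from the proposed design above:

    # Management: vmnic0 active, vmnic7 standby
    esxcli network vswitch standard portgroup policy failover set \
        -p "Management Network" -a vmnic0 -s vmnic7

    # vMotion: the opposite order
    esxcli network vswitch standard portgroup policy failover set \
        -p vMotion -a vmnic7 -s vmnic0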
