iSCSI multipath on a distributed vSwitch?

I read that there are problems with this, and I'm pretty positive I read that it is still not supported in one of the ESX 4 Update 1 documents (although for the life of me I can't find that text or the presentation this morning). Is this even an issue? And if so, can someone explain why exactly it doesn't work?

I'm documenting a config proposed for deployment using dual 10GbE cards, and I'm researching the use of iSCSI multipathing with dvSwitches. Which raises another question: on a 10GbE link, is multipathing even necessary?



-Justin

If you are talking about iSCSI port binding, then you must use standard vSwitches for this purpose. It is in the vSphere release notes.

vExpert

VMware communities moderator

-KjB

Tags: VMware

Similar Questions

  • iSCSI multipath with Distributed Virtual Switch error

    I've followed the iSCSI SAN guide and have iSCSI multipathing working well with a regular virtual switch. I've cleared all that configuration and have begun to see if I can get this working with a dvSwitch.

    I get the error "Add Nic failed in IMA"

    Does anyone have any ideas?

    I recently worked an iSCSI case with VMware support and they mentioned a known issue with iSCSI and dvS; it is also described in the release notes, but may still be a problem. You can contact technical support for confirmation.

    • ESX/ESXi 4.0 does not support iSCSI port binding configuration for VMkernel NICs connected to a vNetwork Distributed Switch (DVS)

    If you configure port binding for VMkernel NICs that are on a vNetwork Distributed Switch, the operation fails when you run the esxcli swiscsi nic add -n vmkX -d vmhbaXX and vmkiscsi-tool -V -a vmkX vmhbaXX commands through the service console or the vSphere CLI.

    Workaround: Use only legacy (standard) vSwitch VMkernel NICs for port binding.
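
    As a rough sketch of the workaround on a legacy (standard) vSwitch, the ESX/ESXi 4.x binding commands look like this (the vmk and vmhba names are only examples, substitute your own):

        esxcli swiscsi nic add -n vmk1 -d vmhba33
        esxcli swiscsi nic add -n vmk2 -d vmhba33
        esxcli swiscsi nic list -d vmhba33

    The list command should then show both VMkernel NICs bound to the software iSCSI adapter.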

    =========================================================================

    William Lam

    VMware vExpert 2009

    Scripts for VMware ESX/ESXi and resources at: http://engineering.ucsb.edu/~duonglt/vmware/

    vGhetto scripts repository

    VMware Code Central - Scripts/code samples for developers and administrators

    http://Twitter.com/lamw

    If you find this information useful, please award points for "correct" or "helpful".

  • No hosts/physical adapters available to add to a Distributed vSwitch

    It's probably something simple I forgot, but I have not been able to figure it out...

    Learning vSphere 5.5 virtualized in VMware Workstation 9. I have many things working already, to the point where I now have both ESXi hosts, an iSCSI datastore via standard vSwitches, etc. No cluster, but everything works as expected.

    Currently I'm looking into distributed vSwitches. I am trying to create one, but I'm facing the problem that I see no hosts/physical adapters available to add to the dvSwitch... Nothing appears in the list, nor does anything appear under "View Incompatible Hosts", so I can only continue if I choose "add hosts later". This is through the vSphere Client and/or the vSphere Web Client connected to the vCenter Server.

    The two ESXi hosts have 5 NICs each. They were all in use in vSwitches for VMs, iSCSI, management..., so I decided to add an additional NIC to each. I rebooted ESXi, and each host shows its new, unused NIC under Configuration > Network Adapters... This doesn't change anything; they still don't appear when adding hosts to the dvSwitch. I also tried putting the new NIC in a standard vSwitch first on each ESXi host, with no difference. Then I thought that maybe you can't see hosts/physical adapters to add to a dvSwitch until they have been added to a cluster, so I quickly created a cluster and added both ESXi hosts - no difference. I changed the dvSwitch version from 4 to 5.5... I tried different settings for the maximum number of physical NICs per host, from 1 to 8... Nothing helps.

    I hope I'm right to believe that you can add individual NICs from hosts that already have some other NICs in use as uplink adapters in standard vSwitches?

    Any comments appreciated,

    JH

    -

    Haha... It is even stupider than I thought.

    In order to tackle another question I had a few days ago, I created an extra datacenter with a very similar name. I moved all hosts to this new datacenter at some point. But when I tried to create the distributed vSwitch, I accidentally right-clicked the OLD (empty) datacenter. No hosts there to select, so an empty list... of COURSE, every time.

    Can someone slap me hard, please?

    JH

  • iSCSI multipath vs network redundancy?

    If I have a redundant network, say a VMkernel port on a vSwitch with two vmnics, maybe Port ID load balancing, the two vmnics going to two different physical switches, redundant links on all switches between the ESXi host and the iSCSI SAN, Rapid Spanning Tree set up, and some kind of EtherChannel / link aggregation on the iSCSI server, but only a single IP address...

    Is there an advantage to configuring iSCSI multipathing and letting the VMkernel manage failover, rather than letting the network itself make sure that there is a path between the host and the target?

    Storage vendors mostly use several IPs for the storage NICs (in some cases on two different networks).

    With the built-in VMware multipathing modules, you are limited to only one active path (at any given moment) to a LUN, but the vendor might have its own multipathing modules, or of course you can use more LUNs to make use of more paths.

    Bandwidth could become interesting with several virtual machines... or heavy I/O loads.

    André

  • Why several vmkernels for iSCSI multipath?

    I'm a little confused about iSCSI multipathing. What is the reason for creating two VMkernel interfaces and then using command-line tools to bind them to the vmhba? How is that different from using the software iSCSI initiator on a vSwitch with multiple vmnics connected to different physical switches?

    There is the same discussion in a Russian VMUG thread.

    iSCSI multipathing is not about the links themselves; it's about pairs of initiators (vmk interfaces) and targets.

    If you have two targets

    • and you need failover only - 1 vmk is enough (NICs in active/standby mode)
    • if you need load balancing
      • and you can use link aggregation + IP hash - 1 vmk is sufficient (PSP is Round Robin and NICs in active/active mode)
      • if you cannot use link aggregation - 2 vmk are needed (see the command sketch below).
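
    As a rough sketch of the two-vmk case on ESXi 5.x and later (the adapter and vmk names are only examples), the binding is done with:

        esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
        esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
        esxcli iscsi networkportal list --adapter vmhba33

    Each bound VMkernel port needs exactly one active vmnic in its port group, otherwise the binding is not compliant.
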
  • vMotion VLAN\Distributed vSwitch Networking question

    I am facing a strange problem and was hoping someone might be able to enlighten me. I have a vCenter cluster that contains 5 hosts. Four of the hosts use standard vSwitches and one host uses a distributed vSwitch. The four hosts that use standard vSwitches have a NIC dedicated to the vMotion network. The one host that uses the distributed vSwitch has two NICs in a team. I followed the instructions listed here to set up the configuration (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007467)

    Three of the standard vSwitch hosts are able to ping the vMotion IP address of the host that uses the distributed vSwitch, while one of the hosts using a standard vSwitch is unable to ping the host that uses the distributed vSwitch.

    If it were a problem with the distributed vSwitch or with the EtherChannel configured on the Cisco switch for the NIC team, I would think that none of the other hosts could ping the IP address, but they can. I also confirmed that the vMotion VLAN is configured on all Cisco switches and is shared across all of the hosts.

    I am at a loss to explain why one host is unable to communicate with the distributed vSwitch host on the vMotion VLAN. Any help would be appreciated.

    Thank you.

    The problem I was experiencing was due to an incorrect configuration. After talking with a VMware technician, he explained to me that you should not use an EtherChannel configuration on the switch that connects to the hosts' vMotion NICs. The reason relates to the load balancing policy associated with the NIC team.

    For an EtherChannel configuration to work, the load balancing policy for the NIC team must be set to 'Route based on IP hash'. You are not able to use that load balancing policy with a vMotion NIC team. The correct load balancing policy for a vMotion NIC team is 'Route based on the originating virtual port ID'. This is because one NIC is active while the other is standby.
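
    For hosts on standard vSwitches the teaming policy can also be checked and changed from the CLI (a distributed vSwitch's teaming policy is edited in vCenter instead); a quick sketch with a placeholder vSwitch name:

        esxcli network vswitch standard policy failover get -v vSwitch1
        esxcli network vswitch standard policy failover set -v vSwitch1 -l portid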

    After removing the EtherChannel configuration from the switch and redoing my setup according to the above-referenced article, the hosts can communicate on the vMotion network properly. I asked the VMware technician if there was any indication as to why the three hosts were working while one was not. He did not have an answer and was surprised that those other three hosts were working with the configuration I had initially.

  • Distributed vSwitch to new vCenter migration

    Hi all

    So I have been using these great scripts:

    http://www.gabesvirtualworld.com/migrating-distributed-vswitch-to-new-vCenter/

    They work great, but I need to know if I can use Get-Cluster instead of Get-Datacenter; I am trying to run against certain clusters only, and the script runs at the datacenter level.

    I'll do some tests, but I wanted to see what others have to say about it.

    Thank you

    You will need to disconnect those 4 ESXi hosts from the vdSwitch before migrating, I suspect.

    You can use the cmdlet Remove-VDSwitchVMHost for this.
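
    Something along these lines should do it, assuming a cluster named "Cluster01" and a dvSwitch named "dvSwitch01" (both names are placeholders, and the hosts' uplinks and VMkernel ports need to be migrated off the dvSwitch first); treat it as an untested sketch:

        # remove every host of one cluster from the distributed switch before the migration
        $vds = Get-VDSwitch -Name "dvSwitch01"
        Get-Cluster -Name "Cluster01" | Get-VMHost | ForEach-Object {
            Remove-VDSwitchVMHost -VDSwitch $vds -VMHost $_ -Confirm:$false
        }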

  • Distributed vSwitch without a vCenter?

    We are looking at migrating from standard vSwitches to distributed vSwitches in the near future, but I need to assess the possible headaches with that.

    Distributed VSwitch requires vCenter according to various documents.

    1. Does that mean vCenter is necessary for configuration only, or will the switch stop working if vCenter is inaccessible / crashed?

    2. If I had to reinstall vCenter, what happens to the distributed vSwitches?

    3. If we migrated the ESX servers to a different vCenter, would the ESX configuration still work?

    When we went from ESX 3.5 to ESX 4.0, we simply disconnected the ESX servers from the old vCenter 2.5 and connected them to the new vCenter 4 without any network-related problems, and could do it without any interruption of service for running virtual machines.

    With the distributed vSwitch and its vCenter requirement, it is not clearly documented whether we could perform such an operation in the future.

    Does anyone have the answers to the questions? Anyone tried?

    A distributed vSwitch uses "hidden" or "proxy" vSwitches to perform tasks on the host itself.

    Thus, vCenter is required only for the configuration, and a host will continue to function if the connection is lost.

    That being said, the config is part of the vCenter, so you will have problems if you migrate or rebuild vCenter.  Here's a previous post that describes some problems and a workaround: http://communities.vmware.com/message/1400976#1400976

    -KjB

  • iSCSI Multipathing

    Is this feature available?

    Hi John

    When you say iSCSI multipathing, are you asking if the software iSCSI initiator supports iSCSI CIDs? As other posters have said, load balancing / redundancy is supported even in 3.5.x. One of the best resources (over and above the iSCSI SAN Configuration Guide) is "TA2213 - VMware Infrastructure 3 storage: iSCSI implementation and best practices" presented at VMworld 2008. This deck covers some very good implementation details around load balancing and multipath configurations.

    In vSphere, we made a lot of iSCSI improvements.  For example, we have ways to set up the ESX iSCSI initiator so that each port can establish a session to a target.  This means more paths for failover or load balancing. It is documented in the new iSCSI SAN Configuration Guide for vSphere, which will be online soon.

    hope this helps

    Lee Dilworth

  • iSCSI Multipathing vs just making the NICs active/active (vSphere)

    Hello

    I have 2 physical NICs in my dedicated iSCSI vSwitch, and I understand that if I want to use multipathing I need to create 2 VMkernel ports, assign one to each NIC, and bind them to the iSCSI initiator using the CLI.

    That is, what benefits does this new multipathing feature give over just teaming the NICs in an active/active configuration? Aren't the two configurations equally redundant?

    Thank you

    Pete

    MPIO allows a server with multiple NICs to transmit and receive I/O across all available interfaces to a corresponding MPIO-capable SAN.  If a server had four 1 Gbit/s NICs and the SAN had four 1 Gbit/s NICs, the theoretical maximum rate would be about 400 MB/s (3.2 Gbit/s).

    Link aggregation with NIC teaming (LACP, PAgP, 802.3ad, etc.) doesn't work the same way. Link aggregation does not improve the throughput of a single traffic flow (a single source communicating with a single destination).  A single stream will always travel the SAME path.

    The advantage of link aggregation is seen when several 'unique' streams (each with a different source/destination) exist.  Each individual stream is sent down its own available NIC interface (chosen by a hash algorithm).  The more unique streams there are, the more NICs are used, and the more aggregate throughput is achieved. Link aggregation will not improve throughput for iSCSI, although it does give a degree of redundancy.
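
    If it helps, on ESXi 5.x and later the path selection policy for a LUN can be switched to Round Robin from the CLI so that I/O rotates across all active paths; a sketch with a placeholder device identifier:

        # list the current paths, then set the Round Robin path selection policy for that device
        esxcli storage nmp path list -d naa.xxxxxxxxxxxxxxxx
        esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR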

    I hope this helps.

    Paul

  • iSCSI - two subnets on one vSwitch with iSCSI port binding

    Hello

    Is the scenario below supported with regard to port binding for software iSCSI (ESXi 6.x)?

    Two iSCSI storage devices (2 controllers) in two different subnets: 192.168.10.x and 192.168.20.x (mask 255.255.255.0).

    An ESXi host with one iSCSI vSwitch.

    Four VMkernel ports: two in the 192.168.10.x subnet and two in the 192.168.20.x subnet.

    Software iSCSI port binding is configured for each VMkernel port.

    It is worth noting that this scenario is a little different from the examples in the VMware KB: Considerations for using software iSCSI port binding in ESX/ESXi

    It does not work this way. iSCSI port binding requires a one-to-one relationship between VMkernel ports and vmnics.

    From https://kb.vmware.com/kb/2045040

    To implement a teaming policy that is compatible with iSCSI port binding, you need 2 or more VMkernel ports on the vSwitch and an equivalent number of physical NICs to bind them to....
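
    A compliant layout, sketched with placeholder port group and uplink names, gives each iSCSI VMkernel port group exactly one active uplink (any other uplinks should be set to unused for that port group) before the ports are bound:

        esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic2
        esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic3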

    André

  • Reduce the number of iSCSI multipathing paths

    I'm experimenting with iSCSI performance on a NAS. Currently I have 4 NICs on the host and 4 NICs on the NAS. That leaves me with 16 paths to the iSCSI datastore. I would like to reduce multipathing so that I end up with 4 paths. I would try having NIC1 on the host go to NIC1 on the NAS, NIC2 on the host go to NIC2 on the NAS, and so on. Is this possible? I saw that I could disable paths, but I don't see any way to say which path goes to which NIC, so I might end up with 4 paths all on the same NIC. Any help would be appreciated.

    I found that if I broke out each of the 4 NICs I use for iSCSI traffic into different subnets, and then put the 4 NICs on the NAS into the same subnets, I could get the number of paths down to 4. However, with the VMware software iSCSI adapter there is no way to get more than 1 NIC's worth of throughput per flow. Round Robin will only use 1 NIC's worth of throughput at a time, so I was stuck using only 25% of the NAS's potential throughput for iSCSI traffic.

  • iSCSI Multipathing - does each path refer to a specific target IP address?

    Dear Experts,

    I just set up MPIO multipathing between an ESXi 6.0 host and a Synology DS2015xs NAS. On the ESXi side, I created two dedicated VMkernel adapters, each with one physical NIC and all other NICs set to unused. I then added the adapters to the software iSCSI adapter in ESXi. On the NAS side, I configured both interfaces in the same subnet as the VMkernel adapters. I then created a target which allows multiple iSCSI sessions, added LUNs, and allowed access to the target for the IQN of the ESXi software iSCSI adapter.

    Back on the ESXi side, I added a static discovery entry, manually entering the IP address of the NAS and the target IQN. After rescanning the storage, ESXi showed two paths to the NAS interface whose IP address I had entered as the static discovery entry:

    mpio-2paths.PNG

    With this configuration, multipathing does not work: once I shut down that interface on the NAS, ESXi does not automatically fail over to the second interface of the NAS. After I added a second static discovery entry specifying the same target but the second IP address of the NAS, ESXi showed all four paths, two to each interface on the NAS:

    mpio-4paths.PNG

    With this configuration, multipathing works as expected. If an interface on the NAS fails, ESXi will automatically use a path to the second interface.

    In fact, it makes sense. At first, I was wondering whether it is correct that I have to manually enter each interface of the target in order to obtain the corresponding paths to it. I initially thought that I would just need to enter one IP address of the target and that MPIO would use some kind of magic to automatically discover the paths to the other interfaces available on the NAS. Later, I saw that this is the case when using dynamic discovery instead of static discovery. However, that same dynamic discovery adds all interfaces of the NAS as static discovery entries. Once I delete one of these entries from the list, the corresponding paths are deleted as well.

    So is my understanding correct that a path always refers to the corresponding IP address of the target, and that a path will stop working once the corresponding IP address is no longer available?

    Thank you

    Michael

    When you configure access to storage, it is always important to follow the vendor's recommendations. Not only for how to set up the IP configuration, but also for the path policy (e.g. Fixed, MRU, Round Robin).

    Regarding your question, many storage arrays are configured with an extra VIP (virtual IP address) or group IP address, which is then used to configure dynamic (or other) discovery on the ESXi hosts. When an initiator connects to such a VIP or group IP, the target responds with all of its associated IP addresses. If this isn't an option for your storage device, you must provide all the target IPs manually (as you did).
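
    For reference, both target IPs can be added as static discovery entries from the CLI roughly like this (the adapter name, addresses and IQN below are placeholders for your Synology values):

        esxcli iscsi adapter discovery statictarget add --adapter vmhba33 --address 192.168.1.21:3260 --name iqn.2000-01.com.synology:nas.target-1
        esxcli iscsi adapter discovery statictarget add --adapter vmhba33 --address 192.168.1.22:3260 --name iqn.2000-01.com.synology:nas.target-1
        esxcli storage core adapter rescan --adapter vmhba33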

    André

  • iSCSI and LAN traffic on the same vSwitch?

    Hello

    It is a question for a lab and not for production. I have a server with two network adapters.

    Is it possible to use these network adapters with two vSwitches, for LAN and iSCSI? Will it work?

    I know that it is not recommended.

    Thank you

    Edy

    You could also put them on a single vSwitch with NIC teaming overrides. For iSCSI, you can have NIC1 active and NIC2 in standby mode. For data traffic, you can have NIC2 active and NIC1 in standby mode.
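
    A rough sketch of those overrides from the CLI, assuming a single vSwitch with port groups named "iSCSI" and "VM Network" and uplinks vmnic0/vmnic1 (all placeholder names):

        esxcli network vswitch standard portgroup policy failover set -p iSCSI -a vmnic0 -s vmnic1
        esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a vmnic1 -s vmnic0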

  • iSCSI Multipathing - really useful?

    Suppose you have this configuration:

    2 ESXi hosts -> HP DL380 G7 (4 x 1 Gbit NIC + 2 x 10 Gbit NIC)

    1 storage array -> MSA P2000 (2 x 1 Gbit NIC + 4 x 10 Gbit NIC)

    The two hosts will be connected to the storage through the 10 Gbit adapters. Since no 10 Gbit switch is available, the hosts will be DIRECTLY connected to the storage.

    Is multipathing still necessary in this case? I mean, I can understand that multipathing is very useful if you have several paths to reach the storage (multiple switches with different networks, for example). But in this case, if for example the 10 Gbit card on one server is lost, I would lose both paths anyway.

    Am I wrong?

    Are the 4 x 10 GbE NICs on the storage a single card? If not, you can cross the cables from the hosts to it and have protection against a dying NIC on the storage side.

    I read your post to say that the 2 x 10 GbE NICs on the hosts are a single card, and if the 4 x 10 GbE NICs on the storage are also a single card, then the only reason for multipathing would be to have access to the additional uplinks for higher iSCSI throughput.

    I would also consider using the 1 GbE NICs as backup uplinks to the 10 GbE links to help prevent single points of failure.
