iSCSI multipath vs network redundancy?

If I have a redundant network, say a VMkernel port on a vSwitch with two vmnics, perhaps using Port ID load balancing, with the two vmnics going to two different physical switches, redundant links on all switches between the ESXi host and the iSCSI SAN, Rapid Spanning Tree configured, and some kind of EtherChannel / link aggregation on the iSCSI server, but only a single IP address...

Is there an advantage to configuring iSCSI multipathing and letting the VMkernel manage failover, rather than letting the network itself ensure that there is a path between the host and the target?

Most storage vendors use several IP addresses for their storage NICs (in some cases on two different networks).

With VMware's built-in multipathing modules you are limited to a single active path to a LUN at any given moment, but the vendor may offer its own multipathing module, and of course if you use more LUNs, more paths are used.

Bandwidth can become interesting with several virtual machines... or heavy I/O loads.
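
As a rough illustration of that, here is a PowerCLI sketch (the host name is a placeholder, and whether Round Robin is appropriate depends on your array vendor's guidance) that shows the current path policy of each iSCSI LUN and then switches it to Round Robin so more than one path carries I/O:

# Show the current multipath policy of each iSCSI LUN
$vmhost = Get-VMHost "esx01.example.local"
$hba = Get-VMHostHba -VMHost $vmhost -Type iScsi
Get-ScsiLun -Hba $hba -LunType disk | Select-Object CanonicalName, MultipathPolicy

# Switch them to Round Robin
Get-ScsiLun -Hba $hba -LunType disk | Set-ScsiLun -MultipathPolicy RoundRobin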

André

Tags: VMware

Similar Questions

  • iSCSI multipath with Distributed Virtual Switch error

    I've followed the iSCSI SAN guide and have iSCSI multipathing working well with a regular virtual switch. I've undone all of that configuration and have begun to see if I can get this working with a dvSwitch.

    I get the error "Add Nic failed in IMA"

    Does anyone have any ideas?

    I recently worked on an iSCSI case with VMware support and they mentioned a known issue with iSCSI and dvSwitches. It is also described in the release notes, but you may still hit the problem. You can contact technical support for confirmation.

    • ESX/ESXi 4.0 does not support port binding configured on vNetwork Distributed Switch compatible VMkernel network interface cards

    If you configure port binding with vNetwork Distributed Switch compatible VMkernel NICs, the operation fails when you enter the esxcli swiscsi nic add -n vmkx -d vmhbaxx and vmkiscsi-tool -V -a vmkx vmhbaxx commands through the service console or the vSphere CLI.

    Workaround: Use only legacy vSwitch VMkernel NICs for port binding.
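
    For reference, a minimal PowerCLI sketch of that workaround (vmnic names, IP addresses and the vmhba number are placeholders, not taken from this thread): build a standard vSwitch with one VMkernel port per uplink, then do the binding itself with the esxcli command quoted above.

    $vmhost = Get-VMHost "esx01.example.local"
    $vswitch = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch-iSCSI" -Nic vmnic2,vmnic3

    # One VMkernel port per uplink, each in its own port group
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "iSCSI-1" -IP 10.0.0.11 -SubnetMask 255.255.255.0
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "iSCSI-2" -IP 10.0.0.12 -SubnetMask 255.255.255.0

    # Override teaming on each port group so it has exactly one active vmnic, then bind
    # the VMkernel ports on the host, e.g.:
    #   esxcli swiscsi nic add -n vmk1 -d vmhba33
    #   esxcli swiscsi nic add -n vmk2 -d vmhba33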

    =========================================================================

    William Lam

    VMware vExpert 2009

    Scripts for VMware ESX/ESXi and resources at: http://engineering.ucsb.edu/~duonglt/vmware/

    vGhetto scripts repository

    VMware Code Central - Scripts/code samples for developers and administrators

    http://Twitter.com/lamw

    If you find this information useful, please award points for "correct" or "helpful".

  • iSCSI Multipathing

    Is this feature part of ESX 3.5?

    Hi John

    When you say iSCSI multipathing, are you asking whether the s/w iSCSI initiator supports iSCSI CIDs? As other posters have said, even in 3.5.x iSCSI load balancing / redundancy is supported. One of the best decks to look for (over and above the iSCSI SAN Configuration Guide) is "TA2213 - VMware Infrastructure 3 storage: iSCSI implementation and best practices", presented at VMworld 2008. That deck covers some very good implementation details around load balancing and multipath configurations.

    In vSphere, we made a lot of iSCSI improvements.  For example, there are now ways to set up the ESX iSCSI initiator so that each port can establish a session to a target.  This means more paths for failover or load balancing. It is documented in the new iSCSI SAN configuration guide for vSphere, which will be online soon.
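
    To see the effect, a quick PowerCLI check (the host name is a placeholder) that counts how many paths each iSCSI LUN ends up with once the extra sessions are established:

    $hba = Get-VMHost "esx01.example.local" | Get-VMHostHba -Type iScsi
    Get-ScsiLun -Hba $hba -LunType disk | ForEach-Object {
        "{0} : {1} path(s)" -f $_.CanonicalName, @(Get-ScsiLunPath -ScsiLun $_).Count
    }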

    hope this helps

    Lee Dilworth

  • "This host currently has no management network redundancy", but there are?

    In vCenter, the Summary tab for a host displays this message:

    "This host currently has no management network redundancy.

    I have attached screenshots of the host > Configuration > Networking and host > Configuration > Network Adapters pages. It seems to me that the management network is behind 2 NICs teamed together.

    What am I doing wrong? How can I fix it?

    Thank you

    More of an idea: right-click the host and run "Reconfigure for HA".

    André

  • Why several vmkernels for iSCSI multipath?

    I'm a little confused about iSCSI multipathing. What is the reason for creating two VMkernel interfaces and then using command line tools to bind them to a new vmhba? How is that different from using a software iSCSI initiator on a vSwitch with multiple vmnics connected to different physical switches?

    There is the same discussion in a Russian VMUG thread.

    iSCSI multipathing is not about the uplinks; it's about pairs of initiator (vmk interface) and target.

    If you have two targets

    • and you only need failover - 1 vmk is enough (NICs in Active/Standby mode; see the sketch after this list)
    • if you need load balancing
      • and you can use link aggregation + IP hash - 1 vmk is sufficient (PSP is Round Robin and NICs in Active/Active mode)
      • if you cannot use link aggregation - 2 vmk are needed.
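
    A minimal PowerCLI sketch of the failover-only case (vSwitch port group and vmnic names are placeholders): a single VMkernel port whose port group is set to Active/Standby.

    $vmhost = Get-VMHost "esx01.example.local"
    Get-VirtualPortGroup -VMHost $vmhost -Name "iSCSI-vmk" |
        Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive vmnic2 -MakeNicStandby vmnic3
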
  • iSCSI multipath on a distributed vSwitch?

    I read that there are problems with this, and I'm pretty positive I read that it is still not supported in one of the ESX 4 U1 documents (although for the life of me I can't find that text this morning).  Is this even an issue?  And if so, can someone explain why exactly this does not work?

    I'm documenting a proposed config for a deployment using dual 10GbE cards, and I'm researching the use of iSCSI multipath and dvSwitches.  Which raises another question: on a 10GbE channel, is multipathing even necessary?



    -Justin

    If you are talking about port binding, then you must use standard vSwitches for that purpose.  It is in the vSphere release notes.

    vExpert

    VMware communities moderator

    -KjB

  • That error again... host currently has no management network redundancy

    I keep getting the "host currently has no management network redundancy" error.

    I think I have the correct configuration, but I'm not sure, since our network guys gave me the IP addresses to use for HA.

    Here is the config for HA

    ESX Server 6:

    Service console is on vSwitch 0 at 172.16.1.106 / 255.255.0.0.

    The second (redundancy) console is on vSwitch 4 at 77.77.77.10 / 255.255.255.0.

    ESX Server 5:

    Service console is on vSwitch 4 at 172.16.1.105 / 255.255.0.0.

    The second (redundancy) console is on vSwitch 4 at 77.77.77.9 / 255.255.255.0.

    I've reconfigured HA on ESX Server 6 and ESX Server 5 and even restarted them, but I still get the redundancy error...

    Is something not properly configured?

    stanj wrote:

    If NIC teaming offers the same redundancy, then why do so many articles point to using a second service console for HA redundancy?

    It's simply one of the options given. There are several ways to provide HA heartbeat redundancy.

    The article also indicates that HA advanced options such as das.isolationaddress2 and das.failuredetectiontime need to be set when you set up a secondary service console. Is that a more effective approach, but also a more complicated one?

    These advanced options are not always required and simply provide another way to configure HA redundancy.
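
    If you do go the advanced-options route, here is a PowerCLI sketch (cluster name, isolation address and timeout are placeholders; check current guidance for the actual values):

    $cluster = Get-Cluster "Cluster01"
    New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.isolationaddress2" -Value "77.77.77.1" -Confirm:$false
    New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.failuredetectiontime" -Value "60000" -Confirm:$false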

    Here's something interesting to look at as well

    http://www.yellow-bricks.com/VMware-high-availability-deepdiv/

    For me, I always learned the KISS method (Keep It Simple, Stupid).  Adding a second NIC is the easiest way to keep your environment redundant, in my opinion.

  • Create a "Network redundancy lost" alarm by script?

    Hello:

    I'm trying to write a script that will create an alarm to monitor lost network redundancy on a host (I know how to do it manually).

    I know how to create an alarm for a 'basic' event (LucD - thank you for your help), and I know that for a 'Lost network redundancy' alarm I have to use 'vprob.net.redundancy.lost' instead of 'EnteringMaintenanceModeEvent' (as in my example); but I still struggle to combine it all...

    Can someone help me please?

    Thank you

    qwert

    I don't have a fancy script with parameters and all the rest, but I can show you the steps to filter alarms by name and remove the desired one.

    #Retrieve the AlarmManager
    $alarmManager = Get-View -Id "AlarmManager-AlarmManager"

    #Get the entity that contains the alarm you want to remove. I'll do it for a virtual machine
    $entityView = Get-VM -Name MyVm | Get-View

    #Get all alarm MoRefs
    $alarmMoRefList = $alarmManager.GetAlarm($entityView.MoRef)

    #Retrieve the alarm views from the MoRefs
    $alarmViewList = $alarmMoRefList | ForEach-Object { Get-View $_ }

    #Now filter the alarms by name
    $alarmToDelete = $alarmViewList | Where-Object { $_.Info.Name -eq "AlarmToDelete" }

    #Finally remove the alarm
    $alarmToDelete.RemoveAlarm()

    It shouldn't be hard to put this in a reusable script or function. If you have problems with this, I'll be happy to help you again!
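
    To cover the creation side of the original question as well, here is a rough, untested sketch along the same Get-View lines. The alarm name, description and host name are placeholders; the property names follow the vSphere API's AlarmSpec / EventAlarmExpression objects, and "vprob.net.redundancy.lost" is the event ID you mentioned.

    #AlarmManager and the host to attach the alarm to
    $alarmManager = Get-View -Id "AlarmManager-AlarmManager"
    $hostView = Get-VMHost -Name "esx01.example.local" | Get-View

    #Event expression: trigger on the "lost network redundancy" EventEx
    $expression = New-Object VMware.Vim.EventAlarmExpression
    $expression.EventType = "EventEx"
    $expression.EventTypeId = "vprob.net.redundancy.lost"
    $expression.ObjectType = "HostSystem"
    $expression.Status = "red"

    #Alarm definition wrapping the expression
    $spec = New-Object VMware.Vim.AlarmSpec
    $spec.Name = "Lost network redundancy (scripted)"
    $spec.Description = "Triggers when a host loses network uplink redundancy"
    $spec.Enabled = $true
    $spec.Expression = New-Object VMware.Vim.OrAlarmExpression
    $spec.Expression.Expression += $expression
    $spec.Setting = New-Object VMware.Vim.AlarmSetting
    $spec.Setting.ReportingFrequency = 0
    $spec.Setting.ToleranceRange = 0

    #Create the alarm on the host
    $alarmManager.CreateAlarm($hostView.MoRef, $spec)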

  • iSCSI Multipathing vs just making the NICs active/active (vSphere)

    Hello

    I have 2 physical NICs in my dedicated iSCSI vSwitch, and I understand that if I want to use multipathing I need to create 2 VMkernel ports, assign one to each NIC, and bind them to the iSCSI initiator using the CLI.

    That said, what benefit does this new multipathing feature give over just teaming the network adapters in an Active/Active configuration? Aren't the two configurations just as redundant as each other?

    Thank you

    Pete

    MPIO allows a server with multiple NICs to transmit and receive I/O across all available interfaces to a corresponding MPIO-capable SAN.  If a server had four 1 Gbit/s network adapters and the SAN had four 1 Gbit/s network adapters, the theoretical maximum throughput would be about 400 MB/s (3.2 Gbit/s).

    Link aggregation by NIC teaming (or LACP, PAgP, 802.3ad, etc.) doesn't work the same way. Link aggregation does not improve the throughput of a single traffic flow (a single source communicating with a single destination).  A single stream will always travel the SAME path.

    The advantage of link aggregation is seen when several 'unique' streams (each with a different source / destination) exist.  Each individual stream is sent down its own available NIC interface (chosen by a hash algorithm).  The more unique streams, the more NICs are used, and the more aggregate throughput is achieved. Link aggregation will not improve throughput for iSCSI, although it does give a degree of redundancy.
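
    A toy PowerShell illustration of that last point (a simplified stand-in, not the actual vSwitch or switch hash algorithm): any deterministic hash of source and destination IP pins a given flow to one uplink, which is why a single initiator talking to a single target never spreads across NICs.

    # Simplified: pick an uplink index from a hash of (source IP, destination IP)
    function Get-UplinkIndex {
        param([string]$SrcIp, [string]$DstIp, [int]$UplinkCount)
        $src = [BitConverter]::ToUInt32(([System.Net.IPAddress]::Parse($SrcIp)).GetAddressBytes(), 0)
        $dst = [BitConverter]::ToUInt32(([System.Net.IPAddress]::Parse($DstIp)).GetAddressBytes(), 0)
        ($src -bxor $dst) % $UplinkCount
    }

    Get-UplinkIndex -SrcIp "10.0.0.11" -DstIp "10.0.0.50" -UplinkCount 4   # the same flow always gets the same index
    Get-UplinkIndex -SrcIp "10.0.0.12" -DstIp "10.0.0.50" -UplinkCount 4   # a different flow may land on another uplink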

    I hope this helps.

    Paul

  • Reduce the number of iSCSI multipathing paths

    I'm experimenting with iSCSI performance on a NAS. Currently I have 4 NICs on the host and 4 NICs on the NAS. That leaves me with 16 paths to the iSCSI datastore. I would like to restrict multipathing so I end up with 4 paths. I would like NIC1 on the host to go to NIC1 on the NAS, NIC2 on the host to go to NIC2 on the NAS, and so on. Is this possible? I saw that I could disable paths, but I didn't see any way to say which path goes to which NIC, so I might end up with 4 paths all on the same network adapter. Any help would be appreciated.

    I found that if I split each of the 4 NICs I use for iSCSI traffic into different subnets and then set the 4 NICs on the NAS into the same subnets, I could get the number of paths down to 4. However, with the VMware software iSCSI adapter there is no way to get more than 1 NIC's worth of throughput per flow. Round Robin only drives 1 NIC's worth of bandwidth at a time, so I was stuck using only 25% of the NAS's potential throughput for iSCSI traffic.
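
    For the "disable paths" part, a PowerCLI sketch (host name, canonical name and path name are placeholders): list a LUN's paths together with the target each one leads to, then turn off the ones you don't want so only the 4 you chose stay active.

    $lun = Get-VMHost "esx01.example.local" | Get-ScsiLun -CanonicalName "naa.xxxxxxxxxxxxxxxx"

    # Show each path and the target it leads to
    Get-ScsiLunPath -ScsiLun $lun | Select-Object Name, SanID, State

    # Disable an unwanted path by its runtime name
    Get-ScsiLunPath -ScsiLun $lun |
        Where-Object { $_.Name -eq "vmhba33:C0:T2:L0" } |
        Set-ScsiLunPath -Active:$false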

  • iSCSI Multipathing - does each path refer to a specific target IP address?

    Dear Experts,

    I just set up MPIO multipathing between an ESXi 6.0 host and a Synology DS2015xs NAS. On the ESXi side, I created two dedicated VMkernel adapters, each with one physical NIC and all other NICs set to unused. I then added the adapters to the software iSCSI adapter in ESXi. On the NAS side, I configured both interfaces in the same subnet as the VMkernel adapters. I then created a target which allows multiple iSCSI sessions, added LUNs, and allowed access to the target for the IQN of the ESXi software iSCSI adapter.

    Back on the ESXi side, I added a static discovery entry by manually entering the IP address of the NAS and the target IQN. After rescanning the storage, ESXi showed two paths to the NAS interface whose IP address I had entered as a static discovery entry:

    mpio-2paths.PNG

    With this configuration, multipathing does not work - if I shut down that interface on the NAS, ESXi does not automatically fail over to the second interface of the NAS. After adding a second static discovery entry specifying the same target but the second IP address of the NAS, ESXi showed four paths - two to each interface on the NAS:

    mpio-4paths.PNG

    With this configuration, multipathing works as expected. If an interface on the NAS fails, ESXi automatically uses a path to the second interface.

    In fact, it makes sense. At first, I was wondering whether it is correct that I have to manually enter each interface of the target in order to obtain the corresponding paths to it. I first thought that I would just need to enter one IP address of the target and that MPIO would use some kind of magic to automatically discover the paths to the other available interfaces on the NAS. Later, I saw that this is the case when using dynamic discovery instead of static discovery. However, dynamic discovery likewise adds all interfaces of the NAS as entries in the static discovery list. As soon as I delete one of these entries from the list, the corresponding paths are also deleted.

    So is my understanding correct that path information always refers to the corresponding IP address of the target, and that a path will stop working once the corresponding IP address is no longer available?

    Thank you

    Michael

    When you configure access to storage, it is always important to follow the vendor's recommendations. Not only for how to set up the IP configuration, but also for the path policy (e.g. Fixed, MRU, Round Robin).

    Regarding your question, many storage arrays are configured with an additional VIP (virtual IP address) or group IP address, which is then used to configure the dynamic (or other) target discovery on the ESXi hosts. When an initiator connects to such a VIP or group IP, the target responds with all of its associated IP addresses. If this isn't an option for your storage device, you must provide all the target IPs manually (as you did).
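
    For a setup like yours without a VIP, a PowerCLI sketch (host name, portal IPs and the software-adapter filter are placeholder assumptions) that adds both NAS portal IPs as dynamic (send targets) entries and rescans:

    $hba = Get-VMHost "esx01.example.local" | Get-VMHostHba -Type iScsi |
           Where-Object { $_.Model -match "Software" }

    New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.10.21" -Type Send
    New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.10.22" -Type Send

    Get-VMHost "esx01.example.local" | Get-VMHostStorage -RescanAllHba | Out-Null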

    André

  • iSCSI Multipathing - really useful?

    Suppose I have this configuration:

    2 ESXi hosts -> HP DL380 G7 (4 x 1 Gbit NICs + 2 x 10 Gbit NICs)

    1 storage array -> MSA P2000 (2 x 1 Gbit NICs + 4 x 10 Gbit NICs)

    The two hosts will be connected to the storage through the 10 Gbit adapters. Since no 10 Gbit switch is available, the hosts will be DIRECTLY connected to the storage.

    Is multipathing still necessary in this case? I mean, I can understand that multipathing is very useful if you have several paths to reach the storage (multiple switches on different networks, for example). But in this case, if for example the 10 Gbit card on one server is lost, I would lose both paths anyway.

    Am I wrong?

    Are the 4 x 10 GbE NICs on the storage a single physical card? If not, you can cross the cables from the hosts to it and have some protection against a dying storage NIC.

    I read your post as saying that the 2 x 10 GbE NICs on the hosts are a single card; if the storage's 4 x 10 GbE NICs are also a single card, then the only reason for multipathing would be to have access to the additional uplinks for higher iSCSI throughput.

    I would consider using the 1 GbE NICs as backup uplinks to the 10 GbE links to help prevent single points of failure.

  • Network redundancy question

    Hi guys,

    Just a quick question/clarification.

    On the ESX hosts in our environment, all hosts have 8 physical network interface cards.

    Our network is configured so that the IP subnet the Service Console runs on (10.9.8.X) is also our internal network IP subnet.

    The way I have the hosts set up is with two physical network adapters on this network going into the same virtual switch.  For the Service Console, the network adapters are configured as active/standby.  For the internal network, they are configured as active/active.

    What is the correct/preferred way to set this up, not only for Service Console redundancy, but also to have NIC teaming achieve greater bandwidth on our internal network?  Or should I try to use some of the other unused NIC ports and move the Service Console off this IP subnet entirely?

    James

    It is good to have 8 ports.

    A good way to implement an ESX Server is as follows.

    3 ports for the VM network and the Service Console; put them on the same vSwitch and configure VLAN trunking.

    2 ports for vMotion (preferably on a separate VLAN or network)

    3 ports for IP storage, iSCSI/NFS

    I would use teaming on each vSwitch (static LACP)

    I do not know what type of storage you are going with...  The above is a good and redundant setup.
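
    If the physical switch ports are configured for static link aggregation, here is a PowerCLI sketch (host and vSwitch names are placeholders) for setting the matching "Route based on IP hash" teaming policy on a vSwitch:

    $vmhost = Get-VMHost "esx01.example.local"
    Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch2" |
        Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -LoadBalancingPolicy LoadbalanceIP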

    Larry B.

  • Question about the network infrastructure for a vSphere iSCSI SAN

    Hi all

    We are a small company with a small virtualized environment (3 ESX servers) and are about to buy an EMC AX-5 SAN (the Ethernet model, not FC) to implement some of the high availability features of vSphere. My question is related to the networking of the SAN: we have dual Cisco 2960G Gigabit switches and dual Cisco ASA 5510 firewalls in a redundant configuration.

    I understand that the best practice is to put iSCSI traffic on a switch separate from all other LAN traffic. However, I do not have the knowledge and experience to determine what real difference a separate switch would make vs. the plan of creating a separate VLAN on the Cisco switches dedicated to iSCSI traffic only. I would ensure that the iSCSI traffic is on a dedicated VLAN, with no other logical VLANs (subinterfaces) sharing it. It is difficult for me to understand how a gigabit port on an isolated VLAN on the Cisco kit would perform much worse than one on a dedicated Cisco switch. But then again, I don't know what I don't know...

    Thoughts and input would be appreciated here: I'm trying (very) hard not to drop another $6K+ on a second pair of Cisco switches, unless skipping them would significantly compromise iSCSI SAN performance.

    Appreciate your time,

    Rob

    You have 2 SPs, each with 2 iSCSI ports, for example:

    SPA: 10.0.101.1 on VLAN 101 and 10.0.102.1 on VLAN 102

    SPB: 10.0.101.2 on VLAN 101 and 10.0.102.2 on VLAN 102

    On your ESX hosts, create 2 vSwitches, each with a VMkernel port on one of the iSCSI networks and each with a single physical NIC.
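
    A minimal PowerCLI sketch of that layout (vmnic numbers and the host's VMkernel IPs are placeholders): one vSwitch per iSCSI VLAN, each with a single uplink and a single VMkernel port.

    $vmhost = Get-VMHost "esx01.example.local"

    $vs101 = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch-iSCSI-101" -Nic vmnic4
    $vs102 = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch-iSCSI-102" -Nic vmnic5

    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vs101 -PortGroup "iSCSI-VLAN101" -IP 10.0.101.10 -SubnetMask 255.255.255.0
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vs102 -PortGroup "iSCSI-VLAN102" -IP 10.0.102.10 -SubnetMask 255.255.255.0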

    See also:

    http://www.DG.com/microsites/CLARiiON-support/PDF/300-003-807.PDF

    André

  • Can someone explain how to configure iSCSI multipathing with software iSCSI?

    I have two physical network adapters dedicated to an iSCSI vSwitch, and I've created a second VMkernel port, but I don't know what to do next to make sure I have redundancy through multiple paths.

    It depends on your type of storage.

    If it uses two different iSCSI networks (such as the AX, MD3000i, ...), you must set up 2 different vSwitches, one for each network, and add all 4 of your target portals to the iSCSI initiator.

    If it uses a flat iSCSI network (like EqualLogic), you must use 1 vSwitch with at least 2 physical NICs and 2 VMkernel interfaces, and bind each interface to a NIC.

    See page 33 of:

    http://www.VMware.com/PDF/vSphere4/R40/vsp_40_iscsi_san_cfg.PDF

    André
