Guest iSCSI - NLB on vSwitch?

I have a vSwitch with 2 NICs in it.  The vSwitch is dedicated to iSCSI traffic.

I have several guests that access volumes on our SAN using the MS iSCSI initiator within the guest.

One of the guests is my primary file server, which has 2 vNICs for iSCSI.

Which load balancing policy is best to use: Port ID or Source MAC hash?

I have done a ton of reading and I'm still a bit confused about which one I should choose to get the best distribution of guest iSCSI sessions across both of my iSCSI NICs.

(Please note I'm only asking about guest iSCSI traffic; host iSCSI traffic is configured to use MPIO with a NIC explicitly bound to each VMkernel port on the iSCSI vSwitch.)

Thank you

Paul

MAC hash goes back to ESX 1 and 2; it was the default load balancing option, with the thought that, since MAC addresses are unique, there would be a high probability of distributing the load evenly - in practice, that wasn't the case. It was found that the hash did not distribute the load evenly, to the point that you could have a team of three NICs and only one or two would ever be used - that's why Port ID became the default in vSphere. Each vNIC you add to the network is assigned a different port ID and traffic will exit via a different uplink. In terms of performance and the configuration of your physical switches, there is no difference.
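
If you want to check or change the teaming policy from the command line, something like this should do it (a minimal sketch; the vSwitch name vSwitch1 and the port group name "iSCSI Guests" are placeholders for your own):

    # Show the current teaming policy on the iSCSI vSwitch (ESXi shell)
    esxcli network vswitch standard policy failover get -v vSwitch1

    # Route based on originating virtual port ID for the guest iSCSI port group
    esxcli network vswitch standard portgroup policy failover set -p "iSCSI Guests" --load-balancing portid

    # Or, to compare against the source MAC hash policy
    esxcli network vswitch standard portgroup policy failover set -p "iSCSI Guests" --load-balancing mac

With the port ID policy, the file server's two iSCSI vNICs should normally end up pinned to different uplinks.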

Tags: VMware

Similar Questions

  • Guest iSCSI - is a second vNIC a good idea?

    If you are implementing guest operating systems with internal iSCSI initiators, how do you deal with access to the iSCSI network?

    As in, it seems advisable to have a separate iSCSI storage network that guests normally cannot reach. But if some guests must run internal iSCSI initiators to access certain LUNs on the same SAN that the VMFS datastores are on, how should it be set up?

    I am thinking about adding a second virtual network adapter to those guests and putting that vNIC on the storage network port group with the correct VLAN, as it would at least restrict access to the iSCSI network to a small number of guests. Is that a possible solution?

    How would you protect the VMFS iSCSI LUNs from being accessed directly, and possibly destroyed, by the guests? CHAP or a SAN security feature?

    I am thinking about adding a second virtual network adapter to those guests and putting that vNIC on the storage network port group with the correct VLAN, as it would at least restrict access to the iSCSI network to a small number of guests. Is that a possible solution?

    Yes, another vNIC is actually the only way to connect the guest to the storage.

    How would you protect the VMFS iSCSI LUNs from being accessed directly, and possibly destroyed, by the guests? CHAP or a SAN security feature?

    Handle this on the storage side. Depending on the storage, create hosts or host groups and present each LUN only to the guests that should see it.

    André

  • ISCSI, SC, VMotion vswitch config

    I have 2 NICs with 2 VLANs (iSCSI and VMotion) on the same port groups running on each server. I thought the best config would be to put VMotion, iSCSI, and the iSCSI SC on the same vSwitch.  This way ESX can control load balancing/failover across both network adapters. Is it possible to have these 3 networks on the same vSwitch?

    Another setup I had in mind would be to put VMotion on a separate vSwitch and configure an active/standby adapter pair, but I tell myself there would be potentially wasted bandwidth when you are not using VMotion.  Ideas or recommendations would be greatly appreciated.

    Thank you

    Tony

    If you have 2 switches like this:

    vSwitch1 (VMotion):

    NIC A - primary

    NIC B - secondary

    vSwitch2 (iSCSI):

    NIC B - primary

    NIC A - secondary

    This will not work because each physical network adapter can only belong to one vSwitch.  If you have no other physical network cards in the machine, then I would continue with your current setup and use 802.3ad port aggregation (route based on IP hash on the ESX side) with the 2 NICs on 1 vSwitch rather than a failover pair, so that the bandwidth is split across both NICs (active-active instead of active-passive).
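
    A rough sketch of that change from the ESXi shell (the vSwitch and vmnic names are placeholders, and the physical switch ports must be configured as a static 802.3ad/EtherChannel group for IP hash to work):

        # Route based on IP hash with both uplinks active on the shared vSwitch
        esxcli network vswitch standard policy failover set -v vSwitch1 --load-balancing iphash --active-uplinks vmnic0,vmnic1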

    ---

    If you have found any of my answers useful, please consider awarding points for 'Correct' or 'Helpful'. Thank you!!!

    www.beyondvm.com

  • ISCSI in VMware guest - slow speed

    SAN is a PS4100 with dedicated switches.

    In VMware, I have set up a vSwitch according to the Dell EQ documentation. The VMkernel properties are:

    • VLAN ID = None (0)
    • vMotion = unchecked
    • Fault Tolerance Logging = unchecked
    • Management traffic = unchecked
    • iSCSI Port Binding = Enabled
    • MTU = 9000

    NIC Teaming settings tab:

    • Load balancing = unchecked
    • Network failover detection = unchecked
    • Notify switches = unchecked
    • Failback = checked, set to No
    • Failover order = checked, to override the vSwitch failover order

    One adapter is set to Active and the other is placed in Unused. This is reversed for the second iSCSI VMkernel connection.

    The Dell MPIO extension has been added to the VMware host and the connection is configured to use DELL_PSP_EQL_ROUTED for the managed paths to the iSCSI target.

    The managed paths also show active/active status.
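
    For reference, a quick way to double-check the port binding and the path selection policy from the ESXi shell (the adapter name vmhba38 is a placeholder for your software iSCSI adapter):

        # List the VMkernel ports bound to the software iSCSI adapter
        esxcli iscsi networkportal list -A vmhba38

        # Confirm DELL_PSP_EQL_ROUTED is claiming the EqualLogic volume paths
        esxcli storage nmp device list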

    Throughput tests from a physical server through a single iSCSI initiator to a dedicated LUN completely saturate the link at 99 MB/s. All I can manage inside a Server 2012 guest OS with a vmxnet3 NIC, whose LUN is also on that SAN, is around 30 MB/s of throughput.

    Speed drops even further when transferring from a VMware guest OS hosted on a Fibre Channel AX150 SAN, using an Intel Pro 1000 MT network card, to the PS4100 SAN side; it falls to 13 MB/s.

    What have I missed, and where should I look to make changes?

    What should I look at to increase throughput?

    Hello

    There's not much info here to go on. I would start by making sure that the server is configured per the Dell best practices doc.

    en.Community.Dell.com/.../20434601.aspx

    The VMware iSCSI configuration will not affect the speed of a guest iSCSI connection.   Ideally, guest iSCSI traffic should have its own NICs, separate from the NICs ESXi uses for its own iSCSI traffic.

    Make sure that VMware Tools is installed as well.  That will ensure the network driver is optimized.  Running the current version of ESXi is important too.  There have been a few KBs on performance problems with the VMXNET3 adapter.   Sometimes changing to the E1000 might work better.

  • No hosts/physical adapters available to add to Distributed vSwitch

    It's probably something simple I forgot, but I have not been able to figure it out...

    Learning vSphere 5.5 via nested virtualization in VMware Workstation 9. I have many things functional already, to the point where I now have both ESXi hosts and an iSCSI datastore via standard vSwitches, etc. No cluster, but everything works as expected.

    Currently researching distributed vSwitches. I am trying to create one, but I am facing the issue that I see no hosts/physical adapters available to add to the dvSwitch... Nothing appears in the list, nor does anything appear under "View Incompatible Hosts", so I can only continue if I choose "add hosts later." This is through the vSphere Client and/or vSphere Web Client connected to the vCenter Server.

    The two ESXi hosts have 5 network cards each. They were all already in use in vSwitches for VMs, iSCSI, management..., so I decided to add an additional network card to each. I rebooted ESXi, and each host shows its new unused NIC under Configuration > Network Adapters... This changed nothing; they still do not appear when adding hosts to the dvSwitch. I also tried putting them in a standard vSwitch first on each ESXi host, with no difference. Then I thought that maybe you cannot see hosts/physical adapters to add to a dvSwitch until they have been added to a cluster, so I quickly created a cluster and added the two ESXi hosts; no difference. I changed the dvSwitch version to 4.0, 5.5... I tried different settings for the maximum number of physical adapters per host, from 1 to 8... Nothing helps.

    I hope I am right in believing that you can add individual network cards as dvSwitch uplink adapters from hosts that also already have other network cards in use in standard vSwitches?

    Any comments appreciated,

    JH

    -

    Haha... It is even stupider than I thought.

    In order to tackle another issue I had a few days ago, I created an extra datacenter with a very similar name. I moved all the hosts to this new datacenter at some point. But when I tried to create the distributed vSwitch, I accidentally right-clicked the OLD (empty) datacenter. No hosts there to select, so of course the list was empty... every time.

    Can someone slap me hard, please?

    JH

  • UCS iSCSI Boot Jumbo MTU

    Trying to get jumbo MTU working for iSCSI boot.

    UCS 6248S

    IOM 2208

    VIC 1240

    ESXi 5.1 Update 1

    VNX 5500

    I have a few rack mount hosts with 10G network cards that work with jumbo MTU, and I can vmkping the VNX SPs with 'vmkping -d -s 7000 10.1.1.1' and it works.

    On UCS the operating system boots from the VNX via iSCSI.

    The iSCSI network is a pair of Dell Force10 S4810s.

    UCS FI-A and FI-B each have two 10G uplinks in an LACP port channel to the Dell Force10.

    Verified that jumbo MTU is enabled on the Dell side.

    The UCS QoS system class is now set to 9216.

    There is an ESX iSCSI QoS policy mapped to the Gold class.

    The ESX iSCSI policy is mapped to 2 vNIC templates; one on fabric A and one on fabric B.

    If I set the MTU on the vNIC template to 9000, the OS errors at startup with a message about missing boot banks.

    If I set the MTU on the vNIC template to 1500, the host boots fine.

    I set the iSCSI port groups and vSwitch to 9000 in ESX, but I cannot ping the VNX SPs with a packet larger than 1500.

    Attached is a diagram

    Does anyone have any clues as to what I'm missing? Thank you

    Make sure you set the Best Effort system class on UCS to 9216 as well.

    The return traffic is probably not being sent with a CoS value, so UCS treats it as Best Effort, which defaults to an MTU of 1500.
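
    Once the Best Effort class is raised, one way to verify the path end to end from ESXi (vmk1 and the SP address 10.1.1.1 are placeholders; 8972 bytes is the largest ICMP payload that fits in a 9000-byte MTU without fragmentation):

        # Confirm the vSwitch and VMkernel interface MTUs really are 9000
        esxcli network vswitch standard list
        esxcli network ip interface list

        # Send a full-size jumbo frame with "don't fragment" set
        vmkping -d -s 8972 10.1.1.1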

    Joey

  • Virtual machine raw LUN connection - iSCSI Initiator or RDM?

    Hello

    It is early in the morning here, maybe I'm not awake yet, but what is the difference between connecting a raw LUN to a Microsoft Windows virtual machine using an RDM versus installing the Microsoft iSCSI Initiator in the virtual machine and connecting to the LUN that way? Can you snapshot the disk using the in-guest iSCSI initiator route, or only using an RDM in virtual compatibility mode - is that the only difference, or not? How about using MSCS - how will the LUN routing decision affect that?

    Thank you

    Steve

    Hello Steve,

    Snapshots are a function of the hypervisor; only virtual disks and virtual-mode RDMs, which are presented to the virtual machine by the hypervisor, can be snapshotted. Using the iSCSI initiator in the guest OS itself completely bypasses the hypervisor, so you can only run snapshots on the storage side for those LUNs. As far as I know, MSCS with iSCSI is supported by MS; I don't know whether this is also true for a virtual machine using guest iSCSI, however.
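
    To illustrate the RDM side, here is a rough sketch of creating the two RDM flavors with vmkfstools (the device ID and datastore path are placeholders); virtual compatibility mode is the one that allows hypervisor snapshots, while physical compatibility mode is the passthrough flavor typically used for MSCS with shared storage:

        # Virtual compatibility mode RDM (hypervisor snapshots possible)
        vmkfstools -r /vmfs/devices/disks/naa.60060160a0b0c0d0e0f01234 /vmfs/volumes/datastore1/winvm/rdm-virtual.vmdk

        # Physical compatibility mode RDM (passthrough, no hypervisor snapshots)
        vmkfstools -z /vmfs/devices/disks/naa.60060160a0b0c0d0e0f01234 /vmfs/volumes/datastore1/winvm/rdm-physical.vmdk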

    André

  • ISCSI LUN

    Hi all

    Is it possible to have an iSCSI LUN presented over a vSwitch with NIC teaming?

    I read the "Configuring iSCSI Initiators and Storage" manual but could not find an answer to this question.

    My setup would therefore be 1 vSwitch with two NICs teamed for all iSCSI and virtual machine network traffic.

    Thanks in advance.

    Kind regards.

    Yes, NIC teaming is OK for iSCSI, but it is recommended that no more than two network cards be used. See below for more information:

    http://KB.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalID=1001251

  • Question on iSCSI initiators and multiple NICs

    I read the attached document.

    The document talks about creating multiple iSCSI initiators to talk to the SAN (I understand this bit).

    Page 15, step 4: Associate VMkernel ports with physical adapters.

    The document describes 3 network cards assigned to one vSwitch.

    The document then goes on to talk about creating individual bindings so that each VMkernel port uses exactly one network adapter.

    My question is:

    Why do this?

    Why not create three separate vSwitches and create an iSCSI initiator for each vSwitch?

    VCP3 & VCP4 32846

    VSP4

    VTSP4

    Why would you not create a vSwitch for each iSCSI NIC and VMkernel port? Why does the document insist on putting all the NICs in one vSwitch?

    The number and type of vSwitches doesn't matter.  You can use one vSwitch, or as many vSwitches as you have physical NICs.  It works equally well for standard vSwitches or a vDS.  The configuration example is just one of the ways you can do it.  Any combination is supported.
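
    What does matter is that each iSCSI VMkernel port ends up with exactly one active uplink and is bound to the software iSCSI adapter, whichever vSwitch layout you pick. A minimal sketch (the port group, vmk, vmnic, and vmhba names are placeholders):

        # One active uplink per iSCSI port group, regardless of how many vSwitches you use
        esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 --active-uplinks vmnic2
        esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 --active-uplinks vmnic3

        # Bind each iSCSI VMkernel port to the software iSCSI adapter
        esxcli iscsi networkportal add -A vmhba37 -n vmk1
        esxcli iscsi networkportal add -A vmhba37 -n vmk2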

    Andy

  • Best practices for network configuration of vSphere with two subnets?

    Well, I am setting up 3 ESXi hosts connected to shared storage with two different subnets. I configured the iSCSI initiator and the iSCSI targets with their own default gateway - 192.168.1.1 - through a Cisco router, and did the same for the hosts, configured with their own default gateway - 192.168.2.2. I don't know if I should have a router in the middle to route traffic between the two subnets, since I use iSCSI port binding and NIC teaming. If I shouldn't use a physical router, how do I route traffic between the different subnets and use iSCSI port binding at the same time? What are the best practices for implementing a vSphere network with two subnets (ESX host network / iSCSI network)? Thank you in advance.

    The most common iSCSI setup would be that the traffic between the hosts and the storage is not routed, because a router could reduce performance.

    If you have VLAN 10 (192.168.1.0/24) for iSCSI, VLAN 20 (192.168.2.0/24) for ESX MGMT, VLAN 30 (192.168.3.0/24) for guest VMs, and VLAN 40 (192.168.4.0/24) for vMotion, a deployment scenario might be something like:

    NIC1 - vSwitch 0 - MGMT VMK (192.168.2.10) active, vMotion VMK (192.168.4.10) standby

    NIC2 - vSwitch 1 - guest virtual machine port group (VLAN 30) active

    NIC3 - vSwitch 2 - iSCSI VMK1 (192.168.1.10) active

    NIC4 - vSwitch 2 - iSCSI VMK2 (192.168.1.11) active

    NIC5 - vSwitch 1 - guest virtual machine port group (VLAN 30) active

    NIC6 - vSwitch 0 - MGMT VMK (192.168.2.10) standby, vMotion VMK (192.168.4.10) active

    You would place your storage target on VLAN 10 with an IP address of something like 192.168.1.8, and iSCSI traffic would remain on that VLAN. The default gateway configured in ESXi would be the router on VLAN 20, with an IP address of something like 192.168.2.1. I hope that scenario helps lay out some options.
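
    If it helps, here is a rough sketch of creating the two iSCSI VMkernel ports from that example on vSwitch 2 (the port group and vmk names are placeholders; the IPs are the ones from the layout above, and no gateway is set on them since the iSCSI traffic stays on its own VLAN):

        # Create the iSCSI VMkernel interfaces on their port groups
        esxcli network ip interface add -i vmk1 -p iSCSI-1
        esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.1.10 -N 255.255.255.0
        esxcli network ip interface add -i vmk2 -p iSCSI-2
        esxcli network ip interface ipv4 set -i vmk2 -t static -I 192.168.1.11 -N 255.255.255.0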


  • Get the most out of VMNIC

    Hello

    I have an ESXi host with 4 NICs that are not in use that I want to put to work.

    My environment is as follows:

    1. Service Console - 2 NICs (NIC teaming set up as Active/Active).

    Is that best practice, or would I be better off with Active/Standby or Active/Unused?

    2. Production - 2 NICs (NIC teaming configured as Active/Active).

    Production traffic is on its own VLAN.

    Again, is that advisable, or would I be better off with Active/Standby or Active/Unused?

    3. iSCSI (we use HP LeftHand iSCSI storage) - 2 network cards & 2 VMkernel ports configured.

    I have configured VMkernel Port1 with NIC1 active & NIC2 unused

    I have configured VMkernel Port2 with NIC2 active & NIC1 unused

    Does that seem correct and recommended for iSCSI performance?

    4. vMotion - configured with 1 NIC

    So my question is: would my iSCSI or Production vSwitches benefit from having additional VMNICs added to them?

    Thank you guys. I would appreciate your comments on my setup.

    In my opinion, your setup is in accordance with best practices; adding a dedicated NIC for vMotion redundancy, and more redundancy for the Service Console and iSCSI, would be the first improvements to make.
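
    If you do end up using one of the spare NICs for that redundancy, attaching it as an extra uplink is a one-liner from the service console (the vSwitch and vmnic names are placeholders); you would then mark it Active or Standby on the NIC Teaming tab as usual:

        # Attach a spare physical NIC as an extra uplink to the vMotion/Service Console vSwitch
        esxcfg-vswitch -L vmnic4 vSwitch0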

  • Exchange or SQL VMs with HIT Kit snapshots and VSM

    Hello

    I wonder if someone could explain a bit more about the integration of VSM and the HIT Kit in a scenario without guest-based iSCSI, like the one below...

    Exchange and SQL servers run as VMs in a vCenter/vSphere 5.5 environment, and there are no guest iSCSI connections, which means all disks are VMDKs.

    So, based on this scenario, is it possible to use the HIT Kit to restore data (at a granular level) even if the snapshot was taken with the VSM datastore option?

    Thanks in advance...

    Hello

    No, you cannot use HIT/ME in this case.   The HIT Kit requires the MS iSCSI initiator to work properly, for both creating and restoring snapshots.

    VSM works against the whole VMFS datastore.  VSM first asks vCenter to create VMware snapshots of the VMs (VMDKs), then requests the EQL volume snapshot.

    VSM does not currently offer file-level restore.  However, you can do it manually:

    Bring the EQL data store snapshot online.

    Connect to an ESXi node and add that datastore snapshot to the node.

    (Via Add Storage, after a rescan of the ESXi iSCSI initiator.)

    This renames the datastore snapshot to something like: "snapshot - XXXXXX -
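
    If you prefer the command line for the rescan/Add Storage step above, a rough sketch (the volume label is a placeholder); the resignature is what produces the "snapshot - XXXXXX -" name:

        # Rescan so the EQL snapshot volume is discovered
        esxcli storage core adapter rescan --all

        # List unresolved VMFS snapshot volumes, then resignature/mount the one you want
        esxcli storage vmfs snapshot list
        esxcli storage vmfs snapshot resignature -l MyEqlDatastore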

    You can do a few things here.  Copy the entire original virtual disk from the snapshot to the datastore, or, more commonly, add the VMDK file from the snapshot to another virtual machine as an additional drive.

    When you start that virtual machine, it will appear as a new drive letter.  Then you can copy individual files from that copy.

    An important thing to note: when you use a snapshot, you will consume snapshot reserve.  So if you are not already at 100% snapshot reserve, make sure you increase it as much as possible.  If you consume all of the snapshot reserve, the snapshot you are working with gets removed.  Depending on the EQL firmware version, you can borrow from free space.  This is especially true if you copy a whole VMDK set - that will use up the reserve very quickly...

    When you are finished with the EQL snapshot, first make sure that no VMs are still using it.  Then take the EQL snapshot offline in the EQL GUI, and then do a rescan of the iSCSI initiator in the ESXi GUI.  This will cleanly remove the volume from the ESXi node.  Otherwise it might constantly try to reconnect to the EQL snapshot, generating repeated errors in the EQL GUI and alerts, if you have those set up.

    Kind regards

  • Replicate CSV volumes with the virtual machines in Hyper-V via ASM?

    Hello

    We are migrating from VMware to Hyper-V. On VMware, we replicated some VMFS volumes using the EqualLogic HIT.

    On Hyper-V, I can replicate the Hyper-V VMs, but they may not have guest iSCSI.

    Since we want guest iSCSI features, I'm trying to replicate the CSV volumes.

    Is this supported? Is there a document that describes the configuration and usage?

    Hello

    With Windows 2012 and 2012 R2, there is a problem that currently prevents HIT/ME from replicating CSV volumes.   Until it's resolved, you need to schedule the replication of CSV volumes via the group (array) user interface.  I think this is mentioned in the HIT/ME v4.7.x documentation.

    Kind regards

  • Drive firmware update on PS6110

    I have read the documentation for the drive firmware upgrade, and my only other question is what effect it has on any existing host or guest iSCSI connections.

    Will any connections lose connectivity?

    Does the array or its controller restart?

    Does the drive firmware process occur in the background while the array continues to operate normally?

    Thank you!

    To give you an answer: there is no controller module or array restart. The script updates one disk after another, and for an array with 12 disks it takes less than a minute of run time.

    Kind regards

    Joerg

  • Fault Tolerance with multiple NICs

    Apologies if this has been asked before.

    I am setting up a vSphere 5.5 environment and I have 1 GB NICs on each host, with two NICs per host to devote to Fault Tolerance.  I was wondering the best way to set this up for maximum throughput.  Should I implement it like Multi-NIC vMotion, as noted here: VMware KB: Multiple-NIC vMotion in vSphere 5 - essentially using two port groups such as FT-1 and FT-2?  Or do I need to do it like Multi-NIC iSCSI, with separate vSwitches and each vSwitch with one port group?

    Thanks in advance.

    Are you running into a throughput problem when using 1 NIC for FT?

    Currently FT only supports using a single network adapter (1G or 10G).
