iSCSI MPIO + Jumbos

I see lots of information about iSCSI MPIO and jumbo frame support with the software iSCSI initiator in vSphere 4.0, but not much about iSCSI HBAs with MPIO + jumbo support. Am I missing something? Do the HBAs have access to the same improvements?

Thank you

HBAs already supported these features in ESX 3.5.

And multipathing also existed in ESX 3.x.

The 'new' storage stack also accepts new third-party modules (Enterprise Plus edition only), so you can have the vendor's native multipathing.

Right now there is the EMC PowerPath/VE module (which I suppose could also be used for iSCSI) and a beta version of the Dell EqualLogic MPIO module.

As for jumbo frames, vSphere now also supports them for software iSCSI and NFS, so you can enable and use them.
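
For the software initiator this is enabled per vSwitch and per VMkernel port from the CLI. A minimal sketch for ESX/ESXi 4.x (the vSwitch name, port group name and addresses are placeholders; note that in 4.0 a VMkernel NIC has to be recreated to change its MTU):

esxcfg-vswitch -m 9000 vSwitch1                                       # raise the vSwitch MTU
esxcfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 -m 9000 "iSCSI-1"   # recreate the vmknic with a 9000 MTU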

10 GbE adapters are also now supported.

André

Tags: VMware

Similar Questions

  • iSCSI MPIO (Multipath) with Nexus 1000v

    Does anyone out there have iSCSI MPIO working successfully with the Nexus 1000v? I followed Cisco's guide to the best of my knowledge and tried a number of other configurations without success; vSphere always displays the same number of paths as it shows targets.

    The Cisco document reads as follows:

    Before you begin the procedures in this section, you must know or do the following:

    • You have already configured the host with a port channel that includes two or more physical NICs.

    • You have already created the VMkernel NICs used to access the external SAN storage.

    • A VMkernel NIC can be pinned or assigned to a physical NIC.

    • A physical NIC can have several VMkernel NICs pinned or assigned to it.

    What does 'a VMkernel NIC can be pinned or assigned to a physical NIC' mean with regard to the Nexus 1000v? I know how to pin to a physical NIC with the standard vDS, but how does it work with the 1000v? The only thing associated with 'pinning' I could find in the 1000v was port channel subgroups. I tried creating a port channel with manual subgroups, assigning sub-group-id values to each uplink, and then assigning a pinning id to my two VMkernel port profiles (and directly to the vEthernet ports as well), but that doesn't seem to work for me.

    I can ping both iSCSI VMkernel ports from the upstream switch and from inside the VSM, so I know Layer 3 connectivity is there. One strange thing, however, is that I only see one of the two VMkernel MAC addresses on the upstream switch. Both addresses show up inside the VSM.

    What am I missing here?

    Just to close the loop in case someone stumbles across this thread.

    It turned out to be a bug in the Cisco Nexus 1000v. The bug only manifests on ESX hosts with a change that was made in 4.0U2 (and now 4.1). The short-term workaround is to rev back to 4.0U1; the medium-term fix will be integrated into a maintenance release of the Nexus 1000V.

    Our code implementation for getting iSCSI multipath information was incorrect but tolerated in 4.0U1; 4.0U2 no longer tolerates our poor implementation.

    For iSCSI multipath with the N1KV, stay on 4.0U1 until we have a maintenance release for the Nexus 1000V.

  • Mapping iSCSI session IDs to MPIO paths

    Hello

    I'm experimenting with iSCSI and MPIO. I get the iSCSI connection information from the "Get-IscsiConnection" cmdlet, which gives the target portals the initiator is connected to. Then I have the mpclaim -v command, which gives me the current state of the paths. In my case, I have one active/optimized path and the others are standby paths. This MPIO path state information is reported by path ID. I want a way to find out which connection/target portal a given path ID corresponds to. In the GUI, the MPIO tab of the iSCSI initiator window has this information. Is there a way to get this info through PowerShell?

    The mpclaim -v output for reference:

    MPIO Storage Snapshot on Tuesday, 05 May 2009, at 14:51:45.023
    Registered DSMs: 1
    ================
    +--------------------------------|-------------------|----|----|----|---|-----+
    |DSM Name                        |      Version      |PRP | RC | RI |PVP| PVE |
    |--------------------------------|-------------------|----|----|----|---|-----|
    |Microsoft DSM                   |006.0001.07100.0000|0020|0003|0001|030|False|
    +--------------------------------|-------------------|----|----|----|---|-----+
    
    Microsoft DSM
    =============
    MPIO Disk1: 02 Paths, Round Robin, ALUA Not Supported
            SN: 600D310010B00000000011
            Supported Load-Balancing Policy Settings: FOO RR RRWS LQD WP LB
    
        Path ID          State              SCSI Address      Weight
        ---------------------------------------------------------------------------
        0000000077030002 Active/Optimized   003|000|002|000   0
            Adapter: Microsoft iSCSI Initiator...              (B|D|F: 000|000|000)
            Controller: 46616B65436F6E74726F6C6C6572 (State: Active)
    
        0000000077030001 Active/Optimized   003|000|001|000   0
            Adapter: Microsoft iSCSI Initiator...              (B|D|F: 000|000|000)
            Controller: 46616B65436F6E74726F6C6C6572 (State: Active)
    
    MPIO Disk0: 01 Paths, Round Robin, ALUA Not Supported
            SN: 600EB37614EBCE8000000044
            Supported Load-Balancing Policy Settings: FOO RR RRWS LQD WP LB
    
        Path ID          State              SCSI Address      Weight
        ---------------------------------------------------------------------------
        0000000077030000 Active/Optimized   003|000|000|000   0
            Adapter: Microsoft iSCSI Initiator...              (B|D|F: 000|000|000)
            Controller: 46616B65436F6E74726F6C6C6572 (State: Active)
    
    Microsoft DSM-wide default load-balancing policy settings: Round Robin
    
    No target-level default load-balancing policy settings have been set.

    The iSCSI connection and session info for reference:
    
    PS C:\> Get-IscsiConnection

    ConnectionIdentifier : ffffe001e67f4020-29f
    InitiatorAddress     : 0.0.0.0
    InitiatorPortNumber  : 44996
    TargetAddress        : 10.120.34.12
    TargetPortNumber     : 3260
    PSComputerName       :

    ConnectionIdentifier : ffffe001e67f4020-2a0
    InitiatorAddress     : 0.0.0.0
    InitiatorPortNumber  : 46020
    TargetAddress        : 10.120.34.13
    TargetPortNumber     : 3260
    PSComputerName       :

    ConnectionIdentifier : ffffe001e67f4020-2a1
    InitiatorAddress     : 0.0.0.0
    InitiatorPortNumber  : 47044
    TargetAddress        : 10.120.34.14
    TargetPortNumber     : 3260
    PSComputerName       :

    ConnectionIdentifier : ffffe001e67f4020-2a2
    InitiatorAddress     : 0.0.0.0
    InitiatorPortNumber  : 46788
    TargetAddress        : 10.120.34.15
    TargetPortNumber     : 3260
    PSComputerName       :

    PS C:\>

    I basically want to know which target portal the path ID "0000000077030002" corresponds to.
    

    Hello

    Please post your question on the TechNet forums:

    Here is the link:

    https://social.technet.microsoft.com/forums/Windows/en-us/home?category=w7itpro

    Kind regards
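
    For what it's worth, a minimal PowerShell sketch that pairs each iSCSI session with its target portal (these cmdlets do not expose the MPIO path ID itself, so this only gives you the session-to-portal half of the correlation):

    # List each iSCSI session together with the portal its connection points at
    Get-IscsiSession | ForEach-Object {
        $conn = $_ | Get-IscsiConnection
        [PSCustomObject]@{
            SessionId    = $_.SessionIdentifier
            TargetIqn    = $_.TargetNodeAddress
            TargetPortal = "$($conn.TargetAddress):$($conn.TargetPortNumber)"
        }
    }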

  • vMotion, iSCSI and jumbo frames question

    Hello

    I have an ESXi 4.1 cluster using vCenter and an MD3200i SAN.  I'm doing an upgrade to optimize the installation, which includes installing four NIC ports in the hosts to be used for iSCSI.  I'm debating whether to enable jumbo frames or not, but if I do, I only want to do it for iSCSI, at least for now.  Currently, the iSCSI NICs on the hosts and the MD3200i iSCSI connections are connected to two separate switches, although I have seven other switches in the building connected to PCs, printers, etc.  Do I only turn jumbo frames on for the two switches that iSCSI is connected to, or do I have to enable jumbo frames on all my switches?  There is one Dell 2724 and eight Dell 6224Ps.  iSCSI is connected to the 2724 and one of the 6224Ps.

    The two switches that are connected to the iSCSI ports are not dedicated iSCSI switches; they have other devices on them, and I can only enable jumbo frames globally on the switch.  Would there be any problem enabling jumbo frames on the switches when not all connected devices are configured for them?

    Another question about vMotion: I will create four VMkernel ports on each host for iSCSI.  Currently, there is also a VMkernel port configured for management and vMotion traffic.  Is it possible to set up each of the four iSCSI VMkernel ports for vMotion, or should I leave vMotion configured on the management VMkernel port?

    Thank you!

    Sure. If you're going to enable jumbo frames, you need to have dedicated switches for that, or a switch that can have different MTU settings on different ports.

  • iSCSI without Jumbo frames for 15 Virtual Machines

    I have a P2V project to implement where the SAN hardware has already been purchased. Researching the resources, I see I have an HP MSA2324i array, two HP DL380 G6 servers and two HP 2512 switches, but I found that this switch does not support jumbo frames.

    This means that I'll have to stick to the default 1500 bytes for iSCSI Ethernet frames, and I worry about how this may affect the performance of the storage network.  I have 15 servers to migrate, and among them I have both SQL Server 2005 and an Exchange 2010 mailbox role for 100 users.

    My thought is that I have a really light workload: the SQL Server databases only serve about 20 users each, and I should be able to keep these virtual machines running efficiently without the use of jumbo frames.

    Does anyone have experience with vSphere 4.1 running iSCSI over standard Ethernet frames?

    Cheers

    Kyle

    I haven't upgraded to 4.1 yet, but we ran standard Ethernet iSCSI on 3.5 for several different machines and never encountered any problems.    vSphere is now out with jumbo frame support, but I did not notice a huge difference.   I think you'll be fine, but you should definitely keep an eye on things.

    Here's a nice blog post on setting up iSCSI for vSphere environments: http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html

    If you have found this or any other post useful, please consider using the helpful/correct buttons to award points.

  • Enabling jumbo frames for iSCSI

    Hi everyone

    I want to just make sure I'm on the right track here.

    In theory, I should be able to enable 9000-byte jumbo frames on my switch and SAN without affecting guests. The scenario would be:

    • Shut down all ESXi hosts
    • Reconfigure the SAN and switches to support jumbo frames
    • Power the ESXi hosts back on

    Basically, I want to make sure that the switches and SAN work correctly with the change before changing anything on the hosts. Ideally, if all goes well and the hosts are stable (vmkping -d -s 1500 x.x.x.x works and VMs run), I'll then change the MTU on the VMkernel ports and test the hosts again.

    Is that good in theory?

    Sounds good to me, although it is not necessary to shut down the ESXi hosts. Shutting down the virtual machines should be sufficient.

    A good post on troubleshooting jumbo frames: http://rickardnobel.se/troubleshoot-jumbo-frames-with-vmkping/
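
    For reference, the usual end-to-end test from the host after raising the MTU looks like this (x.x.x.x being your iSCSI target portal, as above):

    # 8972-byte payload + 28 bytes of IP/ICMP headers = 9000; -d forbids fragmentation
    vmkping -d -s 8972 x.x.x.x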

    André

  • MPIO - iSCSI - GbE links / IOPS

    Am I correct in saying:

    iSCSI MPIO will not increase the IOPS for a given machine unless your Ethernet connection is saturated?

    MPIO does not increase performance per se, but gives you a "bigger pipe" to put several virtual machines on?

    From my scavenging on the net, I find there are TWO ways to enable MPIO:

    (1) Add a second physical NIC and assign it to your vSwitch. Create a second VMkernel port, override the load balancing options and set one active NIC per vmk. Add the two vmks to the iSCSI HBA, and enable Round Robin in the path management for the datastore. This shows two paths (see the CLI sketch after option 2).

    or

    (2) Add the second NIC to the vSwitch and let both NICs use the single vmk. On the storage system, create two targets that map to the same LUN. This shows two paths.
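
    (For reference, the VMkernel-to-HBA binding described in option 1 is done with the ESX/ESXi 4.x CLI roughly as follows; the vmk and vmhba numbers are examples:)

    esxcli swiscsi nic add -n vmk1 -d vmhba33   # bind the first VMkernel port to the software iSCSI HBA
    esxcli swiscsi nic add -n vmk2 -d vmhba33   # bind the second VMkernel port
    esxcli swiscsi nic list -d vmhba33          # verify both vmks are bound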

    Does it matter which way MPIO is configured, as long as it shows several paths?

    Out of curiosity I configured ESXi using the first way, then created a second target with the same LUN number on the storage. ESXi shows that I have 4 paths to storage. A path is a link, right? So 4 paths would be unnecessary unless I had 4 links and very high demand?

    I read that it takes about 14,000 IOPS to saturate a gigabit link. Does this mean that, since my storage system cannot achieve that in any real-world scenario, MPIO is a waste of time?

    Thank you to everyone who takes the time to respond!

    You are confusing several things in your argument.

    Let's start with IOPS. To perform an I/O, mechanical components on a disk are involved, so the number of I/Os a disk can execute is very limited.

    For this example let's assume that a disk can do 100 I/Os per second (some do less, some more) and that you have 10 disks at hand. This means that your storage hardware can do 1000 IOPS. If you invest in a real storage array, the intelligence of its implementation, with caching and controllers, can get you more out of your drives as long as your load is a good target for caching, but let's ignore that option here.

    How you attach your storage (FC, iSCSI, ...) to an ESX server changes nothing about the number of IOPS your disks can do.

    However, it influences the bandwidth you can achieve.

    Suppose that your storage consumer is a Windows NTFS machine. NTFS uses a typical 4 KB block size. That calculates to a maximum of 4 KB x 1000 = 4 MB/s of bandwidth, which can easily be carried on a 100 Mbit link, and you will see this throughput if your consumer is doing sequential I/O only.

    Now suppose that your ESX server has a VMFS volume formatted with a 1 MB block size. While fully loading your 10 disks, you could reach 1 MB x 1000 = 1 GB/s.

    Again, this is only possible if your VMFS did sequential writes only, which it does not.

    When you run virtual machines, the VM's operating system will write in its typical file block size (i.e. 4 KB on Windows), so the underlying VMFS will buffer a bit in its own cache and access the disks in shuffle mode. Thus, you should expect a lot less bandwidth to be used. In many cases a single gigabit link will not be saturated.

    MPIO is a function which can achieve:

    - Path redundancy

    - Additional bandwidth and IOPS in some cases

    Whether MPIO can also add to the number of IOPS depends on the type of controller you have on the storage side. If you have a single controller with different frontend ports, the IOPS gain should not be dramatic. With several controllers, individual caches and mutual paths to the disks, you can achieve additional IOPS if you get a good cache hit rate.

    How many paths you can use with MPIO depends on what your storage can do. With an active-passive connection you achieve redundancy but NO increase in IOPS or bandwidth. Only an active-active array will increase the bandwidth and, possibly, the number of IOPS you can achieve.

    With ESX4 you can use up to 8 paths, if parallel controller access is something your storage array supports.

  • Hyper-V and iSCSI network

    Hello

    We are evaluating a migration from VMware to Hyper-V.

    I'm trying to understand best practices for guest iSCSI networking.

    I have 4 dedicated 1 Gbit physical ports on the host for iSCSI traffic.

    I'd like to use all 4 for host iSCSI traffic (VHDX volumes).

    Now I thought of sharing 2 of them by creating 2 logical switches in VMM and adding 2 virtual network adapters for the host to use.

    The new virtual network adapters show up as 10 Gbit, and I don't see an option to change them to 1 Gbit. It seems the system now prefers the 10 Gb adapters; my other two physical NICs are no longer used.

    I tried making all 4 ports virtual, but somehow ASM 4.7 EPA does not see virtual adapters. It just says "no network adapters found" when opening the MPIO settings.

    Should I just drop this idea of sharing and use 2 for the host and 2 for guest iSCSI, or is there a workaround?

    It is recommended to dedicate at least 2 interfaces on the host to the iSCSI network.  In addition, you must install the Dell EqualLogic Host Integration Tools for Microsoft and install the MPIO feature.  To enable MPIO in the guest operating system, you must create at least two virtual switches that are bound to the physical SAN adapters on the Hyper-V host.  Virtual machines must be configured with network adapters on at least two of these virtual switches.  Then, from the guest operating system, configure the iSCSI network interfaces with IP, subnet, etc.  You must also install the Dell EqualLogic Host Integration Tools for Microsoft and the MPIO DSM feature in the guest operating system, if it is running Windows.  If you use jumbo frames, ensure that all network adapters used for iSCSI (physical NICs, virtual NICs, guest OS NICs) have jumbo frames enabled.
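
    For reference, on a Windows Server 2012 R2 host or guest the base MPIO plumbing (on top of which the EqualLogic HIT kit installs its own DSM) can be set up roughly like this from PowerShell:

    Install-WindowsFeature -Name Multipath-IO        # add the MPIO feature
    Enable-MSDSMAutomaticClaim -BusType iSCSI        # have the Microsoft DSM claim iSCSI LUNs
    mpclaim -s -d                                    # list MPIO disks to verify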

    In regards to ASM v4.7 EPA not seeing network adapters for MPIO: there is a known ASM/ME v4.7 bug on Windows Server 2012 R2 related to the EPA.  It is likely that the MPIO configuration is fine (you can check it through the EqualLogic MPIO tab of the Microsoft iSCSI initiator); it's just that ASM/ME has a display problem.  This bug has been fixed in the v4.7 GA release of HIT/Microsoft, which is expected to be published very soon.

  • NFS/iSCSI VMkernel ports - different VLANs?

    I have a question: if you already have a VMkernel port defined for NFS (in vlanX), and you want to set up iSCSI VMkernel port(s) on the same physical network adapter, would you give the iSCSI port the same VLAN as NFS, or vlanY for iSCSI?

    If you have found this or any other answer useful, please consider awarding points using the helpful or correct buttons.

    I would create different VMkernel ports (and VLANs) for the different types of traffic.  It's simple, and it will stand the test of time and changes in your iSCSI environment.  You can add network cards later, and you can separate the iSCSI network at the physical switch.

    My situation is a little different from yours: I have NFS coming in through vPC on Nexus 2148s (with the 1000V) and iSCSI traffic via 3750s (with the 1000V).  The NFS traffic uses vPC and the iSCSI traffic uses MAC pinning and iSCSI MPIO.  Very different profiles.  A while back I would have found myself in a situation similar to yours, and had I taken the simple approach of sharing the same VLAN, I would be regretting and untangling it right about now.

    Andrew.

  • Dell EqualLogic SAN uses only some, not all, of the configured NICs

    I have 1 virtual server (in Hyper-V) with 4 NICs. The server OS is Win2008R2 SP1 with EQL ASM version 4.5. The LAN NIC is excluded by subnet in the Auto-Snapshot Manager/MPIO settings.

    The 3 SAN adapters are in a different subnet and are "included". The 3rd NIC has no active sessions to the storage group. The iSCSI MPIO connections tab confirms it is authorized to connect to the SAN. Instead of 3 connections to the volume on 3 NICs, it makes 2 connections on 1, etc.

    I tried pinging from the EQL to the IP configured on that particular NIC and got no response. The other 2 IPs answer ping requests. I checked the Hyper-V host; the 3 NICs are in the same VLAN. This virtual server is the only one connected to the SAN with this problem.

    Any ideas on why, and how to solve this problem?

    I was talking about the specific cable and port associated with this NIC; there could also be a bad network card.

  • ESXi 5.5 / StarWind disk identity

    An ESXi host connected to StarWind v6 via iSCSI MPIO.

    I upgraded to StarWind v8.  The update went well... all the existing disks/targets remained and the ESXi host had all volumes mounted.

    At the same time as the upgrade, I wanted to change the RAM cache in StarWind for each of my virtual devices.  Unfortunately StarWind does not have this built in for existing devices. I found a reference to editing the .swdsk file to set the cache; however, my upgraded StarWind did not use this virtual disk format.

    So... I deleted an existing virtual device without deleting the associated image file, re-created the StarWind device/virtual disk, and attached the existing image file.  I re-connected it to ESXi iSCSI.  I first tested on a small LUN.  Everything seemed fine, so I did it for all my StarWind devices.  Then, after an ESXi restart, problems.

    No volumes mounted.  I determined through experimenting that ESXi was seeing them as snapshots.  If I mounted a volume without resignaturing it, it disappeared again on restart.

    So I went through the resignaturing process via SSH:

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1011387

    esxcfg-volume -l                   # list VMFS volumes detected as snapshot copies

    esxcfg-volume -r VMFS_UUID|label   # resignature a copy so it can be mounted persistently


    I completely removed all existing references to the "old" volume names,

    renamed all the volumes mounted as snap-* back to my original names,

    and re-registered all the machines.

    Finally the volumes stayed mounted after an ESXi restart.

    This was a lot of unnecessary work that I do not want to repeat, and I want to know more about why it happened.  It is perhaps a matter for the StarWind folks...

    What exactly changed about the "volume" from ESXi's point of view? Clearly I created a new StarWind device, but pointed it at an existing image file.  The actual VMFS volume inside this image file did not change at all.  What happened at the iSCSI level that made this look different to ESXi?


    Thank you.

    Hello

    When ESXi connects to an iSCSI LUN, it identifies it using several attributes:

    1. size

    2. vendor ID

    3. product ID

    4. UUID

    5. serial ID

    If the above do not match the original device, you get the situation you describe in the original post.
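
    For reference, most of these identifiers can be inspected from the ESXi shell; the naa name below is a placeholder:

    # shows vendor, model, UUID (naa identifier) and other identity attributes for one device
    esxcli storage core device list -d naa.60003ff44dc75adc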

    I think it happened when you were adding the StarWind LU back in the management console.

    You selected the img file, and StarWind first threw a warning saying that it is an existing device. If you chose to use the img file anyway, StarWind resignatured the device by generating a new UUID and serial number.

    In the future, please use the swdsk header when adding previously created devices, which prevents the UUID and serial ID from changing.

  • I want to use MPIO with the MS iSCSI initiator. How do I get/install MPIO support and MSDSM on Windows 7?

    How do I enable MPIO with the MS iSCSI Initiator and MS DSM on Windows 7?

    Hi Parag Blangy,

    Thank you for visiting Microsoft Answers.

    As this problem is related to MS iSCSI initiator MPIO, it will be better suited to the TechNet community.

    Please visit the link below to find a community that will provide the best support.

    http://social.technet.microsoft.com/forums/en-us/category/windowsvistaitpro

    Installing and configuring the Microsoft iSCSI Initiator:

    http://technet.microsoft.com/en-us/library/ee338480(WS.10).aspx

    Microsoft iSCSI Software Initiator Version 2.X user's guide:

    http://download.microsoft.com/download/A/E/9/AE91DEA1-66D9-417C-ADE4-92D824B871AF/uGuide.doc

    Microsoft Multipath I/O: frequently asked questions:

    http://www.microsoft.com/windowsserver2003/technologies/storage/mpio/faq.mspx

    Kind regards
    Amal - Microsoft Support.
    Visit our Microsoft Answers feedback forum and let us know what you think.

  • Port groups, vSphere 5 and jumbo (iSCSI) frames

    We are implementing a UCS system with EMC iSCSI storage. Since this is my first time, I'm a little unsure about the design, although I have acquired a lot of knowledge reading this forum and elsewhere.

    We will use the 1000V.

    1. Is it acceptable to use only one uplink port profile with the following traffic types: mgmt, vmotion, iscsi, vm network, external network?

    My confusion here is about jumbo frames. Shouldn't we have a separate uplink for those? In this design all traffic types would be using jumbo frames (or is this set per port group?)

    I read something about using a Class of Service for jumbo frames. Maybe that's the idea here.

    2. I read in a thread not to include mgmt and vMotion in the 1000V, but to put them on a standard vSwitch. Is this correct?

    In this case, the uplink design would be:

    1: Mgmt + vMotion (2 vNICs, VSS)

    2: iSCSI (2 vNICs, 1000v)

    3: VM data, external traffic (2 vNICs, 1000v)

    All NICs set as active, Virtual Port ID teaming

    Answers inline.

    Kind regards

    Robert

    Atle Dale wrote:

    I have 2 follow-up questions:

    1. What is the reason I cannot use a 1000V uplink profile for vMotion and management? Is it just for simplicity that people do it that way? Or can I do it if I want? What do you do?

    [Robert] There is no reason.  Many customers run all their virtual networking on the 1000v.  This way they don't need vmware admins to manage virtual switches - keeps it all in the hands of the networking team where it belongs.  Management Port profiles should be set as "system vlans" to ensure access to manage your hosts is always forwarding.  With the 1000v you can also leverage CBWFQ which can auto-classify traffic types such as "Management", "Vmotion", "1000v Control", "IP Storage" etc.

    2. Shouldn't I use MTU size 9216?

    [Robert] UCS supports up to 9000 plus assumed overhead.  Depending on the switch you'll want to set it at either 9000 or 9216 (whichever it supports).

    3. How do I do this step: "Ensure the switch north of the UCS Interconnects is marking the iSCSI target return traffic with the same CoS marking as UCS has configured for jumbo MTU.  You can use one of the other available classes on UCS for this - Bronze, Silver, Gold, Platinum."

    Does the Cisco switch also use the same terms "Bronze", "Silver", "Gold" or "Platinum" for the classes? Should I configure the trunk with the same CoS values?

    [Robert] The Plat, Gold, Silver and Bronze are user-friendly names used in UCS Classes of Service to represent a definable CoS value between 0 and 7 (where 0 is the lowest value and 6 is the highest usable value). CoS 7 is reserved for internal traffic. A CoS value of "any" equals best effort.  Weight values range from 1 to 10. The bandwidth percentage can be determined by adding the channel weights for all channels, then dividing the channel weight you wish to calculate the percentage for by the sum of all weights.

    Example:  You have UCS and an upstream N5K with your iSCSI target directly connected to an N5K interface. If your vNICs were assigned a QoS policy using "Silver" (which has a default CoS 2 value), then you would want to do the same upstream by configuring the N5K system MTU of 9216 and tagging all traffic from the iSCSI array target's interface with CoS 2.  The specifics of configuring the switch depend on the model and SW version; the N5K is different from the N7K, and both differ from IOS.  Configuring jumbo frames and CoS marking is pretty well documented all over.

    Once UCS receives the traffic with the appropriate CoS marking, it will honor the QoS and put the traffic back into the Silver queue. This is the "best" way to configure it, but I find most people just end up changing the "Best Effort" class to 9000 MTU for simplicity's sake, which doesn't require any upstream tinkering with CoS marking.  Just enable jumbo MTU support upstream.
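
    (For illustration, a minimal jumbo-MTU sketch in N5K NX-OS syntax; the policy name is arbitrary:)

    policy-map type network-qos jumbo
      class type network-qos class-default
        mtu 9216
    system qos
      service-policy type network-qos jumbo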

    4. Concerning the N1KV: Jason Nash has said to include vMotion in the system VLANs. You do not recommend this in previous threads. Why?

    [Robert] You have to understand what a system vlan is first.  I've tirelessly explained this in various posts.  System VLANs allow an interface to always be forwarding.  You can't shut down a system vlan interface.  Also, when a VEM is rebooted, a system vlan interface will be FWDing before the VEM attaches to the VSM to securely retrieve its programming.  Think of the Chicken & Egg scenario.  You have to be able to FWD some traffic in order to reach the VSM in the first place - so we allow a very small subset of interfaces to FWD before the VSM sends the VEM's programming - Management, IP Storage and Control/Packet only.  All other non-system VLANs are rightfully BLKing until the VSM passes the VEM its policy.  This secures interfaces from sending traffic in the event any port profiles or policies have changed since the last reboot or module insertion.  Now keeping all this in mind, can you tell me the instance where you've just rebooted your ESX host and need the VMotion interface forwarding traffic BEFORE communicating with the VSM?  If the VSM was not reachable (or both VSMs down), the VM's virtual interface would not even be able to be created on the receiving VEM.  Any virtual ports moved or created require VSM & VEM communication.  So no, the vMotion interface vlans do NOT need to be set as system VLANs.  There's also a max of 16 port profiles that can have system vlans defined, so why chew up one unnecessarily?

    5. Do I have to set spanning-tree commands and enable global BPDU Filter/Guard on both the 1000V side and the uplink switch?

    [Robert] The VSM doesn't participate in STP so it will never send BPDUs.  However, since VMs can act like bridges & routers these days, we advise adding two commands to your upstream VEM uplinks - PortFast and BPDUFilter.  PortFast so the interface goes to forwarding faster (since there's no STP on the VSM anyway) and BPDUFilter to ignore any BPDUs received from VMs.  I prefer ignoring them to using BPDU Guard, which will shut down the interface if BPDUs are received.
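
    (For illustration, on an IOS upstream switch those two commands look like this per VEM uplink interface; the interface name is an example:)

    interface GigabitEthernet1/0/1
      spanning-tree portfast trunk     ! skip listening/learning on this trunk uplink
      spanning-tree bpdufilter enable  ! drop any BPDUs received from VMs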

    Thanks,

    Atle, Norway

    Edit:

    Do you have any recommendations on the weighting of the CoS?

    [Robert] I don't personally.  Other customers can chime in with their suggestions, but each environment is different.  VMotion is very bursty so I wouldn't set that too high.  IP storage is critical so I would bump that up a bit.  The rest is up to you.  See how it works, check your QoS & CoS verification commands to monitor, and adjust your settings as required.

    E.g:

    IP storage: 35

    vMotion: 35

    VM data: 30

    and I can then assign the management VMkernel interfaces to the VM data CoS.

    Message was edited by: Atle Dale

  • iSCSI with MPIO

    I need to configure the virtual switch in VMware for an iSCSI array that does not support LACP or aggregation of any kind.

    There are 4 ports on the array that can be used.

    • How many cables should I use to my 7K array?
    • Should I use MPIO since LACP is not an option?
    • How many IP addresses should I configure on my array for iSCSI?

    I have never used LACP for iSCSI connectivity; I don't even know whether it is supported by VMware! Anyway, basically you will configure an IP address for each target port and an IP address for each initiator VMkernel on one or more ESXi hosts. You may need to double-check the array's documentation to see whether, for example, a VIP (virtual IP) is to be used for the target, and whether the IP addresses should be in the same or in different subnets. If the storage system supports the Round Robin path policy, then make sure you set this for the LUNs to take advantage of the available bandwidth.
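
    For example, on ESXi the Round Robin policy can be set per LUN from the CLI (the naa identifier below is a placeholder):

    esxcli storage nmp device set --device naa.6006016047301a00 --psp VMW_PSP_RR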

    André

  • How do I activate jumbo frames for iSCSI traffic?

    Hi all

    I have my iSCSI traffic separated, coming out of a dedicated NIC on its own VLAN.

    Now, I have enabled jumbo frames on the switch and on the QNAP 869 Pro, and (AFAIK) I have enabled jumbo frames on my ESXi host.

    However, in the switch statistics I can't see the jumbo frame counters increase.  The other counters do, but not jumbo.

    From everything I've read, I understand jumbo frames can help increase performance?

    So I'd appreciate it if you could help me!

    Thank you

    Josh

    Well, somehow I made jumbo frames work on my ESXi host.

    I enabled jumbo frames for every single interface on the ESXi host, and on my NAS that provides iSCSI (QNAP 869 Pro) I enabled jumbo frames on both of its interfaces.

    Now it works, no problem!

    SkyDrive I want to share a folder with sam and bring an action and one with only linda and not sam and bring prosecutions and one with joe only. Is this possible with skydrive and if yes, how should I do this. Thank you