EqualLogic SAN and Jumbo frames

I can't get jumbo frames to work on our EqualLogic PS6000.  We have the following configuration: PS6000 -> Dell PowerConnect 6248 (stacked) -> Windows Server 2008.

The network cards on the SAN have been set to an MTU of 9000, the ports on the switches are set to an MTU of 9216, and the NIC on the Server 2008 box is set to 9014 with Rx & Tx flow control enabled.

When the server connects to the SAN, the logs report that the iSCSI connection from target "172.21.X.X:3260, iqn.2001-05.com.equallogic:0-8a0906-xxxxxxxxxxx-23a001a955452af6-server1" to initiator "172.21.X.X:49156, iqn.1991-05.com.microsoft:server1" succeeded, but using the standard frame length.

When I ping the SAN from the Server 2008 box with a payload of 8972 bytes, it works fine.

C:\Users\system>ping 172.21.x.x -f -l 8972

Pinging 172.21.x.x with 8972 bytes of data:
Reply from 172.21.x.x: bytes=8972 time<1ms TTL=255
Reply from 172.21.x.x: bytes=8972 time<1ms TTL=255
Reply from 172.21.x.x: bytes=8972 time<1ms TTL=255
Reply from 172.21.x.x: bytes=8972 time<1ms TTL=255

Why 8972?  That's what some genius on the internet told me to use.  Any help would be greatly appreciated.

Well, we got it to work.  On Server 2008 we have both Broadcom and Intel network adapters.  For our tests we were just using the Broadcom NIC.  In the adapter's Advanced properties we initially set the Jumbo Packet value to 9014, and it did not work.  So we opened the Broadcom Advanced Control Suite that came with the NIC, dug around, and found that there are two parameters that need to be addressed.  There is a Jumbo Packet setting, which was already at 9014, matching what we had set in the advanced NIC properties.  There is another parameter, MTU, which is not exposed in the advanced NIC properties; it was still set to 1500.  Once we set it to 9000, jumbo frames started working.  We then uncabled the Broadcom NIC, cabled up the Intel NIC, and set its jumbo frame value to 9014 in the advanced adapter properties.  Jumbo frames worked on that adapter with no further steps.  It's all working now.  Thanks to everyone for their help.
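
For reference, a quick way to double-check the result from the Windows side is to list the interface MTUs and repeat the don't-fragment ping (a generic sketch, not from the original post; 8972 is the largest ICMP payload that fits in a 9000-byte MTU once the 20-byte IP and 8-byte ICMP headers are added):

C:\> netsh interface ipv4 show subinterfaces
C:\> ping 172.21.x.x -f -l 8972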

Tags: Dell Products

Similar Questions

  • Question about vMotion, iSCSI and jumbo frames

    Hello

    I have an ESXi 4.1 cluster using vCenter and an MD3200i SAN.  I'll be doing an upgrade to optimize the installation, which includes installing four NIC ports in the hosts to be used for iSCSI.  I'm debating whether or not to enable jumbo frames, but if I do, I only want to do it for iSCSI, at least for now.  Currently, the iSCSI NICs on the hosts and the MD3200 iSCSI connections are connected to two separate switches, although I have seven other switches in the building connected to PCs, printers, etc.  Do I only need to turn on jumbo frames on the two switches that the iSCSI is connected to, or do I have to enable jumbo frames on all my switches?  There is one Dell 2724 and eight Dell 6224Ps.  iSCSI is connected to the 2724 and one of the 6224Ps.

    The two switches that the iSCSI ports are connected to are not dedicated iSCSI switches; they have other devices on them, and I can only enable jumbo frames globally on the switch.  Would there be any problem enabling jumbo frames on the switches when not all of the connected devices are configured for them?

    Another question, about vMotion: I will be creating four vmkernel ports on each host for iSCSI.  Currently there is also a vmkernel port configured for management traffic and vMotion.  Can I enable vMotion on each of the four iSCSI vmkernel ports, or should I leave it configured on the management vmkernel port?

    Thank you!

    Sure. If you're going to enable jumbo frames, you need to have dedicated switches for that, or a switch that can set different MTUs on different ports.
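
    As a point of reference, the PowerConnect 62xx series mentioned above generally lets you set the MTU per interface from the CLI; a minimal sketch, assuming port 1/g1 carries the iSCSI traffic (the port number is a placeholder and exact syntax depends on the firmware version):

    console# configure
    console(config)# interface ethernet 1/g1
    console(config-if)# mtu 9216
    console(config-if)# exit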

  • Airport Extreme Gigabit Ethernet and Jumbo frames

    Hi all.

    Can anyone kindly help me with Gigabit Ethernet & jumbo frames on the Airport Express?

    I currently have and use a 5th generation Airport Express and want to buy the QNAP TS - 453 Pro NAS. In any NAS storage deployment, Gigabit Ethernet and jumbo frames, along with a matching Ethernet card, are all very important for achieving high throughput, so my questions are:

    5th generation Airport Express


    • Is its NIC a Gigabit Ethernet card?
    • Does it also work with jumbo frames (the QNAP NAS supports MTUs of 4074, 7418 and 9000 bytes)?
    • Can the QNAP trunk its NICs (if it has more than one) with the following options:
      • IEEE 802.3ad (dynamic link aggregation)
      • Balance-tlb (Adaptive Transmit Load Balancing)
      • Balance-alb (Adaptive Load Balancing)
      • others (mainly for failover)

    Is the 5th generation Airport Express enough, or should I think about moving to the 6th generation?

    Thank you.

    You're mixing up the terminology... Express and Extreme are totally different.

    The title is correct... Extreme... but your question text substitutes Express... those are only 10/100... and there are only 2 generations of them.

    Any Airport Extreme Gen2 - Gen6:

    Uses Gigabit ports.

    Does not support jumbo frames.

    Does not support link aggregation (NIC teaming).

  • ESXi 5 and Jumbo frames

    What is the proper way to configure jumbo frames for an ESXi cluster using shared NetApp storage?  Do I configure jumbo frames all the way down to the VM guest's network card, or do I stop at the dvSwitch?  I have already configured jumbo frames throughout the storage path on the physical hardware.  It's just the virtual machines I'm not sure about; if I don't configure them inside the virtual machines, will there be problems?

    Thank you!!

    If the jumbo frames are only for the storage traffic, then you don't need to do anything for the virtual machines.
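
    For the host side on ESXi 5.x, the MTU can be set from the command line; a minimal sketch, assuming a standard vSwitch named vSwitch1 and a storage vmkernel interface vmk1 (both names are placeholders; for a dvSwitch the MTU is set in its settings in vCenter instead):

    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000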

  • vMotion and Jumbo frames

    Is enabling jumbo frames ever an advantage for vMotion traffic? We will be migrating off our HP EVA onto NetApp storage soon. We are currently a Fibre Channel-only shop, but we are considering using NFS on the NetApp just to align virtual machines (a NetApp Professional Services recommendation to speed up mbralign on a Linux VM).

    Since we won't really be using IP-based storage for production virtual machine storage, would anyone suggest we even bother changing the MTU size on our switches and virtual switches?

    We are currently running ESX 4.0U1 and vCenter 4.0U1 but will be upgrading to 4.1 ASAP.

    I have never heard or read anything about enabling jumbo frames for vMotion. I don't think it's even a supported configuration.

    There are docs on enabling jumbo frames for iSCSI traffic. Most say there is no advantage on 1 Gbit networks but a significant advantage on 10 Gbit networks.

    André

  • TSO and jumbo frames

    Hi guys

    Question: enabling jumbo frames and TSO.

    Do jumbo frames need to be enabled on the ESX host, but not TSO?

    Do both require the Enhanced VMXNET adapter in the VM, and does the NIC need to be replaced if it isn't an Enhanced VMXNET adapter?

    Is there no support for jumbo frames with iSCSI?

    Also, is it only Windows 2003 Enterprise (within the Windows Server family of products) that supports these features?

    Thanks in advance

    Cheers

    The Enhanced VMXNET (or vmxnet2) adapter is available in ESX 3.5 and 4.0.

    The vmxnet3 is only available in 4.0.

    André

  • ESXi 4.0 and 4.1 Free Edition and Jumbo frames

    Hello everyone.

    Happy new year.

    Please can someone answer my question.

    Jumbo frames are not supported in the free ESXi 4.0 edition.

    This is discussed here.

    http://KB.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1012454

    Are they supported in ESXi 4.1 Free Edition or should I upgrade to Essentials?

    Thank you very much

    James

    Jumbo frames are supported in ESXi 4.1 (http://KB.VMware.com/kb/1023990), just not in the free version, at least not through the vCLI.

  • Jumbo frames on VI3 3.5 / hardware iSCSI initiator?

    Hello

    Apologies if this has already been posted. If so, please point me to the thread, but I just wanted to confirm with you guys about jumbo frames on VI3 3.5.

    As far as I know, jumbo frames are not supported with the software iSCSI initiator but are supported with hardware iSCSI initiators, but please correct me if I'm wrong. Now, what hardware initiator would you recommend for jumbo frames?

    Thanking you in advance,

    Regards,

    Jose Maria Gonzalez,

    -


    http://www.JmGVirtualConsulting.com

    http://www.josemariagonzalez.es

    VMware vExpert 2009

    Co-author of the book VMware Site Recovery Manager 1.0 Update 1

    -


    If you find this or any other answer useful, please consider awarding points by marking the answer as helpful or correct.

    In ESX 3.5, 10 Gigabit Ethernet and jumbo frames are NOT supported for storage traffic (e.g., with the software initiator).

    In ESX 4 they are supported.

    André

    * If you found this or any other answer useful, please consider awarding points for correct or helpful answers.

  • Port-groups, vSphere 5 and Jumbo (iSCSI) frames

    We will be implementing a UCS system with EMC iSCSI storage. Since this is my first time, I'm a little unsure about the design, although I have picked up a lot of knowledge reading this forum and elsewhere.

    We will use the 1000V.

    1. Is it acceptable to use only one uplink port group carrying the following traffic: mgmt, vMotion, iSCSI, VM network, external network?

    My confusion here is about jumbo frames. Shouldn't we have a separate uplink for that traffic? In this design, would all frames be using jumbo frames (or is this set per port group)?

    I have read something about using a Class of Service for jumbo frames. Maybe that's the idea here.

    2. I read in a thread not to include mgmt and vMotion in the 1000V and to put them on a vSS instead. Is this correct?

    In this case, the design of uplink would be:

    1: Mgmt + vMotion (2 vNIC, VSS)

    2: iSCSi (2 vNIC, 1000v)

    3: VM data, external traffic (2 vNIC, 1000v)

    All NICs set as active, with Virtual Port ID teaming.

    Answers inline.

    Kind regards

    Robert

    Atle Dale wrote:

    I have 2 follow-up questions:

    1. What is the reason I cannot use a 1000V uplink profile for vMotion and management? Is it just for simplicity that people do it that way? Or can I do it if I want? What do you do?

    [Robert] There is no reason.  Many customers run all their virtual networking on the 1000v.  This way they don't need vmware admins to manage virtual switches - keeps it all in the hands of the networking team where it belongs.  Management Port profiles should be set as "system vlans" to ensure access to manage your hosts is always forwarding.  With the 1000v you can also leverage CBWFQ which can auto-classify traffic types such as "Management", "Vmotion", "1000v Control", "IP Storage" etc.

    2. Shouldn't I use MTU size 9216?

    [Robert] UCS supports an MTU of up to 9000, plus assumed overhead.  Depending on the switch, you'll want to set it to either 9000 or 9216 (whichever it supports).

    3. How do I do this step: "

    Ensure the switch north of the UCS Interconnects are marking the iSCSI target return traffic with the same CoS marking as UCS has configured for jumbo MTU.  You can use one of the other available classes on UCS for this - Bronze, Silver, Gold, Platinum."

    Does the Cisco switch also use the same terms "Bronze", "Silver", "Gold" or "Platinum" for the classes? Should I configure the trunk with the same CoS values?

    [Robert] The Plat, Gold, Silver, Bronze are user-friendly labels used in UCS Classes of Service to represent a definable CoS value between 0 and 7 (where 0 is the lowest value and 6 is the highest value). CoS 7 is reserved for internal traffic. A CoS value of "any" equals best effort.  Weight values range from 1 to 10. The bandwidth percentage can be determined by adding the channel weights for all channels, then dividing the channel weight you wish to calculate the percentage for by the sum of all weights.

    Example: you have UCS and an upstream N5K with your iSCSI target directly connected to an N5K interface. If your vNICs were assigned a QoS policy using "Silver" (which has a default CoS 2 value), then you would want to do the same upstream by a) configuring the N5K system MTU of 9216 and b) tagging all traffic from the iSCSI array target's interface with CoS 2.  The specifics for configuring the switch are specific to the model and software version; an N5K is different from an N7K and different from IOS.  Configuring jumbo frames and CoS marking is pretty well documented all over.

    Once UCS receives the traffic with the appropriate CoS marking, it will honor the QoS and dump the traffic back into the Silver queue. This is the "best" way to configure it, but I find most people just end up changing the "Best Effort" class to 9000 MTU for simplicity's sake, which doesn't require any upstream tinkering with CoS marking.  You just have to enable jumbo MTU support upstream.
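
    To illustrate the N5K side only, a jumbo-MTU network-qos policy is commonly shown along the lines of the sketch below (the policy name, interface and CoS value are placeholders, and exact syntax varies with the NX-OS release, so treat it as a starting point rather than a definitive config):

    policy-map type network-qos jumbo-mtu
      class type network-qos class-default
        mtu 9216
    system qos
      service-policy type network-qos jumbo-mtu
    ! mark untagged traffic from the directly attached iSCSI target with CoS 2
    interface Ethernet1/10
      untagged cos 2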

    4. Concerning Nk1: Jason Nash has said to include vMotion in the System VLANs. You do not recommend this in previous threads. Why?

    [Robert] You have to understand what a system VLAN is first.  I've tirelessly explained this in various posts.  System VLANs allow an interface to always be forwarding.  You can't shut down a system VLAN interface.  Also, when a VEM reboots, a system VLAN interface will be forwarding before the VEM attaches to the VSM to securely retrieve its programming.  Think of the chicken & egg scenario: you have to be able to forward some traffic in order to reach the VSM in the first place, so we allow a very small subset of interfaces to forward before the VSM sends the VEM its programming - Management, IP Storage and Control/Packet only.  All other non-system VLANs are rightfully blocking until the VSM passes the VEM its policy.  This secures interfaces from sending traffic in the event any port profiles or policies have changed since the last reboot or module insertion.  Now, keeping all this in mind, can you tell me the instance where you've just rebooted your ESX host and need the vMotion interface forwarding traffic BEFORE communicating with the VSM?  If the VSM was not reachable (or both VSMs were down), the VM's virtual interface would not even be able to be created on the receiving VEM.  Any virtual ports moved or created require VSM & VEM communication.  So no, the vMotion interface VLANs do NOT need to be set as system VLANs.  There's also a max of 16 port profiles that can have system VLANs defined, so why chew one up unnecessarily?

    5. Do I have to set spanning-tree commands and to enable global BPDU Filter/Guard on both the 1000V side and the uplink switch?

    [Robert] The VSM doesn't participate in STP, so it will never send BPDUs.  However, since VMs can act like bridges & routers these days, we advise adding two commands to your upstream VEM uplinks - PortFast and BPDU Filter.  PortFast so the interface goes to forwarding faster (since there's no STP on the VSM anyway) and BPDU Filter to ignore any received BPDUs from VMs.  I prefer to ignore them rather than using BPDU Guard, which will shut down the interface if BPDUs are received.

    Thanks,

    Atle, Norway

    Edit:

    Do you have any recommendations on the weighting of the CoS classes?

    [Robert] I don't personally.  Other customers can chime in with their suggestions, but each environment is different.  vMotion is very bursty, so I wouldn't set that too high.  IP storage is critical, so I would bump that up a bit.  The rest is up to you.  See how it works, check your QoS & CoS verification commands to monitor, and adjust your settings as required.

    E.g:

    IP storage: 35

    Vmotion: 35

    Vmdata: 30

    and I can then assign the management VMkernels to the Vmdata CoS.

    Message was edited by: Atle Dale

  • Best practices of Jumbo frames

    Hello

    I'm looking to use a pair of N3048s at a branch office, as the core of the network and for iSCSI (connected to a new EqualLogic array).

    If I understand correctly, you can't set jumbo frames on a per-port basis, so they have to be enabled on all ports.

    Will this cause a problem on the LAN side (i.e. not the iSCSI storage), where none of the devices are configured to use jumbo frames?

    Thanks for any advice,

    Huw

    It will not cause any problems. Jumbo frame support simply allows the switch to work with larger frames. If the switch receives a smaller frame, it leaves it at that size and forwards it on. If a larger packet is sent to a device that cannot receive a packet that big, the packet will be fragmented into smaller ones.

  • Enabling Jumbo frames on VMXNET3 adapter in Windows Server 2012

    Hello everyone on the wonderful EqualLogic forums :)

    We have an EQL PS4100X with virtual machines running on vSphere 5.1. Everything works very well, and we are about to migrate our database to a SQL VM on Server 2012. I did some research on whether we should enable jumbo frames on the VMXNET3 adapter in the virtual machine.

    Reading this white paper, it seems there are large performance benefits to enabling jumbo frames and a few other options in the network adapter settings:

    http://en.community.Dell.com/TechCenter/Extras/m/white_papers/20403565.aspx

    Just curious as to how other people have set this up and whether they saw an improvement in performance. Has anyone had experience with this?

    Thank you

    Lee

    Whether jumbo frames are good or bad is highly dependent on the switching infrastructure.   If the switches do not handle flow control and jumbo frames well, then performance could be worse with jumbo frames enabled.

    If the switches are not on the certified list for EQL arrays, I tend to start with standard frames, get a baseline run, then try jumbo frames gradually and make sure things don't get worse.

    Jumbo frames may provide an advantage; it will never be a huge increase, but they improve the efficiency of the network and can reduce CPU overhead.    They are not mandatory for EQL iSCSI environments.

    Please check the best practices with ESX document.  That can really help to maximize your performance with EQL storage.

    http://en.community.Dell.com/TechCenter/Extras/m/white_papers/20434601.aspx

    Kind regards

  • Enabling jumbo frames for iSCSI

    Hello everyone

    I want to just make sure I'm on the right track here.

    In theory, I should be able to enable 9000-byte jumbo frames on my switch and SAN without affecting the guests. The scenario would be:

    • Stop all ESXi hosts
  • Reconfigure the SAN and switches to support jumbo frames
    • Turn on ESXi hosts

    Basically, I want to make sure that the switches and SAN work correctly with the change before changing anything on the hosts. Ideally, if all goes well and the hosts stay stable (vmkping -d -s 1500 x.x.x.x works and the VMs keep running), I would then change the MTU on the VMkernel ports and test the hosts again.

    Is that good in theory?

    Sounds good to me, although it is not necessary to shut down the ESXi hosts. Shutting down the virtual machines should be sufficient.

    A good post on jumbo frames: http://rickardnobel.se/troubleshoot-jumbo-frames-with-vmkping/
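
    Once the VMkernel ports are set to MTU 9000, the end-to-end path can be checked from the ESXi shell with a don't-fragment ping at jumbo size (a generic sketch; the address is a placeholder, and 8972 accounts for the 28 bytes of IP/ICMP headers):

    vmkping -d -s 8972 <san-group-ip>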

    André

  • No jumbo frames on free ESXi 4.1?

    Hello

    I'm testing a Dell R710 connected to a Dell MD3200i iSCSI SAN, and according to the documentation I find that I am out of luck if I want jumbo frames, because they must be configured using the CLI tool, and this tool is not usable without buying a license. Is it correct that I need at least the Essentials package to use the CLI and therefore jumbo frames?

    Thank you

    That is the vCLI API, which is limited to read-only in the free version. However, you can connect to the console (or ssh) directly to execute the necessary commands.
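
    From the local console or SSH on ESXi 4.1, the commands usually shown for this look like the sketch below (vSwitch1, the port group name and the IP address are placeholders; on 4.x a vmkernel NIC generally has to be recreated to change its MTU, so take care not to delete your only management interface):

    esxcfg-vswitch -m 9000 vSwitch1
    esxcfg-vmknic -l
    esxcfg-vmknic -d "iSCSI1"
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 -m 9000 "iSCSI1"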

    André

  • iSCSI without Jumbo frames for 15 Virtual Machines

    I have a physical-to-virtual migration to implement, where the hardware for the SAN has already been purchased. Researching the resources, I see I have an HP MSA2324i array and two HP DL380 G6 servers with two HP 2512 switches, but I found that this switch does not support jumbo frames.

    This means I'll have to stick to the default 1500 bytes for Ethernet iSCSI frames, and I worry about how this may affect the performance of the storage network.  I have 15 servers to migrate, and among them I have both a SQL Server 2005 box and an Exchange 2010 mailbox role for 100 users.

    My thinking is that I have a really light workload, in that the SQL Server databases don't support more than 20 users, and I should be able to keep these virtual machines running efficiently without the use of jumbo frames.

    Does anyone have experience of vSphere 4.1 running iSCSI over standard Ethernet frames?

    Cheers

    Kyle

    I haven't upgraded to 4.1 yet, but we ran standard Ethernet iSCSI on 3.5 for several different machines and never encountered any problems.    We now run vSphere with jumbo frames enabled, but I did not notice a huge difference.   I think you'll be fine, but you should definitely keep an eye on things.

    Here's a nice blog post on setting up iSCSI for vSphere environments: http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html

    If you have found this or any other post useful, please consider using the helpful/correct buttons to award points.

  • iSCSI Flow Control vs Jumbo Frames

    It seems that with my setup (HP 1800-24G switches) I'll have to choose between jumbo frames or flow control for iSCSI SAN access. I am running jumbo frames with an MTU of 9000 bytes. Enabling flow control in addition to jumbo frames yields close to zero transfer rates, so I guess the combination is not supported by the ProCurve 1800-24G.

    Which option would you recommend to improve the performance of a setup with 2 ESX servers (HP DL380 G5, ESX 3.5 U2) and 1 SAN (HP MSA 2012i, two controllers)?

    Virtualized applications include databases (SAP, Exchange) as well as moderately used file servers (user shares, company documents).

    I have read other threads on this topic but did not come to a clear conclusion.

    Best regards, Felix Buenemann

    The 1800-24G has a 500 KB packet buffer.  The specs don't say whether this is per port or per chassis, but since the 1800-8G has 144 KB of buffer space I guess it's per chassis, which is not very good news for iSCSI.

    If you don't have enough buffer space per port, you can run into frame loss during periods of high throughput.  Frame loss means that TCP will retransmit the frame, which is slow (compared to normal operation).  I would choose flow control over jumbo frames; flow control tells the end device to stop sending frames until the switch has had time to process the frames it already has.

    Ben
