Installation of physical switches for iSCSI traffic

Is that all I need to know from a networking perspective to configure dedicated iSCSI switches to support my LeftHand iSCSI SAN?

I do not plan on connecting these switches to the prod network. I only plan on using them for iSCSI traffic.

LeftHand supports LACP; if your switches support it, you should consider using trunk mode. On my P4300 SAN I have two 3750s stacked. Each SAN node connects to each switch and sits in an LACP/EtherChannel link. All of this is consolidated behind a single virtual IP address which is presented to ESX/i. Don't forget to create a VMkernel port for each dedicated VMware iSCSI connection and bind them according to this PDF.
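As a rough sketch of that binding step on ESX/ESXi 4.x (assuming the software iSCSI adapter is vmhba33 and the dedicated VMkernel ports are vmk1 and vmk2 - check your own names with esxcfg-scsidevs -a and esxcfg-vmknic -l):

# each iSCSI VMkernel port group should have exactly one active uplink
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
# verify the bindings
esxcli swiscsi nic list -d vmhba33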

Tags: VMware

Similar Questions

  • Dedicated iSCSI switches dilemma...

    Hello

    I bought two dedicated switches (or so I thought) for iSCSI traffic; I had planned to carry just my iSCSI traffic from 3 dedicated ESX host connections on them.

    Now the network team says there aren't enough ports open on my core switches for the IP traffic such as vMotion, VM traffic, etc.

    How can I take the new switches I bought for iSCSI traffic only and also carry the IP traffic on them, while keeping everything secure? Would I do that with VLANs?

    If so, how would I go about it?

    I would use VLANs for that. Trunk down to the ESX host and dedicate and tag your VMkernel ports or port groups.

    Example:

    vSwitch0 with -

    iSCSI: vlan 5

    VM traffic: vlan 6

    VMotion: vlan 7

    Configuration:

    VMkernel port iSCSI0, vmk1, tagged VLAN 5

    Port group VM-traffic, tagged VLAN 6

    VMkernel port vMotion0, vmk2, tagged VLAN 7

    Regarding security, it depends on your needs. You can always leave the iSCSI and vMotion VLANs isolated, or route them, depending on the connectivity requirements of other systems.

    Would something like that work?
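    A rough sketch of that layout from the ESX command line, assuming vSwitch0 and the port-group names above (the host-facing physical switch ports would be 802.1Q trunks carrying VLANs 5-7):

    esxcfg-vswitch -A iSCSI0 vSwitch0
    esxcfg-vswitch -v 5 -p iSCSI0 vSwitch0
    esxcfg-vswitch -A VM-traffic vSwitch0
    esxcfg-vswitch -v 6 -p VM-traffic vSwitch0
    esxcfg-vswitch -A vMotion0 vSwitch0
    esxcfg-vswitch -v 7 -p vMotion0 vSwitch0
    # the VMkernel interfaces themselves are then added with esxcfg-vmknic -a -i <ip> -n <netmask> <port group>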

  • How can a no-drop policy be configured for iSCSI traffic on a Nexus 5548UP? Are there any side effects?


    Hello

    Side effects depend on your network config, but I can tell you how to configure a no-drop policy for iSCSI traffic...

    We have a three-stage configuration; the image at the link below shows it...

    1. QoS class - classify the traffic first

    2. Queuing (input/output) - this is where you reserve bandwidth for, or police, the traffic

    3. Network QoS - this is where you set no-drop or the MTU for the classified traffic on the switch fabric (the Nexus backplane)

    (config)# class-map type qos myTraffic    // match iSCSI traffic
    (config-cmap-qos)# match protocol iscsi

    (config)# policy-map type qos myQoS-policy    // set qos-group 2 on iSCSI traffic so it can be recognized
    (config-pmap-qos)# class myTraffic
    (config-pmap-c-qos)# set qos-group 2

    (config)# class-map type network-qos myTraffic
    (config-cmap-nq)# match qos-group 2

    (config)# policy-map type network-qos myNetwork-QoS-policy
    (config-pmap-nq)# class type network-qos myTraffic
    (config-pmap-nq-c)# pause no-drop
    (config-pmap-nq-c)# mtu 2158
    (config-pmap-nq-c)# show policy-map type network-qos myNetwork-QoS-policy

    (config)# class-map type queuing myTraffic
    (config-cmap-que)# match qos-group 2

    (config)# policy-map type queuing myQueuing-policy
    (config-pmap-que)# class type queuing myTraffic
    (config-pmap-c-que)# bandwidth percent 50
    (config-pmap-que)# class type queuing class-default
    (config-pmap-c-que)# bandwidth percent 25
    (config-pmap-c-que)# show policy-map type queuing myQueuing-policy

    (config)# system qos
    (config-sys-qos)# service-policy type qos input myQoS-policy
    (config-sys-qos)# service-policy type network-qos myNetwork-QoS-policy
    (config-sys-qos)# service-policy type queuing input myQueuing-policy
    (config-sys-qos)# service-policy type queuing output myQueuing-policy

    Let me know if you have any concerns.

  • Changing STP from MSTP to RSTP mode on two stacked PowerConnect 6224s configured for iSCSI, during normal operation

    Hello

    I am going to do a firmware upgrade during normal operation on a stack of 6224s running MSTP. I am aware of Dell's recommendation to run RSTP on switches configured for iSCSI traffic connected to an EqualLogic SAN.

    I intend to set up another stack with two 6224s for failover and then perform the upgrade on the 'old' stack. My question is whether it is possible to run MSTP on the 'old' stack with RSTP running on the new stack, when a LAG is configured between the two stacks?

    Another option would be to first reconfigure the 'old' stack from MSTP to RSTP, if that is possible without interrupting traffic between the hosts and the SAN?

    Guidance on this subject would be greatly appreciated

    Cree

    Multiple Spanning Tree Protocol is based on RSTP and is backward compatible with RSTP and STP. So you should be able to run MSTP on the old stack and RSTP on the new one.
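    For what it's worth, the mode change itself is a one-liner on the 6224 CLI; this is only a sketch, so verify it against the PowerConnect documentation for your firmware and expect a brief spanning-tree re-convergence when the mode changes:

    console# configure
    console(config)# spanning-tree mode rstp
    console(config)# exit
    console# copy running-config startup-config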

  • Question about VMkernel iSCSI traffic and VLANs

    Hello

    This is a very basic question that I'm sure I know the answer to, but I want to ask it anyway just to reassure myself. As background, the configuration of my ESX infrastructure is best described here: http://www.delltechcenter.com/page/VMware+ESX+4.0+and+PowerVault+MD3000i. More precisely, our MD3000i has two controllers. Each controller has two ports, and each port is configured on a different subnet, with each subnet connected to a different switch. The ESX hosts are connected to both switches. The only difference from the guide is that we have two MD3000is configured the same way, connected to the same switches. Each MD's ports are configured on the same subnets, but with different IP addresses.

    At present we are in the process of upgrading our two iSCSI switches from humble D-Link DGS-1224Ts to Cisco 2960-Ts. The switches have been, and continue to be, dedicated to iSCSI traffic; however, I'm trying to set up VLANs on the switch side. Originally we used the default VLAN on the switches, but after adding another MD3000i, Dell Support noted that best practice is to separate each MD3000i's iSCSI traffic onto its own subnet and VLAN. This would result in 4 iSCSI VLANs, two on each switch and two for each MD3000i. Firstly, is this in fact best practice?

    Second, if I migrate to the 4 iSCSI VLANs above, since each switch port will effectively be an access port, will I need to fill in the VLAN ID field on the VMkernel configuration page? Presumably that field is used when VLAN tagging is in play, but as our switches do not need to trunk to anything else (they are dedicated to iSCSI traffic), there should be no need to fill it in? I assume it would be prudent to keep the two existing subnets, create two new subnets, and make the changes one MD3000i and ESX host connection at a time. Provided the switch and switch ports have been configured with the right VLANs, the rest should be transparent, and there would be no VLAN awareness needed in the ESX hosts at all?

    Would be nice to get answers and thank you in advance!

    Gene

    (1) Yes, it is best practice for ESX iSCSI to have an independent network and VLAN for iSCSI traffic.

    (2) No, there is no need to enter anything in the VLAN field if you use an access port. That is a requirement rather than a choice: if you supply a VLAN ID on an access port, the host loses connectivity.
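    For illustration, the switch side of that access-port setup would look roughly like this in Cisco IOS (interface and VLAN number are placeholders); with the port in access mode, the VLAN ID field on the VMkernel port stays empty:

    interface GigabitEthernet0/1
     description ESX iSCSI vmnic
     switchport mode access
     switchport access vlan 10
     spanning-tree portfast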

    Please explain a bit why you need to create two different VLANs for each MD3000i. Are you going to use several iSCSI storage arrays on the same ESX box? Or do you use only a single iSCSI array and these 4 ports with the same single VMkernel interface?

    NUTZ

    VCP 3.5

    (Preparation for VCP 4)

  • Implementing QoS across multiple switches for two VLANs

    Afternoon all. I was looking for a little help configuring QoS for two VLANs I created. These will be for voice traffic (VLAN 22) and video traffic (VLAN 23). I also have three other VLANs for PCs, wireless devices and our CNC machines. We have 5 switches, all SG300-28P, with a single switch doing the inter-VLAN routing (Layer 3 mode). All switches are uplinked to that main switch, and I've been through the written guide on how to do it on a single switch, which I think assumes Layer 3.

    Could someone help me? I want the video and voice traffic to take precedence over the rest of the traffic.

    Best regards, Patrick

    Right - this way you can give priority by IP or MAC address.

    No, there is nothing different, just one consideration about the routing. Since the L3 switch can make local routing decisions, the placement of the ACL can vary. In an L2 environment all packets are forwarded to the router and then back down the network to the local destination, so you can apply an ACL on the uplink.

    Since you have an L3 switch you can do the same: apply an ACL on the downstream links to the L2 switches for all traffic subject to the ACL. For anything connected directly to the L3 switch, you can of course apply an ACL to the router uplink so that all incoming traffic gets marked correctly. You can also apply a policy on the source port interface so traffic gets marked as soon as it hits the switch.

    A QoS policy follows the same concept as an ACL: you want it as close to the source as possible. If you apply a QoS policy on an uplink, traffic only gets marked when it hits that port, not before. So it should be applied as close to the device as possible. Often local QoS is not that important until traffic starts heading to the router, as most Cat6 environments can generally handle all local traffic. So, depending on the number of devices and the amount of traffic, a decently robust network can handle all the local traffic, and then you just prioritize the markings on the uplink to ensure that what is important goes first.
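    Purely as an illustration of "voice and video first", here is a router-style MQC sketch (the SG300 series has its own QoS syntax, so treat this as the shape of the policy rather than literal commands; it assumes the endpoints already mark voice as EF and video as AF41):

    class-map match-any VOICE-VIDEO
     match ip dscp ef
     match ip dscp af41
    policy-map PRIORITIZE
     class VOICE-VIDEO
      priority percent 30
     class class-default
      fair-queue
    interface GigabitEthernet0/1
     service-policy output PRIORITIZE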

  • How to configure NIC bonding for iSCSI

    I have a few PowerEdge 2950s with four NICs in them, and I'm trying to use NIC bonding on 3 of them for iSCSI.

    Are there any articles or KB links explaining how to do this (if it is possible)?

    Or just point me to the right section in the vSphere client and I can work it out... I have found nothing by poking around, so I wonder if it's just not possible?

    Hello

    The best answer here is that bonding NICs for iSCSI traffic is not supported; the various multipathing guides are where you should look.

  • Right switch for an iSCSI network

    I am looking at a switch for an entry-level iSCSI SAN and trying to find the right sort of numbers for speed and MAC address table size. My storage network is an HP MSA2324i array connected to two HP DL380 G6 servers using iSCSI.

    I had looked at the HP ProCurve Switch 2824 and the Linksys SRW2016. I have used the Linksys before in its 24-port model and found it performs well for a 30-user network with VLANs, but I want to know if it will support storage network traffic.

    I know the HP ProCurve will be good, but it is three times the price and I am limited on budget.

    Does anyone have any advice on a switch that must support traffic from 15 servers, including Exchange 2010 and SQL Server 2005 with DR and HA?

    I would struggle to see MAC address exhaustion as a problem. A storage switch should be isolated, and with 15 servers + SAN you have 16 MAC addresses. Even something very entry-level should handle this.

    What you do need to make sure of is that the standard iSCSI recommendations are supported (a rough config sketch follows the list):

    • Jumbo frames

    • Flow control

    • Rapid STP

    • A backplane fast enough to run at full speed on all ports

    • A good level of manageability.
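    As a hedged Catalyst-style sketch of enabling those features (the MTU value and interface range are placeholders, other vendors use different syntax, and the jumbo MTU change on a 2960/3750 requires a reload):

    spanning-tree mode rapid-pvst
    system mtu jumbo 9000
    interface range GigabitEthernet0/1 - 16
     flowcontrol receive on
     spanning-tree portfast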

  • Does link aggregation on ESXi require configuration of the physical switch?

    Hello

    I have two physical servers that I'll set up with ESXi. Each server has eight physical NICs. My plan on each server is to use:
    - 2 for management - connected to vSwitch0
    - 2 for vMotion - connected to vSwitch1
    - 2 for iSCSI - connected to vSwitch2 (I'll use a SAN)
    - 2 for VM network access - connected to vSwitch3

    For each pair, a cable should be connected to switch1 and the second would be plugged into switch2.

    The switches are HP A5120s which are stacked into one big switch with IRF, i.e. I have one "big" switch where, if one of the two physical switches dies or is turned off, the "big" switch simply loses half of its ports. In this way, all four networks should continue to operate even if one switch dies.

    All the information I found online says that ESXi can have two physical NICs attached to a single vSwitch and can then do smart things for redundancy (in case a physical NIC dies) and load balancing.

    However, while all the guides tell me how to configure this on ESXi, I'm still not sure whether I should aggregate the two links on the physical switches.

    So, my questions are:

    (1) In order to get NIC redundancy or load balancing working in ESXi, do I have to aggregate the two physical switch ports that connect to the vSwitch?

    (2) With NIC teaming, I can see that having one active adapter and one standby adapter works fine for the management network, but for the iSCSI network wouldn't I rather have two active adapters for double the bandwidth?

    Any ideas would be much appreciated.

    Thank you.

    (1) In order to get NIC redundancy or load balancing working in ESXi, do I have to aggregate the two physical switch ports that connect to the vSwitch?

    No, there is no need to configure EtherChannel/LACP, but this depends on the type of load balancing you select on the ESXi host. If you use the default, you don't need to configure EtherChannel/LACP, since traffic for each virtual network interface will only go over a single physical port, but you still have resiliency: if one of the physical NICs fails, traffic fails over to the remaining port.

    If you select IP hash load balancing, you must then implement EtherChannel/LACP on your physical switch, since traffic can come from any physical port in the team.
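    For illustration, if you do go with IP hash, the physical side is a static EtherChannel (the standard vSwitch's IP-hash teaming does not negotiate LACP). Roughly, on a Cisco switch, with placeholder port, VLAN and vSwitch names, and the matching vSwitch setting on ESXi 5.x via esxcli:

    interface range GigabitEthernet0/1 - 2
     channel-group 1 mode on
    interface Port-channel1
     switchport mode access
     switchport access vlan 20

    esxcli network vswitch standard policy failover set --vswitch-name=vSwitch3 --load-balancing=iphash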

    (2) With NIC teaming, I can see that having one active adapter and one standby adapter works fine for the management network, but for the iSCSI network wouldn't I rather have two active adapters for double the bandwidth?

    Once again it will depend on the load balancing method you select, but you will never be able to bond the physical ports so that a single connection gets the full 2 Gb.

    I also moved this to a more appropriate forum.

  • Opinions on NFS vs iSCSI for an e3100 & Catalyst 2960-S?

    I am setting up my first SAN, an e3100 with 2 NICs per SP. We chose a stack of Cisco 2960-S switches for redundancy at the network level, and during the planning phase I had chosen to use NFS. The e3100 does not support deduplication with iSCSI (can someone confirm?), and performance is roughly equal in this environment. It is 3 hosts with approximately 15 production VMs; I expect a usage rate of 20% per host based on statistics collected from the current production environment.

    The glitch is that the 2960-S is limited to 6 port channels, even in a stack! Initially my plan was simple enough, and vetted by a sales engineer: create a port channel on each host for storage traffic and for VM/vMotion/HA traffic. Each would be 2 gigabit NICs in a dedicated VLAN. But now that I only have 6 port channels to work with, what is the best solution? I would go with NFS if possible, but I can't work out a good way to provide high availability and load balancing at the network level (yes, I know the effectiveness of IP hash in a port channel is questionable).

    In the past I have set up iSCSI multipathing in test environments with good results, but it is a little more complex than I want to get into for such a small environment, and we would lose deduplication.

    So, back to the original question: is NFS possible, highly available, without link aggregation? I am referring to each element of the stack - host, network and SAN. Is there another method you would recommend, and if so, why?

    A few thoughts I had:

    Wouldn't it be better to put the vMotion/HA NICs on access ports with 1 NIC in standby mode and use the port channels for NFS instead? Once the environment is fully migrated, I expect vMotion will only happen during failures and maintenance windows.

    If I assign an IP address to an NFS datastore on SP A and it fails, will SP B remain passive until that failure and then take over the IP/share? Or will the NFS datastore appear twice in my list of datastores?

    Thanks for your comments!

    Here is my attempt at a bad picture to help visualize this.

    I had to redact the names and IP addresses

    MGMT uses vmnic1 as primary and vmnic5 as standby. It is on VLAN 125.

    vMotion uses vmnic5 as primary and vmnic1 as standby. It is on VLAN 126.

    vmnic1 and vmnic5 are trunked at the physical switch level to allow VLANs 125 & 126.
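    If it helps, on ESXi 5.x that per-port-group failover order can also be set from the command line; a sketch with hypothetical port-group names (the real ones are redacted above):

    esxcli network vswitch standard portgroup policy failover set --portgroup-name=MGMT --active-uplinks=vmnic1 --standby-uplinks=vmnic5
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion --active-uplinks=vmnic5 --standby-uplinks=vmnic1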

  • vCenter Server installation: virtual vs. physical

    Hello everyone, could you give me advice on how to implement vCenter Server in vSphere 4.1? A physical or virtual vCenter Server installation? Thanks for your help.

    I have used virtualized vCenter Servers in production environments for a couple of years now... I first did it under 3.5 and have done it in 4.0 and 4.1 with no problems at all... Performance of the vCenter Server (in production environments) has never been a problem, thanks to using enterprise-grade, powerful host servers and storage and setting them up properly (the whole environment, really)...

    Beyond the benefits already mentioned of going with a virtual vCenter Server (which I've seen in action), I'll second the point about it making your whole environment easier to administer... The savings are not to be dismissed either... Not only can you save by not needing another piece of hardware (even if it's one you had on hand and would simply re-purpose), you save on power and cooling costs (not a small amount per year on old boxes)... There is also the added benefit of higher communication speeds (VM-to-VM networking runs very fast)...

    With respect to the complexity it could add to your network... Not really... Unless you are about to run out of IP addresses on your network (which would require either expanding your subnet to allow more addresses or adding another subnet within your DHCP server), you can just assign an IP address to the virtual machine (really simple), i.e. the vCenter Server... Unless you have hundreds of servers, you should not need another VLAN/subnet range for the virtual environment...

    I think you're really over-complicating the network side of things... Think of it this way... Your host servers' IPs will be 10.1.1.x; you place the vCenter Server on the same IP range (just one more system on that range)... You place your VMs on whatever IP range they belong to (it may be the same, it may be different), connecting the virtual machine port group's host NICs to the physical switch ports correctly. Depending on your storage/SAN, you plug those host ports into the correct physical switch ports as well...

    Do plan for how things are going to interconnect. That being said, it is not brain surgery here... I advise you to plan for redundancy at the physical switch, NIC and host level for all connections... e.g. 2 NICs for the management network, 2 NICs/ports for the VM port group traffic (depending on the size of the cluster/VM count, you will need enough connections to support the demand), 2 NICs/ports for SAN connectivity, etc... This is why I generally put 8-10 NICs/ports in each host I've implemented (production-level hosts)... This usually includes the four integrated NICs, then adding a dual-port pair, or a quad (always Intel server-class NICs there), or a combination, to each host.

    If you're having trouble designing your network configuration, post information such as the physical switches (and how the ports are configured, e.g. divided into several VLANs) for regular traffic and SAN connections, the host configurations (including how many ports are on board and how many come from additional cards), the SAN brand/model (and how many network ports/controllers there are and how many ports per controller), etc... Also describe how you plan to put it all together, and people here will help make sure you get it right (or even better for the environment it is going into)... If the network is already installed and working well, and you are just looking to put more things in place (within the cluster), that is something else...

    VMware VCP4

    Please consider awarding points for "helpful" or "correct" answers.

  • vSwitch iSCSI configuration

    I just got some iSCSI storage and am trying to add it to my VMware servers. The service console for my VMware server is on the 192.168.2.x subnet. I plan to use a separate subnet, 192.168.3.x, for iSCSI. I don't want the iSCSI traffic on the same network as the rest of my servers.

    I've isolated 8 ports on my switch for use with iSCSI.

    The problem I have is that when I add a VMkernel interface to my vSwitches, it wants to be on the same vSwitch as my service console, but I want to use separate physical connections to the switch for the 2 subnets. Has someone else had this problem, and how did you get around it?

    Also be aware that if you use ESX and the software iSCSI initiator, you will need to place a service console on the 192.168.3.x network, because for the software iSCSI initiator to work, the VMkernel and the service console both need to be able to reach the storage - also don't forget to open port 3260 in the service console firewall.
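    A minimal sketch of that on classic ESX, assuming the iSCSI vSwitch is vSwitch2 and 192.168.3.10 is free for the second service console (names and IPs are examples only):

    # add a second service console port group and interface on the iSCSI vSwitch
    esxcfg-vswitch -A "Service Console 2" vSwitch2
    esxcfg-vswif -a vswif1 -p "Service Console 2" -i 192.168.3.10 -n 255.255.255.0
    # open the software iSCSI client port (3260) in the service console firewall
    esxcfg-firewall -e swISCSIClient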

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful.

  • 10 GbE switches for an EqualLogic SAN network

    Hello

    We are going to buy a new PS4210X 10 GbE storage array.

    We currently have a PS5000, a PS4000E and a PS6100X with 1 GBit interfaces in the same network. All have their own group. The new one will be in its own group.

    The network is a ring (one trunk is blocked by spanning tree) of 4 Dell PowerConnect 52xx GBit switches (2 x 24-port, 2 x 48-port): 2 switches with 2 arrays and 2 servers in one room, and 2 switches with 1 array and 2 servers in another room.

    The switches still have 1-2 years of maintenance left...

    I know the new storage can still run at GBit speeds until we move to 10 GbE on the network.

    1. Which Dell or HP 10 GbE switches would you consider for the new network?

    2. Is it possible to buy 10 GbE switches and run 1 GBit and 10 GBit hosts on those switches at the same time?

    3. Would you mix the traffic on the physical network, using VLANs for your production traffic, or would you still dedicate the entire network to SAN traffic?

    Any other considerations?

    Thank you.

    For customers who are in a longer transition period from 1 to 10 GbE, we choose 10GBase-T rather than SFP+. So the following switch models come to mind:

    Force10 S4820 or N4032/N4064. Don't forget that you still want 2 switches with good stacking bandwidth.

    The Force10 comes with 48 x 10 GbE + 4 x 40 GbE. The N4032 has 24 x 10 + 2 x 40 and the N4064 has 48 x 10 + 4 x 40. So do the math on how many ports you need from the start. Stacking is a little less complicated on the N40xx than the VLT thing on the Force10.

    All three models are certified for EqualLogic, and we run them at a few sites where customers also have "legacy" servers/devices connected to the new switch gear.

    Yes, you should always use VLANs and, wherever possible, keep storage traffic separate from the rest.

    Kind regards

    Joerg

  • Why several VMkernels for iSCSI multipathing?

    I'm a little confused about iSCSI multipathing. What is the reason for creating two VMkernel interfaces and then using command-line tools to bind them to a new vmhba? How is that different from using a software iSCSI initiator on a vSwitch with multiple vmnics connected to different physical switches?

    There is the same discussion in a Russian VMUG thread.

    iSCSI multipathing is not about uplinks; it is about pairs - initiators (vmk interfaces) and targets.

    If you have two targets

    • and you only need failover - 1 vmk is enough (NICs in active/standby mode)
    • If you need load balancing
      • and you can use link aggregation + IP hash - 1 vmk is sufficient (PSP is Round Robin and NICs in active/active mode)
      • If you cannot use LAG - 2 vmks are needed.
  • Network configuration for iSCSI and vMotion

    Hello

    I have an ESX host configured with iSCSI storage and am currently working out the best way to assign my NICs. I have a vSwitch with four VMkernel ports and two NICs:

    http://communities.VMware.com/message/1428385#1428385

    I also have an additional vSwitch for vMotion.

    vSwitch3

    - VMkernel
    - Service Console 2
    - vmnic6
    - vmnic7

    vmnic6 and vmnic7 are both on the SAN.

    After adding the new VMkernel port and enabling vMotion on it, I was wondering why it did not show up as an additional path to the storage (I want to know if this is a separate question). Then I ran "esxcli swiscsi nic list -d vmhba33" and, sure enough, only the first four vmks were listed.

    Why is the new VMkernel port not automatically bound to vmhba33?

    Would that be a bad idea?

    See you soon

    Just to play devil's advocate, why shouldn't vMotion and SAN traffic be on the same link, though?

    iSCSI traffic MUST have low latency and no packet loss.

    vMotion can create bursts of traffic that could cause problems for the iSCSI traffic.

    No idea why it does not bind automatically, though?

    Can you vmkping each of the EqualLogic IPs?

    Did you add each VMkernel interface to the iSCSI initiator with a command like this?

    esxcli swiscsi nic add -n vmk0 -d vmhba34

    André
