Dedicated switches for iSCSI traffic dilemma...

Hello

I bought two switches dedicated (or so I thought, anyway) to iSCSI traffic; I had planned to carry only the iSCSI traffic from my 3 dedicated ESX hosts on them.

Now the network team says it doesn't have enough ports open on my core switches for the IP traffic such as vMotion, VM network traffic, etc.

How can I take the new switches that I bought for iSCSI traffic only and also carry the IP traffic on them, while keeping everything secure? Would I do that with VLANs?

If so, how would I go about it?

I would use VLANs for that. Trunk down to the ESX host, then dedicate and tag your VMkernel ports or port groups.

Example:

vSwitch0 with -

iSCSI: vlan 5

VM traffic: vlan 6

VMotion: vlan 7

Configuration:

VMkernel port iSCSI0, vmk1, tagged VLAN 5

Port group VM-traffic, tagged VLAN 6

VMkernel port vMotion0, vmk2, tagged VLAN 7
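A sketch of that tagging from the ESX console using the classic esxcfg-vswitch commands (names match the example above; the physical switch ports must be 802.1Q trunks carrying VLANs 5-7):

esxcfg-vswitch -A iSCSI0 vSwitch0          # add the iSCSI VMkernel port group
esxcfg-vswitch -v 5 -p iSCSI0 vSwitch0     # tag it with VLAN 5
esxcfg-vswitch -A VM-traffic vSwitch0
esxcfg-vswitch -v 6 -p VM-traffic vSwitch0
esxcfg-vswitch -A vMotion0 vSwitch0
esxcfg-vswitch -v 7 -p vMotion0 vSwitch0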

Regarding security, it depends on your needs. You can always keep the iSCSI and vMotion VLANs unrouted, depending on the connectivity requirements of other systems.

Would something like that work?

Tags: VMware

Similar Questions

  • Installation of physical switches for iSCSI traffic

    What do I need to know from a networking perspective to configure dedicated iSCSI switches to support my LeftHand iSCSI SAN?

    I do not plan on connecting these switches to the prod network; I only plan on using them for iSCSI traffic.

    LeftHand supports LACP; if your switches support it, you should consider using trunk mode. On my P4300 SAN, I have two stacked 3750s. Each SAN node connects to each switch and sits in a LACP/EtherChannel link. All of this is condensed into a single virtual IP address which is presented to ESX/i. Don't forget to create a vmk for each dedicated VMware iSCSI connection and bind it according to this PDF.
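    For the switch side, the LACP bond to each SAN node looks roughly like this on a stacked 3750 (a sketch only; interface numbers, the channel-group ID and the access VLAN are assumptions):

    interface range GigabitEthernet1/0/1 - 2
     description LACP bond to SAN node 1
     switchport mode access
     switchport access vlan 5          ! assumed iSCSI VLAN
     channel-group 1 mode active       ! LACP, matching the LeftHand bond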

  • Changing STP mode from MSTP to RSTP on two stacked PowerConnect 6224s configured for iSCSI, during normal operation

    Hello

    I'll be doing a firmware upgrade during normal operation on a stack of 6224s running MSTP. I am aware of Dell's current recommendation to run RSTP on switches configured for iSCSI traffic connected to an EqualLogic SAN.

    I intend to set up another stack with two 6224s for failover and then perform the upgrade on the "old" stack. My question is whether it's possible to run MSTP on the 'old' stack with RSTP running on the new stack when a LAG is configured between the two stacks?

    Another option would be to first reconfigure the 'old' stack from MSTP to RSTP, if that is possible without interrupting traffic between the hosts and the SAN?

    Guidance on this subject would be greatly appreciated

    Cree

    Multiple Spanning Tree Protocol is based on RSTP and is backward compatible with RSTP and STP. So, you should be able to run MSTP on the old stack and RSTP on the new one.
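    If you do end up reconfiguring the old stack, the spanning-tree mode on a 6224 is a single global setting, roughly like this (a sketch; expect a brief topology re-convergence when the mode changes):

    console(config)# spanning-tree mode rstp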

  • NIC teaming for iSCSI

    Hello

    I use ESX 4.0 on Dell R710s with an EqualLogic iSCSI SAN.  Currently I have iSCSI set up on 2 NICs to balance the load with "Route based on the originating virtual port ID".  What is the proper way to configure load balancing to achieve higher bandwidth for iSCSI?  Should I use another method?  There doesn't seem to be much balancing, because it appears that only one NIC is used at a time.  I pushed the system very hard to see if the second NIC would take traffic, but it didn't.  I enabled the load-balancing option on the iSCSI vSwitch.

    Is it better if I set up 2 separate iSCSI connections to the SAN for better bandwidth?  Is that a supported method?

    Thank you

    Load balancing only gives you higher performance if you have more than one TCP conversation; it balances the conversations across the links. Since your host's iSCSI traffic is a single conversation, it will always use one link.

    The way to generate more than one conversation in the ESX 4.0 world is MPIO. You can do it on the same virtual switch (and it SHOULD load balance there), or even better, put them on separate virtual switches and follow the published guidance on designing a solid, redundant iSCSI network.
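    For reference, once you have multiple iSCSI sessions, the Round Robin PSP can be set per device on ESX 4.0 with something like this (the naa identifier is a placeholder for your EqualLogic volume):

    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR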

    Jered Rassier

    * EqualLogic Certified Technical Professional

    * Dell Enterprise Foundations Certified Professional v.2

    ## If you found my post answered your question or was helpful, please mark it as such.

  • How can a no-drop policy be configured for iSCSI traffic on a Nexus 5548UP? Are there side effects?

    How can a no-drop policy be configured for iSCSI traffic on a Nexus 5548UP? Are there side effects?

    Hello

    Side effects depend on your network config, but I can tell you how to configure a no-drop policy for iSCSI traffic...

    We have a three-stage configuration...

    1. QoS class - classifies the traffic first

    2. Queuing (input/output) - this is where you reserve bandwidth for, or police, the traffic

    3. Network QoS - where you mark or set the MTU for the classified traffic on the backplane/fabric of the Nexus

    (config)# class-map type qos myTraffic            // match iSCSI traffic
    (config-cmap-qos)# match protocol iscsi

    (config)# policy-map type qos myQoS-policy        // set qos-group 2 on iSCSI traffic so it can be recognized
    (config-pmap-qos)# class myTraffic
    (config-pmap-c-qos)# set qos-group 2

    (config)# class-map type network-qos myTraffic
    (config-cmap-nqos)# match qos-group 2

    (config)# policy-map type network-qos myNetwork-QoS-policy
    (config-pmap-nqos)# class type network-qos myTraffic
    (config-pmap-nqos-c)# pause no-drop
    (config-pmap-nqos-c)# mtu 2158
    (config-pmap-nqos-c)# show policy-map type network-qos myNetwork-QoS-policy

    (config)# class-map type queuing myTraffic
    (config-cmap-que)# match qos-group 2

    (config)# policy-map type queuing myQueuing-policy
    (config-pmap-que)# class type queuing myTraffic
    (config-pmap-c-que)# bandwidth percent 50
    (config-pmap-que)# class type queuing class-default
    (config-pmap-c-que)# bandwidth percent 25
    (config-pmap-c-que)# show policy-map type queuing myQueuing-policy

    (config)# system qos
    (config-sys-qos)# service-policy type qos input myQoS-policy
    (config-sys-qos)# service-policy type network-qos myNetwork-QoS-policy
    (config-sys-qos)# service-policy type queuing input myQueuing-policy
    (config-sys-qos)# service-policy type queuing output myQueuing-policy
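    Once applied, you can check the result with something like:

    # show policy-map system
    # show queuing interface ethernet 1/1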

    Let me know if you have any concerns.

  • Trying to reinstall CS3 on my MacPro which is not online. It runs OS 10.5.1.  It is a dedicated computer for printing on an Epson 7900 and I intentionally keep the older CS3 for this purpose. My hard drive crashed with all my software on it, then

    Hello - I'm trying to reinstall CS3 on my MacPro, which is not online. It runs OS 10.5.1.  It is a dedicated computer for printing on an Epson 7900 and I intentionally keep the older CS3 for this purpose (compatibility).  My hard drive crashed with all my software on it, so I replaced the drive and am now trying to reinstall CS3. How can I do this without an internet connection? It will not take my original activation codes, and this computer is not online. Any pointers out there?

    Contact Adobe support by clicking here and, when available, click on "still need help": https://helpx.adobe.com/contact.html

  • How to configure NIC bonding for iSCSI

    I have a few PowerEdge 2950s with four NICs in them, and I'm trying to use NIC bonding on 3 of them for iSCSI.

    Are there articles or KB links explaining how to do this (if it is possible)?

    Or just point me to the right section in the vSphere client and I can figure it out... I've found nothing by poking around, so I wonder if it's just not possible?

    Hello

    The best answer here is that NIC bonding for iSCSI traffic is not supported; the various multipathing guides are where you should look.

  • Error after creating a Service Console on vSwitch1 for iSCSI

    For iSCSI, I created vSwitch1, put one NIC on it, and assigned the IP address 10.1.1.1 with default gateway 10.1.1.2 (image 0).

    The iSCSI host is at 10.1.1.2; the two are connected directly.

    vSwitch0 is 192.168.1.201 with default gateway 192.168.1.254, on another NIC.

    I clicked Properties for vSwitch1 -> Add -> Service Console,

    entered the IP address 10.1.1.1 and subnet mask,

    and did not change the Service Console default gateway (image 1).

    Finally, I got an error message (image 2) and the new Service Console could not be created.

    So I changed the Service Console default gateway to 10.1.1.2 (image 3), and got an error message too.

    George,

    You can't have the same IP address on the Service Console and a VMkernel port!

    "Network configuration for software iSCSI storage" describes how to configure iSCSI.

    André

  • Right network switch for iSCSI

    I am looking at a switch for an entry-level iSCSI SAN and I'm trying to find the right sort of numbers for the speed and the size of the MAC address table.  My storage network is an HP MSA2324i array connected to two HP DL380 G6 servers using iSCSI.

    I have looked at the HP ProCurve Switch 2824 and the Linksys SRW2016.  I have used the Linksys before in the 24-port model and found it performs well for a 30-user network with VLANs, but I want to know if it will support storage network traffic.

    The HP ProCurve I know will be good, but it is three times the price and I am on a limited budget.

    Does anyone have any advice on a switch that must support traffic from 15 servers, including Exchange 2010 and SQL Server 2005 with DR and HA?

    I would struggle to see MAC address exhaustion as a problem. A storage switch should be isolated, and with 15 servers + SAN you have 16 MAC addresses. Even something very entry-level should handle this.

    What you do need to make sure of is that the standard iSCSI recommendations are supported (see the sketch after this list):

    • Jumbo frames

    • Flow control

    • Rapid STP

    • A backplane fast enough to run at full speed on all ports

    • Good management features.
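    For illustration, the first three items look roughly like this on a Cisco Catalyst (a sketch only; commands and supported values vary by platform, and the Linksys/ProCurve equivalents differ):

    system mtu jumbo 9000                  ! jumbo frames, global; needs a reload
    spanning-tree mode rapid-pvst          ! rapid STP
    interface GigabitEthernet0/1
     flowcontrol receive desired           ! honour pause frames from the SAN/hosts
     spanning-tree portfast                ! edge port towards server/SAN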

  • Opinions on iSCSI vs NFS for a 3100e & Catalyst 2960-S?

    I'm setting up my first SAN, a 3100e with 2 NICs per SP.  We chose a stack of Cisco 2960-S switches for redundancy at the network level, and during the planning phase I had chosen to use NFS.  The 3100e does not support deduplication with iSCSI (can anyone confirm?), and performance is roughly equal in this environment.  It is 3 hosts with approximately 15 production VMs; I expect a usage rate of 20% per host based on statistics collected from the current production environment.

    The glitch is that the 2960-S is limited to 6 port channels, even in a stack!  Initially my plan, drawn up with a sales engineer, was simple enough: create a port channel on each host for storage traffic and one for VM and vMotion/HA traffic.  Each of them would be 2 gigabit NICs in a dedicated VLAN.  But now that I have only 6 port channels to work with, what is the best solution?  I would go with NFS if possible, but I can't figure out a good way to provide high availability and load balancing at the network level (yes, I know the effectiveness of IP hash in a port channel is questionable).

    In the past, I have set up iSCSI multipathing in test environments with good results, but it is a little more complex than I want to get into for such a small environment, and we would lose deduplication.

    So, back to the original question: is NFS possible, highly available, without link aggregation?  I am referring to each element of the stack: host, network and SAN.  Is there another method you would recommend, and if so, why?

    A few thoughts I had:

    Wouldn't it be better to put the vMotion/HA NICs on access ports with 1 NIC in standby mode, and use the port channels for NFS instead?  Once the environment is fully migrated, I expect vMotion to happen only during failures and maintenance windows.

    If I assign an IP address to an NFS store on SP A and it fails, will SP B remain passive until the failure and then take over that IP/share?  Or will the NFS store appear twice in my list of datastores?

    Thanks for your comments!

    Here is my bad attempt at a picture to help visualize this.

    I had to redact the names and IP addresses.

    MGMT uses vmnic1 as primary and vmnic5 as standby. It is on VLAN 125.

    vMOTION uses vmnic5 as primary and vmnic1 as standby. It is on VLAN 126.

    vmnic1 and vmnic5 are trunked at the physical switch level to allow VLANs 125 & 126 (see the sketch below).
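    Incidentally, that active/standby split can also be scripted; on ESXi 5.x and later it would look roughly like this (a sketch; port group names are the ones above):

    esxcli network vswitch standard portgroup policy failover set --portgroup-name=MGMT --active-uplinks=vmnic1 --standby-uplinks=vmnic5
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMOTION --active-uplinks=vmnic5 --standby-uplinks=vmnic1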

  • Create a vSwitch for iSCSI using a 10 Gb HBA

    Hi all.

    It's been a while since I installed a new system. I have a basic configuration for testing: a single server with a 10 Gb HBA (HP 4000 G2 SFP+) connected directly to a SAN using a DAC cable.

    My question is: should I see the HBA listed as hardware, so that I can add it to the iSCSI vSwitch that I created? I only see my 1 Gb NICs at the moment. So either I'm missing something or my card is not compatible.

    If it's iSCSI, it will appear as a network adapter. If it does not appear as a NIC, you should check whether HP has a driver for the exact model of the network card.
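    To check what the host actually sees, you can list the physical NICs from the CLI (on older ESX the equivalent is esxcfg-nics -l); the 10 Gb adapter should show up once a driver claims it:

    esxcli network nic list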

  • Why several VMkernels for iSCSI multipathing?

    I'm a little confused about iSCSI multipathing. What is the reason for creating two VMkernel interfaces and then using command-line tools to bind them to a new vmhba? How is that different from using a software iSCSI initiator on a vSwitch with multiple vmnics connected to different physical switches?

    There is the same discussion in a Russian VMUG thread.

    iSCSI multipathing is not about links; it's about pairs - initiators (vmk interfaces) and targets.

    If you have two targets:

    • and you need failover only - 1 vmk is enough (NICs in active/standby mode)
    • if you need load balancing
      • and you can use link aggregation + IP hash - 1 vmk is sufficient (PSP is Round Robin and NICs in active/active mode)
      • if you cannot use link aggregation - 2 vmks are needed (see the sketch below).
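
    A minimal sketch of the 2-vmk variant on ESX/ESXi 4.x (IPs, port group names and the vmhba number are assumptions; each iSCSI port group must first be pinned to a single active vmnic):

    esxcfg-vmknic -a -i 10.1.1.11 -n 255.255.255.0 iSCSI1   # creates vmk1 on port group iSCSI1
    esxcfg-vmknic -a -i 10.1.1.12 -n 255.255.255.0 iSCSI2   # creates vmk2 on port group iSCSI2
    esxcli swiscsi nic add -n vmk1 -d vmhba33               # bind both vmks to the software iSCSI adapter
    esxcli swiscsi nic add -n vmk2 -d vmhba33
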
  • Network configuration for iSCSI and vMotion

    Hello

    I have an ESX host configured with iSCSI storage and am currently working out the best way to assign my NICs. I have a vSwitch with four VMKs and two NICs:

    http://communities.VMware.com/message/1428385#1428385

    I also have an additional vSwitch for vMotion.

    vSwitch3

    - VMkernel
    - Service Console 2
    - vmnic6
    - vmnic7

    vmnic6 and vmnic7 are both on the SAN.

    After adding the new VMkernel port and enabling vMotion, I was wondering why it has not shown up as an additional path to the storage (let me know if this should be a separate question). Then I ran "esxcli swiscsi nic list -d vmhba33" and, sure enough, only the first four VMKs were listed.

    Why is the new VMkernel port not automatically bound to vmhba33?

    Would it be a bad idea?

    Cheers

    Just to play devil's advocate, why shouldn't vMotion and SAN traffic be on the same link, though?

    iSCSI traffic MUST have low latency and no errors.

    vMotion can create traffic spikes that could cause problems for iSCSI traffic.

    Any idea why it does not automatically bind, though?

    Can you vmkping each of the EQL IPs?

    Did you add each VMkernel interface to the iSCSI initiator with a command like this?

    esxcli swiscsi nic add -n vmk0 -d vmhba34

    André

  • Increase throughput for iSCSI with multiple NICs

    I'm implementing a new ESXi 6 host using a Synology NAS as an iSCSI target. I have the 4 NICs of the Synology bonded into 1 connection, so it has 4 Gbps of throughput. On my host, I have two NICs dedicated to iSCSI traffic. So far, I have put the two NICs under 1 vSwitch and have been playing with teaming patterns. I put both NICs on 1 VMkernel port and set them as active adapters. However, when I watch the iSCSI network traffic, I only see traffic on one network adapter. This of course caps my speed at 1 Gbps. I would like to be able to bind several NICs so I can get more throughput. I tried changing the path selection for the datastores to Round Robin, so at least two NICs are active. But that still leaves me with two individual 1 Gbps connections instead of one 2 Gbps connection. Is it possible for me to get there, or am I better off just having several load-balanced 1 Gbps NICs?

    I'm not familiar with the Synology. However, what is generally done - where the storage system supports it - is to set up several uplinks on the storage/target side as well as on the initiator/ESXi side, each with their own IP address. Once configured, you can use multiple paths to access the target. Depending on the capabilities of the target, Round Robin (if supported) will send traffic through all the links, i.e. use the bandwidth of all of them (see the sketch below).
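    On the ESXi 6 side, the usual steps look roughly like this (vmk numbers, the adapter name and the naa identifier are placeholders; each iSCSI VMkernel port should be pinned to a single active vmnic first):

    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1   # bind both iSCSI vmks to the software adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=1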

    André

  • Migrating software iSCSI to a vDS

    So I'm going through the vDS migration guide, but I'm still a little fuzzy on how I need to migrate my software iSCSI VMkernel ports. Here is my setup...

    I have 4 hosts in a test cluster. Two network ports on each host are dedicated to vSwitch1, which has a VMkernel port with an IP address in the 10.5.33.x range.

    How can I disconnect my software iSCSI initiator, or migrate the VMkernel IP address from a vSS to a vDS, without causing a conflict?


    While adding your ESX host to the vDS, there is an option to migrate the Service Console & iSCSI VMkernel ports to the vDS. If you have already added this particular host to the vDS, remove the host from the vDS first.

    vcbMC - 1.0.6 Beta

    Lite vcbMC - 1.0.7

    http://www.no-x.org
