FC and iSCSI

I have an IBM BladeCenter with 4 blades running ESXi 5.1 Enterprise.

The blades are connected to storage via an FC (Fibre Channel) switch.

I added a new SAN to the BladeCenter, but due to some unexpected port-licensing problems with my Cisco MDS 9000 FC switch, I can connect only one of the two FC links to my blades right now.

The SAN also supports iSCSI, so my question is: can I set up multipathing using both FC and iSCSI, or is that a bad idea?

Hi friend,

Here's the thread that addresses your question:

Fibre Channel - is it possible to multipath FC and iSCSI? - Server Fault

Tags: VMware

Similar Questions

  • 2 questions - Group Manager and iSCSI

    We have a single PS6500 on firmware 6.0.5. Since 3 days ago, Group Manager freezes when I click on the member icon. I can get into all the rest of the GUI with no problems. I've had this firmware installed for about a month, so I don't think that is the problem. Has anyone else seen this?

    2nd issue - we have VMware hosts connected to the EQ through private switches. The VMware hosts have separate vSwitches for vMotion and for Windows iSCSI volumes. vMotion is on the 10.2.0 subnet and the iSCSI volumes are on the 10.2.1 subnet. The EQ is on 10.2.0 with a 255.255.0.0 mask. The switches are set up with separate VLANs for iSCSI and vMotion. So far in Group Manager I have allowed 10.2.*.* on each volume. I started trying to change this so that the VM volumes only allow connections from 10.2.0.* (the vMotion subnet) and the Windows volumes only allow connections from 10.2.1.*. Currently I do not use CHAP or iSCSI initiator restrictions. Here is an example for a Windows volume.

    But here is what I get in the Connections tab for this volume, even after rescanning the host storage adapters:

    So either it is completely ignoring the IP restriction settings, or my EQ has a problem. Shouldn't I only have 10.2.1 connections to this volume?

    Any ideas? Thank you.

    None of the fixes listed in the 6.0.6 release notes apply to my EQ.

    BTW, you can add a CHAP ACL to an EQL volume live. This will not affect existing sessions. Just make sure that you first create the CHAP user on the array. Then, after setting the volume ACL, set the ESX iSCSI initiator to use CHAP.
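    As a rough sketch of that last step, the ESX software initiator's CHAP settings can be set from the command line (the adapter name, CHAP user and secret below are placeholders, not from this thread):

    ```shell
    # Hypothetical example: require CHAP on the ESXi software iSCSI initiator.
    # vmhba33, chapuser and the secret are placeholders - substitute your own.
    esxcli iscsi adapter auth chap set --adapter=vmhba33 \
        --direction=uni --level=required \
        --authname=chapuser --secret=SomeChapSecret1
    # Rescan so new sessions pick up the credentials
    esxcli storage core adapter rescan --adapter=vmhba33
    ```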

    If you take a look at the RPM, you can find a few common settings left at their defaults. Some of them will improve performance, others stability.

    vMotion has no EQL access at all. It is strictly for copying the memory image of a virtual machine from one ESX host to another.

    Storage vMotion does involve the EQL, but it does not use the vMotion port for anything.

    vMotion traffic must be completely isolated, because the contents being transferred are not encrypted. So no VM network traffic should be on that vSwitch, and the physical switch should be isolated too.

  • Recommendations for a SCSI-to-iSCSI bridge

    Hi guys,

    Currently I have an ADIC FastStor 2 tape autoloader connected to a physical machine running Backup Exec 12.5, which backs up our running VMs with in-guest agents. Our ESX hosts run ESX 4.1 on 4 HP DL380 G5 servers.

    We are looking into converting to a more disk-focused VM backup over the next 9 months, but we have a short-term need to upgrade our current solution quickly so that we can retire the hardware (it has long been out of support and our backups are starting to fail). For this we are looking at a quick upgrade to BE 2010 R2 (since we already have support) and putting the media server on a virtual machine. What remains to be decided is how to connect to the tape library. A SCSI-to-iSCSI bridge seems to be the best option to do it quickly and with the least headache.

    Does anyone have experience with these devices? Recommendations for a particular one? The current tape drive uses a 68-pin Ultra2 Wide SCSI connection.

    Thank you

    ...After researching the forums and seeing the difficulty people have, I wasn't convinced.

    The 'problem' posts I see most are from people using non-Adaptec SCSI cards and/or non-parallel-SCSI drives. The most recent posts I've seen are for SATA or SAS tape drives. Basically people trying to use 'new' tape drives, rather than reusing 'old' tape drives.

    Speed is the main concern here. The money is available, but it can't be a ridiculous amount since it is mostly going to be thrown away next year (otherwise I'd just go straight for an iSCSI tape library).

    Then I recommend the SCSI card, because it would be cheaper than a bridge. Or, if you have an existing workstation, you can re-task it to host your tape drive for a while until you get the rest of your new backup equipment. Used and refurbished workstations can be had for well under $500, often as low as $200, and would be more than powerful enough to host a tape drive. Because you can reuse the workstation after the new equipment arrives, you could get something above the bare minimum for this project. Maybe that's a better use of the money than an iSCSI bridge you are going to 'throw away'.

  • VCB server with both FC and iSCSI connections

    Hello.

    Does anyone know if a VCB implementation that has both FC and iSCSI LUNs presented to the proxy server is possible/supported? I see no technical reason why it wouldn't work, but I'm trying to find out whether or not this is a supported configuration.

    Thank you!

    I don't see why that would be a problem from a features standpoint, but if you didn't mind the performance hit you could just use the NBD transport mode and not worry about presenting LUNs. I guess hotadd would probably work the same way.

    If you have found this or any other post useful, please consider using the helpful/correct buttons to award points.

  • How to save and reuse the network and iSCSI configuration when reformatting?

    We have to reformat all our physical servers and reinstall vSphere (Enterprise edition).

    We would like to avoid rebuilding all the network and iSCSI configuration on each physical machine.

    Is there a way to save the configuration and reapply it to the reformatted server?

    Thank you

    Tonino

    Hello

    You can save your configuration with the vicfg-cfgbackup.pl command (run against each host with --server) from the vSphere CLI.

    Frank
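    For reference, a minimal sketch of the backup and restore cycle with that command (the hostname and file paths are examples only):

    ```shell
    # Save the ESXi host configuration (vSphere CLI / vMA)
    vicfg-cfgbackup.pl --server esxi01.example.com --username root \
        -s /backups/esxi01.cfg

    # After reinstalling, restore it (the host reboots afterwards)
    vicfg-cfgbackup.pl --server esxi01.example.com --username root \
        -l /backups/esxi01.cfg
    ```

    Note that a restore is only expected to work when the reinstalled host runs the same ESXi version/build as the backup.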

  • Net M4110x - management network and iSCSI. Clarification of the configuration steps. The documentation is ambiguous at best.

    Hi guys,

    I'm having a lot of trouble translating what I know about the PS6000 array to the new M4110x. Here's what I'm building:

    I want my iSCSI traffic completely isolated from all other traffic, and I want to use the CMC network to manage the array. It should be a simple configuration, but Dell's documentation for it is worse than useless. It confuses everyone who reads it. Why is this?

    It seems that I should be able to assign the management IP addresses using the CMC, according to Dell:

    Step 1.  Initialize the storage

    * Once in the CMC, right-click the storage and open the storage initialization GUI.

    * Member name: MY_SAN01
    * Member IP: 192.168.101.10
    * Member gateway: 192.168.101.254
    * Group name: MY_SAN
    * Group IP address: 192.168.101.11
    * Group membership password: groupadmin
    * Group admin password: groupadmin

    It sounds simple enough, and when I apply this I assume I will be disconnected from my M4110x, simply because it currently resides on a separate network (net 2 in the image above). Now how do I set up the management IP address on my CMC network (net0 in the picture above)?

    Step 2.  Set the management port IP

    According to the documentation of Dell, I have:

    To set the management port:

    * Open a telnet (ssh) session on a computer or console that has access to the PS-M4110 array. The array must be configured beforehand.
     
    * Connect to the PS-M4110 module using the following racadm command: racadm server 15 connect
     
    * Log in to the PS-M4110 array as grpadmin

    Once I am in:

    Activate the management controller port using the following commands in the CLI:
    0. > member select MY_SAN01
    1. (array1)> eth select 1
    2. (array1 eth_1)> ipaddress 10.10.10.17 netmask 255.255.255.0
    3. (array1 eth_1)> up
    4. (array1 eth_1)> exit
    5. (array1)> grpparams
    6. (array1 (grpparams))> management-network ipaddress 10.10.10.17

    (array1 (grpparams))> exit

    Is my interpretation correct?  Now my questions:

    1. In step 2, substep 1 - how do I know which Ethernet interface to use? Does step 1 automatically assume eth0?

    2. Am I correct in using the same IP address in both substep 2 and substep 6 of step 2? Or do I have to assign different IP addresses for these? 10.10.10.18 maybe.

    3. In step 2, substep 6, there doesn't seem to be a netmask - is that correct?

    4. Compared with the PS6000E - there I set up an IP address for each controller (so 2) and then assigned an IP address for the group. That's 3 IP addresses. On this M4110 it seems I have only one controller. Is that correct? The specifications make a point that there are 2 controllers. What happened to the 2nd controller's IP address?

    UPDATE

    I intend to build a VMware cluster using Dell's multipathing module, and I built one at the DSC, but a Dell technician set up the array initially and did not configure a dedicated management port. That configuration required routing management traffic over the iSCSI net. It is not recommended, and I don't want to set it up that way.

    Currently this is a blocking problem and I need to get past it ASAP. I work with a large systems integrator in Texas and plan to order these systems built this way from them. That means I must be able to explain to them how to proceed. This issue is standing in the way of progress, and I really hope I can get a satisfactory answer from this forum. Thanks for any helpful answers.

    I think I have the answers to my own questions:

    1. YES.  Step 1 automatically assumes eth0. There are TWO Ethernet interfaces; eth1 is disabled by default, and unless you use step 2 to set the management port, that second Ethernet interface is never used.

    2. NO.  I can't use the same IP address for both lines. In substep 6 I need to use a different IP address on the same network; 10.10.10.18 would work fine.

    3. YES.  It is correct. Substep 6 assumes the netmask I provided in substep 2.

    4. It's tricky.  There is NO WAY to configure active/active on these arrays. There are 2 controllers, but one 'sleeps' unless the other fails. The IP address is actually assigned to an abstraction layer, which maintains it. When one controller fails, the other 'wakes up' and just starts accepting traffic; it doesn't care what its IP address is.

    Another point.  Now that my array is initialized and my interfaces are configured, I need to know which IP address to point my ESXi hosts at for their storage. Use the group IP address assigned in step 1. It is 192.168.101.11 (there was a typo in the original post).
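    To illustrate that last point, pointing an ESXi host at the group IP might look like this (the vmhba name is a placeholder):

    ```shell
    # Hypothetical example: add the EqualLogic group IP as a send target
    # and rescan. vmhba33 stands in for your software iSCSI adapter.
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 \
        --address=192.168.101.11:3260
    esxcli storage core adapter rescan --adapter=vmhba33
    ```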

  • Hyper-V and iSCSI network

    Hello

    We are evaluating a migration from VMware to Hyper-V.

    I am trying to understand best practices for guest iSCSI networks.

    I have 4 dedicated 1 Gbit physical ports on the host for iSCSI traffic.

    I would like to use all 4 for host iSCSI (VHDX volume) traffic.

    Now I thought of sharing 2 of them by creating 2 logical switches in VMM and adding 2 virtual network adapters for the host to use.

    The new virtual network adapters show as 10 Gbit, and I don't see an option to change them to 1 Gbit. It now seems the system prefers the 10 Gb adapters; my other two physical NICs are no longer used.

    I tried making all 4 ports virtual, but somehow ASM 4.7 EPA does not see the virtual adapters. It just says "no network adapters found" when opening the MPIO settings.

    Should I just drop this idea of sharing, and use 2 ports for the host and 2 for guest iSCSI, or is there a workaround?

    It is recommended to dedicate at least 2 interfaces on the host to the iSCSI network.  In addition, you should install the Dell EqualLogic Host Integration Tools for Microsoft and enable the MPIO feature.  To enable MPIO in the guest operating system, you must create at least two virtual switches that are bound to the physical SAN adapters on the Hyper-V host.  Virtual machines must be configured with at least two of these virtual switches.  Then, from the guest operating system, configure the interfaces with the iSCSI network IP, subnet, etc.  You should also install the Dell EqualLogic Host Integration Tools for Microsoft and the MPIO DSM feature in the guest operating system, if it is running Windows.  If you use jumbo frames, ensure that all network adapters used for iSCSI (physical NICs, virtual NICs, guest OS NICs) have jumbo frames enabled.

    Regarding ASM v4.7 EPA not seeing network adapters for MPIO - there is a known ASM/ME v4.7 bug on Windows Server 2012 R2 related to the EPA build.  The MPIO configuration is likely fine (you can verify it through the EqualLogic tab of the Microsoft iSCSI initiator's MPIO settings); it's just that ASM/ME has a display problem.  This bug has been fixed in v4.7 GA of the HIT kit for Microsoft, which is due to be released very soon.

  • UCS FI and iSCSI storage

    We are poised to implement a UCS B-Series. I have a question about the FI 6248UP. I have read about and used the UCS Manager emulator, and noticed that you can configure FI ports as appliance ports for storage. Is that limited to a certain storage protocol or vendor? We use Dell EQ and plan to connect an EQ 6510X there. I was wondering if that is supported and whether upstream iSCSI traffic would be able to reach the storage?

    The 6248UPs will be uplinked to a pair of Catalyst 4506-E switches. We have an IBM chassis whose servers should be able to access the EQ6510X too. I guess I just need to trunk the iSCSI VLANs to the 6248UP and the servers would have access?

    Thank you!

    Hi Cowetac,

    Yes - not all storage arrays are supported by UCS, but your Dell EqualLogic is supported. If you have questions about the compatibility of other storage arrays, you can take a look at the UCS storage interoperability matrix (see link below).

    UCS Storage Interoperability Matrix (Table 10-2, UCS B-Series storage)

    http://www.Cisco.com/en/us/docs/switches/Datacenter/MDS9000/interoperability/matrix/Matrix8.html

    I guess I just need to trunk the iSCSI VLANs to the 6248UP and the servers would have access?

    It depends. If you use an appliance port on the UCS to connect directly to the storage, you can use a single VLAN.

    If you are connecting via Ethernet uplinks through a switch, you need to configure your switch ports as trunk ports.

  • Nested ESXi 5.1 and iSCSI Shared Storage

    People:

    I am unable to get my nested ESXi servers to see the iSCSI shared storage that I set up for them.  I use iSCSI for the ESXi host that holds the ESXi guests, so I have a working iSCSI configuration to use as my reference.

    Here's what I have for the host network config:

    iSCSI targets

    + IP1: 172.16.16.7 (subnet for host IP storage)

    + IP2: 172.16.17.7 (subnet for guest IP storage)

    vSwitch with vmnic2 as the IP-storage NIC

    + "IP storage" port group containing the virtual NICs of both nested ESXi hosts

    + VMkernel port for host iSCSI connections: 172.16.16.28

    Here's what I have for the guest network config:

    + Guest virtual NIC in the "IP storage" port group

    + vSwitch with only a VMkernel port for the guest's iSCSI connections: 172.16.17.38

    From the iSCSI target host, I am able to ping 172.16.16.28.  However, I am unable to ping 172.16.17.38, and here's the really confusing part - I am able to get an ARP reply for that IP with the correct MAC of the NIC associated with the VMkernel port!  This eliminates all kinds of potential configuration issues (i.e. wrong NIC, wrong IP, etc.).

    The firewall on the host indicates that the software iSCSI client's outgoing port 3260 is open.  A packet capture on the iSCSI target host reveals NO traffic from the guest's VMkernel IP when rescanning the storage adapters.

    What should I look at?  The guest configuration looks identical to the host's, yet one works and the other doesn't...

    -Steve

    In the process of debugging, I turned on promiscuous mode on the vSwitch with vmnic2 (the IP-storage NIC) on the host ESXi, and poof!  Everything magically started working.  iSCSI traffic should be unicast, so I don't see why promiscuous mode would be necessary, but I can't argue with the observed results.  Clearly I have more to learn about nested ESX - that's why I'm playing with it.  :-)

    -Steve
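    For anyone hitting the same wall, the workaround Steve describes can also be applied from the outer host's command line (a sketch - the vSwitch name is an example):

    ```shell
    # Allow promiscuous mode on the vSwitch carrying the nested hosts'
    # storage traffic (vSwitch1 is an example name)
    esxcli network vswitch standard policy security set \
        --vswitch-name=vSwitch1 --allow-promiscuous=true
    # Verify the change
    esxcli network vswitch standard policy security get --vswitch-name=vSwitch1
    ```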

  • Broadcom BCM57800 10 Gb NIC and iSCSI

    Hi guys,

    I have an ESXi 5.1 server with a Broadcom NetXtreme II BCM57800 NIC and would like to use my storage via iSCSI.

    This adapter supports iSCSI offload, but I can't tell whether it is also mandatory to add the "iSCSI software adapter" in ESXi or not.

    Thanks in advance.

    Hi gianlucab,

    You would only use the iSCSI software adapter if you need the ESXi kernel itself to perform the iSCSI operations. You would use it with Ethernet adapters that do not support iSCSI offload.

  • Port groups and iSCSI connections

    Hello

    We have an ESX 5.1 server connected to an iSCSI-based storage device.

    The server is connected using two iSCSI VMkernel ports.  Each NIC has a different IP address on the private iSCSI VLAN.  The configuration follows VMware best practices, using only one NIC per VMkernel port.

    We have a storage migration to another array coming up, but I have yet to install it.  I would like to know if it is possible to create two additional VMkernel port groups using the same NICs already connected to the iSCSI storage.  The new port groups would be on a separate VLAN and a different network, so it looks like I would be sharing the two NICs between two different iSCSI networks.

    As mentioned above, I have not installed the hardware yet, but testing it a bit it looks like I can add the port groups using the NICs, and I was able to bind them to the vmhbaXX storage adapter with the path status showing as "not used".  So I hope that when I connect the new array I will, all going well, be able to connect once I run a discovery?

    Any comments welcome.

    Thank you

    I don't see why you shouldn't be able to do what you describe. It's a little unusual, but it should work to create several VMkernel adapters on the same vSwitch and make the same active/unused configuration you already have, just with a different VLAN ID and IP address on the new adapters. Of course the two logical VMkernel adapters share the same physical network link, but you are fully aware of that.

    As long as it is not flagged in some manual or KB article as an unsupported solution, you should be fine.
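    If it helps, binding the additional VMkernel ports to the software iSCSI adapter and pointing it at the new array could look roughly like this (the adapter, vmk and IP names are placeholders, not from this thread):

    ```shell
    # Hypothetical sketch: bind the new VMkernel ports to the software
    # iSCSI adapter, then add the new array's discovery address and rescan.
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk4
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 \
        --address=10.0.1.10:3260
    esxcli storage core adapter rescan --adapter=vmhba33
    ```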

  • Nested ESXi hosts and iSCSI POC

    People,

    I want to set up a proof of concept with two hosts and two NAS units for my workplace. At this point I have set up a nested ESXi host using this VCritical guide - http://www.vcritical.com/2011/07/vmware-vsphere-can-virtualize-itself/

    This got me wondering. I have two virtual ESXi servers inside the main ESXi install, with all three hosts on the 192.168.0.x network. I then added a second network adapter on the "server", created a separate iSCSI network (192.168.1.x), and pointed it at a simple NAS. That went well. The problem is that the nested hosts cannot access the iSCSI network.

    It's all a bit above my pay grade, but I can't work out how to get the nested hosts onto the iSCSI network. I tried adding second network adapters to the virtual machines running the virtual hosts and assigning them to the 192.168.1.x network.

    Can anyone help shed some light? Does anyone understand my question or what I'm trying to do? I hope so!

    Thanks in advance

    HP

    Hello

    Does your primary host see the iSCSI network and connect to your NAS server?

    It should just be a case of creating a VM port group on that same vSwitch, with appropriate VLAN settings if necessary, and assigning your nested ESX hosts' vNICs to that port group. On the nested hosts themselves, you will need to set up the software iSCSI initiator etc. as usual.

  • Creating vMotion, management and iSCSI networks

    Hi all

    I have some doubts I would like clarified.

    First, vMotion and management traffic.

    I have two NICs. My question is: should I create one VMkernel port for each (adding the vMotion option to one and management traffic to the other), give each a different IP address, and put both in one vSwitch - for example, nic1 active and nic2 standby for vMotion, and the reverse for management, nic2 active and nic1 standby? With this I have two different networks, but each can use both NICs in case of failure.

    Or should I create a single VMkernel port, enable both options on it, and set both NICs active?

    Which is the better choice, or gives the better performance?

    I have 6 hosts in this cluster, so I need to do this on each of them.

    Second, iSCSI. I use QLogic adapters.

    On the iSCSI side, should I use VMkernel port binding? Or just a normal VMkernel port on my iSCSI VLAN, without defining any VMkernel port binding on the iSCSI software adapter?

    I have both running, but do not know which is the best way to get the best performance.

    Thank you

    JL

    Good day!

    In summary, you should do something like this:

    vSwitch0 - Mgmt & vMotion

    ====================

    vmk0

    --------

    Management

    Uplink 1: Active

    Uplink 2: Standby

    vmk1

    --------

    vMotion

    Uplink 1: Standby

    Uplink 2: Active

    vSwitch1 - iSCSI

    ====================

    vmk2

    --------

    iSCSI traffic

    Uplink 3: Active

    If you only have one uplink for iSCSI traffic, you don't have to set up port binding.  iSCSI port binding is for multipathing.  If you only have one NIC for iSCSI, you cannot have MPIO.  You need two or more NICs or uplinks for multipathing.

    Now, for the management and vMotion links, I suggest using only one active and one standby uplink for each type of traffic.  Although vSphere 5 has the multi-NIC vMotion feature, the idea of separating vMotion and management traffic is to keep vMotion, with its big traffic bursts, away from your management traffic, which happens to include your HA traffic (also important).  You certainly don't *have* to set it up like that, but keeping vMotion on its own physical link will keep it from trampling your management traffic.  In the case of a failed cable, I'm sure you would rather have a congested link than no management or vMotion traffic at all.

    All the best,

    Mike

    http://VirtuallyMikeBrown.com

    https://Twitter.com/#!/VirtuallyMikeB

    http://LinkedIn.com/in/michaelbbrown
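    Mike's vSwitch0 layout above could be sketched from the ESXi shell like this (the port group and vmnic names are assumptions, not from the thread):

    ```shell
    # Hypothetical sketch of the active/standby split for vSwitch0.
    # Port-group and vmnic names are examples - adjust to your host.
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name="Management Network" \
        --active-uplinks=vmnic0 --standby-uplinks=vmnic1
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name="vMotion" \
        --active-uplinks=vmnic1 --standby-uplinks=vmnic0
    ```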

  • vSphere 5 and iSCSI LACP

    Hello - this is my first post here, so please be gentle.

    I have been trying to find information about configuring LACP in vSphere/ESXi 5. Let me give a brief overview of my environment and the goals we are trying to achieve. We use 1 Gb NICs on Brocade VDX 6710 fabric switches.

    We have a small vSphere/ESXi 5 Standard edition cluster. We have 6 physical NICs in each host - 2 are teamed for the VM network, 2 are teamed for vMotion/management, and the last two are VMkernel ports on two separate subnets, connected using MPIO to our Compellent iSCSI SAN. This has been our test bench, and we use NIC teaming in ESXi 5 with "Route based on the originating virtual port ID" and no specific switch-side config. We have other servers using 802.3ad LACP, configured on both the host and the switch, that work very well - it gives us better failure protection since we use two switches and plug one link into each. We would like to do the same with the ESXi hosts.

    Our new project is to virtualize a larger number of the systems we currently run. We want to expand our VM usage to include a large (30 - big for us) number of SQL servers. The basic functions of these systems require a decent amount of back-end SAN I/O. The physical servers we would be virtualizing would pack in at a density close to 4:1, or up to 8:1, with this conversion. We are concerned that having just the 2 iSCSI NIC MPIO paths will not be sufficient to support the increased I/O load.

    We would like to know whether using LACP on the two iSCSI subnet connections, bonding 2+ NICs for each connection, is viable in ESXi 5 with iSCSI, and what configuration parameters we would need to set up to do this.

    In addition, this project would use VMware vSphere 5 Enterprise edition - would DRS or distributed switching introduce other complications or benefits for this configuration?

    Thanks for any helpful input or pointers to already-published documents.

    Scott

    I find that LACP/EtherChannel is rarely effective or useful in VMware environments.

    For iSCSI storage, my standard configuration is to use 2 uplinks with iSCSI port binding. Here are screenshots of the configuration.
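    As a rough outline of that two-uplink port-binding setup (all names and IPs below are illustrative, not from the thread):

    ```shell
    # Hypothetical sketch: two VMkernel ports, one active uplink each,
    # bound to the software iSCSI adapter for MPIO. Names are examples.
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name="iSCSI-1" --active-uplinks=vmnic4
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name="iSCSI-2" --active-uplinks=vmnic5
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    ```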

  • vDS and iSCSI VMKernel

    Is it preferable to use standard switches or vDS for iSCSI VMkernel ports?  What is the best way to go?

    Is there documentation describing the process somewhere?

    aacjao wrote:

    Could using vDS ease management, compared to doing this configuration on each host?

    For iSCSI VMkernel ports it will not really ease management, since you would still have to create unique "virtual interfaces" for each host, so it will be more or less the same amount of work.

    aacjao wrote:

    Is there any literature about setting this up on vDS?

    The official iSCSI manual only documents how to implement it on standard vSwitches: http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-storage-guide.pdf - page 76.

    For how to configure VMkernel interfaces on a distributed switch, you could see the same manual, page 32.

    Most of the best practical information on the web about iSCSI and VMware concerns ordinary vSwitches - which might make them easier to install and troubleshoot.
