UCS FI and iSCSI storage

We are poised to implement a UCS B-Series system. I have a question about the FI 6248UP. I have read about and used the UCS Manager emulator, and noticed that you can configure FI ports as appliance ports for storage. Is this limited to a certain storage protocol or vendor? We use Dell EqualLogic and plan to connect an EqualLogic 6510X. I was wondering whether that is supported, and whether upstream iSCSI traffic would be able to access the storage?

The 6248UPs will be uplinked to a pair of Catalyst 4506-E switches running VSS. We have an IBM chassis whose servers should be able to access the EQ 6510X too. I guess I just need to trunk the iSCSI VLANs to the 6248UPs and the servers would have access?

Thank you!

Hi Cowetac,

Not all storage arrays are supported by UCS, but your Dell EqualLogic is supported. If you have any questions about the compatibility of other storage arrays, you can take a look at the UCS storage interoperability matrix (see link below).

UCS Storage Interoperability Matrix (Table 10-2, UCS B-Series storage matrix)

http://www.Cisco.com/en/us/docs/switches/Datacenter/MDS9000/interoperability/matrix/Matrix8.html

I guess I just need to trunk the iSCSI VLANs to the 6248UP and the servers would have access?

It depends. If you use a UCS appliance port to connect directly to the storage, you can use a single VLAN.

If you connect through an upstream switch via the Ethernet uplinks, you need to configure your switch ports as trunks carrying the iSCSI VLAN(s).
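As a rough sketch, the upstream Catalyst trunk config would look something like the following (the interface number and VLAN ID are placeholders, not from the original post):

```
! Hedged sketch - adapt interface and VLAN IDs to your environment
interface TenGigabitEthernet1/1
 description Uplink to UCS FI-A
 switchport mode trunk
 switchport trunk allowed vlan 100
 ! VLAN 100 = iSCSI VLAN (placeholder ID)
```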

Tags: Cisco DataCenter

Similar Questions

  • Cisco UCS Direct Attached ISCSI Storage access upstream to storage array

    I'm doing my first installation of UCS with (1) 5108 chassis, (4) B200 M3 blades, and (2) 6248 fabric interconnects. The plan is to connect our EMC VNX directly to the fabric interconnects, as the chassis will be the only system accessing the SAN after migration from our current system. The fabric interconnects will be uplinked to (2) 4500-X switches configured in a VSS.

    In my research of best practices and general deployment guidelines, I noticed the need to create (2) separate iSCSI VLANs, one for fabric A and one for fabric B. I'm guessing that this requirement is due to the lack of STP, as the FI is configured in EHM (End Host Mode), but I was not able to find an exact answer. Any insight would be appreciated.

    My main question and concern: if I create a second iSCSI VLAN on our 4500-X and then create both of these VLANs on FIs A & B, will I be able to connect the SAN to the UCS as well as to our former VMware cluster (we are migrating our HP compute and storage environment to UCS, on vSphere 5.5) in order to facilitate the storage vMotions?

    I noticed the need for (2) separate iSCSI VLANs, one for fabric A and one for fabric B

    Are you referring to

    https://www.EMC.com/collateral/hardware/technical-documentation/h8229-VN...

    p 36

    If I understand your question: can a host attached to the 4500 (connected to the UCS FI) access LUNs on the VNX, which is directly connected to the UCS fabric interconnect via appliance ports (FI in EHM)? Yes! I would accept this for a migration, but not permanently, because it means that traffic crosses the fabric interconnect.

  • 2 questions - Group Manager and iSCSI

    We have a single PS6500 on firmware 6.0.5. Starting 3 days ago, Grp Mgr freezes when I click on the member icon. I can get into all the rest of the GUI with no problems. I've had this firmware installed for about a month, so I don't think that is the problem. Has anyone else seen this?

    2nd question - we have VMware hosts connected to the EQ through private switches. The VMware hosts have separate vSwitches for vMotion and for Windows iSCSI volumes. vMotion is on the 10.2.0 subnet and iSCSI volumes are on the 10.2.1 subnet. The EQ is on 10.2.0 with a 255.255.0.0 mask. The switches are set up with separate VLANs for iSCSI and vMotion. So far in Grp Mgr, I allowed 10.2.*.* on each volume. I started trying to change it so that the VM volumes only allow connections from 10.2.0.* (vMotion subnet) and the Windows volumes only allow connections from 10.2.1.*. Currently, I do not use CHAP or iSCSI initiator restrictions. Here is an example for a Windows volume.

    But here is what I get in the Connections tab for this volume, even after rescanning the host storage adapters:

    So either it is completely ignoring the IP restriction setting, or my EQ has a problem. Shouldn't I only have the 10.2.1 connections to this volume?

    Any ideas? Thank you.

    None of the fixes listed in the 6.0.6 release notes apply to my EQ.

    BTW, you can add the CHAP ACL to the EQL volume live.  This will not affect existing sessions.  Just make sure that you first create the CHAP user on the array.  Then, after setting the volume ACL, set the ESX iSCSI initiator to use CHAP.
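    On the ESX side, the CHAP step could be scripted roughly like this (a hedged sketch: the adapter name vmhba33 and the credentials are assumptions, and the CHAP user must already exist on the array):

    ```shell
    # Require one-way CHAP on the ESXi software iSCSI adapter (ESXi 5.x syntax)
    # vmhba33, the user name, and the secret are placeholders
    esxcli iscsi adapter auth chap set --adapter=vmhba33 \
      --direction=uni --level=required \
      --authname=eql-chap-user --secret=eql-chap-secret
    # Rescan so new sessions pick up the credentials
    esxcli storage core adapter rescan --adapter=vmhba33
    ```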

    If you take a look at the RPM, you can find a few settings commonly left at default.  Some of them will improve performance, others stability.

    vMotion does not access EQL at all.  It's strictly for copying the memory image of a virtual machine from one ESX host to another.

    Storage vMotion does involve EQL, but it does not use the vMotion port for anything.

    vMotion traffic must be completely isolated, because the contents being transferred are not encrypted.   So no VM network traffic should be on that vSwitch, and the physical switch should be isolated too.

  • FC and iSCSI

    I have an IBM Blade Center with 4 blades running enterprise 5.1 ESXI.

    The blades are connected to storage via FC (Fibre Channel) switch.

    I added a new SAN to the Blade Center, but due to some unexpected port licensing problems with my Cisco MDS 9000 FC switch, I can connect only one of the 2 FC connections to my blade right now.

    The SAN also comes with iSCSI, so my question is: can I have a setup with both FC and iSCSI multipathing?  Or is it a bad idea?

    Hi friend

    Here's the thread that talks about your request:

    fibre channel - is it possible to multipath FC and iSCSI? - Server Fault

  • groups and iSCSI connections to ports

    Hello

    We have an ESX 5.1 server connected to an iSCSI-based storage device.

    The server is connected using two iSCSI vmkernel ports.  Each NIC has a different IP address on the private iSCSI VLAN.  The configuration follows VMware best practices, using only one NIC per vmkernel port.

    We have a migration coming to another storage array, but I still have to install it.  I would like to know if it's possible to create two additional vmkernel port groups using the same NICs already connected to the iSCSI storage.  The new port groups would be on a separate VLAN and a different network, so in effect I would be sharing the two NICs between two different iSCSI networks.

    As mentioned above, I have not installed the hardware yet, but testing it a bit, it looks like I can add the port groups using the NICs, and I was able to bind the vmhbaXX to the storage adapter with the path status as "not used".  So I hope that when I connect the new array I will be able to connect once I run a discovery?

    any comments welcome.

    Thank you

    I don't see why you shouldn't be able to do what you describe. It's a little unusual, but it should work to create several VMK adapters on the same vSwitch and make the same active/unused configuration you already have, just with different VLAN IDs and IP addresses on the new adapters. Of course, the two logical VMK adapters share the same physical network link, but you are fully aware of that.

    As long as it is not indicated in some manual or KB article as an unsupported solution, you should be fine.
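    If it helps, the extra vmkernel port groups could be created from the ESX command line roughly as follows (a sketch only - vSwitch1, the port-group name, VLAN 101, and the IP address are all assumptions):

    ```shell
    # Add a second iSCSI port group on a new VLAN, reusing the existing vSwitch
    esxcfg-vswitch -A iSCSI-B1 vSwitch1           # create the port group
    esxcfg-vswitch -v 101 -p iSCSI-B1 vSwitch1    # tag it with VLAN 101
    esxcfg-vmknic -a -i 10.0.101.21 -n 255.255.255.0 iSCSI-B1   # add the vmkernel NIC
    ```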

  • vSphere 5 and iSCSI LACP

    Hello - this is my first post here so please, be gentle.

    I've tried to find information about configuring LACP in vSphere/ESXi 5. Let me give a brief overview of my environment and the objectives we are trying to achieve. We use 1 Gb NICs on Brocade VDX 6710 fabric switches.

    We have a small vSphere/ESXi 5 Standard Edition cluster. We have 6 physical NICs in each host - 2 are teamed for the VM network, 2 are teamed for vMotion/management, and the last two are vmkernel ports on two separate subnets connected using MPIO to our Compellent iSCSI SAN. This has been our test bench, and we use NIC teaming in ESXi 5 with "Route based on the originating virtual port ID" and no specific switch-side config. We have other servers using LACP 802.3ad configured on the host and the switch that work very well - it gives us better failure protection, since we use two switches and plug one link into each switch. We would like to do the same with the ESXi hosts.

    Our new project is to virtualize a larger number of the systems we currently serve. We want to expand our VM usage to include a large (30 - big for us) number of SQL servers. The basic functions of these systems require a decent amount of SAN backend I/O. The physical servers we would be virtualizing would pack in at a density close to 4:1, or up to 8:1, with this conversion. We are concerned that having just the 2 iSCSI NIC MPIO paths will not be sufficient to support the increased I/O load.

    We would like to know whether using LACP on the two iSCSI subnet connections, bonding 2+ NICs for each connection, is viable in ESXi 5 with iSCSI, and what configuration parameters we would need to set up to do this.

    In addition, this project would use VMware vSphere 5 Enterprise Edition - do DRS or distributed switching introduce other complications or benefits for this configuration?

    Thanks for any helpful input or direction to already-published documents.

    Scott

    I find using LACP / etherchannel is rarely effective or useful in VMware environments.

    For iSCSI storage, my standard configuration is to use 2 uplinks with iSCSI port binding. Here are the screenshots of the configuration.
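    In CLI terms, the two-uplink port binding looks roughly like this (a sketch: vmhba33, vmk1, and vmk2 are assumed names):

    ```shell
    # Bind two vmkernel ports to the software iSCSI adapter (ESXi 5.x)
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    esxcli iscsi networkportal list --adapter=vmhba33   # verify both bindings
    ```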

  • Best way to disconnect the iSCSI storage

    Hello

    I have a server cluster connected to Equallogic iSCSI storage. I want to delete and re-create a storage volume, but in a fairly safe way.
    Ideally, I would take the volume out of action for a few days, but have it available to reconnect if necessary. Normally, I would do this by simply taking the volume offline. It can then no longer be seen by the hosts, but I can put it back online as appropriate.
    Is that the best way, though? It does seem to cause some problems, with the iSCSI software adapter being unable to find the device until a rescan/reset of the host is done, as far as I can tell.
    Can anyone suggest a better way to do this?

    Hello.

    The procedure is described in http://kb.vmware.com/kb/1015084

    Good luck!

  • StarWind iscsi storage problem

    Hello

    I'm new to VMware - kindly help me.

    I have created a home lab to prepare for the VCP exam. When I create iSCSI storage using a StarWind image file on an external hard drive and then try to add the storage in VMware, the datastore does not show up; but when I create the image file on an internal hard drive, the datastore shows. Can anyone kindly help me solve the problem?

    Thank you and best regards,

    Knani V

    Hello

    Post moved to the vSphere storage forum.

    All data stores must be mounted/targeted through a vmkernel device if you want them to be used as a location for a virtual disk.

    Best regards

    Edward L. Haletky

    Host communities, VMware vExpert,

    Author: VMware vSphere and Virtual Infrastructure Security, VMware ESX and ESXi in the Enterprise, 2nd edition

    Podcast: the Virtualization Security Podcast. Resources: the Virtualization Library

  • MSCS and Datacore storage

    Hi all

    I have done several installations of MSCS clusters on ESX 3.5 and 4 boxes, and it worked fine. But now I'm in trouble with a W2008R2 cluster configuration on a DataCore SANsymphony (FC) storage server.

    I can't find any documentation about this (except a note on virtualized storage).

    I am able to view and configure the RDM on a virtual machine on one host, but this disk is not visible to the second VM (on a different host).

    The virtual machine has been created to support clustering, and the disk (RDM) is bound in physical compatibility mode.

    I hope someone could help me.

    ARM

    CMO

    Once you have created an RDM on a VM, all the "create an RDM disk" options are greyed out, so you can only add an existing disk.

    This is normal because ESX(i) no longer sees a free, uninitialized LUN, so no new RDM can be created.

    Maybe these notes on configuring Microsoft Cluster Services with servers using Fibre Channel or iSCSI storage can help you:

    FTP://support.DataCore.com/PSP/TBS/TB13_MSCS.PDF

    Or check out this installation guide for implementing MSCS on vSphere (although I think it shouldn't be anything new for you):

    http://www.VMware.com/PDF/vSphere4/R40/vsp_40_mscs.PDF

    Page 25 discusses adding the existing RDM to the second MSCS cluster node.

  • Issue adding iSCSI storage visible via vSphere (direct host or vCenter)

    Hi all

    I'm having a strange problem when I try to add a second disk to my ESX 4.1 host. The storage is visible, but when I go to add it via the Add Storage wizard I get the following error message:

    Call "HostDatastoreSystem.QueryVmfsDatastoreCreateOptions" of object "ha-datastoresystem" on ESX '172.16.10.221' failed.

    As shown below:

    The error is the same whether I try to add it via vSphere connected to vCenter or to the ESX 4.1 host directly.

    The funny thing is that I have a small iSCSI target on the same iSCSI storage that works very well.

    Here is my config network:

    And here is my config to storage adapter:

    I have also attached my vmkiscsid.log of the esx host file.

    I think what's happening is that the ESX host tries to reach and format the storage via the service console port instead of the iSCSI port I created, but I could be wrong.

    Any help would be greatly appreciated!

    Thank you.

    Pell Manny.

    Disk size (including iSCSI LUNs) is still limited to 2 TB minus 512 bytes... Try dropping the size of the iSCSI LUN below that maximum and try again... I would bet that the "small" target you had no problems with is under this limit...
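    For reference, that single-extent limit works out to:

    ```shell
    # 2 TB minus 512 bytes, expressed in bytes
    echo $(( 2 * 1024 * 1024 * 1024 * 1024 - 512 ))
    # prints 2199023255040
    ```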

    Network administrator

    VMware VCP4

    Please consider awarding points for "helpful" or "correct" answers.

  • Migration of ESXi 3.5 to ESXi 4 (ISCSI storage)

    Hey everybody!

    I want to migrate to ESXi 3.5 to ESXi 4. But there is a question that I can't answer by myself.

    I use iSCSI storage, which is formatted with "VMFS 3.31".

    Is it possible to use this iSCSI storage simultaneously from both systems, i.e. an ESXi 3.5 and an ESXi 4 host system?

    Does anyone have experience in this context?

    Greetings from Germany, and thank you.

    Kay

    Welcome to the forums - yes you can. The high-level migration plan is: upgrade vCenter (it can manage both ESX 4 and ESX 3.5 hosts) -> upgrade the ESX hosts (they can coexist in the same clusters and datacenters) -> then upgrade the virtual hardware and VMware Tools on the virtual machines.

    If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful.

  • Can't see iscsi storage

    I have set up a virtual machine with two vmxnet3 Ethernet adapters: one pointing to the network and the other pointing to my storage.  I'm having a problem seeing my iSCSI storage. I see only one address in the observed IP range, and it's my vCenter server, which has an IP on my SAN management network. Any suggestions?

    You must create a VM port group for the iSCSI network on the vSwitch where the physical iSCSI network is connected.

    Use the same network address, then try to do some testing with ping (for example).

    André

  • Equallogic PS6000 + ESX iSCSI storage redundancy: LUN disappears when 1 switch off!

    Hello everyone, this is my first post here

    OK, so as the title says, I'm working on redundancy under ESX. Because network configurations can be quite confusing, I made a quick draft of our infrastructure with one of my 3 ESX hosts, and I will try to explain the configuration details as clearly as possible.

    On my ESX host, I configured a new vSwitch with the vmkernel and VM network joined to 4 NICs (NICs 2, 3, 4, and 5). My NICs are configured as active in the NIC teaming (I also tried putting 2 in standby); the failover method is beacon probing (I also tried link status).

    On another vSwitch I have the service console with 2 NICs joined (NICs 1 and 6).

    My 2 switches are connected by a 4-link trunk, and the RSTP protocol is active and working (although I get a lot of broadcast activity as soon as I connect my LAN switch to the SAN switches, but that's another story).

    My SAN is connected with 2 links to switch A and 2 to switch B for each controller. I created 3 volumes on the SAN, each configured on my 3 ESX hosts. So far, everything is great.

    Now, I pull the power plug from switch A. Switch B takes over, and I can still ping the SAN from the LAN switch network and ping my small guest VM, but the virtual machine itself quickly dies. I can't even reboot it; I get an error message indicating that the file is not found. So I go to my datastore, and that's when I see that it's empty! I need to run a rescan of the LUNs to regain access to the datastore and start my VM.

    From my point of view, although ESX detects that some links are down when I pull the cord, the iSCSI session just dies on me and will not "restart". I've read stuff on multipathing, but the "Manage Paths" button is grayed out, and I'm not even sure that's what I need.

    So basically, as you can see, I am quite new to all this and completely lost when it comes to redundancy; hell, I didn't even know about spanning tree before yesterday! So if any of you guys can give me some advice on achieving redundancy, I'd appreciate it ^^

    Thank you very much!

    I wouldn't put my VM network and my storage on the same vSwitch.  I'd separate them into their own vSwitches and use two NICs for each vSwitch.

    Once you have the traffic separated out, I'd make sure a vmkping (not a regular ping) to the storage target runs through each physical NIC.  Remove the cables one at a time to verify that routing works over each connection.
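    For example (a sketch: vmk1/vmk2 and the target address are assumptions, and the -I flag needs a reasonably recent ESXi build):

    ```shell
    # Force vmkping out of each bound vmkernel port in turn,
    # then pull uplink cables one at a time and repeat
    vmkping -I vmk1 10.0.100.50
    vmkping -I vmk2 10.0.100.50
    ```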

    -KjB

    VMware vExpert

  • VCB and iSCSI

    We have recently moved from an old Fibre Channel SAN to a new iSCSI SAN. Using VCB, backups were taken from a server that was attached to the fibre SAN.

    How does this work with iSCSI? Do I install the Microsoft iSCSI initiator and mask each LUN to the server?

    Also, does anyone know where I can get the administrator's guide for VCB? I could not find it anywhere...

    Installing the MS iSCSI initiator is the best way to connect the VCB server to your iSCSI storage.  Setup for these connections varies from one storage provider to another.  I would check with your iSCSI provider and see if they have a best practices guide.

    The VCB documentation is located with the Infrastructure 3 documentation.  This link will take you to the section most relevant to backup: http://www.VMware.com/PDF/vi3_35/esx_3/r35u2/vi3_35_25_u2_vm_backup.PDF Here is another, non-VMware doc that goes a little more in depth: http://viops.vmware.com/home/docs/DOC-1392

    I hope this helps.

  • Problem when adding 2nd host in Iscsi storage shared

    Hello people.

    I'll try to explain my problem exactly, even if it is a little difficult!

    First of all, we are talking about a test environment that is installed in my house just for learning purposes.

    I am facing strange behavior from my iSCSI storage when trying to join a second host to it for Storage vMotion purposes. I have been using one ESX host (3.5) for a while, and I had no problem with the iSCSI storage (I use an old PC with FreeNAS on it). I must say I got lucky: the local SATA controllers were recognized, so I could test Storage vMotion between two storages (local and iSCSI) successfully on this host.

    To continue learning, I decided to buy another PC to test HA clustering, vMotion, and DRS. Unfortunately, I couldn't find the same hardware, and this time the SATA controllers were not recognized, so I used an ATA disk for the OS, and my intention was to use the iSCSI storage for the VMFS3 volume.

    On the first ESX host, you can see 3 storage adapters: vmhba32, vmhba33 -> local SATA controllers; vmhba34 -> iSCSI software initiator.

    On the second ESX host, you can see only vmhba32 -> the iSCSI software initiator.

    The first time, everything worked OK, meaning I attached the iSCSI storage on the 2nd host by rescanning the vmhba, and I was able to browse the datastore and see its contents, but it lasted only 10 minutes. Then suddenly I lost the connection to the shared storage on both hosts. I could still see it in the storage adapters section, but when I pressed refresh in the storage section, nothing happened, and when I tried to add it using Add Storage I received the usual message "unable to read partition table information". After looking a little more carefully, I noticed that in the storage adapters section I could see the same vmhba34 on both hosts, with the same alias as on the first host. And here's the strangest part: the situation I described above kept changing over time, meaning that sometimes I saw vmhba32 on both hosts and after some time vmhba34 on both hosts! (crazy)

    After playing with the partition table without effect, I took the big decision to use the command "vmkfstools -C vmfs3 -S free /vmfs/devices/disks/vmhba32:0:0:1"

    on the second host. Of course, I lost all data, but now with a simple refresh on this host I was able to see the storage.

    Unfortunately, I still couldn't access the storage from the 1st host, so I issued the same command "vmkfstools -C vmfs3 -S free /vmfs/devices/disks/vmhba34:0:0:1"

    with only the change you see in bold. As a result, it could see the storage after a refresh, under the name free (1).

    It has now been working for 2 days in this not-recommended way, and I would be really grateful if you could help me clarify what exactly is happening.

    My main problem is that I am facing a very unstable environment that might well have the same unexpected problem in the future, and I can't always reformat the storage and lose all the virtual machines.

    I tried to describe this strange situation as best I could!

    Thank you for your time and any help or advice would be most welcome!

    I was looking for a similar solution. It seems that only one iSCSI host is possible with FreeNAS. See here: http://apps.sourceforge.net/phpbb/freenas/viewtopic.php?f=53&t=403&sid=f4881e6e25e752e6ee8c6b0eae5887d3

    The only solution for learning seems to be to install ESX Server or ESXi in a virtual machine and then use iSCSI. Instructions to complete the VM installation are here (you need all 6 PDFs): http://knowledge.xtravirt.com/white-papers/esx-3x.html or you can replace Openfiler with SUN's iSCSI VM: http://www.sun.com/storage/disk_systems/unified_storage/resources.jsp . You must use VMware Converter to convert SUN's virtual machine to be used in ESX. I found this in a forum... but can't find the URL anymore. I have not tried it myself, but I am looking into it because I want a similar setup.

    Wolfgang
