iSCSI HBA + SAN, EqualLogic

I have a new deployment in the planning stage.

I'm looking at an EqualLogic PS6000XV - dual controllers with 4 x GbE ports per controller. As I understand it, it works in an active/passive configuration.

Is it worth looking at iSCSI HBAs with this solution? Cost is not a problem at all; performance/reliability/redundancy is.

I understand that with HBAs you can multipath, but with an active/passive solution isn't failover handled by the array itself, in that it moves the IPs over to the active controller (with EqualLogic)?

Any comments?

Do you have any more information on this MPIO plugin?

It's still in beta... so there is no official information.

But it will probably behave like the MPIO module for the Microsoft iSCSI initiator (which can use all the links on the active controller).

Is there a big reason to go with the software iSCSI initiator over an HBA?

It's cheaper

And you can put the money toward more licenses instead

I understand from my reading that EqualLogic doesn't need any special configuration, as the SP manages failover.

Right.

Just remember to follow the network topology proposed by Dell if you use two switches (instead of a single switch with 2 stacked modules).

André

Tags: VMware

Similar Questions

  • iSCSI HBA problems and failover

    Hi.

    I have the following problem: I have an IBM BladeCenter; on its blades I have ESX 3.5 with a dual-port iSCSI HBA. On the other side, an IBM DS3300 iSCSI storage with dual controllers.

    It is configured in the following way:

    HBA port A - 192.168.3.60 > switch 1 > SPA_NIC1 - 192.168.3.53 / SPB_NIC1 - 192.168.3.55

    HBA port B - 192.168.4.61 > switch 2 > SPA_NIC2 - 192.168.4.54 / SPB_NIC2 - 192.168.4.56

    Now, the situation and the problem.

    With this config I have four paths to the LUNs on the storage.

    Now the problem.

    If I power off switch 1 (simulating a failure), the failover happens without problems and the paths move over to switch 2. Up to there everything is in order. However, when everything on switch 1 is restored, ESX does not see those paths again, so if a failure then occurred on switch 2 the unit would be completely lost. This happens whether the failover policy is set to Fixed or MRU.

    The only way I've found to recover those paths through switch 1 is to do a rescan of the LUNs. The same thing happens if the failure occurs on switch 2.

    What I want is for those paths to be recovered automatically when they become active again; currently they just stay "Dead" indefinitely.

    The HBA cards are QLogic 40xx.

    The question is whether I am making some configuration error or whether, on the contrary, some setting needs to be made at the ESX level so that it re-checks the paths.

    Thanks everyone.

    Hi ElGogy,

    The "problem" is with the array; everything else is OK. The array was on the HCL, then they removed it, and later they put it back. In the end it is supported with ESX >= 3.5 and only with the software initiator. (Take a look at the HCL; it's easy now with search: http://www.vmware.com/resources/compatibility/search.php?action=search&deviceCategory=san&productId=1&keyBasic=ds3300&maxDisplayRows=50&key=ds3300&release%5B%5D=-1&datePosted=-1)

    Also take a look at this thread in the communities: http://communities.vmware.com/thread/121802?tstart=0&start=0

    In any case, I believe your array is running OK. ESX's behavior with the DS series is MRU, whether or not you set Fixed in ESX (it makes no difference to the array). This behavior is dictated by the 'LUN to host mapping' you set on the array when you choose LNXCLUSTER (which is what you have to set).

    These arrays simply don't have automatic failback. I suppose they do it that way to avoid errors: if a switch fails, it's better to stay down and require manual attention than to have a switch that keeps going down and coming back up all the time (momentary connection failures, electrical faults in a loop, a random firmware fault, for example), which would cause the path to keep flipping; that wouldn't be the best thing in the world.

    Regards, and I hope it helps!

    http://kurrin.blogspot.com
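    Since the arrays have no automatic failback, the rescan mentioned above can be scripted from the ESX 3.5 service console as a stopgap. A minimal sketch; the adapter name vmhba1 is an assumption, so use the name shown under Configuration > Storage Adapters:

    ```shell
    # Re-scan the iSCSI HBA so that paths marked "Dead" are re-evaluated.
    # "vmhba1" is a placeholder for the QLogic adapter's actual name.
    esxcfg-rescan vmhba1
    ```

    Run from cron, this brings recovered paths back without waiting for a manual rescan in the VI client.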

  • iSCSI HBA on 3.5 U3 - is a vSwitch required?

    I have ESX 3.5 U3 installed on an IBM 3850 with 2 QLogic HBAs installed.  I did the firmware updates and ESX recognizes the HBAs as storage adapters.  My problem is this: all my other ESX servers use standard NICs, so I create a second vSwitch for my iSCSI LAN and add a 2nd network card for guests of this switch.  When I go to create my iSCSI vSwitch, the only available adapters are my network cards; I can't add the HBAs.  How can I configure a vSwitch on my iSCSI LAN that will be using / accessing the HBAs?

    If you use the MS iSCSI initiator inside your virtual machine, then you will need to use a regular NIC; you wouldn't be able to use the iSCSI HBAs, as they are used by the vmkernel to access the iSCSI SAN.

  • iSCSI HBA - recommended NIC teaming on vSS?

    Hi all

    Just curious: when you have two iSCSI HBAs and do not use VMware's iSCSI software initiator, do you still need to make sure that the following configs are in place?

    vmk1 - iSCSI0 - vmnic1 - Active

    vmnic2 - Unused

    vmk2 - iSCSI1 - vmnic2 - Active

    vmnic1 - Unused

    Or can you just leave it as active/active with "Route based on originating virtual port ID"?

    There is a lot of info on what to do with the software iSCSI initiator, but what about when you have HBAs?

    Yes. I added a link to my previous post which illustrates the configuration.

    André.

  • Free hypervisor ISCSI HBA

    I'm looking for an iSCSI HBA which will work without problems and which is not expensive.

    Can someone advise me?

    Thank you.

    Yes - check with the seller (who should know), and also take a look at the VMware HCL at http://www.vmware.com/resources/compatibility/search.php?deviceCategory=io to see which iSCSI HBAs are supported for the different versions of ESX(i). There are also a couple of server network adapters which can be used as iSCSI HBAs, but some of them do not support jumbo frames.

    André

  • 10G copper iSCSI HBA / NIC?

    Does vSphere 4.1 support 10G copper iSCSI HBAs / NICs? Will vSphere 5 support them? If so, which cards are supported?

    I agree with golddiggie on the Intel stuff.  There are cards that work well; you can also search the forums here for some of the brands and models to see what problems people have run into with them, and the possible configuration changes that can help.

    If you have found this helpful at all, please award points using the "correct" or "helpful" buttons!  Thank you!

  • adding send targets to the software iSCSI HBA

    Hello

    I am trying to add dynamic send targets to an ESX 3.5 host.  Here is the code I wrote, but there is something wrong with it.  Has someone worked with this before, or could you tell me where I'm going wrong?

    Thank you

    Matt

    Here is my code:

    my $iscsisvr = Opts::get_option('iscsisvr');
    my $hostview = Vim::find_entity_views(view_type => 'HostSystem');
    my $host_ss = Vim::get_view(mo_ref => $hostview->configManager->storageSystem);
    my $host_sv = Vim::get_view(mo_ref => $host_ss->storageDeviceInfo);
    my $host_hbaview = Vim::get_view(mo_ref => $host_sv->hostBusAdapter(bus => '32', device => 'vmhba'));
    my $sendTargetSpec = HostInternetScsiHbaSendTarget->new(address => $iscsisvr);
    my $addSendTarget = AddInternetScsiSendTarget->new(iScsiHbaDevice => 'vmhba32', target => $sendTargetSpec);
    $host_hbaview->HostInternetScsiHba(configuredSendTarget => $addSendTarget);

    A couple of things:

    You probably won't need the storageDeviceInfo unless you want to verify that the iSCSI HBA is configured (which it may not be).  You also can't get a view of the hostBusAdapter that way.  You should remove that as well.

    You can delete the following text:

    my $host_sv = Vim::get_view(mo_ref => $host_ss->storageDeviceInfo);
    my $host_hbaview = Vim::get_view(mo_ref => $host_sv->hostBusAdapter(bus => '32', device => 'vmhba'));
    

    There is a spelling error: the method is AddInternetScsiSendTargets (you have it as AddInternetScsiSendTarget).  In addition, this method is on your HostStorageSystem, in this case $host_ss.  You need to perform a rescan after adding the target as well.

    Change:

    my $sendTargetSpec = HostInternetScsiHbaSendTarget->new(address => $iscsisvr);
    my $addSendTarget = AddInternetScsiSendTarget->new(iScsiHbaDevice => 'vmhba32', targets => $sendTargetSpec);
    
    $host_hbaview->HostInternetScsiHba(configuredSendTarget => $addSendTarget);
    

    TO:

    my $sendTargetSpec = HostInternetScsiHbaSendTarget->new(address => "$iscsisvr");
    $host_ss->AddInternetScsiSendTargets(iScsiHbaDevice => 'vmhba32', targets => [$sendTargetSpec]);
    $host_ss->RescanHba( hbaDevice => "vmhba32");
    

    I would probably change your code to something like the following:

    sub FindVmhba32
    {
         my $HBAs = shift;
    
         foreach ( @{$HBAs} )
         {
    
              if ( $_->device eq "vmhba32" )
              {
                   return $_;
    
              }
         }
         return undef;
    }
    
    my $host_ss = Vim::get_view(mo_ref => $host_view->configManager->storageSystem);
    
    my $host_sv =  $host_ss->storageDeviceInfo;
    my $vmhba32 = FindVmhba32($host_sv->hostBusAdapter);
    
    unless (defined $vmhba32)
    {
         Util::disconnect();
         die "Failed to find ISCSI HBA ('vmhba32')";
    }
    
    my $sendTargetSpec = HostInternetScsiHbaSendTarget->new(address => $iscsisvr);
    $host_ss->AddInternetScsiSendTargets(iScsiHbaDevice => 'vmhba32', targets => [$sendTargetSpec]);
    $host_ss->RescanHba( hbaDevice => "vmhba32");
    

    You could also drop the FindVmhba32 subroutine and simply use an eval block around the AddInternetScsiSendTargets call.  It can throw a few errors; one of them is NotFound, if the specified HBA does not exist.

  • iSCSI HBA Mapping?

    Hi people,

    I recently upgraded my servers to dual-port HBAs. I plugged them in and went through the configuration, and they are all up and running. But in the documentation the configuration is slightly different, and I don't know which is right/best.

    Here's a picture showing the configuration...

    iSCSI SAN Mapping.png

    To be honest, I don't really understand the static/dynamic discovery thing. A primitive test seemed to fail over OK in the following scenarios:

    1. an HBA port is unplugged

    2. a controller is removed from the SAN.

    Can someone shed some light/thoughts/feedback on this configuration?

    I would like to make the two configurations identical, but I don't know which one to use.

    According to the DS3300 configuration scenarios, for better performance it is best if the two controllers use different IP subnets.

  • iSCSI HBA adapter

    I've been having a lot of problems with a troublesome IBM x3650 M2 and its compatibility with UEFI.  The problem I have now is that one HBA does not see the storage, and I was curious to know whether the HBA itself could really be bad.  I know that sounds like a stupid question, but it's the only thing I have left.  I checked, double-checked, and triple-checked all the settings and they are correct.  I swapped cables, swapped ports, and after a rescan it still doesn't show anything.  The other adapter is fine, and the only other thing left is to swap the cards around; I'd rather not have to open the box.  Are there other things I can try to see why this HBA cannot see the storage?  It pings fine, too.  Suggestions are welcome. Thank you

    Perry

    Is there a firewall between you and the iSCSI storage?  Are you using CHAP?  Have you checked that the initiator is set up for CHAP authentication on the storage side?  It could be lots of things, even a bad HBA as you mention... but if it can ping successfully I'd guess it's a configuration problem somewhere.

  • HOWTO display queue depth settings (qlogic iscsi hba)?

    Hi guys,

    I can't find any queue depth parameters in our infrastructure:

    I checked /etc/vmware/esx.conf and 'cat /proc/scsi/qla4xxx/1'.

    ProLiant BL460c G6

    QLogic QMH4062

    ESX 4.0 Build 261974 U2

    HP (LeftHand) P4500 storage

    Any idea? Thanks a lot :)

    uxmax

    ESXTOP: the parameters 'aqlen', 'wqlen' and 'lqlen'.

    Read http://communities.vmware.com/docs/DOC-9279
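    The same counters can also be captured non-interactively with esxtop's batch mode; a small sketch, run on the host itself (the output path is just an example):

    ```shell
    # One batch-mode sample; the CSV header lists the queue-related
    # counters (AQLEN etc.) shown in the interactive views.
    esxtop -b -n 1 > /tmp/esxtop-sample.csv
    ```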

    AWo

    VCP 3 & 4


  • Storage vMotion with Syncrep on Equallogic San

    Hello

    We recently bought 2 EqualLogic SANs (PS6510ES) and turned on synchronous replication between them.

    We use ESXi 5.5 and followed the best practices guide to disable delayed TCP ACK and set the login timeout value to 60.

    Everything seemed to work fine until we tried Storage vMotion/migration and cloning VMs on the SAN.

    When we did a migration between 2 volumes on the EQL SAN (the target volume has syncrep active), the target volume would go out of sync and then the Storage vMotion would sometimes hang.

    Worse, when such a problem occurred, all other volumes on the EQL stopped responding as well. The volumes would not respond until we rebooted the corresponding ESXi host (NOTE: simply shutting down / powering off the host doesn't help; the EQL SAN only returned to normal after a reboot of the ESXi host).

    If we pause / turn off the syncrep before Storage vMotion, the problem does not happen.

    Is this normal or not?

    In addition, whenever the problem occurred, the write latency of the corresponding volume would spike, and we would receive alerts similar to the following:

    iSCSI target connection 'xx.xx.xx.xx:3260, iqn.2001-05.com.equallogic:xxxxx-test-volume' to initiator 'xx.xx.xx.xx:xxxxx, eqlinitiatorsyncrep' failed for the following reason:

    Initiator disconnected from the target while connecting.

    Thanks in advance.

    Hi Joerg,

    What is an SR #? I opened a support case on 24 June and am still waiting for Dell to contact me.

    Ryan

  • iSCSI SAN design review

    Hi all

    I intend to implement shared storage in our VMware infrastructure, and we'll buy a Dell or NetApp iSCSI SAN.

    Could someone please review the attached SAN design? Please note that we do not have a vCenter in the environment. We also have not yet finalized the iSCSI initiator choice, whether to use the software initiator or an iSCSI HBA for this purpose. Is it worth investing in the hardware initiator? Please suggest.

    Kind regards

    Nithin

    To use several iSCSI vmks, you cannot assign two network adapters to each iSCSI vmk.

    You must have one active NIC per vmk, and the other vSwitch network adapter must be moved down to "unused". Then you can bind the ports to the software iSCSI HBA adapter. It is the only way to achieve real multipathing, load balancing, and failover across multiple iSCSI vmks.

    Here's a KB on how to do it: http://kb.vmware.com/kb/2045040

    But if you Google, there are quite a few detailed tutorials with pictures, etc.
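    The binding steps described above can also be sketched from the ESXi command line. The port group, vmnic, vmk, and vmhba names below are placeholders for your own setup:

    ```shell
    # Pin each iSCSI port group to a single active uplink
    # (the other uplink is left unused, per the override described above).
    esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic1
    esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic2

    # Bind both vmkernel ports to the software iSCSI adapter (vmhba33 here)
    esxcli iscsi networkportal add -A vmhba33 -n vmk1
    esxcli iscsi networkportal add -A vmhba33 -n vmk2

    # Verify the bindings
    esxcli iscsi networkportal list -A vmhba33
    ```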

    Tim

  • SMB ESXi with iSCSI SAN

    I'm looking to do an SMB deployment for clients of mine. The new features offered by 4.1 for SMBs, such as vMotion and HA, are now within their budget. I've seen Dell offering some Broadcom cards with TOE and iSCSI offload. I know that 4.1 now supports TOE cards, but are these Broadcom cards actually iSCSI HBAs, or just TOE cards with iSCSI offload engines? Will vSphere 4.1 support this card? It's much cheaper than buying full-blown host bus adapters for a small business.

    I'd go with the R710, mainly because of experience with the model and how solid they are. You won't save all that much money using the R610, so you could spec out a solid R710 for the host. In addition, the R710 has four onboard GbE NICs (with the option of TOE on all four) and four open expansion slots for additional network interface cards and other cards. With the R610, you will have only two expansion slots. So if you want to add more cards later (more than two), you must either remove ones you installed to make room, or buy another host.

    I would always use onboard SAS drives to install ESX/ESXi on. You save money by NOT getting the HBA you would need to boot from SAN, while getting the same (or similar) level of performance. It also helps to isolate issues when you're booting from local disks... I would go with a pair of 10k / 15k RPM SAS drives (2.5" or 3.5", your choice there) on the supported RAID controller (using RAID 1 on them). Confirm with either your VAR or your Dell rep to make sure that the controller is 100% compatible with VMware (or check the HCL yourself). They list the 146 GB drives in the configurator (the normal SMB site) at a fairly cheap price.

    I'd also go with Xeon E5620 processors (get a pair in the host) so you'll have plenty of CPU power. Get as much RAM as you can, using not more than 12 sticks (a multiple of six).

    In fact, I have configured an R610 and an R710 using the SMB website... The R710 has the 146 GB (10k RPM) drives, the R610 has 73 GB 15k RPM drives. The R610 comes out to almost $300 more than the R710. We're talking the same memory (24 GB), TOE enabled on the built-in network interfaces, no additional NICs at build time (I would get those from another provider, getting Intel NICs, NOT Broadcom), redundant power supplies, rapid-rail kits (without the wire management arm), two Xeon E5620 processors, iDRAC 6 Enterprise (so you can do it all remotely; no need to use any KVM once the iDRAC 6 is configured). Both builds include the 5x10 hardware-only (NBD onsite) support. I also put the hard drives in RAID 1, so you have redundancy there.

    Having drives on board for ESX/ESXi to sit on will also make configuring the host much easier and will not require anything special on the storage side. Otherwise, you need to configure the MD3000i to present a solitary boot LUN to a single host for ESXi. Everyone I've spoken with who implements hosts does it with local disks, especially if you are looking to save the ~$1k the HBA adds to the build (not to mention the extra complexity it brings). I was talking with some people in charge of a large IBM virtual data center not long ago. They were forced to boot from SAN on their hosts, pushed into it by higher-ranking people within the company; it was decided that because it sounded good on paper, it would be good in practice. Initial tests showed how much pain it would be, and they were not happy about the extra work required to make it happen. Until VMware clearly states that booting from SAN is recommended over local disk (properly configured in either case), I will continue to use local drives for ESX/ESXi to reside upon. Even now, we see a significant performance win over the boot-from-SAN model. Maybe when 10 GbE is in the environment as a whole, it will make sense. As it is, you can get new SAS disks that are 6 Gb, everything in-house, and you still beat boot-from-SAN on performance. In addition, you don't need to worry about someone accidentally doing something to the LUN you are booting from...

    Network administrator

    VMware VCP4

    Please consider awarding points for "helpful" or "correct" answers.

  • EqualLogic SAN volume issue

    As I am virtualizing our infrastructure, a thought occurred to me.  First, the setup - 3 vSphere 4 hosts connected to an EqualLogic PS6000XV SAN.  We use iSCSI over the network for connectivity.  I have been provisioning one volume per guest on the EqualLogic side.  We have not done anything with snapshots or replication - that will come later.  Our immediate goal is just to virtualize our servers and continue our backup and restore procedures as usual.  This is likely to change so we can use snapshot technologies, both EqualLogic's and VMware's.  Anyway, to my question.  Should I continue this way - one volume per VM?  Or should I make one large volume and then put my VMs on the single volume?  On the SAN, the whole thing is configured as RAID-50.  I have moved only a handful of non-critical servers to date.  I'm not sure what the best practices are in this scenario.  What worries me is that if I continue this way, I could lose some snapshot advantage later or something like that.

    While the expansion of VMFS volumes is less of a problem in vSphere than it was in the past, the point is that having one LUN per VM increases management effort if you need to grow a virtual machine's hard disk.  If you need to add 20 GB to drive E: of one of your virtual machines, that's easy to do when using large volumes, because you probably have the free space to simply increase the size of the virtual disk.  If you use the "one LUN per VM" method, you must first increase the size of the LUN on the EqualLogic, then extend the VMFS volume in vSphere, and finally increase the size of the virtual disk within the guest.
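    To illustrate the difference, with one large volume the grow operation can be a single command on the host; a sketch, with a hypothetical datastore path and size (the guest partition still has to be extended afterwards):

    ```shell
    # Extend the virtual disk in place; no LUN or VMFS changes are needed
    # as long as the datastore already has free space. Path is hypothetical.
    vmkfstools -X 60G /vmfs/volumes/datastore1/myvm/myvm.vmdk
    ```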

    Another part of the equation is virtual machine backup.  Many backup tools available for VMware can do "LAN-free" backups by connecting directly to the storage.  To do this, you present the VMFS volumes to a Windows backup server, and the software then reads directly from there rather than transferring over the network through each ESX host.  If you use a product that backs up using this method, you would have to present all of your VMFS volumes to that Windows server instead of just one or two larger LUNs.

    I have seen many different VI3/vSphere deployments, and have seen both the one-LUN-per-VM model and the 15-20 VMs per large LUN model.  In almost all cases, those who did one LUN per VM regretted it and found it difficult to manage, without any real advantage.

  • HP NC360T vs NC380T for iSCSI SAN connection (HP 2012i DC)

    Do you think there will be a big difference in performance if I use the NC360T rather than the NC380T?

    According to the multi-vendor post "A Multivendor Post to help our mutual iSCSI customers using VMware", I should keep it simple and, in general, use the software initiator unless iSCSI boot is formally required.

    Do you have any suggestions on the switches I should use? I was told HP ProCurve 2510G-24, but I'm not sure. People have mentioned the Dell 52xx series. Does anyone use them (HP/Dell) in production?

    Thank you!

    Per KB 1006143 (released 6/2008), some TOE NICs are supported, but the TOE offload feature is not yet implemented.

    That said, I think you should go with (2) NC360Ts (for redundancy) or just go with (2) NC110Ts.

    The disadvantage of SW iSCSI is that you are limited to 1 Gb/s per target IP address.  If your SAN has several target IPs, you can get > 1 Gb/s using multiple LUNs, but you can't control the load balancing.  With a HW iSCSI HBA (such as QLogic) you spend about $700 per HBA, and you can then offload all processing to the HBA.  You can also manually load balance per LUN.

    With respect to the switches, the 2510 doesn't have a very large packet buffer (384 KB).  You should consider switches with larger packet buffers, such as the HP 2900 with 13 MB of buffer memory, or alternatively something in between.  The more buffer space you have, the fewer problems you may encounter with dropped packets and flow control.

    Ben
