ESXi 6.0 issues with iSCSI EqualLogic PS6100

Hi all

I upgraded my Windows vCenter Server from 5.5 to 6.0.  At the same time, I wiped one of my three Dell R710s (3 years old) for a fresh installation, since it had only been upgraded in place since ESXi 5.0.  I started with the Dell ESXi 6 disc.  Installation was smooth.  After configuring the host to match the configuration of my other two ESXi 5.5 hosts, I couldn't get my EqualLogic LUNs to show up.  Other iSCSI LUNs (Drobo for backup, OpenFiler for testing) had no problem.  Only the EqualLogic and the Nexenta.  Both complain "Access denied" when the ESXi 6.0 server tries to discover the SAN.

Looking further, two things have changed in ESXi 6 at face value: 1. My Broadcom NetXtreme II BCM5709 iSCSI cards are now listed as QLogic.  2. My VMware software iSCSI adapter now has a new initiator name.

My EqualLogic SAN is running firmware 7.x.  To see if the driver makes any difference, I wiped the ESXi host once again and installed the VMware image instead of Dell's.  Now my iSCSI network cards are listed a little differently... as QLogic NetXtreme II.  Still no luck on the SAN, and the errors persist: "access denied" or "not allowed".

I made sure I am not using CHAP or an ACL.  While discovery of the Nexenta works and populates the static tab (meaning it discovers all the IQNs), the LUNs never appear.  With the EqualLogic, no IQN is discovered at all.
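Since the rebuilt host generated a brand new software iSCSI initiator name, it may be worth double-checking what the array actually sees.  A minimal sketch (vmhba33 is only an example adapter number; yours will differ):

esxcli iscsi adapter list
esxcli iscsi adapter get -A vmhba33

Compare the IQN reported there against the access records on the EqualLogic volumes; if the volumes still reference the old initiator name, discovery can fail with "access denied" even though CHAP and ACLs appear to be disabled.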

I wanted to make this post to see if we can get this figured out, and maybe leave some good info behind if we get a resolution.  However, I don't have weeks to experiment, because otherwise I have to go back to vCenter 5.5 and rebuild the ESXi host.  Any help and suggestions would be appreciated!

Hello

Check this page and update your hardware's firmware and its driver: VMware Compatibility Guide: search for I/O devices

The brand is listed there as QLogic as well
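If it helps, a quick way to read the driver and firmware a NIC is actually running before comparing against the Compatibility Guide (a sketch; vmnic0 is just an example uplink):

esxcli network nic list
esxcli network nic get -n vmnic0

The second command reports the driver name/version and the firmware version for that uplink.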

Tags: VMware

Similar Questions

  • ESXi 4.1 or ESXi 5.0 with iSCSI SAN

    Hello

    I'm trying to establish an environment that is / will be backed entirely by an iSCSI SAN.  I know from a lot of reading that the best practice is to configure the ESXi host with:

    VMkernel port > vmknic > physical NIC (vmnic) > switch port > port on SAN (1:1 config - no teaming of vmnics)

    Environment:

    ESXi 4.1 & 5.0

    Force10 switches

    EqualLogic iSCSI SAN (PS4000 & 6000)

    Here's my real question (playing the Devil's advocate):

    Why shouldn't I team 3-4 vmnics on a vSwitch with multiple VMkernel ports?

    Given my environment, can someone point me to a technical document that explains what will / could happen if I set up the environment that way?

    Thank you

    BDAboy22

    So, basically, we want the storage stack to take full responsibility for failover instead of the network, and/or avoid the network and the SAN negotiating who should resolve the failure (which could lead to longer session recovery times).   Do I understand that correctly?

    Yes, that's right. We want the vSphere storage stack to manage path selection and failover - and by binding each vmknic directly to a physical vmnic we sort of "simulate" how FC HBAs work and also bypass the network failover mechanisms. As you probably know, you also need to bind the software iSCSI adapter to the two vmknics (in the GUI in 5.0, via esxcli in 4.x).
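    For reference, the binding step looks roughly like this (a sketch; vmhba33 and vmk1 are example names - repeat for each iSCSI vmknic and then rescan):

    ESX/ESXi 4.x:  esxcli swiscsi nic add -n vmk1 -d vmhba33
    ESXi 5.x:      esxcli iscsi networkportal add -n vmk1 -A vmhba33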

  • ESXi 5.5u1: added an iSCSI storage adapter, rebooted, and now the vSphere Client cannot connect to "my ip" - an unknown error has occurred.  The server could not interpret the client's request.  The remote server returned an error: (503) Server Unavailable

    I have not yet connected to an iSCSI target device.

    I can ping my host

    when I open http:// "hostip" in a web browser, I get a 503 Service Unavailable.

    restarting the host gets me nowhere.

    SSH somehow opens, but I cannot connect

    Console seems OK

    vSphere Client can not connect

    If I reset to the default values from the console it is OK, but when I reconfigure the host, the error comes back.

    I tried to reinstall from DVD

    I'm completely patched up to date via esxcli over SSH

    This happens on both my hosts, although they are almost identical Lenovo ThinkServer TS140s with a Broadcom 10 Gig NIC and an integrated Intel NIC

    It almost always seems to happen on the next reboot after enabling iSCSI support

    The only weird thing I have is that my integrated NIC is an Intel I217 and I have to use a special VIB so that it can be used in ESXi

    Client is Windows 8.1

    Here are my installation notes

    Install on USB stick/SSD with customized ISO including the i217 NIC driver, reset the configuration and reboot

    Management NIC set to NIC0:1Gig

    Management IP: hostIP/24, GW: my gateway

    DNS: Windows DNS on VM1 and VM2

    Hostname: ESXi1.Sub.myregistereddomainname, custom DNS suffix: sub.myregistereddomainname

    Reboot

    Patch up to date (https://www.youtube.com/watch?v=_O0Pac0a6g8)

    Upload the VIB and patch .zip to a datastore using the vSphere Client

    Get them from (https://www.vmware.com/patchmgr/findPatch.portal)

    Start the ESXi SSH service and establish a PuTTY SSH connection to the ESXi server.

    Put the ESXi server in maintenance mode,

    example command: esxcli software vib install -d /vmfs/volumes/ESXi2-2/patch/ESXi550-201404020.zip

    Reinstall the Intel I217 NIC driver if it was removed by the patch

    Change the ESXi host acceptance level to Community Supported,

    command: esxcli software acceptance set --level=CommunitySupported

    Install the VIB

    command: esxcli software vib install -v /vmfs/volumes/datastore1/net-e1000e-2.3.2.x86_64.vib

    command: reboot

    Connect via VSphere client

    -Storage

    Check/fix/create local storage. VMFS5

    -Networking

    vSwitch0

    Check vmnic0 (1)

    Rename the VM Network port group to "Essential".

    Rename the Management Network port group for basic VMkernel management traffic.

    -Time configuration

    Enable the NTP client to start and stop with the host; set 0-3.pool.ntp.org as the time servers

    DNS and routing

    Virtual machine startup and shutdown

    -enable - continue immediately if VMware Tools starts - shutdown action: guest shutdown - both delays set to 10 seconds

    Security profile

    Services

    SSH - startup policy - start and stop with the host

    Host cache configuration

    -Properties - on the SSD, allocate 40GB for host cache.

    Suppress SSH warnings:

    Advanced settings, UserVars, UserVars.SuppressShellWarning, change from 0 to 1.

    Storage adapters

    -Add - add the software iSCSI adapter
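    A couple of the GUI steps above can also be done from the shell, which is handy when the vSphere Client will not connect (a sketch, assuming 5.5-style esxcli):

    esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1    (suppress the SSH warning)
    esxcli iscsi software set --enabled=true    (enable/add the software iSCSI adapter)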

    I think I see where I went wrong.  In fact, I applied two patches when only one was needed. I started with 5.5u1 rollup 2 and then applied both ESXi550-201404001 and ESXi550-201404020.  Strangely, I did not have problems until I started working with iSCSI.

  • Nested ESXi 5.1 and iSCSI Shared Storage

    People:

    I am unable to get my nested ESXi servers to see the iSCSI shared storage that I put in place for them.  I use iSCSI for the ESXi host that holds the nested ESXi guests, so I have a working iSCSI configuration to use as my reference.

    Here's what I have for the host network config:

    iSCSI targets

    + IP1: 172.16.16.7 (subnet for host IP storage)

    + IP2: 172.16.17.7 (subnet for guest IP storage)

    vSwitch associated with vmnic2, the IP storage NIC

    + Port "IP storage" group containing virtual NIC both ESXi hosts

    + VMkernel Port for iSCSI host connections: 172.16.16.28

    Here's what I have for the guest network config:

    + Guest virtual NIC on the "IP storage" port group.

    + vSwitch with only a VMkernel port for the guest's iSCSI connections: 172.16.17.38

    From the iSCSI target host, I am able to ping 172.16.16.28.  However, I am unable to ping 172.16.17.38, and here's the really confusing part - I am able to get an ARP response for it with the correct MAC of the NIC associated with that VMkernel port!  This eliminates all kinds of potential configuration issues (e.g. wrong NIC, wrong IP, etc.)

    The firewall on the host indicates that the software iSCSI client port 3260 is open outbound.  A packet capture on the iSCSI target host reveals NO traffic from the guest's VMkernel IP when rescanning the storage adapters.

    What should I look at?  The guest configuration compared to the host looks identical, yet one works and the other doesn't...

    -Steve

    In the process of debugging, I turned on promiscuous mode on the vSwitch associated with vmnic2 (the IP storage NIC on the ESXi host) and poof!  Everything magically started working.  iSCSI traffic should be unicast, so I don't see why promiscuous mode would be necessary, but I can't argue with the observed results.  Clearly, I have more to learn about nested ESXi, that's why I'm playing with it.  :-)

    -Steve
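    For anyone hitting the same thing, promiscuous mode can also be flipped per vSwitch from the shell (a sketch; vSwitch1 is just an example name, and nested ESXi usually wants forged transmits allowed as well):

    esxcli network vswitch standard policy security set -v vSwitch1 --allow-promiscuous=true
    esxcli network vswitch standard policy security set -v vSwitch1 --allow-forged-transmits=true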

  • VMDK file between windows and ESXi with VMFS on iSCSI copy

    Hi all!  I searched here and found no good answer, so I hope that someone can help.

    We have a group of 3 ESXi servers running off a single iSCSI target and often need to copy VMs off the VMFS file system.

    The Converter tool is incredibly slow, that's a given.  WinSCP and FastSCP work at about 5 MB/s and 3 MB/s.  FTP is the best option so far, but it is only 8 MB/s.  When we have to copy a 20 GB virtual machine around, that takes a lot of time.  In addition, when we make 15 copies of a VM it takes incredibly long.

    I hoped to be able to connect directly to the iSCSI target and mount the file system somehow, but can't seem to find a way to do it.  Is there any better or faster method?  Perhaps a better FTP server than the proftpd option?

    We hoped a local copy with either USB or eSATA would work, but it doesn't seem to.  USB is not even detected and I cannot get a non-VMFS file system to mount via eSATA.

    If it works better, we can set up a Linux machine to mount the iSCSI target, if file copies can work that way.

    Thanks for the tips!

    NFS?
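    Beyond NFS, another option worth testing, since the slow part is usually pushing the flat file through the management network, is cloning the disk directly on the host with vmkfstools (a sketch; the paths are made-up examples):

    vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk /vmfs/volumes/datastore1/vm1-copy/vm1-copy.vmdk -d thin

    This runs inside the ESXi storage stack, so copies between datastores the host can see are typically much faster than WinSCP/FastSCP/FTP.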

  • DataCore SANsymphony-V with ESXi / vSphere 4.1 running iSCSI

    I have looked everywhere and come up short on my problem, if this is even a problem.  In the past, I have always connected all ESX / ESXi hosts via Fibre Channel, and now I'm working more with iSCSI.  I'm still trying to get a handle on it.  It's one thing to read about iSCSI and another to put it in place.  OK, now that I have that off my chest, here is my question.

    Should both iSCSI connections in SANsymphony-V show ready if I have both connected?

    Here's a high level run-down of the environment.

    1. a single DataCore node

    2. a single ESXi host

    3. both have dual iSCSI ports

    4. on the DataCore node, only one port shows ready

    5. on the ESXi host two connections show connected

    Screenshots: Screen shot 2011-06-24 at 9.30.34 AM.png, Screen shot 2011-06-24 at 9.36.54 AM.png

    Help, please

    Hello

    To answer your question: it will say connected when a host iSCSI initiator (IQN) is connected; the fact that it does not say connected does not necessarily mean that the "network" is down... it may simply be that the host(s) are not connected (or none have been configured).

    More information about your configuration would be useful; in the meantime, check the info / recommendations below (simple config):

    1. you set the "Port" Type for the two iSCSI ports on the DataCore for "FE" "Front End" Server?

    2. Have you defined two separate IP addresses for the iSCSI ports on the DataCore server?

    3. For simplicity, on the ESX host, do you have two distinct vSwitches for iSCSI, each with one NIC and a VMkernel port configured with an IP that can talk to the DataCore target ports?

    To confirm this, you can ping-test the interfaces against each other... then connect to the DataCore target ports.

    If you use the software iSCSI initiator, enable it on the ESX host... then you will need to "Discover" the target ports (static or dynamic).

    Once you add the two DataCore target port IPs, please make sure that you "Login" to them from the ESX host. (BOTH!)
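    For what it's worth, on ESXi 5.x the discovery and rescan can be scripted as well; on 4.1 this is easiest from the vSphere Client discovery tab (a sketch; the adapter name and target IPs are examples):

    esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.0.0.10:3260
    esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.0.0.11:3260
    esxcli storage core adapter rescan -A vmhba33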

    If that worked, you can now add the ESX "Host" in the SANsymphony-V interface and assign the "iSCSI initiator" (from the ESX host) to it. (There will only be one for software iSCSI...)

    Now, remember, there is only a single software iSCSI initiator; the important thing is to make sure this initiator is registered on BOTH DataCore target ports... Check the SANsymphony-V "Remote Ports" tab on each "target" port for the IQN of the ESX host.

    NOTE: DataCore servers do not support multiple sessions from the same IQN on a vSphere host to a single DataCore iSCSI target

    XXXXXXXXXXXXXXXXXXXXXX

    Also, please see the DataCore best practice information for the NIC settings on the DataCore server.

    iSCSI traffic should be strictly separated from the public (user) network.
    Use separate hardware (LAN switches) for iSCSI storage traffic, or at least a different virtual LAN (VLAN).
    Use separate NICs for iSCSI, inter-storage-server and management communication.
    Disable any other protocol or service except "Internet Protocol (TCP/IP)" on the network adapters used for iSCSI. See the help guide for detailed instructions.
    Port 3260 must be open on the firewall to allow iSCSI traffic.
    On DataCore storage servers as well as application servers/hosts, disable these settings because they can cause disconnections and performance problems:
    Disable AutoTuning on W2K8: netsh interface tcp set global autotuninglevel=disabled
    Disable RSS: netsh interface tcp set global rss=disabled
    Disable TOE: netsh int tcp set global chimney=disabled

    Let us know how you get on and perhaps give a bit more info...

    Best regards

    Bernie.

  • How many network adapters do I need? - ESXi 4.1 with iSCSI

    I was wondering if someone could help me make sense of the information I've read on the number of NICs for ESXi 4.1.  I came across the following article, but I'm not sure I understand all the types of traffic it discusses:

    http://livingonthecloud.blogspot.com/2009/09/ESXi-networking-best-practices.html

    At the moment I have 6 x 1 Gb NICs on each of my 3 servers.  I connect to my SAN via iSCSI, so I know I'm going to need at least 2 of those network adapters for that traffic. I know I'll need at least 2 NICs for the connection between my ESXi server and my switch for the VM network.  What I don't understand is what I really need for the FT VMkernel, vMotion VMkernel and Management VMkernel.  I do not plan on using vMotion very often at all, so I'm not concerned about its impact on my network.

    Any advice?  Do I need to replace my dual NICs with quad-port NICs or is that overkill?

    Management doesn't need a dedicated network card. It can share the NICs with the virtual machines.

    In this case the following configuration would be interesting:

    vSwitch0: (3 NIC uplinks) (assuming vmnic0, vmnic1 vmnic2)

    Port group 1 - Management network (vmnic0 and vmnic1 -> active, vmnic2 -> standby)

    Port group 2 - VM network (vmnic0 and vmnic1 -> active, vmnic2 -> standby)

    Port group 3 - VM network (vmnic0 and vmnic1 -> standby, vmnic2 -> active) - for the "Pervasive SQL" VM

    vSwitch1: (1 NIC/uplink)

    Port group 1 - VMkernel for vMotion on a separate network (192.168.x.x)

    vSwitch2: (2 NICs/uplinks)

    configured following the best practices of your iSCSI storage provider

    In this way the "Pervasive SQL" would have a dedicated NIC (vmnic2) and may switch to another NETWORK card in the event of disruption of the network.

    André
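    On ESXi 5.x the per-port-group failover overrides sketched above can also be scripted (a sketch; the port group and vmnic names are examples - on 4.1 this is done in the vSphere Client):

    esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a vmnic0,vmnic1 -s vmnic2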

  • iSCSI EqualLogic PS6000 path problem

    Hello

    I have a PS6000, 2 PowerConnect 5524s and 2 R710s

    Ports 1 and 2 of my SAN are on switch 1 and ports 3 and 4 on switch 2; the ports are tagged on VLAN 10 on both switches; the switches are not stacked.

    The IPs of my SAN ports are:

    Port 1 10.10.1.10 switch 1
    Port 2 10.10.1.20 switch 1
    Port 3 10.10.1.30 switch 2
    Port 4 10.10.1.50 switch 2

    SAN virtual IP: 10.10.1.200

    My R610 has vmnic3 on switch 1 and vmnic5 on switch 2; the ports are tagged on VLAN 10

    With this configuration I cannot see my 4 paths to the SAN

    If I place an ISCSI1 VMkernel port at 10.10.1.50 on vmnic3/vmk0 (switch 1), I can see my SAN at 10.10.1.10, 10.10.1.20, or 10.10.1.200

    If I place an ISCSI2 VMkernel port at 10.10.1.60 on vmnic5/vmk1 (switch 2), I cannot see my SAN

    If I delete ISCSI1, I can see my SAN via ISCSI2 and vmnic5

    If I place ISCSI1 and ISCSI2 on separate VLANs, I can see my SAN on both NICs, so I have 4 paths to the SAN

    I have also noticed this:

    ~ # esxcfg-route -l
    VMkernel Routes:
    Network       Netmask          Gateway        Interface
    10.10.1.0     255.255.255.0    Local Subnet   vmk0
    10.10.3.0     255.255.255.0    Local Subnet   vmk4
    10.10.4.0     255.255.255.0    Local Subnet   vmk3
    default       0.0.0.0          10.10.1.254    vmk0

    Perhaps a routing problem?

    THX
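    One quick way to narrow down a routing/VLAN issue like this is to force the source interface when pinging the group IP (a sketch; on ESXi 5.1 and later vmkping accepts -I):

    vmkping -I vmk0 10.10.1.200
    vmkping -I vmk1 10.10.1.200

    If vmk1 cannot reach the group IP while vmk0 can, the problem is in the path behind vmk1 (switch port, VLAN tag, or the default route always preferring vmk0).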

    I have set up EQLs with vSphere several times. Dell and EQL recommend a one-to-one ratio of iSCSI VMkernel ports to physical NICs. If you have 2 NICs per host dedicated to iSCSI, you should have 2 VMkernel ports on each host (and remember, each must be bound to the iSCSI adapter for multipathing to work). They also recommend the same number of NICs on each host as there are on the SAN controller - a SAN controller with 4 iSCSI NICs means 4 NICs per server for iSCSI.

    Despite the redundant physical paths and NICs, each VMkernel port is only a single logical path.

    Here's how I would do it with a 6000-series EQL (4 ports per controller):

    R710 #1:

    VMkernel Port 1 - Physical NIC 1 - switch A

    VMkernel Port 2 - Physical NIC 2 - switch A

    VMkernel Port 3 - Physical NIC 3 - switch B

    VMkernel Port 4 - Physical NIC 4 - switch B

    R710 #2:

    VMkernel Port 1 - Physical NIC 1 - switch A

    VMkernel Port 2 - Physical NIC 2 - switch A

    VMkernel Port 3 - Physical NIC 3 - switch B

    VMkernel Port 4 - Physical NIC 4 - switch B

    A 4-port 802.1Q trunk between switches A and B, allowing all the VLANs to pass through. As André said, put all iSCSI NICs on the same VLAN. I don't see why you would need or want several iSCSI VLANs.

    Dell EQL reference:

    Connect 2 NICs from each controller to each switch, so each controller is multipathed through both switches.

    In this configuration, VMware will see 4 paths to the storage, which is what it should be if you have 4 NICs per controller. You also get full redundancy in this configuration, given that each host has 2 paths per switch.

    I am very familiar with EQLs, so if you have any other questions just ask. I'm actually implementing a PS4000 this week.
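    To check the result from the host side, the port binding and the resulting paths can be listed like this (a sketch; the adapter number is an example):

    esxcli iscsi networkportal list -A vmhba33
    esxcli storage nmp path list

    With two bound vmknics per host and the EQL group IP as the discovery address, you would expect one path per bound vmknic per volume.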

  • ESXi 5 Broadcom 5709 iSCSI offload

    Hello

    has anyone read anything new about iSCSI offload on the Broadcom 5709 NIC? Are they developing a driver with HW jumbo frame offload?

    Thank you

    I haven't had time to test it, but according to the RC documents: Broadcom iSCSI adapters do not support IPv6 or jumbo frames.

    We will see in the official documentation.

    André

  • Best practices for Exchange 2003 with VMWare ESXi 3.5 and iSCSI SAN

    Hello guys,

    Here's the question: we have 1 physical Exchange 2003 server, 4 hosts and 1 iSCSI SAN with 3 LUNs: 1 for data, 1 for VMware and 1 for SQL. If we're going to virtualize it, I don't know where to put the Exchange data and logs. I don't think it is good practice to put them together with the data, but I do not have another SAN. So, what can I do?

    Thank you.

    We have 813 mailboxes.

    I agree with cainics, start with an average size and go from there.  I know it's a production mail server and you can't exactly "play" with the settings because that takes time, but if you make the VM too big, you would have nothing left for other virtual machines.

    I would go with 2 vCPUs and at least 4 GB of RAM, maybe 8 GB.  There must be sizing guidelines for 813 Exchange mailboxes and X users to apply to your environment in order to get an idea of the amount of RAM needed... 4 GB seems minimal to me, but 8 GB would probably be better.

  • Upgrading ESXi 4.1 with software iSCSI storage to ESXi 5.5

    I intend to upgrade my ESXi 4.1 hosts, which have software iSCSI storage attached, to ESXi 5.5.

    I've already upgraded all my hosts that were on SAN-attached storage with no problems.

    I would like to know if there is anything I should take care of before I upgrade the iSCSI-connected host to ESXi 5.5; for the hosts with SAN-attached storage I made sure to remove the SAN cables before upgrading the host.

    Also, if there are any known problems during the upgrade from ESXi 4.1 to ESXi 5.5 with software iSCSI storage, please let me know.

    Is there a different upgrade process? In ESXi 4.1 I do not see the binding of iSCSI VMkernel ports to the software iSCSI adapter, but I know in ESXi 5.5 we have to do that. Or do I just run a standard upgrade via Update Manager and everything will be taken care of?

    Thanks in advance

    With ESXi prior to version 5, port binding had to be done via the command line. However, if it was configured correctly, you should be able to upgrade to ESXi 5.5 without issues (assuming the host has not been carried forward in place ever since version 3). BTW, the last time I disconnected a host from storage for an upgrade was with ESX 3.0.

    André
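    After the upgrade, a quick sanity check that the binding carried over could look like this (a sketch; the adapter number is an example):

    esxcli iscsi networkportal list -A vmhba33

    Each iSCSI vmknic should be listed as a bound port; if the list comes back empty, re-add the bindings before rescanning.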

  • ESXi 5.0 to ESXi 5.5 with iSCSI

    I have some ESXi 5.0 servers with iSCSI configured for my storage. If I upgrade to ESXi 5.5, do I have to reconfigure iSCSI? If so, how different is the configuration? I know we had to set things up again when we went from ESXi 4.x to ESXi 5.0 and the configuration options are very different. Is this the case with the new version?

    No, you stay within the same major version and it should work without any problems. ESX(i) 4 to 5 was a big step; this one is smaller. I have done this upgrade in production environments before without any problem or reconfiguration.

  • ESXi 4.0 / 4.1 Broadcom iSCSI setup / support for iSCSI offload

    Hello.

    I am new to using iSCSI on ESXi, so I may ask a few questions that have already been answered. Sorry for that.

    Is it true that ESXi 4.1 supports iSCSI offload, while ESXi 4.0 does not?

    How do I set it up properly? I have a Dell R510 with a quad-port Broadcom NetXtreme II 5709.

    I would like to use two of the 5709's ports with LACP to connect to my iSCSI target (QNAP).

    I found a few posts about creating two vSwitches, each with one NIC, and mapping the vmk of each vSwitch to a vmhbaXX (esxcli swiscsi nic add -n vmk1 -d vmhba35).

    What is the right way? And how do I set up link aggregation then?

    This is what it looks like:

    vswitch.jpg

    storage.jpg

    Thank you

    Michael

    Hello

    have a look here: http://blog.open-e.com/bonding-versus-mpio-explained/
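    In short: for iSCSI you would normally skip LACP entirely and let MPIO do the aggregation, binding one vmknic to each Broadcom iSCSI vmhba as in the posts you found (a sketch for 4.1; the vmk/vmhba names are examples):

    esxcli swiscsi nic add -n vmk1 -d vmhba34
    esxcli swiscsi nic add -n vmk2 -d vmhba35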

  • Is there any way around this limitation? -> Clustering on iSCSI disks (VMware ESXi)

    This is from the article (http://www.vmware.com/pdf/vsphere4/r40_u1/vsp_40_u1_mscs.pdf): Setup for Microsoft Cluster Service and Failover Clustering

    The following environments and features are not supported for MSCS setups in this release of vSphere:

    • Clustering on iSCSI, FCoE and NFS disks.

    Is there any way around this?

    I would like to create a simple 2-node Windows 2008 active/passive cluster on VMware ESXi using my FAS2020 iSCSI storage.

    Thanks for your time in advance!

    As I said earlier, treat your virtual machines as if they were physical and use the iSCSI initiator inside the guest operating system.
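    A minimal in-guest example on Windows 2008 with the built-in initiator (a sketch; the portal IP and target IQN are made-up examples, and the iSCSI Initiator GUI works just as well):

    iscsicli QAddTargetPortal 192.168.1.50
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345

    Present the same LUN to both cluster nodes this way and MSCS sees shared storage without involving the ESXi iSCSI stack.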

  • Dell EqualLogic array stopped working after a maintenance shutdown/restart

    During and after a scheduled maintenance window, ESXi cannot mount the datastores on our Dell EqualLogic array and I have no idea why.

    We are running ESXi 5.0.  We shut everything down over the weekend, replaced the network cables, and when we came back our ESXi hosts could not see the EqualLogic iSCSI array, but they could see several other arrays on the same iSCSI storage network.  Digging into the logs, I see the ESXi host doing CHAP authentication to the EqualLogic array, and the array says it connects successfully, then it loses the connection about 10 seconds later.  10 more seconds and ESXi attempts to reconnect to the EqualLogic and the same thing happens again.

    I see several entries in the ESXi hostd.log indicating this array goes ONLINE and OFFLINE, and another line indicating "Possible sense data", leading me to http://KB.VMware.com/kb/289902 .  I feel really handicapped at the moment because this array is the storage hosting our vCenter Server, which puts us in trouble right now.

    Also, if I try to "Add Storage", ESXi sees EqualLogic stores exist (watch a sufficient volume, size etc.).  However, as I'm trying to add them (via vSphere Client), step 1 takes a lot of time to finish, step 2 indicates the "disks are empty" (which I don't believe), then after an another long this timeout a popup error.

    Are there commands or utilities that are good for diagnosing iSCSI specifically?

    I can post more specific log details if that helps.

    PS - The new network cables communicate fine with other storage targets, and the EqualLogic's own cables were not replaced.

    Fixed; the MTU setting on my storage switch was clipping part of the storage traffic.  Here are the troubleshooting steps I went through; maybe it'll help someone else.  The solution didn't really have anything specific to do with ESXi or EqualLogic, just basic troubleshooting and digging.

    Inspecting the storage array logs/events revealed that it saw and accepted the ESXi host connection attempts.  I saw messages to the effect of:

    - Connection from ESXi iqn blah blah blah established; CHAP authentication successful.

    - 10 seconds later: connection reset / dropped by peer.

    -5 seconds later: ESXi connection successfully established...

    (rinse and repeat several times)

    Inspection of the ESXi hostd.log file showed several "connection reset by peer" messages.  The storage array and ESXi were each pointing at the other... so the switch is the only thing in the middle.

    Troubleshooting steps now:

    - pinging the array from ESXi using vmkping - success: vmkping 192.168.1.20

    - pinging the array from ESXi using vmkping with a size of 9000 (jumbo frame size): NO RESPONSE! : vmkping -s 9000 192.168.100.20  <-- this is where the money was for finding the problem

    The MTU had been set at exactly 9000.  It turns out that the switch needed to be set a little higher (to account for packet headers? who knew).
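    For reference, the usual way to test a 9000-byte MTU path end to end without relying on fragmentation is (a sketch):

    vmkping -d -s 8972 192.168.100.20

    -d sets don't-fragment, and 8972 bytes of payload plus 28 bytes of IP/ICMP headers comes to exactly 9000, so the ping only succeeds if every hop (vmknic, vSwitch, physical switch, array) really passes jumbo frames.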

    So this setup ran intact for more than a year and only stopped working when we took everything down and brought it back up.
