iSCSI ESXi 5.1

I've built a number of standalone ESXi 5.1 hosts (build 799733).  Our vCenter is 4.0, so it can't manage them.  I want to connect the hosts to iSCSI SAN storage, which for the moment is NetApp.  I have a vSwitch configured with 2 port groups on the subnet for the NetApp.  We will install an EMC VNX array very soon and want the hosts to have access to both the NetApp and the EMC storage.  Can the same vSwitch be used for both, or do I have to create a new subnet and a second vSwitch for the VNX?  Also, during testing, can I connect to one or two storage arrays and later remove them without hanging the host?  Thanks in advance.

Can the same vSwitch be used for both, or do I have to create a new subnet and a second vSwitch for the VNX?


Yes

Also, during testing, can I connect to one or two storage arrays and later remove them without hanging the host?


Yes -- but it is better to inform the host before removing the storage, if possible.
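To "inform the host" before pulling an array, a hedged sketch of the ESXi 5.x CLI sequence (the volume label and device ID below are placeholders, not from the original post):

```shell
# Sketch only; run on the ESXi host. Label and naa ID are placeholders.
# 1. Unmount the VMFS datastore that lives on the array:
esxcli storage filesystem unmount --volume-label=NetApp_DS1
# 2. Detach the underlying device so removal doesn't leave dead paths:
esxcli storage core device set --state=off -d naa.60a98000xxxxxxxx
# 3. Rescan so the host drops the stale paths:
esxcli storage core adapter rescan --all
```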

Tags: VMware

Similar Questions

  • iSCSI target on ESXi 5.0: path active, devices mounted, but datastores inactive

    Hi,

    I am looking for help to point me in the right direction.  This problem occurred after a reboot of the system.

    I'm on VMware ESXi Essentials 5.0.0 build 821926.

    I use the StarWind software as my iSCSI target.

    I use iSCSI to connect the ESXi hosts to my storage server.

    I have 2 datastores showing inactive under the datastore clusters and the datastore view.  I have a third datastore on the same server that is loading properly.

    I currently see the same behavior on three ESXi hosts:

    Under Configuration > Storage Adapters, the iSCSI path is active. The datastores appear under Devices and are mounted.  In the Datastores view, they are inactive. All on the same storage server.

    On the StarWind server, I have:

    created a new target

    removed the devices and added them to the new target

    changed the IP address of the target.

    On the ESXi hosts, I removed the iSCSI server from both dynamic and static discovery and re-added it under the same or new IPs.

    I can restart the ESXi hosts and storage servers and end up at that same point.

    Every time I find myself in the same place: path active, devices mounted, datastores inactive.

    I don't know what else to share; let me know what you need to know to help me out.

    Thank you

    Dylan

    OK, in case someone comes across my ramblings, this may help.  My original question was about datastores that would not come up.  They were visible as devices in the storage view under Devices and as devices on the iSCSI adapter.

    When I tried to force mount them (Add Storage > Keep the existing signature > Mount), they wouldn't mount.  I did some research and found that, after the failed force mount attempt, the datastores were being detected as snapshots.  I then force mounted them as snapshot datastores and got them back.

    I followed this KB to identify and resolve the problem:

    Handling LUNs detected as snapshot LUNs in vSphere (KB 1011387)
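For reference, the CLI flow for that KB can be sketched like this on ESXi 5.x (the volume label below is a placeholder):

```shell
# List VMFS volumes that ESXi has detected as snapshots:
esxcli storage vmfs snapshot list
# Either force-mount keeping the existing signature (placeholder label)...
esxcli storage vmfs snapshot mount --volume-label=Datastore1
# ...or resignature the volume instead (one or the other, not both):
# esxcli storage vmfs snapshot resignature --volume-label=Datastore1
```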

    When that was done, I tried to add the VMs back and found a new issue.  Powering on the VMs would time out.  The vmkernel.log showed:

    2014-05-27T07:20:40.010Z [verbose 'Vmsvc' 564D7B90] Vmsvc: Filtering VMs: ignored VM not ready for power state request vim.VirtualMachine:75

    and

    2014-05-27T03:45:47.821Z cpu4:2052 NMP: nmp_PathDetermineFailure:2084: SCSI cmd RESERVE failed on path vmhba35:C0:T1:L0, reservation state on device eui.cff548d1952b1e4c is unknown.

    I was seeing huge read/write latency, upwards of 3K ms and more.

    After several searches, I got into the ESXi shell and found that there was no reservation conflict.

    On a whim, I removed from inventory all the virtual machines that were now inaccessible.  I then added the VMs back. Voila! The latency dropped to 1.2 to 7 ms for all datastores.

    Ultimately, the instructions said you may need to add the virtual machines back to the inventory, but did not say to remove all the virtual machines first.  I was adding VMs that were not in the inventory, but I hadn't removed the old virtual machines from the inventory.

    A rookie mistake, yes.

  • ESXi 5.0 to ESXi 5.5 with iSCSI

    I have some ESXi 5.0 servers with iSCSI configured for my storage. If I move to ESXi 5.5, will I have to reconfigure the iSCSI? If so, how different is the configuration? I know we had to reconfigure when we went from ESXi 4.x to ESXi 5.0 and the configuration options were very different. Is this the case with the new version?

    No, you're staying within the same major version and it should work without any problems. ESX(i) 4 to 5 was a major step; this one is smaller. I've done this upgrade in production environments before without any problem or reconfiguration.

  • iSCSI (ESXi) loss of storage connectivity

    Hi all

    We have a new (test) environment with two ESXi installations.

    The following configuration was created:

    DELL MD3000i ------> DELL PowerEdge R200 (Web server)

    ------> DELL PowerEdge R200 (MySQL server)

    On both servers, we configured the iSCSI initiator to the MD3000i and connected the datastore.

    However, every minute (yes! every minute) we receive these messages on the two R200s:

    Lost connectivity to storage device
    naa.6002219000c90093000004484a32546e. Path
    vmhba33:C1:T0:L1 is down. Affected datastores:
    'MD3000i-Web'.

    error

    16/07/2009 0:34:48

    Lost access to volume
    4a4e7326-1225570a-7602-00219bfbd53c
    (Web-MD3000i) due to connectivity issues.
    Recovery attempt is in progress and the result
    will be reported shortly.

    info

    16/07/2009 0:34:48

    Successfully restored access to volume
    4a4e7326-1225570a-7602-00219bfbd53c
    (Web-MD3000i) following connectivity issues.

    info

    16/07/2009 0:34:52

    Can someone point me in a direction to solve this? Because when the two servers lose their connections, there are huge hiccups on our Web pages.

    Thanks in advance!

    Best regards, Mike

    perfectid wrote:

    I have attached a screenshot where you can see how the two servers are configured.

    The names there are exactly the same (the iqn.* part). Is this right?

    No - each iSCSI port (initiator and target) must have a unique iSCSI name. It looks like you have confused the initiator name (which is usually generated automatically by ESXi) with the target name. The iSCSI target name needs to be entered on the 'Static Discovery' page (or does not need to be entered manually at all, if you specify the target's IP address on the 'Dynamic Discovery' page and the target supports the SendTargets request).

    Now you need to make the initiator name unique again (and different from the name used by the target itself).
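A hedged sketch of how this looks from the CLI (the vmhba number and IQN below are placeholders; the target's own name is never typed in here):

```shell
# Give the software initiator a unique name (placeholder values):
esxcli iscsi adapter set --adapter=vmhba33 --name=iqn.1998-01.com.vmware:esx01-12345678
# Point dynamic discovery at the target's IP; the target name is then
# learned automatically via SendTargets:
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10:3260
```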

  • Separate ISO for iSCSI ESXi

    Is this different ISO just for systems which boot via iSCSI, or for any that simply use it?

    The ISO is for hosts using the iSCSI protocol. There was a problem where booting a host attached to an iSCSI SAN could take a long time. This ISO contains the fix to avoid this scenario.

  • iSCSI boot needed on B200 M2, but get 'not enough available vNICs'

    Hello

    I'm trying to implement iSCSI boot for ESXi on a B200 M2 with an M71KR-Q card.

    I add vNICs A and B (each bound to one fabric respectively), but when trying to add an iSCSI vNIC I get the error "not enough available vNICs".

    Looking around, it seems the M71KR-Q can only support 2 vNICs (which I also confirmed by trying to add a third - same error message as above).

    I had thought that since the iSCSI vNIC overlays a real vNIC, it would work.

    However, even when adding just 1 vNIC and trying to overlay an iSCSI vNIC on top, I get the "there are not enough overall resources; not enough vNICs available" error message.

    Does anyone know if iSCSI boot is even possible on a B200-M2 with the M71KR-Q?

    Thanks in advance!

    Petar

    Please refer to the iSCSI boot section of this document: https://supportforums.cisco.com/docs/DOC-18756 - it is very detailed, if I may say so.

    iSCSI boot is only supported on the Cisco VIC and Broadcom mezzanine cards. It is not supported on the M71KR-Q QLogic Gen-1 mezzanine.

    Dave

  • iSCSI - two subnets on one vSwitch with iSCSI port binding

    Hello

    Is the scenario below supported with regard to port binding for software iSCSI (ESXi 6.x)?

    Two iSCSI storage devices (2 controllers) on two different subnets: 192.168.10.x and 192.168.20.x (mask 255.255.255.0).

    An ESXi host with one iSCSI vSwitch.

    Four VMkernel ports: two on the 192.168.10.x subnet and two on the 192.168.20.x subnet.

    Software iSCSI port binding is configured for each VMkernel port.

    It is worth noting that this scenario is slightly different from the examples in the VMware KB: Considerations for using software iSCSI port binding in ESX/ESXi.

    It does not work this way. iSCSI port binding requires a one-to-one relationship between VMkernel ports and vmnics.

    From https://kb.vmware.com/kb/2045040:

    To implement a teaming policy that is compatible with iSCSI port binding, you need 2 or more VMkernel ports on the vSwitch and an equivalent number of physical NICs to bind them to...

    André
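A hedged sketch of a compliant single-subnet setup (port group, vmnic, vmhba and vmk names below are placeholders): each VMkernel port is pinned to exactly one active uplink before being bound.

```shell
# Pin each iSCSI port group to a single active vmnic (placeholder names):
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic2
# Then bind the matching VMkernel ports to the software iSCSI adapter:
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```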

  • MPIO inside a guest virtual machine?

    Is it supported to configure MPIO inside a guest OS running on ESXi 4.1?

    I was looking at our Exchange virtual machine configuration, and for some reason a consultant configured our Exchange message store using the Windows iSCSI initiator. Currently it has only a single session to the message store. That said, the virtual network adapter used by the iSCSI initiator is connected to a vSwitch configured for iSCSI traffic with 2 adapters, which should cover the redundancy aspect. I'm not sure if it's useful to have multiple NICs configured inside the guest to use MPIO?

    I've seen some old posts on this, and it seemed it was not supported, but I thought I would check to see if something has changed.

    Thank you
    Kevin

    The closest you'll get is page 68 of the "vSphere Storage" document, under the "ESXi iSCSI SAN Restrictions" section:

    "You cannot use virtual-machine multipathing software to perform I/O load balancing to a single physical LUN."

    As you say, the virtual network adapter used by the iSCSI initiator is connected to a vSwitch configured for iSCSI traffic with 2 adapters, which should cover the redundancy.

  • Web cluster + database servers with NetApp: NFS or iSCSI?

    I will have to bring into VMware (ESXi 4.1 Standard) what used to be a web cluster environment (classic Apache with a MySQL DB) made of 8 physical web servers + 2 DB servers + one NetApp serving NFS, which was then "virtualized" with Xen on the same machines, producing a cluster of about 20 non-relocatable VMs...

    And the time has come to move to a simpler environment.

    The new hardware on which everything will run will consist of 3 VMware hosts (96 GB RAM) and a NetApp FAS2020 file server.  The NetApp will continue to act as an NFS file server for the VMs on which Apache (or lighttpd) will run.

    The design problem is this: given that the NetApp volumes will already be the datastores for VMware, is it worth complicating life by connecting it to the ESXi hosts via FCP (or iSCSI)? Or, as I'm seeing around the net, is the performance gain of VMs on a dedicated FCP/iSCSI datastore over NAS so minimal that it doesn't justify the extra complication?

    The VMs running Apache worry me little; they do minimal I/O on the local disk (the HTML files to serve are on the file server).

    The ones that give me pause are the MySQL servers, which at times carry a high load (especially complex SELECT-JOINs on databases not exactly optimized with regard to indexes, etc.).

    Note: the FAS2020 is "dual-head", so in a normal situation one head would serve the VMs that mount it over NFS while the other would serve the VMware hosts. If one of the two fails, the surviving one takes over the work of the failed one.

    Thanks in advance!

    Let's say you can bet on it... The features that favor NFS (e.g. IGCS) will probably be included in the next version.

    Then again, everything has to be "fitted" to the moment in history...

    There was the year of iSCSI, when everyone said it was the best.

    Now it's Unified Storage, and many say NFS is better.

    Personally I still consider FC valid, but for certain projects iSCSI and NFS are valid alternatives.

    Andrea

  • ESXi 6.0 U1 to U2 upgrade, iSCSI questions

    Hello

    First post, so I'll try and summarize my thoughts and what I did with troubleshooting.  Please let me know if I left anything out or more information is required.

    I run the free ESXi 6.0 on a Supermicro X10SLL-F mATX board with an Intel Xeon E3-1231v3 CPU and 32 GB of DDR3 ECC UDIMM.  I use a 4 GB USB flash drive for boot, a 75 GB 2.5" SATA disk for local storage (i.e. /scratch) and part of a 120 GB SSD for host cache as well as local storage.  The main datastore for virtual machines is located on an iSCSI target (currently running FreeNAS 9.3.x).  This setup has worked great since installing ESXi 6.0 (June 2015), then 6.0 U1 (September 2015), and I recently made the leap to 6.0 U2.  I thought everything should be business as usual for the upgrade...

    However, after upgrading to 6.0 U2 none of the iSCSI volumes are "seen" by ESXi (vmhba38:C0:T0 & vmhba38:C0:T1), although I can confirm that I can ping the iSCSI target and that the NIC (vmnic1), vSwitch and VMware iSCSI software adapter are loaded.  I did not make any changes to the ESXi host or the iSCSI target before the upgrade to 6.0 U2.  It was all working beforehand.

    I then went digging in the logs; vmkernel.log and vobd.log both report that the host is unable to contact the storage due to a network issue.  I also ran some standard network troubleshooting (see VMware KB 1008083); everything passes, with the exception of the jumbo frame test.

    [root@vmware6:~] tail -f /var/log/vmkernel.log | grep iscsi

    [ ... ]

    2016-03-31T05:05:48.217Z cpu0:33248 WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba38:CH:0 T:0 CN:0: iSCSI connection is being marked "OFFLINE" (Event:5)

    2016-03-31T05:05:48.217Z cpu0:33248 WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess [ISID: TARGET: (null) TPGT: 0 TSIH: 0]

    2016-03-31T05:05:48.217Z cpu0:33248 WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn [CID: 0 L: 10.xxx.yyy.195:41620 R: 10.xxx.yyy.109:3260]

    2016-03-31T05:05:48.218Z cpu4:33248 WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba38:CH:0 T:1 CN:0: iSCSI connection is being marked "OFFLINE" (Event:5)

    2016-03-31T05:05:48.218Z cpu4:33248 WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess [ISID: TARGET: (null) TPGT: 0 TSIH: 0]

    2016-03-31T05:05:48.218Z cpu4:33248 WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn [CID: 0 L: 10.xxx.yyy.195:32715 R: 10.xxx.yyy.109:3260]

    [root@vmware6:~] tail -f /var/log/vobd.log

    [ ... ]

    2016-03-31T05:05:48.217Z: [iscsiCorrelator] 1622023006us: [vob.iscsi.connection.stopped] iScsi connection 0 stopped for vmhba38:C0:T0

    2016-03-31T05:05:48.217Z: [iscsiCorrelator] 1622023183us: [vob.iscsi.target.connect.error] vmhba38 @ vmk1 failed to connect to iqn.2005-10.org.freenas.ctl:vmware-iscsi because of a network connection failure.

    2016-03-31T05:05:48.217Z: [iscsiCorrelator] 1622002451us: [esx.problem.storage.iscsi.target.connect.error] Login to iSCSI target iqn.2005-10.org.freenas.ctl:vmware-iscsi on vmhba38 @ vmk1 failed. The iSCSI initiator could not establish a network connection to the target.

    2016-03-31T05:05:48.218Z: [iscsiCorrelator] 1622023640us: [vob.iscsi.connection.stopped] iScsi connection 0 stopped for vmhba38:C0:T1

    2016-03-31T05:05:48.218Z: [iscsiCorrelator] 1622023703us: [vob.iscsi.target.connect.error] vmhba38 @ vmk1 failed to connect to

    [root@vmware6:~] ping 10.xxx.yyy.109

    PING 10.xxx.yyy.109 (10.xxx.yyy.109): 56 data bytes

    64 bytes from 10.xxx.yyy.109: icmp_seq=0 ttl=64 time=0.174 ms

    64 bytes from 10.xxx.yyy.109: icmp_seq=1 ttl=64 time=0.238 ms

    64 bytes from 10.xxx.yyy.109: icmp_seq=2 ttl=64 time=0.309 ms

    --- 10.xxx.yyy.109 ping statistics ---

    3 packets transmitted, 3 packets received, 0% packet loss

    round-trip min/avg/max = 0.174/0.240/0.309 ms

    [root@vmware6:~] vmkping 10.xxx.yyy.109

    PING 10.xxx.yyy.109 (10.xxx.yyy.109): 56 data bytes

    64 bytes from 10.xxx.yyy.109: icmp_seq=0 ttl=64 time=0.179 ms

    64 bytes from 10.xxx.yyy.109: icmp_seq=1 ttl=64 time=0.337 ms

    64 bytes from 10.xxx.yyy.109: icmp_seq=2 ttl=64 time=0.382 ms

    --- 10.xxx.yyy.109 ping statistics ---

    3 packets transmitted, 3 packets received, 0% packet loss

    round-trip min/avg/max = 0.179/0.299/0.382 ms

    [root@vmware6:~] nc -z 10.xxx.yyy.109 3260

    Connection to 10.xxx.yyy.109 port 3260 [tcp/*] succeeded!

    [root@vmware6:~] vmkping -d -s 8972 10.xxx.yyy.109

    PING 10.xxx.yyy.109 (10.xxx.yyy.109): 8972 data bytes

    --- 10.xxx.yyy.109 ping statistics ---

    3 packets transmitted, 0 packets received, 100% packet loss
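Since only the jumbo-frame test fails, checking the MTU end to end is a reasonable next step. A hedged sketch (vSwitch/vmk names are placeholders; the FreeNAS side and the physical switch must match as well):

```shell
# Show current MTUs on the vSwitches and VMkernel interfaces:
esxcli network vswitch standard list
esxcli network ip interface list
# If jumbo frames are intended, set 9000 on both (placeholder names):
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
# Re-test with the don't-fragment bit set:
vmkping -d -s 8972 10.xxx.yyy.109
```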

    I began looking at the NIC drivers, thinking maybe something got screwed up during the upgrade; it wouldn't be the first time I saw problems with the out-of-box drivers supplied by VMware.  I checked the VMware HCL for I/O devices; the physical NICs used on that host are Intel I217-LM (e1000e), Intel I210 (igb) and Intel 82574L (e1000e).  VMware's HCL lists that the driver for the I217-LM & 82574L should be version 2.5.4-6vmw and the I210 should be 5.0.5.1.1-5vmw.  When I went to check, I noticed it was using a different version of the e1000e driver (the I210 driver version was correct).

    [root@vmware6:~] esxcli software vib list | grep e1000e

    Name        Version                               Vendor  Acceptance Level  Install Date

    ----------  ------------------------------------  ------  ----------------  ------------

    net-e1000e  3.2.2.1-1vmw.600.1.26.3380124         VMware  VMwareCertified   2016-03-31
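The "vmw.NNN" component of the version string is what gives the mismatch away: 600.x VIBs are built for ESXi 6.0, 550.x for 5.5. A small hedged sketch of pulling those fields out of a sample (hypothetical) `esxcli software vib list` line:

```shell
# Hypothetical sample line in the format shown above.
line="net-e1000e  3.2.2.1-1vmw.600.1.26.3380124  VMware  VMwareCertified  2016-03-31"
# Second whitespace-separated field is the full version string:
version=$(echo "$line" | awk '{print $2}')
echo "$version"    # 3.2.2.1-1vmw.600.1.26.3380124
# The digits right after "vmw." encode the target release (600 => ESXi 6.0):
release=$(echo "$version" | sed 's/.*vmw\.\([0-9]*\)\..*/\1/')
echo "$release"    # 600
```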

    esxupdate.log seems to indicate that e1000e 2.5.4-6vmw should have been loaded...

    [root@esxi6-lab:~] grep e1000e /var/log/esxupdate.log

    [ ... ]

    # ESXi 6.0 U1 upgrade

    2015-09-29T22:20:29Z esxupdate: BootBankInstaller.pyc: DEBUG: about to write payload 'net-e100' of VIB VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585 to '/tmp/stagebootbank'

    [ … ]

    # ESXi 6.0 U2 upgrade

    2016-03-31T03:47:24Z esxupdate: BootBankInstaller.pyc: DEBUG: about to write payload 'net-e100' of VIB VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585 to '/tmp/stagebootbank'

    e1000e 3.2.2.1-1vmw is the recommended driver for ESXi 5.5 U3, not ESXi 6.0 U2!  As these drivers are listed as "Inbox", I don't know if there is an easy way to download vendor-supplied drivers (VIBs) for it, or even if they exist.  I found an article online on manually updating drivers on an ESXi host using esxcli; I tried checking for and installing newer e1000e drivers.

    [root@vmware6:~] esxcli software vib update -n net-e1000e -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

    Installation Result

    Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.

    Reboot Required: true

    VIBs Installed: VMware_bootbank_net-e1000e_3.2.2.1-2vmw.550.3.78.3248547

    VIBs Removed: VMware_bootbank_net-e1000e_3.2.2.1-1vmw.600.1.26.3380124

    VIBs Skipped:

    As you can see, there was a newer version and it installed it.  However, after rebooting it still did not fix the issue.  I even went so far as to reset the CHAP password used for authentication and update it on both sides (iSCSI initiator and target).  At this point, I wonder if I should somehow downgrade to the 2.5.4-6vmw driver (how?) or if there is another issue at play here.  Am I going down a rabbit hole with my idea that it is a NIC driver problem?

    Thanks in advance.

    --G

    esxi6-u2-iscsi.jpg

    I found the solution: downgrade to the Intel e1000e 2.5.4-6vmw drivers from ESXi 6.0 U1.  For the file name, see VMware KB 2124715.

    Steps to follow:

    1. Sign in to https://my.vmware.com/group/vmware/patch#search
    2. Select 'ESXi (Embedded and Installable)' and '6.0.0'.  Click the Search button.
    3. Look for the release named update-from-esxi6.0-6.0_update01 (released 10/09/2015). Place a check next to it and then click the Download button on the right.
    4. Save the file somewhere locally. Open the ZIP archive once downloaded.
    5. Navigate to the "vib20" directory. Extract the net-e1000e folder.  In it is the ESXi 6.0 U1 Intel e1000e driver: VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585.vib
    6. Transfer this to your ESXi server (be it via SCP or the datastore file browser within the vSphere Client). Save it somewhere you will remember (i.e. /tmp).
    7. Connect via SSH to the ESXi 6.0 host.
    8. Issue the command to install it: esxcli software vib install -v /tmp/VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585.vib
    9. Reboot the ESXi host.  Once it comes back online, your iSCSI datastore(s)/volumes should return.  You can check by issuing df -h or esxcli storage core device list at the CLI prompt.  The vSphere Client also works.
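Steps 7-9 can be condensed into one hedged SSH-session sketch (the file path comes from step 6; entering maintenance mode first is my own precaution, not part of the original steps):

```shell
# Sketch only; run on the ESXi host over SSH.
esxcli system maintenanceMode set --enable true
esxcli software vib install -v /tmp/VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585.vib
reboot
# After the host is back up, leave maintenance mode:
#   esxcli system maintenanceMode set --enable false
```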

    It took some time to find the correct file(s), steps and commands to use.  I hope someone else can benefit from this.  I love what VMware provides for virtualization, but lately it seems their QA department was out to lunch.

    I still don't have a definitive explanation as to why ESXi 5.5 U3 Intel e1000e drivers were used in the ESXi 6.0 U1 to U2 upgrade.  Maybe someone with more insight, or a VMware Support person, can explain.

    Over & out

    --G

  • Slow iSCSI-IP connection between ESXi and DataCore Virtual storage via 10 Gbit cards

    Hi all

    at the moment, I have test the following configuration:

    Site #1: DELL PowerEdge R730xd with a 10 Gbit NIC and DataCore Virtual Storage (DataCore-V) under ESXi (vSphere 6.0.0 Standard, build 3380124)

    Site #2: Apple MacPro6,1 with a 10 Gbit NIC and ESXi (vSphere 6.0.0 Standard, build 3380124)

    The DataCore server at site #1 presents a disk via iSCSI to the ESXi at site #2. The connection is up. So far, so good, but for example when I start a Storage vMotion on #2 from local SSD onto the iSCSI disk, the speed is exactly 1 Gbps. The interesting thing is that the speed increases when I start a second Storage vMotion, and again when I start a third, etc. This behavior is clearly visible in my attached screenshot.

    To me it looks as if each iSCSI-IP connection is limited to 1 Gbps.


    All the components used are VMware certified.

    Any ideas that I can check for this problem?

    Thank you very much

    Migo

    The reason for this behavior was that the MacPro6,1 had two software iSCSI interfaces pointing to one DataCore frontend. That is not allowed. After disabling the second software iSCSI interface, throughput grew to the full 10 Gbps.
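For anyone wanting the CLI equivalent, a hedged sketch of removing a second software-iSCSI binding (the adapter and vmk names are placeholders):

```shell
# List the VMkernel ports currently bound to the software iSCSI adapter:
esxcli iscsi networkportal list --adapter=vmhba33
# Remove the second binding (placeholder names):
esxcli iscsi networkportal remove --adapter=vmhba33 --nic=vmk2
# Rescan so the paths are re-evaluated:
esxcli storage core adapter rescan --adapter=vmhba33
```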

  • ESXi host boots before iSCSI target

    Hi everyone, I noticed some behaviors and I was hoping someone could confirm that this is normal.  I suspect it is perhaps by design.

    Situation: I have an ESXi host and I have an iSCSI target.  If the ESXi host starts before the iSCSI target is ready, the ESXi host is unable to connect to the iSCSI target and the data store is not available.  The iSCSI data store seems to be unavailable indefinitely.

    Issues related to the:

    1. Is it normal that someone must manually go into ESXi and re-initialize the connection to the iSCSI target in this scenario?

    2. Is there a best practice to automate this process, so that if the ESXi server comes up first, it will automatically keep rescanning until the iSCSI target comes up?

    In this case, we actually have virtual storage running in a VM on the host, so the ESXi host comes up before the iSCSI target (running in a VM).

    Thank you!

    Yes, this is normal behavior, given that the virtual appliance will only start after ESXi finishes booting.
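One way to automate the rescan is a retry loop run at boot (e.g. from /etc/rc.local.d/local.sh). The helper below is a generic sketch; the target IP, port and adapter in the usage comment are assumptions for this scenario:

```shell
# Generic retry helper: run a command until it succeeds, up to $1 attempts,
# sleeping $2 seconds between tries. Returns 0 on success, 1 on exhaustion.
retry_until_ok() {
  max=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$max" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}
# Hypothetical boot-time usage on the host: wait for the target's iSCSI port,
# then rescan (IP, port and adapter are placeholders):
#   retry_until_ok 20 30 nc -z 10.0.0.9 3260 && esxcli storage core adapter rescan --all
```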

  • ESXi 6.0 challenges with an iSCSI Equallogic PS 6100

    Hi all

    I upgraded my Windows vCenter server from vCenter 5.5 to 6.0.  At the same time, I wiped one of my 3 Dell R710s (3 years old) to get a fresh installation, since it had been upgraded along since ESXi 5.0.  I started with the Dell ESXi 6 disc.  Installation was smooth.  After configuring the host to match the configuration of my other two ESXi 5.5 hosts, I couldn't get my Equallogic LUNs to display.  Other iSCSI LUNs (Drobo for backup, Openfiler for testing) had no problem.  Only Equallogic and Nexenta.  The two complain "Access denied" when the ESXi 6.0 server tries to discover the SANs.

    Looking further, two things changed in ESXi 6 at face value: 1. my Broadcom NetXtreme II BCM5709 iSCSI cards are now listed as QLogic.  2. my VMware iSCSI software adapter now has a new initiator name.

    My Equallogic SAN runs firmware 7.x.  To see if the driver makes any difference, I wiped the ESXi host once again and installed the VMware build instead of Dell's.  Now my iSCSI network cards are labeled a little differently... QLogic NetXtreme II.  Still no luck on the SANs, and persistent errors about "access denied" or "not allowed".

    I made sure I am not using CHAP or ACLs.  While static discovery to the Nexenta works (meaning it discovers all IQNs), the LUNs never appear.  With the Equallogic, no IQN is discovered at all.

    I wanted to make this post to see if we can get this figured out, and maybe post some good info if we get a resolution.  However, I don't have weeks to experiment, because I have to go back to vCenter 5.5 and recover the ESXi host.  Any help and suggestions would be appreciated!

    Hello

    Go to this page and update your hardware's firmware and its driver: VMware Compatibility Guide: search for I/O devices

    Its brand name is indeed now QLogic.

  • Upgrading ESXi 4.1 with software iSCSI storage to ESXi 5.5

    I intend to upgrade ESXi 4.1 hosts with software iSCSI attached storage to ESXi 5.5.

    I've updated all my hosts that were on SAN storage with no problems.

    I would like to know if there's anything I should take care of before I upgrade the iSCSI-connected hosts to ESXi 5.5; for the hosts with SAN-attached storage, I made sure to remove the SAN cables before upgrading the host.

    Also, if there are any known problems to be aware of during the upgrade from ESXi 4.1 to ESXi 5.5 with software iSCSI storage, please let me know.

    Is there a different upgrade process? Because in ESXi 4.1 I do not see the binding of VMkernel ports to the software iSCSI adapter, but I know in ESXi 5.5 we have to do it. Or do I just launch a standard upgrade procedure via Update Manager and everything will be taken care of?

    Thanks in advance

    With ESXi prior to version 5, port binding had to be done via the command line. However, if it has been configured correctly, you should be able to upgrade to ESXi 5.5 (assuming the host has not been upgraded all the way from version 3). BTW, the last time I disconnected a host from storage for an upgrade was with ESX 3.0.

    André
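For the record, the pre-ESXi 5 command-line binding mentioned above looked roughly like this on ESX/ESXi 4.x (vmk and vmhba names are placeholders; a sketch from memory, so verify against the 4.x iSCSI SAN Configuration Guide):

```shell
# ESX/ESXi 4.x software iSCSI port binding (placeholder names):
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
# Verify the bindings:
esxcli swiscsi nic list -d vmhba33
```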

  • ESXi 5.5u1: added iSCSI storage adapter, rebooted, and now vSphere Client cannot connect to "my ip": an unknown error has occurred.  The server could not interpret the client's request.  The remote server returned an error: (503) Server Unavailable

    I have not yet connected to an iSCSI target device.

    I can ping my host

    When I open http://"hostip" in a web browser, I get a 503 Service Unavailable.

    Restarting the host gets me nowhere.

    SSH opens somehow, but I cannot connect.

    Console seems OK

    vSphere Client cannot connect.

    If I reset the console to the default values, it is OK, but when I reconfigure the host, the error returns.

    I tried to reinstall from DVD

    I'm completely patched up to date via SSH/esxcli.

    This happens on both my hosts, although they are almost identical Lenovo ThinkServer TS140s with a Broadcom 10Gig NIC and an integrated Intel NIC.

    It almost always seems to happen the next time I reboot after enabling iscsi support

    The only weird thing I have is that my integrated NIC is an Intel 217, and I have to use a special VIB so that it can be used in ESXi.

    The client is Windows 8.1.

    Here are my installation notes:

    Install to USB stick/SSD with the customized ISO containing the i217 NIC driver; reset the configuration and reboot.

    Management NIC set to NIC0:1Gig

    Management IP: hostIP/24, GW: my gateway

    DNS: Windows DNS on vm1 and vm2

    Hostname: esxi1.sub.myregistereddomainname; custom DNS suffixes: sub.myregistereddomainname

    Reset

    Patch to date (https://www.youtube.com/watch?v=_O0Pac0a6g8)

    Upload the VIB and .zip to a datastore using the vSphere Client.

    To get them: https://www.vmware.com/patchmgr/findPatch.portal

    Start the SSH ESXi service and establish a Putty SSH connection to the ESXi server.

    Put the ESXi server in maintenance mode.

    Example command: esxcli software vib install -d /vmfs/volumes/ESXi2-2/patch/ESXi550-201404020.zip

    Reinstall the Intel 217 NIC driver if removed by the patch.

    Change the ESXi host acceptance level to community supported,

    command: esxcli software acceptance set --level=CommunitySupported

    Install the VIB

    command: esxcli software vib install -v /vmfs/volumes/datastore1/net-e1000e-2.3.2.x86_64.vib

    command: reboot

    Connect via vSphere Client

    -Storage

    Check/fix/create local storage, VMFS5

    -Networking

    vSwitch0

    Check vmnic0 (1)

    Rename the VM network port group to 'essential'.

    Rename the management network port group to 'essential-VMkernel management traffic'.

    -Configuration time

    Enable the NTP client to start and stop with the host; set 0-3 ntp.org time servers.

    DNS and routing

    Virtual machine startup and shutdown

    - Enabled - continue immediately if VMware Tools starts - shutdown action: Shut down - both delays at 10 seconds

    Security profile

    Services

    SSH - enable start and stop with host

    Host cache configuration

    - SSD properties - allocate 40 GB for host cache.

    To suppress SSH warnings:

    Advanced Settings, UserVars, UserVars.SuppressShellWarning: change from 0 to 1.

    Storage adapters

    - Add - add software iSCSI adapter
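The shell-warning suppression in the notes can also be done from the CLI; a hedged sketch:

```shell
# Equivalent of setting UserVars.SuppressShellWarning to 1 in Advanced Settings:
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1
# Verify the new value:
esxcli system settings advanced list -o /UserVars/SuppressShellWarning
```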

    I think I see where I went wrong.  In fact, I applied two patches when only one was suitable. I started with the 5.5u1 rollup 2 and then applied both ESXi550-201404001 and ESXi550-201404020.  Strangely, I did not have problems until I worked with iSCSI.
