VM connection to iSCSI

Hi people,

I have an ESX 3.5 infrastructure connected to some EqualLogic storage.  Each host has two 1 Gb connections into the private iSCSI network.  I need to build some new file servers with significant storage allocations (up to 2 TB).  I had considered simply creating big VMDKs for the file servers, or using RDMs, but I'm leaning more towards using the software iSCSI initiator inside the virtual machine and connecting out to the LUN directly.

My question is, can I just create a new virtual machine port group on the same vSwitch that the host uses for its iSCSI connection?  The existing iSCSI vSwitch has a service console port and a VMkernel port and is not visible to virtual machines.  Is there any problem with the host and the guest sharing the same iSCSI connection?  Wouldn't it be better to add additional NICs for the VMs to use for iSCSI?  So far the virtual machines have all used VMDKs, so I have not yet crossed the bridge of needing a virtual machine to connect to iSCSI.

Or is the guest iSCSI route not best practice for this scenario, and should I fall back to VMDK/RDM?

Any suggestions are appreciated.  I'm hoping to migrate the infrastructure to 4.1 this summer, if that has an impact on the recommendations.

Thank you!

Hello and welcome to the forums.

My question is, can I just create a new virtual machine port group on the same vSwitch that the host uses for its iSCSI connection?

It depends.  If you have spare network adapters in each host, you can create a separate vSwitch and use it for guest iSCSI traffic.  I prefer this approach when the resources are available.  There is some guidance on this separation in the 'SAN best practices for deploying Microsoft Exchange with VMware vSphere' document.  That being said, you can also just create a new VM port group on your existing iSCSI vSwitch and go that route.
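If you do go the dedicated-vSwitch route, the service console commands look roughly like this (a sketch only, for classic ESX 3.5/4.x; vSwitch2, vmnic3, the port group name and the VLAN ID are example values, not yours):

# Create a new vSwitch dedicated to guest iSCSI traffic
esxcfg-vswitch -a vSwitch2
# Attach a spare physical NIC on the iSCSI network as its uplink
esxcfg-vswitch -L vmnic3 vSwitch2
# Add a virtual machine port group the file server VMs can connect to
esxcfg-vswitch -A "Guest-iSCSI" vSwitch2
# Tag the iSCSI VLAN on that port group if the physical switch ports are trunked
esxcfg-vswitch -v 20 -p "Guest-iSCSI" vSwitch2

The guest would then run its own software initiator (for example the Microsoft iSCSI initiator) against the EqualLogic group IP over that port group.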

The existing iSCSI vSwitch has a service console port and a VMkernel port and is not visible to virtual machines. Is there any problem with the host and the guest sharing the same iSCSI connection?

It might make things a little more difficult to troubleshoot, but it is certainly a simpler configuration to implement.

Wouldn't it be better to add additional NICs for the VMs to use for iSCSI? So far the virtual machines have all used VMDKs, so I have not yet crossed the bridge of needing a virtual machine to connect to iSCSI.

More isn't always better.  It really depends on the current (and new) usage, but it probably wouldn't be a bad idea.

Or is the guest iSCSI route not best practice for this scenario, and should I fall back to VMDK/RDM?

The 2 TB minus 512 bytes limit is a good reason to consider the guest initiator.  That approach gets around the 2 TB minus 512 bytes limit, but it creates new complexities around VM portability, backup and other operational tasks.  Take these things into consideration as well.

Good luck!

Tags: VMware

Similar Questions

  • After directly connecting the host (initiator) to iSCSI, the host cannot ping the iSCSI server (target), but the target can ping the host. Why?

    After directly connecting a vSphere 4.0 host (initiator) to the iSCSI server (target), the host cannot ping the target, but the target can ping the host.  And yet iSCSI works well; I can create and use the iSCSI disk.  Why? It confuses me.

    Thank you!

    Geoarge,

    iSCSI traffic uses a VMkernel port, so instead of using the 'ping' command, use 'vmkping'.
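    For example (a quick sketch; 10.0.0.10 stands in for your target's IP address):

    # Basic reachability of the iSCSI target over the VMkernel network stack
    vmkping 10.0.0.10
    # Jumbo frame test without fragmentation, only if MTU 9000 is configured end to end
    vmkping -d -s 8972 10.0.0.10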

    André

  • How to connect to iSCSI SAN without compromising security

    Hello:

    How can we enable server OSes (VMs or physical hosts) to connect to and mount iSCSI LUNs without compromising the security of our ESX hosts?  We have a few Microsoft servers that need to use iSCSI initiators to mount LUNs for MSCS.   We cannot use the ESX initiators because VMware doesn't support iSCSI virtual storage with MSCS.  We have already read all the documentation and spoken with VMware support, so we know that our only option is to use the iSCSI initiators in the Microsoft servers to connect to the LUNs.

    Our concern is security.  If we let the servers use their iSCSI initiators to connect to the SAN, won't they also have access to our service consoles and VMkernels via the iSCSI network?  ESX requires a service console port and a VMkernel port on the iSCSI network for each ESX box that you want to use the ESX initiator on.  We are struggling to understand how to connect any machine (virtual or physical) to the iSCSI network to mount LUNs without exposing our service consoles and VMkernels.  I know that the best practice is to keep VMs off this network for this exact reason, but of course many organizations also have physical servers (UNIX, Windows) that need to access their iSCSI SAN.  How do people handle this?  How much of a security problem is it?  Is there a way to secure the service console and VMkernel ports while still allowing non-ESX hosts access to the SAN?  I know that many of you face this exact situation in your organizations, please help.  Obviously it's not the case that nobody uses their iSCSI SAN for anything except the ESX hosts.  Thank you very much.

    James

    Hello

    Check out this blog

    Using a firewall is certainly a step in the right direction for this. If you can't have separate iSCSI networks, then you will need to isolate the non-ESX/VCB iSCSI nodes using other mechanisms. I would certainly opt for firewalls, or reduce the redundancy to just 2 NICs per network instead of 4 on a single network.

    Does anyone have any other suggestions? Surely many ESX users share their iSCSI SAN with a lot of different systems and operating systems. Thanks again.

    They do, but then they are not securing their iSCSI networks between their ESX hosts, VMs and other physical systems. You have asked a very important question, which is how to connect to an iSCSI SAN without compromising security. The options currently are:

    1. Physically isolate

    2. Isolate using firewall

    Given that ESX speaks iSCSI in clear text and does not support IPsec for iSCSI, you have very limited options available to you. The firewall you use, and the iSCSI load you send through it, will determine whether there is any added latency. Yes, it is an extra cost, but so is an independent network of switches/ports/etc.

    Best regards
    Edward L. Haletky
    VMware communities user moderator, VMware vExpert 2009
    ====
    Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
    Blue Gears and SearchVMware Pro articles - Top Virtualization Security Links - Virtualization Security Round Table Podcast

  • EqualLogic PS6100X: direct (dual) iSCSI connections to 3 VMware ESX hosts

    Hello

    Due to cost reductions, we are integrating:

    1 x Dell EqualLogic PS6100X (2 controllers with 4 ports each)

    3 x Dell PowerEdge R720 (each with 2 ports dedicated to SAN storage traffic)

    vSphere 5.5 (shared storage on the SAN)

    No SAN switches are used. Each host has a dual direct connection (one to each SAN controller) using the software iSCSI initiator.

    We did this before with a Dell MD3200i, which also has 2 controllers and 8 ports, so we expected no problems.

    But now that I have read up on the EqualLogic, I'm starting to become uncertain whether this setup will work.

    I know that this is not recommended, but at this point my only concern is whether it will work (even with reduced performance).

    Can you please give me some advice on this?

    Best regards

    Joris

    P.S. If this is probably NOT possible, what would be the best/lowest-cost way to make it possible?

    I've seen this fail at work: dropped connections, BSODs in virtual machines, with a single host and no switch.  iSCSI traffic tends to be very bursty, which is where having the right switch pays you back.

    Re: the 3750-X, those are good switches; there are some tuning settings that need to be addressed.  Also, to solve a flow control problem, download the most current IOS firmware.

    For such a small group / number of servers, stacked 2960s would be OK.  You may run into problems later if you need to scale this environment.  The 2960 lacks buffer allocation, so you want to start without jumbo frames, get everything stable and in line with good practices, and maybe later try enabling them.  The 3750-X works very well with jumbo frames and flow control active, BTW.

    Either of these choices is better than no switch by miles.  3750-X switches were pretty expensive last time I looked, unless you already have a few.  Sharing them with the rest of the traffic is not optimal, but at least put all iSCSI traffic on its own VLAN.

    The 4948 is a good choice, as are some high-end HP switches.  Stay away from the older ones, like the 2810 or 2824/48.  They seem to be there for cheap $$ but are designed for office GbE, not iSCSI GbE.

    Kind regards

  • How to configure NIC bonding for iSCSI

    I have a few PowerEdge 2950s with four NICs in them, and I'm trying to use NIC bonding on 3 of them for iSCSI.

    Are there any articles or KB links explaining how to do this (if it is possible)?

    Or just point me to the right section in the vSphere Client and I can figure it out... I have found nothing from poking around, so I wonder if it's just not possible?

    Hello

    The best answer here is that bonding NICs for iSCSI traffic is not supported, and the various multipathing guides are where you should be looking.
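    For reference, the supported way to use several NICs for iSCSI on the ESXi side is software iSCSI port binding rather than bonding. A rough sketch on ESXi 5.x (vmk1/vmk2 and vmhba33 are example names, not yours; each VMkernel port must be backed by exactly one active uplink):

    # Bind two single-uplink VMkernel ports to the software iSCSI adapter
    esxcli iscsi networkportal add --nic vmk1 --adapter vmhba33
    esxcli iscsi networkportal add --nic vmk2 --adapter vmhba33
    # Verify the bindings
    esxcli iscsi networkportal list --adapter vmhba33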

  • Fabric Interconnect to iSCSI array connection

    I need to connect my fabric interconnects to a new iSCSI array.  There are a number of UCS blades that need to connect to iSCSI LUNs. The FIs are currently connected to the rest of the network through Nexus 7000 switches.   Should I plug the FIs directly into the iSCSI array, or go through the 7000s and then to the array?

    Hi, this is more a matter of design; you need to think about what will need access to the iSCSI storage array. For example, if only UCS blades will have access to this storage group, you can consider plugging it in directly, since otherwise the iSCSI traffic must pass through your N7Ks when both fabrics are active. If you want another type of server, such as HP or IBM, to access the storage, you should consider connecting the storage array to the N7Ks, especially if your fabrics are configured in end-host mode. Again, this will depend on your current implementation.

  • NAS to NAS backup for iSCSI volumes

    Hello

    I have a PX12-400r NAS, with some volumes configured as iSCSI volumes, connected to different servers.

    I would like to find a NAS to NAS backup solution, in order to back up or synchronize the entire volume to another PX12-400r NAS.

    I want to do it using only the NAS, in a way that is transparent to the servers connected to the iSCSI volumes, and without a third machine.

    Regards

    Gabriele

    Hello ecohmedia,

    For NAS to NAS (data) transfers, your only option is copying from share to share.
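    Purely as an illustration of what share-to-share copying can look like when both units expose a file share reachable over SSH (whether the PX12-400r offers this natively is an assumption here, and the paths and hostname are placeholders):

    # One-way sync of the share holding the iSCSI volume files to the second NAS
    rsync -av --delete /path/to/iscsi-volume-share/ backup-nas:/path/to/backup-share/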

  • Mapping iSCSI session/connection IDs to MPIO paths

    Hello

    I'm experimenting with iSCSI and MPIO. I get the iSCSI connection information from the "Get-IscsiConnection" cmdlet. It gives the target portals to which the initiator is connected. Then I have the "mpclaim -v" command, which gives me the current state of the paths. In my case, I have one active/optimized path and several standby paths. This MPIO path information is reported in terms of the path ID. I want a way to find out which connection/target portal a given path ID corresponds to. In the GUI, the MPIO tab of the iSCSI initiator window has this information. Is there a way to get this info through PowerShell?

    Reference for the "mpclaim -v" output:

    MPIO Storage Snapshot on Tuesday, 05 May 2009, at 14:51:45.023
    Registered DSMs: 1
    ================
    +--------------------------------|-------------------|----|----|----|---|-----+
    |DSM Name                        |      Version      |PRP | RC | RI |PVP| PVE |
    |--------------------------------|-------------------|----|----|----|---|-----|
    |Microsoft DSM                   |006.0001.07100.0000|0020|0003|0001|030|False|
    +--------------------------------|-------------------|----|----|----|---|-----+
    
    Microsoft DSM
    =============
    MPIO Disk1: 02 Paths, Round Robin, ALUA Not Supported
            SN: 600D310010B00000000011
            Supported Load-Balancing Policy Settings: FOO RR RRWS LQD WP LB
    
        Path ID          State              SCSI Address      Weight
        ---------------------------------------------------------------------------
        0000000077030002 Active/Optimized   003|000|002|000   0
            Adapter: Microsoft iSCSI Initiator...              (B|D|F: 000|000|000)
            Controller: 46616B65436F6E74726F6C6C6572 (State: Active)
    
        0000000077030001 Active/Optimized   003|000|001|000   0
            Adapter: Microsoft iSCSI Initiator...              (B|D|F: 000|000|000)
            Controller: 46616B65436F6E74726F6C6C6572 (State: Active)
    
    MPIO Disk0: 01 Paths, Round Robin, ALUA Not Supported
            SN: 600EB37614EBCE8000000044
            Supported Load-Balancing Policy Settings: FOO RR RRWS LQD WP LB
    
        Path ID          State              SCSI Address      Weight
        ---------------------------------------------------------------------------
        0000000077030000 Active/Optimized   003|000|000|000   0
            Adapter: Microsoft iSCSI Initiator...              (B|D|F: 000|000|000)
            Controller: 46616B65436F6E74726F6C6C6572 (State: Active)
    
    Microsoft DSM-wide default load-balancing policy settings: Round Robin
    
    No target-level default load-balancing policy settings have been set.

    The reference for the iSCSI connection and session info:
    
    PS C:\> Get-IscsiConnection
    
    ConnectionIdentifier : ffffe001e67f4020-29f
    InitiatorAddress     : 0.0.0.0
    InitiatorPortNumber  : 44996
    TargetAddress        : 10.120.34.12
    TargetPortNumber     : 3260
    PSComputerName       :

    ConnectionIdentifier : ffffe001e67f4020-2a0
    InitiatorAddress     : 0.0.0.0
    InitiatorPortNumber  : 46020
    TargetAddress        : 10.120.34.13
    TargetPortNumber     : 3260
    PSComputerName       :

    ConnectionIdentifier : ffffe001e67f4020-2a1
    InitiatorAddress     : 0.0.0.0
    InitiatorPortNumber  : 47044
    TargetAddress        : 10.120.34.14
    TargetPortNumber     : 3260
    PSComputerName       :

    ConnectionIdentifier : ffffe001e67f4020-2a2
    InitiatorAddress     : 0.0.0.0
    InitiatorPortNumber  : 46788
    TargetAddress        : 10.120.34.15
    TargetPortNumber     : 3260
    PSComputerName       :

    PS C:\>

    I basically want to know which target portal the path ID "0000000077030002" corresponds to.
    

    Hello

    Please post your question on the TechNet forums:

    Here is the link:

    https://social.technet.Microsoft.com/forums/Windows/en-us/home?category=w7itpro

    Kind regards

  • Hyper-V VSS on iSCSI

    Is it possible to activate VSS over an iSCSI connection? I'm backing up a remote server's hard drive, which is hosted on a SAN and connected via iSCSI. The backup client can see the server's virtual C: drive but not the E: drive, which is on the SAN.

    All servers are running Windows Server 2012 R2 Datacenter.

    Thank you


    This issue is beyond the scope of this site and should be posted on TechNet or MSDN:

    http://social.msdn.Microsoft.com/forums/en-us/home

  • iSCSI storage with UCS

    Hi all

    Can I ask a question regarding connecting iSCSI storage for use with UCS?  We are looking at Nimble iSCSI-based storage and want to understand the best practice recommendations on how to connect it to UCS to get the best level of performance and reliability/resilience, etc.

    Another question, more specifically, is how VMware deals with the loss of connectivity on one path (where dual connections are set up from the array to the fabrics); would it re-route traffic to the surviving path?

    Any suggestion would be appreciated.

    Kassim

    Hello Kassim,

    Currently the Nimble iSCSI storage is certified with UCS firmware version 2.0.3.

    http://www.Cisco.com/en/us/docs/unified_computing/UCS/interoperability/matrix/r_hcl_B_rel2.03.PDF

    The following guide can serve as a reference.

    Cisco Virtualization Solution with Nimble Storage Reference Architecture

    http://www.Cisco.com/en/us/solutions/collateral/ns340/ns517/ns224/ns836/ns978/guide_c07-719522.PDF

    In the above setup, ESXi software iSCSI multipathing with the Round Robin PSP is implemented to take care of I/O load balancing and failover across the two paths.
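    For reference, setting Round Robin as the default path selection policy and checking the result can be done from the ESXi shell along these lines (a sketch only; whether your Nimble volumes are claimed by VMW_SATP_ALUA is an assumption, so check the device list first):

    # See which SATP/PSP currently claims each device
    esxcli storage nmp device list
    # Make Round Robin the default PSP for that SATP (ALUA assumed here)
    esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR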

    HTH

    Padma

  • iSCSI vNIC etc.

    Is an iSCSI vNIC created only to allow SAN boot? Once the OS is installed, will these vNICs not show up, and only the overlay NIC will show?

    I'm trying to understand which vNICs are necessary to allow SAN boot and iSCSI SAN connectivity once the operating system is loaded.

    Correct.

    Yes, it is supported.  The iBFT functionality of the vNIC (I assume you are using that adapter) will allow the host to boot from iSCSI, and then, to access iSCSI SAN data LUNs over the network (for example a VMFS volume), the OS will use the software initiator built into ESX.

    Robert

  • MSA 2040 Direct attached iSCSI

    We are implementing a vSphere 5.5 environment with 3 hosts directly connected to an MSA 2040 via iSCSI.  Is attaching the hosts directly to the MSA a problem, or should we use switches?  Our equipment provider said:

    "in order for the volumes of VMWare to have access through shared storage needs through a switch. You can run questions where if they are directly attached, then the VMDK on the single host may not be seen or accessible on other hosts (high availability or Vmotion). »

    Also

    "Support from direct attach to the msa via iSCSI is dependent on the BONE."  Some OS support other don't. MSA does not directly connect to Vmware via iSCSI. "

    I talked to HP directly and they claim that direct connection via iSCSI is supported with the MSA 2040 and VMware.

    Is anyone running this configuration that can put my mind at ease?

    Thank you.

    Hello

    Yes, you can use a directly attached MSA 2040.

    I use this configuration myself with 10 Gb DAC cables, and have also sold this solution to customers.

    Please note that each host must have 2 connections to the SAN (if the SAN has two controllers). Each host must have a connection to each controller on the SAN.

    Up to 4 directly attached hosts are supported (the controllers have 4 ports each).

    Please note that multiple subnets must be used and configured to allow correct path discovery. A BIG note to stress as well: DO NOT USE iSCSI PORT BINDING!

    Be sure to enable Round Robin. Please note that you will see only one active path performing I/O per datastore (since only one controller owns a given volume), but in the event of a broken cable or a controller failure, it is switched over to the other controller.

    I have used this setup for a few years and I love it!
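    Regarding the Round Robin note above, the policy and path states can be checked per LUN from the ESXi shell, roughly like this (a sketch; the naa identifier is a placeholder for one of your MSA volumes):

    # Show the PSP and working paths for one device
    esxcli storage nmp device list --device naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    # Set Round Robin on that device if it is not already the policy
    esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR
    # List all paths and their states (active / standby / dead)
    esxcli storage nmp path list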

  • ESXi 6.0 U1 to U2 upgrade, iSCSI issues

    Hello

    First post, so I'll try and summarize my thoughts and what I did with troubleshooting.  Please let me know if I left anything out or more information is required.

    I'm using the free ESXi 6.0 on a Supermicro X10SLL-F mATX board with an Intel Xeon E3-1231v3 CPU and 32 GB of DDR3 ECC UDIMM.  I use a 4 GB USB flash drive for boot, a 75 GB 2.5" SATA drive for local storage (i.e. /scratch) and part of a 120 GB SSD for host cache, as well as local storage.  The main datastores for the virtual machines are located on an iSCSI target (currently running FreeNAS 9.3.x).  This setup has worked great since installing ESXi 6.0 (June 2015), then 6.0 U1 (September 2015), and I recently made the leap to 6.0 U2.  I thought everything should be business as usual for the upgrade...

    However, after upgrading to 6.0 U2, none of the iSCSI volumes are "seen" by ESXi (vmhba38:C0:T0 & vmhba38:C0:T1), even though I can confirm that I can ping the iSCSI target and that the NIC (vmnic1), the vSwitch and the VMware iSCSI software adapter are all loaded.  I did not make any changes to the ESXi host or the iSCSI target before the upgrade to 6.0 U2.  It all worked beforehand.

    I then went digging in the logs; vmkernel.log and vobd.log both report that it is unable to contact the storage because of a network issue.  I also did some standard network troubleshooting (see VMware KB 1008083); everything passes, with the exception of the jumbo frame test.

    [root@vmware6:~] tail -f /var/log/vmkernel.log | grep iscsi

    [ ... ]

    2016-03-31T05:05:48.217Z cpu0:33248)WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba38:CH:0 T:0 CN:0: iSCSI connection is being marked "OFFLINE" (Event: 5)

    2016-03-31T05:05:48.217Z cpu0:33248)WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess [ISID: TARGET: (null) TPGT: 0 TSIH: 0]

    2016-03-31T05:05:48.217Z cpu0:33248)WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn [CID: 0 L: 10.xxx.yyy.195:41620 R: 10.xxx.yyy.109:3260]

    2016-03-31T05:05:48.218Z cpu4:33248)WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba38:CH:0 T:1 CN:0: iSCSI connection is being marked "OFFLINE" (Event: 5)

    2016-03-31T05:05:48.218Z cpu4:33248)WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess [ISID: TARGET: (null) TPGT: 0 TSIH: 0]

    2016-03-31T05:05:48.218Z cpu4:33248)WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn [CID: 0 L: 10.xxx.yyy.195:32715 R: 10.xxx.yyy.109:3260]

    [root@vmware6:~] tail -f /var/log/vobd.log

    [ ... ]

    2016-03-31T05:05:48.217Z: [iscsiCorrelator] 1622023006us: [vob.iscsi.connection.stopped] iScsi connection 0 stopped for vmhba38:C0:T0

    2016-03-31T05:05:48.217Z: [iscsiCorrelator] 1622023183us: [vob.iscsi.target.connect.error] vmhba38 @ vmk1 could not connect to iqn.2005-10.org.freenas.ctl:vmware-iscsi because of a network connection failure.

    2016-03-31T05:05:48.217Z: [iscsiCorrelator] 1622002451us: [esx.problem.storage.iscsi.target.connect.error] Connection to iSCSI target iqn.2005-10.org.freenas.ctl:vmware-iscsi on vmhba38 @ vmk1 failed. The iSCSI initiator failed to establish a network connection to the target.

    2016-03-31T05:05:48.218Z: [iscsiCorrelator] 1622023640us: [vob.iscsi.connection.stopped] iScsi connection 0 stopped for vmhba38:C0:T1

    2016-03-31T05:05:48.218Z: [iscsiCorrelator] 1622023703us: [vob.iscsi.target.connect.error] vmhba38 @ vmk1 could not connect to

    [root@vmware6:~] ping 10.xxx.yyy.109
    PING 10.xxx.yyy.109 (10.xxx.yyy.109): 56 bytes
    64 bytes from 10.xxx.yyy.109: icmp_seq=0 ttl=64 time=0.174 ms
    64 bytes from 10.xxx.yyy.109: icmp_seq=1 ttl=64 time=0.238 ms
    64 bytes from 10.xxx.yyy.109: icmp_seq=2 ttl=64 time=0.309 ms

    --- 10.xxx.yyy.109 ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 0.174/0.240/0.309 ms

    [root@vmware6:~] vmkping 10.xxx.yyy.109
    PING 10.xxx.yyy.109 (10.xxx.yyy.109): 56 bytes
    64 bytes from 10.xxx.yyy.109: icmp_seq=0 ttl=64 time=0.179 ms
    64 bytes from 10.xxx.yyy.109: icmp_seq=1 ttl=64 time=0.337 ms
    64 bytes from 10.xxx.yyy.109: icmp_seq=2 ttl=64 time=0.382 ms

    --- 10.xxx.yyy.109 ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 0.179/0.299/0.382 ms

    [root@vmware6:~] nc -z 10.xxx.yyy.109 3260
    Connection to 10.xxx.yyy.109 3260 port [tcp/*] succeeded!

    [root@vmware6:~] vmkping -d -s 8972 10.xxx.yyy.109
    PING 10.xxx.yyy.109 (10.xxx.yyy.109): 8972 data bytes

    --- 10.xxx.yyy.109 ping statistics ---
    3 packets transmitted, 0 packets received, 100% packet loss

    I then began to look at the NIC drivers, thinking maybe something got screwed up during the upgrade; it would not be the first time I've seen problems with the out-of-box drivers supplied by VMware.  I checked the VMware HCL for IO devices; the physical NICs used in that host are Intel I217-LM (net-e1000e), Intel I210 (net-igb) and Intel 82574L (net-e1000e).  The VMware HCL lists that the driver for the I217-LM & 82574L should be version 2.5.4-6vmw, and the I210 should be 5.0.5.1.1-5vmw.  When I went to check, I noticed that it was using a different version of the e1000e driver (the I210 driver version was correct).

    [root@vmware6:~] esxcli software vib list | grep e1000e

    Name        Version                        Vendor  Acceptance Level  Install Date
    ----------  -----------------------------  ------  ----------------  ------------
    net-e1000e  3.2.2.1-1vmw.600.1.26.3380124  VMware  VMwareCertified   2016-03-31

    esxupdate.log seems to indicate that e1000e 2.5.4-6vmw from VMware should have been loaded...

    [root@esxi6-lab:~] grep e1000e /var/log/esxupdate.log

    [ ... ]

    # ESXi 6.0 U1 upgrade

    2015-09-29T22:20:29Z esxupdate: BootBankInstaller.pyc: DEBUG: about to write payload 'net-e100' of VIB VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585 to '/tmp/stagebootbank'

    [ ... ]

    # ESXi 6.0 U2 upgrade

    2016-03-31T03:47:24Z esxupdate: BootBankInstaller.pyc: DEBUG: about to write payload 'net-e100' of VIB VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585 to '/tmp/stagebootbank'

    The e1000e 3.2.2.1-1vmw driver is the recommended driver for ESXi 5.5 U3, not ESXi 6.0 U2!  As these drivers are listed as "inbox", I don't know if there is an easy way to download a vendor-supplied driver (VIB) for it, or even if one exists.  I found an article online about manually updating drivers on an ESXi host using esxcli, so I tried checking for and installing newer e1000e drivers.

    [root@vmware6:~] esxcli software vib update -n net-e1000e -d https://hostupdate.VMware.com/software/VUM/production/main/VMW-Depot-index.XML

    Installation Result

    Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.

    Reboot Required: true

    VIBs Installed: VMware_bootbank_net-e1000e_3.2.2.1-2vmw.550.3.78.3248547

    VIBs Removed: VMware_bootbank_net-e1000e_3.2.2.1-1vmw.600.1.26.3380124

    VIBs Skipped:

    As you can see, there is a newer version and I installed it.  However, after restarting, it still did not fix the issue.  I even went so far as to force a reset of the CHAP password used for authentication and updated it on both sides (iSCSI initiator and target).  At this point, I wonder if I should somehow downgrade to the 2.5.4-6vmw driver (how?), or if there is another issue at play here.  Am I going down a rabbit hole with my idea that this is a NIC driver problem?

    Thanks in advance.

    --G

    esxi6-u2-iscsi.jpg

    I found a solution: downgrade to the Intel e1000e 2.5.4-6vmw driver from ESXi 6.0 U1.  For the file name, see VMware KB 2124715.

    Steps to follow:

    1. Sign in to https://my.vmware.com/group/vmware/patch#search
    2. Select 'ESXi (Embedded and Installable)' and '6.0.0'.  Click the Search button.
    3. Look for the release named update-from-esxi6.0-6.0_update01 (released 10/09/2015). Place a check next to it and then click the Download button to the right.
    4. Save the file somewhere locally. Open the ZIP archive once downloaded.
    5. Navigate to the 'vib20' directory. Extract the net-e1000e folder.  Inside is the Intel e1000e driver for ESXi 6.0 U1: VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585.vib
    6. Transfer this to your ESXi server (either via SCP or through the datastore browser in the vSphere Client). Save it somewhere you will remember (i.e. /tmp).
    7. Connect via SSH to the ESXi 6.0 host.
    8. Issue the command to install it: esxcli software vib install -v /tmp/VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585.vib
    9. Restart the ESXi host.  Once it comes back online, your iSCSI datastore(s)/volumes should return.  You can check by issuing 'df -h' or 'esxcli storage core device list' at the CLI prompt (the vSphere Client also works); a quick way to double-check the driver version is sketched just below this list.
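    To double-check which e1000e driver is actually loaded after the reboot, something like the following should work (vmnic0 is a placeholder for one of the e1000e NICs):

    # Confirm the installed VIB version
    esxcli software vib list | grep e1000e
    # Confirm the driver name/version bound to the NIC
    esxcli network nic get -n vmnic0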

    It took some time to find the correct file(s), steps and commands to use.  I hope someone else can benefit from this.  I love what VMware provides for virtualization, but lately it seems like their QA department has been out to lunch.

    I still don't have a definitive explanation as to why ESXi 5.5 U3 Intel e1000e drivers were used when going from ESXi 6.0 U1 to U2.  Maybe someone with more insight, or someone from VMware support, can provide one.

    & out

    --G

  • HP BL490c G6 / NC553M / Virtual Connect Flex-10 / jumbo frames not passing

    Hello everyone,

    So, we are adding some additional ESXi 5.5 hosts to our cluster.  The host hardware configuration looks like this:

    * BL490C G6

    * Dual Proc

    * 72-144 GB of RAM

    * Integrated NC531i NIC

    * Other NC553M NIC

    Inside the enclosure, we have four Flex-10 modules to provide connectivity to the onboard NIC as well as to the NC553M NIC in mezzanine slot 1.  The reason for the NC553M NIC is that it supports extended 9K jumbo frames; the NC531i only supports 4K frames.  This is the exact configuration for a G6 blade in the HP cookbook, on page 10 of http://h20628.www2.hp.com/km-ext/kmcsdirect/emr_na-c02533991-10.pdf

    On the Virtual Connect side, I have the four modules stacked with 10 m SFP DAC cables to provide connectivity, and then we have a shared uplink set fixed to module 1 and module 2 going to the switch for this enclosure at 10 Gb.  On that switch, we made sure to allow jumbo frames on the iSCSI VLANs.

    I made sure my vSwitch and adapter MTU are all set to 9000.

    vmware-iscsi-01.png

    vmware-iscsi-02.png

    vmware-iscsi-03.png
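    For what it's worth, the same MTU settings can also be double-checked from the ESXi shell along these lines (a sketch; vSwitch1 is a placeholder for your iSCSI vSwitch name):

    # MTU configured on the standard vSwitch
    esxcli network vswitch standard list --vswitch-name vSwitch1
    # MTU configured on each VMkernel interface
    esxcli network ip interface list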

    The problem is that I cannot vmkping anything above a standard packet size on the iSCSI VLAN without getting 'message too long' when I say do not fragment.  As you can see below, a normal ping works fine, but if I try something like a 2K ping, it fails.

    vmware-iscsi-04.png

    The switch it uplinks to is an HP ProCurve 5400zl R2.  I configured a 1 Gb port untagged on the iSCSI VLAN and had no problem assigning an IP address to my Windows laptop, enabling 9K jumbo frames and performing jumbo frame pings across the SAN network.  So that tells me that the switch on the SAN side seems fine.

    So what in the world could cause this problem?  Some of the things rolling around in my head are:

    • Configure another VMware blade inside the chassis the same way as this one.  See if I can vmkping jumbo frames to the other blade's VMware iSCSI NIC; if that works, it would tell me that jumbo frames work inside VMware and inside the Virtual Connect / c7000 environment itself.  The Virtual Connect documentation swears that jumbo frames are "turned on by default".  I would like to find a way to confirm that from the CLI or something.
    • Configure a dedicated uplink for the iSCSI VLANs on the Flex-10 modules for iSCSI traffic, rather than a shared uplink set.  This shouldn't be the problem, though.  This is scenario 2 of the HP Virtual Connect iSCSI Cookbook.
    • Configure another blade with the same hardware, but load something like CentOS on it and see if it can push jumbo frames over this NIC and network.  If it can and VMware cannot, that would tell me something is wrong with my VMware configuration.

    I would really appreciate any help in solving this configuration problem.  We really need to get this cluster running inside this enclosure so that we can continue with a migration project.

    Sincerely,

    Jonathan

    I have seen that sometimes a host needs a real reboot after enabling jumbo frames, did you do that?

    The 'message too long' error comes from the host's own IP stack and not from a network device in between; your packet is never leaving the host.

    Make sure to specify the VMkernel interface with the -I switch when you run vmkping:

    1. Can you vmkping the host on its own localhost and VMkernel interface IP? For example:

    # vmkping -d -s 8972 127.0.0.1

    # vmkping -I vmk1 -d -s 2000 [destination IP]

    2. Try tests with the esxcli network diag ping utility, for example:

    # esxcli network diag ping --df --interface vmk1 --ipv4 --size 2000 --host [destination IP]

    3. Your iSCSI network is on a completely different subnet from all other VMkernel interfaces (e.g. management), and it is a flat, non-routed layer 2 domain, right?

    4. Always make sure that your NIC drivers and firmware are up to date.

    5. The output of the following commands could be useful as well; if you redact information such as IPs, please do so in a consistent way so the relationships stay intact:

    # esxcli network nic list

    # esxcli network ip interface list

    # esxcli network ip interface ipv4 get

    # esxcli network ip route ipv4 list

    # esxcli network vswitch dvs vmware list

    # esxcli network vswitch standard list

    # esxcli network nic list | awk '{print $1}' | grep vmnic | while read nic; do esxcli network nic get -n $nic; done

    # vmware -v

  • Are VMFS3-formatted iSCSI datastores compatible with ESXi 5.5?

    Hi all

    I'm currently building an ESXi 5.5 environment to run in parallel with my existing 4.1 environment, and I need the new 5.5 hosts to connect via iSCSI to a couple of VMFS3-formatted LUNs.

    I can see the LUNs via the iSCSI storage adapter, but when I try to mount them it asks me to format them first. I wanted to confirm whether anyone else has had this problem, or can confirm that VMFS3 is fully compatible with ESXi 5.5 connected via iSCSI.

    Thanks in advance.

    It looks like the MTU on my NICs was set too high. Once I lowered it to around 1500, the datastores began to present correctly.

    Thanks for the help, I think I have a handle on it.

    Thank you, MP.
