OK, why iSCSI?

Can someone tell me why one of my servers sees an iSCSI target while the other, configured the same way, does not? The iSCSI target stores my VMs.

It would be really helpful to know what the SANs are, how they are configured (with CHAP or not?), and also how the networks are configured for iSCSI on both hosts... Do you use dynamic or static iSCSI target discovery?
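A quick, hedged way to compare the two hosts is to list their discovery configuration and what each one actually discovered (this assumes a recent ESXi with the esxcli iscsi namespace; on classic ESX 4.x the vicfg-iscsi equivalents apply, and the vmhba names will differ):

    esxcli iscsi adapter discovery sendtarget list    # dynamic (SendTargets) discovery addresses
    esxcli iscsi adapter discovery statictarget list  # static discovery entries
    esxcli iscsi adapter target portal list           # target portals each host actually sees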

VMware VCP4

Please consider awarding points for "helpful" or "correct" answers.

Tags: VMware

Similar Questions

  • Why vmkernel port for iscsi and vmotion services?

    Hello

    Just a quick question.

    I was trying to understand why we need a VMkernel port when we want to use features such as vMotion or iSCSI.

    Why not use them with a Service Console port or a VM/management traffic port group?

    No particular reason.

    Thank you

    Yes. You don't really want all the traffic produced by DRS and vMotion actions mixed into your production (possibly time-critical) traffic.

    Regards

  • Host (initiator) directly connected to an iSCSI target cannot ping the iSCSI server (target), but the target can ping the host - why?

    A vSphere 4.0 host (initiator) directly connected to an iSCSI server (target) cannot ping the target, but the target can ping the host.  And yet iSCSI works well - I can create and use the iSCSI disk. Why? It has me confused.

    Thank you!

    George,

    iSCSI traffic uses a VMkernel port, so instead of using the 'ping' command, use 'vmkping'.
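    As a minimal sketch of that check from the ESXi shell (the address below is a placeholder for your target's IP):

        vmkping 192.168.1.100             # ping the target via the VMkernel network stack
        vmkping -d -s 8972 192.168.1.100  # optionally verify jumbo frames end to end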

    André

  • Why several vmkernels for iSCSI multipath?

    I'm a little confused about iSCSI multipathing. What is the reason for creating two VMkernel interfaces and then using command-line tools to bind them to a vmhba? How is that different from using a software iSCSI initiator on a vSwitch with multiple vmnics connected to different physical switches?

    There is the same discussion in a Russian VMUG thread.

    iSCSI multipathing is not about NIC teaming; it is about pairs - initiators (vmk interfaces) and targets.

    If you have two targets:

    • and you only need failover - 1 vmk is enough (NICs in Active/Standby mode)
    • if you need load balancing
      • and you can use link aggregation + IP hash - 1 vmk is sufficient (PSP is Round Robin and NICs in Active/Active mode)
      • if you cannot use link aggregation - 2 vmks are needed (see the port-binding sketch below).
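    A minimal sketch of the command-line binding step mentioned above (vmhba33, vmk1 and vmk2 are example names; on ESX/ESXi 4.x the older "esxcli swiscsi nic add" form applies instead):

        esxcli iscsi networkportal add -A vmhba33 -n vmk1
        esxcli iscsi networkportal add -A vmhba33 -n vmk2
        esxcli iscsi networkportal list -A vmhba33   # verify both vmks are bound to the software iSCSI vmhba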
  • Microsoft iSCSI is not currently running

    Original title: "Microsoft iSCSI"

    I realized today that Microsoft iSCSI is not running on my computer. I do not understand why this is; it's just that when I clicked on iSCSI, it said it needs to be running to see it. We have been having internet connection problems in the evenings (during the day it is fine, but from 8-9 pm it is impossible to get an internet connection), so I tried searching through the computer system... and found iSCSI. Do I need to run it, or should I leave it as it is? My computer is running Windows Vista SP2.

    Thank you.

    Hello panda98,

    Microsoft iSCSI is not something you use to access the internet. It actually has nothing to do with the Internet problems you have between 8 and 9 pm.  It is a service used to facilitate the transfer of data over intranets and to manage storage over long distances.

    You can follow the link below for more information:

    Click here

    Regarding the Internet problem you are experiencing, could you please tell me exactly what is happening, and let me know if you get an error message when you try to access the Internet.

    Thank you
    Irfan H, Support Engineer, Microsoft Answers. Visit our Microsoft Answers Feedback Forum and let us know what you think.

  • iSCSI replication delays

    Hi guys,

    Basically, I have a support call raised with Compellent support. We have a 1 Gb link between two sites, and we replicate some of the volumes across that link.

    We have Replays running at 06:00 and 18:00, and from 06:00-08:00 and 18:00-20:00 the link gets used as you would expect, but after that it just slows down and as a result replication is crawling.

    I did a test with 2 virtual machines: one is an iSCSI target and the other is an iSCSI initiator, and when I transfer data to that virtual iSCSI drive it totally maxes out the link, exactly as you would expect.

    My question is why the Compellent doesn't do this. Why does it pretty much stop maxing out the link at 8 o'clock every time, like clockwork?

    There is nothing in the QoS profiles; they have no limits set and they are defined for a 1 Gb link.

    Support's final assessment was that there was nothing they could do; it was a large amount of data being moved down to Tier 3 while replication was running.

    I told them that I could not accept that, because:

    7K disks are not a reason why the SAN could not max out a 1 Gb link.

    If the link were maxed out all the time and replication was still running late, I could fully understand it and it would be a network problem. But the SAN does not max out the link and does not come close.

    I just sent them a screenshot of the link showing that between 06:00-08:00 and 18:00-20:00 the link maxes out, which coincides with when the Replays are created. So clearly the SAN has the capability to max it out, but it seems to stop doing so at around 08:00 / 20:00.

    I am at a loss; replication averages about 7.5 MB/s, which is nothing.

    This graph shows that after 08:00 the transfer rate decreases, and at about 4:45 pm I start my iSCSI test and it maxes out again,

    and it shows that at around 06:00 and 18:00 it reaches a peak thanks to the Replays.

    OK, I can confirm that enabling the iSCSI Immediate Data option in Storage Center solved the problem. The speed has increased considerably. Now all the bandwidth is used across the link. So please try this.

    Right-click on each remote connection, and then change the settings as follows:

    and use these settings. It's what we went with too and it works a treat!

  • PS-M4110x - management network and iSCSI. Clarification of the configuration steps. The documentation is ambiguous at best.

    Hi guys,

    I'm having a lot of trouble translating what I know from the PS6000 array to the new M4110x.  Here's what I'm building:

    I want my iSCSI traffic to be completely isolated from all other traffic, and I want to use the CMC network to manage the array.  It should be a simple configuration, but the Dell documentation for it is worse than useless.  It confuses everyone who reads it.  Why is this?

    It seems that I should be able to assign the management IP addresses using the CMC, according to Dell:

    Step 1.  Initialize the storage

    * Once in the CMC, right-click on the storage blade and open the storage initialization GUI.

    * Member name: MY_SAN01
    * Member IP: 192.168.101.10
    * Member gateway: 192.168.101.254
    * Group name: MY_SAN
    * Group IP address: 192.168.101.11
    * Group membership password: groupadmin
    * Group admin password: groupadmin

    It sounds simple enough, and when I apply this I guess I will be disconnected from my M4110x, simply because it currently resides on a separate network (net 2 in the image above).  Now how do I set up the management IP address on my CMC network (net0 in the picture above)?

    Step 2.  Set the management port IP

    According to the Dell documentation, I have to:

    To set the management port:

    * Open a telnet (ssh) session on a computer or console that has access to the PS-M4110 array. The array must be configured beforehand.
     
    * Connect to the PS-M4110 module using the following racadm command: racadm server 15 connect
     
    * Connect to the PS-M4110 array as grpadmin

    Once I am in:

    Activate the management controller ports using the following commands in the CLI:
    0. > member select MY_SAN01
    1. (array1)> eth select 1
    2. (array1 eth_1)> ipaddress 10.10.10.17 netmask 255.255.255.0
    3. (array1 eth_1)> up
    4. (array1 eth_1)> exit
    5. (array1)> grpparams
    6. (array1(grpparams))> management-network ipaddress 10.10.10.17

    (array1(grpparams))> exit

    Is my interpretation correct?  Now my questions:

    1. In Step 2, substep 1 - how do I know which Ethernet interface to use?  Does Step 1 automatically assume eth0?

    2. Am I correct in using the same IP address for both Step 2 substep 2 and substep 6?  Or do I have to assign a different IP address for one of them?  10.10.10.18 maybe.

    3. Step 2, substep 6 - there doesn't seem to be a network mask; is that correct?

    4. Compared to the PS6000E - I set up an IP address for each controller (so 2) and then assigned an IP address to the group.  That's 3 IP addresses.  For this M4110 it seems that I have only one controller.  Is this correct?  The specifications make a point that there are 2 controllers.  What happened to the 2nd controller's IP address?

    BACKGROUND

    I intend to build a VMware cluster using the Dell multipathing module, and I have built one at the DSC, but a Dell technician set up the array initially and did not set up a dedicated management port.  The resulting configuration required routing management traffic over the iSCSI network.  That is not recommended, and I don't want to set it up that way.

    Currently this is a blocking problem and I need to get past it ASAP.  I work with a large system integrator in Texas and plan to order these systems built this way from them.  That means I must be able to explain to them how to proceed.  This issue is standing in the way of progress, and I really hope I can get a satisfactory response from this forum.  Thanks for any helpful answers.

    I think I have the answers to my own questions:

    1. YES.  Step 1 automatically assumes eth0.  There are TWO Ethernet interfaces; eth1 is disabled by default, and unless you use Step 2 to set the management port, this second Ethernet interface is never used.

    2. NO.  I can't use the same IP address for both lines.  In substep 6 I need to use a different IP address on the same network; 10.10.10.18 would work fine.

    3. YES.  That is correct.  Substep 6 assumes the network mask I entered in substep 2.

    4. This one is tricky.  There is NO WAY to configure active/active on these arrays.  There are 2 controllers, but one stays asleep unless the other fails.  The IP address is actually assigned to an abstraction layer that the array maintains.  When one controller fails, the other "wakes up" and simply starts accepting traffic; it doesn't care what its IP address is.

    One more point.  Now that my array is initialized and my interfaces are configured, I need to know which IP address to point my ESXi hosts at for their storage.  Use the group IP address assigned in Step 1.  That is 192.168.101.11 (there is a typo in the original post).
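    As a hedged sketch, pointing an ESXi host at that group IP with dynamic discovery looks like this (vmhba33 is an example adapter name; a rescan follows):

        esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.101.11:3260
        esxcli storage core adapter rescan -A vmhba33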

  • How many iSCSI connections should I see?

    Hi all

    Also posted here - "How many iSCSI connections should I see?" on Spiceworks - and here - communities.vmware.com/.../520622

    Running ESXi 5.5 and an EqualLogic PS4100X, hoping someone can point me in the right direction on something I noticed.

    Recently installed a MD3220i (https://community.spiceworks.com/topic/1114594-iscsi-port-binding-daft-question)

    Since then, startup and iSCSI connection can take quite a long time (as expected when using port binding with several iSCSI subnets), but my question is not related to that.

    I came in today to see what I could do to improve the startup time, perhaps by using static rather than dynamic discovery, and I noticed something weird.

    The EqualLogic has 2 NICs, 10.11.14.1 and 10.11.14.2. When using dynamic discovery I enter the group IP address (10.11.14.253) and I get 8 connections, one from each IP address on the ESXi hosts to each IP address on the EqualLogic, see screenshot iscsi1.png.

    All fine.

    If I then switch to static and put in 2 entries per datastore (one for each NIC on the EqualLogic) (see screenshot iscsi2.png), then what I see on the EqualLogic is 16 connections from the host. Each datastore is connected twice for each NIC on the host, each via both NICs on the EqualLogic, see screenshot iscsi3.png.

    I don't know if that's better or worse! I have more paths, so it should be better? Or do I have too many paths, and the system will get confused?

    Thank you

    Hello

    The best option to avoid the delay is to use something like the Broadcom iSCSI offload adapter.  The iSCSI S/W adapter would be bound to the NICs for the MD volumes (or bind nothing there and use several subnets instead), and the offload adapter would then handle the EQL traffic.   This prevents the long startup delay caused by all the bound NICs trying to reach all of the defined discovery addresses.

    I have never tried using only static addresses with ESXi, so I can't really speak to why you get the additional connections - unless it still performs discovery, or the old entries were still there after you changed to static.
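    As a hedged way to compare what the host and the array each report, the session and path listings below can be checked against the screenshots (vmhba38 is an example adapter name):

        esxcli iscsi session list -A vmhba38   # one entry per iSCSI session from this adapter
        esxcli storage nmp path list           # paths per device as seen by the pathing plugin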

    Kind regards

    Don

  • The iSCSI initiator problem

    Another beginner problem.  I have learned a lot about my new virtual world and the Dell EqualLogic box.  But I still have a fundamental problem that must be an easy fix, or something I do not understand from a technical perspective.  I want to be able to connect to a snapshot and recover a file if necessary.

    I brought the snapshot online and I could even get it into vSphere so I could see it there.  But for the life of me I can't get any iSCSI initiator to see the snapshot at all.  I have a few guesses, but I can't seem to pin down my problem.

    I can't get the initiator to see any of the targets... it just tells me the connection fails.  I think I know why, but my networking skills are not real strong.  When we had an outside company come in and set up the environment, they put everything on 192.168.100.x or 192.168.101.x.  Since I arrived here (more than 15 years ago), our network has always used 129.1.x.x IP addresses.  I'm a programmer for the most part, and I have a networking guy who is solid at networking, but I don't know if he can answer my question either.

    In any case, when I try to add my target portal to the initiator, what IP address or name am I supposed to point it at?  I can't ping the 192 addresses because I'm on a 129.1.x.x IP, correct?  So is that why I have no luck finding snapshots using the initiators?  I know the vCenter server has a 129 address and the SAN itself has a 129 address, and I tried connecting to those, but it did not work.

    I think I gave the snapshot permission to be seen by several initiators.

    I think it's my lack of networking knowledge, but maybe not.  Any help would be greatly appreciated.

    Thank you

    Ryan Kotowski

    The snapshot will not be a set of files that you can browse. It will be a copy of exactly what the LUN looked like when you took the snapshot; you need to mount it in vSphere just like you would mount the regular LUN. Then you will see all the VMX and VMDK files for the virtual machines that you snapshotted. You can then add one of them to the vSphere inventory and power it on to recover its files. Don't forget that you'll want to disable all networking on that virtual machine so that it does not conflict with the actual VM on the network.
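    A minimal sketch of that last step from the ESXi shell, once the snapshot datastore is mounted (the datastore path and VM name below are placeholders):

        vim-cmd solo/registervm /vmfs/volumes/snap-datastore/MyVM/MyVM.vmx
        # then power the VM on from the client with its vNICs disconnected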

  • Port-groups, vSphere 5 and Jumbo (iSCSI) frames

    We will implement a UCS system with EMC iSCSI storage. Since this is my first time, I'm a little unsure about the design, although I have picked up a lot of knowledge from reading this forum and elsewhere.

    We will use the 1000V.

    1. Is it OK to use only one uplink port group carrying the following traffic types: mgmt, vMotion, iSCSI, VM network, external network?

    My confusion here is about jumbo frames. Shouldn't we separate that traffic onto its own connection? In this design, would all frames be using jumbo frames (or is this set per port group)?

    I read something about using a Class of Service for the jumbo frames. Maybe that's the idea here.

    2. I read in a thread not to include mgmt and vMotion in the 1000V, and to put them on a vSwitch (VSS) instead. Is this correct?

    In this case, the uplink design would be:

    1: Mgmt + vMotion (2 vNIC, VSS)

    2: iSCSi (2 vNIC, 1000v)

    3 data VM, external traffic (2 vNIC, 1000v)

    All NICs set as active, with Virtual Port ID teaming.

    Answers inline.

    Kind regards

    Robert

    Atle Dale wrote:

    I have 2 follow-up questions:

    1. What is the reason I cannot use a 1000V uplink profile for vMotion and management? Is it just for simplicity that people do it that way? Or can I do it if I want? What do you do?

    [Robert] There is no reason.  Many customers run all their virtual networking on the 1000v.  This way they don't need vmware admins to manage virtual switches - keeps it all in the hands of the networking team where it belongs.  Management Port profiles should be set as "system vlans" to ensure access to manage your hosts is always forwarding.  With the 1000v you can also leverage CBWFQ which can auto-classify traffic types such as "Management", "Vmotion", "1000v Control", "IP Storage" etc.

    2. Shouldn't I use MTU size 9216?

    [Robert] UCS supports up to 9000, plus assumed overhead.  Depending on the switch you'll want to set it at either 9000 or 9216 (whichever it supports).

    3. How do I do this step: "

    Ensure the switch north of the UCS Interconnects are marking the iSCSI target return traffic with the same CoS marking as UCS has configured for jumbo MTU.  You can use one of the other available classes on UCS for this - Bronze, Silver, Gold, Platinum."

    Does the Cisco switch also use the same terms "Bronze", "Silver", "Gold" or "Platinum" for the classes? Should I configure the trunk with the same CoS values?

    [Robert] Plat, Gold, Silver and Bronze are user-friendly names used in UCS Classes of Service to represent a definable CoS value between 0 and 7 (where 0 is the lowest value and 6 is the highest usable value). CoS 7 is reserved for internal traffic. A CoS value of "any" equals best effort.  Weight values range from 1 to 10. The bandwidth percentage can be determined by adding the channel weights for all channels, then dividing the channel weight you wish to calculate the percentage for by the sum of all weights.

    Example.  You have UCS and an upstream N5K with your iSCSI target directly connected to an N5K interface. If your vNICs were assigned a QoS policy using "Silver" (which has a default CoS 2 value), then you would want to do the same upstream by a) configuring the N5K system MTU to 9216 and b) tagging all traffic from the iSCSI array target's interface with CoS 2.  The specifics for configuring the switch are specific to the model and SW version.  N5K is different from N7K and different from IOS.  Configuring jumbo frames and CoS marking is pretty well documented all over.

    Once UCS receives the traffic with the appropriate CoS marking it will honor the QoS and dump the traffic back into the Silver queue. This is the "best" way to configure it but I find most people just end up changing the "Best Effort" class to 9000 MTU for simplicity sake - which doesn't require any upstream tinkering with CoS marking.  Just have to enable Jumbo MTU support upstream.

    4. Concerning the N1k: Jason Nash has said to include vMotion in the system VLANs. You did not recommend this in previous threads. Why?

    [Robert] You have to understand what a system vlan is first.  I've tirelessly explained this in various posts.  System VLANs allow an interface to always be forwarding.  You can't shut down a system vlan interface.  Also, when a VEM is rebooted, a system vlan interface will be FWDing before the VEM attaches to the VSM to securely retrieve its programming.  Think of the chicken & egg scenario.  You have to be able to FWD some traffic in order to reach the VSM in the first place - so we allow a very small subset of interfaces to FWD before the VSM sends the VEM its programming - Management, IP Storage and Control/Packet only.  All other non-system VLANs are rightfully BLKing until the VSM passes the VEM its policy.  This secures interfaces from sending traffic in the event any port profiles or policies have changed since the last reboot or module insertion.  Now keeping all this in mind, can you tell me an instance where you've just rebooted your ESX host and need the vMotion interface forwarding traffic BEFORE communicating with the VSM?  If the VSM was not reachable (or both VSMs were down), the VM's virtual interface would not even be able to be created on the receiving VEM.  Any virtual ports moved or created require VSM & VEM communication.  So no, the vMotion interface vlans do NOT need to be set as system VLANs.  There's also a max of 16 port profiles that can have system vlans defined, so why chew up one unnecessarily?

    5. Do I have to set spanning-tree commands and to enable global BPDU Filter/Guard on both the 1000V side and the uplink switch?

    [Robert] The VSM doesn't participate in STP so it will never send BPDU's.  However, since VMs can act like bridges & routers these days, we advise adding two commands to your upstream VEM uplinks - PortFast and BPDUFilter.  PortFast so the interface goes to FWD faster (since there's no STP on the VSM anyway) and BPDUFilter to ignore any received BPDU's from VMs.  I prefer to ignore them rather than using BPDU Guard - which will shut down the interface if BPDU's are received.

    Thanks,

    Atle, Norway

    Edit:

    Do you have some recommendations on the weighting of the CoS values?

    [Robert] I don't personally.  Other customers can chime in with their suggestions, but each environment is different.  VMotion is very bursty so I wouldn't set that too high.  IP storage is critical so I would bump that up a bit.  The rest is up to you.  See how it works, check your QoS & CoS verification commands to monitor, and adjust your settings as required.

    E.g:

    IP storage: 35

    Vmotion: 35

    Vmdata: 30

    and I can then assign the management VMkernels to the Vmdata CoS.

    Message was edited by: Atle Dale
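    As referenced in the MTU discussion above, a minimal sketch of the ESXi-side jumbo frame settings that must match the UCS/upstream MTU (vSwitch1 and vmk1 are example names; on the 1000v the MTU is typically set in the uplink port profile instead):

        esxcli network vswitch standard set -v vSwitch1 -m 9000
        esxcli network ip interface set -i vmk1 -m 9000
        vmkping -d -s 8972 <iscsi-target-ip>   # verify end to end (8972 = 9000 minus IP/ICMP headers)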

  • ESXi 6.0 U1 to U2 upgrade, iSCSI questions

    Hello

    First post, so I'll try and summarize my thoughts and what I did with troubleshooting.  Please let me know if I left anything out or more information is required.

    I use the free ESXi 6.0 on a Supermicro X10SLL-F mATX board with an Intel Xeon E3-1231v3 CPU and 32 GB of DDR3 ECC UDIMM.  I use a 4 GB USB flash drive for boot, a 75 GB 2.5" SATA disk for local storage (i.e. /scratch) and part of a 120 GB SSD for host cache, as well as local storage.  The main datastore for virtual machines is located on an iSCSI target (currently running FreeNAS 9.3.x).  This setup has worked great since installing ESXi 6.0 (June 2015), then 6.0 U1 (September 2015), and I recently made the leap to 6.0 U2.  I thought everything would be business as usual for the upgrade...

    However, after upgrading to 6.0 U2, none of the iSCSI volumes are "seen" by ESXi - vmhba38:C0:T0 & vmhba38:C0:T1 - although I can confirm that I can ping the iSCSI target, and that the NIC (vmnic1), vSwitch and VMware iSCSI software adapter are all loaded.  I did not make any changes to the ESXi host or the iSCSI target before the upgrade to 6.0 U2.  It was all working beforehand.

    I then went digging in the logs; vmkernel.log and vobd.log both report that it is unable to contact the storage due to a network issue.  I also ran through some standard network troubleshooting (see VMware KB 1008083); everything passes, with the exception of the jumbo frame test.

    [root@vmware6:~] tail -f /var/log/vmkernel.log | grep iscsi

    [ ... ]

    2016-03-31T05:05:48.217Z cpu0:33248)WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba38:CH:0 T:0 CN:0: iSCSI connection is being marked "OFFLINE" (Event:5)

    2016-03-31T05:05:48.217Z cpu0:33248)WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess [ISID: TARGET: (null) TPGT: 0 TSIH: 0]

    2016-03-31T05:05:48.217Z cpu0:33248)WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn [CID: 0 L: 10.xxx.yyy.195:41620 R: 10.xxx.yyy.109:3260]

    2016-03-31T05:05:48.218Z cpu4:33248)WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba38:CH:0 T:1 CN:0: iSCSI connection is being marked "OFFLINE" (Event:5)

    2016-03-31T05:05:48.218Z cpu4:33248)WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess [ISID: TARGET: (null) TPGT: 0 TSIH: 0]

    2016-03-31T05:05:48.218Z cpu4:33248)WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn [CID: 0 L: 10.xxx.yyy.195:32715 R: 10.xxx.yyy.109:3260]

    [root@vmware6:~] tail -f /var/log/vobd.log

    [ ... ]

    2016-03-31T05:05:48.217Z: [iscsiCorrelator] 1622023006us: [vob.iscsi.connection.stopped] iScsi connection 0 stopped for vmhba38:C0:T0

    2016-03-31T05:05:48.217Z: [iscsiCorrelator] 1622023183us: [vob.iscsi.target.connect.error] vmhba38 @ vmk1 could not connect to iqn.2005-10.org.freenas.ctl:vmware-iscsi because of a network connection failure.

    2016-03-31T05:05:48.217Z: [iscsiCorrelator] 1622002451us: [esx.problem.storage.iscsi.target.connect.error] Connection to iSCSI target iqn.2005-10.org.freenas.ctl:vmware-iscsi on vmhba38 @ vmk1 failed. The iSCSI initiator failed to establish a network connection to the target.

    2016-03-31T05:05:48.218Z: [iscsiCorrelator] 1622023640us: [vob.iscsi.connection.stopped] iScsi connection 0 stopped for vmhba38:C0:T1

    2016-03-31T05:05:48.218Z: [iscsiCorrelator] 1622023703us: [vob.iscsi.target.connect.error] vmhba38 @ vmk1 could not connect to

    [root@vmware6:~] ping 10.xxx.yyy.109

    PING 10.xxx.yyy.109 (10.xxx.yyy.109): 56 data bytes

    64 bytes from 10.xxx.yyy.109: icmp_seq=0 ttl=64 time=0.174 ms

    64 bytes from 10.xxx.yyy.109: icmp_seq=1 ttl=64 time=0.238 ms

    64 bytes from 10.xxx.yyy.109: icmp_seq=2 ttl=64 time=0.309 ms

    --- 10.xxx.yyy.109 ping statistics ---

    3 packets transmitted, 3 packets received, 0% packet loss

    round-trip min/avg/max = 0.174/0.240/0.309 ms

    [root@vmware6:~] vmkping 10.xxx.yyy.109

    PING 10.xxx.yyy.109 (10.xxx.yyy.109): 56 data bytes

    64 bytes from 10.xxx.yyy.109: icmp_seq=0 ttl=64 time=0.179 ms

    64 bytes from 10.xxx.yyy.109: icmp_seq=1 ttl=64 time=0.337 ms

    64 bytes from 10.xxx.yyy.109: icmp_seq=2 ttl=64 time=0.382 ms

    --- 10.xxx.yyy.109 ping statistics ---

    3 packets transmitted, 3 packets received, 0% packet loss

    round-trip min/avg/max = 0.179/0.299/0.382 ms

    [root@vmware6:~] nc -z 10.xxx.yyy.109 3260

    Connection to 10.xxx.yyy.109 3260 port [tcp/*] succeeded!

    [root@vmware6:~] vmkping -d -s 8972 10.xxx.yyy.109

    PING 10.xxx.yyy.109 (10.xxx.yyy.109): 8972 data bytes

    --- 10.xxx.yyy.109 ping statistics ---

    3 packets transmitted, 0 packets received, 100% packet loss

    I began looking at the NIC drivers, thinking maybe something got messed up during the upgrade; it would not be the first time I have seen problems with the out-of-box drivers supplied by VMware.  I checked the VMware HCL for IO devices; the physical NICs used in this host are Intel I217-LM (e1000e driver), Intel I210 (igb driver) and Intel 82574L (e1000e driver).  The VMware HCL lists the driver for the I217-LM & 82574L as version 2.5.4-6vmw, and for the I210 as 5.0.5.1.1-5vmw.  When I went to check, I noticed that it was using a different version of the e1000e driver (the I210 driver version was correct).
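    As a hedged aside, the driver and version actually bound to each NIC can also be checked directly (vmnic0 is an example name):

        esxcli network nic list                             # driver name per vmnic
        esxcli network nic get -n vmnic0 | grep -i version  # driver/firmware version details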

    [root@vmware6:~] esxcli software vib list | grep e1000e

    Name        Version                        Vendor   Acceptance Level   Install Date
    ----------  -----------------------------  -------  -----------------  ------------
    net-e1000e  3.2.2.1-1vmw.600.1.26.3380124  VMware   VMwareCertified    2016-03-31

    esxupdate.log seems to indicate that the e1000e 2.5.4-6vmw VMware driver should have been loaded...

    [root@esxi6-lab: ~] grep e1000e /var/log/esxupdate.log

    [ ... ]

    # ESXi 6.0 U1 upgrade

    2015-09-29T22:20:29Z esxupdate: BootBankInstaller.pyc: DEBUG: about to write payload "net-e100" of VIB VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585 to "/tmp/stagebootbank".

    [ … ]

    # ESXi 6.0 U2 upgrade

    2016-03-31T03:47:24Z esxupdate: BootBankInstaller.pyc: DEBUG: about to write payload "net-e100" of VIB VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585 to "/tmp/stagebootbank"

    The e1000e 3.2.2.1-1vmw driver is the recommended driver for ESXi 5.5 U3, not for ESXi 6.0 U2!  As these drivers are listed as "inbox", I don't know if there is an easy way to download a vendor-supplied driver bundle (VIB) for it, or even if one exists.  I found an article online about manually updating drivers on an ESXi host using esxcli, so I tried checking for and installing a newer e1000e driver.

    [root@vmware6:~] esxcli software vib update -n net-e1000e -d https://hostupdate.VMware.com/software/VUM/production/main/VMW-Depot-index.XML

    Installation Result

    Message: The update completed successfully, but the system must be restarted for the changes to be effective.

    Restart required: true

    VIBs Installed: VMware_bootbank_net-e1000e_3.2.2.1-2vmw.550.3.78.3248547

    VIBs Removed: VMware_bootbank_net-e1000e_3.2.2.1-1vmw.600.1.26.3380124

    VIBs ignored:

    As you can see, there was a newer version and it was installed.  However, after restarting, it still did not fix the issue.  I even went so far as to force a reset of the CHAP password used to authenticate and updated it on both sides (iSCSI initiator and target).  At this point, I wonder if I should somehow downgrade to the 2.5.4-6vmw driver (and how?), or if there is another issue at play here.  Am I going down a rabbit hole with my idea that this is a NIC driver problem?

    Thanks in advance.

    --G

    esxi6-u2-iscsi.jpg

    I found the solution: downgrade to the ESXi 6.0 U1 Intel e1000e 2.5.4-6vmw driver.  For the file name, see VMware KB 2124715.

    Steps to follow:

    1. Sign in to https://my.vmware.com/group/vmware/patch#search
    2. Select 'ESXi (Embedded and Installable)' and '6.0.0'.  Click the Search button.
    3. Look for the release named update-from-esxi6.0-6.0_update01 (released 10/09/2015). Place a check next to it and then click the Download button to the right.
    4. Save the file somewhere locally. Open the ZIP archive once downloaded.
    5. Navigate to the "vib20" directory. Extract the net-e1000e folder.  Inside is the ESXi 6.0 U1 Intel e1000e driver: VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585.vib
    6. Transfer this to your ESXi server (via SCP or even the datastore file browser within the vSphere Client). Save it somewhere you will remember (i.e. /tmp).
    7. Connect via SSH to the ESXi 6.0 host.
    8. Issue the command to install it: esxcli software vib install -v /tmp/VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585.vib
    9. Restart the ESXi host.  Once it comes back online your iSCSI datastore(s)/volumes should return.  You can check by issuing df -h or esxcli storage core device list at the CLI prompt.  The vSphere Client also works.

    It took some time to find the correct file(s), steps and commands to use.  I hope someone else can benefit from this.  I love what VMware provides for virtualization, but lately it seems their QA department has been out to lunch.

    I still do not have a definitive explanation as to why the ESXi 5.5 U3 Intel e1000e driver ended up in use after upgrading ESXi 6.0 U1 to U2.  Maybe someone with more insight, or someone from VMware Support, can explain.

    & out

    --G

  • Can I use a custom TCP/IP stack with iSCSI?

    We have a group of 3 identical hosts, each connecting to the SAN network through 2 NICs using 2 vmks. Each vmk resides on a separate broadcast domain, so they belong to separate vSwitches. No port binding is used, in accordance with the vSphere guide.

    So, as the title suggests, should I use a separate custom TCP/IP stack for the iSCSI vmks, or leave them on the default stack (even though there is no option to tag the vmks as IP storage)?

    As long as your iSCSI vmkernel adapters and their respective iSCSI targets are in the same layer-3 subnet, i.e. you are not routing iSCSI traffic (a vmkX<->TargetSPX, vmkY<->TargetSPY separate-subnet iSCSI design does not apply here, as the traffic is never routed between networks), or you do not want to configure custom static routes for iSCSI targets in other subnets, then there is no point in using a separate TCP/IP stack.

    The main point of the new ESXi 6 functionality with several TCP/IP stacks is to have a completely separate routing table per stack. Most admins were not able to handle very basic layer-3 static routing with dedicated routes per subnet, so they assigned default gateways on several vmkernel NICs and then wondered why things broke or never worked.

    It also gives better control over which interfaces send data when you have several layer-3 paths or subnets to communicate with, but that is largely irrelevant here and already an integral part of a well-designed iSCSI network.
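    If you did decide to try a custom stack anyway, a minimal sketch on ESXi 6.x looks like this (the stack, port group and vmk names are examples; note the vmk has to be created on the stack, it cannot be moved to one later):

        esxcli network ip netstack add -N iscsiStack
        esxcli network ip interface add -i vmk3 -p iSCSI-PG -N iscsiStack
        esxcli network ip interface ipv4 set -i vmk3 -t static -I 10.0.0.21 -N 255.255.255.0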

  • Cannot add the iSCSI storage unless it is formatted

    Hello

    So I have 2 ESX 4.0 boxes and I am migrating all the machines to a new ESXi 5.5 box.  The 5.5 box cannot see one of the iSCSI targets that the other 2 can.  The only way I can add it is by using the vSphere Client's Add Storage wizard, where I can see it, but I'm not able to add it without formatting.  Of course, this is not desirable.  Why can't this storage be used without formatting?  All the iSCSI targets are on the same machine and they were all created the same way as far as I can tell, so why is this one unable to be added without a format?

    Well here is the solution to my problem:

    http://www.experts-exchange.com/Software/VMware/Q_27032673.html

    I had to mount the storage using the CLI instead of the vSphere client.
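    For reference, a hedged sketch of that CLI mount for a volume detected as a snapshot/unresolved VMFS copy (the UUID or label is whatever "esxcfg-volume -l" reports):

        esxcfg-volume -l                     # list volumes detected as snapshots/copies
        esxcfg-volume -M <VMFS-UUID|label>   # persistently mount it, keeping the existing signature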

  • iSCSI LUN on a Synology changed - how to reconnect?

    Hello

    We have a Synology RackStation hosting two iSCSI LUNs. These LUNs are mounted as datastores, storing multiple virtual machines.

    In the datastore details, I see a path like /vmfs/volumes/5283b99a-65... for each LUN.

    Now I need to replace the Synology unit. I can move all the hard disks from one Synology to another without problems, the unit starts, and I can reach it under the same IP address. The LUNs also come up. But, probably because the MAC address of the Synology has changed, I can no longer access the datastores. In vSphere they are marked as "dead". When I swap back to the old Synology device, everything is back to normal.

    How can I make the change from one Synology to another? Is there a way to specify a new path to the LUNs and then reconnect them with their 'old' content intact?

    Thanks in advance,

    Björn

    Hello

    On the new Synology, are you presenting the volumes to VMware? If nothing has changed (discovery address/IPs/access), then you will likely need to resignature the datastores:

    Take a look at these two articles. They explain how a datastore signature is formed and potentially why you cannot connect to the datastores when you change units.

    VSphere Documentation Centre

    VSphere Documentation Centre
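    A hedged sketch of the resignaturing step mentioned above, run on an ESXi host after a rescan (the label is whatever the list command reports):

        esxcli storage core adapter rescan --all
        esxcli storage vmfs snapshot list                    # shows unresolved VMFS copies
        esxcli storage vmfs snapshot resignature -l <label>  # or "mount -l <label>" to keep the old signature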

  • iSCSI treating new target LUNs as additional paths to existing target LUNs

    Hello

    I have an ESXi 5.5.0 host (build 1331820) that I am trying to present some new iSCSI LUNs to, but I have a problem with it that I am hoping someone can help me solve.

    The back-end iSCSI server is running CentOS 6.5 with tgtd providing the iSCSI targets.  There are four iSCSI targets in production, with another two newly created to be used here.  When I rescan the iSCSI environment after adding the two new targets, I do not see more than 4 disks in the Details pane of the iSCSI adapter under Configuration -> Storage Adapters.

    In addition, two of the LUNs stop working, and inspection shows that the new iSCSI targets are being seen as separate paths to the two broken LUNs.  That is:

    Existing LUNs:

    iqn.2014-06.storage0.jonheese.local:datastore0

    iqn.2014-06.storage1.jonheese.local:datastore1

    iqn.2014-06.storage2.jonheese.local:datastore2

    iqn.2014-06.storage3.jonheese.local:datastore3

    New LUNs:

    iqn.2014-06.storage4.jonheese.local:datastore4

    iqn.2014-06.storage5.jonheese.local:datastore5

    Of the "Managing paths" dialog for each found discs, I see that "datastore4" presents itself as an additional path to "datastore0" and "datastore5" presents itself as a path to "datastore1" - even if the target names are clearly different.

    So can someone tell me why the vSphere iSCSI client is treating separate iSCSI target LUNs as multiple paths to the same target?  I compared the tgtd configuration between the original 4 LUNs and the 2 new LUNs, and everything seems OK.  They are all backed by different back-end disks (not that vSphere should know/care about this), and I wrote zeros to the new LUNs' back-end disks to ensure that there is nothing odd on the disk confusing the iSCSI client.

    I can post any logs/config/information required.  Thanks in advance.

    Kind regards

    Jon Heese

    For anyone who runs into this same situation, my supposition above proved to be correct.  But in addition to renumbering the scsi_id of each of the LUNs, I also had to renumber the controller_tid field so the controllers on the second node would present themselves with unique IDs.  Here is an example of the config stanza I used for each LUN:

    # Backing store device for the LUN

    backing-store /dev/drbd5

    # SCSI identifier for the storage LUN

    scsi_id IET 00050001

    # SCSI controller identifier for the LUN

    controller_tid 5

    # iSCSI initiator address(es) allowed to connect

    initiator-address 192.168.24.0/24

    It was a tgtd configuration problem after all!  Thanks to all for the questions that led me to the right answer!

    Kind regards

    Jon Heese
