MPIO iSCSI error

I downloaded and installed vSphere 4 and found a tutorial on configuring MPIO. I ran into an error when I try to add network cards to the software iSCSI adapter (swiscsi).

esxcli swiscsi nic add -n vmk1 -d vmhba35

"Add Nic failed in IMA."

I followed the instructions at the following address, and it seems that someone else has run into this problem. We are not using a distributed switch either; it's just a standalone host without vCenter.

http://www.yellow-bricks.com/2009/03/18/iSCSI-Multipathing-with-esxcliexploring-the-next-version-of-ESX/

Anyone run into this and found a solution?

Thank you very much for any help.

Move the 'other' NIC to Unused instead of Standby. That should do it. You'll still get failover anyway, since you have more than one path.
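For reference, a minimal sketch of the corrected syntax, assuming the software iSCSI adapter really is vmhba35 and the vmk1 port group has already been set to a single active uplink with the other NIC moved to Unused:

# vSphere 4 CLI (vMA/RCLI or service console); the port group behind vmk1 must have
# exactly one active uplink, or the bind fails with "Add Nic failed in IMA."
esxcli swiscsi nic add -n vmk1 -d vmhba35

# Verify which vmknics are now bound to the software iSCSI adapter
esxcli swiscsi nic list -d vmhba35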

Tags: VMware

Similar Questions

  • MPIO - iSCSI - links / IOPS

    Am I correct in saying:

    iSCSI MPIO will not increase your IOPS for a given machine, unless your Ethernet connection is saturated?

    MPIO does not increase performance per se, but gives you a "bigger pipe" for several virtual machines to share?

    From my scavenging on the net I find there are TWO ways to enable MPIO.

    (1) Add a second pNIC (physical NIC) and assign it to your vSwitch. Create a second VMkernel port, override the load-balancing options and set one active NIC per vmk. Add the two vmks to the iSCSI HBA, and enable Round Robin on the path management for the datastore (see the sketch after this list). This shows that I have two paths.

    or

    (2) Add the second pNIC to the vSwitch and allow both pNICs to be used by the single vmk. On the storage system, create two targets that map to the same LUN. This shows two paths.
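    A minimal sketch of option (1) from the CLI, assuming ESX/ESXi 4.x, a software iSCSI adapter named vmhba33 and a placeholder device ID (both hypothetical here):

    # Bind both VMkernel ports (one active pNIC each) to the software iSCSI adapter
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33

    # List the paths, then switch the LUN to Round Robin so both links are used
    esxcfg-mpath -b
    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR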

    Does it matter which way MPIO is configured, as long as it shows several paths?

    Out of curiosity I configured ESXi using the first way to configure MPIO, then created a second target with the same LUN number on the storage. ESXi shows that I have 4 paths to storage. A path is a link, right? So 4 paths would be unnecessary, unless I had 4 links and very high demand?

    I read that it takes about 14,000 IOPS to saturate a gigabit link. Does this mean that, since my storage system cannot achieve that in a real-world scenario, MPIO is a waste of time?

    Thank you to everyone who takes the time to reply!

    You are confusing several things in your argument.

    Let's start with IOPS first. To perform an I/O, the mechanical components of a disk are involved, so the number of IOs a disk can execute is very limited.

    For this example let's assume that a disk can do 100 IOs (some do less, some do more) and that you have 10 disks at hand. This means that your storage hardware can do 1,000 IOPS. If you invest in a real storage array with intelligently implemented caching and controllers, you can get more out of your drives as long as your load is a good target for caching, but let's leave that option aside.

    How you tie your storage (FC, iSCSI, ...) to an ESX server changes nothing about the number of IOPS your disks can do.

    However, it influences the bandwidth you can achieve.

    Suppose that your storage consumer is a Windows NTFS machine. NTFS uses a typical 4 KB block size. That works out to a maximum of 4 KB x 1,000 = 4 MB/s of bandwidth, which can easily be carried on a 100 Mbit link, and you will only see that throughput if your consumer is doing purely sequential I/O.

    Now suppose that your ESX server has a VMFS volume formatted with a 1 MB block size. While fully loading your 10 disks, you could reach 1 MB x 1,000 = 1 GB/s.

    Again, this would only be possible if your VMFS did purely sequential writes, which it does not.

    When you run virtual machines, the VMs' operating systems will write in their own typical block sizes (i.e. 4 KB on Windows), so the underlying VMFS will buffer a bit in its own cache and access the disks in a shuffled pattern. Thus you should expect a lot less bandwidth to be used. In many cases a single gigabit link will not be saturated.

    MPIO is a function which can achieve:

    - Path redundancy

    - Additional bandwidth and IOPS in some cases

    Whether MPIO can also add to the number of IOPS depends on the type of controller you have on the storage side. If you have a single controller with different frontend ports, the IOPS gain should not be dramatic. With several controllers, individual caches and multiple paths to the disks, you can achieve additional IOPS if you get a good cache hit rate.

    How many different paths you can use with MPIO depends on what your storage can do. Is it an active-passive connection? Then you get redundancy but no increase in IOPS or bandwidth. Only an active-active array will increase the bandwidth and, possibly, the number of IOPS you can achieve.

    With ESX4 you can use up to 8 paths, if parallel controller access is something your storage array supports.

  • MPIO iSCSI config issues with multiple vmknics and multiple NICs...

    Hey guys... I am trying to wrap my head around this whole MPIO and iSCSI configuration. I have read all the documents out there (from Dell to VMware) and I'm still wrapping my head around a couple of things, and I hope that someone could take a few minutes and just explain it a bit more clearly for me.

    First of all: when you create a vSwitch and add vmknics to it, the configuration requires that each vmknic port group has only ONE active adapter, and you go and move the second card to Unused. Why do you need two cards when the second one is just going to be configured as Unused? Even if the first NIC fails, the secondary is set to Unused and won't fail over, so what is the point of configuring the other one?

    In the second place: when you create a vSwitch, add several vmknics and then bind them to the iSCSI initiator (I understand this and have done it in the lab), why do some manufacturers (Dell EqualLogic) say to create as many vmknics as you have NICs? Is it so you push as much traffic through each 1 Gb link as possible? Some walkthroughs show one vmknic per iSCSI connection; what is the best practice?

    Third and last: I read that you should have as many vSwitches as you have iSCSI networks. So if I have two separate iSCSI networks to my storage, I will need two separate vSwitches and then bind the vmk NICs on both vSwitches to my iSCSI initiator. Is that right?

    Thanks in advance for any more detailed explanation!

    Is this the document that you are referring to? http://attachments.wetpaintserv.us/Hb2187XrZtZHUBMwbBrD4A%3D%3D1015900

    I also confirmed that there is no 4 Gb limitation on LAG for the PowerConnect 5424. You can LAG 8 ports and get an 8 Gbps pipe.

    Bala

    Dell Inc.

  • ASM & iSCSI - disk has gone forever

    I installed CentOS 5.9 (don't ask...), my storage is an array of 10 SCSI drives connected via the iSCSI protocol, and I installed Grid Infrastructure 11.2.0.3 (software only).

    I was setting up ASM, but when I added the candidate disks it never finished, so I cancelled it. After several futile attempts to get it to work, I decided to reboot the server.

    I may have made it worse, because before restarting the server I had uninstalled ASMLib.

    The server wouldn't start due to errors related to the disks. After a bit of magic, I disabled the connection to the disks and was able to boot, but now I have problems when trying to connect to the disks:

    $ /etc/init.d/iscsid start

    $ chkconfig --add iscsi

    $ chkconfig iscsi on

    $ iscsiadm -m discovery -t sendtargets -p 10.9.254.2

    10.9.254.1:3260,1 iqn.1986-03.com.hp:storage.p2000g3.121514b3cc

    10.9.254.9:3260,2 iqn.1986-03.com.hp:storage.p2000g3.121514b3cc

    10.9.254.2:3260,3 iqn.1986-03.com.hp:storage.p2000g3.121514b3cc

    10.9.254.10:3260,4 iqn.1986-03.com.hp:storage.p2000g3.121514b3cc

    10.9.254.3:3260,5 iqn.1986-03.com.hp:storage.p2000g3.121514b3cc

    10.9.254.11:3260,6 iqn.1986-03.com.hp:storage.p2000g3.121514b3cc

    10.9.254.4:3260,7 iqn.1986-03.com.hp:storage.p2000g3.121514b3cc

    10.9.254.12:3260,8 iqn.1986-03.com.hp:storage.p2000g3.121514b3cc

    Then I try to connect to the portal 10.9.254.2 (I'm not sure how to determine which portal I should use, but it used to work before):

    $ iscsiadm -m node -T iqn.1986-03.com.hp:storage.p2000g3.121514b3cc -p 10.9.254.2 -l

    But it never finishes, so I have to cancel it. It looks like it's connected and working:

    $ iscsiadm -m session

    tcp: [8] 10.9.254.1:3260,1 iqn.1986-03.com.hp:storage.p2000g3.121514b3cc

    tcp: [9] 10.9.254.3:3260,5 iqn.1986-03.com.hp:storage.p2000g3.121514b3cc

    ...but it does not work:

    $ iscsiadm -m session -P 3

    (...)

    Attached SCSI devices:

    ************************

    Host Number: 5  State: running

    scsi5 Channel 00 Id 0 Lun: 0

    Attached scsi disk sda  State: unknown

    Furthermore, I can't see the disks! (sda, sdb, sdc, ...)

    $ ls -l /dev/s*

    lrwxrwxrwx 1 root root 3 Feb  6 17:05 /dev/scd0 -> sr0

    crw------- 1 root root 21, 0 Feb  6 17:05 /dev/sg0

    crw------- 1 root root 21, 1 Feb 26 12:30 /dev/sg1

    crw------- 1 root root 21, 10 Feb 26 14:31 /dev/sg10

    crw------- 1 root root 21, 2 Feb 26 12:32 /dev/sg2

    crw------- 1 root root 21, 3 Feb 26 13:02 /dev/sg3

    crw------- 1 root root 21, 4 Feb 26 13:02 /dev/sg4

    crw------- 1 root root 21, 5 Feb 26 13:31 /dev/sg5

    crw------- 1 root root 21, 6 Feb 26 13:33 /dev/sg6

    crw------- 1 root root 21, 7 Feb 26 14:01 /dev/sg7

    crw------- 1 root root 21, 8 Feb 26 14:01 /dev/sg8

    crw------- 1 root root 21, 9 Feb 26 14:30 /dev/sg9

    crw------- 1 root root 10, 231 Feb  6 17:05 /dev/snapshot

    brw-rw---- 1 root disk 11, 0 Feb  6 17:05 /dev/sr0

    lrwxrwxrwx 1 root root 15 Feb  6 17:05 /dev/stderr -> /proc/self/fd/2

    lrwxrwxrwx 1 root root 15 Feb  6 17:05 /dev/stdin -> /proc/self/fd/0

    lrwxrwxrwx 1 root root 15 Feb  6 17:05 /dev/stdout -> /proc/self/fd/1

    crw------- 1 root root 4, 0 Feb  6 14:04 /dev/systty

    The strange thing is that at the beginning, in /proc/partitions I can see only sda, but the remaining devices appear progressively... after an hour, I can see all of them:

    $ cat /proc/partitions

    major minor  #blocks  name

    104     0  292935982  cciss/c0d0

    104     1     104391  cciss/c0d0p1

    104     2  292824787  cciss/c0d0p2

    253     0  286720000  dm-0

    253     1    6094848  dm-1

      8     0   96679680  sda

      8    16   96679680  sdb

      8    32   96679680  sdc

      8    48   96679680  sdd

      8    64   96679680  sde

      8    80   96679680  sdf

      8    96   96679680  sdg

      8   112   96679680  sdh

      8   128   96679680  sdi

      8   144   96679680  sdj

      8   160   96679680  sdk

      8   176   96679680  sdl

    At the end of the day, I see 2 devices as 'unknown' and the others as 'running'. But I can't even fdisk them:

    $ fdisk -l /dev/sda

    Nothing is returned. Let's try to create a partition:

    $ fdisk /dev/sda

    Unable to open /dev/sda

    I can't access any of the sdX devices, because they are not in /dev.

    This is the log that I got when I tried to connect to the target using the iscsiadm command:

    Feb  6 15:16:55 bat-cvracdb02 kernel: scsi5: iSCSI Initiator over TCP/IP

    Feb  6 15:16:55 bat-cvracdb02 kernel: Vendor: HP  Model: P2000 G3 iSCSI  Rev: T250

    Feb  6 15:16:55 bat-cvracdb02 kernel: Type: Direct-Access  ANSI SCSI revision: 05

    Feb  6 15:16:55 bat-cvracdb02 kernel: SCSI device sda: 193359360 512-byte hdwr sectors (99000 MB)

    Feb  6 15:16:55 bat-cvracdb02 kernel: sda: Write Protect is off

    Feb  6 15:16:55 bat-cvracdb02 kernel: SCSI device sda: drive cache: write back

    Feb  6 15:16:55 bat-cvracdb02 kernel: SCSI device sda: 193359360 512-byte hdwr sectors (99000 MB)

    Feb  6 15:16:55 bat-cvracdb02 kernel: sda: Write Protect is off

    Feb  6 15:16:55 bat-cvracdb02 kernel: SCSI device sda: drive cache: write back

    Feb  6 15:16:56 bat-cvracdb02 iscsid: connection1:0 to [target: iqn.1986-03.com.hp:storage.p2000g3.121514b3cc, portal: 10.9.254.2,3260] through [iface: default] is operational now

    Feb  6 15:17:05 bat-cvracdb02 kernel: sda: <3>connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295476749, last ping 4295481749, now 4295486749

    Feb  6 15:17:05 bat-cvracdb02 kernel: connection1:0: detected conn error (1011)

    Feb  6 15:17:06 bat-cvracdb02 iscsid: Kernel reported iSCSI connection 1:0 error (1011) state (3)

    Feb  6 15:17:06 bat-cvracdb02 udevd-event[3628]: wait_for_sysfs: waiting for '/sys/devices/platform/host5/session1/target5:0:0/5:0:0:0/ioerr_cnt' failed

    Feb  6 15:17:09 bat-cvracdb02 iscsid: could not online LUN 0 err 2.

    Feb  6 15:17:09 bat-cvracdb02 iscsid: connection1:0 is operational after recovery (1 attempts)

    Feb  6 15:17:24 bat-cvracdb02 kernel: connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295495002, last ping 4295500002, now 4295505002

    Feb  6 15:17:24 bat-cvracdb02 kernel: connection1:0: detected conn error (1011)

    Feb  6 15:17:24 bat-cvracdb02 iscsid: Kernel reported iSCSI connection 1:0 error (1011) state (3)

    Feb  6 15:17:27 bat-cvracdb02 iscsid: could not online LUN 0 err 2.

    Feb  6 15:17:27 bat-cvracdb02 iscsid: connection1:0 is operational after recovery (1 attempts)

    Feb  6 15:17:37 bat-cvracdb02 kernel: connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295508253, last ping 4295513253, now 4295518253

    Feb  6 15:17:37 bat-cvracdb02 kernel: connection1:0: detected conn error (1011)

    Feb  6 15:17:38 bat-cvracdb02 iscsid: Kernel reported iSCSI connection 1:0 error (1011) state (3)

    Feb  6 15:17:40 bat-cvracdb02 iscsid: could not online LUN 0 err 2.

    Feb  6 15:17:40 bat-cvracdb02 iscsid: connection1:0 is operational after recovery (1 attempts)

    Feb  6 15:17:50 bat-cvracdb02 kernel: connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295521254, last ping 4295526254, now 4295531254

    Feb  6 15:17:50 bat-cvracdb02 kernel: connection1:0: detected conn error (1011)

    Feb  6 15:17:51 bat-cvracdb02 iscsid: Kernel reported iSCSI connection 1:0 error (1011) state (3)

    Feb  6 15:17:53 bat-cvracdb02 iscsid: could not online LUN 0 err 2.

    Feb  6 15:17:53 bat-cvracdb02 iscsid: connection1:0 is operational after recovery (1 attempts)

    Feb  6 15:18:03 bat-cvracdb02 kernel: connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295534255, last ping 4295539255, now 4295544255

    Feb  6 15:18:03 bat-cvracdb02 kernel: connection1:0: detected conn error (1011)

    Feb  6 15:18:04 bat-cvracdb02 iscsid: Kernel reported iSCSI connection 1:0 error (1011) state (3)

    Feb  6 15:18:06 bat-cvracdb02 iscsid: could not online LUN 0 err 2.

    Feb  6 15:18:06 bat-cvracdb02 iscsid: connection1:0 is operational after recovery (1 attempts)

    Feb  6 15:18:12 bat-cvracdb02 avahi-daemon[3321]: Invalid query packet.

    Feb  6 15:18:16 bat-cvracdb02 last message repeated 6 times

    Feb  6 15:18:16 bat-cvracdb02 kernel: connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295547259, last ping 4295552259, now 4295557259

    Feb  6 15:18:16 bat-cvracdb02 kernel: connection1:0: detected conn error (1011)

    Feb  6 15:18:16 bat-cvracdb02 kernel: sd 5:0:0:0: error code not supported

    Feb  6 15:18:16 bat-cvracdb02 kernel: sd 5:0:0:0: SCSI error: return code = 0x000e0000

    Feb  6 15:18:16 bat-cvracdb02 kernel: Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK,SUGGEST_OK

    Feb  6 15:18:16 bat-cvracdb02 kernel: Buffer I/O error on device sda, logical block 0

    Feb  6 15:18:17 bat-cvracdb02 iscsid: Kernel reported iSCSI connection 1:0 error (1011) state (3)

    Feb  6 15:18:19 bat-cvracdb02 iscsid: could not online LUN 0 err 2.

    Feb  6 15:18:19 bat-cvracdb02 iscsid: connection1:0 is operational after recovery (1 attempts)

    I also tried reinstalling ASMLib, the network tools and the iscsi tools.

    I'm tired of this problem, please share any ideas. No one could help me on the CentOS forum.

    Finally, I solved this.

    I had to use MPIO: when I accessed the disks through one controller node of the array everything was fine, but when I accessed the disks through the other node, the disks began to show problems.
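    For anyone hitting the same thing, a minimal sketch of enabling the native Linux multipath stack on CentOS 5 for a dual-controller array like this P2000; the blacklist entry for the local cciss disk is an assumption and should be adjusted to your hardware:

    # Install and enable device-mapper-multipath (CentOS/RHEL 5)
    yum install -y device-mapper-multipath

    # Minimal /etc/multipath.conf: friendly names, local cciss disks excluded
    cat > /etc/multipath.conf <<'EOF'
    defaults {
        user_friendly_names yes
    }
    blacklist {
        devnode "^cciss!c[0-9]d[0-9]*"
    }
    EOF

    chkconfig multipathd on
    service multipathd start

    # Each LUN should now show up once as /dev/mapper/mpathN with its paths listed beneath it
    multipath -ll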

  • Having dead path to the iSCSI target

    What I had:

    1 host: ESX3.5i

    2 host: Solaris 10 (SunOS 5.10 Sun Generic_138889-02 i86pc i386 i86pc)

    2 ZFS shares on it - 1 iSCSI share and 1 NFS share.

    What I did:

    Upgrade to ESX4i

    What I have so far:

    Everything works but iSCSI.

    It shows 'dead path' for me, but vmkping to the Sun host is OK.

    Tail of the log here:

    Jun 20 16:24:07 vmkernel: 0:01:22:15.659 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_TransportConnSendPdu: vmhba37:CH:0 T:0 CN:0: Failed to queue passthru request: no connection

    Jun 20 16:24:07 iscsid: send_pdu failed rc -22

    Jun 20 16:24:07 vmkernel: 0:01:22:15.659 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba37:CH:0 T:0 CN:0: iSCSI connection is marked "OFFLINE"

    Jun 20 16:24:07 iscsid: Kernel reported iSCSI connection 1:0 error (1006) state (3)

    Jun 20 16:24:07 vmkernel: 0:01:22:15.659 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess [ISID: 00023d000001 TARGET: iqn.1986-03.com.sun:02:3dd0df28-79b2-e399-fe69-e34161efb9f0 TPGT: 1 TSIH: 0]

    Jun 20 16:24:07 vmkernel: 0:01:22:15.659 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn [CID: 0 L: 10.0.0.2:51908 R: 10.0.0.1:3260]

    Jun 20 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StartConnection: vmhba37:CH:0 T:0 CN:0: iSCSI connection is marked "ONLINE"

    Jun 20 16:24:10 iscsid: connection1:0 is operational after recovery (2 attempts)

    Jun 20 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StartConnection: Sess [ISID: 00023d000001 TARGET: iqn.1986-03.com.sun:02:3dd0df28-79b2-e399-fe69-e34161efb9f0 TPGT: 1 TSIH: 0]

    Jun 20 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StartConnection: Conn [CID: 0 L: 10.0.0.2:55471 R: 10.0.0.1:3260]

    Jun 20 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_ConnSetupScsiResp: vmhba37:CH:0 T:0 CN:0: Invalid residual overflow in SCSI response: residual 0, expectedXferLen 0

    Jun 20 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_ConnSetupScsiResp: Sess [ISID: 00023d000001 TARGET: iqn.1986-03.com.sun:02:3dd0df28-79b2-e399-fe69-e34161efb9f0 TPGT: 1 TSIH: 0]

    Jun 20 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_ConnSetupScsiResp: Conn [CID: 0 L: 10.0.0.2:55471 R: 10.0.0.1:3260]

    Jun 20 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) iscsi_vmk: iscsivmk_ConnRxNotifyFailure: vmhba37:CH:0 T:0 CN:0: Connection rx failure notification: invalid residual. State = online

    Jun 20 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) iscsi_vmk: iscsivmk_ConnRxNotifyFailure: Sess [ISID: 00023d000001 TARGET: iqn.1986-03.com.sun:02:3dd0df28-79b2-e399-fe69-e34161efb9f0 TPGT: 1 TSIH: 0]

    Jun 20 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) iscsi_vmk: iscsivmk_ConnRxNotifyFailure: Conn [CID: 0 L: 10.0.0.2:55471 R: 10.0.0.1:3260]

    Jun 20 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba37:CH:0 T:0 CN:0: Processing CLEANUP event

    Jun 20 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess [ISID: 00023d000001 TARGET: iqn.1986-03.com.sun:02:3dd0df28-79b2-e399-fe69-e34161efb9f0 TPGT: 1 TSIH: 0]

    Jun 20 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn [CID: 0 L: 10.0.0.2:55471 R: 10.0.0.1:3260]

    Jun 20 16:24:10 vmkernel: 0:01:22:18.789 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_TransportConnSendPdu: vmhba37:CH:0 T:0 CN:0: Failed to queue passthru request: no connection

    Jun 20 16:24:10 iscsid: send_pdu failed rc -22

    Jun 20 16:24:10 vmkernel: 0:01:22:18.789 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba37:CH:0 T:0 CN:0: iSCSI connection is marked "OFFLINE"

    Jun 20 16:24:10 iscsid: Kernel reported iSCSI connection 1:0 error (1006) state (3)

    Jun 20 16:24:10 vmkernel: 0:01:22:18.789 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess [ISID: 00023d000001 TARGET: iqn.1986-03.com.sun:02:3dd0df28-79b2-e399-fe69-e34161efb9f0 TPGT: 1 TSIH: 0]

    Jun 20 16:24:10 vmkernel: 0:01:22:18.789 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn [CID: 0 L: 10.0.0.2:55471 R: 10.0.0.1:3260]

    and so on.

    What did I do wrong, and how can I remedy this situation?

    Out of curiosity, I went ahead and tried the Solaris target myself. It seems that there is a problem in the Solaris target before Solaris 10 update 7; I tried u6 and I see the issue, and it is clearly a problem with the target.

    ...

    iSCSI (SCSI command)

    Opcode: SCSI Command (0x01)

    .0.. .... = I: Queued delivery

    Flags: 0x81

    1... .... = F: Final PDU in sequence

    .0.. .... = R: No data will be read from target

    ..0. .... = W: No data will be written to target

    .... .001 = Attr: Simple (0x01)

    TotalAHSLength: 0x00

    DataSegmentLength: 0x00000000

    LUN: 0000000000000000

    InitiatorTaskTag: 0xad010000

    ExpectedDataTransferLength: 0x00000000

    CmdSN: 0x000001ab

    ExpStatSN: 0x000001ad

    SCSI CDB Test Unit Ready

    LUN: 0x0000

    Command Set: Direct Access Device (0x00) (using default commandset)

    Opcode: Test Unit Ready (0x00)

    Vendor unique = 0, NACA = 0, Link = 0

    ...

    iSCSI (SCSI response)

    Opcode: SCSI Response (0x21)

    Flags: 0x82

    ...0 .... = o: No bidirectional read residual overflow

    .... 0... = u: No bidirectional read residual underflow

    .... .0.. = O: No residual overflow occurred

    .... ..1. = U: Residual underflow occurred  <<<<<=

    Response: Command completed at target (0x00)

    Status: Good (0x00)

    TotalAHSLength: 0x00

    DataSegmentLength: 0x00000000

    InitiatorTaskTag: 0xad010000

    StatSN: 0x000001ad

    ExpCmdSN: 0x000001ac

    MaxCmdSN: 0x000001ea

    ExpDataSN: 0x00000000

    BidiReadResidualCount: 0x00000000

    ResidualCount: 0x00000000

    Request in: 10

    Time from request: 0.001020000 seconds

    SCSI response (Test Unit Ready)

    LUN: 0x0000

    Command Set: Direct Access Device (0x00) (using default commandset)

    SBC Opcode: Test Unit Ready (0x00)

    Request in: 10

    Time from request: 0.001020000 seconds

    Status: Good (0x00)

    ...

    14:02:12:42.569 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_ConnSetupScsiResp: vmhba35:CH:0 T:2 CN:0: Invalid residual overflow in SCSI response: residual 0, expectedXferLen 0

    14:02:12:42.584 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_ConnSetupScsiResp: Sess [ISID: 00023d000001 TARGET: iqn.1986-03.com.sun:02:fa2cdf18-2141-e633-b731-f89f47ddd09f.test0 TPGT: 1 TSIH: 0]

    14:02:12:42.602 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_ConnSetupScsiResp: Conn [CID: 0 L: 10.115.153.212:59691 R: 10.115.155.96:3260]

    14:02:12:42.614 cpu2:7026) iscsi_vmk: iscsivmk_ConnRxNotifyFailure: vmhba35:CH:0 T:2 CN:0: Connection rx failure notification: invalid residual. State = online

    14:02:12:42.628 cpu2:7026) iscsi_vmk: iscsivmk_ConnRxNotifyFailure: Sess [ISID: 00023d000001 TARGET: iqn.1986-03.com.sun:02:fa2cdf18-2141-e633-b731-f89f47ddd09f.test0 TPGT: 1 TSIH: 0]

    14:02:12:42.644 cpu2:7026) iscsi_vmk: iscsivmk_ConnRxNotifyFailure: Conn [CID: 0 L: 10.115.153.212:59691 R: 10.115.155.96:3260]

    14:02:12:42.655 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba35:CH:0 T:2 CN:0: Processing CLEANUP event

    14:02:12:42.666 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess [ISID: 00023d000001 TARGET: iqn.1986-03.com.sun:02:fa2cdf18-2141-e633-b731-f89f47ddd09f.test0 TPGT: 1 TSIH: 0]

    14:02:12:42.683 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn [CID: 0 L: 10.115.153.212:59691 R: 10.115.155.96:3260]

    14:02:12:42.731 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_TransportConnSendPdu: vmhba35:CH:0 T:2 CN:0: Failed to queue passthru request: no connection

    14:02:12:42.732 cpu3:4119) vmw_psp_fixed: psp_fixedSelectPathToActivateInt: "Unregistered Device" has no path to use (APD).

    14:02:12:42.745 cpu2:7026) iscsi_vmk: iscsivmk_SessionHandleLoggedInState: vmhba35:CH:0 T:2 CN:-1: Session state changed from "logged in" to "in progress"

    14:02:12:42.756 cpu3:4119) vmw_psp_fixed: psp_fixedSelectPathToActivateInt: "Unregistered Device" has no path to use (APD).

    14:02:12:42.770 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba35:CH:0 T:2 CN:0: iSCSI connection is marked as "offline"

    14:02:12:42.782 cpu3:4119) NMP: nmp_DeviceUpdatePathStates: The PSP did not select a path to activate for the NMP device.

    14:02:12:42.794 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess [ISID: 00023d000001 TARGET: iqn.1986-03.com.sun:02:fa2cdf18-2141-e633-b731-f89f47ddd09f.test0 TPGT: 1 TSIH: 0]

    14:02:12:42.823 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn [CID: 0 L: 10.115.153.212:59691 R: 10.115.155.96:3260]

    In the response, the 'U' bit must not be set here. Solaris 10 before update 7 seems to have this problem.

  • Iomega ix12 array

    Hello everyone,

    I have to put together a project that does not have much budget, and I had thought of using this unit as the main storage, mainly for the machines it will host: an ISA server, another ISA acting as a VPN manager, a web management application, and then an Axapta test server.

    Do you have experience with this SAN? Do you think it could go wrong?

    Thanks for your opinions.

    VCP 410

    Hello again,

    With that storage I would carve out at least 3 volumes of 1 TB each; it also depends on the maximum VMDK size you plan to create... The usual recommendation is that datastores stay between 600-800 GB, but this is very subjective, since nowadays, with the volume of data being moved, datastores bigger than 1 TB are nothing unusual.

    In any case, splitting the storage into several volumes will give you better performance and more safety... You can opt for a RAID 10 of all the disks and then carve volumes out of it, or instead create 3 RAID 10 groups and then assign the full size of each one to a datastore.

    It supports MPIO over both NFS and iSCSI; in fact the document shows an example with NFS, although they say the ix2 and ix4 only have a single network interface, which makes that setup impossible there. From my point of view I would opt for iSCSI, since VMware simply takes care of the formatting and you also get features such as RDM that are not available over NFS.

    Regards.

    -

  • Can't access iSCSI tab: "error in the evaluation table display iscsilist IndexSizeError"

    I have been using my ReadyNAS for years, and today, after having extended a target (in volume), I can't access the Volumes -> iSCSI tab anymore.

    SOS!

    I really need help.

    The error I get in FF:

    "Error in the evaluation table display iscsilist IndexSizeError: Index or size is negative or greater than the quantity allowed.

    It's a bit different in Chrome, but it won't let me copy it.

    Error in the evaluation table display iscsilist IndexSizeError: cannot set property 'maxLength' on "HTMLInputElement": the provided value is (-1) is negative.

    Any suggestions please?

    Thank you!

    It was super useful:

    https://community.NETGEAR.com/T5/using-your-ReadyNAS/ReadyNAS-2100-error-encountered-uploading-updat...

    I downloaded the intermediate update as a file and updated the firmware with it.

    And now I can access the iSCSI tab again.

    MDGM, thanks a lot!

    I'm a happier person now!

  • I want to use MPIO with the MS iSCSI initiator. How do I get/install MPIO support and MSDSM on Windows 7?

    How do I enable MPIO for the MS iSCSI Initiator and MS DSM on Windows 7?

    Hi Parag Blangy,

    Thank you for visiting Microsoft Answers.

    As this problem is related to MPIO with the MS iSCSI initiator, it will be better suited to the TechNet community.

    Please visit the link below to find a community that will provide the best support.

    http://social.technet.Microsoft.com/forums/en-us/category/windowsvistaitpro

    Installation and configuration of Microsoft iSCSI Initiator:

    http://TechNet.Microsoft.com/en-us/library/ee338480(WS.10).aspx

    Microsoft iSCSI Software Initiator Version 2.X users guide:

    http://download.microsoft.com/download/A/E/9/AE91DEA1-66D9-417C-ADE4-92D824B871AF/uGuide.doc.

    Microsoft Multipath i/o: frequently asked Questions:

    http://www.Microsoft.com/windowsserver2003/technologies/storage/MPIO/FAQ.mspx

    Kind regards
    Amal-Microsoft Support.
    Visit our Microsoft answers feedback Forum and let us know what you think.

  • Mapping an iSCSI session to an MPIO path ID

    Hello

    I'm playing with iSCSI and MPIO. I get the iSCSI connection information from the "Get-IscsiConnection" cmdlet; it gives the target portals to which the initiator is connected. Then I have the "mpclaim -v" command, which gives me the current state of the paths. In my case, I have one active/optimized path and other standby paths. This information about MPIO path states is reported per path ID. I want a way to find out which connection/target portal a given path ID corresponds to. In the GUI, the MPIO tab of the iSCSI initiator window has this information. Is there a way to get this info through PowerShell?

    Reference for the "mpclaim -v" output:

    MPIO Storage Snapshot on Tuesday, 05 May 2009, at 14:51:45.023
    Registered DSMs: 1
    ================
    +--------------------------------|-------------------|----|----|----|---|-----+
    |DSM Name                        |      Version      |PRP | RC | RI |PVP| PVE |
    |--------------------------------|-------------------|----|----|----|---|-----|
    |Microsoft DSM                   |006.0001.07100.0000|0020|0003|0001|030|False|
    +--------------------------------|-------------------|----|----|----|---|-----+
    
    Microsoft DSM
    =============
    MPIO Disk1: 02 Paths, Round Robin, ALUA Not Supported
            SN: 600D310010B00000000011
            Supported Load-Balancing Policy Settings: FOO RR RRWS LQD WP LB
    
        Path ID          State              SCSI Address      Weight
        ---------------------------------------------------------------------------
        0000000077030002 Active/Optimized   003|000|002|000   0
            Adapter: Microsoft iSCSI Initiator...              (B|D|F: 000|000|000)
            Controller: 46616B65436F6E74726F6C6C6572 (State: Active)
    
        0000000077030001 Active/Optimized   003|000|001|000   0
            Adapter: Microsoft iSCSI Initiator...              (B|D|F: 000|000|000)
            Controller: 46616B65436F6E74726F6C6C6572 (State: Active)
    
    MPIO Disk0: 01 Paths, Round Robin, ALUA Not Supported
            SN: 600EB37614EBCE8000000044
            Supported Load-Balancing Policy Settings: FOO RR RRWS LQD WP LB
    
        Path ID          State              SCSI Address      Weight
        ---------------------------------------------------------------------------
        0000000077030000 Active/Optimized   003|000|000|000   0
            Adapter: Microsoft iSCSI Initiator...              (B|D|F: 000|000|000)
            Controller: 46616B65436F6E74726F6C6C6572 (State: Active)
    
    Microsoft DSM-wide default load-balancing policy settings: Round Robin
    
    No target-level default load-balancing policy settings have been set.

    The reference for iSCSI connection and session info:
    
    PS C:\> Get-IscsiConnection
    
    ConnectionIdentifier : ffffe001e67f4020-29f
    InitiatorAddress     : 0.0.0.0
    InitiatorPortNumber  : 44996
    TargetAddress        : 10.120.34.12
    TargetPortNumber     : 3260
    PSComputerName       :

    ConnectionIdentifier : ffffe001e67f4020-2a0
    InitiatorAddress     : 0.0.0.0
    InitiatorPortNumber  : 46020
    TargetAddress        : 10.120.34.13
    TargetPortNumber     : 3260
    PSComputerName       :

    ConnectionIdentifier : ffffe001e67f4020-2a1
    InitiatorAddress     : 0.0.0.0
    InitiatorPortNumber  : 47044
    TargetAddress        : 10.120.34.14
    TargetPortNumber     : 3260
    PSComputerName       :

    ConnectionIdentifier : ffffe001e67f4020-2a2
    InitiatorAddress     : 0.0.0.0
    InitiatorPortNumber  : 46788
    TargetAddress        : 10.120.34.15
    TargetPortNumber     : 3260
    PSComputerName       :

    PS C:\>

    I basically want to know which target portal the path ID "0000000077030002" corresponds to.
    

    Hello

    Please post your question on the TechNet forums:

    Here is the link:

    https://social.technet.Microsoft.com/forums/Windows/en-us/home?category=w7itpro

    Kind regards

  • iSCSI initiator target error 0xEFFF0012

    After upgrading from Windows Server 2003 Enterprise x86 to Windows Server 2008 Enterprise x86, the iSCSI initiator gives me a target error; here is the warning raised in the event viewer:
    "iSCSI discovery via SendTargets failed with error code 0xefff0012 to target portal *192.168.130.1 0003260 Root\UNKNOWN\0000_0."

    Please help and advise.

    Regards

    Hello

    Because the problem is with Windows Server 2008, I suggest you post this question in the Windows Server forum:

    http://social.technet.Microsoft.com/forums/en-us/winservergen/threads

  • MD3220i MPIO error

    I have a virtual datacenter consisting of two Windows Server 2012 R2 Hyper-V hosts in a basic failover cluster, connected to an MD3220i.

    Everything looks good, except that on a single host I get the following error from MPIO in the Windows event logs:

    Event ID: 32

    The Dell MD Series device-specific module for Multi-Path no longer has a path to \Device\MPIODisk0.

    These events come in several times a day, and they occur for all 3 disks in the system.

    MDSM reports no errors

    All paths are available in iscsicpl.

    Apart from this error, everything looks good. I have about 10 VMs running on the SAN, and failover clustering and live migration run without problems. I am ready to put the system into production, but I would like to know why I'm getting this error before doing so, to be sure that there are no underlying problems. I didn't find anything specific to this error online, so I thought I would ask here. If anyone can point me in the right direction, that would be great.

    Thank you

    Hey, Madendever.

    I should have said this from the very beginning. It is best to uninstall MDSM completely (it might even be a good idea to go into the program directory and delete its files and leftovers from the previous installation, 'old school' style) and re-install the application. I have seen several cases where something doesn't click into place until that's done.

    Also, I would make sure that you use the most recent ISO for MDSM.

    I know it sounds like a nit-pick, but I have seen it matter too many times to dismiss it.

    If this still doesn't solve it, let me know.

  • iSCSI initiator error - cannot add target

    I am currently using a MD3000i in a test environment.

    The installation is very simple:

    The MD3000i is directly connected to a Windows Server 2003 machine (x64 Enterprise Edition).

    The management software is completely installed on this server. Only one controller is used to manage the PowerVault (partially managed).

    All default settings are used. I configured the system according to the iSCSI setup and configuration guide found on the installation CD.

    I get an error when configuring the Microsoft iSCSI initiator. As soon as I try to add a target portal (iSCSI initiator properties panel -> Discovery -> Target Portals tab), I get the following error message:

    "Invalid sendtargets response text was detected."

    After accepting this error, the portal is added to the list, but nothing appears in the Targets tab, so it is impossible to connect to it. The iSCSI host port on the array and the source IP address on the server respond to ping without problems.

    At this point, I am only able to connect to the MD3000i through out-of-band management and cannot access the storage in any way.

    Any help would be greatly appreciated

    Robert


  • iSCSI with MPIO

    I need to configure the virtual switch in VMware for an iSCSI array that does not support LACP or aggregation of any kind.

    There are 4 ports on the array that can be used.

    • How many cables should I use for my 7K array?
    • Should I use MPIO, since LACP is not an option?
    • How many IP addresses should I configure on my array for iSCSI?

    I never used LACP for iSCSI connectivity; I don't even know if it is supported by VMware! Anyway, basically you will configure an IP address for each target port and an IP address for each initiator VMkernel port on one or more ESXi hosts. You may need to double-check the array's documentation to see whether, for example, a VIP (virtual IP) has to be used for the target, and whether the IP addresses should be in the same or in different subnets. If the storage system supports the Round Robin path policy, then make sure you set it for the LUNs to take advantage of the available bandwidth.

    André
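    To make that concrete, a rough ESXi 5.x outline, assuming two uplinks, vmk1/vmk2 dedicated to iSCSI, a software iSCSI adapter named vmhba33 and a placeholder device ID (all hypothetical for this array):

    # Bind one VMkernel port per uplink to the software iSCSI adapter
    esxcli iscsi networkportal add -A vmhba33 -n vmk1
    esxcli iscsi networkportal add -A vmhba33 -n vmk2

    # After a rescan, set Round Robin per LUN if the array supports it
    esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR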

  • iSCSI Initiator Port Group error

    Problem adding a second vmk to an existing iSCSI vSwitch. The original config was 2 vmks - all worked successfully. 1 vmk was removed to troubleshoot network problems.

    Re-created the 2nd vmk, but when moving one of the NICs down to the Unused category, as soon as OK is clicked the message "This VMkernel port group contains a NETWORK card which is bound to an iSCSI initiator. Changing its settings could disrupt the connection to the iSCSI datastore" is displayed. The response options are Yes or No. A screenshot is attached.

    vCenter is version 5.0 U1b. The host is ESXi 5.0 U2, in maintenance mode with none of the guests running. The host boots from local disk, not from SAN. Can I safely click Yes, then go back and move the vmnic, or is there a better way to fix this problem? Thank you.

    Thanks for sharing the information. Please try the steps below:

    Step 1. Remove the port bindings from the existing port group and reboot the host.

    Step 2. Change the first iSCSI port group so its preferred vmnic is Active and move the second NETWORK card to Unused.

    Step 3. Change the second iSCSI port group so its preferred vmnic is Active and move the other NETWORK card to Unused.

    Step 4. Add vmk1 and vmk2 under the network port bindings of the iSCSI initiator.

    I hope the steps above will help to solve the problem (the esxcli equivalents are sketched below). If you still encounter the error, please let me know.
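    A sketch of the esxcli equivalents of those steps on ESXi 5.0, assuming the software iSCSI adapter is vmhba33 (a placeholder name):

    # Step 1: list and remove the existing port bindings
    esxcli iscsi networkportal list -A vmhba33
    esxcli iscsi networkportal remove -A vmhba33 -n vmk1
    esxcli iscsi networkportal remove -A vmhba33 -n vmk2

    # Steps 2-3: fix NIC teaming on each port group (one Active vmnic, the other Unused)

    # Step 4: re-add both VMkernel ports to the iSCSI initiator
    esxcli iscsi networkportal add -A vmhba33 -n vmk1
    esxcli iscsi networkportal add -A vmhba33 -n vmk2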

  • ESXi 5.5u1: added an iSCSI software storage adapter, rebooted, and now the vSphere Client cannot connect to "my ip": an unknown error has occurred. The server could not interpret the client's request. The remote server returned an error: (503) Server Unavailable

    I have not yet connected to any iSCSI target device.

    I can ping my host.

    When I open http://"hostip" in a web browser, I get a 503 Service Unavailable.

    Restarting the host gets me nowhere.

    SSH somehow opens, but I cannot log in.

    Console seems OK.

    The vSphere Client cannot connect.

    If I reset the host to defaults from the console it is OK, but when I reconfigure the host, this error comes back.

    I tried to reinstall from DVD.

    I'm completely patched up to date via SSH esxcli.

    This happens on both my hosts, although they are almost identical Lenovo ThinkServer TS140s with a Broadcom 10Gig NIC and an integrated Intel NETWORK card.

    It almost always seems to happen the next time I reboot after enabling iSCSI support.

    The only weird thing I have is that my integrated NIC is an Intel I217, and I have to use a special VIB so that it can be used in ESXi.

    The client is Windows 8.1.

    Here are my installation notes:

    Install to a USB stick/SSD with the ISO customized with the i217 NIC driver, reset the configuration and reboot.

    Management NIC set to NIC0: 1Gig

    Management IP: hostIP/24 GW: my gateway

    DNS: Windows DNS on vm1 and vm2

    HostName: ESXi1.Sub.myregistereddomainname; custom DNS suffixes: sub.myregistereddomainname

    Reset

    Patch to date (https://www.youtube.com/watch?v=_O0Pac0a6g8)

    Download the VIB and the patch .zip to a datastore using the vSphere Client.

    To get them (https://www.vmware.com/patchmgr/findPatch.portal)

    Start the ESXi SSH service and establish a Putty SSH connection to the ESXi server.

    Put the ESXi server in maintenance mode,

    example command: esxcli software vib install -d /vmfs/volumes/ESXi2-2/patch/ESXi550-201404020.zip

    Re-install the Intel i217 NIC driver if it was removed by the patch.

    Change the ESXi host acceptance level to CommunitySupported,

    command: esxcli software acceptance set --level=CommunitySupported

    Install the VIB

    command: esxcli software vib install -v /vmfs/volumes/datastore1/net-e1000e-2.3.2.x86_64.vib

    command: reboot

    Connect via the vSphere Client

    -Storage

    Check/fix/create local storage. VMFS5

    -Networking

    vSwitch0

    Check vmnic0 (1)

    Rename the VM Network port group to 'essential'.

    Rename the Management Network port group for basic VMkernel management traffic.

    -Configuration time

    Enable the NTP client to start and stop with the host; set 0-3 ntp.org time servers.

    DNS and routing

    Virtual machine startup and shutdown

    - enable - continue immediately if VMware Tools starts - shutdown action: Guest Shutdown - both delays set to 10 seconds

    Security profile

    Services

    SSH - startup - enable the start and stop with host

    Host cache configuration

    - Properties on the SSD - allocate 40GB for the host cache.

    Suppress the SSH warnings:

    Advanced settings, UserVars, UserVars.SuppressShellWarning, change from 0 to 1.

    Storage adapters

    - Add - add the software iSCSI adapter (CLI equivalent sketched below)
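    As a side note, the GUI "add software iSCSI adapter" step can also be done (and verified after the reboot) from the SSH session; a small sketch, with no guarantee it avoids the 503 issue:

    # Enable the software iSCSI adapter
    esxcli iscsi software set --enabled=true

    # Confirm the adapter (e.g. vmhba3x) is present
    esxcli iscsi adapter list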

    I think I see where I went wrong. In fact, I applied two patches when only one was suitable: I started with 5.5u1 rollup 2 and then applied both ESXi550-201404001 and ESXi550-201404020. Strangely, I did not have problems until I worked with iSCSI.
