PSP for ESXi using NetApp storage

I am using ESXi 4.1.  The storage is NetApp FC.  I enabled ALUA on the initiator groups (igroups).  The storage array type (SATP) is VMW_SATP_ALUA.  The path selection policy defaults to Most Recently Used.  What is the command on ESXi 4.1 to set the default path selection policy to Round Robin?  Thank you

That means the policy in your case has been set to MRU rather than RR. Maybe run the command again once; after that it should show up properly configured.

esxcli --server <servername> --username root nmp satp setdefaultpsp --psp VMW_PSP_RR --satp VMW_SATP_DEFAULT_AA
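For reference, a rough sketch of how this can look on ESXi 4.1. The command above targets VMW_SATP_DEFAULT_AA; since the question says the LUNs are claimed by VMW_SATP_ALUA, the sketch assumes that SATP instead (substitute whichever SATP actually claims your NetApp devices; the device ID is a placeholder):

# Check which PSP each SATP currently defaults to
esxcli nmp satp list

# Make Round Robin the default PSP for the ALUA SATP
esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA --psp VMW_PSP_RR

# Devices already claimed keep their old policy; switch them individually
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# Confirm the result
esxcli nmp device list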

Tags: VMware

Similar Questions

  • Masking vs. zoning for UCS and NetApp storage LUNS

    I know this is more of a Cisco MDS/storage question, but does anybody know which of LUN masking and zoning would be the preferred method? I have two Cisco 9148 fabric switches, two NetApp FAS3210 SAN controllers, and one UCS chassis with four B200 M2 blades. I have been told that I should not connect the Cisco UCS fabric interconnects directly to the back of the NetApp SAN and rely on LUN masking, but should instead configure fabric zoning on the Cisco MDS 9148 switches. Normally this wouldn't be a problem, but we have an offsite location where we don't have any MDS switches, and I would like to connect directly to the SAN there.

    I was told that this could lead to disk corruption if misconfigured, and Cisco's position is to use zoning through some type of Cisco fabric switch. Of course Cisco advises everyone to manage this type of configuration through their equipment rather than on the SAN. Does anyone have an opinion on the matter?

    Zoning and masking are two completely different features.

    Zoning happens on your storage switches and is the equivalent of an ACL (Access Control List).  It limits which initiators and targets can see each other.  (Who can I see?)

    Masking happens on your storage array and limits which LUNs an initiator has access to.  (What can I see?)

    * UCS does not support connecting a storage array directly to the fabric interconnects; at a minimum you need an upstream storage switch (MDS or equivalent) to push the zoning.  Zoning prevents a misbehaving initiator from potentially impacting the operation of the others by limiting what it can see in the fabric.
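    As a rough illustration of that difference from an ESX/ESXi 4.x host's point of view (the adapter name is an assumption), you can rescan after a zoning or masking change and compare what the host sees:

    esxcfg-rescan vmhba1     # rescan the HBA after the fabric or array change
    esxcfg-scsidevs -a       # the host's adapters and their WWNs
    esxcfg-mpath -b          # paths: which targets the zoning lets this initiator reach
    esxcfg-scsidevs -c       # devices: which LUNs the masking (igroups) actually exposes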

    If you happen to have a Nexus 5K, it can also work as your storage switch.  The N5K can perform almost all of the same fabric services as an MDS and is fully supported.

    Kind regards

    Robert

  • Another issue for people using Netapp with VMWare via NFS

    Could you please attach your vmkernel logfile from the last two weeks to this message? Just remove all the server names from it first; a quick way to extract and scrub the relevant lines is sketched below.

    I need to compare it with my own log to see whether this is somehow 'common' to VMware and NetApp.
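    A minimal sketch of extracting and scrubbing the lines, assuming a classic ESX host with its logs under /var/log/vmkernel* and the hostname shown in the excerpt below (adjust both for your environment):

    # pull all NFSLock messages and replace the host name before posting
    grep -h NFSLock /var/log/vmkernel* | sed 's/dv29-011/HOSTNAME/g' > nfslock-extract.txt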

    That's what I see in my logs:

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.338 cpu4:1028) NFSLock: 516: stop to access the 0xc21db80 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.338 cpu4:1028) NFSLock: 516: stop to access the 0xc21a310 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.338 cpu4:1028) NFSLock: 516: stop to access the 0xc21e798 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.338 cpu4:1028) NFSLock: 516: stop to access the 0xc219448 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.338 cpu4:1028) NFSLock: 516: stop to access the 0xc219f08 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21d4c8 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21bdf0 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21d0c0 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21c4a8 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21ecf8 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21df88 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21dcd8 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21f3b0 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21d370 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc219040 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc219db0 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc218d90 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc218ee8 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21bb40 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21a468 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc2196f8 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc219b00 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21d778 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21e0e0 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21e390 fd 4

    August 25 at 04:03:42 dv29-011 vmkernel: 33:13:05:52.339 cpu4:1028) NFSLock: 516: stop to access the 0xc21e4e8 fd 4

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.004 cpu2:1026) NFSLock: 478: start Access again to fd 0xc219040

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.004 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21e390

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.004 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21e4e8

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.008 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21bb40

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.008 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21d370

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.009 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21e0e0

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.009 cpu2:1026) NFSLock: 478: start Access again to fd 0xc219b00

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.011 cpu2:1026) NFSLock: 478: start Access again to fd 0xc218ee8

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.015 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21f3b0

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.015 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21d778

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.015 cpu2:1026) NFSLock: 478: start Access again to fd 0xc2196f8

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.017 cpu2:1026) NFSLock: 478: start Access again to fd 0xc218d90

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.017 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21dcd8

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.017 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21a468

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.021 cpu2:1026) NFSLock: 478: start Access again to fd 0xc219db0

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.022 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21df88

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.024 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21bdf0

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.024 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21ecf8

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.025 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21d4c8

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.026 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21c4a8

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.027 cpu2:1026) NFSLock: 478: start Access again to fd 0xc219f08

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.028 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21d0c0

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.029 cpu2:1026) NFSLock: 478: start Access again to fd 0xc219448

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.030 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21e798

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.034 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21a310

    August 25 at 04:03:58 dv29-011 vmkernel: 33:13:06:09.036 cpu2:1026) NFSLock: 478: start Access again to fd 0xc21db80

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc219040 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc219db0 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc218d90 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc218ee8 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc21bb40 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc21a468 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc2196f8 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc219b00 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc21d778 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc21e0e0 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc21e390 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc21e4e8 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc21db80 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc21a310 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc21e798 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc219448 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc219f08 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.052 cpu1:1025) NFSLock: 516: stop to access the 0xc21d4c8 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025) NFSLock: 516: stop to access the 0xc21bdf0 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025) NFSLock: 516: stop to access the 0xc21d0c0 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025) NFSLock: 516: stop to access the 0xc21c4a8 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025) NFSLock: 516: stop to access the 0xc21ecf8 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025) NFSLock: 516: stop to access the 0xc21df88 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025) NFSLock: 516: stop to access the 0xc21dcd8 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025) NFSLock: 516: stop to access the 0xc21f3b0 fd 4

    August 26 at 04:04:07 dv29-011 vmkernel: 34:13:06:15.053 cpu1:1025) NFSLock: 516: stop to access the 0xc21d370 fd 4

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.641 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21e390

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.642 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21e4e8

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.667 cpu2:1080) NFSLock: 478: start Access again to fd 0xc219b00

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.668 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21e0e0

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.670 cpu2:1080) NFSLock: 478: start Access again to fd 0xc2196f8

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.672 cpu2:1182) NFSLock: 478: start Access again to fd 0xc21d778

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.672 cpu2:1182) NFSLock: 478: start Access again to fd 0xc21a468

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.705 cpu2:1182) NFSLock: 478: start Access again to fd 0xc21bb40

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.706 cpu2:1182) NFSLock: 478: start Access again to fd 0xc218ee8

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.706 cpu2:1182) NFSLock: 478: start Access again to fd 0xc218d90

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.712 cpu2:1080) NFSLock: 478: start Access again to fd 0xc219db0

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.713 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21bdf0

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.713 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21d4c8

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.714 cpu2:1182) NFSLock: 478: start Access again to fd 0xc219f08

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.714 cpu2:1182) NFSLock: 478: start Access again to fd 0xc219448

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.718 cpu2:1080) NFSLock: 478: start Access again to fd 0xc219040

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.718 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21d370

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.718 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21f3b0

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.719 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21dcd8

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.719 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21df88

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.720 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21ecf8

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.720 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21c4a8

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.720 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21d0c0

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.721 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21e798

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.721 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21a310

    August 26 04:05:52 dv29-011 vmkernel: 34:13:07:59.721 cpu2:1080) NFSLock: 478: start Access again to fd 0xc21db80

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21a468 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc2196f8 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc219b00 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21d778 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21e0e0 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21e390 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21e4e8 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21db80 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21a310 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21e798 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc219448 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc219f08 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21d4c8 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21bdf0 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21d0c0 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21c4a8 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21ecf8 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21df88 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21dcd8 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21f3b0 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21d370 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc219040 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc219db0 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc218d90 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc218ee8 fd 4

    August 27 at 04:03:51 011 dv29 vmkernel: 35:13:05:56.744 cpu5:1029) NFSLock: 516: stop to access the 0xc21bb40 fd 4

    August 27 at 04:04:26 dv29-011 vmkernel: 35:13:06:31.629 cpu3:1027) NFSLock: 478: start Access again to fd 0xc21bb40

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc218ee8

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc218d90

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc219db0

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc219040

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21d370

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21f3b0

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21dcd8

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21df88

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21ecf8

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21c4a8

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21d0c0

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21bdf0

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.157 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21d4c8

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156) NFSLock: 478: start Access again to fd 0xc219f08

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156) NFSLock: 478: start Access again to fd 0xc219448

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21e798

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21a310

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21db80

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21e4e8

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21e390

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21e0e0

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21d778

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156) NFSLock: 478: start Access again to fd 0xc219b00

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156) NFSLock: 478: start Access again to fd 0xc2196f8

    August 27 at 04:04:27 dv29-011 vmkernel: 35:13:06:32.158 cpu3:1156) NFSLock: 478: start Access again to fd 0xc21a468

    Until last week we were seeing an IOCStatus error on Linux hosts. It wasn't at the typical time. It hit roughly 10-15 (out of 60) Linux hosts every Sunday morning, plus a variable few at other load peaks during the week. However, nothing actually broke. We have now gone 2 Sundays since the code upgrade and have only seen 1 error with no crashes. I'm happy with that for now. I'd still really like to know what the root cause of this is. Our storage administrator said there was nothing NFS-related in the code upgrade. Going through the Data ONTAP 7.3 release notes, I did note a change to the kahuna process that was supposed to help with load. Not sure whether that is it or not.

  • NAS storage solution possible?  Promise VTRAK 15100 (U320 SCSI) as storage for ESXi/VirtualCenter?

    I have several Promise VTrak units (12110 and 15100) that I would like to use as storage for a validation project using ESXi, plus a few blade enclosures and 1U servers in a 'data center'-type configuration...  The VTraks are 12-to-15-bay SATA drive enclosures with 2 x U320 SCSI channels on the back...

    I would like to install ESXi on the blades and 1U servers, have shared storage on the VTraks, and set up a test environment for deploying web servers and DB servers.   I see a lot of people using SAN solutions - BUT I don't want to just throw these VTraks away.

    Is this possible?  I don't see them in the HCL, but as we all know, that doesn't mean it isn't "possible"...

    THANKS TO ALL THOSE WHO CAN PROVIDE INSIGHT.

    So, your options...

    1. This is probably the best option in terms of performance. It will take a bit of work to get the Linux drivers sorted out if they are not already in the OpenFiler bare-metal package.

    2. This option might work, however "multiple hops" of storage (i.e. ESX -> OpenFiler -> W2K3 -> storage) will probably not give you the best performance and adds a lot of "moving parts" to complicate things. I would avoid it if possible, and if you want to stick with Windows presenting the VTrak disks, then look at StarWind from RocketDivision as an alternative to the OpenFiler/Linux option #1.

    3. NFS from W2K3/W2K8 is also possible. There have been a few anecdotal reports of poor performance, but you may find it is "good enough" for your needs.

    Of all the options, #1 is likely to give you the best performance, and you could do it with OpenFiler, Windows + StarWind, OpenSolaris or some other Linux distribution of your choice...

  • When will I be able to remove the preinstalled Apple apps I have no use for and reclaim my storage?

    When will I be able to remove the preinstalled Apple apps I have no use for and reclaim my storage?

    When Apple decides to make that an option.

    In total, I'd be surprised if those apps take up more than one or two hundred megabytes.

  • How can I delete a file on my iOS device that I downloaded from my iCloud? I just want to keep the preview on my iOS device without using my iOS storage for the entire file.

    How can I delete a file on my iOS device that I downloaded from my iCloud? I just want to keep the preview on my iOS device without using my iOS storage for the entire file. That way the file is still in iCloud and available to be downloaded to any device.

    In practice, I want to be able to browse and download the documents stored in my iCloud, and once I no longer need them on my iOS device I would like to 'offload them back to the cloud' so that my iOS device's storage is not used.

    Thank you in advance to the community!

    Max

    The only way I have found to do that once a file has been downloaded is to remove it and add it again, either from the Finder on a Mac or via iCloud.com.

  • Configuring a Nexus 5K for SAN switching between UCS 6100s and NetApp storage

    We are trying to get our UCS environment configured to boot from SAN LUNs on our NetApp storage. We have a pair of Nexus 5Ks with the Enterprise/SAN license on them and the 6-port FC module. What do we need to do to configure the Nexus 5000s to operate as the SAN switches between the storage target and the UCS environment?

    Do you see the 6120's WWPNs on the N5Ks? What is the output of 'show flogi database' and 'show int brief'?

    Have you mapped the FC ports to the 6120 into the proper VSAN?

    Is NPIV enabled?

  • Is there a way to retrieve the storage usage of a specific org vDC through the vCloud RESTful API?


    Hello - I hope someone can tell me how to retrieve the amount of storage used against the allocation given to a specific org vDC.  I use storage profiles in my environment, and when I retrieve the details of an org vDC with a GET against .../api/admin/vdc/<OID>, the result includes a block of VdcStorageProfiles hrefs that I can do further GETs against for details on the provisioned storage profiles.  However, the results of those later GETs contain just a 'Limit' tag in the XML, which indicates the amount of the allocation but still no details on usage.  The initial GET against the vDC shows its usage against the memory and CPU allocations, and I was hoping to find a similar result for storage.

    Thank you!

    Never mind, I just got this one resolved; all that was needed was the query /api/query?type=adminOrgVdcStorageProfile&filter=vdcName==XXXX
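    For illustration, a rough sketch of running that query with curl (the cell address, API version and session token are assumptions; a token is normally obtained by POSTing to /api/sessions and is returned in the x-vcloud-authorization response header):

    curl -k -H 'Accept: application/*+xml;version=5.1' \
         -H "x-vcloud-authorization: $TOKEN" \
         'https://vcd.example.com/api/query?type=adminOrgVdcStorageProfile&filter=vdcName==XXXX'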

  • Can I use a VMware Player image in vSphere / ESXi?

    Jeremy

    French user

    Hello, my savior

    Currently I have an ESXi host, but before that I used a VMware Player image for my virtual server. I know I can use vSphere to work with my ESXi 5.1; I just want to know whether I can use my VMware Player image in vSphere / on ESXi, and if the answer is yes, how?

    Thanks for your help.

    Right, so use VMware converter to do this as indicated in my previous post.
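    As an aside, OVF Tool can also push a Workstation/Player VM straight to an ESXi host; a rough sketch (the path, host name and credentials are assumptions), with VMware Converter remaining the route suggested above:

    ovftool "C:\VMs\MyServer\MyServer.vmx" vi://root@esxi01.example.com/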

    Kind regards

    Julien

  • What plans are available for Adobe cloud storage? What space is allowed on the free plan?

    What plans are available for Adobe cloud storage? What space is allowed on the free plan?

    Thank you

    Agoutin

    Hi rajthkr,

    Acrobat.com offers free online storage for up to 5 GB of data (there are no plans at this time to offer additional storage for a fee).

    Best,

    Sara

  • Cisco UCS service profiles for ESXi

    Do you need a separate boot LUN on the storage array for each host that you want to apply a service profile to? I'm just confused. Let's say I have 12 blades that I want to set up to boot from SAN for ESXi; will there be 12 separate service profiles, one for each of them, and will I need 12 different LUNs on our storage? I am new to this and I just want to make it as clean as possible! Any help is appreciated.

    Hi Justin,

    You can have the same LUN ID for multiple service profiles, as well as the same target ID. However, your initiators will be different, and you will need to configure the zoning on your upstream switch if you use end-host mode.

    Here is a good guide that reviews the basics and how to set it up:

    http://jeffsaidso.com/2010/11/boot-from-San-101-with-Cisco-UCS/

    Also, here is an excellent troubleshooting guide:

    http://www.Cisco.com/c/en/us/support/docs/servers-unified-computing/UCS-...

    Kind regards

  • Using NetApp SRA with vSphere SRM and quiescing

    Hello!

    My understanding is that SnapMirror does no quiescing, so as a result you will only get crash-consistent copies?

    My question is: when you use the NetApp Storage Replication Adapter with Site Recovery Manager, do you then get quiescing support, since SRM can work alongside the array to orchestrate the recovery correctly?

    See you soon,

    Bilal

    Application quiescing with NetApp requires NetApp SnapManager.

    SRM does not add any quiescing capabilities for NetApp (or any other array vendor). SRM monitors replication, can trigger it and in some cases reverse it, but it does not change or impact how the replication itself runs.

    If you perform a planned migration (or a DR failover where SRM can still communicate with the SRM & vCenter servers at the protected site), SRM will replicate the storage, shut down the VMs gracefully and then replicate again, so app-consistent replication is not necessary in those cases.

    Does that answer your questions?

  • How do I configure Openfiler for ESXi 5.0 (which is installed inside Workstation)?

    I have VMware Workstation 9.0 installed on my laptop. I created an ESXi 5.0 host as a VM in Workstation, and I installed the vSphere Client 5.0 on my laptop. I am able to access the ESXi host from my laptop through the vSphere Client.

    Next, I created a VM with Windows 2008 R2 on top of ESXi and installed vCenter 5.0 in the Windows 2008 R2 machine. I can connect to vCenter using the vSphere Client (installed on my laptop) and have added the ESXi host to the inventory.

    Now I have created an Openfiler VM. Everything is fine so far. I want to use this Openfiler as a storage array and configure a SAN with ESXi and Openfiler.

    I think I need to create a VMkernel port group and attach the Openfiler VM to it. And if I want to access the Openfiler web interface, do I need internet access? Can the Openfiler web interface be reached without internet access?

    Can someone let me know how to connect this Openfiler as a storage array for ESXi and configure a SAN?

    Openfiler does not need internet access, and you do not require an additional VMkernel port just to reach its web interface.  In production you would separate management and iSCSI traffic onto separate physical NICs, and therefore separate VMkernel ports; in this lab case it is not necessary.  Did you do the initial Openfiler setup to create a LUN for the ESXi host to access?
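    A minimal sketch of the ESXi-side commands with ESXi 5.0 esxcli (the IP addresses, names and export path are assumptions, and the port group is assumed to exist already):

    # VMkernel port for storage traffic on an existing port group
    esxcli network ip interface add --interface-name vmk1 --portgroup-name Storage
    esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 192.168.10.11 --netmask 255.255.255.0 --type static

    # Option A: mount an NFS export from the Openfiler VM as a datastore
    esxcli storage nfs add --host 192.168.10.20 --share /mnt/vg0/nfs01 --volume-name openfiler-nfs

    # Option B: iSCSI - enable the software initiator and point it at Openfiler
    esxcli iscsi software set --enabled=true
    esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 192.168.10.20
    esxcli storage core adapter rescan --adapter vmhba33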

  • Can taking a datastore in a storage cluster out of Storage DRS maintenance mode enforce the storage profiles (VM storage policies) set up for a virtual machine?...

    Can taking a datastore in a storage cluster out of Storage DRS maintenance mode enforce the storage profiles (VM storage policies) set up for a virtual machine?...

    i.e. can you define Storage DRS affinity rules based on how the storage profile rules are configured for a virtual machine?

    In my mind it seems there is a disconnect between Storage DRS and storage profiles.

    Thank you.

    = NOTE =.

    My home lab: 2 hosts running ESXi 5.0

    dv01 has:

    160 GB 7200 RPM boot VMFS = dv01-BOOT

    1 TB 7200 RPM VMFS = dv01-VMFS

    VM02 has:

    160 GB 7200 RPM boot VMFS = VM02-BOOT

    1 TB 7200 RPM VMFS = VM02-VMFS

    2 TB 5900 RPM VMFS = VM02-TV01

    I have created (3) storage profiles using 'user-defined storage capability' as shown below:

    Boot (160 GB @ 7200 RPM SATA) > Boot

    Fast (160 GB @ 7200 RPM SATA and 1 TB @ 7200 RPM SATA) > Fast

    Slow (2 TB @ 5900 RPM SATA) > Slow

    Then I set up a storage cluster with my local disks on dv01 (only, so far):

    the dv01-BOOT and dv01-VMFS local VMFS disks make up the 'storage cluster'.

    I have a virtual machine called DC01 (HardDisk1) that I want to live on the 'Boot' storage profile disk.
    It shows 'Non-compliant'; according to the storage profile it should run on the 'Boot' disk on dv01.

    To get to this point, I put the storage cluster into Storage DRS maintenance mode and forced a Storage vMotion of all virtual machines off dv01-BOOT.

    Neither disk is under I/O or space pressure.

    But once I take dv01-BOOT out of Storage DRS maintenance mode, will Storage DRS now move the VM back so that it satisfies the storage profile (rule) I set up... i.e. make the VM compliant with the 'storage profile' (without resorting to external API calls, a PowerShell script or 3rd-party software)... ??

    It seems weird that there is (or may be) such a disconnect between storage profiles, storage clusters and Storage DRS.

    Thank you...

    At this point, I think Storage DRS and storage profiles are two separate technologies that work independently, but what you suggest is a great idea -

  • HP to NetApp storage data migration

    I have LUNs presented to my ESX clusters from HP storage.  We recently bought NetApp storage and want to migrate the data from the HP storage to the NetApp storage.

    Is it possible to present LUNs from two different storage vendors to the same ESX server?  Can you use Storage vMotion to migrate virtual machines from a datastore on the HP to another datastore on the NetApp?

    Yes, you can present LUNs from two different manufacturers and then use Storage vMotion to move the VMs' data - that was one of the main reasons svMotion was created.
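    A rough sketch with the vSphere CLI svmotion command (the vCenter URL, datacenter, datastore and VM names are assumptions):

    svmotion --url=https://vcenter.example.com/sdk --username=administrator --datacenter=DC1 --vm='[HP_Datastore] myvm/myvm.vmx:NetApp_Datastore'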

    If you found this or any other answer useful, please consider awarding points by marking the answer correct or helpful.
