Proper procedure to remove a LUN/iSCSI target?

Hi guys, I'm on ESXi 5 and need to remove some iSCSI LUNs from a couple of hosts.  My first inclination was to just delete the target on the SAN side and then rescan.  But I thought I remembered reading somewhere that I should "detach" the LUN from the host first?  So I figured I should ask.

Second question along the same lines: when removing an entire target, is it enough to just delete the IP address on the 'Discovery' tab and then rescan?  Thank you kindly.

Hello.

Check out this post for the procedure to follow:

http://blogs.VMware.com/vSphere/2011/11/best-practice-how-to-correctly-remove-a-LUN-from-an-ESX-host.html

Good luck!
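For reference, the gist of the graceful sequence from that post can be sketched with esxcli on the host. This is only a sketch, not the post verbatim; the datastore label, device ID, portal address, IQN, and adapter name below are placeholders to replace with your own:

```shell
# Sketch of a graceful iSCSI LUN removal on ESXi 5.x.
# "MyDatastore", naa.xxxxxxxx, 192.168.1.10:3260, the IQN, and vmhba33
# are all placeholders.

# 1. Unmount the VMFS datastore backed by the LUN
esxcli storage filesystem unmount --volume-label=MyDatastore

# 2. Detach (set offline) the device so the host will not reclaim it
esxcli storage core device set --state=off --device=naa.xxxxxxxx

# 3. Remove the discovery entries so rescans stop probing the target
esxcli iscsi adapter discovery sendtarget remove --adapter=vmhba33 --address=192.168.1.10:3260
esxcli iscsi adapter discovery statictarget remove --adapter=vmhba33 --address=192.168.1.10:3260 --name=iqn.2000-01.com.example:target1

# 4. Only now delete the LUN on the SAN side, then rescan the adapter
esxcli storage core adapter rescan --adapter=vmhba33
```

The point of detaching before deleting on the array side is to avoid leaving the host with an all-paths-down condition on a device it still considers in use.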

Tags: VMware

Similar Questions

  • Removed VMFS3 iSCSI target volume - I'm screwed?

Hello. I was trying to move virtual machines between servers and the SAN. On one server, ESXi 3.5, I had a VMFS volume on an iSCSI target on a basic SAN device.

    The volume was listed on the server's Configuration/Storage tab. I removed it, thinking that would just delete the host's reference to the volume, but it seems to have deleted the iSCSI target volume itself.

    Is that right - is my volume gone? Am I screwed? Is there any way to recover the volume?

    Any help or advice would be really appreciated. Thanks in advance.

You should thank Edward Haletky, not me.

    In the VMware Communities, his username is Texiwill.

    André

  • Best practices for iSCSI target LUNs

I'm connecting my lab ESX host to a QNAP iSCSI target and wonder about best practices for creating LUNs on the target. I have two servers, each connecting to its own target. Should I create a LUN for each virtual machine I intend to create in the lab, or is it better to create one large LUN and put multiple VMs per LUN? Does placing several VMs on one LUN have consequences for HA or FT?

Regards

    It is always a compromise...

SCSI reservations are per LUN.  If you put many guests on the same LUN, it becomes a problem - and yes, we have seen it.

Be sure to thin provision.  That way you can make your LUNs smaller and still give several VMs each their own.

  • Static iSCSI targets removal script with PowerShell

    Hi all

Is it possible to script the removal of static iSCSI targets with PowerShell? If so, how...

    Thank you!

If I understand you correctly, you want a report of the current static iSCSI targets on your ESX server.

    This should do the trick

$esxName = "esx-hostname"   # placeholder: set to the name of your ESX host

    $report = @()
    $hostStorSys = Get-View -Id (Get-VMHost $esxName | Get-View).ConfigManager.StorageSystem
    foreach($hba in $hostStorSys.StorageDeviceInfo.HostBusAdapter){
        if($hba.GetType().Name -eq "HostInternetScsiHba"){
            if($hba.ConfiguredStaticTarget -ne $null){
                foreach($target in $hba.ConfiguredStaticTarget){
                    # The Select-Object list must include every property assigned below
                    $row = "" | Select HBAName, HostName, Device, Port
                    $row.HBAName  = $hba.Device
                    $row.HostName = $target.Address
                    $row.Device   = $target.iScsiName
                    $row.Port     = $target.Port
                    $report += $row
                }
            }
        }
    }
    $report
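    Since the original question was about removal rather than reporting: as a sketch, the same static targets can also be removed with esxcli directly on the host (the adapter name, portal address, and IQN below are placeholders), and I believe newer PowerCLI releases also offer Get-IScsiHbaTarget / Remove-IScsiHbaTarget for this:

    ```shell
    # List the configured static targets, then remove one (ESXi 5.x).
    # vmhba33, the address, and the IQN are placeholders.
    esxcli iscsi adapter discovery statictarget list
    esxcli iscsi adapter discovery statictarget remove \
        --adapter=vmhba33 \
        --address=192.168.1.10:3260 \
        --name=iqn.2000-01.com.example:target1

    # Rescan so the host drops paths to the removed target
    esxcli storage core adapter rescan --adapter=vmhba33
    ```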
    
  • Failed to create datastore on iSCSI LUN

I get the following error when attempting to create a new datastore after a fresh installation of ESXi 5.5U2.

Environment:

    • UCS B200 servers using the VIC 1280 card
    • iSCSI interfaces connected directly from the Fabric Interconnects to the NetApp FAS2552 with TwinAx cables
    • MTU is set for jumbo frames (9000) on the NetApp interface, the iSCSI vNICs, the VMkernel vSwitch, and the port group interfaces
    • During the initial setup of the vSwitches I forgot the MTU setting and the port group interfaces wouldn't even ping, so I know the MTU settings are good now
    • The iSCSI connection is green, the LUNs are created and presented, and the storage shows as available on the ESX server when creating a datastore (1 TB and 2 TB sizes both tested)
    • The NetApp LUNs have been created both thin provisioned and thick provisioned - neither works; both get the same error

    Any ideas would be very useful. Technical support has not yet responded.

    Thanks in advance.

    DM

OK, after working with NetApp and VMware on a conference call (after both pointed fingers at each other), the bottom-line "fix" was to remove the iSCSI port bindings, remove the broken paths, remove the vSwitches, and rebuild:

1. Create the VMkernel vSwitch with the appropriate port group

    2. Set the MTU to 9000 on the port group and vSwitch

    3. Create the port bindings for each iSCSI interface

    4. Rescan

    5. Successfully create the datastore

Oddly enough, I had to repeat this process 3 times for one of the blades, but it worked the first time on the other 3 blades involved.
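    For what it's worth, steps 1-4 above can be sketched with esxcli from the host shell (the vSwitch, port group, vmknic, IP address, and adapter names are placeholders):

    ```shell
    # 1. VMkernel vSwitch and port group (names are placeholders)
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-1

    # 2. MTU 9000 on the vSwitch and the vmknic
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static

    # 3. Bind the vmknic to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

    # 4. Rescan
    esxcli storage core adapter rescan --adapter=vmhba33
    ```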

VMware technical support did provide the following way to look at the LUN from the CLI:

1. cd /vmfs/devices/disks
       ls -l

       This lists all of the available disks. Identify the LUN naa.xxxxxxxx.

    2. Run partedUtil against the naa.xxxxxxxx identified in step 1:

       partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxx

       If there is a partition, you may see something similar to:

       1 63 2249099 131 128   -> 1 is the partition number

       Or you may see an error; if you get an error, please contact technical support.

    3. If there is a partition, you can delete it:

       partedUtil remove /vmfs/devices/disks/naa.xxxxxxx 1

NOTE: This process did not fix my issue, but it is still useful nonetheless. It could work if the LUN had been used by another operating system and needed cleaning, but my situation was entirely greenfield.

  • Hot Add/Remove LUN fails on periodic rescan at removal, although the path is marked DEAD as expected

We are running the Hot Add/Remove LUNs cases of the ESX 5.5 storage certification with the Workbench 3.0 kit.

    Hot add succeeds with both manual add and periodic rescan. Hot remove succeeds with manual rescan, but fails with periodic rescan even though the removal lists the path as DEAD.

    Here are the logs

[August 26, 2014 20:01:29: WLMANAGER] [0] FRAMEWORK: No loads found.
    [August 26, 2014 20:02:25: HOTADDREMO] [0] INFO: Hot add test using RESCAN
    [August 26, 2014 20:07:55: HOTADDREMO] [0] INFO: Running rescan on the first server
    [August 26, 2014 20:08:28: HOTADDREMO] [0] INFO: Found new LUN eui.48d8bf5f64e64100
    [August 26, 2014 20:08:28: HOTADDREMO] [0] INFO: Hot add with rescan passed on the first server
    [August 26, 2014 20:08:28: HOTADDREMO] [0] INFO: Running rescan on the second server
    [August 26, 2014 20:09: HOTADDREMO] [0] INFO: Found new LUN eui.48d8bf5f64e64100
    [August 26, 2014 20:09: HOTADDREMO] [0] INFO: Hot add with rescan passed on the second server
    [August 26, 2014 20:09: HOTADDREMO] [0] INFO: Hot add test using PERIODIC RESCAN
    [August 26, 2014 20:15:05: HOTADDREMO] [0] INFO: Running test on the first server
    [August 26, 2014 20:15:06: HOTADDREMO] [0] INFO: Found new LUN eui.48d8bf5f64e64200
    [August 26, 2014 20:15:06: HOTADDREMO] [0] INFO: Hot add with periodic rescan passed on the first server
    [August 26, 2014 20:15:06: HOTADDREMO] [0] INFO: Running test on the second server
    [August 26, 2014 20:15:08: HOTADDREMO] [0] INFO: Found new LUN eui.48d8bf5f64e64200
    [August 26, 2014 20:15:08: HOTADDREMO] [0] INFO: Hot add with periodic rescan passed on the second server
    [August 26, 2014 20:15:10: WLMANAGER] [0] FRAMEWORK: No loads found.
    [August 26, 2014 20:16:07: HOTADDREMO] [0] INFO: Hot remove test using RESCAN
    [August 26, 2014 20:17:27: HOTADDREMO] [0] INFO: Running hot remove test on the first server
    [August 26, 2014 20:18: HOTADDREMO] [0] INFO: Removed LUN eui.48d8bf5f64e64200
    [August 26, 2014 20:18: HOTADDREMO] [0] INFO: Test passed on the first server
    [August 26, 2014 20:18: HOTADDREMO] [0] INFO: Running hot remove test on the second server
    [August 26, 2014 20:18:33: HOTADDREMO] [0] INFO: Removed LUN eui.48d8bf5f64e64200
    [August 26, 2014 20:18:33: HOTADDREMO] [0] INFO: Test passed on the second server
    [August 26, 2014 20:18:33: HOTADDREMO] [0] INFO: Hot remove test using PERIODIC RESCAN
    [August 26, 2014 20:23:20: HOTADDREMO] [0] INFO: Running test on the first server
    [August 26, 2014 20:23:21: HOTADDREMO] [0] INFO: Checking the path to the removed LUN on the first server
    [August 26, 2014 20:24:17: TRANSPORT] [0] FRAMEWORK: Running cmd 'esxcfg-mpath -b -d eui.48d8bf5f64e64100' in blocking mode on host 'esxia.amiads.com'...
    [August 26, 2014 20:24:17: STAFBASE] [0] FRAMEWORK: Executing STAF command: staf esxia.amiads.com PROCESS SAMECONSOLE RETURNSTDOUT RETURNSTDERR WORKDIR SHELL START /WAIT COMMAND esxcfg-mpath -b -d eui.48d8bf5f64e64100
    [August 26, 2014 20:24:18: STAFPROCES] [0] FRAMEWORK: Host esxia.amiads.com returned eui.48d8bf5f64e64100 : * iSCSI disk (eui.48d8bf5f64e64100)
    vmhba34:C0:T65:L0 LUN:0 state:dead iscsi Adapter: Unavailable Target: Unavailable
    vmhba34:C3:T65:L0 LUN:0 state:dead iscsi Adapter: Unavailable Target: Unavailable
    [August 26, 2014 20:24:27: HOTADDREMO] [0] ERROR: LUN is not found deleted after periodic rescan on the first server


So I checked the VMware Perl code that runs this case; it is available at /opt/vmware/VTAF/storage50-cert/VTAF/Test/Storage/StorageCert/FVT/HotAddRemoveLUNs.pm


sub RunHotRemoveTest
    {
        .....

        # Removing the LUN only causes dead I/O paths. It does not delete the LUN.
        # Check for the dead I/O paths.
        $deadPathFound = "NO";
        LogInfo(channel => $channel, msg => "Check the path to removed LUN on $server");
        foreach my $lun (@lunListBeforePeriodicallyRescan) {
            #my $lunObj = $host2HbaObj->CreateScsiLun(name => $lun);
            $cmd = "esxcfg-mpath -b -d $lun";
            my $cmdObj = new VTAF::Framework::Core::Common::Command(
                Host => $host2Obj->GetName(),
                Cmd  => $cmd);
            my $rc        = $cmdObj->GetReturnCode();
            my $cmdResult = $cmdObj->GetStdout();
            my $cmdError  = $cmdObj->GetStderr();
            if ($cmdError) {
                $deadPathFound = "YES";
                LogInfo(channel => $channel, msg => "mpath command returned: $cmdError");
                LogInfo(channel => $channel, msg => "$lun is deleted after periodic rescan");
            }
        }
        if ($deadPathFound ne "YES") {
            LogError(channel => $channel, msg => "LUN is not deleted after periodic rescan on the $server");
            return FAILURE;
        } else {
            LogInfo(channel => $channel, msg => "Hot remove with periodic rescan PASSED on the $server");
        }
    }

So as you can see in the code above, VMware itself expects the path to show up as dead, but they check the error output of esxcfg-mpath -b -d <lunname> to detect it.

    I think they expect an error to come back in $cmdError... but it looks like, even though the command lists the path as DEAD, it returns success. So $deadPathFound is never set to YES, the code takes the "not YES" branch, and the test case fails...
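    A more robust check would parse the stdout of esxcfg-mpath for the path state instead of relying on stderr or the exit status, which changed between builds. A minimal sketch in shell; the sample output below is hardcoded for illustration, whereas the real test would capture it from running esxcfg-mpath -b -d <lunname> on the host:

    ```shell
    # Hardcoded sample of esxcfg-mpath output for a removed LUN on 5.5 U1,
    # where the command exits 0 but lists the paths as dead.
    output='eui.5b5fbb54c4d80200 : iSCSI disk (eui.5b5fbb54c4d80200)
       vmhba38:C1:T0:L0 LUN:0 state:dead iscsi Adapter: Unavailable Target: Unavailable'

    # Detect the dead path from stdout rather than from the exit status.
    if printf '%s\n' "$output" | grep -qi 'state:[[:space:]]*dead'; then
        echo "dead path found"
    else
        echo "no dead path"
    fi
    ```

    With this sample input it prints "dead path found".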

I would like to open a support case with VMware, but before that I'd like to know if anyone has faced this problem and fixed it with a patch to their Workbench, or whether any newer Workbench has this problem fixed...

We are using:

    VMware Test Manager: 3.0.0-1610638

    Storage certification 5.5: 3.0.0-1337995

    VMware Workbench: 3.0.1

Then I went back to our older ESX 5.1 certification, where this case passed, and was able to check the logs. It was performed with Workbench 2.0. There they ran an additional command, 'esxcli storage core device list -d <lunname>', which lists the path as dead; it properly detects the dead path and marks the test as passed. Unfortunately I don't seem to have their old code to compare with...

[July 27, 2012 12:34:06: SCSILUN] [0] FRAMEWORK: Checking whether LUN 'eui.93bb4a37348e4100' is available or not.
    [July 27, 2012 12:34:06: TRANSPORT] [0] FRAMEWORK: Running cmd 'esxcli storage core device list -d eui.93bb4a37348e4100' in blocking mode on host 'esxia.amiads.com'...
    [July 27, 2012 12:34:06: STAFBASE] [0] FRAMEWORK: Executing STAF command: staf esxia.amiads.com PROCESS SAMECONSOLE RETURNSTDOUT RETURNSTDERR WORKDIR SHELL START /WAIT COMMAND esxcli storage core device list -d eui.93bb4a37348e4100
    [July 27, 2012 12:34:07: STAFPROCES] [0] FRAMEWORK: Host esxia.amiads.com returned eui.93bb4a37348e4100

    Display Name: * iSCSI disk (eui.93bb4a37348e4100)
    Has Settable Display Name: true
    Size: 2048
    Device Type: Direct-Access
    Multipath Plugin: NMP
    Devfs Path: /vmfs/devices/disks/eui.93bb4a37348e4100
    Vendor: * I
    Model: *.
    Revision: 2 0 s
    SCSI Level: 4
    Is Pseudo: false
    Status: dead
    Is RDM Capable: true
    Is Local: false
    Is Removable: false
    Is SSD: false
    Is Offline: false
    Is Perennially Reserved: false
    Thin Provisioning Status: yes
    Attached Filters:
    VAAI Status: unknown
    Other UIDs: vml.01000000003933626234613337333438653431303053746f725472

    [July 27, 2012 12:34:07: MULTITECH] [0] FRAMEWORK: Called VTAF::TestLib::Sphere::Storage::Lib::CLI::ScsiLun::IsLunAvailable (Password='password@123' Username='root' HostName='esxia.amiads.com' LunName='eui.93bb4a37348e4100') returned '1'
    [July 27, 2012 12:34:07: TRANSPORT] [0] FRAMEWORK: Running cmd 'esxcfg-mpath -b -d eui.93bb4a37348e4100' in blocking mode on host 'esxia.amiads.com'...
    [July 27, 2012 12:34:07: STAFBASE] [0] FRAMEWORK: Executing STAF command: staf esxia.amiads.com PROCESS SAMECONSOLE RETURNSTDOUT RETURNSTDERR WORKDIR SHELL START /WAIT COMMAND esxcfg-mpath -b -d eui.93bb4a37348e4100
    [July 27, 2012 12:34:08: STAFPROCES] [0] FRAMEWORK: Host esxia.amiads.com returned eui.93bb4a37348e4100: * iSCSI disk (eui.93bb4a37348e4100)
    vmhba34:C1:T65:L0 LUN:0 state:dead iscsi Adapter: Unavailable Target: Unavailable
    vmhba34:C0:T65:L0 LUN:0 state:dead iscsi Adapter: Unavailable Target: Unavailable
    [July 27, 2012 12:34:08: HOTADDREMO] [0] INFO: eui.93bb4a37348e4100 is dead due to the LUN remove test

So basically we had the Hot Add/Remove LUN issue. It is a problem between the ESX build we use and the Workbench scripts. If we had used the ESX GA build, this would have worked. ESX changed the behavior of the command in 5.5 GA to a new approach and then back to the old approach again in 5.5 Update 1.

    The build we use is two releases newer than GA, so the behavior of this command differs between the two versions.

ESXi 5.5 Patch 2           2014-07-01   build 1892794   N/A

    ESXi 5.5 Express Patch 4   2014-06-11   build 1881737   N/A

    ESXi 5.5 Update 1a         2014-04-19   build 1746018   N/A

    ESXi 5.5 Express Patch 3   2014-04-19   build 1746974   N/A

    ESXi 5.5 Update 1          2014-03-11   build 1623387   N/A

    ESXi 5.5 Patch 1           2013-12-22   build 1474528   N/A

    ESXi 5.5 GA                2013-09-22   build 1331820   N/A


This is the behavior on ESX 5.5 GA:

    ~ # vmware -v
    VMware ESXi 5.5.0 build-1331820

    Target added:

    ~ # esxcfg-mpath -b -d eui.3db57bdc252c0200
    eui.3db57bdc252c0200 : AMI iSCSI disk (eui.3db57bdc252c0200)
       vmhba34:C0:T0:L0 LUN:0 state:active iscsi Adapter: iqn.1998-01.com.vmware:5213e0fa-31de-329a-5885-002590135b9e-17c7a47e Target: IQN=iqn.1991-10.com.ami:itx002590135d84e923:l.v10 Alias= Session=00023d000002 PortalTag=5
    ~ # echo $?
    0

    Target removed:

    ~ # esxcfg-mpath -b -d eui.3db57bdc252c0200
    Device eui.3db57bdc252c0200 unknown
    ~ # echo $?
    1

Now on the ESX 5.5 Update 1 build we use:

    ~ # vmware -v
    VMware ESXi 5.5.0 build-1623387

    Target added:

    ~ # esxcfg-mpath -b -d eui.5b5fbb54c4d80200
    eui.5b5fbb54c4d80200 : AMI iSCSI disk (eui.5b5fbb54c4d80200)
       vmhba38:C1:T0:L0 LUN:0 state:active iscsi Adapter: iqn.1998-01.com.vmware:5405e3db-f95e-9f9c-e99b-0025900cab82-01a17ffb Target: IQN=iqn.1991-10.com.ami:itx00259014329a26b:l.sharontest Alias= Session=00023d000002 PortalTag=9
    ~ # echo $?
    0

    Target removed:

    ~ # esxcfg-mpath -b -d eui.5b5fbb54c4d80200
    eui.5b5fbb54c4d80200 : AMI iSCSI disk (eui.5b5fbb54c4d80200)
       vmhba38:C1:T0:L0 LUN:0 state:dead iscsi Adapter: Unavailable Target: Unavailable
    ~ # echo $?
    0

  • Multiple iSCSI targets and failed connections

Last week I was preparing my software iSCSI initiator to connect to multiple iSCSI targets.  I ended up not doing it, and I thought I had removed the targets in the GUI.  Looking at the GUI now, I only see my main production target, but in my vmkernel log it is constantly trying to connect to the targets I had planned to connect the box to.  Does anyone know a way to look at the console and see where they might still be defined? Naturally, they are not displayed in the GUI.

    See part 3 of this post, "The case of stale iSCSI LUNs":

    http://apps.sourceforge.NET/MediaWiki/iscsitarget/index.php?title=The_case_of_stale_iSCSI_LUNs

  • VMware as an iSCSI target

    Hello

    This might be a newbie question, but I looked in the documentation and online and I could not find a definitive answer.

I want to configure my VMware machine as an iSCSI target, so that I can present additional storage to other machines in VMware.

    The hard drives are connected via SCSI on the local machine and not on a dedicated storage machine.

    Is this possible?

    Thanks for any response,

    Dorian

Welcome to the community. Yes, you can load an open-source package like FreeNAS in a VM and present it as an iSCSI target - but I would not then point other virtual machines running on the same host at it for their storage, because if the host fails, your storage will not be available.

    I am also moving this question to a more appropriate forum.

  • Need urgent help: loss of storage after rename of iSCSI target

Hello guys,

    I need help. I had a problem with one of my iSCSI targets, which holds all of my virtual machines. The iSCSI target is on an Iomega NAS storage device, and suddenly the storage was no longer available. I tried renaming the iSCSI target from 'VmwareStorage' to 'VmwareStorage1', but that didn't help (it had previously been visible to the vSphere servers); the partition now shows up as a DOS partition. Please help me recover it in vSphere without losing any data or the virtual machines inside. Note that I use vSphere 5.5; see the attached picture:

    The selected one shows as '16-bit DOS >= 32M'; it should be VMFS like all the other datastores. I don't want to lose my VMs - if the company stops, I'm fired.

    vmwarepartition.jpg

I fixed it... I followed this VMware KB:

    http://KB.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalID=2046610

Thank you VMware, and thank you Linux!

  • iSCSI LUN not visible on ESXi hosts

So I have a bit of a strange one, and I'm hoping the community can help me understand it. In my home lab I have 3 ESXi 5.1 Build 799733 hosts; 2 of the hosts are the same hardware (Intel NUC) and the other host is an old HP desktop. I am running Nexenta CE 3.1.3.5 on an HP MicroServer, which has been providing iSCSI storage to the HP desktop host. On the 2 new Intel NUC hosts, however, I am unable to get any iSCSI storage visible. The software iSCSI adapter is enabled, I can vmkping the storage from the host, and the target shows up on the static discovery tab. The configuration appears to be identical between the two, although the hardware is different. I rescan on the host and no devices appear. Now the obvious answer is that there must be a security issue on the Nexenta side... but all 3 hosts are part of the same initiator group. I have confirmed and copied the IQN name over and over again. I still think this may be a Nexenta issue, but maybe it's something else.

    Checking the vmkernel.log file shows the following after a rescan...

2013-06-09T23:49:06.160Z cpu3:2933)VC: 1547: Device rescan time 17 msec (total number of devices 5)

    2013-06-09T23:49:06.160Z cpu3:2933)VC: 1550: Filesystem probe time 85 msec (devices probed 4 of 5)

    2013-06-09T23:49:06.412Z cpu3:2931)Vol3: 692: Couldn't read volume header: Not supported

    2013-06-09T23:49:06.412Z cpu3:2931)Vol3: 692: Couldn't read volume header: Not supported

    2013-06-09T23:49:06.412Z cpu3:2931)FSS: 4972: No FS driver claimed device 'control': Not supported

    Anyone got any ideas?

    Problem solved.

It appears it was an iSCSI port binding problem. There is only a single NIC in each of these hosts, and enabling the vmkernel port binding on the iSCSI initiator under 'Network Configuration' resulted in the iSCSI target devices not being visible. I removed the vmkernel port binding and did a rescan. The devices then showed up.
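    As a sketch, that fix can also be done from the host shell (the adapter and vmknic names below are placeholders):

    ```shell
    # Show the current port bindings on the software iSCSI adapter
    esxcli iscsi networkportal list

    # Remove the vmkernel port binding, then rescan (ESXi 5.x)
    esxcli iscsi networkportal remove --adapter=vmhba33 --nic=vmk1
    esxcli storage core adapter rescan --adapter=vmhba33
    ```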

  • iSCSI LUN/datastore configuration question

Hi all!  I am brand new to VMware and iSCSI, and I have a question about datastores mounted on an iSCSI target.

    Here's my setup.  I have a Dell PowerEdge R810 connected over a dedicated 10 Gb link to a Dell PowerEdge 2950 that I use as a SAN.  The R810 has no hard drives in it; it boots ESXi from a redundant SD card configuration integrated into the server.  My SAN software is Server 2008 Enterprise with Microsoft iSCSI Software Target 3.3 installed.  I created target 1, LUN 0, an 800 GB .vhd file sitting on an NTFS partition, and set up a datastore on it in vCenter.  The datastore is the size of the entire 800 GB iSCSI target.  I have my first virtual server up, taking 50 GB of space from the datastore.

    Here is my question: can I create several virtual machines using the datastore I put in place?  I know it sounds like a stupid question, but my concern comes from some reading I did on multiple computers accessing a single LUN.  If I set up another virtual machine, will it blow up my SAN file system, since two virtual machines would be accessing the same big VHD file on the NTFS partition, or am I way off base?

    Bonus points question:

    I have another R810 that I will be bringing online soon.  Assuming I am OK to add more virtual machines beyond my first machine, could I connect this box to the datastore as well and create virtual machines, or should I set up a separate datastore on another iSCSI target?

    Thanks for your patience with a new guy.  I searched and read for hours, and I may have read too much, so I thought I'd just ask the experts.

    Hello and welcome to the forums.

    Can I create several virtual machines using the data store that I put in place?

    Yes.

If I set up another virtual machine, will it blow up my SAN file system, since two virtual machines would be accessing the same big VHD file on the NTFS partition, or am I way off base?

No, you should be fine.  One thing to keep in mind is that the Windows/MS iSCSI target is not a supported storage solution.  There could be scalability problems with the Windows/MS iSCSI target, but you'll be fine from a VMware perspective.

I have another R810 that I will be bringing online soon.  Assuming I am OK to add more virtual machines beyond my first machine, could I connect this box to the datastore as well and create virtual machines, or should I set up a separate datastore on another iSCSI target?

You could, but I personally would set up a separate datastore on another iSCSI target.

    Good luck!

  • iSCSI target

Can we use a USB HDD as an iSCSI target?

    Hello Manu,

1. Which Windows operating system are you using on the computer?

    2. Are you connected to a domain?

    Please provide more information so we can best help you.

    Suggestions for a question on the help forums

    http://support.Microsoft.com/kb/555375

  • 'Incorrect function' during the initialization of an iSCSI target

I am having problems trying to create a 3.7 TB partition on Windows Server 2008 R2.

    I get the following error message when I try to initialize an iscsi target:

What am I doing wrong?  The target is not part of a cluster.

    I tried initializing the disk with both MBR and GPT partition styles, and both do the same thing.

Have you installed the software on the server (at least the host software) and established iSCSI connections to both RAID controllers? It looks like you may be connecting to one controller while the virtual disk is owned by the other controller.

  • How to change the name of a volume iSCSI target?

How can the name of an iSCSI target volume be changed?  I created a volume with a typo in one letter.   The volume has production data on it now, so I can't just delete and recreate it.   It's not a big deal, but it doesn't look good either.

    It's on a PS6100.   When I go into the properties of the volume, on the Advanced tab I can see the iSCSI name, but I cannot change it.  All I can change is the public alias, which does no good since the name appears in vCenter with the misspelled name.

    So how can I change the iSCSI name?

    Or am I stuck creating a new volume and telling them to migrate their data so the spelling can be fixed?

    Cheers

    Part of the iSCSI specification is that target names must be unique.  Therefore, once created, you cannot change the name of the volume.

    Kind regards

  • Cannot see the iSCSI target in the software iSCSI adapter details page under Configuration

    Detailed description:

After adding the IP address of the iSCSI target on the dynamic discovery tab, the iSCSI target names can be seen under static discovery. But the iSCSI target is not seen in the software iSCSI adapter's details page on the Configuration tab.

    This iSCSI target is already mounted as a VMFS5 datastore, with some VMs, on another ESX host that is part of a different ESX cluster in the same data center.

    I'm thinking it's my network configuration.

Are your VMkernel ports configured correctly?

    Are you using VLANs? If so, check whether your VLAN config is correct.
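    A few quick host-side checks for that, as a sketch (the portal IP is a placeholder):

    ```shell
    # Are the VMkernel ports there, and what VLANs do the port groups carry?
    esxcli network ip interface list
    esxcli network vswitch standard portgroup list

    # Can the host actually reach the target portal over the vmkernel stack?
    vmkping 192.168.1.10
    ```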
