ESXi 4.1 iSCSI Shared Datastore

I'm running ESXi 4.1 on three servers, with Windows Storage Server and its iSCSI Software Target providing the storage for my datastores. I've been using dedicated iSCSI LUNs for each ESXi server and it has been working great (MPIO and RR are awesome too). I am planning for disaster scenarios, and I wonder whether there is anything wrong with using an iSCSI datastore shared between servers. I do not have vMotion, so I can't migrate directly from one host to another, but I set up a test datastore and presented it to all three ESXi servers. I moved a virtual machine onto the shared datastore and I am able to add it to the inventory on all three servers. I can also start the virtual machine on any host, but its status (Powered On / Off / Suspended) appears only on the host I started it on. Do you see any problems with this configuration? I would like to share my existing datastores with the other hosts, so I can add all virtual machines to the inventory on all servers. Then I can implement a poor man's vMotion and start the virtual machines from any host.

Thanks in advance!

Hello

There is absolutely no problem sharing an iSCSI LUN between several ESX hosts. In fact, that is how clusters are built.

You do not appear to have vCenter. I'm not sure it's a good idea to have the same virtual machines registered on several isolated hosts at the same time. I'd register each virtual machine on a single host only.

If you want to move a virtual machine, stop it, remove it from the host's inventory, register it on the new host and start it.

In case of a host crash, register the failed host's VMs on the remaining hosts and start them.
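
If you want to script this "poor man's vMotion" from the ESXi Tech Support Mode shell, something along these lines should work; a rough sketch only, with the VM IDs, datastore and VM names as placeholders:

# on the source host: find the VM's ID, power the VM off, then unregister it
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.off <vmid>
vim-cmd vmsvc/unregister <vmid>

# on the destination host: register the .vmx from the shared datastore and power it on
vim-cmd solo/registervm /vmfs/volumes/<shared-datastore>/<vm>/<vm>.vmx
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.on <new-vmid>

If the VM asks whether it was moved or copied on first power-on, you should be able to answer that from the same shell with vim-cmd vmsvc/message <vmid>.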

Good luck.

Franck

Tags: VMware

Similar Questions

  • Nested ESXi 5.1 and iSCSI Shared Storage

    People:

    I am unable to get my nested ESXi servers to see the iSCSI shared storage that I set up for them.  I use iSCSI for the ESXi host that holds the ESXi guests, so I have a working iSCSI configuration to use as my reference.

    Here's what I have for the host network config:

    iSCSI targets

    + IP1: 172.16.16.7 (host IP-storage subnet)

    + IP2: 172.16.17.7 (guest IP-storage subnet)

    vSwitch associated with vmnic2, the host's IP-storage NIC

    + "IP storage" port group containing the virtual NICs of both nested ESXi hosts

    + VMkernel port for the host's iSCSI connections: 172.16.16.28

    Here's what I have for the guest network config:

    + Virtual NIC on the "IP storage" port group above

    + vSwitch with only a VMkernel port for the guest's iSCSI connections: 172.16.17.38

    From the iSCSI target host, I am able to ping 172.16.16.28.  However, I am unable to ping 172.16.17.38, and here's the really confusing part - I am able to get an ARP answer from that VMkernel port with the correct NIC MAC!  This eliminates all kinds of potential configuration issues (e.g. wrong NIC, wrong IP, etc.)

    The firewall on the host shows the software iSCSI client's outgoing port 3260 as open.  A packet capture on the iSCSI target host reveals NO traffic from the guest's VMkernel IP when I rescan the storage adapters.

    What should I look at?  The guest config looks identical to the host config, yet one works and the other doesn't...

    -Steve

    In the process of debugging, I turned on promiscuous mode on the vSwitch associated with vmnic2 (the host's IP-storage NIC) and poof!  Everything magically started working.  iSCSI traffic should be unicast, so I don't see why promiscuous mode would be necessary, but I can't argue with the observed results.  Clearly, I have more to learn about nested ESX, which is why I'm playing with it.  :-)
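
    For reference, the same change can be made from the ESXi shell; a minimal sketch, assuming the IP-storage vSwitch is named vSwitch1 (check the name first):

    # show the current security policy of the vSwitch that carries vmnic2
    esxcli network vswitch standard policy security get -v vSwitch1
    # allow promiscuous mode on it (the same thing the vSphere Client change does)
    esxcli network vswitch standard policy security set -v vSwitch1 --allow-promiscuous=true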

    -Steve

  • ESXi 5.1 iSCSI issues --> cannot establish a network connection (continued)

    Hello world

    I had a problem not too long ago (thread link below) where my ESXi hosts could not establish a connection with an IBM DS3500 on start-up and on rescan, yet their connectivity worked correctly without any problem after boot and once a rescan had finished. We determined that the most likely cause was the IBM unit itself. However, I wasn't completely satisfied.

    In order to dig deeper into the question and rule out my hosts being the problem, I created an iSCSI setup myself with two new ESXi 5.1 hosts (build 1157734).

    ESXi reports that it cannot connect to the iSCSI target (network error) - yet it can run virtual machines and browse datastores?

    The configuration is the following:

    Test environment:

    2 x ESXi 1157734 hosts

    1 x FreeNAS iSCSI server (ZFS R10)

    When my ESXi hosts in the test environment start up, I get the same error as in our production environment. They also throw the same error message (in the event log) when I rescan the software iSCSI adapter. However, once the scan is finished, I don't see any error message and everything continues as it should (round robin, active/active). I get brilliant throughput too. It's as if the errors are almost... false positives?

    Example of the error message (test environment):

    Connection to the iSCSI target iqn.2011-03.org.example.istgt:iscsi on vmhba35 @ vmk2 has failed. The iSCSI initiator could not establish a network connection to the target.

    error

    17/10/2013 08:33:35

    ESXi hosts are configured as follows:

    Test environment:

    • (ESXi hosts) Two network cards for iSCSI, one for VM traffic
      • NIC1: 10.20.20.2/24, NIC2: 10.30.30.2/24
    • (FreeNAS) Two network cards installed on the iSCSI device (dedicated to iSCSI traffic)
      • NIC1: 10.20.20.1/24, NIC2: 10.30.30.1/24
    • ESXi is set to round robin, which works properly (I can max out both ports at the same time)
    • Two VMKs are bound to the software iSCSI initiator and set to dynamically discover the two iSCSI addresses
    • The iSCSI unit and the hosts are not on a VLAN; the switch is dedicated to iSCSI traffic (in production the two networks are separated by VLANs, but I removed that in the test environment to simplify everything)
    • The vSwitches (which each hold a single VMK for iSCSI) are set to:
      • Promiscuous mode: reject
      • MAC address changes: accept
      • Forged transmits: accept
      • Traffic shaping: disabled
      • No NIC teaming (obviously)
      • Each VMK inherits these settings

    Quick points:

    • My hosts use VMware-approved NICs in my test environment. My production environment runs on fully VMware-approved hardware
    • I can use vmkping to reach each iSCSI target address from each VMK (see the connectivity sketch after this list)
    • I can use the nc -z target_ip 3260 command to check connectivity to the port
    • The test iSCSI unit has two network adapters, each on 1 Gbps ports
      • I can get around 800 Mbps per port when copying lots of files (both ports are active)
      • I see no errors when copying a large number of virtual machines around, or when backing up files
    • I only see errors on start-up and rescan
    • I have two network adapters on each ESXi host dedicated to iSCSI. Each NIC has a single VMK. One is mapped to iSCSI port A, the other is mapped to port B (I used to have two VMKs per NIC, but I removed that in order to simplify everything)
    • I have updated the firmware on our production switch. No results
    • I've used 4 different switches in order to rule out a network problem. No results
    • I tested ESXi versions 5.0 and 5.1 (799733 & 1157734), all with the same errors
    • The iSCSI ports are on different networks, per VMware guidance (I was told not to put the VMKs on the same network)
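
    For completeness, this is roughly what those connectivity checks look like from the ESXi shell (a sketch; the addresses are the test-lab values above, and on 5.1 you can add -I vmkN to force the source VMkernel port):

    # ping each FreeNAS target interface
    vmkping 10.20.20.1
    vmkping 10.30.30.1
    # confirm the iSCSI TCP port answers on both target addresses
    nc -z 10.20.20.1 3260
    nc -z 10.30.30.1 3260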

    Any ideas? It's starting to drive me up the wall. Any help would be greatly appreciated

    I've seen this several times in recent months and was beginning to think it was specific to the HP P2000 units, because that's the only place I ever saw it! I now believe it's an ESX/iSCSI issue rather than a driver or array issue.

    On my last install, I found that the errors occur as soon as the array is connected, i.e. before any storage is presented.

    Single ESX host, no guests. 2 Intel adapters through 2 dedicated HP switches to a 1 Gb HP P2000 array with all 8 ports wired.

    All paths are good, all lights are in the expected state.

    It has not affected the use of the storage in question at any site, so there has been no need for, or momentum behind, any further investigation.

    Bl**dy annoying though!

  • iSCSI support & LUN sharing

    Hello

    I intend to deploy ESXi 3.5 U4 on a Dell PE2900 with an iSCSI NAS and I have two questions:

    - Is the software iSCSI initiator supported, as with the original ESX?

    - Can I share an iSCSI LUN from the NAS between virtual machines (Win2008 x64) in read/write mode without clustering?

    Thank you in advance for your answers.

    Is the software iSCSI initiator supported, as with the original ESX?

    Yes, on both the vmkernel side (for VMFS datastores) and the VM side (for an iSCSI disk inside a virtual machine).

    Can I share an iSCSI LUN from the NAS between virtual machines (Win2008 x64) in read/write mode without clustering?

    You can share a CIFS/NFS share between two or more VMs.

    But not a disk, because NTFS is not cluster-aware.

    But you can use Microsoft Failover Clustering to 'share' this disk.

    André

  • Nested ESXi hosts and iSCSI POC

    People,

    I want to set up a proof of concept with two hosts and two NAS models for my workplace. At this point, I have set up a nested ESXi host using this vcritical guide - http://www.vcritical.com/2011/07/vmware-vsphere-can-virtualize-itself/

    This is where it got puzzling. I have two virtual ESXi servers inside the main ESXi install, with all three hosts on the 192.168.0.x network. I then added a second network adapter to the 'server', separated out the iSCSI network (192.168.1.x) and pointed it at a simple NAS. So far so good. The problem is that the nested hosts cannot access the iSCSI network.

    It's all a bit above my pay grade, but I can't work out how to get the nested hosts onto the iSCSI network. I tried adding second network cards to the virtual machines running the virtual hosts and allocating them to the 192.168.1.x network.

    Can anyone help shed some light? Does anyone understand my question or what I'm trying to do? I hope so!

    Thanks in advance

    HP

    Hello

    Can your primary host see the iSCSI network and connect to your NAS server?

    It should just be a case of creating a VM port group on that same vSwitch, with appropriate VLAN settings if necessary, and assigning your nested ESXi hosts' vNICs to that port group. On the nested hosts themselves, you will need to set up the software iSCSI initiator etc. as usual.
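
    From the command line on the physical host that would look something like this (a sketch only, assuming ESXi 5.x; the vSwitch name, port group name and VLAN ID are placeholders):

    # create a VM port group for the nested hosts on the vSwitch that carries the iSCSI uplink
    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-nested --vswitch-name=vSwitch1
    # tag it with the iSCSI VLAN if the physical network uses one (omit for untagged)
    esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-nested --vlan-id=10

    Then give each nested host VM a second vNIC on that port group and set up the software iSCSI initiator inside the nested hosts as usual. Nested hosts often also need promiscuous mode enabled on that vSwitch (see the Nested ESXi 5.1 and iSCSI Shared Storage thread above).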

  • ESXi 4 iSCSI issue

    I'm having a problem with ESXi 4 that I can't figure out. I have enabled the iSCSI function and pointed it at the right place. But when I rescan for storage, it brings nothing up. If I click on the "Paths" button, it shows the LUN number correctly, but under status it shows as DEAD with a red diamond. What am I missing here? It's the 60-day trial version of ESXi 4... I'm guessing the trial is licensed to use iSCSI. Anyone have any ideas?

    Thank you

    I have not worked with 7.0.6.  In 7.2, the command to set ALUA is 'igroup set <group name> alua no'. Can you see what 'igroup show -v <group name>' gives?

    The quick alternative would be to delete the ALUA claim rule in ESX, i.e. esxcli nmp satp deleterule --satp VMW_SATP_ALUA --vendor NETAPP --claim-option tpgs_on. I think that's not persistent across reboots.
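
    Before deleting anything, it's worth dumping the current rules so the NETAPP entry can be restored later if needed; a rough sketch with the vSphere 4 CLI:

    # list the existing SATP claim rules (note the VMW_SATP_ALUA / NETAPP line before removing it)
    esxcli nmp satp listrules
    # after a rescan, check which SATP and PSP actually claimed each device
    esxcli nmp device list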

  • Missing ESXi VMFS on iSCSI

    So I searched the forums and a few people seem to have similar problems, but nothing seems to work for me.  We have a FalconStor network storage server doing thin provisioning, mirroring, replication, snapshots, etc.  So to test this new system, we replicate a LUN from our Dell/EMC over to the FalconStor, then break the link (while one of our ESXi servers is connected and VMs are running... gotta put it to the test, right!)... so the plan is to promote the replica disk, present it on an iSCSI target, connect our ESXi box and presto!  But no presto.

    It connects to the iSCSI target OK, and it sees the disk... but does not detect the existing VMFS file system / mount.  I can check it via "fdisk /vmfs/devices/disks/vmhba32xx:xx:xx" and it sees that there is a VMFS on the disk and displays the appropriate size.  I expected to run vmkfstools -r on the disk, but I get:

    Error: vmkfstools failed: vmkernel is not loaded, or call not implemented.

    Anyone have any ideas?

    See pages 114/115 here - http://www.vmware.com/pdf/vi3_35/esx_3/r35u2/vi3_35_25_u2_san_cfg.pdf concerning the EnableResignature / DisallowSnapshotLun settings.  Once you have set the appropriate option for your scenario, you only need to rescan (Configuration - Storage Adapters).
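
    If you go the resignature route, a sketch of the classic ESX 3.5 service console commands (assumed; on ESXi the remote vicfg-advcfg equivalent or the VI client advanced settings do the same thing):

    # allow the replica LUN to be resignatured and mounted
    esxcfg-advcfg -s 1 /LVM/EnableResignature
    # rescan the iSCSI adapter so the copy shows up with its new signature
    esxcfg-rescan vmhba32
    # turn it back off so future replicas are not resignatured unexpectedly
    esxcfg-advcfg -s 0 /LVM/EnableResignature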

  • ESXi 5.5, VMFS 5 datastore / vmdk size question

    Trying to check that I understand the vmdk size limitation for ESXi 5.5.

    If I have an existing VMFS 5 datastore on my ESXi 5.0 hosts, once I upgrade my hosts to ESXi 5.5 would I then be able to have VMs with VMDKs greater than 2 TB?

    So I don't have to create new VMFS 5 datastores after the upgrade to ESXi 5.5 to take advantage of the change in the file size limit?

    Thanks for any help.

    That's right, at least in part. In addition to upgrading the host, you must also upgrade the hardware version of the virtual machine (compatibility mode) to take advantage of larger virtual disks (up to ~62 TB). However, always keep in mind the time needed to restore the whole virtual disk should that become necessary. With 1 Gbps - assuming you have the full bandwidth available - 2 TB will already take more than 5 hours! So splitting the data across several smaller virtual disks may be a better solution.
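
    As a quick sanity check of that estimate (plain shell arithmetic; ~115 MB/s is an assumed effective rate for a saturated 1 Gbps link):

    # 2 TB expressed in MB, divided by an assumed ~115 MB/s effective throughput
    echo $(( 2 * 1024 * 1024 / 115 ))          # ~18236 seconds
    echo $(( 2 * 1024 * 1024 / 115 / 3600 ))   # ~5 hours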

    André

  • Max size for NFS and VMFS (iSCSI, FCP) datastores on vSphere 4.1 and 5.0

    Hello

    What is the maximum size for NFS and VMFS (iSCSI and FCP) datastores created on vSphere 4.0, 4.1 and 5.0?

    Thank you

    Tony

    Hi Tony,

    You should find the answers in the various Configuration Maximums documents:

    Good luck.

    Regards

    Franck

  • ESX 3.5 / vSphere 4.1 shared datastore

    I am in the final stages of planning my upgrade from 3.5 to 4.1.  My only outstanding question at the moment is: is it safe to have my current 3.5 servers and my new 4.1 server pointed at the same SAN LUNs?  Everything I read leads me to believe it should be fine, since the VMFS format is the same between versions, but I'd rather be safe than sorry.

    Thank you.

    Should be fine, since the two use the same version of VMFS. The only thing you might want to keep in mind is the virtual hardware version (4 vs. 7). Once you upgrade a VM to version 7, you will no longer be able to run that VM on the 3.5 hosts.

    http://www.no-x.org

  • Cannot access the Synology iSCSI datastore on a nested host

    I installed a Synology DS414 running version 5-2-5592, with 2 volumes set up as 2 iSCSI targets. I'm running 2 ESXi 6 hosts, one physical and one nested. On the physical host I can access the target immediately without any problem; on the nested host, when I use dynamic discovery, I can see the target via the static mapping, but the LUNs never appear and I can't add them as datastores.

    I set the target to allow multiple sessions on the Synology, and I can ping the Synology from the nested host's vmk. I added the nested host as an allowed initiator on the Synology, although I didn't need to do that for the first host, since the default is set to read/write and there is no authentication set on the Synology.

    I have attached a few screenshots

    Any help would be greatly appreciated.

    iscsi targets.PNG, Capture.PNG, synology.PNG

    Have you activated promiscuous mode on your physical host's vSwitch? See this other thread: Nested ESXi 5.1 and iSCSI Shared Storage

  • Poor ESXi 4 NFS Datastore Performance with various NAS systems

    Hello!

    In testing, I found that I get between half and a quarter of the I/O performance inside a guest when ESXi 4 connects to the datastore via NFS, compared to the guest connecting to the exact same NFS share directly.  However, I don't see this effect if the datastore uses iSCSI or local storage.  This has been reproduced with different systems running ESXi 4 and different NAS systems.

    My test is very simple.  I created a bare minimal CentOS 5.4 installation (fully updated as of 07/04/2010) with VMware Tools loaded, and I time the creation of a 256 MB file using dd.  I create the file either on the root partition (a VMDK stored on the various datastores) or in a directory of the NAS mounted via NFS directly inside the guest.

    My core test configuration consists of a single test PC (Intel 3.0 GHz Core 2 Duo E8400 CPU with a single Intel 82567LM-3 Gigabit NIC and 4 GB RAM) running ESXi 4, connected to an HP ProCurve 1810-24G switch, which is connected to a VIA EPIA M700 NAS system running OpenFiler 2.3 with two 1.5 TB 7200 RPM SATA disks configured as software RAID 1 and dual Gigabit Ethernet NICs.  However, I have reproduced it with different ESXi PCs and NAS systems.

    Here is the output of one of the tests.  In this case, the VMDK is on a datastore stored on the NAS via NFS:

    -


    root@iridium / # sync; sync; sync; time { dd if=/dev/zero of=test.txt bs=1M count=256; sync; sync; sync; }
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB) copied, 0.524939 seconds, 511 MB/s
    real    0m38.660s
    user    0m0.000s
    sys     0m0.566s
    root@iridium / # mount 172.28.19.16:/mnt/InternalRAID1/shares/VirtualMachines /mnt
    root@iridium / # cd /mnt
    root@iridium mnt # sync; sync; sync; time { dd if=/dev/zero of=test.txt bs=1M count=256; sync; sync; sync; }
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB) copied, 8.69747 seconds, 30.9 MB/s
    real    0m9.060s
    user    0m0.001s
    sys     0m0.659s
    root@iridium mnt #

    -


    The first dd is to a VMDK stored on the NFS-mounted datastore.  The dd finishes almost immediately, but the sync takes nearly 40 seconds!  That's less than 7 MB per second transfer rate: very slow.  Then I mount the exact same NFS share that ESXi is using for the datastore directly inside the guest and repeat the dd.  As you can see, the dd takes longer but the sync takes almost no time (as befits an NFS share with sync active), and the whole process takes less than 10 seconds: four times faster!

    I don't see these results on datastores that are not mounted via NFS.  For example, here is a test on the same guest running from a datastore mounted via iSCSI (using the exact same NAS):

    -


    root@iridium / # sync; sync; sync; time { dd if=/dev/zero of=test.txt bs=1M count=256; sync; sync; sync; }
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB) copied, 1.6913 seconds, 159 MB/s
    real    0m7.745s
    user    0m0.000s
    sys     0m1.043s

    root@iridium / # mount 172.28.19.16:/mnt/InternalRAID1/shares/VirtualMachines /mnt
    root@iridium / # cd /mnt
    root@iridium mnt # sync; sync; sync; time { dd if=/dev/zero of=test.txt bs=1M count=256; sync; sync; sync; }
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB) copied, 8.66534 seconds, 31.0 MB/s
    real    0m9.081s
    user    0m0.001s
    sys     0m0.794s
    root@iridium mnt #

    -


    And the same guest running from the internal SATA drive of the ESXi PC:

    -


    root@iridium / # sync; sync; sync; time { dd if=/dev/zero of=test.txt bs=1M count=256; sync; sync; sync; }
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB) copied, 6.77451 seconds, 39.6 MB/s
    real    0m7.631s
    user    0m0.002s
    sys     0m0.751s
    root@iridium / # mount 172.28.19.16:/mnt/InternalRAID1/shares/VirtualMachines /mnt
    root@iridium / # cd /mnt
    root@iridium mnt # sync; sync; sync; time { dd if=/dev/zero of=test.txt bs=1M count=256; sync; sync; sync; }
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB) copied, 8.90374 seconds, 30.1 MB/s
    real    0m9.208s
    user    0m0.001s
    sys     0m0.329s
    root@iridium mnt #

    -


    As you can see, the direct-NFS-from-the-guest results are very consistent across all three tests.  The iSCSI and local-disk datastore numbers are both a bit better than that - as I would expect.  But the NFS-mounted datastore gets only a fraction of the performance of any of them.  Obviously, something is wrong.

    I was able to reproduce this effect with an Iomega Ix4-200d as well.  The difference is not as dramatic, but still significant and consistent.  Here is a test of the CentOS guest using a VMDK stored on a datastore provided by an Ix4-200d via NFS:

    -

    root@palladium / # sync; sync; sync; time { dd if=/dev/zero of=test.txt bs=1M count=256; sync; sync; sync; }
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB) copied, 11.1253 seconds, 24.1 MB/s
    real    0m18.350s
    user    0m0.006s
    sys     0m2.687s
    root@palladium / # mount 172.20.19.1:/nfs/VirtualMachines /mnt
    root@palladium / # cd /mnt
    root@palladium mnt # sync; sync; sync; time { dd if=/dev/zero of=test.txt bs=1M count=256; sync; sync; sync; }
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB) copied, 9.91849 seconds, 27.1 MB/s
    real    0m10.088s
    user    0m0.002s
    sys     0m2.147s
    root@palladium mnt #

    -


    Once more, the direct NFS mount gives very consistent results.  But using the disk provided by ESXi from the NFS-mounted datastore still gives worse results.  They are not as terrible as the OpenFiler test results, but they are consistently between 60% and 100% longer.

    Why is this?  From what I've read, NFS performance is supposed to be within a few percent of iSCSI performance, and yet I see between 60% and 400% worse performance.  And this isn't a case of the NAS not being able to provide decent NFS performance.  When I connect to the NAS via NFS directly inside the guest, I see much better performance (by the same proportion!) than when ESXi connects to the same NAS via NFS.

    The ESXi configuration (network and network cards) is 100% stock.  There are no VLANs in place, etc., and the ESXi system has only one single Gigabit adapter.  It is certainly not optimal, but that doesn't seem to me to be able to explain why a virtualized guest gets much better NFS performance than ESXi itself to the same NAS.  After all, they both use the exact same suboptimal network configuration...

    Thank you very much for your help.  I would be grateful for any ideas or advice you might be able to give me.

    Hi all

    It is very definitely an O_SYNC performance problem. It is well known that VMware NFS datastores always use O_SYNC for writes, no matter what the share is set to by default. VMware also uses a custom file-locking system, so you really can't compare it to a normal NFS client connecting to the same NFS share.

    I have validated that the performance will be good if you have an SSD cache or a storage target with a sufficiently reliable battery-backed write cache.

    http://blog.laspina.ca/ubiquitous/running-ZFS-over-NFS-as-a-VMware-store
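
    You can roughly reproduce the datastore behaviour from inside the guest by forcing synchronous writes on the direct NFS mount; a sketch, assuming a GNU dd new enough to support oflag:

    # buffered write - what the direct-NFS tests above measured
    dd if=/dev/zero of=/mnt/test.txt bs=1M count=256
    # synchronous write - closer to the O_SYNC behaviour of the ESXi NFS client
    dd if=/dev/zero of=/mnt/test-sync.txt bs=1M count=256 oflag=sync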

    Kind regards

    Mike

    vExpert 2009

  • Upgrading from ESXi 4.1 with software iSCSI storage to ESXi 5.5

    I intend to upgrade my ESXi 4.1 hosts, which have software iSCSI storage attached, to ESXi 5.5.

    I've already updated all my hosts that were on SAN storage with no problems.

    I would like to know if there is anything I should take care of before I upgrade the iSCSI-connected host to ESXi 5.5; for the hosts with SAN-attached storage I made sure to remove the SAN cables before upgrading the host.

    Also, if there are any known problems to watch for when upgrading from ESXi 4.1 to ESXi 5.5 with software iSCSI storage, please let me know.

    Is there a different upgrade process? In ESXi 4.1 I do not see the binding of iSCSI VMkernel ports to the software iSCSI adapter, but I know that in ESXi 5.5 we have to do that. Or do I just run a standard upgrade procedure via Update Manager and everything will be taken care of?

    Thanks in advance

    With ESXi prior to version 5, port binding had to be done via the command line. However, if it was configured correctly, you should be able to upgrade to ESXi 5.5 without issues (assuming the host has not previously been upgraded all the way from version 3). BTW, the last time I disconnected a host from its storage for an upgrade was with ESX 3.0.
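
    If you want to verify the binding before and after the upgrade, a rough sketch (the adapter and vmk names are examples only; the 4.x and 5.x command namespaces differ):

    # on ESXi 4.1: list the VMkernel NICs bound to the software iSCSI adapter
    esxcli swiscsi nic list -d vmhba33
    # on ESXi 5.5 after the upgrade: the equivalent check
    esxcli iscsi networkportal list --adapter=vmhba33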

    André

  • How to add the software iSCSI adapter in vSphere, ESXi 5.0?

    Hello

    Please give me the steps to add the software iSCSI adapter in vSphere, ESXi 5.0.

    Thank you.

    Hello

    In earlier versions of VMware ESXi the software iSCSI adapter was included in the list of storage adapters, but that is not the case with VMware ESXi 5.0. In ESXi 5.0 the iSCSI adapter is not listed by default and must be activated first before you can go on to configure it.
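
    If you prefer the command line over the vSphere Client, a minimal sketch from the ESXi 5.0 shell:

    # enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true
    # confirm it is enabled and note the vmhba it creates
    esxcli iscsi software get
    esxcli iscsi adapter list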

    For how to add the iSCSI adapter in ESXi 5, check the links below:

    http://www.google.co.in/#sclient=psy&hl=en&source=hp&q=How+to+add+Software+iSCSI+Adaptor+in+vSphere+ESXi+5.0+Site%3A+blog.srinfotec.com&pbx=1&oq=How+to+add+Software+iSCSI+Adaptor+in+vSphere+ESXi+5.0+Site:+blog.srinfotec.com&aq=f&aqi=&aql=&gs_sm=e&gs_upl=9181l21561l0l21875l33l30l4l0l0l0l736l10898l3-8.7.7.1l23l0&bav=on.2,or.r_gc.r_pw.&fp=1ad7a99e02298f12&biw=1366&bih=647

    or

    http://blog.srinfotec.com/?p=178

    Regards

    Rohit

  • VMware ESXi 5.5 - vMotion & HA support with RDM, physical or virtual

    Hello

    Hope someone can shed some light on the questions below:

    1. Can VMware ESXi 5.5 support HA and vMotion with vmdk files spread across multiple VMFS 3/5 datastores? Will I have problems with vMotion or HA?

    2. Can VMware ESXi 5.5 support HA and vMotion (not Storage vMotion) with the VM scenario below:

    Three nodes with iSCSI SAN storage

    VM1
    - Drive C -> VMFS Datastore 1
    - Drive D -> RDM (is this supported in physical or virtual compatibility mode?)

    VM2
    - Drive C -> VMFS Datastore 1
    - Drive D -> VMFS Datastore 2


    I've seen the links below; neither mentions whether physical or virtual mode is needed:
    http://KB.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005241
    https://pubs.VMware.com/vSphere-55/index.jsp?topic=%2Fcom.VMware.vSphere.storage.doc%2FGUID-D9B143D8-9F93-41D1-A32F-9FF4DE4CDF14.html

    3. Can multiple ESXi 5.5 hosts access the same VMFS datastore (located on a SAN) using the free version of ESXi 5.5?
    Can I use this in a production environment? I have seen some companies test this in a non-production environment. Technically, it works.

    Thank you
    Paul

    Welcome to the community-

    (1) As long as the HA/DRS cluster hosts see the same datastores, there will be no problem.

    (2) Once again, as long as the nodes can see the datastores, including the LUN backing the RDM, it should be without issue.

    (3) Yes, multiple instances of the free version of ESXi can access shared LUNs - and yes, it can be used in a production environment, but remember you cannot manage the free hypervisor with vCenter.
