ESX iSCSI & LACP

I have a few questions I wanted to ask so I can learn before we start this project. Currently, we have two ESX servers connected to a single switch. Our virtual machines use different VLANs per customer.

Here is the new configuration I have planned:

  • (1) HP StorageWorks P2000 G3 iSCSI SAN
  • (2) Cisco/Linksys SRW2024P switches (supporting VLANs, LAG, LACP, and jumbo frames)
  • (3) HP ProLiant DL360 G6 servers
  • Each server has 3 network cards
  • (2) SonicWall routers

Exactly how do I make the iSCSI traffic redundant? Using LACP on the switches, would I connect them to each other like this: port 1 on SW1 goes to port 1 on SW2, and the same for port 2?

Then, on the ESX side, I would team two NICs together for iSCSI traffic using the IP hash policy. I would connect NIC #1 to SW1 and NIC #2 to SW2. So, technically, I could unplug NIC #1 and the virtual machines should keep running because of the multiple paths, right?

If I'm off base, please let me know. I want to be able to lose a single switch or a single NIC and continue to operate without a hiccup.

Hello

I've attached a large document to help you implement the P2000 with vSphere.

It is generally recommended to use multipathing (multiple paths) instead of NIC bonding/teaming.
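
If it helps, here is a minimal sketch of what that multipathing setup looks like from the ESXi 5.x command line instead of teaming with IP hash. The names are assumptions for illustration only: vSwitch1 with uplinks vmnic1/vmnic2 (one cabled to each SRW2024P), software iSCSI adapter vmhba33, and an example 10.0.0.0/24 iSCSI subnet.

    # Two port groups / vmkernel ports, one per physical NIC (hypothetical names and IPs)
    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-1 --vswitch-name=vSwitch1
    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-2 --vswitch-name=vSwitch1
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-2
    esxcli network ip interface ipv4 set -i vmk1 -I 10.0.0.11 -N 255.255.255.0 -t static
    esxcli network ip interface ipv4 set -i vmk2 -I 10.0.0.12 -N 255.255.255.0 -t static

    # Pin each port group to exactly one active uplink (no teaming)
    esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 --active-uplinks=vmnic1
    esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 --active-uplinks=vmnic2

    # Bind both vmkernel ports to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

With this in place, pulling a NIC or losing a switch just drops one path and the storage stays up over the other, which is the failover behaviour you described above.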

Tags: VMware

Similar Questions

  • Guest VM iSCSI initiator, or additional virtual hard drives via the ESX iSCSI initiator?

    We'll install Exchange 2010 on a Server 2008 R2 VM... are there disadvantages to adding several virtual hard drives and letting ESX handle the iSCSI traffic?  Or would it be better to use the iSCSI initiator in the guest VM to connect to the SAN (EqualLogic PS4000)?

    If we go down the guest iSCSI initiator road, I'll have to do some network finagling since our iSCSI traffic runs over a separate switch: I'll probably have to add a VM network to the virtual switch and add a second NIC to the guest VM... It would be simpler just to add more virtual hard drives, but I don't know what impact (positive or negative) that would have on performance.

    Thank you.

    Hello

    The difference in performance between a VMDK and an RDM is something that can safely be ignored; there are many benchmarks around that show a VMDK is basically as fast as an RDM disk, and you can avoid all the complexity of managing iSCSI RDMs inside virtual machines by going with VMDKs.

    Basically, IMHO the only reason to use an RDM is if you need to create a Microsoft Cluster inside VMware.

    Ciao,

    Luca.

    --
    Luca Dell'Oca
    @dellock6
    vExpert 2011
  • ESX iSCSI Configuration recommendation

    Hello

    I'm wondering what the best storage adapter configuration is in the following scenario to provide both host NIC and SAN controller redundancy. The SAN has to be directly connected to each host since I don't have a gigabit switch at the moment, which means each point-to-point link is in its own subnet as below.

    Host 1:

    iSCSI_A1: 192.168.1.2/30

    iSCSI_B1: 192.168.1.5/30

    Host 2:

    iSCSI_A2: 192.168.1.9/30

    iSCSI_B2: 192.168.1.13/30

    ISCSI SAN:

    To host 1 (iSCSI_A1): 192.168.1.3/30

    To host 1 (iSCSI_B1): 192.168.1.6/30

    To host 2 (iSCSI_A2): 192.168.1.10/30

    To host 2 (iSCSI_B2): 192.168.1.14/30

    According to the VMware KB article "Considerations for using software iSCSI port binding in ESX/ESXi", I should not use port binding because each link is in a separate broadcast domain. Does this mean that I just have to set up 2 software iSCSI vmkernel ports on each ESX host and use dynamic discovery?

    Thanks for any help.

    Of course, no problem. Make sure you see both paths under Configuration -> Storage Adapters -> select your iSCSI adapter. For example, in this case I have 12 devices and 24 paths, so 2 paths to each device (LUN).

    You can also mark the answers as helpful or correct if you wish.
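
    If it is easier than clicking through the GUI, the same check can also be done from the ESXi 5.x command line; a hedged sketch (output names below are examples):

        # List all paths, and the per-device path count / path selection policy
        esxcli storage core path list
        esxcli storage nmp device list

    Each LUN should show two paths here, one through each point-to-point link.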

  • vSphere 5 and iSCSI LACP

    Hello - this is my first post here, so please be gentle.

    I've been trying to find information about configuring LACP in vSphere/ESXi 5. Let me give a brief overview of my environment and the objectives we're trying to achieve. We use 1 Gb NICs on Brocade VDX 6710 fabric switches.

    We have a small vSphere cluster on ESXi 5 Standard edition. We have 6 physical NICs in each host: 2 are teamed for the VM network, 2 are teamed for vMotion/management, and the last two are vmkernel ports on two separate subnets connected to our Compellent iSCSI SAN using MPIO. This has been our test bench, and we use NIC teaming in ESXi 5 with "Route based on the originating virtual port ID" and no specific switch-side config. We have other (non-ESXi) servers using 802.3ad LACP configured on both the host and the switch that work very well - it gives us better failure protection since we use two switches and plug one link into each switch. We would like to do the same with the ESXi hosts.

    Our upcoming project is to virtualize a larger number of the systems we currently serve. What we want to do is expand our VM usage to include a large (30 - big for us) number of SQL servers. The basic functions of these systems require a decent amount of SAN backend I/O. The physical servers we would be virtualizing would be packed at a density close to 4:1 or up to 8:1 with this conversion. We are concerned that having just the 2 iSCSI NIC MPIO paths will not be sufficient to support the increased I/O load.

    We would like to know whether using LACP on the two iSCSI subnets and bonding 2+ NICs for each connection is viable in ESXi 5 with iSCSI, and what configuration parameters we would need to set to do this.

    In addition, this project would use VMware vSphere 5 Enterprise edition - would DRS or distributed switching introduce other complications or benefits for this configuration?

    Thanks for any helpful input or pointers to already published documents.

    Scott

    I find that LACP/EtherChannel is rarely effective or useful in VMware environments.

    For iSCSI storage, my standard configuration is to use 2 uplinks with iSCSI port binding. Here are screenshots of the configuration.
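
    For reference, a hedged sketch of what that looks like from the ESXi 5.x shell once the two uplinks are bound to the software iSCSI adapter: set the path selection policy to Round Robin so both NICs carry I/O, which usually scales better for the SQL workload described above than LACP would. The device identifier below is made up for illustration.

        # Switch a device to Round Robin and (optionally) rotate paths every I/O
        esxcli storage nmp device set --device=naa.6000eb3xxxxxxxxxxxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
        esxcli storage nmp psp roundrobin deviceconfig set --device=naa.6000eb3xxxxxxxxxxxxxxxxxxxxxxxxx --type=iops --iops=1

    The --iops=1 tweak is optional and array-dependent; check your storage vendor's guidance before using it.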

  • Unable to create large iSCSI volumes on ESX

    Hi all

    I have 2 servers with a fresh ESX 4.1 U1 installation directly connected to a new HP P2000 iSCSI storage array.
    The HP storage has 4 x 1 Gb iSCSI ports.
    They are:
    PortA1: 10.0.0.1
    PortA2: 10.0.0.2
    PortA3: 10.0.0.3
    PortA4: 10.0.0.4


    As you can see from the screenshots storage1 and storage2, I assigned ports A1 and A2 to ESX1,
    and ports A3 and A4 to ESX2.
    The vmkernel IPs on the two servers are 10.0.0.5 and 10.0.0.6.

    As you can see from picture1, the iSCSI connections seem to be OK. What I normally do after that is go to Storage -> Add Storage.
    Image2 is what I see in the Add Storage wizard, and photo3 shows the error I got.

    I think it is related to a capacity limitation, because when I created a LUN of less than 2 TB it worked very well.
    So should I split the 6 TB into 3 so that each piece is less than 2 TB? It seems a silly thing to have to divide it. There should be other options, I guess?

    Thank you.

    This limit is for VMFS.

    NFS is not block-level and doesn't use VMFS, so you can have a larger datastore.

    But VMDKs are still limited to 2 TB max.

    André
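
    If you want to confirm the limits on an existing datastore, a quick hedged check from the CLI (the datastore name is an example):

        # Shows capacity, block size and maximum supported file size for the VMFS volume
        vmkfstools -Ph /vmfs/volumes/datastore1

    On ESX 4.1 a single VMFS-3 extent is limited to just under 2 TB regardless of block size; the block size only changes the maximum size of an individual file (VMDK), which is why the smaller LUN worked while the larger one did not.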

  • iSCSI SAN solution for ESX

    Hello.

    We are looking into getting a SAN iSCSI for ESX.

    On drives, is there a big difference between SATA and SAS, and between 10K and 15K SAS?

    How does iSCSI performance compare to a Fibre Channel SAN?

    Can you guys recommend a good iSCSI solution?

    We've been looking at the Dell PS4000 and it seems to be a good product.

    (On drives, is there a big difference between SATA and SAS, and SAS 10K vs 15K?)

    Yes, 15K SAS drives are faster and a bit more expensive, but worth it!

    (How does iSCSI performance compare to a Fibre Channel SAN?)

    An iSCSI SAN is not as fast as Fibre Channel, but compared to local drives it is a big improvement for your applications, users, etc.

    (Can you guys recommend a good iSCSI solution?)

    Personally, I think Dell EqualLogic arrays are great! The PS6000XV is what I put in place.

  • ESX software iSCSI initiator

    Hey guys,

    I'm having a hard time figuring out whether the software iSCSI initiator for VMware ESX is a piece of installable software that I need to download, or whether it's a set of settings that I need to configure on the vmnic.

    Please help. If I need to download and install it on the ESX host, can you point me to where I can find it?

    We run the free version of ESX 3.5i; I don't see why that would be a problem for installing or loading the iSCSI initiator, or would it?

    Thank you

    RJ

    Under Storage Adapters in the configuration section, you should see an iSCSI Software Adapter.  Click it and then click Properties.  That should point you in the right direction.  Under the Security Profile, you also need to enable the iSCSI client; otherwise it will be blocked by the firewall.

    Let me know if you need more information.

    Charles nights, VCP

    If you have found this or other information useful, please consider awarding points for 'Correct' or 'Helpful'.
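
    On classic ESX 3.5 the same two steps can also be done from the service console; a hedged sketch:

        # Open the firewall for the software iSCSI client, then enable the software initiator
        esxcfg-firewall -e swISCSIClient
        esxcfg-swiscsi -e

    After that, the iSCSI Software Adapter shows up under Storage Adapters and you can add your targets and rescan.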

  • EqualLogic PS6000 + ESX iSCSI storage redundancy: LUN disappears when 1 switch is off!

    Hello everyone, this is my first post here

    OK, so as the title says, I'm working on storage redundancy under ESX. Because network configurations can be quite confusing, I made a quick sketch of our infrastructure with one of my 3 ESX hosts and will try to explain the configuration details as clearly as possible.

    On my ESX host, I configured a new vSwitch with the vmkernel and VM network port groups and 4 NICs attached (NICs 2, 3, 4 and 5). Each NIC is configured as active in the NIC teaming settings (I also tried 2 active / 2 standby), and the failover detection method is beacon probing (I also tried link status).

    On another vSwitch I have the service console with 2 NICs teamed (NICs 1 and 6).

    My 2 switches are connected by a LAG of 4 links, and RSTP is active and working (although I get a lot of broadcast activity as soon as I connect my LAN switch to the SAN switches, but that's another story).

    My SAN is connected with 2 links to switch A and 2 links to switch B for each controller. I created 3 volumes on the SAN, each of which is configured on my 3 ESX hosts. So far, everything is great.

    Now, I pull the power plug on switch A. Switch B takes over and I can still ping the SAN from the LAN switch network, and I can ping my small guest VM, but the virtual machine itself quickly dies. I can't even reboot it; I get an error message saying that the file is not found. So I go to my datastore, and that's when I see that it's empty! I have to run a rescan of the LUNs to regain access to the datastore and start my VM.

    From my point of view, although ESX detects that some links are down when I pull the cord, the iSCSI session just dies on me and will not 'restart'. I've read stuff about multipathing, but the 'Manage Paths' button is greyed out, and I'm not even sure that's what I need.

    So basically, as you can see, I am quite new to all this and completely lost when it comes to redundancy; hell, I didn't even know about spanning tree before yesterday! So if any of you guys can give me some advice on achieving redundancy, I'd be grateful ^^

    Thank you very much!

    I wouldn't put my VM network and my storage on the same vSwitch.  I would separate them onto their own vSwitches and use two NICs for each vSwitch.

    Once you have the traffic separated out, I would make sure a vmkping (not a regular ping) to the storage target goes through each physical NIC.  Remove the cables one at a time to verify that each connection really routes correctly.

    -KjB

    VMware vExpert
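
    A hedged way to run that per-NIC test from the ESX(i) shell, with example vmk names and an example group IP for the PS6000:

        # Force the ping out of each iSCSI vmkernel interface (the -I flag needs a recent ESXi release)
        vmkping -I vmk1 192.168.10.10
        vmkping -I vmk2 192.168.10.10

    If one of these fails while the cable is plugged in, that NIC's path was never really working, which would explain the datastore disappearing when switch A goes down.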


  • Best NAS for ESX iSCSI

    Hi everyone, we have two ESX Server 3.5 hosts with virtual machines running on a QNAP TS-509 Pro and it works well... mhmmh, pretty well really, except that when a drive failed and we replaced the disk, the NAS was very, very slow during the rebuild and we sometimes lost packets... a disaster!

    Well, we are looking for an iSCSI NAS, not too big, but with good performance and a maximum of 4-5 TB of disk space.

    Around this forum we found feedback on Thecus, but I think it's very similar to QNAP (Linux inside).

    Some people suggested the Dell PS5000 iSCSI SAN and HP StorageWorks, but they seem very, very slow!

    Any suggestions for brands and models?

    Kind regards

    Marco Ciacci

    The best NAS solution seems to be NetApp - and the nice thing is that it can do iSCSI as well (is the iSCSI license free?).

    If you decide to use iSCSI, then EqualLogic belongs on your shortlist. The HP iSCSI boxes do not have the best reputation.

    Below that you can also find other boxes - the Dell MD3000i or the IBM DS3xxx series, and then a few smaller vendors like Promise or Infortrend.

    You get what you pay for, I would say - though with some vendors you pay for more than you get. Support and quality are not to be forgotten.

    We run EqualLogic here and are very pleased with them (with the support too) - but they are not cheap and include some features that you may not need.

    Lately I have heard a lot about DataCore products - very interesting when you have, for example, 2 locations connected with GbE or FC.

    Just my 2 cents.

  • iSCSI: migrate an NTFS volume from the MS software initiator to the ESX software initiator

    Hello

    We have an ESX 3.5 host with a virtual machine running MS Small Business Server 2003.

    Inside the virtual machine, the MS software iSCSI initiator connects through a dedicated vNIC to a 1 TB iSCSI LUN (on an Openfiler 2.2 target), formatted NTFS.

    On this LUN there are some MS SQL Server 2005 databases that we use for e-mail archiving with a dedicated application (GFI MailArchiver), plus some WSUS-related files.

    Because our SBS 2003 server is a domain controller and multihoming is not recommended on domain controllers (we are having problems), we want to get rid of the vNIC inside the virtual machine that is dedicated to iSCSI, without disrupting the system.

    Our idea is to make the NTFS-formatted LUN visible to the VM through the ESX software iSCSI initiator.

    Can an RDM help (in physical or virtual compatibility mode?), or is it better to recreate the (big, 1 TB) NTFS volume as a VMDK on a new VMFS volume over iSCSI?

    Could you please suggest how we can accomplish this with a minimum of downtime (possibly step by step)?

    Any help is greatly appreciated

    Thanks in advance

    Guido

    Hello

    Last question: when I change the virtual machine to use a raw device mapping, should I specify physical or virtual compatibility mode?

    Either works. Use virtual if you want to be able to use snapshots and VCB to back up the RDM. Most people generally use physical. You can also switch between virtual and physical later without affecting the data.

    Will it work (preserving the NTFS content) with both settings?

    Yes.

    Best regards
    Edward L. Haletky
    VMware communities user moderator, VMware vExpert 2009
    ====
    Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
    Blue Gears and SearchVMware Pro articles - Top virtualization security links - Virtualization Security Round Table Podcast
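
    For completeness, a hedged sketch of how the two RDM flavours are created from the ESX command line (the device ID and paths below are examples only):

        # Virtual compatibility RDM (snapshots/VCB possible)
        vmkfstools -r /vmfs/devices/disks/naa.60014050000000000000000000000000 /vmfs/volumes/vmfs_ds/sbs2003/archive-rdm.vmdk
        # Physical (pass-through) compatibility RDM
        vmkfstools -z /vmfs/devices/disks/naa.60014050000000000000000000000000 /vmfs/volumes/vmfs_ds/sbs2003/archive-rdm.vmdk

    Either way, the NTFS data on the LUN itself is untouched; only the small mapping file lives on VMFS.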

  • UCS iSCSI Boot Jumbo MTU

    Trying to get jumbo MTU working for iSCSI boot.

    UCS 6248S

    IOM 2208

    VIC 1240

    ESXi 5.1 Update 1

    VNX 5500

    I have a few rack-mount hosts with 10G network cards that work fine with jumbo MTU; I can vmkping the VNX SPs with 'vmkping -d -s 7000 10.1.1.1' and it works.

    In UCS, the operating system boots from the VNX via iSCSI.

    The iSCSI network is a pair of Dell Force10 S4810s.

    UCS FI-A and FI-B each have 2 x 10 Gb uplinks in an LACP port channel to the Dell Force10s.

    I have checked that jumbo MTU is enabled on the Dell side.

    The QoS system class is now set to 9216.

    There is an ESX iSCSI QoS policy mapped to the Gold class.

    The ESX iSCSI policy is mapped to 2 vNIC templates: one on fabric A and one on fabric B.

    If I set the MTU on the vNIC template to 9000, the OS errors at startup with a message about missing boot banks.

    If I set the MTU on the vNIC template to 1500, the host boots fine.

    I set the iSCSI port groups and vSwitch to 9000 in ESX, but I cannot ping the VNX SPs with a packet larger than 1500.

    Attached is a diagram

    Does anyone have any clues as to what I'm missing? Thank you.

    Make sure that you also set the Best Effort system class to 9216 in the UCS QoS system classes.

    The return traffic is probably not sent with a CoS value, so UCS treats it as Best Effort, which by default has an MTU of 1500.

    Joey
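
    For the ESX side of this, a hedged sketch of the jumbo-frame settings and an end-to-end test, using example vSwitch/vmk names:

        # MTU 9000 on the vSwitch and on the iSCSI vmkernel port
        esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
        esxcli network ip interface set --interface-name=vmk2 --mtu=9000
        # 8972-byte payload + ICMP/IP headers = 9000, with the don't-fragment bit set
        vmkping -d -s 8972 10.1.1.1

    If the vmkping at 8972 fails while 1472 works, some hop (UCS QoS class, FI uplink, Force10 port or VNX iSCSI port) is still at a smaller MTU.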

  • NetApp iSCSI best configuration?

    Hello...

    I have 3 ESX 4 servers connected to a NetApp 2020 array with two active/active controllers. Each controller has 2 interfaces (e0a and e0b).
    Reading the documentation, NetApp offers two connection options: use LACP, or use the standard VMware software iSCSI initiator and create 2 or more vmkernel ports for MPIO.

    I opted for the second option, dedicating 2 physical adapters in each server to ESX iSCSI. I created a vSwitch for iSCSI with 2 vmkernel ports.

    vswitch.jpg

    with the other adapter set to unused in each case (the 'Unused adapters' option).

    On the array side, I configured the e0a interface of each controller with the e0a interface of the other controller as its partner, and assigned a second IP address as an alias. So I have e0a on controller A with IP 192.168.1.17 and alias 192.168.1.40; e0b on controller A with IP 192.168.2.17 and alias 192.168.2.40; e0a on controller B with IP 192.168.1.18 and no alias; and e0b on controller B with IP 192.168.2.18 and no alias.

    netapp.jpg

    Now, in the VMware iSCSI initiator configuration, I add only the IP 192.168.1.17 under 'Dynamic discovery', and four IP addresses appear in the 'Static discovery' tab.

    static discovery.jpg

    Once that is added, each datastore I get shows 4 available paths:

    paths.jpg

    Is this a correct configuration? Can it be improved?
    If I add more alias IPs, the number of paths multiplies; the same happens if I add more vmkernel ports to the vSwitch...

    With 3 ESX servers, won't this 'overload' the array's connections?

    Can I see whether packets are being dropped?


    I also have the option of adding 2 more NICs to each ESX server for the iSCSI vSwitch, adding 2 more vmkernel ports per server... good idea or not?

    Thanks...

    well... your configuration should be fine.

  • ESXi 5.1 iSCSI issues --> cannot establish a network connection (continued)

    Hello everyone

    I had a problem not too long ago (thread linked below) where my ESXi hosts could not establish a connection to an IBM DS3500 at startup and during a "rescan"; however, connectivity worked correctly without any problem once the hosts were up and the "rescan" had finished. We determined that the most likely cause was the IBM unit itself. However, I wasn't completely satisfied.

    In order to dig deeper into the question and rule out my hosts being the problem, I built an iSCSI unit myself with two new ESXi 5.1 hosts (build 1157734).

    ESXi reports that it cannot establish a connection to the iSCSI target (network error) - yet it can run virtual machines and browse datastores?

    The configuration is the following:

    Test environment:

    2 x ESXi 1157734 hosts

    1 x FreeNAS iSCSI server (ZFS RAID 10)

    When the ESXi hosts in the test environment boot, they throw the same error as in our production environment. They also log the same error message (in the event log) when I "rescan" the software iSCSI adapter. However, once the scan is finished, I don't see any error messages and everything carries on as it should (round robin, active/active). I get great throughput too. It's as if the errors are almost... false positives?

    Example of the error message (test environment):

    Login to iSCSI target iqn.2011-03.org.example.istgt:iscsi on vmhba35 @ vmk2 has failed. The iSCSI initiator could not establish a network connection to the target.

    error

    17/10/2013 08:33:35

    ESXi hosts are configured as follows:

    Test environment:

    • (ESXi hosts) Two NICs for iSCSI, one for VM traffic
      • NIC1: 10.20.20.2/24, NIC2: 10.30.30.2/24
    • (FreeNAS) Two NICs installed in the iSCSI device (dedicated to iSCSI traffic)
      • NIC1: 10.20.20.1/24, NIC2: 10.30.30.1/24
    • ESXi is set to round robin, which works properly (I can max out both ports at the same time)
    • Two VMKs are bound to the software iSCSI initiator and set to dynamically discover both iSCSI target addresses
    • The iSCSI unit and the hosts are not on a VLAN; the switch is dedicated to iSCSI traffic (however, in production the two networks are separated by VLANs; I removed that in the test environment to simplify everything)
    • The vSwitches (which each hold a single VMK for iSCSI) are set as follows:
      • Promiscuous mode: reject
      • MAC address changes: accept
      • Forged transmits: accept
      • Traffic shaping: disabled
      • No NIC teaming (obviously)
      • Each VMK inherits these settings

    Quick points:

    • My hosts use VMware-approved NICs in my test environment. My production environment runs on fully VMware-approved hardware
    • I can use vmkping to reach each iSCSI target address from each VMK
    • I can use the 'nc -z target_ip 3260' command to check connectivity to the port
    • The test iSCSI unit has two network adapters, each on a 1 Gbps port
      • I can get around 800 Mbps per port when copying lots of files (both ports are active)
      • I see no errors when copying large numbers of virtual machines around, or when backing up files
    • I only see errors at startup and on "rescan"
    • I have two network adapters on each ESXi host dedicated to iSCSI. Each NIC has a single VMK. One is mapped to iSCSI port A, the other is mapped to port B (I used to have two VMKs per NIC, but I removed that in order to simplify everything)
    • I have updated the firmware on our production switch. No change
    • I've tried 4 different switches in order to rule out a network problem. No change
    • I tested ESXi versions 5.0 and 5.1 (builds 799733 & 1157734); all show the same errors
    • The iSCSI ports are on different networks per VMware guidance (I was told not to put the VMKs on the same network)

    Any ideas? It's starting to drive me up the wall. Any help would be greatly appreciated

    I've seen this several times in recent months and was beginning to think it was specific to HP P2000 units, because that's the only place I had seen it! I now believe it is an ESX/iSCSI issue rather than a driver or array issue.

    On my last install, I found that the errors occur as soon as the array is connected, i.e. before any storage is presented.

    Single ESX host, no guests. 2 Intel adapters through 2 dedicated HP switches to an HP P2000 1 Gb array with all 8 ports wired.

    All paths are good, all lights are in the expected state.

    It hasn't affected the use of the storage in question at any site, so there has been no need for, or momentum behind, any further investigation.

    Bloody annoying though!
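
    For anyone chasing the same startup/rescan errors, a couple of hedged CLI checks that complement the vmkping and nc tests listed in the question (the adapter name is an example):

        # Confirm the sessions and connections actually established after boot
        esxcli iscsi session list --adapter=vmhba37
        esxcli iscsi session connection list --adapter=vmhba37
        # Look at what the initiator logged around the failed login
        grep -i iscsi /var/log/vmkernel.log | tail -40

    If the sessions are all present and I/O is clean, the login errors during boot/rescan look cosmetic, which matches the "false positive" behaviour described above.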

  • In-Guest iSCSI for Exchange 2010

    Hi, we just built and are ready to go into production with Exchange 2010 using in-guest LUNs through the Microsoft iSCSI initiator, a separate network for iSCSI, etc. We load tested with Jetstress and everything seems fine. We just found out that snapshots are not supported with in-guest iSCSI, so we are concerned that other things won't work or won't be supported either, e.g. vMotion. So, does anyone know whether we lose any other features and whether we need to change our minds and use ESX iSCSI? I've read bits about the performance being comparable, and we stuck with in-guest because that's what we know from our existing (physical) Exchange 2007 environment. If we must change, is it simple?

    Hey everyone who is following this thread or found it via Google.  It is important to understand what is and is not supported when you are running Exchange on a virtual machine.

    To be absolutely clear, combining Exchange 2010 DAGs with migration technologies such as vMotion is supported.  You must ensure that you are running at least Exchange 2010 SP1 as well as a version of vSphere that is on the Server Virtualization Validation Program.

    I think the link was already posted, but here is the definitive link:

    http://blogs.technet.com/b/Exchange/archive/2011/05/16/announcing-enhanced-hardware-virtualization-support-for-Exchange-2010.aspx

    More specifically, from the post above (in bold italics):

    Combining Exchange 2010 high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions that will move or automatically fail over mailbox servers that are members of a DAG between clustered root servers is now supported.

    With the improvements we made in Exchange Server 2010 SP1, combined with more comprehensive testing of Exchange 2010 in a virtualized environment, we are happy to offer this additional deployment flexibility to our customers. The updated support guidance applies to any hardware virtualization vendor participating in the Windows Server Virtualization Validation Program (SVVP).

    Logiboy123, can you provide any documentation to support the statement that Microsoft does not support this configuration?  The quotes above are directly from Microsoft and are very clear that they support not only Microsoft's own migration technology, but also the equivalent technology from any other hypervisor vendor, provided it is in the SVVP program.

    Since vSphere is part of the SVVP, Microsoft has extended support to migration technologies such as vMotion, in addition to their own Live Migration.  Of course, simply supporting something does not automatically make it work.  If you want vMotion of DAG nodes to work correctly, you may need the extended cluster heartbeat timeout settings described in this blog post (also a fully supported configuration change):

    http://www.thelowercasew.com/vSphere-and-Exchange-Admins-can-live-in-harmony-Microsoft-finally-supports-HA-and-VMotion

    To answer the original question in this post: using in-guest iSCSI is supported, and vMotion in this configuration is supported as well.  You're not really giving anything up by using this configuration, so if it meets your needs, I would stick with it.  I generally prefer VMFS just because of the flexibility it offers, but everyone has different requirements and I have absolutely no problem with in-guest iSCSI if your organization requires it.

    Don't forget that whatever the configuration, virtual machine snapshots are not supported for Exchange. Here is another post that goes into detail on this and covers Microsoft's support statement for Exchange 2010 and virtual machine snapshots: http://www.thelowercasew.com/virtual-machine-snapshots-and-tier-1-apps-not-always-supported

    So in general, I would say that provided you are running the correct versions of Exchange and vSphere, you can take full advantage of your configuration and use features like vMotion and HA without problems.  If you adjust your cluster heartbeat timeout settings accordingly, you should have no problem using vMotion with your DAG nodes.

    Matt

    http://www.thelowercasew.com

  • ESXi 4.1 iSCSI Shared Datastore

    I'm running ESXi 4.1 with Windows Storage Server and its software iSCSI target on three servers to host my datastores.  I've been using dedicated iSCSI LUNs for each ESXi server and it has been working great (MPIO and RR are awesome too).  I am planning for disaster scenarios, and I wonder whether there is anything wrong with using an iSCSI datastore shared between servers.  I do not have vMotion, so I can't migrate directly from one host to another, but I set up a test datastore and presented it to all three ESXi servers.  I moved a virtual machine onto the shared datastore and I am able to add it to the inventory on all three servers.  I can also start the virtual machine from any host, but its status (Powered On / Off / Suspended) only shows on the host I started it from.  Do you see any problems with this configuration?  I would like to share my existing datastores with the other hosts so I can add all virtual machines to the inventory on all servers.  Then I could implement a poor man's vMotion and start the virtual machines from any host.

    Thanks in advance!

    Hello

    There is absolutely no problem sharing an iSCSI LUN between several ESX hosts. In fact, that is how you build a cluster.

    You don't appear to have vCenter. I'm not sure it's a good idea to have the same virtual machines registered on several standalone hosts at the same time. I'd say register each virtual machine on a single host only.

    If you want to move a virtual machine, stop it, remove it from the host's inventory, register it on the new host, and start it.

    In case of a crash, register the failed host's VMs on the remaining hosts and start them.

    Good luck.

    Franck
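
    In case it helps, here is a hedged sketch of those "poor man's vMotion" steps from the ESXi shell, using an example datastore path and example VM IDs:

        # On the old host: find the VM ID, power it off and unregister it
        vim-cmd vmsvc/getallvms
        vim-cmd vmsvc/power.off 42        # 42 = example VM ID from getallvms
        vim-cmd vmsvc/unregister 42
        # On the new host: register the same .vmx from the shared datastore and power it on
        vim-cmd solo/registervm /vmfs/volumes/shared_ds/myvm/myvm.vmx
        vim-cmd vmsvc/power.on 43         # 43 = new VM ID returned by registervm

    Registering the VM on only one host at a time, as suggested above, avoids two hosts trying to power on the same VMX.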

    Summary of the issueOther issues of Windows Live family safety What version of Windows Live Family Safety do you use? Version 2011 (15.4.3502.922) Choose your operating system version: Windows XP SP1 Additional detailsI received a family safety repor