NFS VS ISCSI

Hi all. Lately I've been tinkering with small devices in the lab, and a big question has come up: which protocol works better for vSphere 4? Honestly, I've always been very much an iSCSI guy, but recently I've been playing around with NFS and it's left me with a very good impression. I've been reading a bit, and there really is quite a spread of opinions. What has your experience been? What would you recommend, above all for small environments - say 4 hosts with 4 machines per host, or something like that? I ask because I have several projects on my hands at companies of no more than 20 people.

Thanks, regards.

There is a tool called Iometer.

Or monitor performance using the Windows performance counters.
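
For the counters approach, here is a minimal sketch using typeperf, which ships with Windows (the counters, 1-second interval, sample count, and output file are illustrative choices, not a recommendation):

    rem Sample disk throughput and latency once per second, 30 samples,
    rem while Iometer (or a large file copy) runs inside the test VM.
    typeperf "\PhysicalDisk(_Total)\Disk Bytes/sec" "\PhysicalDisk(_Total)\Avg. Disk sec/Transfer" -si 1 -sc 30 -o iometer-run.csv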

Abrazo

Tags: VMware

Similar Questions

  • NFS and iSCSI

    Hi all

    I won't go into yet another iSCSI vs. NFS performance debate, because that isn't what interests me. I would like to gather your thoughts on why you chose NFS or iSCSI, and on the differences between the two, because I'm not sure myself.

    The reason I ask is that a clean rebuild of our virtual datacenter is coming up soon.

    At the moment we have 5 virtual machines running on ESXi, two of which connect to external iSCSI (RAID6) storage. This has worked well for over a year. (So I have no real experience with NFS.)

    ESXi and all five VMs currently sit on a RAID1 pair of 10k SAS drives, but now that we've bought VMware Essentials I'm going to follow best practices.

    I'll keep the hypervisor on the local disks and put the 5 VMs on a separate NAS (ReadyNAS NVX) datastore. I will use one of the 4 NICs to connect directly to the NAS with a straight-through cable, and the other three will go into a switch.

    Now, this is why I ask: should I use NFS or iSCSI? From what I've seen there are far more technical documents and videos about iSCSI, but I suspect that's because they are aimed at the enterprise market and very large numbers of VMs.

    The future may hold another 2-4 VMs, but no more than that.

    Our external network manager has recommended NFS and I trust his opinion, but I have no experience with NFS myself.

    Tips are welcome.

    Server specification:

    Dell R710

    2 x Xeon sockets (don't remember the model; I'm typing this at home)

    24 GB RAM

    2 x SAS drives in RAID1

    4 x Broadcom NICs with iSCSI offload

    Since it's just IP over that cable, a crossover cable will do.

    AWo

    VCP 3 & 4

    \[:o]===\[o:]

  • MS cluster on NFS and iSCSI?

    The vSphere MSCS setup guide states that only Fibre Channel storage is supported for MSCS clusters, so I'm fairly sure I already know the answer, but...

    I have a client who wants to run an active failover cluster on a NetApp storage solution: the OS disks for the Windows 2008 R2 servers on an NFS volume, and the application data on iSCSI RDMs.  On the IOPS and network side, has anyone set up a cluster like this before, and if so, what were your experiences?  I'm just trying to get an idea of what kind of potential problems to expect down the road.  I also assume that VMware is not going to support it, since they expressly advise against it.  Thoughts?



    -Justin

    In my opinion it simply will not work. The software iSCSI initiator in ESX/ESXi does not support SCSI-3 persistent reservations, which MSCS requires on 2008 and above. Using RDMs will not change that. I don't know whether an iSCSI HBA would work.

    The workaround is to use the software iSCSI initiator inside Windows 2008. The operating system disk can sit on an NFS datastore; the quorum and data disks must be on iSCSI LUNs connected via the Windows iSCSI initiator.
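
    If you go that route, a minimal sketch of wiring up the in-guest initiator from the Windows 2008 command line (the portal IP and IQN are placeholders for your NetApp target):

        rem Point the Microsoft iSCSI initiator at the array and log in.
        iscsicli QAddTargetPortal 192.168.10.50
        iscsicli ListTargets
        rem QLoginTarget is per-session; use PersistentLoginTarget if the
        rem quorum/data LUNs must reconnect automatically after a reboot.
        iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345678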

  • NFS vs iSCSI design options

    I have blades with 6 NICs and a NetApp SAN.

    There will probably be a requirement for VMs to access iSCSI storage directly. Backup is via a VCB proxy on an HP server.

    I would lean towards NFS if appropriate, given the ability to easily access backup snapshots rather than restoring from tape or restoring the whole LUN.

    So I think:

    1 NIC for the Service Console

    1 NIC for VMotion

    2 NICs teamed for VM network access

    2 NICs for iSCSI/NFS

    Will iSCSI VMs be running on the same vSwitch and physical NICs that ESX uses to access storage over NFS?

    Is that OK, or would going with iSCSI datastores be a better option?

    Hey,

    I think that the comments above have covered the first vswitch.

    Now, with the others...

    iSCSI traffic from a virtual machine goes over a "Virtual Machine" portgroup.  So if you install, for example, the MS iSCSI initiator in a guest, that traffic would go out over a VM portgroup.

    ESX's own iSCSI traffic goes over the VMkernel/Service Console port groups.  Remember, if you use the software iSCSI initiator, the Service Console also needs to be able to reach the iSCSI target.
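
    As a rough sketch of that layout on the ESX console (vSwitch, portgroup, uplink, and IP values are all made up for illustration):

        # Create the vSwitch and give it an uplink.
        esxcfg-vswitch -a vSwitch2
        esxcfg-vswitch -L vmnic2 vSwitch2
        # VMkernel portgroup for the ESX software iSCSI / NFS traffic.
        esxcfg-vswitch -A "iSCSI-VMkernel" vSwitch2
        esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 "iSCSI-VMkernel"
        # Per the note above, on ESX 3.x the software initiator also needs
        # the Service Console to reach the target, so add an SC port too.
        esxcfg-vswitch -A "iSCSI-SC" vSwitch2
        esxcfg-vswif -a vswif1 -p "iSCSI-SC" -i 192.168.20.12 -n 255.255.255.0
        # Separate VM portgroup for guests running their own iSCSI initiator.
        esxcfg-vswitch -A "VM-iSCSI" vSwitch2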

    If you have found this or any other answer useful, please consider awarding points using the Helpful or Correct buttons.

    ~ y

  • Best method for OpenFiler: NFS or iSCSI?

    I'm doing a proof of concept using a box running OpenFiler as my NAS.  I know that iSCSI is the usual way to use OpenFiler for VMware purposes.

    I wonder if NFS is also acceptable.  Are there problems with HA?

    Hi Richard,

    I have used both options with OpenFiler, and both work well. Main differences:

    - iSCSI can double the throughput (almost 70 MB/s in my tests).

    - iSCSI uses block-level locking (NFS locks at file level), so it's better for many simultaneous operations against the file system.

    - iSCSI also supports RDMs and VMFS (neither works over NFS).

    - You can boot an ESX server from an iSCSI LUN (if you use a hardware initiator).

    - NFS is easier to deploy and easy to share between different operating systems.

    So... iSCSI is best for production environments, and NFS works well as bulk storage for ISOs and templates, and for backups that are portable and easy to manage.
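
    To illustrate the "easier to deploy" point, a minimal sketch from the ESX console, with made-up names and addresses (openfiler.lab, /mnt/vg0/nfs_ds, 192.168.1.50; the software initiator's vmhba name varies by host):

        # NFS: one command and the datastore is mounted.
        esxcfg-nas -a -o openfiler.lab -s /mnt/vg0/nfs_ds openfiler_nfs
        # iSCSI: enable the software initiator, add the discovery address,
        # rescan, then create a VMFS volume on the LUN that shows up.
        esxcfg-swiscsi -e
        vmkiscsi-tool -D -a 192.168.1.50 vmhba32
        esxcfg-swiscsi -s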

    Take care with VCB, VMotion, DRS, HA, and VM startup.

    Hope this helps.

    Ciao

  • Deploy an NFS or iSCSI share to all hosts at the same time?

    Hello

    I'm looking to deploy an NFS share to multiple hosts in a single vSphere installation. Is there a way to push the configuration to all hosts at once, or do you have to go and add it to each host individually?

    Thank you!

    Aaron

    Hello.

    NFS will need to be configured individually on each host. Make sure you use the same datastore name on each host as well.
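
    You can at least script the repetition. A minimal sketch over SSH, assuming root access and made-up host and NAS names (esx01..esx03, nas01.lab):

        # Mount the same export under the same datastore name on every host.
        for h in esx01 esx02 esx03; do
          ssh root@$h esxcfg-nas -a -o nas01.lab -s /vol/vmds nfs_vmds
        done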

    Good luck!

  • ISCSI or NFS

    Hi everyone,

    I am deploying ESXi for a client and they are quite short on money. They need at least 2 ESXi servers in order to virtualize their Windows Server 2003 machines. They have an Exchange 2003 server, several domain controllers, and a database server. The database is a SQL database, but its current use is still quite small; maximum memory usage is about 4 GB. I want to spec ESXi servers with 16-24 GB of RAM apiece.

    My problem is that in the long run I need shared storage for my phase 2 plan. I will then configure vCenter and apply the licenses to be able to use VMotion and some other services. I will also add more ESXi servers around that time, for Exchange 2010 and some other applications. Exchange has traditionally used a lot of I/O; I've seen that Exchange 2010 has reduced its I/O by moving much of its working set into memory. I was wondering if I can get away with a NAS and NFS. I would run 4 Gbps NICs to a switch. If I team them, can I get good link aggregation with ESX4i? If not, is 1 Gbit/s good enough for NFS? Any experiences using such a "slow" link instead of 4 Gb/s FC?

    If I can't get away with NFS, then I would have to spec local storage large enough to hold the current servers I want to convert and create for this first phase, and then get them an iSCSI SAN next year.

    Any advice would be really appreciated. Thank you

    Hello

    Would you recommend using iSCSI or NFS at these "low" speeds for the next 5 years?

    Mike Laspina wrote a good post the other day on some of the differences between NFS and iSCSI.

    Have a read of his article on running ZFS over NFS as a VMware datastore.

    It is a little technical and focused on the OpenSolaris ZFS side, but the information you're looking for is mostly in there.

    --

    Wil

    _____________________________________________________

    VI-box tools & scripts wiki at http://www.vi-toolkit.com

  • Home environment: direct-attached eSATA, or iSCSI/NFS over a single 1GbE link?

    Greetings...

    Today I am running ESXi 3.5u4 on my server at home, with plans to move to ESXi 4.0 in a few months.  Right now I'm flying without a net - no disk redundancy.  The server doesn't have the slot space to add a good RAID card (it's a Shuttle XPC), so I'm looking to go with an external RAID device.  I am considering a device that has both eSATA and 1GbE onboard (QNAP TS-439 Pro).  If I go with it as a NAS, I'll plug one port of the NAS directly into a dedicated 1GbE port on the server, run jumbo frames, etc., leaving the 2nd port on the NAS connected to the management network.

    I run 5 VMs full-time, none particularly disk-I/O intensive, other than the file server that my wife and I use, and even that only occasionally.  If I go NAS instead of direct attach, will I see a noticeable loss of performance?  If not, should I be looking at iSCSI or NFS?

    Provided I don't see a big performance hit, I'd rather go NAS, since it would give me a little extra flexibility to connect other systems for Time Machine backups, etc.  I'm leaning towards NFS, since iSCSI, if I remember correctly, involves stuffing SCSI commands inside TCP frames - doesn't that create extra overhead?

    The differences in performance between NFS and iSCSI will be minimal on a smaller device. Jumbo frames may also be of minimal value, but run your own tests to prove it. NFS is much simpler to set up and use. If there is a lot of contention for disk access, you will start having performance problems: there is not a lot of processor or controller power in these devices, and no battery-backed disk cache.

  • Cannot add a VM to the inventory while it is running on another host - iSCSI and VMFS5

    Standalone ESXi 5.5 hosts, free version

    iSCSI mounts with multiple paths - a relatively new unit with fairly modern hardware

    CHAP not used

    I'm running a few hosts that mount the same iSCSI datastores. The iSCSI LUN is visible and writable by all hosts. If Host1 has the iSCSI LUN (VMFS5) mapped and a VM on it is powered on, Host2 cannot add that VM to its inventory. It is possible with NFS but not iSCSI, for some reason. Is there a specific reason for this? Only when I power off the virtual machine can I add it to both hosts. Other than lock files being placed in the directory, am I missing something related to locking?

    As it stands, adding new ESXi hosts does not let me add the existing virtual machines unless they are powered off first. Again, NFS allows it, iSCSI doesn't. Any ideas?

    It is not specific to iSCSI but rather to the use of VMFS. NFS has no VM awareness, VMFS does, and that is why this is not possible with iSCSI/VMFS. There is an on-disk lock for each running virtual machine, and that is why the second host is not able to add that VM to its inventory. With vSphere HA, this lock is released when a failover needs to occur.
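
    If you want to see who holds the lock, one approach (a sketch; the datastore and VM paths are placeholders) is to query the file's lock owner from an ESXi shell:

        # Prints the lock mode and the MAC address of the ESXi host
        # that currently owns the lock on this VM's files.
        vmkfstools -D /vmfs/volumes/iscsi_ds01/myvm/myvm.vmx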

  • Using VDR 1.2 to backup VMs to iSCSI

    According to the marketing materials, VDR is supposed to be able to back up to iSCSI.  I saw a post on 23/07/09 asking the same question.  RParker responded by saying that the answer is yes and no. I would like to get a detailed response on the "yes" part of that answer.

    I installed VDR, and a 3 TB iSCSI network device is now seen by the host as a working datastore.  The VDR Configuration tab only allows me to add a "network share".  What should I do to make VDR "see" the iSCSI datastore as a backup location for the VMs?  I have both Windows servers and SLES 10/11 servers.  I know that for deduplication to be effective, each OS should have its own backup location.

    Other issues that are important:

    1. Will VDR work with SLES - it looks like only Red Hat and Ubuntu are supported?  Does it matter whether it's iSCSI or shares?

    2. Will dedup work with VDR and iSCSI, or only with shares?

    If there are documents or links to other community responses that will help, just reply with the URL - no need to retype responses.  But please make sure they are clear and concise!

    Thank you very much

    Charlie

    Once all your hosts can see the iSCSI target as a VMFS datastore, add a new virtual disk directly to the Data Recovery virtual appliance (max 1 TB, from the documents I've seen so far). When you start the VDR appliance, you should see the newly added disk and be able to format and mount it. From my understanding, "Add network share" is for communicating with CIFS or NFS shares; iSCSI and FC targets must be added as datastores at the host level.

    From my understanding, VDR backs up any virtual machine either way. Ideally, the virtual machine will be at hardware version 7 and running the latest VMware Tools, in order to get the most effective dedup/changed-block tracking and OS and application quiescing.

    I also believe VDR dedup works on any target (iSCSI, FC, NFS, CIFS).

    I'm still learning it and the documentation is not fabulous; the above is my experience so far. I hope it helps.

    - Added 9/2 10:56. I realized that CIFS is the only type of share that you can add via "Add a network share". All other targets (FC, iSCSI, NFS) must be presented to the virtual appliance as a VMDK. The VMDK would reside in the FC/iSCSI/NFS datastore that you set up.
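
    If you prefer the console for that step, a sketch of pre-creating such a disk on the iSCSI datastore (the datastore path, folder name, and 800 GB size are arbitrary; the disk still has to be attached to the appliance afterwards):

        # Thin-provisioned VMDK on the iSCSI datastore as the VDR dedup
        # destination, kept under the 1 TB limit mentioned above.
        mkdir /vmfs/volumes/iscsi_ds01/VDR
        vmkfstools -c 800G -d thin /vmfs/volumes/iscsi_ds01/VDR/vdr-dest.vmdk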

  • NFS mount options

    I just bought a QNAP NAS box that I want to share between 2 VM hosts.   I did a little research on NFS vs iSCSI, and it seems that NFS has some pretty good advantages over iSCSI, especially when it sits on a shared network.

    The hosts are both CentOS 5.3.

    I wonder what the optimal settings (options) for the NFS mount would be?  I found a couple on some old sites, but I wonder what people here use.  Here's an options string that I found:

    nfsvers=3,tcp,timeo=600,retrans=2,rsize=32768,wsize=32768,hard,intr,noatime

    I'm off to look at the NFS man page now to see what these all mean...

    Thank you

    Phil

    Here's a good explanation of the nolock option:

    The nolock option prevents the sharing of file locking information between the NFS server and the NFS client: the server is unaware of the file locks on this client, and vice versa.

    Using the nolock option is required if the NFS server has its NFS file locking feature broken or unimplemented, but it works across all versions of NFS.

    If another host is accessing the same files as your application on this host, there may be problems when the sharing of file locking information is disabled.

    A failure to maintain proper locking between a write operation on one host and a read operation on another may cause the reader to see incomplete or inconsistent data (reading a data structure/record/row that is only partially written).

    A locking failure between two writers is likely to cause data loss or corruption, as the later write overwrites the earlier one; the changes made by the earlier write may be lost.
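
    For reference, a sketch of how that options string might look as an /etc/fstab entry on the CentOS hosts (the NAS hostname, export path, and mount point are placeholders; keep locking enabled if multiple hosts write the same files, per the explanation above):

        # /etc/fstab entry (one line) using the options discussed above.
        qnap.lab:/share/vmstore  /mnt/vmstore  nfs  nfsvers=3,tcp,timeo=600,retrans=2,rsize=32768,wsize=32768,hard,intr,noatime  0 0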

    -

  • Poor iSCSI performance

    Hi all. I have an ESX 4 host connected to an OpenSolaris storage box. On the OpenSolaris side, I share a directory over NFS and an iSCSI LUN. When I test from Windows I get the same performance from both (near 100 MB/s), but when I test with a VM on each datastore I get 67 MB/s for NFS (which is OK) but only 34 MB/s with iSCSI.

    So is it possible to increase the iSCSI performance or not?

    I get 100 MB/s without any problem. You should post your COMSTAR configuration so that we can see where the problem is. Do you have an SSD as a slog device? I've heard that both NFS and iSCSI use sync writes in ESX.
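
    On the ZFS side, a minimal sketch of checking for and adding a dedicated log device (the pool name "tank" and the device name are placeholders):

        # Show the pool layout, including any existing log (slog) device.
        zpool status tank
        # Add an SSD as a dedicated ZFS intent log to absorb the sync
        # writes that ESX issues over both NFS and iSCSI.
        zpool add tank log c2t1d0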

  • iSCSI SAN and ESX3 clustering

    I'm reading documentation indicating that if we deploy an iSCSI SAN with the software initiators, the system will not allow clustering.  Is that the correct reading?  No clustering at all?  If so, why?

    http://www.VMware.com/PDF/vSphere4/R40/vsp_40_mscs.PDF

    page 11

    > The following environments and features are not supported for MSCS setups with this release of vSphere:

    > - Clustering via NFS or iSCSI.

    http://www.VMware.com/PDF/vi3_35/esx_3/vi3_35_25_u1_mscs.PDF

    page 16

    Clustering is not supported on iSCSI or NFS

    ---

    VMware vExpert 2009

    http://blog.vadmin.ru

  • Removing an iSCSI VMFS datastore

    We are currently migrating from iSCSI to NFS, and while doing some tests I deleted an iSCSI VMFS datastore using the VIC. When I went to re-add the datastore to the ESX server, it said the volume was empty. If you do this with NFS, you can mount the volume again, as removing it does not destroy the volume.

    My question is: what happens if you want to remove the datastore because you no longer want it on ESX01, but you still want it on ESX02? Will the volume be destroyed, and therefore ESX02 will no longer see it?

    Another question I have: what happens if one ESX machine created the datastore, and then on another ESX machine I try to add the same datastore? Does the datastore appear as empty when you try to add the volume on the second machine?

    Thank you

    Rombus wrote:

    So would you do it by deleting the ESX host's IP address from its list on the target server and then re-scanning?

    Yes, but a rescan only does that with iSCSI HBAs (and even there, deleting the target may be necessary); with the software initiator, disabling, killing, and re-enabling the iSCSI initiator is required (console commands) - or you reboot your ESX host.
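
    A sketch of that disable/kill/re-enable cycle on the classic ESX console (run with care; it disrupts all software iSCSI storage on the host):

        # Bounce the software iSCSI initiator so the stale target goes away.
        esxcfg-swiscsi -d   # disable the software initiator
        esxcfg-swiscsi -k   # kill any iSCSI processes still running
        esxcfg-swiscsi -e   # re-enable the initiator
        esxcfg-swiscsi -s   # rescan the software iSCSI bus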

  • vSphere VM unprotected in HA cluster

    Hello VMware users,

    I have a very strange problem with my VMware HA cluster and NFS.

    vSphere is installed on two HP ProLiant DL380 Gen8 servers.

    vCenter runs on an HP ProLiant DL160 G6.

    NFS and iSCSI are served from an HP ProLiant DL180 G8.

    In my datacenter I have configured an HA and DRS cluster, and the iSCSI and NFS datastores are mounted on both cluster nodes.

    If I create or move a virtual machine onto the NFS datastore, the virtual machine becomes unprotected, but if I move the virtual machine onto the iSCSI datastore, it is protected.

    The virtual machine cannot be protected by vSphere HA, and HA will not attempt to restart it after a failure.

    I found many users with this problem, and I tried the following:

    - Disabling and re-enabling the HA option

    - Removing the cluster nodes, deleting the cluster, then recreating the cluster and adding the nodes again

    - Removing the old VirtualCenter and installing a fresh Windows machine with the latest version of vCenter Server

    But nothing helps. Does anyone have an idea?

    Thank you.

    Do you have vSwitch0 with 2 vmk interfaces - one for management and one for vMotion and NFS?

    Have you configured LACP or something similar on the physical switches? If so, you must select the IP-hash load balancing type...

    This is perhaps the reason why the cluster thinks the VM is not highly available.

    It could also be a known VMware bug: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2020082

    Compare your version; maybe you are affected.
