iSCSI or NFS

Hello everyone

I am deploying ESXi for a client and they are quite short on money. They need at least 2 ESXi servers in order to deploy Windows Server 2003 machines. They have an Exchange 2003 server, several domain controllers, and a database server. The database is a SQL database, but its current usage is still quite small: maximum memory usage is about 4 GB. I want to order some ESXi servers with 16 to 24 GB of RAM apiece.

My problem is that in the long run I need shared storage for my phase 2 plan. At that point I would configure vCenter and apply the company licenses to be able to use vMotion and some other services. I will also add more ESXi servers around this same time for Exchange 2010 and some other applications. Exchange usually generates a lot of I/O transactions, but I've seen that Exchange 2010 has reduced its I/O load by moving much of its work into memory. I was wondering whether I can get by with a NAS and NFS. I would run four 1 Gbps NICs to a switch. If I bond them, can I get a good aggregated link with an ESX4i configuration? If not, is 1 Gbit/s good enough over NFS? Does anyone have experience using such a slow link instead of 4 Gb/s FC?

If I can't get by with NFS, then I would have to order local storage large enough to hold the current servers I want to convert and create for this first phase. Then I would get an iSCSI SAN for them in the coming year.

Any advice would be really appreciated. Thank you

Hello

Would you recommend using iSCSI or NFS at these "low" speeds for the next 5 years?

Mike Laspina wrote a good post the other day on some of the differences between NFS and iSCSI.

Have a read of his article ZFS running via NFS as a VMware store

It is a little technical and focused on the OpenSolaris ZFS side, but the information you're looking for is definitely in there.
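
For the aggregation part of your question, here is a minimal sketch of what the CLI side can look like. The addresses, names, and export path are made-up placeholders, and IP-hash teaming also requires a matching static EtherChannel on the physical switch:

# Mount the NFS export as a datastore (ESX(i) 4-style CLI; placeholder values)
esxcfg-nas -a -o 192.168.10.20 -s /vol/vmstore nfs_datastore

# On ESXi 5.x and later, the vSwitch teaming policy can be set to IP hash via esxcli
esxcli network vswitch standard policy failover set -v vSwitch1 -l iphash

Keep in mind that a single NFS datastore rides one TCP session to the filer, so aggregation helps across multiple datastores/IP pairs rather than speeding up any one stream.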

--

Wil

_____________________________________________________

VI-box tools & scripts wiki at http://www.vi-toolkit.com

Tags: VMware

Similar Questions

  • CPU overhead with software iSCSI vs NFS?

    If you use NFS to store your VMs' VMDK files, can anything be said about the CPU overhead of this compared to software iSCSI?

    If you use FC or hardware iSCSI, it seems most of the work can be offloaded to the HBA, but what about NFS, which has to be done in software? Or will it be lighter than iSCSI, since the VMkernel doesn't have to manage the lower-level block access?

    Hello.

    The difference is very small.  Check out the Comparison of Storage Protocol Performance in VMware vSphere 4 white paper.
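
    If you want to measure the overhead on your own hosts rather than take the paper's word for it, esxtop in batch mode is enough. A rough sketch (the file names are made up):

    # Capture 60 samples while a test VM runs its I/O load on the NFS datastore,
    # then repeat with the same VM on iSCSI and compare the host CPU columns
    esxtop -b -n 60 > /tmp/esxtop-nfs.csv
    esxtop -b -n 60 > /tmp/esxtop-iscsi.csv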

    Good luck!

  • Convert iSCSI RDM to NFS VMDK

    OK, a little bit of background.  I have two ESX environments: a cluster of older hardware running ESX 3.0 and VirtualCenter 2.0, and a cluster of newer hardware running ESX 3.5 and VirtualCenter 2.5.  On the older hardware there are virtual machines with iSCSI RDMs.  I want to convert them to VMDKs on NFS and export/import them into the new ESX cluster.

    Now for the question: is this even possible?  If so, how should I do it?  I've read about vCenter Converter but I haven't really found a definitive answer to my question.

    You can clone the virtual machine. When you do this, the RDM will be converted to a VMDK.  If you choose an NFS datastore as the destination, your work is done.
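
    If you prefer the command line to the clone wizard, the same copy can be done host-side with vmkfstools. A sketch with hypothetical paths, assuming the VM is powered off and the NFS datastore is already mounted:

    # Clone the RDM's contents into a flat VMDK on the NFS datastore
    # (the source .vmdk is the RDM mapping file of the powered-off VM)
    vmkfstools -i /vmfs/volumes/vmfs_ds/oldvm/oldvm_rdm.vmdk /vmfs/volumes/nfs_ds/newvm/newvm.vmdk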

    -KjB

    VMware vExpert

  • iSCSI, NFS, or SAN for ASM

    Hello

    My question may look stupid, but it's a simple query:
    do I need iSCSI or SAN storage to install 11gR2 Clusterware on OEL 5.4 x86_64?

    Regards

    1. Create a backing file:

    dd if=/dev/zero of=/disk1 bs=1024k count=1000

    2. Set up the loop device:

    losetup /dev/loop1 /disk1

    3. Mark the disk for ASM:

    oracleasm createdisk LOOP1 /dev/loop1

    4. Use it with ASM.
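
    Step 4 is presumably the usual scan-and-verify before handing the disk to ASM. A sketch of how it might finish (loop devices are fine for a lab, not for production):

    # Rescan and verify the ASM disk created above
    oracleasm scandisks
    oracleasm listdisks    # should print LOOP1

    # In the ASM instance the disk is then addressed as 'ORCL:LOOP1', e.g.
    # CREATE DISKGROUP data EXTERNAL REDUNDANCY DISK 'ORCL:LOOP1';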

  • How do I compare local, iSCSI, and NFS storage for VMware ESXi 5 with IO Analyzer?

    Hello

    I have 2 ESXi 5.0 hosts with a few virtual machines running. I would like to compare the I/O throughput of my local, iSCSI, and NFS VMware datastores. Can someone point me in the right direction? I have read the documents and tutorials, but I'm not sure how to run comparable tests across these 3 options.

    I need to compare the following storage options:

    Local disk on the ESX host

    iSCSI on QNAP NAS device

    iSCSI on EMC e 3100

    NFS on QNAP NAS device

    NFS on EMC e 3100

    IOmeter seems to be a good tool from what I've read, but it is still not clear what to change to move the tests from one storage option to another.

    Thanks in advance,

    Sam

    If you use IO Analyzer, then you simply Storage vMotion the VM to the datastore on the device you want to test, run the test (and record the results), then Storage vMotion it to the next datastore and repeat.

  • NFS and iSCSI

    Hi all

    I won't start another iSCSI vs NFS performance debate, because that isn't what interests me. I would like to gather your thoughts on why you chose NFS or iSCSI, and also on the differences between the two, because I'm not sure of them.

    The reason behind this is a clean rebuild of our virtual datacenter that is coming up soon.

    At the moment we have 5 virtual machines running on ESXi, with two of them connected to external iSCSI (RAID6) storage. This has worked well for over a year. (So I have no real experience with NFS.)

    The ESXi host and all five VMs are on 10k SAS drives in RAID1, but now I'm going to follow best practices, since we bought VMware Essentials.

    I'll keep the host on the local machine and put the 5 VMs on a separate NAS (ReadyNAS NVX) datastore. I will use one of the 4 NICs to connect directly to the NAS with a straight-through cable, and the other three into a switch.

    Now, this is why I ask the question: should I use NFS or iSCSI? From what I've seen there are a lot more technical documents and videos based on iSCSI, but I suspect that's because it's aimed at the enterprise market and huge numbers of VMs.

    The future may hold another 2-4 VMs, but no more than that.

    Our external network manager has recommended NFS and I trust his opinion, but I'm hesitant because I have no experience with NFS.

    Tips are welcome.

    Server specification

    Dell R710

    2 x Xeon processors (can't remember which model; I'm typing this at home)

    24 GB RAM

    2 x SAS drives in RAID1

    4 x Broadcom NICs with iSCSI offload

    Since it's IP over the cable, a crossover cable will do.

    AWo

    VCP 3 & 4

    \[:o]===\[o:]

    = You want to have this ad as a ringtone on your mobile phone? =

    = Send 'Assignment' to 911 for only $999999,99! =

  • FAS2050 performance (iSCSI vs FC vs NFS)

    We have a NetApp FAS2050.  It will use Fibre Channel to connect to other hosts (not VMware).

    To connect to our VMware environment, we can use fibre, iSCSI, or NFS.

    It seems that NFS would be easier in terms of deduplication (and thin provisioning).  Are there any performance reports on iSCSI vs FC vs NFS?  So far I've read the VMware docs "Performance Best Practices for VMware vSphere 4.0" and "Scalable Storage Performance", but they don't focus on the NFS performance question.

    As background, the hosts are Dell 2950s running ESX 4, with about 10 virtual machines per host.

    Thank you

    Assuming that our throughput would be less than 1 Gb and we can get a handle on the CPU overhead, it seems that NFS will be sufficient.  Am I reading that correctly?

    Yes, but that conclusion rests solely on that assumption.  It is always better to know the workloads and their needs before these decisions are made.  You don't want to find out after the fact that your throughput needs were (or grew to be) too high.  The chances are pretty good that you would be fine with any protocol on NetApp storage, but it's always better to know for sure.

    The other thing to keep in mind is that, while NFS provides operational simplifications, it is also an expensive licensing option on the NetApp.  If you already have a fibre channel environment, strong support (or staff) in place, and that infrastructure can handle the growth, then it might be worth looking into using it as well.

  • Home environment: direct-attached eSATA or single 1 GbE iSCSI/NFS?

    Greetings...

    Today I am running ESXi 3.5u4 on my server at home, with plans to move to ESXi 4.0 in a few months.  Right now I'm flying without a net: no disk redundancy.  The server doesn't have the slot space to add a good RAID card (it's a Shuttle XPC), so I'm looking to go with an external RAID device.  I am considering a device that has both eSATA and 1 GbE onboard (QNAP TS-439 Pro).  If I go with it as a NAS, I'll plug one port of the NAS directly into a dedicated 1 GbE port on the server, run jumbo frames, etc., leaving the 2nd port on the NAS connected to the management network.

    I run 5 VMs full-time, none particularly disk-I/O intensive, other than the file server that my wife and I use, and even that only a little of the time.  If I go NAS instead of direct attach, will I see a noticeable loss of performance?  If not, should I be looking at iSCSI or NFS?

    Provided I don't see a big performance hit, I'd rather go NAS, as it would give me a little extra flexibility to connect other systems for Time Machine backups, etc.  I'm leaning towards NFS, since iSCSI, if I remember correctly, involves stuffing SCSI commands inside TCP frames, creating new overhead, no?

    The performance difference between NFS and iSCSI will be minimal on a smaller device. Jumbo frames may also be of minimal value, but your own testing will prove that. NFS is much simpler to install and use. If there is a lot of contention for disk access, then you will start having performance problems: there is not a lot of processor or controller cache in these devices, and no secure (battery-backed) disk cache.
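
    If you do end up testing jumbo frames, the ESX(i) 3.5/4-era CLI looks roughly like the sketch below. The vSwitch name, port group, and addresses are placeholders, and the NAS plus everything in the path must be set to MTU 9000 as well:

    # Raise the vSwitch MTU, then create a VMkernel NIC with a 9000-byte MTU
    esxcfg-vswitch -m 9000 vSwitch1
    esxcfg-vmknic -a -i 192.168.20.5 -n 255.255.255.0 -m 9000 "NFS-PG"

    # Verify end to end with a large, non-fragmenting ping from the host
    vmkping -s 8784 -d 192.168.20.10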

  • NFS recommendation - VCP exam

    Hello

    I am preparing for the VCP 550 exam and I am looking for a way to practice with NFS storage.

    Can you recommend some ways to simulate NFS storage, as I don't have a physical array?

    I agree with rcporto on this one.

    I used FreeNAS in my environment and it's great. Great for learning iSCSI and NFS.
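
    If you'd rather not run a full FreeNAS appliance, any Linux VM can stand in as the NFS "array" for exam practice. A minimal sketch with made-up paths and addresses (a wide-open export, for a lab only):

    # On the Linux VM: export a directory over NFS
    mkdir -p /srv/nfs/vmstore
    echo "/srv/nfs/vmstore *(rw,no_root_squash,sync)" >> /etc/exports
    exportfs -ra

    # On the ESXi host: mount the export as a datastore
    esxcfg-nas -a -o 192.168.1.50 -s /srv/nfs/vmstore lab_nfs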

  • SRM 5 with NetApp SRA: FCoE site to iSCSI site with RDMs

    Hi team, could someone please help verify whether this scenario would work?

    Our PROD site runs VMware vCenter 5.0, ESXi 5.0, SRM 5.0, and a NetApp 3210 filer on ONTAP 8.1. The VMs are Windows Server 2008, with MS SQL 2008 as the major DB. The database files are on virtual RDMs. This site is FCoE only. AD is a physical Server 2003 machine; the rest are virtual machines.

    We propose to deploy SRM for the DR site. The DR site is synchronized through the NetApp SRA with SnapMirror, controlled from the SRM 5.0 console. The recovery site has no FCoE, only iSCSI and NFS licenses available. Still RDMs, but over iSCSI: vCenter 5.0, ESXi 5.0, and a NetApp 2040 on ONTAP 8.1.

    My questions are:

    1. Does VMware SRM support DR failover in this case with different protocols, PROD on FCoE and DR on iSCSI?

    2. Does SRM support failover with RDMs in our case?

    3. Should the DR site AD server be installed fresh, as a best practice, or will we have to use SRM to fail the production AD over to the DR site? What is recommended?

    4. Can the physical AD be kept as it is and still be supported by SRM?

    5. At the recovery site, which is the best practice: use NFS for the datastores and iSCSI only for the RDMs, or use iSCSI LUNs for both datastores and RDMs to keep things simple?


    Thank you!

    1. Yes
    2. Yes
    3. I would recommend, if possible, having an AD domain controller already running at the DR site - it can be a virtual machine
    4. Yes, the PROD site domain controller can stay physical
    5. I'd keep it all consistent, so iSCSI for everything.
  • Which VMkernel port does NFS use - management?

    From the VMware documentation - vSphere Networking for ESXi 5.0 and vCenter Server 5.0:

    "The VMkernel TCP/IP stack handles vMotion iSCSI and NFS in the following way."

    If you look at the VMkernel configuration, you can configure a port group for:

    vMotion, FT logging, management, and iSCSI port binding

    So, by process of elimination, does NFS use the virtual adapter marked for management traffic?

    Right,

    Those checkboxes are there so VMware knows which VMkernel interfaces to use for management and which to use for vMotion and so on. The storage protocol you select is outside the realm of what vSphere manages directly; they didn't need to put a checkbox in there for iSCSI, NFS, FCoE, etc., since storage protocols are, in general, managed separately.

    Ideally, you should configure your VMkernel interface on your standard vSwitch or vDistributed Switch with the network details for the subnet/VLAN you chose for NFS [skip the checkboxes]. In the final state, all your NFS traffic will then flow through the physical network links that are configured for your NFS traffic on your vSS/vDS.

    Without knowing the details of your filers, their network capabilities, and your host/server configurations, the setup and best practices may vary.
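
    In esxcli terms on ESXi 5.0, the setup described above might look like the sketch below; the port group name, VLAN, and addresses are placeholders:

    # Create a port group for NFS on an existing standard vSwitch and tag its VLAN
    esxcli network vswitch standard portgroup add -p NFS -v vSwitch1
    esxcli network vswitch standard portgroup set -p NFS --vlan-id 120

    # Add a dedicated VMkernel interface on the NFS subnet; there is no NFS checkbox,
    # the host simply sends NFS traffic out of the vmk that owns that subnet
    esxcli network ip interface add -i vmk2 -p NFS
    esxcli network ip interface ipv4 set -i vmk2 -I 10.10.120.5 -N 255.255.255.0 -t static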

  • iSCSI hardware recommendations

    I just had a Sans Digital EN104L+B (below) die on us after less than two weeks of service, and it was SLOOW before it died.  Our problem was along the lines of this type of problem:

    http://www.sansdigital.com/index.php?option=com_kunena&Itemid=190&Func=View&ID=1568&catid=10#1568

    Before that, we had Openfiler, which worked well; we're thinking of going back to that.

    However, we liked that the Sans Digital was all in a single unit and didn't need a host server the way Openfiler does.

    Are there iSCSI units in the 1K-2K range like this that don't suck?

    http://www.sansdigital.com/elitenas/en104lplusb.html

    http://www.sansdigital.com/images/stories/products/en104lplusb.jpg

    There is usually very little tested difference between iSCSI and NFS. The simplicity of NFS wins it for me.

  • Unable to create RDM on an NFS share - function not implemented

    Hi all

    I have a problem creating a raw device mapping on an NFS share. I get the error 'Failed to create ramdisk: function not implemented (2490377)' when using vmkfstools:

    /usr/sbin/vmkfstools -z /vmfs/devices/disks/naa.60a9800050336d755a5a5a3473355659 /vmfs/volumes/VMDATANFS/eea-cl1-rdm/eeacl1-sbd1.vmdk -a free

    I have tried several suggestions from the forums, such as doing the creation the right way from inside the client, but it just doesn't work.

    The environment is a fresh vSphere 4.1 installation with NFS storage assigned from a NetApp FAS 2040. For a virtualized Linux cluster we want (or need) to create a bundle of RDM disks. When I add a disk through vCenter (or through the vSphere Client connected directly to the ESX host) and select Raw Device Mappings and then select the exposed LUN, I need to specify the datastore, but I can only select 'local' and not the NFS share my VM runs on. The NFS store just does not appear in the list here, nor can I select 'Store with Virtual Machine' (the virtual machine is sitting on the NFS store, of course).

    To test, I recreated the NFS datastore with a lowercase name. It does not work.

    I also created a VMFS volume on the NetApp and attached it to the host through iSCSI. This volume does appear in the datastore list for the RDM when using VC to add an RDM disk to a virtual machine, and I can also use the vmkfstools command line successfully this way.

    I would think an RDM mapping should be supported on an NFS share, right?

    Thanks in advance!

    Welcome to the forums!

    You need a LUN or something similar that is accessible through SCSI (such as iSCSI). NFS is not supported for this feature.

    http://pubs.VMware.com/vsp40u1/wwhelp/wwhimpl/js/html/wwhelp.htm#href=server_config/c_limitations_of_raw_device_mapping.html#1_12_9_7_8_1

    AWo

    VCP 3 & 4

    \[:o]===\[o:]

    = You want to have this ad as a ringtone on your mobile phone? =

    = Send 'Assignment' to 911 for only $999999,99! =

  • Using VDR 1.2 to back up VMs to iSCSI

    VDR is supposed to be able to back up to iSCSI, according to the marketing materials.  I saw a post on 23/07/09 asking the same question.  RParker responded by saying that the answer is yes and no. I would like to get a detailed response on the 'yes' part of that answer.

    I have VDR installed, and a 3 TB iSCSI network device is now seen by the host as a working datastore.  The VDR Configuration tab only allows me to add a "network share".  What should I do to make VDR 'see' the iSCSI backup location as a datastore?  I have two Windows servers and some SLES 10/11 servers.  I know that to be efficient with deduplication, each OS should have its own backup location.

    Other important questions:

    1. Will VDR work with SLES? It looks like only Red Hat and Ubuntu are supported.  Does it matter whether it's iSCSI or shares?

    2. Will dedup work with VDR and iSCSI, or only with shares?

    If there are documents or links to other community responses that will help, just reply with the URL - no need to retype the answers.  But please make sure they are clear and concise!

    Thank you very much

    Charlie

    Once all your hosts can see the iSCSI target as a VMFS datastore, add a new virtual disk directly to the Data Recovery virtual appliance (max 1 TB, from the documents I've seen so far). When you start the DR appliance, you should see the newly added disk and be able to format and mount it. From my understanding, 'Add network share' is for communicating with CIFS or NFS shares; iSCSI and FC targets must be added as a datastore at the host level.

    From my understanding, VDR will back up any virtual machine either way. Ideally, the virtual machine will be at hardware version 7 and running the latest VMware Tools in order to get the most effective dedup/changed-block tracking and OS and app quiescing.

    I also believe dedup works on any VDR target (iSCSI, FC, NFS, CIFS).

    I'm still learning it and the documentation is not fabulous; the above is my experience so far. I hope it helps.

    - Added 9/2 10:56: I realized that CIFS is the only share type you can add via 'Add a network share'. All other targets (FC, iSCSI, NFS) must be presented to the virtual appliance as a VMDK. The VMDK would reside on the FC/iSCSI/NFS datastore that you set up.
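
    For what it's worth, the destination disk can also be carved out of the iSCSI-backed datastore from the CLI before attaching it to the appliance. Paths and sizes here are hypothetical:

    # Create a 1 TB thin-provisioned VMDK on the iSCSI-backed VMFS datastore, then
    # attach it to the VDR appliance via Edit Settings > Add > Hard Disk > Use existing
    mkdir -p /vmfs/volumes/iscsi_ds/VDR-dest
    vmkfstools -c 1024G -d thin /vmfs/volumes/iscsi_ds/VDR-dest/vdr-dest.vmdk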

  • Is there any way around this limitation? -> Clustering on iSCSI disks (VMware ESXi)

    This is from the Setup for Microsoft Cluster Service and Failover Clustering guide (http://www.vmware.com/pdf/vsphere4/r40_u1/vsp_40_u1_mscs.pdf):

    The following environments and features are not supported for MSCS setups with this release of vSphere:

    • Clustering on iSCSI, FCoE, and NFS disks.

    Is there any way around this?

    I would like to create a simple 2-node active/passive Windows 2008 cluster on VMware ESXi using my FAS2020 iSCSI solution.

    Thanks for your time in advance!

    As I said earlier: treat your virtual machines as physical and use the iSCSI initiator inside the guest operating system.
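
    For the in-guest route on Windows 2008, the built-in iscsicli tool is enough to reach the FAS2020. A sketch with placeholder portal address and IQN:

    rem Point the Microsoft initiator at the filer, list its targets, and log in
    iscsicli QAddTargetPortal 192.168.5.10
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.0123456789

    Both cluster nodes log in to the same LUN, and the cluster arbitrates access with SCSI reservations just as it would on physical hosts.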
