NFS and iSCSI

Hi all

I won't go into yet another iSCSI vs. NFS performance debate, because that isn't what interests me. I would like to gather your thoughts on why you chose NFS or iSCSI, and also on the differences between the two, because I'm not sure what they are.

The reason behind this is that a clean reinstall of our virtual data center will happen soon.

At the moment we have 5 virtual machines running on ESXi, two of which connect to an external iSCSI (RAID 6) storage array. This has worked well for over a year. (So I have no real experience with NFS.)

ESXi and all five VMs currently sit on a RAID 1 of 10k SAS drives in the host, but now that we've bought VMware Essentials I'm going to follow best practices.

I'll keep the hypervisor on the host machine and put the 5 VMs on a datastore on a separate NAS (ReadyNAS NVX). I will use one of the 4 NICs to connect directly to the NAS with a straight-through cable, and the other three will go into a switch.

This is why I'm asking the question: should I use NFS or iSCSI? From what I've seen there are far more technical documents and videos about iSCSI, but I think that's because it's aimed at the enterprise market and very large numbers of VMs.

The future may hold another 2-4 VMs, but no more than that.

Our external network manager recommended NFS, and I trust his opinion, but I have no experience with NFS myself.

Tips are welcome.

Server specification

Dell R710

2 x Xeon CPUs (I don't remember the exact model, I'm typing this at home)

24 GB RAM

2 x SAS drives in RAID 1

4 x Broadcom NICs with iSCSI offload

Since it's just IP over that direct cable, a crossover cable will do.

AWo

VCP 3 & 4

\[:o]===\[o:]


Tags: VMware

Similar Questions

  • MSCS cluster on NFS and iSCSI?

    The MSCS clustering guide for vSphere states that only Fibre Channel storage is supported for MSCS clusters, so I'm fairly sure I already know the answer to this, but...

    I have a client who wants to run an active failover cluster on a NetApp storage solution, with the operating system disks for the Windows 2008 R2 servers on an NFS volume and the application data on iSCSI RDMs.  IOPS and networking aside, has anyone set up a cluster like this before, and if so, what were your experiences?  I'm just trying to get an idea of what kind of potential problems to expect down the road.  I also assume that VMware is not going to support it, because they expressly advise against it.  Thoughts?



    -Justin

    In my opinion it simply will not work. The software iSCSI initiator in ESX/ESXi does not support SCSI-3 persistent reservations, which MSCS requires on Windows 2008 and above. Using RDMs will not change that. I don't know whether an iSCSI HBA would work.

    The workaround is to use a software iSCSI initiator inside Windows 2008. The operating system disk can sit on an NFS datastore; the quorum and data disks must be on iSCSI LUNs connected via the Windows iSCSI initiator (sketched below).
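
    As a rough sketch of that guest-side workaround (assuming the built-in Windows iSCSI initiator; the portal IP and target IQN below are placeholders):

    iscsicli QAddTargetPortal 192.168.50.10
    iscsicli QLoginTarget iqn.2001-05.com.example:quorum-lun
    iscsicli ListTargets

    After logging in, bring the disks online in Disk Management and present them to the cluster as usual.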

  • Deploy an NFS share and iSCSI to all hosts at the same time?

    Hello

    I'm looking to deploy an NFS share to multiple hosts in a single vSphere installation. Is there a way to push the configuration to all hosts at once, or do you have to go and add it to each host individually?

    Thank you!

    Aaron

    Hello.

    NFS will need to be configured individually on each host. Make sure you use the same datastore name on each host as well. (A scripted approach is sketched at the end of this answer.)

    Good luck!
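
    If you want to avoid clicking through every host, here is a minimal sketch using the vSphere CLI's vicfg-nas command in a loop (the host names, NFS server name, share path and datastore label are placeholders; check the options against your vSphere CLI version):

    for HOST in esx01 esx02 esx03; do
        # add the same NFS export as a datastore named nfs_ds01 on each host
        vicfg-nas.pl --server $HOST -a -o nas01.example.local -s /vol/vmstore nfs_ds01
    done

    PowerCLI's New-Datastore cmdlet can achieve the same thing across a cluster if you prefer that route.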

  • Maximum size for NFS and VMFS (iSCSI, FCP) datastores on vSphere 4.1 and 5.0

    Hello

    What is the maximum size for NFS and VMFS (iSCSI and FCP) datastores created on vSphere 4.0, 4.1 and 5.0?

    Thank you

    Tony

    Hi Tony,

    You should find the answers in the various Configuration Maximums documents:

    Good luck.

    Regards,

    Franck

  • 2 questions - Group Manager and iSCSI

    We have a single PS6500 on firmware 6.0.5. Since 3 days ago, Grp Mgr freezes when I click on the member icon. I can get into all the rest of the GUI with no problems. I've had this firmware installed for about a month, so I don't think that is the problem. Has anyone else seen this?

    2nd issue - we have VMware hosts connected to the EQ through private switches. The VMware hosts have separate vSwitches for vMotion and for Windows iSCSI volumes. vMotion is on the 10.2.0 subnet and iSCSI volumes are on the 10.2.1 subnet. The EQ is on 10.2.0 with a 255.255.0.0 mask. The switches are set up with separate VLANs for iSCSI and vMotion.  So far in Grp Mgr I have allowed 10.2.*.* on each volume. I started to change this so that the VM volumes only allow connections from 10.2.0.* (the vMotion subnet) and the Windows volumes only allow connections from 10.2.1.*. Currently I do not use CHAP or initiator restrictions. Here is an example for a Windows volume.

    But here is what I get in the Connections tab for this volume, even after rescanning the host storage adapters:

    So either I'm completely misunderstanding how the IP access limits work, or my EQ has a problem. Shouldn't this volume only have connections from 10.2.1.*?

    Any ideas? Thank you.

    None of the fixes listed in the 6.0.6 release notes apply to my EQ.

    BTW, you can add a CHAP ACL to an EQL volume live.  This will not affect existing sessions.  Just make sure that you first create the CHAP user on the array.  Then, after setting the volume ACL, set the ESX iSCSI initiator to use CHAP.  (A rough CLI sketch is at the end of this reply.)

    If you take a look at the best-practice documents, you'll find a few settings that are commonly left at their defaults.  Some of them will improve performance, others stability.

    vMotion does not need EQL access at all.  It is strictly for copying the memory image of a virtual machine from one ESX host to another.

    Storage vMotion does involve the EQL, but it does not use the vMotion port for anything.

    vMotion traffic must be completely isolated, because the contents being transferred are not encrypted.  So no VM network traffic should be on that vSwitch, and the physical switch should be isolated too.
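
    As mentioned above, the CHAP ACL can be added from the EqualLogic CLI while the volume is online. A rough sketch (the volume name, CHAP user and password are placeholders, and the exact syntax should be checked against the CLI Reference for your firmware):

    > chapuser create esxchap password Examp1ePassw0rd
    > volume select vmvol01
    (vmvol01)> access create username esxchap
    (vmvol01)> access show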

  • FC and iSCSI

    I have an IBM BladeCenter with 4 blades running ESXi 5.1 Enterprise.

    The blades are connected to storage via an FC (Fibre Channel) switch.

    I added a new SAN to the BladeCenter, but due to some unexpected port licensing problems with my Cisco MDS 9000 FC switch, I can only connect one of the 2 FC links to my blades right now.

    The SAN also supports iSCSI, so my question is: can I set up multipathing with both FC and iSCSI, or is that a bad idea?

    Hi friend

    Here's a thread that covers your question:

    fibre channel - Is it possible to multipath FC and iSCSI? - Server Fault

  • Recommend a SCSI-to-iSCSI bridge

    Hi guys,

    Currently I have an ADIC FastStor 2 tape autoloader connected to a physical machine running Backup Exec 12.5, which backs up our VMs with in-guest agents. Our ESX hosts run ESX 4.1 on 4 HP DL380 G5 servers.

    We are looking into converting to a more disk-focused VM backup approach over the next 9 months, but we have a short-term need to upgrade our current solution quickly so that we can retire the old hardware (it has long been out of support and our backups are starting to fail). For that, we're looking at a quick upgrade to BE 2010 R2 (since we already have support) and putting the media server on a virtual machine. What remains to be seen is how to connect it to the tape library. A SCSI-to-iSCSI bridge seems to be the best option to do it quickly and with the least headache.

    Does anyone have experience with these devices? Recommendations for a particular one? The current tape drive uses a 68-pin Ultra2 Wide SCSI connection.

    Thank you

    ...After researching the forums and seeing the difficulty people have, I wasn't convinced.

    The 'problem' posts I see most often are from people using non-Adaptec SCSI cards and/or non-parallel-SCSI tape drives.  The most recent posts I've seen are for SATA or SAS tape drives.  Basically people trying to use 'new' tape drives rather than reusing 'old' tape drives.

    Getting this done quickly is the main concern here. The money is available, but it can't be a ridiculous amount, since this is mostly going to be thrown away next year (otherwise I'd just go straight for an iSCSI tape library).

    Then I'd recommend a SCSI card, because it would be cheaper than a bridge.  Or, if you have an existing workstation, you can re-task it to host your tape drive for a while until you get the rest of your new backup equipment.  Used and refurbished workstations can be had for well under $500 and often as low as $200, and would be more than powerful enough to host a tape drive.  Because you can reuse the workstation after the new equipment arrives, you could get something better than the bare minimum for this project.  Maybe that's a better use of the money than an iSCSI bridge that you're going to "throw away".

  • VCB server with both FC and iSCSI connections

    Hello.

    Does anyone know if a VCB implementation that has both FC and iSCSI LUNs presented to the proxy server is possible/supported?  I see no technical reason why it wouldn't work, but I'm trying to find out whether or not this is a supported configuration.

    Thank you!

    I don't see why that would be a problem from a functionality standpoint, but if you didn't mind the performance hit you could just use the nbd transport mode and not worry about presenting LUNs at all.  I guess hotadd would probably work in the same way as well.


  • How to save and reuse the network and iSCSI configuration when reformatting?

    We have to reformat all the physical servers and reinstall vSphere (Enterprise edition).

    We would like to avoid rebuilding the whole network and iSCSI configuration on each physical machine.

    Is there a way to save the configuration and apply it again to the reformatted server?

    Thank you

    Tonino

    Hello

    You can save your configuration with the vicfg-cfgbackup.pl command, run against each host with --server; a sketch follows at the end of this reply.

    Frank
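
    For example, a minimal sketch using the vSphere CLI (the host name and file paths are placeholders; this applies to ESXi hosts, and the restore generally expects the host to be on the same build it was backed up from):

    # save the host configuration to a file
    vicfg-cfgbackup.pl --server esx01.example.local -s /backups/esx01.cfg

    # after reinstalling, load the saved configuration back onto the host
    vicfg-cfgbackup.pl --server esx01.example.local -l /backups/esx01.cfg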

  • FC array and iSCSI array on the same host

    We are in the process of migrating off a legacy Fibre Channel SAN to a new 10Gig iSCSI array and want to use Storage vMotion, via a host connected to both arrays, to move the data from one array to the other.  This host is running ESX 3.5.  I couldn't find any documentation that comes out and says you cannot use two different storage methods together.  Is this possible in 3.5, vSphere 4, both, or neither?

    Quite possible.

    I have an OpenSolaris storage server at home presenting LUNs to ESX over FC/iSCSI and NFS. I use Storage VMotion to move virtual machines between datastores all the time. On 3.5 you have to use the command line or a third-party SVMotion GUI; in 4.0 the SVMotion GUI is built in. (A command-line sketch follows below.)
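
    To illustrate the 3.5 command-line route, here is a minimal sketch using the svmotion tool from the Remote CLI (the vCenter address, datacenter, VM path and target datastore are placeholders; check the exact options against your Remote CLI version):

    svmotion --url=https://vcenter.example.local/sdk --username=admin \
        --datacenter=DC1 \
        --vm='[old_fc_ds] myvm/myvm.vmx:new_iscsi_ds'

    You can also run it with svmotion --interactive to be prompted for each value.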

  • NFS or iSCSI design options

    I have blades with 6 NICs and a NetApp SAN.

    There will probably be a requirement for some VMs to access iSCSI storage directly, plus a VCB proxy on an HP server for backups.

    I would like to use NFS, if appropriate, given the ability to easily access backup snapshots rather than restoring from tape or restoring a whole LUN.

    So I'm thinking:

    1 NIC for the Service Console

    1 NIC for vMotion

    2 NICs teamed for VM network access

    2 NICs for iSCSI/NFS

    Will VMs running iSCSI be on the same vSwitch and physical NICs as the ESX host's NFS storage access?

    Does that sound OK, or would going with iSCSI datastores be a better option?

    Hey,

    I think the comments above have covered the first vSwitch.

    Now, for the others...

    iSCSI traffic from a virtual machine goes over a 'Virtual Machine' portgroup.  So if you install, for example, the MS iSCSI initiator in a guest, that traffic would go out over a VM portgroup.

    The ESX iSCSI traffic goes over the VMkernel/Service Console port groups.  Remember, if you use the software iSCSI initiator, the Service Console needs to be able to reach the iSCSI target.  (A vSwitch sketch is at the end of this reply.)


    ~ y
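
    To make the above concrete, here is a minimal sketch of a shared IP-storage vSwitch with both a VMkernel port (for ESX NFS/iSCSI) and a VM portgroup (for in-guest iSCSI initiators), run from the ESX console (the vSwitch name, portgroup names, vmnics and addresses are placeholders):

    esxcfg-vswitch -a vSwitch2                      # create the vSwitch
    esxcfg-vswitch -L vmnic4 vSwitch2               # uplink the two storage NICs
    esxcfg-vswitch -L vmnic5 vSwitch2
    esxcfg-vswitch -A IPStorage vSwitch2            # VMkernel portgroup for NFS / ESX iSCSI
    esxcfg-vmknic -a -i 10.0.50.11 -n 255.255.255.0 IPStorage
    esxcfg-vswitch -A GuestISCSI vSwitch2           # VM portgroup for in-guest initiators

    On ESX classic you would also need a Service Console port on this network for the software iSCSI initiator, as noted above.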

  • Best method for OpenFiler: NFS or iSCSI?

    I'm doing a proof of concept using a box running OpenFiler as my NAS.  I know that iSCSI is the usual way to use OpenFiler for VMware purposes.

    I'm wondering whether NFS is also acceptable.  Are there problems with HA?

    Hi Richard,

    I have used both options with OpenFiler, and both work well. The main differences:

    - iSCSI can roughly double the throughput (almost 70 MB/s).

    - iSCSI locks at the block level (NFS locks at the file level), so it's better for many simultaneous operations against the filesystem.

    - iSCSI also supports RDMs and VMFS (NFS does not).

    - You can boot an ESX server from an iSCSI LUN (if you use a hardware initiator).

    - NFS is easier to deploy and easier to share between different operating systems.

    So... iSCSI is best for production environments, and NFS works well as bulk storage for ISOs and templates, and for portable, easy-to-manage backups.

    Either way, check how VCB, VMotion, DRS, HA, and VM startup behave.

    Hope this helps.

    Ciao

  • Dell M4110x - management network and iSCSI. Clarification of the configuration steps; the documentation is ambiguous at best.

    Hi guys,

    I'm having a lot of trouble translating what I know about the PS6000 arrays to the new M4110x.  Here's what I'm building:

    I want my iSCSI traffic completely isolated from all other traffic, and I want to use the CMC network to manage the array.  It should be a simple configuration, but the Dell documentation for it is worse than useless.  It confuses everyone who reads it.  Why is this?

    It seems that I should be able to assign the management IP addresses using the CMC, according to Dell:

    Step 1.  Initialize the storage

    * Once in the CMC, right-click on the storage and open the storage initialization GUI.

    * Member name: MY_SAN01
    * Member IP: 192.168.101.10
    * Member gateway: 192.168.101.254
    Group name: MY_SAN
    Group IP address: 192.168.101.11
    Group membership password: groupadmin
    * Password Admin group: groupadmin

    That sounds simple enough, and when I apply it I assume I will be disconnected from my M4110x, simply because it currently sits on a separate network (net 2 in the image above).  Now how do I set up the management IP address on my CMC network (net0 in the picture above)?

    Step 2.  Set the management port IP

    According to the Dell documentation, I have to:

    To set the management port:

    * Open a telnet (ssh) session from a computer or console that has access to the PS-M4110 array. The array must be configured beforehand.
     
    * Connect to the PS-M4110 module using the following racadm command: racadm server 15 connect
     
    * Connect to the PS-M4110 array as grpadmin

    Once I am in:

    Enable the management controller port using the following commands in the CLI:
    0 > member select MY_SAN01
    1. (array1) > eth select 1
    2. (array1 eth_1) > ipaddress 10.10.10.17 netmask 255.255.255.0
    3. (array1 eth_1) > up
    4. (array1 eth_1) > exit
    5. (array1) > grpparams
    6. (array1 (grpparams)) > management-network ipaddress 10.10.10.17

    (array1 (grpparams)) > exit

    Is my interpretation correct?  Now my questions:

    1. In step 2, substep 1 - how do I know which Ethernet interface to use?  Does step 1 automatically assume eth0?

    2. Am I correct in using the same IP address in both step 2 substep 2 and substep 6?  Or do I have to assign a different IP address for these?  Maybe 10.10.10.18.

    3. In step 2 substep 6 there doesn't seem to be a network mask. Is that correct?

    4. Compared to the PS6000E - there I set up an IP address for each controller (so 2) and then assigned an IP address for the group, i.e. 3 IP addresses.  On this M4110 it seems that I have only one controller.  Is that correct?  The specifications make a point that there are 2 controllers.  What happened to the IP address of the 2nd controller?

    P.S.

    I intend to build a VMware cluster using Dell's multipathing module, and I have built it to the DSC, but a Dell technician set up the array initially and did not set up a dedicated management port. The required configuration routes management traffic over the iSCSI network. That is not recommended, and I don't want to set it up this way.

    Currently this is a blocking problem and I need to get past it ASAP.  I work with a large systems integrator in Texas and plan on ordering these systems built this way from them.  That means I must be able to explain to them how to proceed.  This issue is standing in the way of progress, and I really hope I can get a satisfactory answer from this forum.  Thanks for any helpful replies.

    I think I have the answers to my own questions:

    1. YES.  Step 1 automatically assumes eth0.  There are TWO Ethernet interfaces; eth1 is disabled by default, and unless you use step 2 to set up the management port, the second Ethernet interface is never used.

    2. NO.  I can't use the same IP address on both lines.  In substep 6 I need to use a different IP address on the same network; 10.10.10.18 would work fine.

    3. YES.  That is correct.  Substep 6 assumes the network mask I entered in substep 2.

    4. This one is tricky.  There is NO WAY to configure active/active on these arrays.  There are 2 controllers, but one "stays asleep" unless the other fails.  The IP address is actually assigned to an abstraction layer, which maintains it.  When one controller fails, the other "wakes up" and simply starts accepting traffic; it doesn't care what its IP address is.

    One more point.  Now that my array is initialized and my interfaces are configured, I need to know what IP address to point my ESXi hosts at for their storage.  Use the group IP address assigned in step 1, which is 192.168.101.11 (there is a typo in the original post).

  • Hyper-V and iSCSI network

    Hello

    We are evaluating a migration from VMware to Hyper-V.

    I'm trying to understand best practices for guest iSCSI networking.

    I have 4 physical 1 Gbit ports on the host dedicated to iSCSI traffic.

    I'd like to use all 4 for host iSCSI traffic (VHDX volumes).

    Now I thought about sharing 2 of them by creating 2 logical switches in VMM and adding 2 virtual network adapters for the host to use.

    The new virtual network adapters show up as 10 Gbit, and I don't see an option to change them to 1 Gbit. It now seems that the system prefers the 10 Gb adapters, and my other two physical ports are no longer used.

    I tried making all 4 ports virtual, but somehow ASM 4.7 EPA does not see the virtual adapters. It just says "no network adapters found" when opening the MPIO settings.

    Should I just drop this idea of sharing, and use 2 ports for the host and 2 for guest iSCSI, or is there a workaround?

    It is recommended to dedicate at least 2 interfaces on the host to the iSCSI network.  In addition, you should install the Dell EqualLogic Host Integration Tools for Microsoft and enable the MPIO feature.  To enable MPIO in the guest operating system, you must create at least two virtual switches that are bound to the physical SAN-facing adapters on the Hyper-V host.  Virtual machines must be configured with at least two NICs attached to these virtual switches.  Then, from the guest operating system, configure the interfaces with the iSCSI network IP, subnet, etc.  You should also install the Host Integration Tools and the MPIO DSM feature in the guest operating system, if it is running Windows.  If you use jumbo frames, ensure that all NICs used for iSCSI (physical NICs, virtual NICs, guest OS NICs) have jumbo frames enabled.  (A short PowerShell sketch follows at the end of this reply.)

    Regarding ASM v4.7 EPA not seeing network adapters for MPIO - there is a known ASM/ME v4.7 bug on Windows Server 2012 R2 related to the EPA build.  It is likely that your MPIO configuration is fine (you can check it via the EqualLogic MPIO tab in the Microsoft iSCSI initiator); ASM/ME just has a problem displaying the information.  This bug has been fixed in the upcoming recommended v4.7 GA release of HIT/Microsoft, which is due to be published very soon.
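
    As a rough sketch of the host side (the adapter, switch and VM names are placeholders, and this assumes the in-box Hyper-V PowerShell module rather than VMM):

    # create two external switches bound to the physical SAN-facing NICs,
    # without sharing them with the management OS
    New-VMSwitch -Name "iSCSI-1" -NetAdapterName "SAN NIC 1" -AllowManagementOS $false
    New-VMSwitch -Name "iSCSI-2" -NetAdapterName "SAN NIC 2" -AllowManagementOS $false

    # give the guest one vNIC on each iSCSI switch for in-guest MPIO
    Add-VMNetworkAdapter -VMName "SQL01" -SwitchName "iSCSI-1"
    Add-VMNetworkAdapter -VMName "SQL01" -SwitchName "iSCSI-2"

    Inside the guest you would then assign the iSCSI subnet addresses to those two adapters and let the HIT/Microsoft MPIO DSM manage the paths.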

  • NFS and native VLANs

    Hi all

    I have two different port channels per fabric interconnect. On one port channel I have several VLANs assigned for virtual machine traffic, with 1 VLAN left as the default (untagged) on the vNIC. Unfortunately, in our environment that native VLAN is used for some virtual machine traffic. The second port channel is connected to Nexus 5k switches but only allows an NFS VLAN.

    The problem I am facing is that if I enable the NFS port channel, some of my virtual machine traffic stops; it seems the traffic arrives via the public port channel but tries to return via the NFS port channel.

    I want to use the VLAN group feature to apply a VLAN-to-port-channel mapping. I am able to create a rule for the NFS VLAN fine, but I'm not able to select the default VLAN in my public group to create a mapping rule.

    If I just create a group for NFS, will it automatically send everything else through the other port channel? (This is essentially what I want.) Or, if I create one group and not the second, will it only pin the NFS VLAN and leave the public traffic in the same situation, bouncing between the port channels?

    Thank you for your help and assistance

    Contact me directly if necessary

    an a v v a l i t o r o n t o c a.

    Hello

    So altogether you have created 10 VLANs, including the default VLAN, and you are able to add only 9 VLANs to this group...

    You mean that you want to add the default VLAN ID to the particular group you created...?

    You cannot add the default VLAN ID to VLAN groups, but there is an option: you can change the default VLAN ID from 1 to another number, then create a new VLAN with ID 1 and you will be able to add that one to the group.

    Before making this change, make sure that the default VLAN ID 1 is not used by other servers, because if it is, changing it will disrupt their traffic.
