FAS2050 performance (iSCSI vs. FC vs. NFS)

We have a NetApp FAS2050.  It will use Fibre Channel to connect to other hosts (not VMware).

To connect it to our VMware environment, we could use Fibre Channel, iSCSI, or NFS.

It seems that NFS would be easier in terms of deduplication (and thin provisioning).  Are there any performance reports comparing iSCSI vs. FC vs. NFS?  So far I've read the VMware docs "Performance Best Practices for VMware vSphere 4.0" and "Scalable Storage Performance", but they don't focus on the NFS performance question.

For background, the hosts are Dell 2950s running ESX 4 with about 10 virtual machines per host.

Thank you

Assuming that our throughput would stay under 1 Gb and we can keep the CPU overhead in check, it seems that NFS will be sufficient.  Am I reading that correctly?

Yes, but that conclusion rests entirely on those assumptions.  It is always better to know the workloads and their needs before making these decisions.  You don't want to find out after the fact that your throughput or processing needs were (or grew to be) greater than expected.  The chances are pretty good that you would be fine with any of the protocols on NetApp storage, but it's always better to know for sure.

The other thing to keep in mind is that, while NFS provides operational simplification, it is also an extra-cost licensed option on the NetApp.  If you already have a Fibre Channel environment, strong skills (or staff) in place, and that infrastructure can handle the growth, then it might be worth investigating its use as well.
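If you do go the NFS route, the basic workflow on the controller and the hosts is short. Here is a minimal sketch; the volume name, export, filer IP and datastore label are hypothetical, sis is the Data ONTAP 7-mode deduplication command, and esxcfg-nas runs in the ESX 4 service console:

    # On the FAS2050 (Data ONTAP 7-mode): enable and run deduplication on the VM volume
    sis on /vol/vm_nfs
    sis start -s /vol/vm_nfs     # -s scans the data already in the volume

    # On each ESX 4 host: mount the export as an NFS datastore and verify it
    esxcfg-nas -a -o 10.0.0.50 -s /vol/vm_nfs vm_nfs_ds
    esxcfg-nas -l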

Tags: VMware

Similar Questions

  • iSCSI performance - LeftHand iSCSI storage

    Hello

    I have a virtual machine with a number of iSCSI disks. I'm looking to analyze disk performance in vCenter.

    In the performance chart legend the disk objects appear as "LEFTHAND iSCSI Disk (naa.6000eb3710 etc...)".

    My question is: how do I work out which datastore each of these devices corresponds to, if that makes sense?

    Thank you

    PM

    This is a limitation of the performance charts; there is no chart that shows both the datastore name and this device name. You have to note the datastore name and the corresponding device ID under host -> Configuration -> Storage, and match it against the Object column in the performance chart.

    You can also pop the chart out using the "Pop-up chart" button, next to the Save, Refresh and Print icons.
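    If you have service console (or vMA) access, the same mapping can also be pulled from the command line; a rough sketch using two standard ESX 4 utilities (output trimmed here):

        esxcfg-scsidevs -m    # lists each VMFS datastore with the naa.* device/partition it lives on
        esxcfg-mpath -b       # lists each LUN with its naa identifier and runtime paths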

  • Poor iSCSI performance

    Hi all. I use ESX 4 connected to an OpenSolaris storage box. On the OpenSolaris side I share one directory via NFS and one LUN via iSCSI. When I test from Windows I get roughly the same performance on both (near 100 MB/s), but when I test from a VM on each datastore I get 67 MB/s for NFS (which is OK) but only 34 MB/s with iSCSI.

    Is it possible to increase the iSCSI performance or not?

    I get 100 MB/s without any problem. You should post your COMSTAR configuration so that we can see where the problem is. Do you have an SSD as a slog? As far as I know, both NFS and iSCSI use synchronous writes with ESX.
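    A quick way to check both sides (the pool name tank and the SSD device name c1t2d0 are only examples): look for a separate log device on the pool, add one if a suitable SSD is free, then re-run a crude sequential write test from a Linux test VM on each datastore.

        # On the OpenSolaris box: does the pool already have a separate log (slog) device?
        zpool status tank

        # If a suitable SSD is available, add it as a dedicated log device
        zpool add tank log c1t2d0

        # Inside a Linux test VM on each datastore: crude 1 GB sequential write test
        dd if=/dev/zero of=/testfile bs=1M count=1024 oflag=direct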

  • NFS/iSCSI vmkernel ports - different VLANs?

    I have a question: if you already have a vmkernel port defined for NFS (in VLAN X), and you want to set up iSCSI vmkernel port(s) on the same physical network adapter, would you put the iSCSI vmkernel port in the same VLAN as NFS, or in a separate VLAN Y for iSCSI?

    If you have found this or any other answer useful, please consider awarding points using the Helpful or Correct buttons.

    I would create separate port groups (and VLANs) for the two types of traffic.  It's simple, and it will stand the test of time and changes in your iSCSI environment.  You can add NICs later, and you can move the iSCSI network onto its own physical switch.

    My situation is a little different from yours: I have NFS coming in over vPC on Nexus 2148s (with the 1000V) and iSCSI traffic coming in via 3750s (with the 1000V).  The NFS traffic uses vPC, and the iSCSI traffic uses MAC pinning and iSCSI MPIO.  Very different profiles.  A while ago I was in a situation similar to yours, and had I taken the simple approach of sharing the same VLAN, I would be regretting it and untangling it right about now.
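    If it helps, carving out a separate VLAN for a new iSCSI vmkernel port on an existing vSwitch is only a few commands from the ESX service console; the vSwitch name, port group name, VLAN ID and addressing below are just examples:

        esxcfg-vswitch -A iSCSI-vmk vSwitch1                         # add a port group for iSCSI
        esxcfg-vswitch -v 20 -p iSCSI-vmk vSwitch1                   # tag it with its own VLAN
        esxcfg-vmknic -a -i 10.0.20.11 -n 255.255.255.0 iSCSI-vmk    # vmkernel port for iSCSI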

    Andrew.

  • RDM vs. iSCSI initiator in the guest

    Hi all

    We are looking at a SharePoint deployment, and the MS index server requires that its application data sit on a LUN.
    As I see it, that gives me two options:

    - Use RDM

    - Use the iSCSI initiator in the guest VM

    Is there a reason why I should not use the iSCSI initiator in the guest?
    My reasoning is that it is a much simpler configuration.
    We have to go through a lot of bureaucratic red tape before we can make changes to the production environment,
    so I would rather not reconfigure ESX for iSCSI. We use NFS as our back-end storage.

    Have you seen performance problems when using the iSCSI initiator in the guest VM?
    I didn't notice anything during a vMotion, for example. What should I look out for?

    Any thoughts appreciated!

    See you soon

    Hello

    If your back-end spindles are the same, then whether you use in-guest iSCSI or RDM depends on how much of the storage you want to manage from inside the virtual machine. Using the software iSCSI initiator in a VM means you have to deal with it inside the guest, when you upgrade, and so on. If you use RDM, you leave that management to vSphere.

    That is also one of the reasons I prefer RDM: there is less for me to manage inside the VMs, and if my vSphere host uses iSCSI HBAs or can offload iSCSI (like HP Flex-10), then I win all the way around.
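    If the RDM route wins out, creating the mapping file itself is a one-liner from the service console; the device ID, datastore and VM folder below are placeholders (-z creates a physical compatibility RDM, -r a virtual one):

        # Create a physical-mode RDM pointer for the SharePoint index LUN in the VM's folder
        vmkfstools -z /vmfs/devices/disks/naa.<device-id> /vmfs/volumes/<datastore>/<vm>/index_rdm.vmdk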

    Best regards

    Edward L. Haletky

    VMware Communities moderator, VMware vExpert,

    Author: VMware vSphere and Virtual Infrastructure Security; VMware ESX and ESXi in the Enterprise, 2nd Edition

    Podcast: The Virtualization Security Podcast. Resources: The Virtual Virtualization Library

  • Connecting a NAS (iSCSI) to vSphere using a crossover Ethernet cable

    Hello

    I have an HP ProLiant DL360 G5 in a data center (colocation) connected directly to the internet. To back up all the VMs I would like to attach a NAS (a QNAP with iSCSI) to the server. But given that the server is only connected to a 100 Mbps switch (the provider only offers 100 Mbit), I am looking for a direct-attach solution to get better performance.

    On my ProLiant server I use the internal dual-port NIC (vmnic0 and vmnic1) and an additional dual-port NC360T NIC (vmnic2 and vmnic3). I built two vSwitches for redundancy: vSwitch0 with vmnic0 and vmnic2, and vSwitch1 with vmnic1 and vmnic3. Each vSwitch has its own Service Console with a public IP.

    Because I would like to keep the configuration above, it occurred to me to buy an additional NIC (an NC110T with 1 port) and run a crossover cable directly to the QNAP NAS. The NAS has two NICs, so it can be attached to the ESX host as described and also directly to the internet (to manage it from outside).

    But now I don't know how to configure it. What do I have to do so that the ESX host can reach the NAS, and what kind of IP address do I need to set on the NAS so it can talk to ESX?

    Do I have to configure a vSwitch2 (with the NC110T physical adapter) and a new Service Console with a private IP address, e.g. 192.168.1.1, and then set 192.168.1.2 on the NAS port that connects to ESX? Should it then be possible to work with iSCSI, and can I then add the storage in vSphere?

    I would really appreciate it if someone who understands this kind of setup could give me some advice!

    Thank you very much

    Best regards

    Phil

    It sounds like a good plan, and you have the process down about right.

    • After installing the new card, just as you did for the initial configuration, you should see the additional network adapter.

      • Create a new vSwitch for the internal network segment that connects the NAS and ESX, and add that network adapter to it.

      • Then create the VMkernel/console port with an IP address on the private network.

    • Now the two devices should be able to reach each other.

    • In the QNAP, create your iSCSI target or your NFS volume.

    • For iSCSI, go to Storage Adapters -> Properties, add the IP address of the NAS, then close and let it discover.

      • This should find the paths to the target.

      • Then go into Storage, add the iSCSI volume and format it.

    • For NFS, go straight to Storage, add the NAS name/address and the volume, and mount it as a storage volume.

    All storage will then be visible and usable by ESX.
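    From the service console on classic ESX, the same steps can be scripted roughly as follows; the vSwitch name, uplink, port group, 192.168.1.x addressing and share name are just examples, and on ESXi you would do the equivalent in the vSphere Client:

        esxcfg-vswitch -a vSwitch2                                   # new vSwitch for the NAS segment
        esxcfg-vswitch -L vmnic4 vSwitch2                            # attach the NC110T uplink
        esxcfg-vswitch -A Storage vSwitch2                           # port group for storage traffic
        esxcfg-vmknic -a -i 192.168.1.1 -n 255.255.255.0 Storage     # VMkernel port on the private subnet
        vmkping 192.168.1.2                                          # confirm the NAS answers over the crossover link

        # iSCSI: enable the software initiator, then add 192.168.1.2 under
        # Storage Adapters -> iSCSI Software Adapter -> Dynamic Discovery and rescan
        esxcfg-swiscsi -e

        # NFS alternative: mount the NAS export directly as a datastore
        esxcfg-nas -a -o 192.168.1.2 -s /share QNAP_NFS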

    Steve Puluka

    JNCIA-ER

    JNCIA-EX

    Senior network administrator

    Liberty Dialysis

    http://puluka.com/home

  • Can Cisco DCNM back up to NFS or FTP from a stand-alone DCNM server?

    Can anyone suggest how I can back up the DCNM 7.2 database to NFS/FTP from a stand-alone server?

    What is the procedure for performing the backup to an NFS/FTP server?

    Are there any dependencies for taking a backup to remote servers from a stand-alone (non-HA) DCNM server?

    Hello

    DCNM includes a database backup utility in the bin directory under $INSTALLDIR/dcm.  There are 2 versions of the script, one for Oracle and one for Postgres databases, and both can be run from the server command line.  These scripts create a database dump (.dmp) file which you can then FTP off to another server, if you wish.

    For reference, here is the documentation on DCNM database backups:

    http://www.Cisco.com/c/en/us/TD/docs/switches/Datacenter/SW/7_2_x/Fundam...

    Thank you

    Eric

  • Reduce the number of iSCSI multipathing paths

    I'm experimenting with iSCSI performance on a NAS. Currently I have 4 NICs on the host and 4 NICs on the NAS, which leaves me with 16 paths per iSCSI datastore. I would like to limit multipathing so that I end up with 4 paths. I would like NIC1 on the host to go to NIC1 on the NAS, NIC2 on the host to NIC2 on the NAS, and so on. Is this possible? I saw that I could disable paths, but I didn't see any way to say which path goes to which NIC, so I might end up with 4 paths all on the same network adapter. Any help would be appreciated.

    I found that if I split each of the 4 NICs I use for iSCSI traffic onto a different subnet, and then put the 4 NICs on the NAS into the same subnets, I could get the number of paths down to 4. However, with the VMware software iSCSI adapter there is no way to get more than 1 NIC's worth of throughput: Round Robin only drives 1 NIC's worth of throughput at a time, so I was stuck using only 25% of the potential throughput of the NAS for iSCSI traffic.
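    For reference, on ESX/ESXi 4.x the usual way to pin iSCSI sessions to specific NICs is software iSCSI port binding, and the path selection policy is set per device; a sketch with placeholder vmk, vmhba and device names:

        # Bind each iSCSI vmkernel port to the software iSCSI adapter (one per NIC/subnet)
        esxcli swiscsi nic add -n vmk1 -d vmhba33
        esxcli swiscsi nic add -n vmk2 -d vmhba33
        esxcli swiscsi nic list -d vmhba33                              # verify the bindings

        # Set the device to Round Robin so I/O rotates across the remaining paths
        esxcli nmp device setpolicy --device naa.<device-id> --psp VMW_PSP_RR
        esxcli nmp device list                                          # confirm the active policy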

  • Broadcom BCM57800 10 Gb NIC and iSCSI

    Hi guys,

    I have an ESXi 5.1 server with a Broadcom NetXtreme II BCM57800 NIC and I would like to reach my storage via iSCSI.

    This adapter supports iSCSI offload, but I do not understand whether it is mandatory to also add the "iSCSI software adapter" in ESXi or not.

    Thanks in advance.

    Hi gianlucab,

    You would only use the iSCSI software adapter if you need the ESXi kernel itself to perform the iSCSI operations. You would use it with Ethernet adapters that do not support iSCSI offload.
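    On 5.1 you can check what the host already presents before deciding; a quick sketch (adapter names will differ per host):

        esxcli iscsi adapter list                  # dependent/independent hardware iSCSI adapters show up here
        # Only if you end up using a NIC without offload:
        esxcli iscsi software set --enabled=true   # enable the software iSCSI adapter
        esxcli iscsi software get                  # confirm whether the software adapter is enabled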

  • VDR - adding a network NFS share

    Hello

    I've set up an NFS share on a Dell NAS running Windows Storage Server 2008, with Active Directory Lightweight Directory Services (AD LDS) providing UNIX-to-Windows user name mapping.

    I can add that NFS share as a datastore on my ESXi host with no problems; but for some reason I cannot add the same NFS share as a destination on the VDR appliance running on an ESXi host, even though that host is the one running VDR!

    When adding an NFS datastore to a host, the dialog box asks for the following:

    • Server: 111.222.333.444
    • Folder: MyNFSShare
    • Datastore name: MyNFSDatastore

    When adding a destination in VDR, the dialog box asks me for the following:

    • URL: \\111.222.333.444\MyNFSShare
    • Username: <NFS share username>
    • Password: <NFS share password>

    I also tried entering the vCenter administrator credentials, but it makes no difference, and the event log on the NAS server reports a successful "Audit success" security event for the user's connection anyway.

    Anyone can shed some light on this?

    VDR does not accept NFS destinations, but it does accept CIFS.  So, instead of trying to add an NFS target, add a real Windows share using Windows (or domain) credentials.

    I also seem to get better performance from a Linux NFS instance than from Windows, I think because the Windows implementation isn't technically compliant.  If I watch the console while formatting a VMDK hosted on a Windows-based NFS target, the connection seems to constantly drop and reconnect.  I am now running it on a Windows 2008 R2 server and it works well enough for my purposes, which is hosting the VDR VMDKs so I can then spool them off to tape.

  • Question on vSphere iSCSI SAN network infrastructure

    Hi all

    We are a small company with a small virtualized environment (3 ESX servers) and are about to buy an EMC AX-5 SAN (the Ethernet model, not FC) to implement some of the high-availability features of vSphere. My question is about the networking for the SAN: we have dual Cisco 2960G Gigabit switches and dual Cisco ASA 5510 firewalls in a redundant configuration.

    I understand that the best practice is to put iSCSI traffic on a switch separate from all other LAN traffic. However, I do not have the knowledge and experience to determine what real difference a separate switch would make versus the plan of creating a separate VLAN on the Cisco switches dedicated to iSCSI traffic only. I would of course make sure the iSCSI traffic sat on its own dedicated VLAN, not just a logical separation shared with other logical VLANs (subinterfaces). It is difficult for me to understand how a gigabit port on an isolated VLAN on Cisco kit would perform much worse than a port on a dedicated Cisco switch. But then again, I don't know what I don't know...

    Thoughts and input would be appreciated here: I'm trying (very) hard not to drop another $6K+ on another pair of Cisco switches, unless skipping them would significantly compromise iSCSI SAN performance.

    Appreciate your time,

    Rob

    You have 2 SPs, each with 2 iSCSI ports, for example:

    SPA: 10.0.101.1 on VLAN 101 and 10.0.102.1 on VLAN 102

    SPB: 10.0.101.2 on VLAN 101 and 10.0.102.2 on VLAN 102

    On your ESX hosts, create 2 vSwitches, each with a vmkernel port on one of the iSCSI networks and each with a single physical NIC.
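    Once the two vSwitches and vmkernel ports are in place, a quick sanity check from each host (using the example addresses above):

        vmkping 10.0.101.1    # SPA, VLAN 101
        vmkping 10.0.102.1    # SPA, VLAN 102
        vmkping 10.0.101.2    # SPB, VLAN 101
        vmkping 10.0.102.2    # SPB, VLAN 102

        esxcfg-mpath -b       # after a rescan, each LUN should show one path per SP port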

    See also:

    http://www.DG.com/microsites/CLARiiON-support/PDF/300-003-807.PDF

    André

  • Dead paths to the iSCSI target

    What I had:

    Host 1: ESX 3.5i

    2 host: Solaris 10 (SunOS 5.10 Sun Generic_138889-02 i86pc i386 i86pc)

    2 ZFS shares on it - 1 iSCSI share and 1 NFS share.

    What I did:

    Upgraded to ESX 4i.

    What I have so far:

    Everything works but iSCSI.

    It shows "dead path" for me, but vmkping to the Sun host is OK.

    Tail of the log dump here:

    20 June at 16:24:07 vmkernel: 0:01:22:15.659 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_TransportConnSendPdu: vmhba37:CH:0 T: 0 CN:0: didn't request passthru queue: no connection

    20 June at 16:24:07 iscsid: send_pdu failed rc - 22

    20 June at 16:24:07 vmkernel: 0:01:22:15.659 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba37:CH:0 T: 0 CN:0: iSCSIconnection is marked 'OFFLINE '.

    20 June at 16:24:07 iscsid: reported core connection iSCSI error state (1006) (3) 1:0.

    20 June at 16:24:07 vmkernel: 0:01:22:15.659 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess ISID: 00023 000001 d TARGET: iqn.1986-03.com.sun:02:3dd0df28-79b2-e399-fe69-e34161efb9f0 TPGT: TSIH 1: 0

    20 June at 16:24:07 vmkernel: 0:01:22:15.659 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn CID: 0 L: 10.0.0.2:51908 r: 10.0.0.1:3260

    20 June at 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StartConnection: vmhba37:CH:0 T: 0 CN:0: iSCSI connection is marked as "ONLINE."

    20 June at 16:24:10 iscsid: connection1:0 is operational after recovery (2 attempts)

    20 June at 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StartConnection: Sess ISID: 00023 000001 d TARGET: iqn.1986-03.com.sun:02:3dd0df28-79b2-e399-fe69-e34161efb9f0 TPGT: TSIH 1: 0

    20 June at 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StartConnection: Conn CID: 0 L: 10.0.0.2:55471 r: 10.0.0.1:3260

    20 June at 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_ConnSetupScsiResp: vmhba37:CH:0 T: 0 CN:0: Invalid residual of the SCSI response overflow: residual 0, expectedXferLen 0

    20 June at 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_ConnSetupScsiResp: Sess ISID: 00023 000001 d TARGET: iqn.1986-03.com.sun:02:3dd0df28-79b2-e399-fe69-e34161efb9f0 TPGT: TSIH 1: 0

    20 June at 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_ConnSetupScsiResp: Conn CID: 0 L: 10.0.0.2:55471 r: 10.0.0.1:3260

    20 June at 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) iscsi_vmk: iscsivmk_ConnRxNotifyFailure: vmhba37:CH:0 T: 0 CN:0: connection failure notification rx: residual invalid. State = online

    20 June at 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) iscsi_vmk: iscsivmk_ConnRxNotifyFailure: Sess ISID: 00023 d 000001 TPGT TARGET:iqn.1986-03.com.sun:02:3dd0df28-79b2-e399-fe69-e34161efb9f0: TSIH 1: 0

    20 June at 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) iscsi_vmk: iscsivmk_ConnRxNotifyFailure: Conn CID: 0 L: 10.0.0.2:55471 r: 10.0.0.1:3260

    20 June at 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba37:CH:0 T: 0 CN:0: event CLEANING treatment

    20 June at 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess ISID: 00023 000001 d TARGET: iqn.1986-03.com.sun:02:3dd0df28-79b2-e399-fe69-e34161efb9f0 TPGT: TSIH 1: 0

    20 June at 16:24:10 vmkernel: 0:01:22:18.538 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn CID: 0 L: 10.0.0.2:55471 r: 10.0.0.1:3260

    20 June at 16:24:10 vmkernel: 0:01:22:18.789 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_TransportConnSendPdu: vmhba37:CH:0 T: 0 CN:0: didn't request passthru queue: no connection

    20 June at 16:24:10 iscsid: failure of send_pdu rc-22

    20 June at 16:24:10 vmkernel: 0:01:22:18.789 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba37:CH:0 T: 0 CN:0: iSCSIconnection is marked 'OFFLINE '.

    20 June at 16:24:10 iscsid: reported core connection iSCSI error state (1006) (3) 1:0.

    20 June at 16:24:10 vmkernel: 0:01:22:18.789 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess ISID: 00023 000001 d TARGET: iqn.1986-03.com.sun:02:3dd0df28-79b2-e399-fe69-e34161efb9f0 TPGT: TSIH 1: 0

    20 June at 16:24:10 vmkernel: 0:01:22:18.789 cpu1:11899) WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn CID: 0 L: 10.0.0.2:55471 r: 10.0.0.1:3260

    and so on.

    What did I do wrong, and how can I fix this?

    Out of curiosity, I went ahead and tried the Solaris target myself. It seems there is a problem in Solaris targets before "Solaris 10 update 7"; I tried u6 and I see the issue, and it is clearly a problem with the target.

    ...

    iSCSI (SCSI command)

    Opcode: SCSI Command (0x01)

    .0.. .... = I: Queued delivery

    Flags: 0x81

    1... .... = F: Final PDU in sequence

    .0.. .... = R: No data will be read from target

    ..0. .... = W: No data will be written to target

    .... .001 = Attr: Simple (0x01)

    TotalAHSLength: 0x00

    DataSegmentLength: 0x00000000

    LUN: 0000000000000000

    InitiatorTaskTag: 0xad010000

    ExpectedDataTransferLength: 0x00000000

    CmdSN: 0x000001ab

    ExpStatSN: 0x000001ad

    SCSI CDB Test Unit Ready

    LUN: 0x0000

    Command Set: Direct Access Device (0x00) (Using default commandset)

    Opcode: Test Unit Ready (0x00)

    Vendor Unique = 0, NACA = 0, Link = 0

    ...

    iSCSI (SCSI response)

    Opcode: SCSI Response (0x21)

    Flags: 0x82

    ...0 .... = o: No bidirectional read residual overflow

    .... 0... = u: No bidirectional read residual underflow

    .... .0.. = O: No residual overflow occurred

    .... ..1. = U: Residual underflow occurred  <<<<<=

    Response: Command completed at target (0x00)

    Status: Good (0x00)

    TotalAHSLength: 0x00

    DataSegmentLength: 0x00000000

    InitiatorTaskTag: 0xad010000

    StatSN: 0x000001ad

    ExpCmdSN: 0x000001ac

    MaxCmdSN: 0x000001ea

    ExpDataSN: 0x00000000

    BidiReadResidualCount: 0x00000000

    ResidualCount: 0x00000000

    Request in: 10

    Time from request: 0.001020000 seconds

    SCSI response (Test Unit Ready)

    LUN: 0x0000

    Command Set: Direct Access Device (0x00) (Using default commandset)

    SBC Opcode: Test Unit Ready (0x00)

    Request in: 10

    Time from request: 0.001020000 seconds

    Status: Good (0x00)

    ...

    14:02:12:42.569 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_ConnSetupScsiResp: vmhba35:CH:0 T:2 CN:0: Invalid residual of the SCSI response overflow: residual 0, expectedXferLen 0

    14:02:12:42.584 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_ConnSetupScsiResp: Sess ISID: 00023 000001 d TARGET: iqn.1986-03.com.sun:02:fa2cdf18-2141-e633-b731-f89f47ddd09f.test0 TPGT: TSIH 1: 0

    14:02:12:42.602 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_ConnSetupScsiResp: Conn CID: 0 L: 10.115.153.212:59691 r: 10.115.155.96:3260

    14:02:12:42.614 cpu2:7026) iscsi_vmk: iscsivmk_ConnRxNotifyFailure: vmhba35:CH:0 T:2 CN:0: Connection failure notification rx: residual invalid. State = online

    14:02:12:42.628 cpu2:7026) iscsi_vmk: iscsivmk_ConnRxNotifyFailure: Sess ISID: 00023 000001 d TARGET: iqn.1986-03.com.sun:02:fa2cdf18-2141-e633-b731-f89f47ddd09f.test0 TPGT: TSIH 1: 0

    14:02:12:42.644 cpu2:7026) iscsi_vmk: iscsivmk_ConnRxNotifyFailure: Conn CID: 0 L: 10.115.153.212:59691 r: 10.115.155.96:3260

    14:02:12:42.655 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba35:CH:0 T:2 CN:0: Treatment of CLEANING event

    14:02:12:42.666 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess ISID: 00023 000001 d TARGET: iqn.1986-03.com.sun:02:fa2cdf18-2141-e633-b731-f89f47ddd09f.test0 TPGT: TSIH 1: 0

    14:02:12:42.683 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn CID: 0 L: 10.115.153.212:59691 r: 10.115.155.96:3260

    14:02:12:42.731 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_TransportConnSendPdu: vmhba35:CH:0 T:2 CN:0: Didn't request passthru queue: no connection

    14:02:12:42.732 cpu3:4119) vmw_psp_fixed: psp_fixedSelectPathToActivateInt: "Unregistered" device has no way to use (APD).

    14:02:12:42.745 cpu2:7026) iscsi_vmk: iscsivmk_SessionHandleLoggedInState: vmhba35:CH:0 T:2 CN:-1: State of Session passed of "connected to" to "in development".

    14:02:12:42.756 cpu3:4119) vmw_psp_fixed: psp_fixedSelectPathToActivateInt: "Unregistered" device has no way to use (APD).

    14:02:12:42.770 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba35:CH:0 T:2 CN:0: iSCSI connection is marked as 'offline'

    14:02:12:42.782 cpu3:4119) NMP: nmp_DeviceUpdatePathStates: the PSP has not selected a path to activate for NMP device 'no '.

    14:02:12:42.794 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess ISID: 00023 000001 d TARGET: iqn.1986-03.com.sun:02:fa2cdf18-2141-e633-b731-f89f47ddd09f.test0 TPGT: TSIH 1: 0

    14:02:12:42.823 cpu2:7026) WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn CID: 0 L: 10.115.153.212:59691 r: 10.115.155.96:3260

    In the response, the "U" bit must not be set. Solaris 10 releases before update 7 seem to have this problem.

  • NFS server as a guest on an ESXi server

    Hi all

    I would like to ask your advice on this subject. I am looking for a solution to provide NFS shared storage for 3 ESXi servers. I currently have an MD3000i, 3 x DELL 2950, and 2 x DELL 2850.

    My original idea was to use the 2 x DELL 2850 servers as NFS servers for ESXi on the DELL 2950 servers: the NFS server maps LUNs on the MD3000i over iSCSI and then shares the data via NFS. But maybe I could skip the DELL 2850s altogether?

    What if I configure the NFS servers as guests on each ESXi server, using the ESXi software iSCSI initiator (or raw device mappings) to present the MD3000i LUNs to the NFS server - in my case it is CentOS? After these steps it would be no different, configuration-wise, from having a stand-alone NFS server.

    Each DELL 2950 has two onboard NICs and I intend to install one or two additional dual-port NICs to increase bandwidth.

    Each ESXi server would mount an NFS datastore from the NFS server running on that same ESXi server, and this datastore would be used to store all the other virtual machines.

    I intend to have no more than 8 VMs in total on each ESXi server.

    In the event of a server failure, I can temporarily remap the iSCSI LUN to the other NFS server (running on another ESXi server). The data on the LUNs should still be in place.

    Can this be done? I know it is technically possible, but will it work in production?

    Thank you.

    Hello and welcome to the forums.

    I was very happy with Openfiler running on the 2850s.  It might be worth checking out.
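    If you do stay with the CentOS-in-a-VM idea, the NFS side is only a few lines; the export path, network and datastore label below are just examples:

        # /etc/exports on the CentOS NFS server VM
        /export/vmstore  10.0.0.0/24(rw,no_root_squash,sync)

        # apply the export and make sure NFS starts at boot
        exportfs -ra
        chkconfig nfs on && service nfs start

        # on each ESXi host, mount it as a datastore (vSphere Client, or vicfg-nas from the vMA):
        # vicfg-nas -a -o <nfs-server-ip> -s /export/vmstore vmstore_ds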

    Good luck!

  • FRA on NFS with Oracle RAC One Node

    Hi all

    We have installed Oracle RAC One Node on Oracle Linux. Everything seems to work fine except one small thing: we are trying to switch the database to archivelog mode, but when we try to mount the database we get ORA-19816 "WARNING: files may exist in... that are not known to the database." and "Linux-x86_64 error: 37: No locks available".

    The FRA is mounted as an NFS share with the following options: "rw, bg, hard, nointr, rsize=32768, wsize=32768, proto=tcp, noac, vers=3, suid".

    I searched a lot on the Internet but couldn't find any hint. Can someone point me to the right installation guide?

    Thanks in advance

    Hello

    user10191672 wrote:
    Hi all

    We have installed Oracle RAC One Node on Oracle Linux. Everything seems to work fine except one small thing: we are trying to switch the database to archivelog mode, but when we try to mount the database we get ORA-19816 "WARNING: files may exist in... that are not known to the database." and "Linux-x86_64 error: 37: No locks available".

    The FRA is mounted as an NFS share with the following options: "rw, bg, hard, nointr, rsize=32768, wsize=32768, proto=tcp, noac, vers=3, suid".

    I searched a lot on the Internet but couldn't find any hint. Can someone point me to the right installation guide?

    Check whether the NFSLOCK service is running... and if not, start it.

    # service nfslock status
    

    * Mount Options for Oracle files when used with NAS devices [ID 359515.1] *
    Mount options for Oracle data files:

    rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
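
    For example, a hypothetical /etc/fstab entry using these options (the server name and mount point are placeholders) would look like:

        nas01:/export/fra  /u02/fra  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600  0 0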
    

    For RMAN backup sets, image copies and Data Pump dump files, the "noac" mount option should not be specified - this is because RMAN and Data Pump do not check this option, and specifying it can adversely affect performance.

    The following NFS options must be specified for an 11.2.0.2 RMAN disk backup directory:

    opts="-fstype=nfs,rsize=65536,wsize=65536,hard,actime=0,intr,nodev,nosuid"
    

    Hope this helps,
    Levi Pereira

    Edited by: Levi Pereira on August 18, 2011 13:20

  • Using SATA vs. FC in Lab Manager

    Hi, I'm new to Lab Manager and we have noticed poor performance when using LM, whereas virtual machines managed directly in Virtual Center have no problems. What are some bottlenecks I should look for? We use a 2 GB server for Lab Manager, a 2 GB server for VC, and two 32 GB ESX servers connected to SATA storage. Would an upgrade to FC help considerably?

    Upgrading to shared FC will help a lot; SATA performance tends to be pretty poor, and you also lose out on the opportunity to share storage, but that is mostly in the area of virtual machine performance, which apparently is acceptable when accessed directly from VC.  I'll come back to that later.

    Check memory use on your LM and VC servers - the Task Manager's Performance tab should be sufficient for this purpose.  If your page file use seems high, the biggest single performance gain is probably to increase the real memory of the system in question.  2 GB seems reasonable, but you may very well benefit considerably from 4 GB of real RAM in the VC server.

    CPU utilization should mostly be a non-issue.   I run my LM setup in a virtual machine limited to a maximum of 512 MHz of resources and do not tend to notice significant, sustained CPU load.  Memory is always the killer.

    You might alternatively gain by adding a third Win2k3 machine to the mix to host a full external installation of MS SQL Server for your VC, rather than the bundled SQL Express - especially because it moves that workload off the VC server, which tends to get hammered by LM's demands.

    Network topology could be a problem.  From the sound of it, you have a separate physical LM server and a separate physical VC server, as well as some ESX servers.  Is the networking between the four devices complex or convoluted? There are benefits to using network paths for the LM/VC-to-ESX service console traffic that are separate from those used by the virtual machines, and there are certainly benefits to having all of the servers local to one another.

    Or, alternatively, if your LM and VC are actually themselves VMs inside your ESX servers, you can gain significantly by giving them resource reservations and setting their memory/CPU prioritization to "high", so that they are less likely to be starved of resources by ESX.

    Meanwhile, on your virtual machine performance: Lab Manager, because of linked clones, uses the disk a little differently, so you really start to lose out without tagged command queuing, write-back caching, and the other features of a high-performance SCSI configuration, because a "sequential" I/O operation may end up scattered all over the disk.  At the very least, an external GNU/Linux box with a high-performance SCSI controller and disks, shared via NFS or iSCSI to your ESX boxes, should give significant benefits in how your VMs perform (especially if you can set up an isolated network for that traffic).  Naturally, FC will be faster, but there will be $$ involved.
