Shared storage over NFS for RAC


I intend to set up a 2-node RAC. However, I have only internal drives to use and no NAS/SAN. So I intend to share some of the spare capacity on the internal disks of nodes 1 & 2 and create shared storage using NFS. However, I have a few questions about it.

Q1. Since the NFS mount will in fact be a file system, does this mean that for the Clusterware files and the RAC database storage I can only use the 'shared file system' option and not 'ASM'?

I think what is confusing me is that I read that ASM disks can be whole physical disks, disk partitions, SAN LUNs, or NFS files?

If this is the case, then it suggests that I can create an ASM disk based on an NFS mount?

I don't know how to do this - any help / advice would be appreciated because, ideally, I would prefer to create candidate ASM disks on the shared NFS mount.

Thank you



You can use NFS as your clustered file system, but if you want to use ASM on top of it, that's no problem: use dd to create large files on the shared file system, for example,

dd if=/dev/zero of=/mnt/nfs/asmdisk1 bs=1048576 count=2048

and set your asm_diskstring parameter to point to them (or specify them as the devices to be used at installation time). It is very easy to set up and is fully supported by Oracle. I recorded a few demos that include this a year or two ago - search for my free Oracle ASM tutorial.
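To make that concrete, here is a sketch of the file-creation step. The /tmp path below stands in for the NFS mount point, and the 256 MiB size is only illustrative (real ASM disks would be far larger, like the 2 GiB dd command above); note also that Oracle requires specific NFS mount options for datafiles on NFS, e.g. hard,rsize=32768,wsize=32768,actimeo=0.

```shell
# Create two zero-filled files on the (assumed) NFS mount to serve as ASM
# candidate disks. Path and sizes are examples, not from the thread.
ASMDIR=/tmp/nfs_asm     # stand-in for an NFS mount point such as /mnt/nfs
mkdir -p "$ASMDIR"
for n in 1 2; do
  dd if=/dev/zero of="$ASMDIR/asmdisk$n" bs=1048576 count=256
done
ls -l "$ASMDIR"
# Then point ASM at the files, e.g. in SQL*Plus on the ASM instance:
#   ALTER SYSTEM SET asm_diskstring = '/mnt/nfs/asmdisk*' SCOPE = SPFILE;
```

The installer's "candidate disks" screen will then list the files matched by the disk string.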


John Watson

Oracle Certified Master

Tags: Database

Similar Questions

  • Shared LUN for RAC: VirtualBox or Openfiler?

    I'm going to install a 2-node Grid Infrastructure and DB on Oracle Enterprise Linux 6.3

    Host OS: Windows 7 (16 GB of RAM)
    Guest operating system: OEL 6.3
    Hypervisor: VirtualBox 4.2.6

    It's been over a year since VirtualBox introduced the shared LUN feature.

    To configure the shared LUN, would you use Openfiler or VirtualBox's own shared-disk creation?

    I know Openfiler needs a dedicated virtual machine. But I have enough RAM on my PC (16 GB).


    The shared LUN feature is really good and sufficient to test RAC. Personally, I would keep the memory for the RAC node images, because 11gR2 needs more RAM than 10g.

    But it depends on what exactly you want to test. If it's just to get hands-on experience with RAC, go with a shared LUN.
    If you want to test NFS storage or failover scenarios, then Openfiler is probably better...
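For the VirtualBox route, the shared-disk setup is only a few commands. The VM names (rac1/rac2), controller name, disk path and size below are assumptions, not from the thread; the syntax shown is the VirtualBox 4.x one.

```shell
# Create a fixed-size disk, mark it shareable, and attach it to both RAC VMs.
DISK="$HOME/vbox_shared/asm1.vdi"
mkdir -p "$(dirname "$DISK")"
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage createhd --filename "$DISK" --size 2048 --variant Fixed
  VBoxManage modifyhd "$DISK" --type shareable
  # Attach the same medium to both node VMs (names assumed):
  VBoxManage storageattach rac1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "$DISK"
  VBoxManage storageattach rac2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "$DISK"
else
  echo "VBoxManage not installed; commands shown for reference only"
fi
```

Shareable disks must be fixed-size (not dynamically allocated), which is why --variant Fixed is required.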


  • Transfer server shared storage for vCloud Director cells


    I'm building a vCloud Director cluster (3 hosts) in our environment, but I was wondering about the transfer server shared storage that is required for uploads and downloads of data among all cells. Here is an excerpt from the vCloud Director installation guide about it:


    To provide temporary storage for uploads and downloads, shared storage must be accessible to all hosts in a vCloud Director cluster. The transfer server storage volume must be writable by root. Each host must mount this storage at $VCLOUD_HOME/data/transfer (typically /opt/vmware/cloud-director/data/transfer). Uploads and downloads occupy this storage only briefly (a few hours to a day), but because transferred images can be large, allocate at least several hundred gigabytes to this volume.


    I have some questions related to this,

    1. Can I use iSCSI storage for this "shared transfer server storage", or will only NFS storage do?

    2. What storage is recommended from VMware's perspective?

    3. If anyone has references to configuration documentation for this storage, please let me know.

    Thank you


    It is not explicitly stated, mostly because there are several options and alternatives.

    NFS is by far the easiest and also the most portable, but CIFS, for example, would work.  So would any other file-sharing technique that makes the data blob readable and writable by all cells at the same time.

    "iSCSI" alone is not enough to really say yes or no.  If you share an iSCSI LUN between several cells and put an ext3 file system on it, the cells will tear the file system metadata to pieces in relatively short order, because ext3 is not designed to be multi-homed.

    This isn't a limitation of iSCSI, or of a SAN LUN, or of tricks where several VMs share the same VMDK file; it is about what file system is put on top of that shared block storage.
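For what it's worth, the transfer-area mount on each cell can be a single fstab line. The NFS server name and export path below are invented; only the mount point comes from the install guide.

```
# /etc/fstab on every vCloud Director cell -- same export mounted on all of them
# (nfs-host:/export/vcd-transfer is a hypothetical server/path)
nfs-host:/export/vcd-transfer  /opt/vmware/cloud-director/data/transfer  nfs  rw,hard,intr  0  0
```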

  • Is it possible to use GPFS or something else to build shared SAN storage for multiple ESX and ESXi hosts?

    We have a GPFS license and SAN storage. I am trying to create shared storage for multiple ESX and ESXi hosts to share existing virtual machines. We tried NFS once; it was a little slow and consumed too much LAN bandwidth.

    Can anyone help answer this? Thank you very much in advance!

    It depends on your storage array.

    You must connect all hosts to the same SAN, then follow the ESXi configuration guide and the documentation specific to your storage array (for sharing LUNs across multiple hosts).


  • FireWire storage for RAC

    DB version:
    Operating system: Solaris 5.10

    We plan to implement a (development) RAC DB with a FireWire 800 device as our shared storage. We are considering buying the 2-port FireWire storage device mentioned in the URL below, as well as 2 FireWire PCI cards for two of our machines.

    I have read in other OTN posts that RAC with FireWire storage is only good for demonstration purposes. Does this mean it is not good even for development DBs?


    That said, it is not supported for production.
    It may be good enough for development. Just keep in mind that error conditions are harder to reproduce.

    On the other hand, virtual machine software (VirtualBox/Oracle VM) will also let you build a RAC on a single machine without needing external/shared storage.
    For development, a one-server setup is pretty good too... (Just don't use it in production, because having only 1 server abandons the idea of HA ;)


  • How to compare local, iSCSI, and NFS storage for VMware ESXi 5 with IO Analyzer?


    I have 2 ESXi 5.0 hosts with a few virtual machines running. I would like to compare the IO throughput of my local, iSCSI, and NFS VMware datastores. Can someone point me in the right direction? I have read the documents and tutorials, but I'm not sure how to run comparison tests across these 3 options.

    I have a need to compare the following storage options.

    Local disk on the ESX host

    iSCSI on QNAP NAS device

    iSCSI on EMC e 3100

    NFS on QNAP NAS device

    NFS on EMC e 3100

    IOmeter seems to be a good tool from what I've read, but it is still not clear where to make changes to move the tests from one storage to another.

    Thanks in advance,


    If you use IO Analyzer, you simply Storage vMotion the VMs to the datastore of the device you want to test, then run the test (and record the results).  Then Storage vMotion to the next datastore and repeat.
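If a full IOmeter/IO Analyzer run feels heavyweight, a crude sanity check of sequential write speed can be done with dd against each datastore in turn. This is my own sketch, not part of either tool (fio would give far more meaningful numbers); the TARGET path is a placeholder to point at a directory on each datastore.

```shell
# Time a 64 MiB synchronous write into the target directory, then clean up.
# Re-run with TARGET set to a path on each datastore and compare the rates.
TARGET="${TARGET:-/tmp/io_probe}"
mkdir -p "$TARGET"
dd if=/dev/zero of="$TARGET/testfile" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TARGET/testfile"
```

conv=fdatasync makes dd flush to disk before reporting, so the printed rate reflects the storage rather than the page cache.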

  • NFS storage for the Exchange Server OS

    While I believe that Microsoft's support policy and recommendations for Exchange servers in hardware virtualization environments are documented very clearly, I was faced with the following question:

    Does Microsoft support the Exchange server operating system running on an NFS datastore?

    I believe this question results from a misunderstanding of the support policy.  Here is the statement in question ("bold" emphasis is mine):

    The storage used by the Exchange guest machine for storage of Exchange data (for example, mailbox databases or Hub Transport queues) can be virtual storage of a fixed size (for example, fixed virtual hard disks [VHDs] in a Hyper-V environment), SCSI pass-through storage, or Internet SCSI (iSCSI) storage. Pass-through storage is storage that is configured at the host level and dedicated to one guest machine. All storage used by an Exchange guest machine for storage of Exchange data must be block-level storage. Exchange 2010 does not support the use of network attached storage (NAS) volumes. NAS storage that is presented to the guest as block-level storage via the hypervisor is not supported. Pass-through volumes must be presented as block-level storage to the hardware virtualization software, because Exchange 2010 does not support the use of network attached storage (NAS) devices. The following virtual disk requirements apply to volumes that are used to store Exchange data.

    While I thought the statement applied specifically to Exchange data, you can read it as including the server OS volume as well (based on the second sentence in bold).  A customer was told by an MS engineer that NAS storage was not supported at all.  I disagree, because the recommended storage configuration is to separate the OS from the Exchange data anyway.


    I'm all for picking up MS support statements and calling them on it when an issue is called into question, and this one is no different.  I read "Exchange data" as simply meaning the databases, logs, etc.  In fact, they make that fairly clear in the statement:

    "The storage used by the Exchange guest machine for storage of Exchange data (for example, mailbox databases or Hub Transport queues)..."

    There is no mention of the operating system being considered Exchange data... which it is not.  As for this one:

    "All storage used by a guest computer in Exchange for the storage of Exchange data must be block-level storage because Exchange 2010 does not support the use of storage (NAS) devices connected to the network."

    OK, this has been a long-standing MS support stance, no biggie there, since it has been qualified as covering only Exchange data.  On to the following:

    "In addition, NAS storage presented to the guest as block via the hypervisor level storage no is not supported."

    The key point here is that NAS storage can be presented to a guest in two ways: through the guest itself (mounting NAS storage in Windows, i.e. \\server\share) or through the hypervisor (a VMDK backed by NFS).  In this statement, they are referring to the latter.  So, after all this hiking around, my point is that it seems to me the OS drive of your Exchange virtual machine can absolutely be hosted on NAS-based storage and be supported.

    ...then there's VMware's spin on it, a little.  For Hub Transport, Client Access, and standalone Mailbox servers, we have no problem with that.  We do recommend, however, that if a clustering solution is deployed within the guest, i.e. a DAG, you create the VMDK as eagerzeroedthick.  VMDKs backed by NFS cannot be eagerzeroedthick, so by our guidance they should not be hosted on NFS.  This little nugget of information is available here:  The reasoning behind it is that with a standard thick virtual disk, blocks must be zeroed before data is written; this (albeit low) latency is avoided by using the eagerzeroedthick option.  What we are proposing is to avoid any additional latency, which clusters tend not to like, by zeroing everything up front.  BTW, we suggest this for any in-guest clustering, not just Exchange or MSCS.
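On the ESXi side, creating an eagerzeroedthick disk is a one-liner with vmkfstools. The datastore path, VM name, and size below are invented for illustration; the command itself must be run on the ESXi host.

```shell
# Compose the command to create an eagerzeroedthick VMDK (path/size invented);
# run it on the ESXi 5.x host itself.
VMDK="/vmfs/volumes/datastore1/exch01/exch01_data.vmdk"
CMD="vmkfstools -c 40G -d eagerzeroedthick $VMDK"
echo "$CMD"
# To convert an existing thin/zeroedthick disk in place instead:
#   vmkfstools --inflatedisk "$VMDK"
if command -v vmkfstools >/dev/null 2>&1; then
  $CMD
fi
```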


  • Please explain Storage vMotion (SVMotion) and Storage DRS in simple words


    I'm a little confused... could you please explain Storage vMotion and Storage DRS in simple words, and the important points to remember about them...

    Kind regards



    Storage DRS is a feature that allows you to aggregate datastores into a single object (a datastore cluster) and that recommends placement of virtual machine disks based on datastore performance and available space.

    Storage DRS uses Storage vMotion to migrate virtual machines (online) from one datastore to another to balance datastore performance and space.

    You can use Storage vMotion without Storage DRS too, to move virtual machines between datastore clusters and between datastores, even from local storage to a shared datastore.

  • ASM vs RAID for an 11gR2 RAC environment

    We plan to install 11gR2 RAC with a two-node cluster on Linux in our environment.
    Operating system: OEL 5.4
    On the hardware side, we have two Dell servers with 16 GB of RAM each, and on the SAN we have only 8 disks (173 GB each) left for the RAC cluster configuration. I will create both databases (LIVE/UAT) on this cluster setup. Currently our production DB is 6 GB in size, and I assume that in 5 years it will not grow beyond 100 GB; I keep the UAT size fixed at 15 GB. So, how do I get the best performance from ASM using all my resources?

    My question:
    (1) What is the best combination of ASM and RAID in our storage environment?
    (2) How do I create disk groups for the two databases (UAT/LIVE)?
    (3) How many disks should I allocate to each disk group with the RAID option, or if LUNs are suggested, how do I create LUNs on the disks I have?
    (4) I know Oracle recommends two disk groups, DATA & FRA; is there any suggestion for CRS, REDO and TEMP files?
    Thank you for your assistance.


    My first question was: which RAID option (0, 1, 5, 0+1) should I choose with ASM?

    Well, it does not matter much for ASM. At least not in your configuration with 8 disks.
    RAID 0 is not an option - forget about it. RAID 1 (or, combined with more than two disks and a RAID 0 layered on top, making it RAID 1+0) could be an option for write-intensive databases. RAID 5 is better suited to read-intensive workloads due to the RAID 5 write hole, but offers "more" capacity at the expense of slower write speed.

    I recommend sticking with RAID 1 (mirroring two disks) and exporting those pairs to ASM, rather than creating one big RAID 1+0 over all of your disks and exporting it as one big chunk of storage for ASM to manage. If you want to add storage later on, you stay aligned with Oracle's recommendation of equal-sized LUNs in ASM when each LUN is two mirrored drives. If you create the large RAID 1+0 and later add two drives, you end up with a 600 GB LUN and a 170 GB one... that's a big mismatch.

    But if I create TWO disk groups, is it advisable to use them for the two databases (UAT/LIVE)?

    Normally, there is a separation between UAT and production at the storage and server level. In your case, it might be "ok" to do everything in the same disk group. It depends mostly on how much load each database puts on the disk subsystem.
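As a sketch of what the disk group layout could look like: the device paths below are placeholders, and EXTERNAL REDUNDANCY is used because the hardware RAID-1 pairs already mirror the data.

```shell
# Write out a hypothetical disk group script (device paths invented).
SQL_FILE=/tmp/create_dg.sql
cat > "$SQL_FILE" <<'EOF'
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/lun1', '/dev/oracleasm/lun2';
CREATE DISKGROUP fra EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/lun3';
EOF
# Run it as the grid owner against the ASM instance:
#   sqlplus -s / as sysasm @/tmp/create_dg.sql
cat "$SQL_FILE"
```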

    Ronny Egner
    My Blog:

  • lsnrctl reload for RAC

    Hi all

    I had a problem when using a RAC DB. I tried to get the status of the listener after I reloaded it, and I get this error:

    [[email protected] ~] $ lsnrctl reload LISTENER

    LSNRCTL for Linux: Version - Production June 1, 2015 19:33:57

    Copyright (c) 1991, 2013, Oracle.  All rights reserved.

    Connection to (ADDRESS = (PROTOCOL = tcp)(HOST=) (PORT = 1521))

    The command completed successfully

    [[email protected] ~] $ lsnrctl status

    LSNRCTL for Linux: Version - Production June 1, 2015 19:29:16

    Copyright (c) 1991, 2013, Oracle.  All rights reserved.

    Connection to (ADDRESS = (PROTOCOL = tcp)(HOST=) (PORT = 1521))

    TNS-12541: TNS:no listener

    TNS-12560: TNS:protocol adapter error

    TNS-00511: no listener

    Linux Error: 111: Connection refused

    If I stop and restart the listener, then it works again.

    [[email protected] ~] $ srvctl stop listener

    [[email protected] ~] $ srvctl start listener

    [[email protected] ~] $ lsnrctl status

    LSNRCTL for Linux: Version - Production June 1, 2015 19:34:39

    Copyright (c) 1991, 2013, Oracle.  All rights reserved.

    Connection to (ADDRESS = (PROTOCOL = tcp)(HOST=) (PORT = 1521))



    Alias LISTENER

    Version TNSLSNR for Linux: Version - Production

    Beginning June 1, 2015 19:34:33

    Uptime 0 days 0 h 0 min 6 sec

    Trace Level               off

    Security                  ON: Local OS Authentication


    Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora

    Listener Log File         /u01/app/oracle/diag/tnslsnr/AIE-45-215/listener/alert/log.xml

    Summary of endpoints listening...


    (DESCRIPTION = (ADDRESS = (PROTOCOL = tcp)(HOST= (PORT = 1521)))

    (DESCRIPTION = (ADDRESS = (PROTOCOL = tcp)(HOST= (PORT = 1521)))

    Summary of services...

    "Rone" service has 1 instance (s).

    'Rone1' instance, State LOAN, has 1 operation for this service...

    Service 'roneXDB' has 1 instance (s).

    'Rone1' instance, State LOAN, has 1 operation for this service...

    The command completed successfully

    Here's what listener.ora contains:

    [[email protected] ~] $ cat /u01/app/11.2.0/grid/network/admin/listener.ora

    Listener = (Description = (ADDRESS_LIST = (Address = (Protocol = IPC) (Key = Listener)))) # line added by Agent

    LISTENER_SCAN1 = (Description = (ADDRESS_LIST = (Address = (Protocol = IPC) (Key = LISTENER_SCAN1)))) # line added by Agent



    So "lsnrctl reload" should not used for RAC environment or I used by mistake?


    The node VIP and the node's public IP - both IPs will exist for the node listener. Check the status of the listener without changing anything; you will see the listener bound to both the VIP and the public IP:

    $GRID_HOME/bin/lsnrctl status LISTENER

  • OS upgrade for RAC

    Hi guys... How is everything...

    I have a RAC node on Windows 2003 R2 Itanium. I need to know if there is a procedure for an OS upgrade with RAC... I mean, do I have to bring down the whole cluster while the OS is being upgraded?


    You need not stop the entire database.
    OS upgrades can be done in a rolling fashion (one node after another); you must stop the instance and the cluster stack on the node you are upgrading.

    Oracle's support policy allows running RAC with different versions of an operating system for a period of 24 hours.
    In other words: after completing the upgrade of the first node, you have 24 hours within which your other nodes must be brought to the same level of the new OS.
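The per-node rolling sequence might look like this (the database name PROD and instance PROD1 are placeholders, and crsctl must be run as root; on Windows the equivalent is stopping the Oracle services, but the same order applies):

```shell
# Rolling OS upgrade of one node of a two-node RAC - command plan only.
# Database/instance names are hypothetical.
PLAN="srvctl stop instance -d PROD -i PROD1
crsctl stop crs
# ...upgrade the OS on this node and reboot...
crsctl start crs"
echo "$PLAN"
```

Once the cluster stack restarts, the instance and services come back under Clusterware control, and the other node is upgraded the same way.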



  • Shared agent installation over NFS

    I'm trying to deploy EM agents on a 2-node RAC cluster with a shared directory on NFS. After installing the first agent, I used the instructions in the documentation to install the shared agent.
    Deployment Type-> add a shared Agent Host
    Target-> node1
    OH-> same as target on node 1 (/u02/oracle/agent12c)
    However, it fails the prerequisite steps because it detects an agent on the host. Here is an excerpt of the logs showing which pre-reqs failed:

    2013-05-13_11-43-34:INFO:DESCRIPTION: This is a prerequisite check to ensure that no agent home already exists on the host. RESULT: WARNING. RESULT TEXT: Expected result: an EM agent home should not exist on the host.
    Actual result: an agent home already exists
    Check complete. The overall result of this check is: Failed <<<<
    2013-05-13_11-43-34:INFO:DESCRIPTION: This is a prerequisite check to verify whether the agent Oracle home, or the install base directory, is already registered with the inventory. RESULT: RESULT TEXT OMITTED: Expected result: the new agent Oracle home should not be registered with the inventory
    Actual result: an agent Oracle home with the same value /u02/oracle/agent12c/core/ already exists
    Check complete. The overall result of this check is: Failed <<<<

    Anyone seen this before?

    Thank you

    I upgraded the NFS agents, but there were some errors during the upgrade. It was not detecting the correct agent version properly, so I uninstalled only the shared agent. Now I am looking to re-install the agent using the silent installation process.

    The master agent is already installed, and I am trying to create the plugin list using the master agent, but it is failing.

    $AGENT_HOME/perl /u02/oracle/agent12c/core/ instancehome - / u02/oracle/agent12c/agent_inst

    The Base Dir is

    File: /rwFile

    Failed to create the plugin file.

    Make sure you have write permission.

    It seems to be using the incorrect agent Base Directory.

    ./emctl getproperty agent -name agentBaseDir

    Oracle Enterprise Manager Cloud control 12 c Release 3

    Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.

    agentBaseDir = / oracle/u02/agent12c

    Any thoughts?

    Thank you


  • Can I use a udev rules file for RAC on Oracle Enterprise Linux 6?

    Grid version:
    Platform: the latest Oracle Enterprise Linux 6 (Unbreakable Enterprise Kernel)

    Yes guys, Oracle sales have convinced our manager to move from AIX to OEL.
    We plan to install RAC on Oracle Enterprise Linux 6 (UEK).

    I read this thread on asmlib and the udev rules file:
    List of udev rules for 10.2 RAC

    As a benefit of ASMLib, Levi Pereira said (quoting):
    "+ A good async I/O interface to the database; the whole I/O interface is based on an async performance model +"

    Oracle has not released ASMLib for OEL 6 or RHEL 6. I think they will not release asmlib any more.

    So, we will use only the udev rules file. Without ASMLib, will we have asynchronous I/O issues or other storage-related problems?

    Has anyone successfully installed RAC on OEL 6 with a udev rules file?

    I use the udev file +/etc/udev/rules.d/40-multipath.rules+ to configure device permissions on multipath I/O devices for use by the grid o/s user.

    It is as follows:

    # modded for setting mpath device permissions
    SUBSYSTEM!="block", GOTO="end_mpath"
    KERNEL!="dm-[0-9]*", GOTO="end_mpath"
    PROGRAM!="/sbin/mpath_wait %M %m", GOTO="end_mpath"
    ACTION=="add", RUN+="/sbin/dmsetup ls --target multipath --exec '/sbin/kpartx -a -p p' -j %M -m %m"
    # goto labels added
    PROGRAM=="/sbin/dmsetup ls --target multipath --exec /bin/basename -j %M -m %m", RESULT=="?*", NAME="%k", SYMLINK="mpath/%c", GOTO="check_cluster_devs"
    PROGRAM!="/bin/bash -c '/sbin/dmsetup info -c --noheadings -j %M -m %m | /bin/grep -q .*:.*:.*:.*:.*:.*:.*:part[0-9]*-mpath-'", GOTO="check_cluster_devs"
    PROGRAM=="/sbin/dmsetup ls --target linear --exec /bin/basename -j %M -m %m", NAME="%k", RESULT=="?*", SYMLINK="mpath/%c", GOTO="check_cluster_devs"
    # set device permissions for Grid Infrastructure
    RESULT=="*", GROUP="oinstall", MODE="660"
    RESULT=="*", OWNER="grid", GROUP="oinstall", MODE="660"
    # eof

    I understand that this is no longer necessary, as it can be configured in +/etc/multipath.conf+ as follows:

    multipaths {
        multipath {
            wwid  SSCST_BIOscst1_sdb_757d30d0
            alias scst1_sdb
            mode 0660
            uid oinstall
            gid grid
    .. etc.

    However, I have never gotten that to work properly myself - I must be doing something wrong, as I have seen a few Internet sources showing it working.

    As far as I know, with OL6 (as with OL5 and earlier versions), this is not a question of ASMLib versus udev/MPIO. Udev is the device manager used by the kernel. The decision is whether to add another layer called ASMLib on top of it. Personally, I have yet to see any convincing evidence of why I should increase the complexity of the kernel driver stack by adding ASMLib. We also had a few serious incompatibility issues with ASMLib in the past (where it caused I/O corruption at the logical level).

    If you look at the clusters on the list, many of them have petabytes of shared storage accessible via udev/multipath and nothing else. MPIO has its roots as a kernel driver for these setups. That is one of the reasons we stopped using EMC PowerPath years ago for our RACs that are wired to EMC SANs for shared storage.
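For completeness, on OL6 a plain udev rule for a single-path ASM disk can be as small as this (the rule file name and the ID_SERIAL value are illustrative, not from the thread; the real WWID comes from scsi_id):

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules
# Example only: the ID_SERIAL value below is made up; query the real one with
#   /sbin/scsi_id --whitelisted --device=/dev/sdb
KERNEL=="sd?1", SUBSYSTEM=="block", ENV{ID_SERIAL}=="36000c29b000000000000000000000001", OWNER="grid", GROUP="oinstall", MODE="0660"
```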

  • Error while configuring NFS on Unix or Openfiler


    I need help setting up NFS for a test ESX host.  I tried both Openfiler and Windows Services for UNIX to set up the NFS server, but without success.  I get the following error: "Error during the configuration of the host: NFS Error: Unable to mount filesystem: Unable to connect to NFS server." Note: do I need to configure something special on the ESX host service console?  I have already installed the NFS components and mounts.


    I can ping my NFS server, it is listening on port 2049, and I have enabled the NFS client in the firewall on the ESX host.

    Does anyone know of step-by-step instructions to get NFS working with Unix or Openfiler from A to Z?

    Thank you for helping

    You should double-check your settings and try again; enabling NFSv3 with ESX 3.x is very simple. To set up NFS using Windows Services for UNIX, follow these steps:

    On NFS Windows 2003 Server:

    1. Download the component from

    2. Copy the /etc/passwd and /etc/group files from the ESX 3.x host to the c:\etc\ folder on the Windows host.

    3. Install it on your Windows 2003 server, choose the "custom" option, select the "User Name Mapping Service", and point it to these 2 files in c:\etc\.

    4. Use the Windows Services for UNIX admin console to add "administrator" and "root", and click "Apply".

    5. Create a share such as "d:\data\sharefolder", click the "NFS Sharing" tab, set the "read/write" option, and allow root access.

    On ESX 3.5 hosts:

    1. Enable the NFS client in the firewall settings.

    2. chkconfig nfs and portmap to autostart at run levels 2, 3 and 5, as mentioned.

    3. Create the network portgroups (SC, VMotion, Production, Test, iSCSI, NFS); make sure you use a different subnet for NFS/iSCSI traffic.

    4. On a Unix NFS server, you can customize your /etc/hosts.allow and /etc/hosts.deny files for access restrictions; export options such as (rw,async,no_root_squash) belong in /etc/exports.

    5. Restart your NFS daemon.
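On a generic Linux NFS server, the export itself is one line in /etc/exports (the path matches the share above; the subnet is an example):

```
# /etc/exports -- grant the ESX service-console subnet read/write access
/data/sharefolder  192.168.1.0/24(rw,async,no_root_squash)
# apply with: exportfs -ra, then restart the NFS service
```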


    Then use the VI Client Add Storage wizard -> NFS -> enter the server name or IP address (your NFS server) and the mount point, such as /sharefolder, and give it a label, something like "NFS LUN01".

    Double-check your IDs, permissions, and settings, and use ping to test connectivity.

    Nothing else I can think of at this point if it still does not work. Everyone is welcome to jump in.

    Best of luck!

    If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thank you!!!

    Kind regards

    Stefan Nguyen

    VMware vExpert 2009

    iGeek Systems Inc.

    VMware, Citrix, Microsoft Consultant

  • Poor man's shared storage

    I'm building a home lab to practice my VMware skills. I found good server hardware (IBM eSeries 345) on eBay, but putting the shared storage piece in place has puzzled me so far. A cheap NAS seems to be the way to go, but I can't be sure it will work because I have no real experience with the lower-end ones.

    Does anyone have suggestions for setting up shared storage for a practice lab host?

    There are a number of products on the market that let you do what you want for free (you just need to provide the hardware).

    I'm biased toward OpenFiler, but I know others have used FreeNAS or other related products...

    Both products will provide a shared iSCSI or NFS infrastructure to use in your tests...
