Oracle RAC 11.2.0.3. 2nd node is down.

Dear gurus,

The second RAC node is broken and I can't start it. Help, please:

Oracle version: 11.2.0.3

1. name of the node: faa4 (works)

name of the 2nd node: faa5 (down)

name of the cluster: FAARACDB

on faa5:

[oracle@faa5 ~]$ crsctl check  crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager


[oracle@faa5 ~]$ crsctl stat res -t -init
------------------------------------------------------------------------------------------------------
NAME                            TARGET   STATE     SERVER    STATE_DETAILS
------------------------------------------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------------------------------------------
ora.asm                           1      ONLINE    OFFLINE
ora.cluster_interconnect.haip     1      ONLINE    OFFLINE
ora.crf                           1      ONLINE    OFFLINE
ora.crsd                          1      ONLINE    OFFLINE
ora.cssd                          1      ONLINE    OFFLINE
ora.cssdmonitor                   1      ONLINE    UNKNOWN   faa5
ora.ctssd                         1      ONLINE    OFFLINE
ora.diskmon                       1      ONLINE    OFFLINE
ora.evmd                          1      ONLINE    OFFLINE
ora.gipcd                         1      ONLINE    OFFLINE
ora.gpnpd                         1      ONLINE    OFFLINE
ora.mdnsd                         1      ONLINE    OFFLINE

/home/oracle/app/11.2.0/grid/log/faa5/crsd/crsd.log:

2014-07-24 14:30:11.956: [ CRSMAIN][453293856] Checking the OCR device
2014-07-24 14:30:11.956: [ CRSMAIN][453293856] Sync-up with OCR
2014-07-24 14:30:11.956: [ CRSMAIN][453293856] Connecting to the CSS Daemon
2014-07-24 14:30:11.959: [  CRSRTI][453293856] CSS is not ready. Received status 3
2014-07-24 14:30:11.960: [ CRSMAIN][453293856] Created alert : (:CRSD00109:) :  Could not init the CSS context, error: 3
2014-07-24 14:30:11.960: [    CRSD][453293856][PANIC] CRSD exiting: Could not init the CSS context, error: 3
2014-07-24 14:30:11.960: [    CRSD][453293856] Done.

/home/oracle/app/11.2.0/grid/log/faa5/evmd/evmd.log:

2014-07-24 14:40:26.324: [    EVMD][2799478528] EVMD exiting on stop request from default
2014-07-24 14:40:26.324: [    EVMD][2799478528] Done.

2014-07-24 14:40:26.351: [  OCRMSG][2837387040]prom_waitconnect: CONN NOT ESTABLISHED (0,29,1,2)
2014-07-24 14:40:26.351: [  OCRMSG][2837387040]GIPC error [29] msg [gipcretConnectionRefused]
2014-07-24 14:40:26.351: [  OCRMSG][2837387040]prom_connect: error while waiting for connection complete [24]
2014-07-24 14:40:26.351: [  CRSOCR][2837387040] OCR context init failure.  Error: PROC-32: Cluster Ready Services on the local node is not running Messaging error [gipcretConnectionRefused]

/home/oracle/app/11.2.0/grid/log/faa5/cssd/ocssd.log:

2014-09-15 16:42:18.924: [    CSSD][1605637888]clssscSelect: cookie accept request 0x7fe9500edb70
2014-09-15 16:42:18.924: [    CSSD][1605637888]clssscevtypSHRCON: getting client with cmproc 0x7fe9500edb70
2014-09-15 16:42:18.924: [    CSSD][1605637888]clssgmRegisterClient: proc(9/0x7fe9500edb70), client(11649/0x7fe9500e0230)
2014-09-15 16:42:18.924: [    CSSD][1605637888]clssgmJoinGrock: global grock +ASM-SPFILE new client 0x7fe9500e0230 with con 0x7fe90027ebf6, requested num -1, flags 0x1
2014-09-15 16:42:18.924: [    CSSD][1605637888]clssgmJoinGrock: ignoring grock join before clsmon for grock (-1/0x1/+ASM-SPFILE)
2014-09-15 16:42:18.924: [    CSSD][1605637888]clssgmDiscEndpcl: gipcDestroy 0x27ebf6
2014-09-15 16:42:18.939: [    CSSD][1590363904]clssgmWaitOnEventValue: after CmInfo State  val 3, eval 1 waited 0
2014-09-15 16:42:19.093: [ CLSINET][1599833856] failed to retrieve GPnP profile, grv 13
2014-09-15 16:42:19.093: [GIPCHDEM][1599833856] gipchaDaemonCheckInterfaces: failed to read private interface information ret 1
2014-09-15 16:42:19.182: [    CSSD][1595102976]clssnmvDHBValidateNCopy: node 1, faa4, has a disk HB, but no network HB, DHB has rcfg 277512631, wrtcnt, 28712232, LATS 172457814, lastSeqNo 28712231, uniqueness 1398115471, timestamp 1410781338/4073989242

Thanks in advance!

[root@faa5 ~]# ocrcheck

PROT-602: failed to extract the data from the cluster registry

PROC-26: error when accessing the physical storage

ORA-15077: could not locate instance ASM serving a required diskgroup

Hi Mark!

You have a problem with the connection to the physical storage on server faa5.

Please check whether the physical storage is presented and mounted to the server, then try again: oracleasm scandisks

Also check whether the storage is configured as shared and visible to server faa5.
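For example (a minimal sketch, assuming ASMLib is in use; run as root on faa5 and compare the output with the working node faa4):

/etc/init.d/oracleasm status
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks
ls -l /dev/oracleasm/disks/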

Regards

Mr. Mahir Quluzade

Tags: Database

Similar Questions

  • Installing Oracle RAC: Unix, Windows ASM nodes

I have a question about the Oracle RAC configuration. I have never done a RAC or ASM installation before. Maybe a stupid question to some of you.

    Is it possible to install Oracle RAC with following options?

    2 node RAC with Sun Solaris
    Shared storage using ASM in Windows server

    Any additional information you can provide will be greatly appreciated.

    Thanks in advance

    In very simple terms, when using ASM, the database server processes read the raw devices and the DBWR process writes to the raw devices containing the tablespace information. With ASM, these processes simply ask ASM where on the disk to read/write. ASM therefore simply provides a "table of contents". (It is much more than that, but this seems to be a good way to start the discussion.)

    ASM MUST run on the same computer as the database instance that is to use it.

    In your scenario, the Windows computer could be an iSCSI server providing raw devices to the Solaris machines. These devices would then be managed by ASM and read/written directly by the database processes.
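    To illustrate that "table of contents" role, here is a minimal sketch (run against the ASM instance, not the database instance; output depends entirely on your configuration):

    SQL> -- disks and disk groups that ASM is managing
    SQL> select name, path, state from v$asm_disk;
    SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;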

  • INS-35354 issue when installing Oracle RAC 12 on Oracle Grid Infrastructure 12

    Hello

    I am trying to do an Oracle RAC 12.1.0.2 One Node installation on Windows Server 2012 R2, but the installer tells me:

    '[INS-35354] The system on which you are attempting to install Oracle RAC One Node is not part of a valid cluster.'

    On the same server I already have Oracle Grid Infrastructure + ASM installed and running.

    I don't know what the installer thinks is missing - is it possible to check? The installer does not report which specific check failed.

    Thanks for the tips

    Georg

    Hello

    RAC One Node does not mean that you install Grid Infrastructure on a single node and then install a RAC database on that node.

    Basically, the RAC One Node concept is similar to active/passive clustering to provide high availability. To have a RAC One Node database up and running, you will have to install the clusterware on at least two servers by selecting the cluster option during the GI installation.
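    As a quick check before re-running the installer (a sketch only; run from the Grid home as the Grid Infrastructure owner), you can confirm whether the local host is already registered in a running cluster:

    crsctl check cluster -all
    olsnodes -n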

    Please follow the post below for a detailed answer:

    INS-35354: The system on which you are attempting to install Oracle RAC is not part of a valid cluster - when installing a 12c RAC One Node database

    Regards

    Krishnan

  • Oracle RAC 11 g fencing

    Hi all, I want to know if anyone can explain the fencing process in RAC. The other question is: is Oracle RAC able to restart the nodes (the physical servers)?

    Thank you very much.

    Pablo

    Oracle Clusterware is designed to perform a node eviction by removing one or more nodes from the cluster if a critical problem is detected.  A critical problem could be a node not responding via a network heartbeat, a node not responding via a disk heartbeat, a hung or severely degraded machine, OS resource starvation (i.e. high CPU usage, memory shortage/swapping, high run queue/load average), or a hung ocssd.bin process.

    For example:

    Oracle Clusterware relies on voting file accessibility. If a node in the cluster cannot access the majority of the voting files, the node(s) in question is (are)

    immediately removed from the cluster (evicted / fenced).

    This node eviction is intended to maintain the overall health of the cluster by removing misbehaving members (in other words, in order to avoid "split brain" situations, as well as data corruption).

    Note: Starting with 11.2.0.2 RAC (or if you're on Exadata), a node eviction may not actually reboot the machine.  This is what we call reboot-less fencing.  In this case, most of the clusterware stack is restarted to see if that cures the unhealthy node.

    For more information on 11g R2 RAC REBOOT-LESS NODE FENCING, refer to the following URL:

    URL: http://oracleinaction.com/11g-r2-rac-reboot-less-node-fencing/
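    As a quick illustration (a sketch; run as the Grid Infrastructure owner or root), you can check the voting file accessibility and the CSS heartbeat timeout that drive these evictions:

    crsctl query css votedisk
    crsctl get css misscount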

  • RAC 11.2.0.3 to 12.1.0.2 upgrade - root.sh failed on the 2nd node

    Hi all

    We are upgrading RAC from 11.2.0.3 to 12.1.0.2, and root.sh failed on the 2nd node.

    We found that GI is down after the root.sh script phase on the 2nd node; on node 1 it completed successfully.

    Tail of the command output:

    2016-01-18 15:08:29: cmd running: /u01/app/12.1.0.2/grid/bin/clsecho Pei a f clsrsc m 4003

    2016-01-18 15:08:29: output of the command:

    > CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

    > end of command output

    2016-01-18 15:08:29: cmd running: /u01/app/12.1.0.2/grid/bin/clsecho Pei a f clsrsc m 4003

    2016-01-18 15:08:29: output of the command:

    > CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

    > end of command output

    2016-01-18 15:08:29: CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

    2016-01-18 15:08:29: cmd running: /u01/app/12.1.0.2/grid/bin/sqlplus -v

    2016-01-18 15:08:29: output of the command:

    >

    > SQL*Plus: Release 12.1.0.2.0 Production

    >

    > end of command output

    2016-01-18 15:08:29: CRS got version: 12.1.0.2.0

    2016-01-18 15:08:29: ckpt: -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK

    2016-01-18 15:08:29: invoking '/u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK'

    2016-01-18 15:08:29: trace file=/u01/app/grid/crsdata/alhijaz8/crsconfig/cluutil1.log

    2016-01-18 15:08:29: user oracle: /u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK

    2016-01-18 15:08:29: s_run_as_user2: running /bin/su oracle -c 'echo CLSRSC_START; /u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK'

    2016-01-18 15:08:29: deleting the file /tmp/file6rMePM

    2016-01-18 15:08:29: file deleted successfully: /tmp/file6rMePM

    2016-01-18 15:08:29: exit code: 0

    2016-01-18 15:08:29: /bin/su executed successfully

    2016-01-18 15:08:29: invoking '/u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -pname VERSION'

    2016-01-18 15:08:29: trace file=/u01/app/grid/crsdata/alhijaz8/crsconfig/cluutil2.log

    2016-01-18 15:08:29: user oracle: /u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -pname VERSION

    2016-01-18 15:08:29: s_run_as_user2: running /bin/su oracle -c 'echo CLSRSC_START; /u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -pname VERSION'

    2016-01-18 15:08:30: deleting the file /tmp/fileMYEj5K

    2016-01-18 15:08:30: file deleted successfully: /tmp/fileMYEj5K

    2016-01-18 15:08:30: exit code: 0

    2016-01-18 15:08:30: version1 is 12.1.0.2.0

    2016-01-18 15:08:30: version2 is 12.1.0.2.0

    2016-01-18 15:08:30: version status match is 1

    2016-01-18 15:08:30: setting isRerun to TRUE

    2016-01-18 15:08:30: invoking '/u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -status'

    2016-01-18 15:08:30: trace file=/u01/app/grid/crsdata/alhijaz8/crsconfig/cluutil4.log

    2016-01-18 15:08:30: user oracle: /u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -status

    2016-01-18 15:08:30: s_run_as_user2: running /bin/su oracle -c 'echo CLSRSC_START; /u01/app/12.1.0.2/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_STACK -status'

    2016-01-18 15:08:31: deleting the file /tmp/filezMRowL

    2016-01-18 15:08:31: file deleted successfully: /tmp/filezMRowL

    2016-01-18 15:08:31: exit code: 0

    2016-01-18 15:08:31: /bin/su executed successfully

    2016-01-18 15:08:31: the status of 'ROOTCRS_STACK' is SUCCESS

    2016-01-18 15:08:31: Oracle Clusterware has already been configured successfully; therefore exiting...

    2016-01-18 15:08:31: cmd running: /u01/app/12.1.0.2/grid/bin/clsecho Pei a f clsrsc m - 456

    2016-01-18 15:08:31: output of the command:

    > CLSRSC-456: the Oracle grid Infrastructure has already been configured.

    > end of command output

    2016-01-18 15:08:31: cmd running: /u01/app/12.1.0.2/grid/bin/clsecho Pei a f clsrsc m - 456

    2016-01-18 15:08:31: output of the command:

    > CLSRSC-456: the Oracle grid Infrastructure has already been configured.

    > end of command output

    2016-01-18 15:08:31: CLSRSC-456: The Oracle Grid Infrastructure has already been configured.

    In the trace file, I could see some errors related to ASM or asmlib (see below):

    ---------------------------------------------------------------------------------

    2016-01-18 14:18:24.992593: OCRASM: proprasmo: ASM instance is down. Move forward to open it in dirty mode.

    2016-01-18 14:18:28.360087: OCRRAW: kgfo_kge2slos error of the kgfolclcpi1 cell: UESA-00200: cannot read [32768] bytes of the N0008 drive to offset [140737488355328]

    UESA-00201: Disc N0008: 'ORCL:OCR_VD '.

    UESA-00407: asmlib error! function = [asm_close], [0] = error mesg = [i/o Error]

    UESA-00200: cannot read [32768] bytes of disk N0005 to offset [140737488355328]

    UESA-00201: Disc N0005: "ORCL:FRA."

    UESA-00407: asmlib error! function = [asm_close], [0] = error mesg = [i/o Error]

    UESA-00200: cannot read [32768] bytes of the N0002 drive to offset [140737488355328]

    UESA-00201: Disc N0002: "ORCL: DATA".

    UESA-00407: asmlib error! function = [asm_close], [0] = error mesg = [i/o Error]

    UESA-00200: cannot read [32768] bytes of disk N0001 to offset [140737488355328]

    UESA-00201: Disc N0001: "ORCL:ASM_DATA1."

    UESA-00407: asmlib error! function = [asm_close], [0] = error mesg = [i/o Error]

    2016-01-18 14:18:28.360087*:kgfo.c@947: kgfo_kge2slos to kgfolclcpi1 error stack: UESA-00200: cannot read [32768] bytes of the N0008 drive to offset [140737488355328]

    UESA-00201: Disc N0008: 'ORCL:OCR_VD '.

    UESA-00407: asmlib error! function = [asm_close], [0] = error mesg = [i/o Error]

    UESA-00200: cannot read [32768] bytes of disk N0005 to offset [140737488355328]

    UESA-00201: Disc N0005: "ORCL:FRA."

    UESA-00407: asmlib error! function = [asm_close], [0] = error mesg = [i/o Error]

    UESA-00200: cannot read [32768] bytes of the N0002 drive to offset [140737488355328]

    UESA-00201: Disc N0002: "ORCL: DATA".

    UESA-00407: asmlib error! function = [asm_close], [0] = error mesg = [i/o Error]

    UESA-00200: cannot read [32768] bytes of disk N0001 to offset [140737488355328]

    UESA-00201: Disc N0001: "ORCL:ASM_DATA1."

    UESA-00407: asmlib error! function = [asm_close], [0] = error mesg = [i/o Error]

    2016-01-18 14:18:28.360087*:kgfo.c@2269: kgfoOpenDirty dg = OCR_VD diskstring = ORCL: * filename = / u01/app/grid/crsdata/alhijaz9/output/tmp_amdu_ocr_OCR_VD_01_18_2016_14_18_25

    2016-01-18 14:18:28.360324: OCRRAW: kgfoOpenDirty: dg = OCR_VD diskstring = ORCL: * filename = / u01/app/grid/crsdata/alhijaz9/output/tmp_amdu_ocr_OCR_VD_01_18_2016_14_18_25

    2016-01-18 14:18:28.360394: OCRRAW:-dump to trace to the error output.

    2016-01-18 14:18:28.360424: OCRRAW: error [kgfolclcpi1] [kgfokge] to kgfo.c:2287

    2016-01-18 14:18:28.360446: OCRRAW: UESA-00200: cannot read [32768] bytes of the

    Hello

    The root.sh failed because of problems with ASM.

    The issue was fixed by updating asmlib and then restarting it:

    /etc/init.d/oracleasm restart

    or

    /etc/init.d/oracleasm stop

    /etc/init.d/oracleasm start

    /etc/init.d/oracleasm scandisks

    /etc/init.d/oracleasm listdisks

  • Oracle RAC one node

    Hi all, I have a question I hope someone can help me.
    We need to deploy Oracle RAC One Node, but when I install the Oracle 11.2.0.3 database software I see two options: Oracle RAC One Node and Oracle RAC. I understand that I must select Oracle RAC One Node, and I also know that I can later convert my RAC One Node database to RAC. My question is: what is actually different when I choose RAC One Node versus RAC in the installer - for example, different executables, etc.? I want to be sure that I won't have problems in the future. Thank you.

    RAC One Node is a different product, with a different license, than RAC. Install the one that you have a license for.

  • FRA on NFS Oracle RAC one node

    Hi all

    We have installed Oracle RAC One Node on Oracle Linux. Everything seems to work fine except one small thing: we are trying to change the database to archivelog mode, but when we try to mount the database we get ORA-19816 "WARNING: files may exist in... that are not known to the database." and "Linux-x86_64 Error: 37: No locks available".

    The FRA is mounted as an NFS share with the following options: "rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,suid".

    I searched a lot on the Internet but couldn't find any hint. Can someone point me to the right installation guide?

    Thanks in advance

    Hello

    user10191672 wrote:
    Hi all

    We have installed Oracle RAC One Node on Oracle Linux. Everything seems to work fine except one small thing: we are trying to change the database to archivelog mode, but when we try to mount the database we get ORA-19816 "WARNING: files may exist in... that are not known to the database." and "Linux-x86_64 Error: 37: No locks available".

    The FRA is mounted as an NFS share with the following options: "rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,suid".

    I searched a lot on the Internet but couldn't find any hint. Can someone point me to the right installation guide?

    Check if the nfslock service is running... and if not, start it.

    # service nfslock status
    

    *Mount Options for Oracle files when used with NAS devices [ID 359515.1]*
    Mount options for Oracle data files:

    rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
    

    For RMAN backup sets, image copies and Data Pump dump files, the "noac" mount option should not be specified - this is because RMAN and Data Pump do not check this option, and specifying it can adversely affect performance.

    The following NFS options should be specified for an 11.2.0.2 RMAN disk backup directory:

    opts="-fstype=nfs,rsize=65536,wsize=65536,hard,actime=0,intr,nodev,nosuid"
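    As an illustration (a sketch only; the server name, export path and mount point below are placeholders), an /etc/fstab entry for an NFS-based FRA using the data-file options quoted above might look like:

    nas01:/export/fra  /u02/fra  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600  0 0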
    

    Hope this helps,
    Levi Pereira

    Published by: Levi Pereira on August 18, 2011 13:20

  • Oracle RAC 11g R2: node pinning or unpinning?

    Hi all, I am working with Oracle Clusterware & RAC 11 g R2.
    As this is my first time, I really don't understand the meaning of pinning or unpinning a node.
    Can someone help me please?

    Thanks in advance!

    11.2 RAC deployment guide

    Pinning a node means that the association of a node name with a node number is fixed. If a node is not pinned, its node number can change if the lease expires while it is down. The lease of a pinned node never expires.

    Given that your installation is a clean installation (no previous installation), you don't need to pin the nodes; it is handled by Oracle Clusterware.
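    To see and change the pinned state (a sketch; <nodename> is a placeholder, and pinning is normally not needed on a clean 11.2 install):

    olsnodes -t -n
    crsctl pin css -n <nodename>
    crsctl unpin css -n <nodename>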

    HTH
    Aman...

  • Online redo log order in Oracle RAC

    Dear,

    If I have 2 Oracle RAC nodes:

    Node 1: 2 online redo log groups (group 1 and group 2) - one member per group

    Node 2: 2 online redo log groups (group 3 and group 4) - one member per group

    I would like to know, when a log switch occurs on the redo log groups, whether it goes from group 1 to group 2 and then back to group 1... or

    from group 1 to group 2 and then to group 3?

    Regards

    Claude

    Each instance has a thread.  So the instance on node 1 has thread 1, which consists of groups 1 and 2.  All redo written by the node 1 instance will always go to groups 1 and 2 only - never to groups 3 and 4 (unless those groups are dropped and re-added to thread 1).

    Similarly, the instance on node 2 will write to groups 3 and 4 only.
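    A quick way to see the thread-to-group mapping (a sketch; run from either instance):

    SQL> select thread#, group#, status from v$log order by thread#, group#;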

    Hemant K Collette

  • Oracle RAC and replacement of equipment

    Hello world

    next year our Oracle RAC servers (the hardware) will no longer be supported. We have a 2-node RAC (SE) running on Linux 5. In your opinion, what is the easiest (and safest) strategy to replace the hardware:

    • A complete reinstall, with database export/import.
    • Adding new nodes to the cluster on the new servers and deleting the "old" ones (but it seems that the same operating system version is required, according to the documentation).
    • Cloning the cluster to the new servers according to the Oracle documentation.

    Thank you for all your ideas. I googled a bit, but I've not found much info.

    Carlos.

    • Adding new nodes to the cluster on the new servers and deleting the "old" ones (but it seems that the same operating system version is required, according to the documentation).

    That's what I would aim for. If the time you spend with both old and new nodes in the cluster is short (less than 24 hours), and you stay on the same distribution (e.g. OL5 to OL7), then most of the time this isn't a problem. If you still have concerns and don't feel comfortable with this recommendation, then I'll recommend another route. I have often used it when doing something similar for a production system that was too critical to take any risk.

    1. Stand up the new servers as a new cluster. You can even start with the latest/greatest Grid Infrastructure version, which may save you a future upgrade.

      1. This will require a new OCR and voting disk configuration.
    2. Attach the shared storage to the new cluster. For a short period of time, you will have the storage mounted to both the old and the new cluster.
    3. Install the RDBMS software on the new cluster. Carry over all the config files you might need... tnsnames.ora, sqlnet.ora, etc.
    4. When you are ready, take the database down on the old cluster and cut over to the new cluster.
    5. Use "srvctl add database" to add the database to the new cluster (see the sketch after this list).
    6. Use "srvctl add instance" to add the instances to the new cluster.
    7. (Optional but normally a good idea) use "srvctl add service" to add the services to the new cluster.
      1. Steps 5 and 6 above usually use the same db name, same instance names, and same service names.
    8. You now have the database running on the new hardware. Shut down the old servers.
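    A sketch of steps 5-7 (database, instance, node and service names below are placeholders; run from the new cluster):

    srvctl add database -d ORCL -o /u01/app/oracle/product/11.2.0/dbhome_1
    srvctl add instance -d ORCL -i ORCL1 -n newnode1
    srvctl add instance -d ORCL -i ORCL2 -n newnode2
    srvctl add service -d ORCL -s APP_SVC -r ORCL1,ORCL2
    srvctl start database -d ORCL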

    The only other sticking point would be handling the SCAN and the traditional VIPs. When I did this move, I usually had my network guy change the SCAN and traditional VIP names to be aliases in DNS pointing to the new VIPs.

    If you do your homework and have everything ready to go, the downtime can be less than 15 minutes for this approach.

    Cheers,
    Brian

  • Oracle RAC

    Hello

    We have an Oracle RAC (11.1.0.7) with two active nodes on AIX 5. We plan to build a third node, working as a passive node.

    Is this possible?

    Thank you!

    Hello

    Yes, you can add a third node to the cluster and keep it running without it serving any workload. As Richard-1 mentioned, it is better to run a 3-node RAC and spread services across all of them. Unless you want some kind of DR? Having said that, what are you trying to achieve?

    Kind regards

    EVS

  • Oracle RAC (physically separate buildings)

    Hello

    I would like to hear from anyone who has experience setting up an Oracle RAC across separate buildings.

    We have two data centers, and we intend to place the two nodes in physically separate buildings (one node per building).

    Data centers are in the same city.

    Could this be possible?

    Thank you!

    This looks like an extended cluster, a.k.a. "stretched RAC". It's a reasonably normal environment, although there are some considerations. You'll need storage on each side, mirrored with ASM, and a third independent site (an NFS server is fine) for a copy of the voting file. A 10Gb Ethernet link should be more than enough. If you need assistance configuring it, let me know.
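    For example (a sketch only; disk paths, failure group names and the disk group name are placeholders), the ASM mirroring plus a quorum copy of the voting file on NFS could be set up with a normal-redundancy disk group like this:

    SQL> create diskgroup DATA normal redundancy
      2    failgroup site_a disk '/dev/mapper/site_a_disk1'
      3    failgroup site_b disk '/dev/mapper/site_b_disk1'
      4    quorum failgroup site_c disk '/voting_nfs/vote_disk'
      5    attribute 'compatible.asm' = '11.2.0.0.0';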

    --

    John Watson

    Oracle Certified Master DBA

  • Oracle RAC private interconnectivity problem

    Hello

    We set up Oracle RAC on RHEL 6.5.

    Our reference document is "Deploying Oracle RAC 11g R2 Database on Red Hat Enterprise Linux 6", version 1.0, from the Red Hat Reference Architecture Series.

    According to the document:

    Two subnets have been used for interconnectivity.

    Two physical switches were used.

    (attached diagram: rac.png)

    My Question: I see that there is a connection between two switches. What is the purpose of this link?

    Please show me an example configuration of private switch A and private switch B.

    Thank you.

    Sajeeva.

    Because it uses the Redundant Interconnect feature (as the HAIP feature is officially called), I would say the networks need to be in different subnets. So, based on the diagram above, we will end up with the following configuration:

    eth1 of each server that is running on the same subnet:

    Server1 - 10.0.1.1

    Server2 - 10.0.1.2

    eth2 of each server that is running on the same subnet:

    Server1 - 10.0.2.1

    Server2 - 10.0.2.2

    FAQ for Highly Available IP (HAIP) for release 11.2 (Doc ID 1664291.1):

    "Each NIC defined as a cluster interconnect (using oifcfg setif) on a given node will have a static IP address assigned to it, and each cluster interconnect NIC on a given node must be on a unique subnet."

    The only advantage I see of HAIP is that all your interfaces are active at all times, whereas with bonding you almost always end up with an active/passive configuration.
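    For example (a sketch using the interfaces and subnets above; run as the Grid Infrastructure owner), the two private networks would be registered like this:

    oifcfg getif
    oifcfg setif -global eth1/10.0.1.0:cluster_interconnect
    oifcfg setif -global eth2/10.0.2.0:cluster_interconnect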

    Kind regards

    EVS

  • How to apply the PSU on Oracle RAC?

    Hello guys,

    How can I apply a PSU to an Oracle RAC database without any downtime? (Patching only the database, not the clusterware.)

    Let's say I have a 2-node RAC on different hosts with separate oracle/grid homes (nothing shared). What are the steps to avoid outages? Do I simply apply the latest OPatch to the grid/oracle homes, generate the ocm.rsp file as the oracle user, and then simply run as root: opatch auto /patch/path -oh <oracle_home> -ocmrf /path/ocm.rsp, and it will do everything for me on this node? Then move the services to the patched node and do the same on the other node? And there will be no interruption of service? Thank you!

    Gytis

    Hello

    Yes, the concept of a rolling patch is that the DB will always be up, albeit on one node or another (or maybe 2 of 3, etc.).

    Check out this doc, specifically section 14.2.4 on reducing planned downtime for maintenance.

    Essentially, one instance at a time will come down. You can either patch one node and then the other manually, or let Oracle do the rolling patch (but make sure you test it in a test configuration first, if you have one, or else stay on the same patch bundle just in case).

    The idea is that the VIP relocates to node 2, or, if you use SCAN, the apps will detect that node 1 is down.

    If possible, it is good to do this together with the apps team - ask them to move the connections from the middle-tier / application servers to the other node in the cluster, then patch the first node (rolling patch), then swap everything off node 1 and do node 2.

    The same steps apply for clusterware.
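    A sketch of one rolling pass (patch location, home and response file paths are placeholders; run as root on one node at a time):

    opatch auto /u01/stage/psu_patch -oh /u01/app/oracle/product/11.2.0/dbhome_1 -ocmrf /u01/stage/ocm.rsp
    # wait for services to relocate and the node to rejoin, then repeat on the next node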

    Cheers,

  • Oracle RAC 12.1.0.2 (GI stack) deployment on OEL 7 (3.8.13-35.3.4.el7uek) fails with ORA-27102

    Hi guys,

    I am currently trying to deploy a fresh, complete installation (GI stack) of Oracle RAC 12.1.0.2 on OEL 7 (3.8.13-35.3.4.el7uek) with 2 nodes for validation purposes. The installation itself went well, but the step that creates the container database for the Oracle Grid Infrastructure Management Repository fails with "ORA-01034: ORACLE not available / ORA-27102: out of memory / Linux-x86_64 Error: 12: Cannot allocate memory". The runInstaller validation steps completed successfully, but I never get past this configuration step without skipping it.

    Here is my configuration, which should normally avoid such ORA errors.

    Host

    -bash-4.2$ uname -a

    Linux OELRAC1 3.8.13-35.3.4.el7uek.x86_64 #2 SMP Tue Jul 29 23:24:14 CDT 2014 x86_64 x86_64 x86_64 GNU/Linux

    Errors from the Management Repository container database creation in /oracle/base/cfgtoollogs/dbca/_mgmtdb/trace.log:

    Starting restore at 09-AUG-14

    allocated channel: ORA_DISK_1

    channel ORA_DISK_1: SID=12 device type=DISK

    channel ORA_DISK_1: starting datafile backup set restore

    channel ORA_DISK_1: specifying datafile(s) to restore from backup set

    channel ORA_DISK_1: restoring datafile 00003 to +GRID

    channel ORA_DISK_1: reading from backup piece /oracle/grid/12102/assistants/dbca/templates/MGMTSeed_Database.dfb

    channel ORA_DISK_1: ORA-19870: error while restoring backup piece /oracle/grid/12102/assistants/dbca/templates/MGMTSeed_Database.dfb

    ORA-19504: failed to create file "+GRID"

    ORA-17502: ksfdcre:4 Failed to create file +GRID

    ORA-15001: diskgroup "GRID" does not exist or is not mounted

    ORA-01034: ORACLE not available

    ORA-27102: out of memory

    Linux-x86_64 Error: 12: Cannot allocate memory

    Additional information: 2663

    Additional information: 1565392897

    Additional information: 161480704

    failover to previous backup

    datafile file number=3 name=+GRID

    RMAN-00571: ===========================================================

    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

    RMAN-00571: ===========================================================

    RMAN-03002: failure of restore command at 09/08/2014 19:41:47

    ORA-01119: error in creating database file '+GRID'

    ORA-17502: ksfdcre:4 Failed to create file +GRID

    ORA-15001: diskgroup "GRID" does not exist or is not mounted

    ORA-01034: ORACLE not available

    ORA-27102: out of memory

    Linux-x86_64 Error: 12: Cannot allocate memory

    Additional information: 2663

    Additional information: 1565392897

    Additional information: 1614807040

    RMAN-06956: create datafile failed; retry after removing +GRID from OS

    SHM / IPC (check the shm id against the information above)

    -bash-4.2$ ipcs -a

    ------ Shared Memory Segments --------

    key        shmid      owner      perms      bytes      nattch     status

    0x00000000 1565360128 grid       640        4096       0

    0x00000000 1565392897 grid       640        4096       0

    0xfba47600 1565425666 grid       640        24576      29

    ASM instance memory parameters

    SQL> show parameter memory

    NAME                                 TYPE        VALUE

    ------------------------------------ ----------- ------------------------------

    memory_max_target                    big integer 1076M

    memory_target                        big integer 1076M

    ASM disk groups

    SQL> select NAME, STATE, TOTAL_MB, USABLE_FILE_MB from v$asm_diskgroup;

    NAME                           STATE         TOTAL_MB USABLE_FILE_MB

    ------------------------------ ----------- ---------- --------------

    GRID                           MOUNTED           6144           4868

    /dev/shm vs memory_target (more than enough free space)

    -bash-4.2$ df -h

    Filesystem      Size  Used Avail Use% Mounted on

    tmpfs           3.0G  630M  2.4G  21% /dev/shm

    Kernel SHM limit parameters set to unlimited

    -bash-4.2$ sysctl -a | grep shm

    kernel.shmall = 1152921504606846720

    kernel.shmmax = 922337203685477580

    User limits set to unlimited for memory

    -bash-4.2$ cat /etc/security/limits.conf

    # Oracle settings

    grid soft nproc 2047

    grid hard nproc 16384

    grid soft nofile 1024

    grid hard nofile 65536

    oracle soft nproc 2047

    oracle hard nproc 16384

    oracle soft nofile 1024

    oracle hard nofile 65536

    * hard memlock unlimited

    * soft memlock unlimited

    -bash-4.2$ su - grid

    -bash-4.2$ ulimit -a

    core file size          (blocks, -c) 0

    data seg size           (kbytes, -d) unlimited

    scheduling priority             (-e) 0

    file size               (blocks, -f) unlimited

    pending signals                 (-i) 23953

    max locked memory       (kbytes, -l) unlimited

    max memory size         (kbytes, -m) unlimited

    open files                      (-n) 1024

    pipe size            (512 bytes, -p) 8

    POSIX message queues     (bytes, -q) 819200

    real-time priority              (-r) 0

    stack size              (kbytes, -s) 8192

    cpu time               (seconds, -t) unlimited

    max user processes              (-u) 2047

    virtual memory          (kbytes, -v) unlimited

    file locks                      (-x) unlimited

    So what the hell is wrong here? Why does the ASM instance (+ASM1) return the errors below, even though there is no memory limit or memory problem? The disk group error also makes absolutely no sense.

    ------------------------------------------------------------------------------------------------------------------------

    ORA-01119: error in creating database file '+GRID'

    ORA-17502: ksfdcre:4 Failed to create file +GRID

    ORA-15001: diskgroup "GRID" does not exist or is not mounted

    ORA-01034: ORACLE not available

    ORA-27102: out of memory

    Linux-x86_64 Error: 12: Cannot allocate memory

    Additional information: 2663

    Additional information: 1565392897

    Additional information: 1614807040

    ------------------------------------------------------------------------------------------------------------------------

    Has anyone encountered the same problem with 12.1.0.2 when deploying the CDB for the Grid Infrastructure Management Repository? Is there something special about UEK3 (3.8.13-35.3.4.el7uek) on OEL 7? Is any special kernel parameter required (the runInstaller checks do not mention one)? I'm completely stumped by GI 12.1.0.2.

    Thank you.

    Best regards

    Stefan

    Hi guys,

    I was finally able to solve this problem.

    It was related to a memory over-provisioning problem in the virtual environment, as both nodes are VMs. Unfortunately, none of these memory errors were surfaced or pushed into the virtual machine in any way.
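    If you hit something similar, a quick sanity check inside the guest (a sketch; the values will differ in your environment) is to compare what the VM actually has available with what memory_target asks for:

    -bash-4.2$ free -m
    -bash-4.2$ grep -i memtotal /proc/meminfo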

    Best regards

    Stefan
