Enabling archiving in a RAC database

DB / Grid version: 11.2.0.3

I have a 3-node RAC database that is currently in NOARCHIVELOG mode. I want to enable archiving of the redo logs.

LOG_ARCHIVE_FORMAT and LOG_ARCHIVE_DEST_1 settings are already defined.

Here is what I intend to do to enable archiving.

I'll stop the database (i.e. shut down instances 1, 2 and 3), then start one instance in MOUNT state and issue:
alter database archivelog;
alter database open;
Are the above steps correct? My colleague suggested setting CLUSTER_DATABASE = FALSE before the activity and back to TRUE after archiving is enabled. Is that really necessary?

My colleague suggested setting CLUSTER_DATABASE = FALSE before the activity and back to TRUE after archiving is enabled. Is that really necessary?

Yes!

Edit: I retract that statement. That was the case for 9i and 10g. Just tested, and it does not seem to be the case for 11g.
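
For reference, a minimal sketch of the full sequence described above (the database name prod and the srvctl registration are assumptions about the environment):

$ srvctl stop database -d prod      # shut down instances 1, 2 and 3
$ sqlplus / as sysdba
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> shutdown immediate
$ srvctl start database -d prod     # restart all three instances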

Edited by: Freddie Essex on 17 April 2013 17:08

Tags: Database

Similar Questions

  • Enterprise Manager 12c lost connection to the RAC databases after the EM restart


    This last weekend, we restarted our main production RAC database (name = prod, instances = prod1 and prod2) and our certification test RAC database (name = cert, instances = cert1 and cert2).

    These databases had been set up and working OK as targets in EM 12c for a few months.  However, I was not able to connect to these databases in EM yesterday, so I rebooted both the EM repository database server and the EM application server.

    As of today, I can connect to the EM web console successfully, and I can bring up our non-RAC databases.  But if I try to bring up either of our RAC databases (prod or cert), or a specific instance (prod1, prod2, cert1 or cert2), I get errors like this:

    IO error: the network adapter could not establish the connection

    The connection descriptor was (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = tcp) (HOST = nhaoranla-oracle.nha.local)(PORT=1521))) (CONNECT_DATA = (SID = cert1) (SERVER = DEDICATED)))

    The server nhaoranla-oracle is where the cert1 database instance runs.  We use the standard port 1521 for all targets.

    Both RAC databases, prod and cert, are running, and user applications are able to connect to both of them.

    I have not changed anything in EM.  The only difference is last week's restart.

    I tried restarting the EM application server once again, but that solved nothing; I get the same error.

    Any suggestions or known bugs related to this problem would be appreciated.

    Figured out what the problem was... the TCP error message indicated it was looking for nhaoranla-oracle.nha.local, but I only had nhaoranla-oracle in /etc/hosts on the OEM application server.

    Lesson learned... if you add databases to OEM through node discovery, it may work initially, but it will fail when you restart your OEM server if you don't have good (complete) entries in /etc/hosts.
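
    For example, the /etc/hosts entry on the OEM application server should carry both the fully qualified and the short host name (the IP address here is just a placeholder):

    10.0.0.25   nhaoranla-oracle.nha.local   nhaoranla-oracle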

  • EBS R12.1.3 on a 12c RAC database

    We are running EBS 12.1.3 on a 3-node 11gR2 RAC cluster and it works well. We are currently working on upgrading our RAC database to 12c.

    Looking at Oracle Support, I can see there are notes related to interoperability patches as well as recommended 12c init parameters.

    I can't find any other relevant documents describing known performance problems and possible solutions.

    I'm sure other people have taken this path... could someone please provide input on what steps we should take to make sure we don't run into unforeseen problems.

    Let us know your opinion.

    TO THE

    I'm not aware of significant performance issues on top of a 12.1.0.1 database with R12, and I believe that if you apply all the patches in the following docs, then you should be good. If you have specific concerns, please do not hesitate to share them with us.

    Using Oracle 12c Release 1 Real Application Clusters with Oracle E-Business Suite Release 12 (Doc ID 1490850.1)

    Interoperability note EBS 12.0 or 12.1 with RDBMS 12cR1 (Doc ID 1524398.1)

    12.1.0.1 database with Oracle E-Business Suite certified

    https://blogs.Oracle.com/stevenChan/entry/12_1_0_1_db

    Thank you

    Hussein

  • Oracle RAC database freezing

    Hi all

    We use Oracle Database 11g version 11.2.0.1.0 Standard Edition RAC with two active instances.

    Recently we have been confronted with a database freezing problem; the regularly scheduled jobs did not run in the DB.

    Can anyone suggest how to find the reason for this problem and what needs to be monitored on the DB side for a few days?

    Thanks in advance.

    John, but this assumes the jobs actually ran - which may well not be the case.

    My beef with the question (as Konda explained) is problem ownership. The developers seem to refuse to own the problem of their process not running; they blame it on the database and now expect the DBA to own the problem, simply because of their unsubstantiated (and somewhat ignorant) opinion that the database "froze".

    It is unfair to expect the DBA to jump through all kinds of hoops to find the "problem" in a bottom-up analysis of the problem.

    A fundamental question for the developers should be: prove it. Show that the code ran (work performed). And then ask why there is no instrumentation in their code to simply record what happened...
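
    If the jobs in question are DBMS_SCHEDULER jobs (an assumption; the post does not say how they are scheduled), a quick way to check whether they actually ran during the "freeze" window is something like:

    SQL> select job_name, actual_start_date, status, error#
      2    from dba_scheduler_job_run_details
      3   where actual_start_date > systimestamp - interval '2' day
      4   order by actual_start_date;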

  • Multiple RAC databases on the same cluster using different subnets for the public interface

    Hello. We are setting up a 2-node cluster. This cluster will host several RAC databases. For security reasons, our network team wants to create separate subnets for the application traffic to each specific RAC database on the cluster.

    For example, application 1 has 2 application servers that will connect to RAC database PROD1 via one subnet, application 2 has 3 application servers that will connect to RAC database PROD2 via a different subnet, and so on.

    In addition, the network team wants to configure a separate management subnet that DBAs etc. will use to administer all the RAC databases and infrastructure in the cluster.

    The Grid Infrastructure version is 11.2.0.2. The database versions vary from 10.2.0.x to 11.2.0.2. All databases will use RAC.

    We want to take advantage of the SCAN listener features to support connectivity to databases on the cluster. Thread 2199620 [https://cn.forums.oracle.com/forums/thread.jspa?threadID=2199620] suggests that 11gR2 supports multiple subnets, which seems to be exactly the functionality we need. Please can you confirm how this works and point us to any documentation (standard docs, whitepapers, MOS notes, etc.) that could help us configure it.

    The document referenced in thread 2199620 was not exactly what we were looking for and didn't translate too well in Google Translate.

    Any guidance is appreciated. Thanks, Rich.

    Similar topics:

    https://CN.forums.Oracle.com/forums/thread.jspa?MessageID=9846298? (dual SCAN on a multi-hosted cluster)
    https://CN.forums.Oracle.com/forums/thread.jspa?threadID=2199620 (SCAN listener in an OAM VLAN)

    Edited by: 887449 on 26-Sep-2011 01:41

    Hello

    With Oracle 11.2, you can have multiple public networks accessing your Oracle RAC.
    You must set the new LISTENER_NETWORKS init.ora parameter so that users are load-balanced within their own network. Services are bound to networks, so users who connect via network 1 will use a different service than those on network 2. Each network will have its own VIPs.

    You cannot use SCAN on both networks, because SCAN works on only a single network, and with Grid 11.2 you cannot configure more than one SCAN.

    So you can have one public network (for example 10.10.10.0) with SCAN/VIPs, and on the other public network (e.g. 192.168.217.0) you will use only VIPs in TNSNAMES.ora.

    You configure one service (A) on the 10.10.10.0 network and another service (B) on the 192.168.217.0 network.

    In the example above, using service (A) you will configure the SCAN (scan host name), and using service (B) you must configure all the VIP addresses.
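
    A rough sketch of how this can be wired up (the listener names, host names and service names below are illustrative assumptions, not values from this thread):

    -- bind each instance to a listener on each network; SCAN only serves network 1
    alter system set listener_networks=
      '((NAME=net1)(LOCAL_LISTENER=listener_net1)(REMOTE_LISTENER=my-scan.example.com:1521))',
      '((NAME=net2)(LOCAL_LISTENER=listener_net2))'
      scope=both sid='*';

    # tnsnames.ora entry for service (B) clients: list the network-2 VIPs explicitly
    SRVB =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip2)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip2)(PORT = 1521)))
        (CONNECT_DATA = (SERVICE_NAME = srvb)))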

    Kind regards
    Levi Pereira

    Edited by: Levi Pereira on Sep 26, 2011 18:03

  • Archive logs for a RAC database on ASM

    Hello

    I have a question about archive logs for an ASM database running on RAC. I created a database orcl which has instance orcl1 on node1 and orcl2 on node2. For backups of this database, I enabled archivelog mode.

    After a few transactions and backups, I noticed that two sets of archive logs are created, one on each node, in the $ORACLE_HOME/dbs directory. On node 1 they start with arch1_* and on node 2 with arch2_*.

    Why does it create the archive logs on local disks, when ideally it should create them on ASM disks that are shared between the nodes? My backup application fails with an archive log not found error, because it looks for the archive logs on the other node.

    Any input on this will be helpful.

    Amith

    Hello

    I have a question about archive logs for an ASM database running on RAC. I created a database orcl which has instance orcl1 on node1 and orcl2 on node2. For backups of this database, I enabled archivelog mode.

    After a few transactions and backups, I noticed that two sets of archive logs are created, one on each node, in the $ORACLE_HOME/dbs directory. On node 1 they start with arch1_* and on node 2 with arch2_*.

    I believe the archive destination is missing from your database configuration, so Oracle uses the default location (i.e. your $ORACLE_HOME/dbs).

    The archive logs must go to a shared location.

    Check the parameter configuration below:

    SQL> show parameter db_recovery_file
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_recovery_file_dest                string
    db_recovery_file_dest_size           big integer 
    

    Or the default location configuration:

    SQL> show parameter log_archive_dest
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    log_archive_dest                     string
    

    Why does it create the archive logs on local disks, when ideally it should create them on ASM disks that are shared between the nodes? My backup application fails with an archive log not found error, because it looks for the archive logs on the other node.

    To resolve this problem see this example:

    SQL> show parameter recover
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_recovery_file_dest                string
    db_recovery_file_dest_size           big integer 1
    
    SQL> archive log list;
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            /u01/app/oracle/product/10.2.0/db_1/dbs/
    Oldest online log sequence     2
    Next log sequence to archive   3
    Current log sequence           3
    SQL>
    
    SQL> alter system set db_recovery_file_dest_size=20G scope=both sid='*';
    
    System altered.
    
    SQL> alter system set db_recovery_file_dest='+FRA' scope=both sid='*';
    
    System altered.
    
    SQL> archive log list;
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     5
    Next log sequence to archive   6
    Current log sequence           6
    SQL>
    

    With RMAN

    RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT 'sys/oracle@db10g1';
    
    new RMAN configuration parameters:
    CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT '*';
    new RMAN configuration parameters are successfully stored
    starting full resync of recovery catalog
    full resync complete
    
    RMAN>  CONFIGURE CHANNEL 2  DEVICE TYPE DISK CONNECT  'sys/oracle@db10g2';
    
    new RMAN configuration parameters:
    CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT '*';
    new RMAN configuration parameters are successfully stored
    starting full resync of recovery catalog
    full resync complete
    
    RMAN> list archivelog all;
    
    using target database control file instead of recovery catalog
    
    List of Archived Log Copies
    Key     Thrd Seq     S Low Time  Name
    ------- ---- ------- - --------- ----
    1       1    3       A 28-FEB-11 /u01/app/oracle/product/10.2.0/db_1/dbs/arch1_3_744216789.dbf
    2       2    2       A 27-FEB-11 /u01/app/oracle/product/10.2.0/db_1/dbs/arch2_2_744216789.dbf
    
    RMAN> crosscheck archivelog all;
    
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=127 instance=db10g1 devtype=DISK
    allocated channel: ORA_DISK_2
    channel ORA_DISK_2: sid=135 instance=db10g2 devtype=DISK
    validation succeeded for archived log
    archive log filename=/u01/app/oracle/product/10.2.0/db_1/dbs/arch1_3_744216789.dbf recid=1 stamp=744292116
    Crosschecked 1 objects
    
    validation succeeded for archived log
    archive log filename=/u01/app/oracle/product/10.2.0/db_1/dbs/arch2_2_744216789.dbf recid=2 stamp=743939327
    Crosschecked 1 objects
    
    RMAN> backup archivelog all delete input;
    
    Starting backup at 28-FEB-11
    current log archived
    using channel ORA_DISK_1
    using channel ORA_DISK_2
    channel ORA_DISK_1: starting archive log backupset
    channel ORA_DISK_1: specifying archive log(s) in backup set
    input archive log thread=1 sequence=3 recid=1 stamp=744292116
    channel ORA_DISK_1: starting piece 1 at 28-FEB-11
    channel ORA_DISK_2: starting archive log backupset
    channel ORA_DISK_2: specifying archive log(s) in backup set
    input archive log thread=2 sequence=2 recid=2 stamp=743939327
    channel ORA_DISK_2: starting piece 1 at 24-FEB-11
    channel ORA_DISK_1: finished piece 1 at 28-FEB-11
    piece handle=+FRA/db10g/backupset/2011_02_28/annnf0_tag20110228t120354_0.265.744293037 tag=TAG20110228T120354 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
    channel ORA_DISK_1: deleting archive log(s)
    archive log filename=/u01/app/oracle/product/10.2.0/db_1/dbs/arch1_3_744216789.dbf recid=1 stamp=744292116
    channel ORA_DISK_2: finished piece 1 at 24-FEB-11
    piece handle=+FRA/db10g/backupset/2011_02_24/annnf0_tag20110228t120354_0.266.743940249 tag=TAG20110228T120354 comment=NONE
    channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
    channel ORA_DISK_2: deleting archive log(s)
    archive log filename=/u01/app/oracle/product/10.2.0/db_1/dbs/arch2_2_744216789.dbf recid=2 stamp=743939327
    channel ORA_DISK_1: starting archive log backupset
    channel ORA_DISK_1: specifying archive log(s) in backup set
    input archive log thread=1 sequence=4 recid=4 stamp=744293023
    input archive log thread=2 sequence=3 recid=3 stamp=743940232
    channel ORA_DISK_1: starting piece 1 at 28-FEB-11
    channel ORA_DISK_1: finished piece 1 at 28-FEB-11
    piece handle=+FRA/db10g/backupset/2011_02_28/annnf0_tag20110228t120354_0.267.744293039 tag=TAG20110228T120354 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
    channel ORA_DISK_1: deleting archive log(s)
    archive log filename=+FRA/db10g/archivelog/2011_02_28/thread_1_seq_4.264.744293023 recid=4 stamp=744293023
    archive log filename=+FRA/db10g/archivelog/2011_02_24/thread_2_seq_3.263.743940231 recid=3 stamp=743940232
    Finished backup at 28-FEB-11
    
    Starting Control File and SPFILE Autobackup at 28-FEB-11
    piece handle=+FRA/db10g/autobackup/2011_02_28/s_744293039.263.744293039 comment=NONE
    Finished Control File and SPFILE Autobackup at 28-FEB-11
    
    SQL> alter system archive log current;
    
    System altered.
    
    RMAN> list archivelog all;
    
    using target database control file instead of recovery catalog
    
    List of Archived Log Copies
    Key     Thrd Seq     S Low Time  Name
    ------- ---- ------- - --------- ----
    5       1    5       A 28-FEB-11 +FRA/db10g/archivelog/2011_02_28/thread_1_seq_5.264.744293089
    6       2    4       A 24-FEB-11 +FRA/db10g/archivelog/2011_02_24/thread_2_seq_4.268.743940307
    
    RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE DISK CLEAR;
    
    old RMAN configuration parameters:
    CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT '*';
    old RMAN configuration parameters are successfully deleted
    
    RMAN> CONFIGURE CHANNEL 2 DEVICE TYPE DISK CLEAR;
    
    old RMAN configuration parameters:
    CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT '*';
    old RMAN configuration parameters are successfully deleted
    
    RMAN> exit
    
    Recovery Manager complete.
    

    Kind regards
    Levi Pereira

    Edited by: Levi Pereira on February 28, 2011 12:16

  • Duplicate target for RAC database

    Oracle Database 11g Enterprise Edition
    Red Hat Linux


    Hello

    I'm cloning a RAC database using the DUPLICATE TARGET DATABASE command. First of all, we need to create an auxiliary instance using the source RAC database's init file.

    I see the following in the init file,

    Mdb1.undo_tablespace='UNDOTBS1'
    MDB2.undo_tablespace='UNDOTBS2'


    What should I enter in the auxiliary instance's init file for these undo tablespaces? I'll set CLUSTER_DATABASE to FALSE since the duplicate database is a single-instance one. Only the source database is RAC.


    I'm cloning only the SYSTEM, SYSAUX and user tablespaces plus the two undo tablespaces, as of a point in time about 14 days back. I'm skipping all other tablespaces.

    You only need a single undo tablespace... you can also create one afterwards if you want.
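
    A minimal sketch of the relevant auxiliary init file entries, assuming the single-instance clone keeps UNDOTBS1 (the db_name value below is an assumption):

    # auxiliary (single-instance) init.ora - sketch
    db_name=mdb
    cluster_database=false
    undo_management=AUTO
    undo_tablespace=UNDOTBS1
    # the mdb2.undo_tablespace entry is not needed; drop the instance-prefixed parameters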

  • Archive mode in 11g RAC

    Hello

    I have a RAC database in noarchivelog mode. Do I need to take the database offline in order to put it into archivelog mode (http://www.toadworld.com/Newsletter/TWPIPELINESept2008/PIPESept08Oracle/tabid/450/Default.aspx)?

    Regarding the archive log destination, should I put the archives in shared storage? I think they could go on local drives, so that each instance would have its own archives. Is that right?

    Follow the steps below to put your database in archivelog mode.

    Putting the database in archivelog / noarchivelog mode in a RAC environment:

    ========================================================================

    1. Set cluster_database = false for the instance.

    alter system set cluster_database=false scope=spfile sid='prod1';

    2. Stop all instances accessing the database.

    srvctl stop database -d prod

    3. Mount the database using the local instance.

    Startup mount

    4. Enable archivelog / noarchivelog mode.

    ALTER database archivelog;

    OR

    ALTER database noarchivelog;

    5. Change the cluster_database parameter back to true for the prod1 instance.

    alter system set cluster_database=true scope=spfile sid='prod1';

    6. Shut down the local instance.

    Shutdown

    7. Start all the instances.

    srvctl start database -d prod

    To check the status of the database and its instances in a RAC environment:

    srvctl status database -d prod
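
    On the archive destination question above: the archived logs need to be readable from all nodes (for RMAN backups and for recovery), so a shared location such as an ASM disk group or the Fast Recovery Area is the usual choice. A minimal sketch, assuming a shared disk group named +FRA already exists:

    alter system set log_archive_dest_1='LOCATION=+FRA' scope=both sid='*';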

    Hope that answers your query

    Anil Malkai

  • 10gR2 non-RAC single-instance database to be upgraded to an 11gR2 2-node RAC database

    Hi all

    I have a requirement for my client.

    We have three databases with replication between them implemented with MVs and MV logs, on 10gR2 databases.

    These must be upgraded to Database 11gR2 as a 2-node RAC.

    Please tell us how to get started and how to proceed.

    Also, please advise how replication can be achieved once upgraded to Database 11gR2.

    Thank you

    Vijayaraghavan.K

    > We have to first upgrade to a single-instance 11gR2 database and then move to 2-node RAC

    You must install the Grid Infrastructure for a Cluster and the RAC database software on the new cluster.

    Then upgrade the database as a single-instance database.

    Then convert it to RAC.

    See http://docs.oracle.com/cd/E11882_01/rac.112/e17264/install_rac.htm#TDPRC185 for the conversion steps.

    Hemant K Collette

  • Single-instance database -> RAC database: estimating the SGA size for each instance

    Hello!

    Let's say there is a single-instance DB with a 100 GB SGA (assume this size is optimal for this database).

    This database is to be converted to a RAC database with 2 instances (assume the DB workload will be spread over the instances).

    How do you estimate the SGA size needed for each instance in the RAC database?

    Is it 50 GB + general RAC overhead (say 15%) for each instance?

    (Do not consider the situation where one of the instances has failed and all the load is redirected to the surviving instance.)

    I'd be grateful if someone could offer advice or provide useful references.

    ---

    Kind regards

    Kyrylo

    As a general rule, I would size the SGA on the new nodes to 120% of the single-instance SGA. Yes, this means that you will more than double the total size of the SGA. You will need additional space to hold the Cache Fusion memory overhead, and the buffer cache possibly needs a little more space to store the additional block versions that seem to accompany many RAC implementations. Now, it's just a rule of thumb, so it has a good chance of being very wrong.
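
    For example, applying that rule of thumb to the 100 GB single-instance SGA above: each of the two RAC instances would get an SGA of roughly 120 GB, i.e. about 240 GB across the cluster, rather than the 50 GB + 15% per instance suggested in the question.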

    You'll want to consider two big things... (1) The services being used. Will you take advantage of services to do application partitioning? If so, the instances can see different workloads, which can lead to different resource requirements. (2) Will you support some kind of failover (TAF or manual failover) for applications? If so, then the LMS must be sized sufficiently to support the additional workload.

    Cheers,
    Brian

  • Trying to clone a RAC database with RMAN; the target database is also RAC, but getting ORA-01503

    I'm trying to clone a RAC database with RMAN; the target database is also RAC. Everything went fine up to a point, but I am getting ORA-01503.

    Please find the script below; please help me solve this problem.

    connect target sys/p0ck3t@pcard1;

    auxiliary connection;

    connect catalog rmancat/rmanamex@rmandb;

    run {

    SQL "alter session set optimizer_mode = RULE ';"

    set until time "to_date('May 1 2015 06:30','Mon DD YYYY HH24:MI')";

    allocate auxiliary channel t1 type disk;

    allocate auxiliary channel t2 type disk;

    allocate auxiliary channel t3 type disk;

    the value of newname for datafile 4 to "+ DATA/pcardtst/datafile/users.304.723229675";

    the value of newname for datafile 3 to "+ DATA/pcardtst/datafile/sysaux.395.723228961";

    the value of newname for datafile 2 to "+ DATA/pcardtst/datafile/undotbs1_1.dbf";

    the value of newname for datafile 1 to "+ DATA/pcardtst/datafile/system.328.723229027";

    the value of newname for datafile 5 to "+ DATA/pcardtst/datafile/undotbs2.409.723228957";

    the value of newname for datafile 6 to "+ DATA/pcardtst/datafile/pwrcrd_usr01.dbf";

    the value of newname for datafile 7 to "+ DATA/pcardtst/datafile/pwrcrd_undotbs01.dbf";

    the value of newname for datafile 8 to "+ DATA/pcardtst/datafile/pwrcrd_undotbs02.dbf";

    the value of newname for datafile 9 to "+ DATA/pcardtst/datafile/pwrcrd_data_par01.dbf";

    the value of newname for datafile 10 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_par01.dbf";

    the value of newname for datafile 11 in "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part01_01.dbf";

    the value of newname for datafile 12 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part01_02.dbf";

    the value of newname for datafile 13 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part01_03.dbf";

    the value of newname for datafile 14 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part01_04.dbf";

    the value of newname for datafile 15 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_bo_part01_01.dbf";

    the value of newname for datafile 16 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_bo_part01_02.dbf";

    the value of newname for datafile 17 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part02_01.dbf";

    the value of newname for datafile 18 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part02_02.dbf";

    the value of newname for datafile 19 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part02_03.dbf";

    the value of newname for datafile 20 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part02_04.dbf";

    the value of newname for datafile 21 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_bo_part02_01.dbf";

    the value of newname for datafile 22 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_bo_part02_02.dbf";

    the value of newname for datafile 23 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part03_01.dbf";

    the value of newname for datafile 24 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part03_02.dbf";

    the value of newname for datafile 25 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part03_03.dbf";

    the value of newname for datafile 26 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_bo_part03_01.dbf";

    the value of newname for datafile 27 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_bo_part03_02.dbf";

    the value of newname for datafile 28 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part04_01.dbf";

    the value of newname for datafile 29 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part04_02.dbf";

    the value of newname for datafile 30 to "+ DATA/pcardtst/datafile/pwrcrd_data_bo_part04_03.dbf";

    the value of newname for datafile 31 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_bo_part04_01.dbf";

    the value of newname for datafile 32 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_bo_part04_02.dbf";

    the value of newname for datafile 33 to '+ DATA/pcardtst/datafile/pwrcrd_data_fe_part01_01.dbf;

    the value of newname for datafile 34 at "+ DATA/pcardtst/datafile/pwrcrd_data_fe_part01_02.dbf";

    the value of newname for datafile 35 to "+ DATA/pcardtst/datafile/pwrcrd_data_fe_part01_03.dbf";

    the value of newname for datafile 36 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_fe_part01_01.dbf";

    the value of newname for datafile 37 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_fe_part01_02.dbf";

    the value of newname for datafile 38 at "+ DATA/pcardtst/datafile/pwrcrd_data_fe_part02_01.dbf";

    the value of newname for datafile 39 to "+ DATA/pcardtst/datafile/pwrcrd_data_fe_part02_02.dbf";

    the value of newname for datafile 40 to "+ DATA/pcardtst/datafile/pwrcrd_data_fe_part02_03.dbf";

    the value of newname for datafile 41 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_fe_part02_01.dbf";

    the value of newname for datafile 42 at "+ DATA/pcardtst/datafile/pwrcrd_ndx_fe_part02_02.dbf";

    the value of newname for datafile 43 at "+ DATA/pcardtst/datafile/pwrcrd_data_fe_part03_01.dbf";

    the value of newname for datafile 44 to "+ DATA/pcardtst/datafile/pwrcrd_data_fe_part03_02.dbf";

    the value of newname for datafile 45 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_fe_part03_01.dbf";

    the value of newname for datafile 46 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_fe_part03_02.dbf";

    the value of newname for datafile 47 to "+ DATA/pcardtst/datafile/pwrcrd_data_fe_part04_01.dbf";

    the value of newname for datafile 48 to "+ DATA/pcardtst/datafile/pwrcrd_data_fe_part04_02.dbf";

    the value of newname for datafile 49 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_fe_part04_01.dbf";

    the value of newname for datafile 50 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_fe_part04_02.dbf";

    the value of newname for datafile 51 to "+ DATA/pcardtst/datafile/pwrcrd_data_batch01.dbf";

    the value of newname for datafile 52 at "+ DATA/pcardtst/datafile/pwrcrd_data_batch02.dbf";

    the value of newname for datafile 53 to '+ DATA/pcardtst/datafile/pwrcrd_data_batch03.dbf;

    the value of newname for datafile 54 to "+ DATA/pcardtst/datafile/pwrcrd_data_batch04.dbf";

    the value of newname for datafile 55 to "+ DATA/pcardtst/datafile/pwrcrd_data_batch05.dbf";

    the value of newname for datafile 56 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_batch01.dbf";

    the value of newname for datafile 57 to "+ DATA/pcardtst/datafile/pwrcrd_ndx_batch02.dbf";

    the value of newname for datafile 58 at "+ DATA/pcardtst/datafile/pwrcrd_ndx_batch03.dbf";

    the value of newname for datafile 59 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part01_01.dbf";

    the value of newname for datafile 60 to "+ DATA/pcardtst/datafile/pwrcrd_index_hist_part01_01.dbf";

    the value of newname for datafile 61 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part02_01.dbf";

    the value of newname for datafile 62 at "+ DATA/pcardtst/datafile/pwrcrd_index_hist_part02_01.dbf";

    the value of newname for datafile 63 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part03_01.dbf";

    the value of newname for datafile 64 at "+ DATA/pcardtst/datafile/pwrcrd_data_index_part03_01.dbf";

    the value of newname for datafile 65 at "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part04_01.dbf";

    the value of newname for datafile 66 to "+ DATA/pcardtst/datafile/pwrcrd_index_hist_part04_01.dbf";

    the value of newname for datafile 67 to '+ DATA/pcardtst/datafile/pwrcrd_data_hist_part05_01.dbf;

    the value of newname for datafile 68 to "+ DATA/pcardtst/datafile/pwrcrd_index_hist_part05_01.dbf";

    the value of newname for datafile 69 at "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part06_01.dbf";

    the value of newname for datafile 70 to "+ DATA/pcardtst/datafile/pwrcrd_index_hist_part06_01.dbf";

    the value of newname for datafile 71 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part07_01.dbf";

    the value of newname for datafile 72 at "+ DATA/pcardtst/datafile/pwrcrd_index_hist_part07_01.dbf";

    the value of newname for datafile 73 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part08_01.dbf";

    the value of newname for datafile 74 at "+ DATA/pcardtst/datafile/pwrcrd_index_hist_part08_01.dbf";

    the value of newname for datafile 75 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part09_01.dbf";

    the value of newname for datafile 76 to '+ DATA/pcardtst/datafile/pwrcrd_index_hist_part09_01.dbf;

    the value of newname for datafile 77 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part10_01.dbf";

    the value of newname for datafile 78 to "+ DATA/pcardtst/datafile/pwrcrd_index_hist_part10_01.dbf";

    the value of newname for datafile 79 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part11_01.dbf";

    the value of newname for datafile 80 to "+ DATA/pcardtst/datafile/pwrcrd_index_hist_part11_01.dbf";

    the value of newname for datafile 81 to '+ DATA/pcardtst/datafile/pwrcrd_data_hist_part12_01.dbf;

    the value of newname for datafile 82 to "+ DATA/pcardtst/datafile/pwrcrd_index_hist_part12_01.dbf";

    the value of newname for datafile 83 at "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part13_01.dbf";

    the value of newname for datafile 84 to "+ DATA/pcardtst/datafile/pwrcrd_index_hist_part13_01.dbf";

    the value of newname for datafile 85 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part14_01.dbf";

    the value of newname for datafile 86 to '+ DATA/pcardtst/datafile/pwrcrd_index_hist_part14_01.dbf;

    the value of newname for datafile 87 to "+ DATA/pcardtst/datafile/powercard_users.375.723228963";

    the value of newname for datafile 88 to "+ DATA/pcardtst/datafile/powercard_users.396.723228959";

    the value of newname for datafile 89 to "+ DATA/pcardtst/datafile/powercard_users.399.723228959";

    the value of newname for datafile 90 to "+ DATA/pcardtst/datafile/powercard_users.393.723228961";

    the value of newname for datafile 91 to '+ DATA/pcardtst/datafile/mig_pc_delivery.309.723228955;

    the value of newname for datafile 92 to "+ DATA/pcardtst/datafile/powercard_index_par.403.723228961";

    the value of newname for datafile 93 to "+ DATA/pcardtst/datafile/powercard_users.381.723228963";

    the value of newname for datafile 94 to "+ DATA/pcardtst/datafile/powercard_index_hist_part01.275.723229411";

    the value of newname for datafile 95 at "+ DATA/pcardtst/datafile/powercard_data_fe_part01.364.723229607";

    the value of newname for datafile 96 to "+ DATA/pcardtst/datafile/powercard_data_hist_part05.380.723228963";

    the value of newname for datafile 97 to '+ DATA/pcardtst/datafile/powercard_data_hist_part10.310.723228959;

    the value of newname for datafile 98 in "+ DATA/pcardtst/datafile/powercard_data_hist_part14.268.723229465";

    the value of newname for datafile 99 to "+ DATA/pcardtst/datafile/powercard_index_hist_part08.367.723229607";

    the value of newname for datafile 100 to "+ DATA/pcardtst/datafile/powercard_index_hist_part09.385.723229087";

    the value of newname for datafile 101 to "+ DATA/pcardtst/datafile/powercard_index_hist_part11.286.723229559";

    the value of newname for datafile 102 to "+ DATA/pcardtst/datafile/powercard_index_hist_part13.355.723229615";

    the value of newname for datafile 103 to "+ DATA/pcardtst/datafile/powercard_data_batch.330.723228961";

    the value of newname for datafile 104 to "+ DATA/pcardtst/datafile/powercard_data_hist_part13.335.723229485";

    the value of newname for datafile 105 to "+ DATA/pcardtst/datafile/powercard_index_bo_part03.285.723229567";

    the value of newname for datafile 106 to "+ DATA/pcardtst/datafile/powercard_index_bo_part01.351.723229617";

    the value of newname for datafile 107 to "+ DATA/pcardtst/datafile/powercard_index_fe_part04.261.723229627";

    the value of newname for datafile 108 to "+ DATA/pcardtst/datafile/powercard_index_hist_part02.264.723229465";

    the value of newname for datafile 109 to "+ DATA/pcardtst/datafile/powercard_data_bo_part01.284.723229573";

    the value of newname for datafile 110 to "+ DATA/pcardtst/datafile/powercard_data_hist_part01.406.723229497";

    the value of newname for datafile 111 to "+ DATA/pcardtst/datafile/powercard_data_hist_part07.291.723229557";

    the value of newname for datafile 112 to "+ DATA/pcardtst/datafile/powercard_data_hist_part08.371.723229615";

    the value of newname for datafile 113 to "+ DATA/pcardtst/datafile/powercard_data_hist_part09.346.723228973";

    the value of newname for datafile 114 to "+ DATA/pcardtst/datafile/powercard_data_hist_part12.401.723229447";

    the value of newname for datafile 115 to "+ DATA/pcardtst/datafile/powercard_index_fe_part02.262.723229625";

    the value of newname for datafile 116 to "+ DATA/pcardtst/datafile/powercard_data_fe_part04.374.723229627";

    the value of newname for datafile 117 to "+ DATA/pcardtst/datafile/powercard_data_bo_part03.276.723229585";

    the value of newname for datafile 118 to "+ DATA/pcardtst/datafile/powercard_data_bo_part04.350.723229633";

    the value of newname for datafile 119 to "+ DATA/pcardtst/datafile/powercard_index_bo_part04.317.723229633";

    the value of newname for datafile 120 to "+ DATA/pcardtst/datafile/powercard_data_hist_part03.400.723229511";

    the value of newname for datafile 121 in '+ DATA/pcardtst/datafile/powercard_index_hist_part12.369.723229605;

    the value of newname for datafile 122 to "+ DATA/pcardtst/datafile/powercard_index_hist_part05.373.723229641";

    the value of newname for datafile 123 to "+ DATA/pcardtst/datafile/powercard_data_par.319.723229635";

    the value of newname for datafile 124 to "+ DATA/pcardtst/datafile/powercard_users.402.723228957";

    the value of newname for datafile 125 to "+ DATA/pcardtst/datafile/powercard_index_batch.356.723229609";

    the value of newname for datafile 126 to "+ DATA/pcardtst/datafile/powercard_index_bo_part02.256.723229647";

    the value of newname for datafile 127 to "+ DATA/pcardtst/datafile/powercard_index_hist_part07.320.723229641";

    the value of newname for datafile 128 to "+ DATA/pcardtst/datafile/powercard_index_hist_part04.258.723229615";

    the value of newname for datafile 129 to "+ DATA/pcardtst/datafile/powercard_data_bo_part02.366.723229651";

    the value of newname for datafile 130 to "+ DATA/pcardtst/datafile/powercard_data_hist_part06.348.723228965";

    the value of newname for datafile 131 to "+ DATA/pcardtst/datafile/powercard_data_hist_part02.307.723229525";

    the value of newname for datafile 132 to "+ DATA/pcardtst/datafile/powercard_index_fe_part01.370.723229647";

    the value of newname for datafile 133 to "+ DATA/pcardtst/datafile/powercard_index_fe_part03.352.723229619";

    the value of newname for datafile 134 to "+ DATA/pcardtst/datafile/powercard_index_hist_part06.266.723229659";

    the value of newname for datafile 135 to "+ DATA/pcardtst/datafile/powercard_index_hist_part10.329.723229027";

    the value of newname for datafile 136 to "+ DATA/pcardtst/datafile/powercard_index_hist_part03.337.723229651";

    the value of newname for datafile 137 to "+ DATA/pcardtst/datafile/powercard_index_hist_part14.263.723229627";

    the value of newname for datafile 138 to "+ DATA/pcardtst/datafile/stpmw_data01.dbf";

    the value of newname for datafile 139 to "+ DATA/pcardtst/datafile/undotbs1_2.dbf";

    the value of newname for datafile 140 to "+ DATA/pcardtst/datafile/undotbs1_3.dbf";

    the value of newname for datafile 141 to "+ DATA/pcardtst/datafile/undotbs1_4.dbf";

    the value of newname for datafile 142 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part05_03.dbf";

    the value of newname for datafile 143 to "+ DATA/pcardtst/datafile/powercard_data_hist_part05.426.851242407";

    the value of newname for datafile 144 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part11_02.dbf";

    the value of newname for datafile 145 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part12_02.dbf";

    the value of newname for datafile 146 to "+ DATA/pcardtst/datafile/pwrcrd_data_hist_part05_04.dbf";

    the value of newname for datafile 147 in "+ DATA/pcardtst/datafile/pwrcrd_index_hist_part05_02.dbf";

    the value of newname for tempfile 1 to "+ DATA/pcardtst/tempfile/temp1.dbf";

    the value of newname for tempfile 2 to "+ DATA/pcardtst/tempfile/temp2.dbf";

    the value of newname for tempfile 3 to "+ DATA/pcardtst/tempfile/temp3.dbf";

    the value of newname for tempfile 4 to "+ DATA/pcardtst/tempfile/pwrcard_temp01.dbf";

    the value of newname for tempfile 5 to "+ DATA/pcardtst/tempfile/pwrcard_temp02.dbf";

    the value of newname for tempfile 6 to "+ DATA/pcardtst/tempfile/pwrcard_temp03.dbf";

    the value of newname for tempfile 7 to "+ DATA/pcardtst/tempfile/pwrcard_temp04.dbf";

    duplicate target database to pcardtst logfile

    GROUP 10 ('+ DATA/pcardtst/onlinelog/group_10_01') size 512 M,

    GROUP 10 ('+ DATA/pcardtst/onlinelog/group_10_02') size 512 M,

    GROUP 11 ('+ DATA/pcardtst/onlinelog/group_11_01') size 512 M,

    GROUP 11 ('+ DATA/pcardtst/onlinelog/group_11_02') size 512 M,

    GROUP 12 ('+ DATA/pcardtst/onlinelog/group_12_01') size 512 M,

    GROUP 12 ('+ DATA/pcardtst/onlinelog/group_12_02') size 512 M,

    GROUP 13 ('+ DATA/pcardtst/onlinelog/group_13_01') size 512 M,

    GROUP 13 ('+ DATA/pcardtst/onlinelog/group_13_02') size 512 M,

    GROUP 14 ('+ DATA/pcardtst/onlinelog/group_14_01') size 512 M,

    GROUP 14 ('+ DATA/pcardtst/onlinelog/group_14_02') size 512 M,

    GROUP 7 ('+ DATA/pcardtst/onlinelog/group_7_01') size 512 M,

    GROUP 7 ('+ DATA/pcardtst/onlinelog/group_7_02') size 512 M,

    GROUP 8 ('+ DATA/pcardtst/onlinelog/group_8_01') size 512 M,

    GROUP 8 ('+ DATA/pcardtst/onlinelog/group_8_02') size 512 M,

    GROUP 9 ('+ DATA/pcardtst/onlinelog/group_9_01') size 512 M,

    GROUP 9 ('+ DATA/pcardtst/onlinelog/group_9_02') size 512 M;

    release channel t1;

    release channel t2;

    release channel t3;

    }

    But I am getting the error below:

    DATA FILE

    "+ DATA/pcardtst/datafile/system.635.878984933".

    CHARACTER SET WE8ISO8859P1

    released channel: t1

    released channel: t2

    released channel: t3

    RMAN-00571: ===========================================================

    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

    RMAN-00571: ===========================================================

    RMAN-03002: failure of Duplicate Db command at 06/05/2015 14:26:01

    RMAN-06136: ORACLE error from auxiliary database: ORA-01503: CREATE CONTROLFILE failed

    ORA-01167: two files are the same file/group number or the same file

    ORA-01517: log member: '+DATA/pcardtst/onlinelog/group_10_02'

    ORA-01517: log member: '+DATA/pcardtst/onlinelog/group_10_01'

    Recovery Manager complete.

    Kind regards.

    Younus

    Hi Eric,

    That is not the correct syntax for the logfile clause. Check out the docs:

    http://docs.Oracle.com/CD/B28359_01/backup.111/b28270/rcmdupdb.htm#BRADV89956

    You only need to name each group once, followed by a comma-separated list of the logfile members.
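
    For example, a sketch of the corrected logfile clause using the group 10 and 11 members from the script above (the remaining groups follow the same pattern):

    duplicate target database to pcardtst logfile
      GROUP 10 ('+DATA/pcardtst/onlinelog/group_10_01',
                '+DATA/pcardtst/onlinelog/group_10_02') SIZE 512M,
      GROUP 11 ('+DATA/pcardtst/onlinelog/group_11_01',
                '+DATA/pcardtst/onlinelog/group_11_02') SIZE 512M,
      ...;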

    Concerning

    Thomas

  • How to use RMAN backups to restore a RAC database to a single instance on another host?

    How can I use RMAN backups to restore a RAC database to a single instance on another host?

    I tried to copy these inline for you:

    ------------




    HowTo Restore RMAN Disk backups of RAC Database to Single Instance On Another Node (Doc ID 415579.1)

    Down

  • How to find the list of users connected to the RAC database?

    Friends,

    OS: OEL 6.3 64-bit
    DB: 11gR2 (11.2.0.3)
    2-node RAC on ASM.

    On node 1, I connected as sys,
    and on the second node I logged in as hr.
    When I check the user names in v$session on node 1, I do not see the hr user name...
    How can I check the list of user names for all users connected to the RAC database?
    If I check v$session on each node separately it works, but how can I check from any one node for all the logged-in user names?

    Thank you

    Use gv$session.

    Selecting INST_ID tells you which instance each session is on.
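
    For example, a quick sketch run from any one node:

    SQL> select inst_id, username, count(*) as sessions
      2    from gv$session
      3   where username is not null
      4   group by inst_id, username
      5   order by inst_id, username;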

  • What can I do with the database in offline mode

    What can I do with the database in offline mode?
    Other than helping developers understand the structure of database objects, what can I do with offline database objects at development time?
    Is there any way I can refer to the tables defined in the offline database from EOs and VOs, even if the tables have not been defined in the online database?

    Thank you

    There are some useful articles and docs on the web:
    http://download.Oracle.com/docs/CD/E18941_01/tutorials/jdtut_11r2_81/jdtut_11r2_81_2.html
    and http://susanduncan.blogspot.com/2011/06/fine-tuning-your-logical-to-physical-db.html
    and a tutorial http://st-curriculum.oracle.com/obe/jdev/obe11jdev/ps1/databasedevelopment/obe_%20databasedevmt.htm

    Timo

  • What are the steps to merge datafiles?

    Hello

    I have 23 datafiles in a tablespace. I want to merge those 23 datafiles into 4 or 5 datafiles. What are the steps to merge the datafiles?

    Edited by: mithun on 22 October 2011 23:29

    I have 23 datafiles in a tablespace. I want to merge those 23 datafiles into 4 or 5 datafiles. What are the steps to merge the datafiles?

    Create a new tablespace with 4 or 5 datafiles and use:
    alter table <table_name> move tablespace new_tablespace;
    Once completed, drop the old tablespace.
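
    A minimal sketch of that approach (the tablespace, schema and file names below are assumptions; indexes become UNUSABLE after a move and need to be rebuilt, and LOB segments have to be moved separately):

    -- create the replacement tablespace with fewer, larger datafiles
    create tablespace data_new datafile '+DATA' size 10g autoextend on next 1g maxsize 32g;

    -- move each table (repeat per table, or generate the statements from dba_tables)
    alter table app_owner.some_table move tablespace data_new;

    -- rebuild any indexes made UNUSABLE by the move
    alter index app_owner.some_table_pk rebuild tablespace data_new;

    -- once everything is moved, drop the old tablespace
    drop tablespace old_data including contents and datafiles;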
