Flashback Data Archive and Data Guard

Hello

I have an 11.2.0.1 database with a standby database.
If I put a few tables into a Flashback Data Archive (FDA-tracked tables), will they be copied to the standby database as well?

Regards

Hello;

I hesitate to answer because you don't seem to close your old questions or give points to those who help. But I also believe in giving the benefit of the doubt.

If the FDA is configured, then the answer is yes.

While it is not a requirement, I would always use the FRA (fast recovery area) with Data Guard.

db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'

The following are some sample parameter values that might be used to configure a physical standby database to archive its standby redo log to the fast recovery area:

LOG_ARCHIVE_DEST_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)' LOG_ARCHIVE_DEST_STATE_2=ENABLE

Source

Data Guard Concepts and Administration 11g Release 2 (11.2), E10700-02

Using RMAN Effectively in a Data Guard Environment [ID 848716.1]
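
For reference, a minimal sketch of setting these on the standby (the size and path below are placeholders, not values from the original post):

ALTER SYSTEM SET db_recovery_file_dest_size = 20G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/flash_recovery_area' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE=BOTH;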

Best regards

mseberg

Published by: mseberg on December 1, 2012 13:53

Tags: Database

Similar Questions

  • Data Guard and Flashback

    I will have an Active Data Guard setup with one primary instance and one standby instance. It is a non-RAC environment. Both my primary and standby databases will have Flashback Database turned on. I plan a flashback retention of one day, i.e. I should be able to flash the database back to any point in the last 24 hours.

    When I take the RMAN backup on the primary and remove archive logs using the RMAN command, will RMAN avoid removing archive logs that are still required for Flashback Database (the 24-hour window)?

    When I apply archived logs on the standby site, I want to remove the archived logs that have been applied but are no longer required for Flashback Database. How can I determine which those are? In the worst case, could I simply always keep one day's worth of archived logs on the standby?

    Thank you.

    You do not need to remove flashback logs yourself. Oracle automatically manages the removal of flashback logs.

    Configure an RMAN retention policy to remove archive logs older than a certain point.

    Configure an archivelog deletion policy to manage the removal of logs that have been applied on the standby.

    for example:

    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

    In the worst case, if you find you need archivelogs that have already been removed from disk, you can restore them, because they will already have been backed up by RMAN.
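
    As a hedged illustration (these are standard dynamic views, but treat this as a sketch rather than the poster's method), you could check how far back Flashback Database can currently go and which logs the standby has already applied:

    -- on either database: current Flashback Database window
    SELECT oldest_flashback_scn, oldest_flashback_time FROM v$flashback_database_log;

    -- on the standby: highest archived sequence already applied
    SELECT MAX(sequence#) FROM v$archived_log WHERE applied = 'YES';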

  • Data Guard and connectable to database

    Is it possible to have some PDBs in a CDB protected by Data Guard, while other PDBs in the same CDB are not?

    No, not in the current version. You must use Data Guard for all the PDBs or for none of them.

  • Data Guard and auditor of the Apex

    Hi all

    I can't seem to figure out the configuration parameters for the APEX Listener when connecting to a Data Guard database.

    We have a lot of database servers (without Data Guard) and we use the command-line wizard to generate the configuration file. It is straightforward and
    easy: we enter the server, port, and service_name and it works.

    But for a Data Guard database there are two servers, and I can't seem to set it up the right way. After doing some research I came across the parameter apex.db.customURL, which should solve the problem.

    I removed the references to the server, port, and service_name and put that key in instead.

    The result was connection errors due to an incorrect port setting.

    SEVERE: The pool named: apex is not correctly configured, error: IO Error: Invalid number format for port number

    oracle.dbtools.common.jdbc.ConnectionPoolException: The pool named: apex is not correctly configured, error: IO Error: Invalid number format for port number

    at oracle.dbtools.common.jdbc.ConnectionPoolException.badConfiguration(ConnectionPoolException.java:65)

    What am I missing?

    Thank you
    Michael

    (Here is the rest of our configuration:)

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
    <properties>
    <comment>saved on Mon Oct 19 18:28:41 CEST 2015</comment>
    <entry key="debug.printDebugToScreen">false</entry>
    <entry key="security.disableDefaultExclusionList">false</entry>
    <entry key="db.password">@055EA3CC68C35F70CF34A203A8EE1A55D411997069F6AE9053B3D1F0B951D84E0E</entry>
    <entry key="cache.maxEntries">500</entry>
    <entry key="error.maxEntries">50</entry>
    <entry key="security.maxEntries">2000</entry>
    <entry key="cache.directory">/tmp/apex/cache</entry>
    <entry key="jdbc.DriverType">thin</entry>
    <entry key="log.maxEntries">50</entry>
    <entry key="jdbc.MaxConnectionReuseCount">1000</entry>
    <entry key="log.logging">false</entry>
    <entry key="jdbc.InitialLimit">3</entry>
    <entry key="jdbc.MaxLimit">10</entry>
    <entry key="cache.monitorInterval">60</entry>
    <entry key="cache.expiration">7</entry>
    <entry key="jdbc.statementTimeout">900</entry>
    <entry key="jdbc.MaxStatementsLimit">10</entry>
    <entry key="misc.defaultPage">apex</entry>
    <entry key="misc.compress"/>
    <entry key="jdbc.MinLimit">1</entry>
    <entry key="cache.type">lru</entry>
    <entry key="cache.caching">false</entry>
    <entry key="error.keepErrorMessages">true</entry>
    <entry key="cache.procedureNameList"/>
    <entry key="cache.duration">days</entry>
    <entry key="jdbc.InactivityTimeout">1800</entry>
    <entry key="debug.debugger">false</entry>
    <entry key="db.customURL">jdbc:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=DB-ENDUR)(PORT=1520))(ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=DB-ENDURK)(PORT=1521))(LOAD_BALANCE=off)(FAILOVER=on))(CONNECT_DATA=(SERVICE_NAME=ENDUR_PROD.VERBUND.CO.AT)))</entry>
    </properties>



    Hi Michael Weinberger,

    Michael Weinberger wrote:

    I can't seem to figure out the configuration parameters for the APEX Listener when connecting to a Data Guard database.

    We have a lot of database servers (without Data Guard) and we use the command-line wizard to generate the configuration file. It is straightforward and
    easy: we enter the server, port, and service_name and it works.

    But for a Data Guard database there are two servers, and I can't seem to set it up the right way. After doing some research I came across the parameter apex.db.customURL, which should solve the problem.

    I removed the references to the server, port, and service_name and put that key in instead.

    Keep the references to the server name, port, and service name. There is no need to delete them.

    The result was connection errors due to an incorrect port setting.

    SEVERE: The pool named: apex is not correctly configured, error: IO Error: Invalid number format for port number

    oracle.dbtools.common.jdbc.ConnectionPoolException: The pool named: apex is not correctly configured, error: IO Error: Invalid number format for port number

    at oracle.dbtools.common.jdbc.ConnectionPoolException.badConfiguration(ConnectionPoolException.java:65)

    What am I missing?

    You need to create two entries in the "defaults.xml" configuration file for your ORDS (formerly APEX Listener).

    One for db.connectionType and one for db.customURL, for example:

    <entry key="db.connectionType">customurl</entry>
    <entry key="db.customURL">jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=
    (ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=DB-ENDUR)(PORT=1520))
    (ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=DB-ENDURK)(PORT=1521))
    (LOAD_BALANCE=off)(FAILOVER=on))(CONNECT_DATA=(SERVICE_NAME=ENDUR_PROD.VERBUND.CO.AT)))</entry>

    Reference: http://docs.oracle.com/cd/E56351_01/doc.30/e56293/config_file.htm#AELIG7204

    NOTE: After you change the configuration file, don't forget to restart standalone ORDS, or the Java EE application server if ORDS is deployed on one.
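
    Before digging further, it can also help to confirm that the connect descriptor itself works outside of ORDS. A minimal sketch (the username is a placeholder; use your own schema and the descriptor from db.customURL):

    sqlplus apex_public_user@'(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=DB-ENDUR)(PORT=1520))(ADDRESS=(PROTOCOL=TCP)(HOST=DB-ENDURK)(PORT=1521))(LOAD_BALANCE=off)(FAILOVER=on))(CONNECT_DATA=(SERVICE_NAME=ENDUR_PROD.VERBUND.CO.AT)))'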

    Also check whether your JDBC connection URL works correctly on its own; if there are still issues, you can turn on debugging for ORDS:

    Reference:

    As advised by Tony, you should post ORDS-related questions to the appropriate forum. Reference: ORDS, SODA & JSON in the Database

    You can also have this thread moved to the ORDS forum.

    Kind regards

    Kiran

  • Problem with a DB link between Active Data Guard and a reporting application database

    My database version is 11.2.0.2.0 and the OS is Oracle Solaris 10 9/10.
    I am facing a problem with my Active Data Guard database, which is used for reporting purposes. The Active Data Guard information is below.

    SQL> select name, database_role, open_mode from v$database;

    NAME      DATABASE_ROLE    OPEN_MODE
    --------- ---------------- --------------------
    ORCL      PHYSICAL STANDBY READ ONLY WITH APPLY

    The problem detail is below.
    ------------------------------
    I created a db link (name: DATADB_LINK) between the Active Data Guard database and the reporting application database:
    SQL> create database link DATADB_LINK connect to HR identified by HR using 'DRFUNPD';
    Database link created.

    But when I run a query over the db link from my reporting application database, I get the error below.

    ORA-01555: snapshot too old: rollback segment number 10 with name '_SYSSMU10_4261549777$' too small
    ORA-02063: preceding line from DATADB_LINK

    Then I checked the alert log of the Active Data Guard database and found the error below.

    ORA-01555 caused by SQL statement below (SQL ID: 11yj3pucjguc8, Query Duration=1 sec, SCN: 0x0000.07c708c3): SELECT "A2"."BUSINESS_TRANSACTION_REFERENCE", "A2"."BUSINESS_TRANSACTION_CODE", MAX(CASE "A1"."TRANS_DATA_KEY" WHEN 'feature' THEN "A1"."TRANS_DATA_VALUE" END), MAX(CASE "A1"."TRANS_DATA_KEY" WHEN 'otherFeature' THEN "A1"."TRANS_DATA_VALUE" END)

    The interesting point is that if I run the report query directly on the Active Data Guard database, I never get the error.

    So is this a problem with the DB link between Active Data Guard and other databases?

    Fazlul Kabir Mahfuz wrote:
    My database version is 11.2.0.2.0 and the OS is Oracle Solaris 10 9/10.
    I am facing a problem with my Active Data Guard database, which is used for reporting purposes. The Active Data Guard information is below.

    SQL> select name, database_role, open_mode from v$database;

    NAME      DATABASE_ROLE    OPEN_MODE
    --------- ---------------- --------------------
    ORCL      PHYSICAL STANDBY READ ONLY WITH APPLY

    The problem detail is below.
    ------------------------------
    I created a db link (name: DATADB_LINK) between the Active Data Guard database and the reporting application database:
    SQL> create database link DATADB_LINK connect to HR identified by HR using 'DRFUNPD';
    Database link created.

    But when I run a query over the db link from my reporting application database, I get the error below.

    ORA-01555: snapshot too old: rollback segment number 10 with name '_SYSSMU10_4261549777$' too small
    ORA-02063: preceding line from DATADB_LINK

    Then I checked the alert log of the Active Data Guard database and found the error below.

    ORA-01555 caused by SQL statement below (SQL ID: 11yj3pucjguc8, Query Duration=1 sec, SCN: 0x0000.07c708c3): SELECT "A2"."BUSINESS_TRANSACTION_REFERENCE", "A2"."BUSINESS_TRANSACTION_CODE", MAX(CASE "A1"."TRANS_DATA_KEY" WHEN 'feature' THEN "A1"."TRANS_DATA_VALUE" END), MAX(CASE "A1"."TRANS_DATA_KEY" WHEN 'otherFeature' THEN "A1"."TRANS_DATA_VALUE" END)

    The interesting point is that if I run the report query directly on the Active Data Guard database, I never get the error.

    So is this a problem with the DB link between Active Data Guard and other databases?

    Check this note, which applies to your environment:

    ORA-01555 on Active Data Guard Standby Database [ID 1273808.1]

    also

    http://asktom.Oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:8908307196113

  • RMAN, Data Guard, and archive log removal

    Our DG environment runs Oracle 11gR2.
    We have a 3-node DG environment with:
    A being the primary
    B and C being Active Data Guard standbys
    Backups are taken from B and go directly to tape.
    Standby redo logs and a fast recovery area are used.



    Following the recommendations of "Using Recovery Manager with Oracle Data Guard in Oracle Database 10g":
    RMAN on the primary ('A'):
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

    RMAN on the standby ('B') where the backup is performed:
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;

    RMAN on the other standby ('C'):
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

    How can we know which archive logs are eligible to be removed from 'A' and 'C'?
    When does the removal take place?
    How can we tell when archive logs are deleted from 'A' and 'C'?

    Dear user10260925,

    The documentation you have read is reliable, but it is not enough.

    Oracle can manage the archivelog directory and know which logs are eligible for deletion. The commands you posted come from the online documentation and apply when Oracle knows about and manages the archivelogs, that is, when a fast recovery area is used. Please read up on the FRA.

    Under normal circumstances, people in this industry use a few scripts to delete archivelogs after they have been backed up.

    Here is an example that may be useful to you:

    # Remove old archivelogs
    ######################################
    00,30 * * * * /home/oracle/scripts/delete_applied_redo_logs_OPTSTBY.sh
    ######################################
    
    vals3:/home/oracle#cat /home/oracle/scripts/delete_applied_redo_logs_OPTSTBY.sh
    export ORACLE_SID=optstby
    export ORACLE_HOME=/oracle/product/10.2.0/db_1
    cd /db/optima/archive/OPTPROD/archivelog
    
    /oracle/product/10.2.0/db_1/bin/sqlplus "/ as sysdba" @delete_applied_redo_logs.sql
    grep arc delete_applied_redo_logs.lst > delete_applied_redo_logs_1.sh
    chmod 755 delete_applied_redo_logs_1.sh
    sh delete_applied_redo_logs_1.sh
    rm delete_applied_redo_logs_1.sh
    rm delete_applied_redo_logs.lst
    
    vals3:/home/oracle#cd /db/optima/archive/OPTPROD/archivelog
    vals3:/db/optima/archive/OPTPROD/archivelog#cat delete_applied_redo_logs.sql
    set echo off
    set heading off
    spool /db/optima/archive/OPTPROD/archivelog/delete_applied_redo_logs.lst
    select 'rm -f ' || name from v$archived_log where applied = 'YES';
    spool off
    exit
    vals3:/db/optima/archive/OPTPROD/archivelog#
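
    If the archivelogs live in the FRA and an RMAN deletion policy is in place, a hedged RMAN-only alternative (the one-day window is just an example) would be:

    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-1';

    With that policy set, RMAN refuses to delete any log that has not yet been applied on all standby destinations, so the DELETE can be scheduled from cron much like the shell script above.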
    

    Hope that helps.

    Ogan

  • Active Data Guard and Undo and ora-01555

    Hello

    We have an 11.1.0.7 two-node RAC primary with a single-instance Active Data Guard physical standby. When running a Data Pump export from the standby, we get an ORA-01555 even though undo_retention is well beyond the run time of the export.

    I'm looking to better understand how undo works in the context of Active Data Guard. Is there any dependency for undo between the primary and the standby? Are the undo settings (undo_retention, undo tablespace size) on the primary and on the standby independent of each other?

    Thank you.

    The standby is a physical standby, an exact copy of the primary. The undo on the standby is controlled by the redo coming from the primary. When undo is overwritten on the primary, it gets overwritten on the standby regardless of what the standby is executing. You cannot keep more undo around on the standby; it is a replica of the primary.

    Increasing the undo tablespace and undo_retention on the primary allows more undo to be kept on both the primary and the standby, but the standby is always controlled by the primary's handling of undo. Of course, you should change the settings on the standby too, so that everything is the same if you ever switch the standby over to the primary role. A minimal sketch is shown below.
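
    A hedged sketch, run on the primary (the retention value and datafile path are placeholders, not from this thread):

    SQL> SHOW PARAMETER undo_retention
    SQL> ALTER SYSTEM SET undo_retention = 10800 SCOPE=BOTH;  -- e.g. 3 hours, longer than the export takes
    SQL> ALTER DATABASE DATAFILE '/u01/oradata/PROD/undotbs01.dbf' RESIZE 8G;

    Set the same undo_retention in the standby's spfile as well, so the values still match after a role change.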

    I hope this helps.

    Larry

  • Data Guard and EHCC on Exadata - does it work?

    Does Exadata Hybrid Columnar Compression (EHCC) work when a database is in force logging mode?

    I fear that it may not, because EHCC works only for direct-path inserts, which to me implies no logging.

    We are in the process of purchasing a second Exadata machine as a Data Guard target.

    The 11.2 Data Guard documentation specifically states that Data Guard doesn't support Hybrid Columnar Compression, but I don't know what the implications of this are.

    I expect I will have to set the primary Database Machine to force logging if Data Guard is to be used.

    Physical or logical Data Guard?

    Oracle fully supports a physical standby with Data Guard and Hybrid Columnar Compression on Exadata.

    Support issues exist with logical standby, and therefore, at least at the present time, logical standby is not supported.

    The above is to the best of my knowledge, as I don't have access to the Exadata documentation, but it was certainly true in my
    last experience with HCC. Your Oracle sales representative should be able to provide a definitive answer quickly, and I would be grateful
    if you posted it here.

  • 12c database Data Guard and pluggable database DR

    Hi all

    I have an upcoming database upgrade from 11.2.0.3 to 12.1.0.2. The current setup is that PROD and DR are an 11g Data Guard configuration and it works fine. Both databases are in ASM and the configuration is exactly the same on the PROD and DR sites. The database is currently 25 TB.

    I understand how to do the upgrade to 12c and then let the changes cascade through to the DR database, so non-CDB on both sites. Thinking aloud here, I would then plug the PROD database into a new CDB. I suppose I would create a DR CDB, convert DR to a pluggable database, modify the Data Guard config (as it is at the container level in 12c), and plug the DR database into the new DR CDB? Then copy the archivelogs of the PROD CDB to the DR site to fix any gaps. Would this approach work?

    Thoughts, anyone?

    Thank you

    Vic010

    OK, Oracle confirmed it: you must copy the data files to the DR site and then rebuild the DR database... Not ideal if your database is 25 TB and in full swing. The last time we had to rebuild the DR database it took a week...

    Vic010

  • CAN I RECOVER A DROPPED DATA FILE AND ITS TABLESPACE BY USING FLASHBACK DATABASE?

    Hello!

    I CREATED A TABLESPACE WITH ITS DATA FILE.

    SQL> create smallfile tablespace usmantbs datafile 'E:\oracle\product\10.2.0\oradata\orcl\usman.dbf' size 10M logging extent management local segment space management auto;

    THEN I CREATED A USER AND ASSIGNED THIS TABLESPACE AS THE DEFAULT.

    SQL> create user leal identified by leal default tablespace usmantbs account unlock profile default;
    SQL> grant connect, resource to leal;

    I CONNECTED AS THAT USER AND CREATED A TABLE.

    SQL> conn leal/leal
    SQL> create table baseball (id number(9));

    SQL> select current_scn from v$database;

    CURRENT_SCN
    ---------------------
    545863

    Then I dropped the tablespace, including contents and data files...

    SQL> drop tablespace usmantbs including contents and datafiles;

    I have no backup of this data file, but my database is in archivelog mode...

    So can I flashback the database to SCN 545863, as it was before the drop, to get my data file back along with its tablespace?
    Will I get my datafile back or not? Please help...

    You can easily test it yourself :) You will not be able to open your database.
    After getting the error, just rename that data file and flash back again. Then open your database:

    C:\Documents and Settings\Administrator>sqlplus "/as sysdba"
    
    SQL*Plus: Release 10.2.0.1.0 - Production on Sat Aug 1 14:20:34 2009
    
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount
    ORACLE instance started.
    
    Total System Global Area  293601280 bytes
    Fixed Size                  1248624 bytes
    Variable Size              96469648 bytes
    Database Buffers          192937984 bytes
    Redo Buffers                2945024 bytes
    Database mounted.
    
    SQL> alter database archivelog;
    
    Database altered.
    
    SQL> alter database flashback on;
    
    Database altered.
    
    SQL> alter database open;
    
    Database altered.
    
    SQL> create tablespace tb datafile 'c:\tb.df' size 1m;
    
    Tablespace created.
    
    SQL> create user tb identified by tb;
    
    User created.
    
    SQL> grant dba to tb;
    
    Grant succeeded.
    
    SQL> alter user tb default tablespace tb;
    
    User altered.
    
    SQL> create table tb (id number);
    
    Table created.
    
    SQL> select current_scn from v$database;
    
    CURRENT_SCN
    -----------
         547292
    
    SQL> drop tablespace tb including contents and datafiles;
    
    Tablespace dropped.
    
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount
    ORACLE instance started.
    
    Total System Global Area  293601280 bytes
    Fixed Size                  1248624 bytes
    Variable Size              96469648 bytes
    Database Buffers          192937984 bytes
    Redo Buffers                2945024 bytes
    Database mounted.
    
    SQL> flashback database to scn 547292;
    flashback database to scn 547292
    *
    ERROR at line 1:
    ORA-38795: warning: FLASHBACK succeeded but OPEN RESETLOGS would get error
    below
    ORA-01245: offline file 5 will be lost if RESETLOGS is done
    ORA-01111: name for data file 5 is unknown - rename to correct file
    ORA-01110: data file 5: 'C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\UNNAMED00005'
    
    SQL> alter database open resetlogs;
    alter database open resetlogs
    *
    ERROR at line 1:
    ORA-01245: offline file 5 will be lost if RESETLOGS is done
    ORA-01111: name for data file 5 is unknown - rename to correct file
    ORA-01110: data file 5: 'C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\UNNAMED00005'
    
    SQL> select name from v$datafile;
    
    NAME
    --------------------------------------------------------------------------------
    
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\SYSTEM01.DBF
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\UNDOTBS01.DBF
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\SYSAUX01.DBF
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\USERS01.DBF
    C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\UNNAMED00005
    
    SQL> alter database create datafile 'C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\UNNAMED00005' as 'c:\tb.dbf';
    
    Database altered.
    
    SQL> flashback database to scn 547292;
    
    Flashback complete.
    
    SQL> alter database open resetlogs;
    
    Database altered.
    
    SQL>
    
    SQL> select * from tb;
    
    no rows selected
    
    SQL> select name from v$datafile;
    
    NAME
    --------------------------------------------------------------------------------
    
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\SYSTEM01.DBF
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\UNDOTBS01.DBF
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\SYSAUX01.DBF
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST1\USERS01.DBF
    C:\TB.DBF
    
    SQL> select name from v$tablespace;
    
    NAME
    ------------------------------
    SYSTEM
    UNDOTBS1
    SYSAUX
    USERS
    TEMP
    TB
    
    6 rows selected.
    
    SQL>
    

    - - - - - - - - - - - - - - - - - - - - -
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.WordPress.com
    [Step by step installing Oracle Database 10g on Linux and automating the installation using a shell script | http://kamranagayev.wordpress.com/2009/05/01/step-by-step-installing-oracle-database-10g-release-2-on-linux-centos-and-automate-the-installation-using-linux-shell-script/]

    Published by: Kamran Agayev, a., July 27, 2009 14:38

  • Redo log size in Data Guard configurations

    DB version: 11.2
    Platform: Solaris 10

    We currently have a production DB which is not Data Guard. It has a mixed workload: some OLTP processing and some batch.
    Its redo log size is 100 MB.

    We will create a database with very similar requirements, but this DB will have a primary and a standby (Data Guard) with real-time apply.

    To suit the Data Guard requirements, should we reduce the size of the online redo logs? That is, is shipping smaller chunks of redo better than shipping larger ones?

    Hello;

    If you use real-time apply, the key is not the size but the standby redo logs.

    In most cases, 100 MB is fine. Standby redo logs must be the same size as the online redo logs.

    With real-time apply, the standby redo logs (SRLs) act as a buffer.

    Unless you have a real problem with the redo size, I would not change it. A quick check is sketched below.
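
    A minimal sketch for checking that standby redo logs match the online redo log size and adding one if needed (the 100M size simply mirrors this post):

    SQL> SELECT group#, bytes/1024/1024 AS mb FROM v$log;
    SQL> SELECT group#, bytes/1024/1024 AS mb FROM v$standby_log;
    SQL> ALTER DATABASE ADD STANDBY LOGFILE SIZE 100M;

    The usual rule of thumb is one more standby redo log group per thread than the number of online redo log groups.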

    An excellent source of information on this is 'Redo Transport Services' in 'Data Guard Concepts and Administration 11g Release 2 (11.2)', E10700-02.

    If you believe that your redo logs are sized wrong, a starting point is 'Troubleshooting performance problems with the database and MFG/MRP' [ID 100964.1].

    Best regards

    mseberg

    Published by: mseberg on May 31, 2012 11:33

  • Active Data Guard

    Hello
    Can we create an Active Data Guard setup in Oracle 10gR2?


    Vinita

    Hi Vinita;

    Can we create an Active Data Guard setup in Oracle 10gR2?

    You cannot; you need to be on 11g. Please see:
    http://www.databasejournal.com/features/Oracle/article.php/3834931/using-Oracle-11gs-Active-Data-Guard-and-snapshot-standby-features.htm
    http://en.Wikipedia.org/wiki/Oracle_Data_Guard

    Regards
    HELIOS

  • Oracle Data Guard licensing question

    Good day, all! I don't know if this has already been asked before, but I would like to ask for your help regarding my inquiry.

    We have an Oracle 10g Database EE here in our company which is set up as a RAC (2 nodes and 1 SAN). Of course, that is already licensed.

    We intend to implement Oracle Data Guard.

    Of course, Oracle Data Guard will be used for disaster recovery: in case our RAC (especially the SAN) crashes, we would like to have a standby server that is physically separate from our primary database server (the standby server is completely separate from our existing main servers).

    My questions are:

    1. Do we have to buy a new license for Data Guard?
    2. What other licenses should we acquire for the above setup?
    3. Do we need a separate installation of Oracle Database on the standby server?
    4. If we have a 150 named-user license on the primary database, do we have to purchase a new 150 named-user license for the standby server?
    5. Are there other things I have to consider with regard to licensing for our project?

    OK, here's how my licensing specialist put it. Data Guard as a product is free, but the database instance is not. A physical standby is a database, so it must be licensed; however, the licensing expert indicated that the licenses you buy for running the physical standby can be obtained at a substantially reduced rate, as so-called limited licenses. It is not Data Guard that is the cost; it is the cost of another database on another machine. Depending on your current licensing you may already be covered, then again you may not. All I am saying is: call a licensing specialist through Oracle Support, call the person who sold you your Oracle licenses, or contact an Oracle-certified partner/reseller who can connect you with a licensing specialist, and have a licensing discussion/review before implementation. Oracle Support are not the ones who understand licenses and how this all works. I went through this myself and, like many, had always thought Data Guard and physical standby databases were free and included, and was recently corrected, rather firmly I might add.

  • Purging archive logs on primary and standby with RMAN for Data Guard


    Hi, I have seen a couple of commands with regard to purging archive logs in a Data Guard configuration.

    CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY;

    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

    Q1. Does the above only remove archive logs on the primary, or on both the primary and the standby?

    Q2. If the above deletes archive logs on the primary, does it really remove them (immediately), or does the FRA delete them only when space is needed?

    I also saw

    CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP;

    Q3. What does the above do, and again, is it something you set on the primary side?

    I saw the following advice in the Data Guard Concepts & Administration manual:

    Configure the DB_UNIQUE_NAME in RMAN for each database (primary and standby) so RMAN can connect remotely to it.

    Q4. Why would I want my primary to connect to the RMAN repository of the standby (I use the local control file)? (Is this so that I can, say, define RMAN configuration settings on the standby?)

    Q5. Should I only work with the RMAN repository on the primary, or should I also be doing things in the RMAN repositories (i.e. control files) of the standbys?

    Q6. If I have a physical standby (usually mounted but not open), am I able to connect to its own local RMAN repository (i.e. control-file based) while the standby is only mounted?

    Q7. Similarly, if I have a logical standby (i.e. effectively read-only), can I still connect to the local RMAN repository of the standby?

    Q8. What is the most common way to schedule an RMAN backup in such an environment, e.g. cron, the OEM scheduler, DBMS_SCHEDULER? My instinct is a cron script, since the OEM scheduler requires the OEM service to be running and DBMS_SCHEDULER requires the database to be running?

    Any idea greatly appreciated,

    Jim

    Does the above only remove archive logs on the primary, or on both the primary and the standby?

    When you CONFIGURE a deletion policy, the configuration applies to all archive destinations, including the flash recovery area. BACKUP ... DELETE INPUT and DELETE ARCHIVELOG obey this configuration, as does the flash recovery area.

    You can also CONFIGURE an archived redo log deletion policy so that logs become eligible for deletion only after being applied to, or transferred to, standby database destinations.

    If the above deletes archive logs on the primary, does it really remove them (immediately), or does the FRA delete them only when space is needed?

    It is a configuration; it will not delete anything by itself.

    If you want to use the FRA for automatic removal of archivelogs on a physical standby database, do this (a sketch of both steps follows this list):

    1. Make sure DB_RECOVERY_FILE_DEST points to the FRA, and set DB_RECOVERY_FILE_DEST_SIZE as well.

    2. Set the RMAN policy on both the primary and the standby: CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
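
    A hedged sketch of those two steps (the path and size are placeholders, not values from this thread):

    SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH;
    SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fast_recovery_area' SCOPE=BOTH;

    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

    With this in place, archivelogs in the FRA that satisfy the deletion policy are removed automatically when the FRA needs space.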

    If you want to keep archives longer, you can control how long logs are kept by adjusting the size of the FRA.

    Great example:

    http://emrebaransel.blogspot.com/2009/03/delete-applied-archivelogs-on-standby.html

    The whole blog is worth a peek:

    http://emrebaransel.blogspot.com/

    What does the above do, and again, is it something you set on the primary side?

    I would never use it. I have always set it this way:

    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

    I would only use 'BACKED UP' on a system without Data Guard. Generally it tells Oracle how many times a log must be backed up before it can be removed.
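
    For reference, the 'BACKED UP' form of the policy looks like this (the count and device type are examples only):

    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE sbt;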

    Why would I want my primary to connect to the RMAN repository of the standby?

    Because if the primary fails, you want to be able to take backups there too.

    Also, if it stays in the standby role for a while, you will want to.

    Should I only work with the RMAN repository on the primary, or should I also be doing things in the RMAN repositories (i.e. control files) of the standbys?

    Always use an RMAN catalog database with Data Guard.

    If I have a physical standby (usually mounted but not open), am I able to connect to its own local RMAN repository (i.e. control-file based) while the standby is only mounted?

    Same answer as Q5: use a catalog database.

    Similarly, if I have a logical standby (i.e. effectively read-only), can I still connect to the local RMAN repository of the standby?

    Same answer as Q5: use a catalog database.

    What is the most common way to schedule an RMAN backup in such an environment, e.g. cron, the OEM scheduler, DBMS_SCHEDULER? My instinct is a cron script, since the OEM scheduler requires the OEM service to be running and DBMS_SCHEDULER requires the database to be running?

    I think cron is still the most common, but they all work. I like cron because the job still reports its results even when the database has a problem.

    Best regards

    mseberg

    Summary

    Always use an RMAN catalog database.

    Always use the FRA with RMAN.

    Always set the deletion policy to 'APPLIED ON ALL STANDBY'.

    DB_RECOVERY_FILE_DEST_SIZE determines how long archives are kept with this configuration.

    Post edited by: mseberg

  • Data Guard archivelog deletion policy

    Hi all

    I have configured Data Guard with 2 standby databases in Oracle 11g.

    I want to use RMAN to delete archives on all servers.

    I have configured no FRA and no recovery catalog.

    My question is:

    If I delete on the standby up to sequence# 1627 and on the primary up to sequence# 1620, or vice versa, would this cause problems for Data Guard?

    I mean, does deleting up to different sequence numbers on the primary and the standby cause problems for synchronization?

    Thank you

    So, if I understand correctly, 'delete archivelog until time sysdate - n' or 'delete archivelog all completed before sysdate - n' does not obey the archivelog deletion policy you set in RMAN?

    It does depend on it. That note was based on the MOS document 'How to ensure that RMAN does NOT delete archived logs that have not yet shipped to standby' (Doc ID 394261.1).

    'Delete archivelog until time sysdate - n' depends entirely on the deletion policy. If the deletion policy is set to 'APPLIED ON ALL STANDBY', then it does not delete archives that have not been applied on the standby. If the deletion policy is set to NONE, then it does not check whether an archive has been applied on the standby and deletes it directly.

    Last archive generated on the primary:

    SYS@oraprim > select max(sequence#) from v$archived_log;

    MAX(SEQUENCE#)
    --------------
    114

    Last archive applied on the standby:

    SYS@orastb > select max(sequence#) from v$archived_log where applied = 'YES';

    MAX(SEQUENCE#)
    --------------
    109

    Deletion policy set to NONE on the primary:

    RMAN> show archivelog deletion policy;

    using target database control file instead of recovery catalog

    RMAN configuration parameters for database with db_unique_name ORAPRIM are:

    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;

    RMAN> delete archivelog until time 'sysdate';

    allocated channel: ORA_DISK_1

    channel ORA_DISK_1: SID=60 device type=DISK

    List of Archived Log Copies for database with db_unique_name ORAPRIM

    =====================================================================

    Key     Thrd Seq     S Low Time
    ------- ---- ------- - ---------
    214     1    110     A 13-JAN-16
            Name: +FRA/oraprim/archivelog/2016_01_13/thread_1_seq_110.318.901050727
    216     1    111     A 13-JAN-16
            Name: +FRA/oraprim/archivelog/2016_01_13/thread_1_seq_111.317.901050731
    217     1    112     A 13-JAN-16
            Name: +FRA/oraprim/archivelog/2016_01_13/thread_1_seq_112.316.901050731
    219     1    113     A 13-JAN-16
            Name: +FRA/oraprim/archivelog/2016_01_13/thread_1_seq_113.315.901050733
    222     1    114     A 13-JAN-16
            Name: +FRA/oraprim/archivelog/2016_01_13/thread_1_seq_114.314.901050737

    Do you really want to delete the above objects (enter YES or NO)? NO

    So with the policy set to NONE, RMAN is prepared to delete archives on the primary even though they have not yet been applied on the standby.

    Deletion policy set to 'APPLIED ON ALL STANDBY' on the primary:

    RMAN> show archivelog deletion policy;

    RMAN configuration parameters for database with db_unique_name ORAPRIM are:

    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

    RMAN> delete archivelog until time 'sysdate';

    released channel: ORA_DISK_1

    allocated channel: ORA_DISK_1

    channel ORA_DISK_1: SID=60 device type=DISK

    RMAN-08120: WARNING: archived log not deleted, not yet applied by standby

    archived log file name=+FRA/oraprim/archivelog/2016_01_13/thread_1_seq_110.318.901050727 thread=1 sequence=110

    RMAN-08120: WARNING: archived log not deleted, not yet applied by standby

    archived log file name=+FRA/oraprim/archivelog/2016_01_13/thread_1_seq_111.317.901050731 thread=1 sequence=111

    RMAN-08120: WARNING: archived log not deleted, not yet applied by standby

    archived log file name=+FRA/oraprim/archivelog/2016_01_13/thread_1_seq_112.316.901050731 thread=1 sequence=112

    RMAN-08120: WARNING: archived log not deleted, not yet applied by standby

    archived log file name=+FRA/oraprim/archivelog/2016_01_13/thread_1_seq_113.315.901050733 thread=1 sequence=113

    RMAN-08120: WARNING: archived log not deleted, not yet applied by standby

    archived log file name=+FRA/oraprim/archivelog/2016_01_13/thread_1_seq_114.314.901050737 thread=1 sequence=114

    It checks whether the archive has been applied on the standby. Since sequences 110 to 114 have not been applied on the standby, those archives are not deleted on the primary.

    Hope that gives you a clear picture of how it works.

    -Jonathan Rolland
