Why are redo logs created in the flash recovery area?

According to the documentation on Oracle Managed Files in the Oracle Database Administrator's Guide,

http://download.Oracle.com/docs/CD/B19306_01/server.102/b14231/OMF.htm#sthref1534

it says very clearly:

"For the creation of redo log files and control files only, this parameter overrides any default location specified in the DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST initialization parameters."

I checked this as follows:

SQL > select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/TOMKYTE/onlinelog/o1_mf_3_4kx4wdld_.log
/u01/app/oracle/flash_recovery_area/TOMKYTE/onlinelog/o1_mf_3_4kx4wto6_.log
/u01/app/oracle/oradata/TOMKYTE/onlinelog/o1_mf_2_4kx4vjcs_.log
/u01/app/oracle/flash_recovery_area/TOMKYTE/onlinelog/o1_mf_2_4kx4vyjt_.log
/u01/app/oracle/oradata/TOMKYTE/onlinelog/o1_mf_1_4kx4tn4n_.log
/u01/app/oracle/flash_recovery_area/TOMKYTE/onlinelog/o1_mf_1_4kx4v26y_.log

-- to verify the location of the redo logs.



SQL > show parameter db_create_online

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_1          string      /u01/app/oracle/oradata/TOMKYT
                                                 E/onlinelog
db_create_online_log_dest_2          string      /u01/app/oracle/oradata/TOMKYT
                                                 E/onlinelog

-- to check where they are supposed to be created.

SQL > show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /u01/app/oracle/flash_recovery
                                                 _area
db_recovery_file_dest_size           big integer 2G

-- to check where the flash recovery area is.

To check whether these parameters are dynamic (IMMEDIATE) or not, I ran:

SELECT name, value, issys_modifiable FROM v$parameter WHERE name = :v_param_name;

All of the above parameters are IMMEDIATE, meaning they change right away without having to restart the server.
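For example, here is that check written out for the parameters in question (a minimal sketch; dynamic parameters report ISSYS_MODIFIABLE = 'IMMEDIATE'):

```sql
-- Dynamic (immediately modifiable) parameters show IMMEDIATE here.
SELECT name, issys_modifiable
  FROM v$parameter
 WHERE name IN ('db_create_online_log_dest_1',
                'db_create_online_log_dest_2',
                'db_recovery_file_dest');
```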

Why are my redo logs in two different places when only one location should apply?

I am currently practicing the concepts from the Oracle workshop course and hope to learn them well.

Thank you very much!

OK, when you set the db_create_online_log_dest_n parameters you do not need to bounce the database; they change immediately. But that does not mean your existing online redo logs will change location: it means that when you create new online redo logs, they will be created in the new destination. When you created the database with DBCA, the db_create_file_dest and db_recovery_file_dest parameters were set, so the online redo logs were created in both locations; this is the documented behavior (http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/omf.htm#i1206224). If you set the db_create_online_log_dest_n parameters, do not include "/onlinelog" in the value; Oracle will create those directories itself according to the OFA standard.

HTH

Enrique

Tags: Database

Similar Questions

  • I recently used a DVD+RW disc to back up my photos. Now when I put a DVD in my PC, why will it not display everything that is on it, and how can I get access to it?

    It depends on what you used for the backup.

    If you used Windows Backup or other backup software, the disc will contain a backup file, and to view/use this file you must use the same software you used to make the backup.

  • Redo logs and a crash of the disk where they are stored

    What happens when the disk where the redo logs are written fails (e.g., a disk crash)? I read somewhere that the database shuts down. Is this right? Can you ask Oracle to write the redo logs to 2 locations in order to improve the safety of the system?

    Thanks in advance,
    Alexandre Bailly

    user8716187 wrote:
    What happens when the disk where the redo logs are written fails (e.g., a disk crash)? I read somewhere that the database shuts down. Is this right? Can you ask Oracle to write the redo logs to 2 locations in order to improve the safety of the system?

    Thanks in advance,
    Alexandre Bailly

    Online redo logs can and should be multiplexed, with copies on separate physical devices. The details are in the ALTER DATABASE command, found in the SQL Reference Guide. Additional information can be found by going to tahiti.oracle.com, drilling down to your product and version, then using the "search" feature to find material on "redo". There is a lot of information in the Administrator's Guide.

    In addition, by default Oracle will name the redo logs "redo_*.log". ".log" is an open invitation to SAs to open them with a text editor or delete them; after all, "it's just a log file". I name mine by the older convention of "redo_*.rdo" just to reduce this kind of human error.
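    To make the multiplexing advice concrete, here is a minimal sketch; the group numbers and file paths are hypothetical and must be adapted to your own layout:

    ```sql
    -- Add a second member, on a separate physical device, to each existing group
    -- (paths and group numbers below are illustrative only).
    ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/ORCL/redo_g1_m2.rdo' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/ORCL/redo_g2_m2.rdo' TO GROUP 2;

    -- Verify that every group now lists two members on different devices.
    SELECT group#, member FROM v$logfile ORDER BY group#;
    ```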

  • Data Modeler v4.1.1.887 / 888 opens and displays only the domains of my design file

    Yesterday I was working on my database design with Data Modeler v4.1.1.887, then saved my work to file and closed the application.

    And today something bad happened: the application opens my design but only displays a list of the domains I created for my design.

    In the browser window I cannot display my logical model, the structured data types I created, the process model, or the business information diagrams.

    I migrated to Data Modeler v4.1.1.888 and the problem persists. I then looked at previous issues resolved in the forum and found an answer to a similar problem:

    4.0EA2 (v4.0.0.820) Data Modeler after accepting migration from previous installation of v3.3.0.747 DM preferences settings.

    So I deleted the preferences files (I use Windows Vista) under C:\Users\username\AppData\Roaming\Oracle SQL Developer data Modeler\system4.0.0.887 and 888, but the problem still persists.

    I also looked at the directory of files that make up my design, and their size and number seem to be without significant change since I looked a few days ago.

    Can anyone give me a solution to this problem?

    Hi David,

    After some research and testing, I finally solved the problem. I did the following:

    I deleted the Java memory value I had set in the Windows Java Control Panel,

    I deleted the 4.1.1 folder C:\Users\username\AppData\Roaming\datamodeler\4.1.1,

    I changed the memory setting to -Xmx2048m in datamodeler.conf in the datamodeler\datamodeler\bin folder,

    then tried to start Data Modeler, but it never started, so I reduced it to -Xmx1024m and it started,

    and I did not migrate preferences the next time I opened Data Modeler, and set the encoding preference to UTF-8 (I noticed it defaults to Cyrillic encoding).

    I closed and restarted Data Modeler; the external log and the settings I made worked very well.

    Then I started testing with 3 versions of my design from this month; their sizes are 5 MB, 10 MB, and 10.1 MB for the most recent.

    I opened and closed each one, from oldest to newest, and the two oldest worked fine without errors in the external log.

    However, the most recent one opened all the diagrams, but I found an error on opening in the external log:

    2015-07-24 19:46:31, 685 [Thread-25] ERROR XMLTransformationManager - cannot parse list of objects: C:\Systems\Software\Event-Cloud\Workspaces\Des-Database\v1.0.0.1-whole\dbd-150721-1\v100-dev\businessinfo/Objects.local

    oracle.xml.parser.v2.XMLParseException; lineNumber: 1; columnNumber: 1; Start of root element expected.

    at oracle.xml.parser.v2.XMLError.flushErrors1(XMLError.java:326)

    at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:463)

    at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:404)

    at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:245)

    at oracle.xml.jaxp.JXDocumentBuilder.parse(JXDocumentBuilder.java:175)

    at oracle.dbtools.crest.model.metadata.XMLTransformationManager.getObjects(XMLTransformationManager.java:3236)

    at oracle.dbtools.crest.model.metadata.XMLTransformationManager.openDesignPartManyFiles(XMLTransformationManager.java:3692)

    at oracle.dbtools.crest.model.metadata.XMLTransformationManager.openDesignPart(XMLTransformationManager.java:3545)

    at oracle.dbtools.crest.model.design.Design.openDesign(Design.java:1438)

    I checked the Objects.local file and it was there. Then I closed the design without saving changes and exited the Data Modeler application.

    Then I opened Objects.local with Notepad++; the XML structure was there, but the data I had written in my business information contact section had disappeared from the design.

    I restarted Data Modeler, opened and closed my recent design with that section left blank, then closed Data Modeler.

    I searched for the file and noticed its size had grown to 153 MB, so I tried to open it with Notepad++ but it crashed.

    So I compared it with the previous Objects.local files: their content is equal in the part I did not modify, and the IDs are the same.

    I then removed the big corrupted Objects.local and replaced it with the previous version of the file, and then started Data Modeler,

    and my recent design now opens and closes fine without errors in the external log.

    I think that when your application opens a file and finds a critical error, it stops loading the file and reports it to the external log; but if there is a non-critical error,

    it displays the design and the log window does not show that something went wrong. In my case, I was not aware of this.

    Kind regards

    Julio.

  • Does a redo log switch always invoke a checkpoint?

    Hello

    Does a redo log switch always invoke the checkpoint process?

    I have a test database on which nothing is running, but when I switch the redo log file, it takes time for the redo log state to go from ACTIVE to INACTIVE.

    But when I issue ALTER SYSTEM CHECKPOINT, the state immediately changes from ACTIVE to INACTIVE.

    This gives me the feeling that at a redo log switch, an immediate checkpoint is not mandatory.

    DB version is 11.2.0.1

    Comments are welcome.

    An 11.2.0.4 alert log after 3 'alter system switch logfile' commands in a row:

    ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE = BOTH;

    Wed Apr 16 22:17:09 2014

    Beginning log switch checkpoint up to RBA [0xd8.2.10], SCN: 12670277595657 *

    Thread 1 advanced to log sequence 216 (LGWR switch)

      Current log# 3 seq# 216 mem# 0: /u01/app/oracle/oradata/TEST/onlinelog/o1_mf_3_938s43lb_.log

      Current log# 3 seq# 216 mem# 1: /u01/app/oracle/fast_recovery_area/TEST/onlinelog/o1_mf_3_938s49xz_.log

    Wed Apr 16 22:17:25 2014

    Beginning log switch checkpoint up to RBA [0xd9.2.10], SCN: 12670277595726 *

    Thread 1 advanced to log sequence 217 (LGWR switch)

      Current log# 1 seq# 217 mem# 0: /u01/app/oracle/oradata/TEST/onlinelog/o1_mf_1_938s3lbv_.log

      Current log# 1 seq# 217 mem# 1: /u01/app/oracle/fast_recovery_area/TEST/onlinelog/o1_mf_1_938s3nmc_.log

    Wed Apr 16 22:17:36 2014

    Thread 1 cannot allocate new log, sequence 218

    Checkpoint not complete

      Current log# 1 seq# 217 mem# 0: /u01/app/oracle/oradata/TEST/onlinelog/o1_mf_1_938s3lbv_.log

      Current log# 1 seq# 217 mem# 1: /u01/app/oracle/fast_recovery_area/TEST/onlinelog/o1_mf_1_938s3nmc_.log

    Wed Apr 16 22:17:40 2014

    Checkpoint completed up to RBA [0xd8.2.10], SCN: 12670277595657 +++

    Beginning log switch checkpoint up to RBA [0xda.2.10], SCN: 12670277596242 *

    Thread 1 advanced to log sequence 218 (LGWR switch)

    Notice how we have lines (marked *) that say "Beginning log switch checkpoint".

    However, note that the checkpoints are not "urgent" until we hit a problem with "Checkpoint not complete", and the first log switch checkpoint in the batch completes (marked +++) only some time after the second one began. The log switch checkpoint still takes place (or, at least, its necessity is noted), but it is not the urgent event it used to be in early versions of Oracle.
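    To watch this behaviour yourself, a minimal sketch using standard views (the timings you observe will depend on your system):

    ```sql
    -- Record checkpoint messages in the alert log, as in the excerpt above.
    ALTER SYSTEM SET log_checkpoints_to_alert = TRUE;

    -- Force a switch: the previous group typically stays ACTIVE for a while.
    ALTER SYSTEM SWITCH LOGFILE;
    SELECT group#, sequence#, status FROM v$log;

    -- A manual checkpoint makes the old group INACTIVE immediately.
    ALTER SYSTEM CHECKPOINT;
    SELECT group#, sequence#, status FROM v$log;
    ```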

    Regards

    Jonathan Lewis

  • Why does Windows Media Player display only the original file names?

    Windows Media Player only shows the original file names, which can be confusing if I have renamed a file. The file name (usually of audio files) is correct in Windows Explorer, but in WMP's content list it is often the old file name. Why is this and what can I do about it? Advice would be appreciated. TA. Kevin.

    What is the file type of these music files (MP3, WMA, WAV...)?

    WMP usually displays the title tag of the music file, which is independent of the file name. However, WMP always displays the current file name in the File Path column, which can be enabled by right-clicking a column header and selecting Choose Columns.

  • Multiplexed redo logs and slow I/O

    I sometimes use a SAN that gets saturated by another client that I have no control over. My redo logs are on the SAN, and when that happens it causes all my clients to wait on 'log file sync' at commit. My overall I/O is low, but clients commit frequently. In this scenario I have no control over the SAN or the other client's behaviour.

    What I'm considering is multiplexing the redo logs onto faster local disk, while leaving one log member on the SAN because those disks are less likely to fail outright.

    My question is how Oracle will behave when the SAN slows down: will it treat the commit as 'done' when the first log member finishes writing, or the last? I know that if a write errors out, Oracle will just mark that member as stale; however, in this case I get no errors, just delays.

    Edit: I have no control over the SAN or its configuration either; I just have to work around the problem while the SAN team works on it.

    Published by: jhmartin1 on April 7, 2009 15:55

    Write completion will be when the slower of the two members signals that the write is complete.

    Other thoughts.

    For a SAN, I would expect that to be when the write reaches a battery-backed cache, so looking at the write cache configuration can be useful.
    Can you increase the write cache on your SAN?

    Obviously, writing half as much to the SAN may help.

    Make sure you follow the SAME guidelines [http://www.oracle.com/technology/deploy/availability/pdf/OOW2000_same_ppt.pdf]

    Check whether the problem occurs when backups are running.

    Double-check that SAN and Sun firmware is up to date, and ensure that SAN best practices are respected, etc.

    - I think there is a Sun Solaris issue, which I am trying to dredge up from the back of my mind, whereby a series of small writes does not perform as well as larger ones. But I think that applies to direct-attached arrays, and there is a workaround.

    Hope this helps, bigdelboy.
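    As a rough way to quantify the commit-time impact while testing such a change, a sketch against the standard wait-interface views (instance-wide numbers, so interpret with care):

    ```sql
    -- 'log file sync' is what sessions wait on at COMMIT;
    -- 'log file parallel write' is LGWR's own write time.
    SELECT event, total_waits, time_waited_micro
      FROM v$system_event
     WHERE event IN ('log file sync', 'log file parallel write');
    ```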

  • While renaming a redo log file, I got the following error.

    SQL > host move D:\oracle\product\10.2.0\oradata\test\redo02.log D:\oracle\produc
    t\10.2.0\oradata\test\re.log
    The process cannot access the file because it is being used by another process.

    This isn't a surprise if you did not also rename the file with SQL statements before moving it at the OS layer. Did you?

    To add to the above: if you just want to get a log group moved to another directory, the right way to do that would be
    (1) create the new group at the destination of your choice
    (2) drop the old group
    (3) delete the old group's files at the OS layer

    This can be done online, while the instance continues to run in the OPEN state.
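    As a sketch of the three steps above (the group number, path, and size are illustrative only):

    ```sql
    -- (1) Create the new group at the destination of your choice.
    ALTER DATABASE ADD LOGFILE GROUP 4
      ('D:\oracle\product\10.2.0\oradata\test\redo04.log') SIZE 50M;

    -- If the old group is CURRENT or ACTIVE, switch and checkpoint first.
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;

    -- (2) Drop the old group once it is INACTIVE.
    ALTER DATABASE DROP LOGFILE GROUP 2;

    -- (3) Finally, delete the old group's file at the OS layer.
    ```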

    Kind regards
    Uwe Hesse

    http://uhesse.WordPress.com

    Published by: Uwe Hesse on 29.07.2010 12:08

  • Questions about database parameters when using a fast recovery area and writing two copies of archived redo logs

    My databases are 11.2.0.3.7 Enterprise Edition. My OS is AIX 7.1.

    I am converting databases to use individual fast recovery areas, and I have two questions about what values to assign to the database parameters related to archived redo logs. This example refers to one database.

    I read that if I specify

    log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'

    the archived redo logs written to the default fast recovery area are named '%t_%S_%r.dbf'.

    In the past my archived redo logs have been named based on the parameter

    log_archive_format='GPAIDT_archive_log_%t_%s_%r.arc'

    I believe log_archive_format will be ignored for archived redo logs written to the fast recovery area.

    I am planning to write a second copy of the archived redo logs based on the parameter

    alter system set log_archive_dest_2='LOCATION=/t07/admin/GPAIDT/arch';

    If I do this, will the copy of the logs placed in /t07 be named '%t_%S_%r.dbf' or 'GPAIDT_archive_log_%t_%s_%r.arc'?

    Before using a fast recovery area, I used the OEM 12c console to specify database backup settings that deleted archived redo logs after 1 backup. The Oracle manuals say to specify a deletion policy of "none" instead and allow Oracle to delete logs in the fast recovery area as necessary. Since I have to keep a second copy of these log files in /t07, should I keep the policy that deletes logs after 1 backup? If I don't do that, how will they be removed from /t07?

    Thank you

    Bill

    If I do this, will the copy of the logs placed in /t07 be named '%t_%S_%r.dbf' or 'GPAIDT_archive_log_%t_%s_%r.arc'?

    They will be 'GPAIDT_archive_log_%t_%s_%r.arc'. LOG_ARCHIVE_FORMAT is only ignored for OMF directories.

    Since I have to keep a second copy of these log files in /t07, should I keep the policy that deletes logs after 1 backup? If I don't do that, how will they be removed from /t07?

    You can keep the deletion policy as it is. From the Oracle documentation on defining the ARCHIVELOG DELETION POLICY: "The archived redo log deletion policy applies to all archiving destinations, including the fast recovery area."
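    For reference, a minimal RMAN sketch of such a policy (the backup count and device type are examples to adapt, not a recommendation):

    ```sql
    -- Run in RMAN while connected to the target database.
    CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE DISK;

    -- Subsequent deletions honor the policy in all destinations:
    DELETE ARCHIVELOG ALL;
    ```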

  • REDO LOG SHIPPING TO THE STANDBY SITE HAS STOPPED

    Hi gurus,

    After creating the physical standby, redo logs were being shipped and applied to the standby. But suddenly redo log shipping stopped. I checked on the primary site that archive logs are being generated there, but they are not delivered to the standby site. I checked that the password file is the same on both sites, so I am able to connect to the primary database from the standby site.

    Can any of you clarify why this happens?

    Thanks in advance

    For points 1 to 3, I checked that all are defined.

    But for point 4, when I run the above command the output is as below:

    STATUS              ERROR
    ------------------- ---------------------------------------------------------------------

    VALID

    DISABLED            ORA-16057: DGID from server not in Data Guard configuration

    OK, I found the error: the LOG_ARCHIVE_CONFIG parameter was not enabled correctly on the primary DB.

    I corrected it and now delivery is correct. But one thing still confuses me: when I run the command to check the applied archives,

    SEQUENCE# FIRST_TIME NEXT_TIME APP

    the APP column always shows NO on the standby as well as on the primary, but before it showed YES. Why?

  • SIZE OF THE REDO LOG FILE


    Hello

    I got an error message when I added a new log file group. I searched and found the answer on the forum: in 11gR2 there is a 4M minimum log file size.

    My question is, why does the log file size depend on DB_BLOCK_SIZE? That parameter relates to the memory structures created with an instance, while a log file is an operating system file that should depend on the OS version, not on DB_BLOCK_SIZE.

    Thank you.


    SQL > alter database add logfile group 4 'c:\app\asif\oradata\employee\redo04.log' size 1 m;
    alter database add logfile group 4 'c:\app\asif\oradata\employee\redo04.log' size 1 m
    *
    ERROR on line 1:
    ORA-00336: log file size 2048 blocks is less than minimum 8192 blocks


    SQL > show parameter db_block_size

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_block_size                        integer     8192
    SQL >

    You are assuming that the redo log block size is the same as the database block size. That is not correct.

    The error indicates that 8192 is the minimum number of blocks for a redo log file. The documentation states that the minimum size is 4M. From this you can deduce that your redo log block size is 512 bytes (8192 blocks x 512 bytes = 4 MB).

    Here is some more information from the documentation about the redo log block size.

    Unlike the database block size, which can be between 2K and 32K, redo log files default to a block size that is equal to the physical sector size of the disk. Historically, this is usually 512 bytes (512B).

    Some newer large disks offer 4K byte sector sizes to gain efficiency through improved format and ECC capabilities. Most Oracle Database platforms are able to detect this larger sector size. The database then automatically creates redo log files with a 4K block size on those disks.
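    As a sketch of how to confirm this on your own system (this assumes 11.2+, where V$LOG exposes a BLOCKSIZE column; older releases do not have it):

    ```sql
    -- Redo log block size per group, plus the size expressed in redo blocks.
    SELECT group#, blocksize, bytes, bytes / blocksize AS blocks
      FROM v$log;

    -- With a 512-byte redo block, the 8192-block minimum is
    -- 8192 * 512 = 4194304 bytes = 4 MB, matching ORA-00336.
    ```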

  • Checking the redo log archiving configuration

    DB versions: 10gR2, 11gR2
    OS versions: Solaris 5.10, AIX 6.1

    While enabling archiving for a 2-node RAC production DB, I made a mistake specifying LOG_ARCHIVE_DEST (a typing error).

    After I brought the DB up (open) with archiving enabled, I tested log switching using
    SQL> alter system switch logfile;
    
    System altered
    and got no error.

    Later, I realized archiving was not working. But I wonder why I got no error from
    SQL> alter system switch logfile;
    Errors may have been recorded in the alert log. Since I got no error from sqlplus for the above command, I thought everything was fine.

    But the following command triggered an error.
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    ALTER SYSTEM ARCHIVE LOG CURRENT
    *
    ERROR at line 1:
    ORA-16038: log 4 sequence# 127 cannot be archived
    ORA-00254: error in archive control string ''
    ORA-00312: online log 4 thread 2: '+LTMPROD_DATA/LTMPROD/redo04.log'
    ORA-15173: entry 'ltmprod_arch' does not exist in directory '/'
    What command should I use to ensure that enabling archiving was successful?

    Hello

    Are there more professional ways to check whether enabling archiving succeeded?

    ALWAYS: after putting the database in ARCHIVELOG mode, you must force the database to archive a log.

    SQL > ARCHIVE LOG LIST; # To check status
    
    SQL > ALTER SYSTEM ARCHIVE LOG CURRENT; # This command must execute successfully.
    
    Query V$ARCHIVED_LOG or use RMAN to check the path/location of the archived logs. 
    
    $ rman target /
    RMAN > LIST ARCHIVELOG ALL; 
    

    Both ARCHIVE LOG CURRENT and SWITCH LOGFILE force the current redo log to be archived to disk, but they have a slight difference.

    ARCHIVE LOG CURRENT is synchronous: the command waits until the online redo log has been completely written to the archive log file on the file system; if the redo cannot be archived, an error is thrown at the command prompt.

    SWITCH LOGFILE is asynchronous: the command only starts the switch, and archiving runs in the background; if the ARCH process cannot finish writing the archive log to the directory, the error is not returned to the prompt from which the command was issued.

    You can use this technical note:
    * How to enable archiving ON and OFF [ID 69739.1] *.

    Hope this helps,
    Levi Pereira

    Published by: Levi Pereira on September 23, 2011 01:28

  • Question about how Oracle manages the Redo logs

    Hello

    Assume a configuration which consists of 2 redo log groups (groups A and B), each group with 2 disks (disks A1 and A2 for group A, and disks B1 and B2 for group B). Additionally, assume that each redo log file resides by itself on a storage disk that is dedicated to it. So in the situation described above there are 4 disks, one for each redo log file, and each disk contains nothing other than a redo log file. Also, assume that the database is in ARCHIVELOG mode and the archive files are stored on another, different set of devices.

    kind of graphically:
        GROUP A             GROUP B
    
          A1                  B1
          A2                  B2
    The question is: when the disks that make up group A are filled and Oracle switches to the disks in group B, can the group A drives be taken offline, perhaps even physically removed from the system if necessary, without affecting the functioning of the database? Can the archiver process be temporarily delayed until the removed disks are brought back online, or does the DBA have to wait until the archiver process finishes creating the archive copy of the redo log file?

    Thank you for your help,

    John.

    Hello
    Dropping a redo log group

    To drop an online redo log group, you must have the ALTER DATABASE system privilege. Before you drop an online redo log group, consider the following precautions and restrictions:

    * An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)
    * You can drop an online redo log group only if it is INACTIVE. If you need to drop the current group, first force a log switch to occur.
    * Make sure an online redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.

    SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

    GROUP# ARC STATUS
    ------ --- ----------------
         1 YES ACTIVE
         2 NO  CURRENT
         3 YES INACTIVE
         4 YES INACTIVE

    Drop an online redo log group with the SQL ALTER DATABASE statement and the DROP LOGFILE clause.

    The following statement drops redo log group number 3:

    ALTER DATABASE DROP LOGFILE GROUP 3;

    When an online redo log group is dropped from the database and you are not using Oracle Managed Files, the operating system files are not removed from disk. Instead, the control files of the associated database are updated to remove the members of the group from the database structure. After dropping an online redo log group, make sure the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.

    When you work with Oracle Managed Files, the cleanup of the operating system files is done automatically for you.
    Your database will not be affected as long as you keep 2 redo log files in each group; the minimum number of redo log groups in a database is two, because the LGWR (log writer) process writes to the redo logs in a circular fashion. Since you have only 2 groups, if you want to remove one, add a third group, make it the current group, and then remove the one you want to take offline.

    Please refer to:
    http://download.Oracle.com/docs/CD/B10500_01/server.920/a96521/onlineredo.htm#7438
    Kind regards
    Mohamed
    Oracle DBA

  • Why does the archive log file size constantly change?

    I have a question related to the size of the archive log files:
    I constantly get archive log files of different sizes; of course, they are smaller than the redo log file size.
    Can a guru explain to me what the possible reasons could be?
    Thanks in advance!

    It can happen for several reasons:

    When you perform a manual archive (via ALTER SYSTEM ARCHIVE LOG CURRENT/ALL)
    When Oracle must respect your ARCHIVE_LAG_TARGET (defined in seconds)

    Unless you do manual archiving, your ARCHIVE_LAG_TARGET parameter is the reason you see archived logs with different sizes: Oracle archives the redo log as soon as the ARCHIVE_LAG_TARGET time is reached. Even if the redo log is not full, Oracle will still archive it.
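    A minimal sketch of this mechanism (the 900-second lag target is just an example value):

    ```sql
    -- Force a log switch/archive at least every 15 minutes,
    -- even when the online redo log is not yet full.
    ALTER SYSTEM SET archive_lag_target = 900 SCOPE = BOTH;

    -- Compare the resulting archived log sizes (blocks * block_size = bytes).
    SELECT sequence#, blocks * block_size AS bytes, completion_time
      FROM v$archived_log
     ORDER BY sequence#;
    ```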

  • Only one standby redo log group is ever ACTIVE

    Hi guys,

    I have successfully configured Data Guard between a primary database (oradb) and a physical standby database (oradb_s8).

    However, I have noticed that in V$STANDBY_LOG only one standby redo log group is ever ACTIVE, regardless of how many times I switch logs on the primary database.

    The following is stated in the documentation:

    "When a log switch occurs on the redo source database, incoming redo is then written to the next standby redo log group, and the previously used standby redo log group is archived by an ARCn foreground process."

    Source.

    So, I would guess that a standby redo log group becomes ACTIVE when the corresponding redo log becomes active on the primary database.

    Could you please clarify this for me?

    It's Oracle 11gR2 (11.2.0.1) on Red Hat Server 5.2.

    On the standby:

    SQL > SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;

    GROUP# THREAD# SEQUENCE# ARC STATUS
    ------ ------- --------- --- ----------
         4       1       248 YES ACTIVE     <- this is the only group that is ever ACTIVE
         5       1         0 NO  UNASSIGNED
         6       0         0 YES UNASSIGNED
         7       0         0 YES UNASSIGNED

    SQL > SELECT SEQUENCE#, APPLIED, FIRST_TIME, NEXT_TIME
          FROM V$ARCHIVED_LOG
          ORDER BY SEQUENCE#;

    SEQUENCE# APPLIED   FIRST_TIM NEXT_TIME
    --------- --------- --------- ---------
          232 YES       06-SEP-15 06-SEP-15
          233 YES       06-SEP-15 06-SEP-15
          234 YES       06-SEP-15 06-SEP-15
          235 YES       06-SEP-15 06-SEP-15
          236 YES       06-SEP-15 06-SEP-15
          237 YES       06-SEP-15 06-SEP-15
          238 YES       06-SEP-15 06-SEP-15
          239 YES       06-SEP-15 07-SEP-15
          240 YES       07-SEP-15 07-SEP-15
          241 YES       07-SEP-15 07-SEP-15
          242 YES       07-SEP-15 07-SEP-15
          243 YES       07-SEP-15 07-SEP-15
          244 YES       07-SEP-15 07-SEP-15
          245 YES       07-SEP-15 07-SEP-15
          246 YES       07-SEP-15 08-SEP-15
          247 IN-MEMORY 08-SEP-15 08-SEP-15

    16 rows selected.

    On the primary:

    SQL > SELECT SEQUENCE#, APPLIED, FIRST_TIME, NEXT_TIME
          FROM V$ARCHIVED_LOG
          WHERE NAME = 'ORADB_S8'
          ORDER BY SEQUENCE#;

    SEQUENCE# APPLIED   FIRST_TIM NEXT_TIME
    --------- --------- --------- ---------
          232 YES       06-SEP-15 06-SEP-15
          233 YES       06-SEP-15 06-SEP-15
          234 YES       06-SEP-15 06-SEP-15
          235 YES       06-SEP-15 06-SEP-15
          236 YES       06-SEP-15 06-SEP-15
          237 YES       06-SEP-15 06-SEP-15
          238 YES       06-SEP-15 06-SEP-15
          239 YES       06-SEP-15 07-SEP-15
          240 YES       07-SEP-15 07-SEP-15
          241 YES       07-SEP-15 07-SEP-15
          242 YES       07-SEP-15 07-SEP-15
          243 YES       07-SEP-15 07-SEP-15
          244 YES       07-SEP-15 07-SEP-15
          245 YES       07-SEP-15 07-SEP-15
          246 YES       07-SEP-15 08-SEP-15
          247 NO        08-SEP-15 08-SEP-15

    If you have heavy DML activity on the primary, you will see more than one ACTIVE group# in v$standby_log.

    RFS will always try to allocate the next available standby log; because the changes for your group #4 are already applied, it allocates that group again after the switch.

    Check the MOS documents: bug 2722195 and note 219344.1.
