redo log switch

11gR2
Discovered log switches happening every couple of minutes (2 to 3 minutes) at peak periods. I am going to recommend larger redo logs so that switches happen roughly every 20 to 30 minutes.
Wanted to know: what would be the downside of large redo logs?

Do you have FAST_START_MTTR_TARGET set? If so, you can use the Redo Logfile Size Advisor, which can tell you the right size for your redo log files. In addition, you can set the ARCHIVE_LAG_TARGET parameter while choosing a large size for your redo log files.
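
For illustration, a minimal sketch of both suggestions (the MTTR and lag values are only examples; OPTIMAL_LOGFILE_SIZE is reported in MB and is only populated once FAST_START_MTTR_TARGET is set):

        -- enable MTTR-based checkpointing (target in seconds; value is an example)
        ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE = BOTH;
        -- Redo Logfile Size Advisor: suggested size (MB) for the online redo logs
        SELECT optimal_logfile_size FROM v$instance_recovery;
        -- with large redo logs, cap the time between switches (seconds)
        ALTER SYSTEM SET archive_lag_target = 1800 SCOPE = BOTH;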

Aman...

Tags: Database

Similar Questions

  • Does a redo log switch always trigger a checkpoint?

    Hello

    Does a redo log switch always trigger a checkpoint process?

    I have a test database on which nothing is running, but when I switch the redo log file, it takes time for the redo log state to change from ACTIVE to INACTIVE.

    But when I issue ALTER SYSTEM CHECKPOINT, the state immediately changes from ACTIVE to INACTIVE.

    This gives me the feeling that, on a redo log switch, it is not mandatory for a checkpoint to occur.

    DB version is 11.2.0.1
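
    For reference, a minimal sketch of the kind of test described above (run on a test database; no specific group numbers are assumed):

        -- force a log switch, then watch how long the old group stays ACTIVE
        ALTER SYSTEM SWITCH LOGFILE;
        SELECT group#, sequence#, status FROM v$log;
        -- force a full checkpoint: the ACTIVE group should go INACTIVE almost immediately
        ALTER SYSTEM CHECKPOINT;
        SELECT group#, sequence#, status FROM v$log;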

    Kindly comment.

    An 11.2.0.4 alert log after 3 'alter system switch logfile' commands in a row:

    ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE = BOTH;

    Wed Apr 16 22:17:09 2014

    Beginning log switch checkpoint up to RBA [0xd8.2.10], SCN: 12670277595657   *

    Thread 1 advanced to log sequence 216 (LGWR switch)

    Current log# 3 seq# 216 mem# 0: /u01/app/oracle/oradata/TEST/onlinelog/o1_mf_3_938s43lb_.log

    Current log# 3 seq# 216 mem# 1: /u01/app/oracle/fast_recovery_area/TEST/onlinelog/o1_mf_3_938s49xz_.log

    Wed Apr 16 22:17:25 2014

    Beginning log switch checkpoint up to RBA [0xd9.2.10], SCN: 12670277595726   *

    Thread 1 advanced to log sequence 217 (LGWR switch)

    Current log# 1 seq# 217 mem# 0: /u01/app/oracle/oradata/TEST/onlinelog/o1_mf_1_938s3lbv_.log

    Current log# 1 seq# 217 mem# 1: /u01/app/oracle/fast_recovery_area/TEST/onlinelog/o1_mf_1_938s3nmc_.log

    Wed Apr 16 22:17:36 2014

    Thread 1 cannot allocate new log, sequence 218

    Checkpoint not complete

    Current log# 1 seq# 217 mem# 0: /u01/app/oracle/oradata/TEST/onlinelog/o1_mf_1_938s3lbv_.log

    Current log# 1 seq# 217 mem# 1: /u01/app/oracle/fast_recovery_area/TEST/onlinelog/o1_mf_1_938s3nmc_.log

    Wed Apr 16 22:17:40 2014

    Completed checkpoint up to RBA [0xd8.2.10], SCN: 12670277595657   +++

    Beginning log switch checkpoint up to RBA [0xda.2.10], SCN: 12670277596242   *

    Thread 1 advanced to log sequence 218 (LGWR switch)

    Notice how we have the lines (marked *) that say "Beginning log switch checkpoint".

    However, note that the checkpoints are not treated as urgent until we hit the "Checkpoint not complete" problem, and the first checkpoint in the queue completes only some time after the second checkpoint has been queued.  The log switch checkpoint still 'takes place' (or, at least, its necessity is noted) - but it is not the urgent event it used to be in earlier versions of Oracle.

    Regards

    Jonathan Lewis

  • Many redo log switches within seconds on an irregular basis

    Hello

    We are on 11gR2 RAC. We migrated a 10g DB that used to switch logs every 20-25 minutes

    while generating about 160 to 200 GB of redo per day; I noticed this newly migrated DB now shows

    spikes of rapid log switches from time to time, for example yesterday (excerpt):

    SQL> select thread#, first_time from v$log_history order by 2;

    ...

    1 20140524 10:19.36
    1 20140524 11:24.34
    2 20140524 11:24.35
    2 20140524 12:00.02
    1 20140524 12:00.03
    1 20140524 12:00.05
    2 20140524 12:00.08
    1 20140524 12:00.14
    1 20140524 13:06.19
    2 20140524 13:09.44
    2 20140524 14:15.46
    1 20140524 15:00.08
    2 20140524 15:00.50
    2 20140524 15:53.56
    2 20140524 16:45.37
    1 20140524 16:45.39
    2 20140524 17:43.26

    Ignore the 12:00 lines, because at that time I start a backup of the archive logs, but how do I explain

    the switches at 11:24.34 and .35 for example, or at 15:00.08 and .50, and then at 16:45 - where

    do you think I should look? I see no reason why at these moments my DB would suddenly

    generate 3 GB of redo and then wait another hour before doing it again (e.g. between 16:45 and 17:43)...

    Is it all related to the fact that we are on RAC now?

    Thank you very much.

    Kind regards

    SEB

    Nothing to worry about.

    Each node in a RAC has its own redo logs and performs its own log switches.

    If you look at each thread (1 and 2, corresponding to the nodes) separately, everything is OK.
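
    As a quick way of looking at the switch rate per thread, something like the following could be used (only a sketch; adjust the time bucket to taste):

        -- log switches per thread and per hour, most recent first
        SELECT thread#, TRUNC(first_time, 'HH24') AS hr, COUNT(*) AS switches
        FROM v$log_history
        GROUP BY thread#, TRUNC(first_time, 'HH24')
        ORDER BY hr DESC, thread#;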

  • Network bandwidth for redo log switches

    Hi all

    I'm using Oracle 11gR2 and I want to work out the network bandwidth required from my primary site to the secondary site (using Oracle Data Guard).

    My database is a 2-node Oracle RAC; each node has redo logs of 2 GB, and each node switches its log roughly every 5 minutes in operation. That means I'm shipping a 2 GB redo log file every 5 minutes from each node, i.e. 4 GB in total every 5 minutes.

    I'm planning to set up Oracle Data Guard in maximum performance mode. Does that mean I need a link between the primary and secondary site that is able to transfer 2 GB + 2 GB every 5 minutes?

    I did the calculation below and need your advice:

    4 GB / (5 x 60 s) = 13,981 KByte/s

    This means that I need a link of 13,981 x 8 bit/s = 111,848 Kbit/s?

    Is the above correct? Your advice please.

    Kind regards

    Please check the two references below:

    http://www.Oracle.com/au/products/database/MAA-WP-10gR2-dataguardnetworkbestpr-134557.PDF

    How to calculate the network bandwidth required to transfer redo in Data Guard (Doc ID 736755.1)
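
    As a cross-check of the estimate, the actual redo rate can also be measured from the archived log history; a minimal sketch (the per-hour bucket is just an example):

        -- approximate redo shipped per hour per thread, in MB
        SELECT thread#, TRUNC(completion_time, 'HH24') AS hr,
               ROUND(SUM(blocks * block_size) / 1024 / 1024) AS mb_per_hour
        FROM v$archived_log
        GROUP BY thread#, TRUNC(completion_time, 'HH24')
        ORDER BY hr DESC, thread#;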

  • Switch redo logs every hour to provide point-in-time recovery?

    Hi all

    Oracle 11G on Windows 2008 R2

    I am learning the details of backup/restore management and want to be able to provide a good point-in-time recovery capability in the event of a disaster. Is it advisable to schedule an hourly job that runs ALTER SYSTEM SWITCH LOGFILE;? This forces the redo log to be archived every hour, so if I need to restore I know that I can restore at least up to the previous hour (using the archived logs) and earlier?

    Thanks in advance

    From time to time this may happen as a result of an excessive workload. Are you on Enterprise Edition, and have you set FAST_START_MTTR_TARGET? If yes, you can use the Redo Logfile Size Advisor to suggest an adequate redo log size for you. To be on the safer side, you can also increase the redo log size so that they are larger than 1 GB, and control the switch rate by using the ARCHIVE_LAG_TARGET parameter.
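
    Rather than a scheduled switch job, the same effect can be obtained declaratively; a minimal sketch (the value is in seconds and is only an example):

        -- force a log switch (and hence an archive) at least once per hour
        ALTER SYSTEM SET archive_lag_target = 3600 SCOPE = BOTH;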

    HTH

    Aman...

  • How do I know if there's a bottleneck in the redo logs?

    Hi all

    EBS R12.2

    11gR2

    OL 6.5

    We have two log groups, 1 GB each,

    and our archiving generates 13 archived logs every hour:

    -rw - r-. 1 oraprod s/n 956896768 Jan 25 14:00 1_3372_898613761.dbf

    -rw - r-. 1 oraprod s/n 1004083200 Jan 25 14:03 1_3373_898613761.dbf

    -rw - r-. 1 oraprod s/n 928530432 Jan 25 14:10 1_3374_898613761.dbf

    -rw - r-. 1 oraprod s/n 928728576 Jan 25 14:12 1_3375_898613761.dbf

    -rw - r-. 1 oraprod s/n 967805952 Jan 25 14:20 1_3376_898613761.dbf

    -rw - r-. 1 oraprod s/n 916065792 Jan 25 14:22 1_3377_898613761.dbf

    -rw - r-. 1 oraprod s/n 951790592 Jan 25 14:30 1_3378_898613761.dbf

    -rw - r-. 1 oraprod s/n 978358272 Jan 25 14:32 1_3379_898613761.dbf

    -rw - r-. 1 oraprod s/n 974519808 Jan 25 14:40 1_3380_898613761.dbf

    -rw - r-. 1 oraprod s/n 960421376 Jan 25 14:42 1_3381_898613761.dbf

    -rw - r-. 1 oraprod s/n 917438976 Jan 25 14:49 1_3382_898613761.dbf

    -rw - r-. 1 oraprod s/n 920794624 Jan 25 14:51 1_3383_898613761.dbf

    -rw - r-. 1 oraprod s/n 920704000 Jan 25 14:59 1_3384_898613761.dbf

    I got these alert log messages:

    Mon Jan 25 15:08:37 2016

    Completed checkpoint up to RBA [0xd3b.2.10], SCN: 5978324588151

    Mon Jan 25 15:10:57 2016

    Thread 1 cannot allocate new log, sequence 3388

    Private strand flush not complete

    Current log# 1 seq# 3387 mem# 0: /home/oraprod/PROD/data/log01a.dbf

    Current log# 1 seq# 3387 mem# 1: /home/oraprod/PROD/data/log01b.dbf

    Beginning log switch checkpoint up to RBA [0xd3c.2.10], SCN: 5978324634623

    Thread 1 advanced to log sequence 3388 (LGWR switch)

    Current log# 2 seq# 3388 mem# 0: /home/oraprod/PROD/data/log02a.dbf

    Current log# 2 seq# 3388 mem# 1: /home/oraprod/PROD/data/log02b.dbf

    Mon Jan 25 15:11:01 2016

    LNS: Standby redo log file selected for thread 1 sequence 3388 for destination LOG_ARCHIVE_DEST_2

    Mon Jan 25 15:11:04 2016

    Archived Log entry 6791 added for thread 1 sequence 3387 ID 0x12809081 dest 1:

    Mon Jan 25 15:11:17 2016

    Completed checkpoint up to RBA [0xd3c.2.10], SCN: 5978324634623

    Mon Jan 25 15:13 2016

    Incremental checkpoint up to RBA [0xd3c.18d210.0], current log tail at RBA [0xd3c.1a8e82.0]

    Mon Jan 25 15:13:04 2016

    Thread 1 cannot allocate new log, sequence 3389

    Private strand flush not complete

    Current log# 2 seq# 3388 mem# 0: /home/oraprod/PROD/data/log02a.dbf

    Current log# 2 seq# 3388 mem# 1: /home/oraprod/PROD/data/log02b.dbf

    Beginning log switch checkpoint up to RBA [0xd3d.2.10], SCN: 5978324673444

    Thread 1 advanced to log sequence 3389 (LGWR switch)

    Current log# 1 seq# 3389 mem# 0: /home/oraprod/PROD/data/log01a.dbf

    Current log# 1 seq# 3389 mem# 1: /home/oraprod/PROD/data/log01b.dbf

    Mon Jan 25 15:13:07 2016

    LNS: Standby redo log file selected for thread 1 sequence 3389 for destination LOG_ARCHIVE_DEST_2

    Mon Jan 25 15:13:09 2016

    Archived Log entry 6793 added for thread 1 sequence 3388 ID 0x12809081 dest 1:

    Mon Jan 25 15:13:11 2016

    Completed checkpoint up to RBA [0xd3d.2.10], SCN: 5978324673444

    Is this a sign of a bottleneck? Users complain that around 15:00 each day they encounter performance degradation.

    Kind regards

    JC

    Jenna_C wrote:

    We have two log groups of 1 GB each, and our archiving generates 13 archived logs every hour.

    Is this a sign of a bottleneck? Users complain that around 15:00 each day they encounter performance degradation.

    If your users are complaining about slowness at 15:00 then look at what is happening on the system at that point in time.  Use the Active Session History, provided that you have the license, at 3 pm to see if there is a problem.  Or take AWR snapshots manually around 15:00 in an attempt to capture any anomaly.  Or capture snapshots of V$SESSION yourself manually around 15:00 and see how many sessions are waiting and on what.
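
    For instance, a minimal sketch of that kind of ad-hoc check (the ASH query assumes the Diagnostics Pack license; the time window is only an example):

        -- top wait events recorded by ASH in the problem window
        SELECT event, COUNT(*) AS samples
        FROM v$active_session_history
        WHERE sample_time BETWEEN TIMESTAMP '2016-01-25 14:55:00'
                              AND TIMESTAMP '2016-01-25 15:15:00'
        GROUP BY event
        ORDER BY samples DESC;
        -- or, without any extra license: what sessions are waiting on right now
        SELECT event, COUNT(*) FROM v$session WHERE wait_class <> 'Idle' GROUP BY event;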

    I have seen problems like this on shared physical infrastructure, where the activity of one system can interfere with another.  Watch out for this kind of thing, because if it is caused by another system then you will never find the cause of the problem inside Oracle.  Examples include backups being run at a specific time, causing large amounts of sequential disk I/O that slow down disk access for everything else.  Such a scenario could cause redo log write times to increase significantly.  Or the backup could be on another system entirely that shares the same disks in the same drive array or SAN, and the drive array or SAN gets flooded by the backup I/O.  But because it is another system, you will not see it happening from your Oracle database system.

    Or a virtualized environment sharing CPU and memory, where another virtual machine runs a CPU-intensive job and steals CPU capacity from the Oracle system.  I have seen the same thing with memory too - VMware has a strange way of virtualizing memory, which means it can use all of the memory on the physical hardware and run out, and then it has to intervene to somehow free up a little memory, which ends up slowing things down.

    It could be a network problem - is something flooding the network at 15:00?  Again, backups can be a common cause of this, transferring data over the network to another system.  Is there a regular data extract job that runs at 15:00, for example extracting OLTP data to feed into a data warehouse, which gets copied over the network?

    It could be many different things causing the slowdown.  I would definitely recommend watching what your sessions are waiting on and seeing whether this gets worse around 15:00, or whether it stays the same, in which case the problem is elsewhere.  You also need to eliminate everything else - the disks, network, etc.

    Good luck

    John Brady

  • Loss/corruption of a member of a multiplexed redo log group

    Grid infrastructure: 11.2.0.4

    DB version: 11.2.0.4

    OS: RHEL 6.5

    All redo logs are multiplexed once, i.e. each group has one mirror copy.

    I have created a document on how to cope with the loss of a member of a multiplexed redo log group. It applies to DBs on ASM or a Linux filesystem.

    If a member of a multiplexed redo log group is lost/corrupted while it is in ACTIVE or CURRENT state (v$log.status), will the following work?

    Assuming that the DB would not crash (well that's the whole point of multiplexing)

    Step 1. Switch the redo log group and bring it to the INACTIVE state, i.e. LGWR is no longer writing to it.

    Step 2. Drop the lost/corrupted member.

    ALTER DATABASE DROP LOGFILE MEMBER '+DATA_DG1/mbhsprd/onlinelog/group_1.256.834497203';

    Apparently this command does not actually remove the log file at the ASM/OS level; it only updates the control file.

    Step 3. If it is a corrupted log file, then physically remove the damaged file at the ASM/OS level.

    Step 4. I hope the command below will create a mirror copy of the surviving member.

    ALTER DATABASE ADD LOGFILE MEMBER '+DATA_DG1/mbhsprd/onlinelog/group_1.256.834497203' TO GROUP 3;

    Will the above steps work?
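
    For what it's worth, a minimal sketch of the same sequence with explicit (non-OMF) file names - note that the reply below points out that Oracle-managed file names behave differently:

        -- step 1: switch/checkpoint until the affected group shows INACTIVE
        ALTER SYSTEM SWITCH LOGFILE;
        ALTER SYSTEM CHECKPOINT;
        SELECT group#, status FROM v$log;
        -- step 2: drop the lost/corrupted member (path is a placeholder)
        ALTER DATABASE DROP LOGFILE MEMBER '/u01/oradata/PROD/redo03b.log';
        -- step 4: add a fresh member back to the same group
        ALTER DATABASE ADD LOGFILE MEMBER '/u01/oradata/PROD/redo03b.log' TO GROUP 3;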

    There is no point in writing the document if you do not actually test what you write.

    It seems that you are working with Oracle Managed Files. In that case:

    You will find that your "apparently" in step 2 is not correct, and that your step 4 will throw an ORA-01276.

  • Redo log shipping after a physical standby shutdown

    Hello,

    I just have a question about redo log shipping

    in a simple 11g primary/standby configuration.

    If the physical standby instance is shut down, is redo shipping still active? Does the primary still transfer redo?

    Thanks in advance

    Hello;

    I would say no. I always defer the destination on the primary side before I shut the standby down; it keeps false alarms from happening.

    Test

    Using this query

    http://www.mseberg.NET/data_guard/monitor_data_guard_transport.html

    DB_NAME HOSTNAME LOG_ARCHIVED LOG_APPLIED APPLIED_TIME LOG_GAP

    ---------- -------------- ------------ ----------- -------------- -------

    TDBGIS01B PRIMARY 5558 5557 21-OCT/12:51 1

    Shut down the standby:

    SQL > alter database recover managed standby database cancel;

    Database altered.

    SQL > shutdown

    ORA-01109: database is not open

    The database is dismounted.

    Force a log switch on the primary side:

    SQL> alter system switch logfile;

    System altered.

    (3)

    The last log before was 5558, so check for 5559 or higher.

    Last log is

    o1_mf_1_5558_c2hn5vdf_.arc

    Wait a few minutes, no change

    Best regards

    mseberg

  • Will the DB go down if an INACTIVE redo log group is deleted?

    Platform: Oracle Linux 6.5

    DB version: 11.2

    If I manually delete (using the rm command) one or all members of an online redo log group that is INACTIVE, will the DB crash?

    Instead of everybody speculating, why not just run a test?

    Oracle 11.2.0.1, Enterprise Edition, 64-bit on OL 5, running in archivelog mode

    SQL> select group#,
      2 sequence#,
      3 archived,
      4 status
      5  from v$log
      6 order by group#
      7  ;
        GROUP# SEQUENCE# ARC STATUS
    ---------- ---------- --- ----------------
     1    1 YES INACTIVE
     2    2 YES INACTIVE
     3    3 NO  CURRENT
    3 rows selected.
    SQL> --
    SQL> insert into scott.mytest values (sysdate);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> alter system switch logfile;
    System altered.
    SQL> select * from v$logfile;
        GROUP# STATUS  TYPE MEMBER  IS_
    ---------- ------- ------- ------------------------------ ---
     3 ONLINE /oradata/tulsa/redo03.rdo  NO
     2 ONLINE /oradata/tulsa/redo02.rdo  NO
     1 ONLINE /oradata/tulsa/redo01.rdo  NO
    3 rows selected.
    SQL>
    SQL> select group#,
      2 sequence#,
      3 archived,
      4 status
      5  from v$log
      6 order by group#
      7  ;
        GROUP# SEQUENCE# ARC STATUS
    ---------- ---------- --- ----------------
     1    4 NO  CURRENT
     2    2 YES INACTIVE
     3    3 NO  ACTIVE
    3 rows selected.
    

    Group 2 is inactive, so delete it:

    SQL> !rm /oradata/tulsa/redo02.rdo
    SQL> !ls -l /oradata/tulsa/redo02.rdo
    ls: /oradata/tulsa/redo02.rdo: No such file or directory
    

    And continue the db activity.  Keep an eye on group #2:

    SQL> select group#,
      2 sequence#,
      3 archived,
      4 status
      5  from v$log
      6 order by group#
      7  ;
        GROUP# SEQUENCE# ARC STATUS
    ---------- ---------- --- ----------------
     1    4 NO  CURRENT
     2    2 YES INACTIVE
     3    3 YES ACTIVE
    3 rows selected.
    SQL> insert into scott.mytest values (sysdate);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> alter system switch logfile;
    System altered.
    SQL> --
    SQL> select group#,
      2 sequence#,
      3 archived,
      4 status
      5  from v$log
      6 order by group#
      7  ;
        GROUP# SEQUENCE# ARC STATUS
    ---------- ---------- --- ----------------
     1    4 NO  ACTIVE
     2    5 NO  CURRENT
     3    3 YES ACTIVE
    3 rows selected.
    SQL> insert into scott.mytest values (sysdate);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> alter system switch logfile;
    System altered.
    SQL> --
    SQL> select group#,
      2 sequence#,
      3 archived,
      4 status
      5  from v$log
      6 order by group#
      7  ;
        GROUP# SEQUENCE# ARC STATUS
    ---------- ---------- --- ----------------
     1    4 YES INACTIVE
     2    5 NO  ACTIVE
     3    6 NO  CURRENT
    3 rows selected.
    SQL> insert into scott.mytest values (sysdate);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> alter system switch logfile;
    System altered.
    SQL> --
    SQL> select group#,
      2 sequence#,
      3 archived,
      4 status
      5  from v$log
      6 order by group#
      7  ;
        GROUP# SEQUENCE# ARC STATUS
    ---------- ---------- --- ----------------
     1    7 NO  CURRENT
     2    5 NO  ACTIVE
     3    6 NO  ACTIVE
    3 rows selected.
    SQL> insert into scott.mytest values (sysdate);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> alter system switch logfile;
    

    At this point, the SWITCH LOGFILE command hung, and in the alert log we see this:

    CJQ0 started with pid=26, OS id=4817
    Sat Sep 19 10:48:11 2015
    Thread 1 advanced to log sequence 4 (LGWR switch)
      Current log# 1 seq# 4 mem# 0: /oradata/tulsa/redo01.rdo
    Sat Sep 19 10:48:11 2015
    Archived Log entry 1 added for thread 1 sequence 3 ID 0xdaf1e381 dest 1:
    Sat Sep 19 10:48:44 2015
    Thread 1 advanced to log sequence 5 (LGWR switch)
      Current log# 2 seq# 5 mem# 0: /oradata/tulsa/redo02.rdo
    Sat Sep 19 10:48:44 2015
    Archived Log entry 2 added for thread 1 sequence 4 ID 0xdaf1e381 dest 1:
    Thread 1 cannot allocate new log, sequence 6
    Checkpoint not complete
      Current log# 2 seq# 5 mem# 0: /oradata/tulsa/redo02.rdo
    Thread 1 advanced to log sequence 6 (LGWR switch)
      Current log# 3 seq# 6 mem# 0: /oradata/tulsa/redo03.rdo
    Sat Sep 19 10:48:48 2015
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc3_4789.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc3_4789.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Thread 1 advanced to log sequence 7 (LGWR switch)
      Current log# 1 seq# 7 mem# 0: /oradata/tulsa/redo01.rdo
    ARCH: Archival stopped, error occurred. Will continue retrying
    ORACLE Instance tulsa - Archival Error
    Thread 1 cannot allocate new log, sequence 8
    Checkpoint not complete
      Current log# 1 seq# 7 mem# 0: /oradata/tulsa/redo01.rdo
    ORA-16038: log 2 sequence# 5 cannot be archived
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc3_4789.trc:
    ORA-16038: log 2 sequence# 5 cannot be archived
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    Sat Sep 19 10:48:48 2015
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc0_4777.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc0_4777.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Sat Sep 19 10:48:48 2015
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_m000_4864.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Checker run found 2 new persistent data failures
    ORACLE Instance tulsa - Can not allocate log, archival required
    Thread 1 cannot allocate new log, sequence 8
    All online logs needed archiving
      Current log# 1 seq# 7 mem# 0: /oradata/tulsa/redo01.rdo
    Sat Sep 19 10:48:59 2015
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc2_4785.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc2_4785.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Sat Sep 19 10:48:59 2015
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc1_4781.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc1_4781.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Sat Sep 19 10:49:59 2015
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc2_4785.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc2_4785.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Sat Sep 19 10:49:59 2015
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc3_4789.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_arc3_4789.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Sat Sep 19 10:49:59 2015
    Errors in file /u01/app/oracle/diag/rdbms/tulsa/tulsa/trace/tulsa_m000_5163.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/oradata/tulsa/redo02.rdo'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    
  • How to create redo logs the right way to avoid performance problems

    Hi Experts,

    I work in the following environment.

    OS - Windows server 2012

    version - 11.2.0.1.0

    Server: production server

    From the 1st to the 10th of each month we have a huge process, with a lot of DML and DDL involved.

    I have implemented RMAN as well.

    My alert log entries are as below:

    Tue Sep 08 17:05:34 2015

    Thread 1 cannot allocate new log, sequence 88278

    Private strand flush not complete

    Current log# 1 seq# 88277 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO01.LOG

    Thread 1 advanced to log sequence 88278 (LGWR switch)

    Current log# 2 seq# 88278 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO02.LOG

    Tue Sep 08 17:05:38 2015

    Archived Log entry 81073 added for thread 1 sequence 88277 ID 0x38b3fdf6 dest 1:

    Tue Sep 08 17:05:52 2015

    Thread 1 cannot allocate new log, sequence 88279

    Checkpoint not complete

    Current log# 2 seq# 88278 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO02.LOG

    Thread 1 advanced to log sequence 88279 (LGWR switch)

    Current log# 3 seq# 88279 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO03.LOG

    Tue Sep 08 17:05:58 2015

    Archived Log entry 81074 added for thread 1 sequence 88278 ID 0x38b3fdf6 dest 1:

    Tue Sep 08 17:06:16 2015

    Thread 1 cannot allocate new log, sequence 88280

    Checkpoint not complete

    When I checked on the internet I found a few points and need clarity on the following:

    - It is recommended that the redo log switch at most 5 times per hour, but at peak times I see about 100 redo log switches.

    - It is recommended to have large redo logs, but I have 3 redo log groups and each is 50 MB, the default size.

    - My redo log groups are not multiplexed; by default each group has only a single member:

    Group 1 - D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO01.LOG

    Group 2 - D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO02.LOG

    Group 3 - D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO03.LOG

    Experts also advise that adding redo log groups in another location will give more performance. Does that mean I need to add the redo logs as new groups on a different disk, or do I need to move the existing redo logs themselves to another location?

    In the end, my live server is running with the default redo log location and size. I'm trying to set it up in a better way and am getting confused about

    how to do it properly.

    Experts, please share your comments on how to set up redo logging better for my environment, or point me to a recommended weblink or document based on my needs above.

    Thanks in advance. If you need more information to help me, just ask and I will share it.

    Hello

    You can't resize an existing redo log group.

    You will need to create 3 new redo log groups (with a larger size, e.g. 200 MB) and drop the old redo log groups.
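
    A minimal sketch of that approach (sizes and paths are placeholders; each old group must show INACTIVE in v$log before it is dropped):

        -- add three new, larger groups
        ALTER DATABASE ADD LOGFILE GROUP 4 ('D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO04.LOG') SIZE 200M;
        ALTER DATABASE ADD LOGFILE GROUP 5 ('D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO05.LOG') SIZE 200M;
        ALTER DATABASE ADD LOGFILE GROUP 6 ('D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO06.LOG') SIZE 200M;
        -- switch and checkpoint until groups 1-3 are INACTIVE, then drop them
        ALTER SYSTEM SWITCH LOGFILE;
        ALTER SYSTEM CHECKPOINT;
        SELECT group#, status FROM v$log;
        ALTER DATABASE DROP LOGFILE GROUP 1;
        ALTER DATABASE DROP LOGFILE GROUP 2;
        ALTER DATABASE DROP LOGFILE GROUP 3;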

    Here you can find a good explanation: Oracle DBA tips archives

  • Only one standby redo log group is ever ACTIVE

    Hi guys,

    I have successfully configured Data Guard between a primary database (oradb) and a physical standby database (oradb_s8).

    However, I have noticed that in V$STANDBY_LOG only one standby redo log group is ever ACTIVE, regardless of how many times I switch logs in the primary database.

    The following is stated in the documentation:

    When a log switch occurs on the redo source database, incoming redo is then written to the next standby redo log group, and the previously used standby redo log group is archived by an ARCn foreground process.

    Source.

    So I would expect a standby redo log group to become ACTIVE whenever the corresponding redo log becomes active in the primary database.

    Could you please clarify this for me?

    This is Oracle 11gR2 (11.2.0.1) on Red Hat Server 5.2.

    On the standby:

    SQL> SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;

    GROUP# THREAD# SEQUENCE# ARC STATUS

    ---------- ---------- ---------- --- ----------

    4 1 248 YES ACTIVE   <- this is the only group that is ever ACTIVE

    5 1 0 NO UNASSIGNED

    6 0 0 YES UNASSIGNED

    7 0 0 YES UNASSIGNED

    SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED

    FROM V$ARCHIVED_LOG

    ORDER BY SEQUENCE#;

    SEQUENCE# FIRST_TIM NEXT_TIME APPLIED

    ---------- --------- --------- ---------

    232 06-SEP-15 06-SEP-15 YES

    233 06-SEP-15 06-SEP-15 YES

    234 06-SEP-15 06-SEP-15 YES

    235 06-SEP-15 06-SEP-15 YES

    236 06-SEP-15 06-SEP-15 YES

    237 06-SEP-15 06-SEP-15 YES

    238 06-SEP-15 06-SEP-15 YES

    239 06-SEP-15 07-SEP-15 YES

    240 07-SEP-15 07-SEP-15 YES

    241 07-SEP-15 07-SEP-15 YES

    242 07-SEP-15 07-SEP-15 YES

    243 07-SEP-15 07-SEP-15 YES

    244 07-SEP-15 07-SEP-15 YES

    245 07-SEP-15 07-SEP-15 YES

    246 07-SEP-15 08-SEP-15 YES

    247 08-SEP-15 08-SEP-15 IN-MEMORY

    16 rows selected.

    On the primary:

    SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED

    FROM V$ARCHIVED_LOG

    WHERE NAME = 'ORADB_S8'

    ORDER BY SEQUENCE#;

    SEQUENCE# FIRST_TIM NEXT_TIME APPLIED

    ---------- --------- --------- ---------

    232 06-SEP-15 06-SEP-15 YES

    233 06-SEP-15 06-SEP-15 YES

    234 06-SEP-15 06-SEP-15 YES

    235 06-SEP-15 06-SEP-15 YES

    236 06-SEP-15 06-SEP-15 YES

    237 06-SEP-15 06-SEP-15 YES

    238 06-SEP-15 06-SEP-15 YES

    239 06-SEP-15 07-SEP-15 YES

    240 07-SEP-15 07-SEP-15 YES

    241 07-SEP-15 07-SEP-15 YES

    242 07-SEP-15 07-SEP-15 YES

    243 07-SEP-15 07-SEP-15 YES

    244 07-SEP-15 07-SEP-15 YES

    245 07-SEP-15 07-SEP-15 YES

    246 07-SEP-15 08-SEP-15 YES

    247 08-SEP-15 08-SEP-15 NO

    If you have heavy DML activity on the primary, you will see more than one group# ACTIVE in v$standby_log.

    RFS will always try to allocate the next available standby redo log; because the changes in your group #4 have already been applied, it allocates that group again after the switch.

    Check metalink doc: bug 2722195 and 219344.1
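
    To see which standby redo log RFS is currently writing to, a quick check on the standby could look like this (just a sketch):

        -- RFS processes and the thread/sequence they are receiving
        SELECT process, status, thread#, sequence#, block#
        FROM v$managed_standby
        WHERE process LIKE 'RFS%';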

  • Which user / SQL caused the database log switches?

    Hello

    Is it possible to see which user / SQL caused a log switch (archived log)?

    We have an Oracle 11.2 database with multiple small schemas for many small applications.

    Some SQL unusually created many log switches last night; we need to know which user / SQL

    created the log switches.

    Thank you * T

    Redo logs do not belong to a specific user, so no single user causes a log switch. You can view, for a running session, statistics on how much redo it has generated, but not for the past.
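
    For the currently connected sessions, something like this shows the redo generated per session (a sketch; requires access to the v$ views):

        -- redo generated (bytes) by each connected session, highest first
        SELECT s.sid, s.username, st.value AS redo_bytes
        FROM v$sesstat st
             JOIN v$statname n ON n.statistic# = st.statistic#
             JOIN v$session  s ON s.sid = st.sid
        WHERE n.name = 'redo size'
        ORDER BY st.value DESC;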

  • What causes the size of an archived log to be different from the size of the online redo log?

    11gR2 on RHEL 6.2

    4 GB is our online redo log size.

    SQL> select bytes/1024/1024/1024 GB from v$log;

    GB

    ----------

    4

    4

    4

    4

    But the archived logs are 3.55 GB in size instead of 4 GB. Some of the archived logs below that are much smaller than 3.55 GB must be caused by

    the RMAN archive log backup jobs at 10:00, 16:00 and 22:00, which initiate log switches (I guess).

    SQL> select (blocks * block_size)/1024/1024/1024 GB from v$archived_log where status = 'A';

    GB

    ----------

    3.55978966

    3.31046581

    3.55826092

    3.55963707

    1.39474106

    3.561553

    3.55736685

    3.55881786

    .135155678

    3.55546999

    .054887295

    1.88027525

    .078295708

    1.97425985

    3.55703735

    3.55765438

    .421986103

    3.55839968

    < snipped >

    Does it have something to do with the FAST_START_MTTR_TARGET parameter? It is set to zero in this DB anyway.

    SQL> show parameter mtt

    NAME                                 TYPE        VALUE

    ------------------------------------ ----------- ------------------------------

    fast_start_mttr_target               integer     0

    SQL>

    It could be related to public redo threads and private redo threads, as Jonathan Lewis points out in the other discussion that Mihael provided a link to.

    It could be related to the ARCHIVE_LAG_TARGET parameter.

    It could be the result of RMAN BACKUP commands.

    It could be a scheduled script / job that issues ALTER SYSTEM ARCHIVE LOG commands.

    Oracle can create an archived log that is smaller than the redo log if a switch or archive command occurs before the redo log is full.

    Hemant K Collette

  • Redo log file sizing using optimal_logfile_size from v$instance_recovery

    Regards,

    I have a specific question about redo log file sizing. I have deployed a test database and was exploring the selection of an optimal redo log size for performance, using the optimal_logfile_size column of the v$instance_recovery view. My main goal is to reduce the redo bytes needed for recovery. So far I have not been able to optimize the redo log file size. Here are the steps I followed:

    In order to use the v$instance_recovery view, I had to set the fast_start_mttr_target parameter, which is not set by default, so I did the following:

    (1) SQL> sho parameter fast_start_mttr_target;

    NAME                                 TYPE                              VALUE
    ------------------------------------ --------------------------------- ------------------------------
    fast_start_mttr_target               integer                           0

    (2) Setting fast_start_mttr_target requires unsetting the following parameters:
    SQL> show parameter log_checkpoint;

    NAME                                 TYPE                              VALUE
    ------------------------------------ --------------------------------- ------------------------------
    log_checkpoint_interval              integer                           0
    log_checkpoint_timeout               integer                           1800
    log_checkpoints_to_alert             boolean                           FALSE

    SQL> select ISSES_MODIFIABLE, ISSYS_MODIFIABLE, ISINSTANCE_MODIFIABLE, ISMODIFIED from v$parameter where name like 'log_checkpoint_timeout';

    ISSES_MODIFIABL ISSYS_MODIFIABLE            ISINSTANCE_MODI ISMODIFIED
    --------------- --------------------------- --------------- ------------------------------
    FALSE           IMMEDIATE                   TRUE            FALSE

    SQL> alter system set log_checkpoint_timeout = 0 scope = both;

    System altered.

    SQL> show parameter log_checkpoint_timeout;

    NAME                                 TYPE                              VALUE
    ------------------------------------ --------------------------------- ------------------------------
    log_checkpoint_timeout               integer                           0

    (3) Now setting fast_start_mttr_target:

    SQL> select ISSES_MODIFIABLE, ISSYS_MODIFIABLE, ISINSTANCE_MODIFIABLE, ISMODIFIED from v$parameter where name like 'fast_start_mttr_target';

    ISSES_MODIFIABL ISSYS_MODIFIABLE            ISINSTANCE_MODI ISMODIFIED
    --------------- --------------------------- --------------- ------------------------------
    FALSE           IMMEDIATE                   TRUE            FALSE

    Setting fast_start_mttr_target to 1200 = 20 minutes between checkpoints, per the Oracle recommendation.

    Querying the v$instance_recovery view:

    (4) SQL> select ACTUAL_REDO_BLKS, TARGET_REDO_BLKS, TARGET_MTTR, ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE, CKPT_BLOCK_WRITES from v$instance_recovery;

    ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
    ---------------- ---------------- ----------- -------------- -------------------- -----------------
    276 165888 * 93 * 59 361 16040

    Here the target MTTR was 93, so I set fast_start_mttr_target to 120.

    SQL> alter system set fast_start_mttr_target = 120 scope = both;

    System altered.

    The log file size now suggested by v$instance_recovery is 290 MB:

    SQL> select ACTUAL_REDO_BLKS, TARGET_REDO_BLKS, TARGET_MTTR, ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE, CKPT_BLOCK_WRITES from v$instance_recovery;

    ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
    ---------------- ---------------- ----------- -------------- -------------------- -----------------
    59 165888 93 59 290 16080

    I then changed the logfile size to 290 MB, as shown by the v$log view below:

    SQL> select GROUP#, THREAD#, SEQUENCE#, BYTES from v$log;

    GROUP#     THREAD#    SEQUENCE#  BYTES
    ---------- ---------- ---------- ----------
    1 1 24 304087040
    2 1 0 304087040
    3 1 0 304087040
    4 1 0 304087040
    (5) After changing the size, I observed an anomaly: the redo log blocks to be applied for recovery went from *59 to 696*, and the v$instance_recovery view now suggests a log file size of *276 MB*. Have I misunderstood something?

    SQL> select ACTUAL_REDO_BLKS, TARGET_REDO_BLKS, TARGET_MTTR, ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE, CKPT_BLOCK_WRITES from v$instance_recovery;

    ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
    ---------------- ---------------- ----------- -------------- -------------------- -----------------
    * 696 * 646947 120 59 * 276 * 18474

    Please clarify the output above. I am unable to optimize the log file size and have not been able to achieve the goal of reducing the redo log blocks to be applied for recovery; any help in this regard is appreciated.

    I don't think testing for the optimum redo size is a one-off task. Maybe if you did it each week, as a regular thing, it would make sense, but a single initial setup with an occasional check in production? No - it will be iterative, as part of deploying init.ora changes.

    In addition, it is part of the DBA's job to understand recovery and what to expect. That includes understanding all these init parameters and testing them for your environment. It also includes understanding the characteristics of the application and its load.

  • Redo log sizing

    Hi all

    OEL 5.6

    Oracle 9.2.0.6

    Are my redo logs right in number and size, given the log switch timing below? Thank you
    -rw-r--r-- 1 oraprod dba    10486272 Jan  3 23:24 log01a.dbf
    -rw-r--r-- 1 oraprod dba    10486272 Jan  3 23:24 log01b.dbf
    -rw-r--r-- 1 oraprod dba    10486272 Jan  3 23:09 log02a.dbf
    -rw-r--r-- 1 oraprod dba    10486272 Jan  3 23:09 log02b.dbf
    Log switch frequency in the alert log:
    Beginning log switch checkpoint up to RBA [0x21186.2.10], SCN: 0x056e.66a6d05f
    Thread 1 advanced to log sequence 135558
      Current log# 1 seq# 135558 mem# 0: /u02/oracle/oaproddata/log01a.dbf
      Current log# 1 seq# 135558 mem# 1: /u02/oracle/oaproddata/log01b.dbf
    Thread 1 cannot allocate new log, sequence 135559
    Checkpoint not complete
      Current log# 1 seq# 135558 mem# 0: /u02/oracle/oaproddata/log01a.dbf
      Current log# 1 seq# 135558 mem# 1: /u02/oracle/oaproddata/log01b.dbf
    Wed Jan  2 20:37:01 2013
    Completed checkpoint up to RBA [0x21186.2.10], SCN: 0x056e.66a6d05f
    Wed Jan  2 20:37:01 2013
    Beginning log switch checkpoint up to RBA [0x21187.2.10], SCN: 0x056e.66a6d5e4
    Thread 1 advanced to log sequence 135559
      Current log# 2 seq# 135559 mem# 0: /u02/oracle/oaproddata/log02a.dbf
      Current log# 2 seq# 135559 mem# 1: /u02/oracle/oaproddata/log02b.dbf
    Thread 1 cannot allocate new log, sequence 135560
    Checkpoint not complete
      Current log# 2 seq# 135559 mem# 0: /u02/oracle/oaproddata/log02a.dbf
      Current log# 2 seq# 135559 mem# 1: /u02/oracle/oaproddata/log02b.dbf
    Wed Jan  2 20:37:07 2013
    Completed checkpoint up to RBA [0x21187.2.10], SCN: 0x056e.66a6d5e4
    Wed Jan  2 20:37:07 2013
    Beginning log switch checkpoint up to RBA [0x21188.2.10], SCN: 0x056e.66a6e3f2
    Thread 1 advanced to log sequence 135560
      Current log# 1 seq# 135560 mem# 0: /u02/oracle/oaproddata/log01a.dbf
      Current log# 1 seq# 135560 mem# 1: /u02/oracle/oaproddata/log01b.dbf
    Wed Jan  2 20:37:18 2013
    Thread 1 cannot allocate new log, sequence 135561
    Checkpoint not complete
      Current log# 1 seq# 135560 mem# 0: /u02/oracle/oaproddata/log01a.dbf
      Current log# 1 seq# 135560 mem# 1: /u02/oracle/oaproddata/log01b.dbf
    Wed Jan  2 20:37:18 2013
    Completed checkpoint up to RBA [0x21188.2.10], SCN: 0x056e.66a6e3f2
    Wed Jan  2 20:37:18 2013
    Beginning log switch checkpoint up to RBA [0x21189.2.10], SCN: 0x056e.66a6f0a4
    Thread 1 advanced to log sequence 135561
      Current log# 2 seq# 135561 mem# 0: /u02/oracle/oaproddata/log02a.dbf
      Current log# 2 seq# 135561 mem# 1: /u02/oracle/oaproddata/log02b.dbf
    Wed Jan  2 20:40:14 2013
    Completed checkpoint up to RBA [0x21189.2.10], SCN: 0x056e.66a6f0a4
    Wed Jan  2 20:40:53 2013
    Beginning log switch checkpoint up to RBA [0x2118a.2.10], SCN: 0x056e.66a70cb2
    Thread 1 advanced to log sequence 135562
      Current log# 1 seq# 135562 mem# 0: /u02/oracle/oaproddata/log01a.dbf
      Current log# 1 seq# 135562 mem# 1: /u02/oracle/oaproddata/log01b.dbf
    Wed Jan  2 20:43:24 2013
    Completed checkpoint up to RBA [0x2118a.2.10], SCN: 0x056e.66a70cb2
    And I also see frequently repeated ORA-1555 errors on the same statement in the alert log; how can I avoid this?
    Is this also caused by the small redo logs?
    Beginning log switch checkpoint up to RBA [0x211b2.2.10], SCN: 0x056e.70580afc
    Thread 1 advanced to log sequence 135602
      Current log# 1 seq# 135602 mem# 0: /u02/oracle/oaproddata/log01a.dbf
      Current log# 1 seq# 135602 mem# 1: /u02/oracle/oaproddata/log01b.dbf
    Wed Jan  2 21:26:14 2013
    Completed checkpoint up to RBA [0x211b2.2.10], SCN: 0x056e.70580afc
    Wed Jan  2 21:26:16 2013
    ORA-01555 caused by SQL statement below (Query Duration=2913 sec, SCN: 0x056e.66a6f853):
    Wed Jan  2 21:26:16 2013
     INSERT INTO RA_INTERFACE_ERRORS
     (INTERFACE_LINE_ID,
      MESSAGE_TEXT,
      INVALID_VALUE)
    SELECT
    INTERFACE_LINE_ID,
    :b_err_msg6,
    'trx_number='||T.TRX_NUMBER||','||'customer_trx_id='||TL.CUSTOMER_TRX_ID
    FROM RA_INTERFACE_LINES_GT IL, RA_CUSTOMER_TRX_LINES TL, RA_CUSTOMER_TRX T
    WHERE  IL.REQUEST_ID = :b1
    AND    IL.INTERFACE_LINE_CONTEXT = 'ORDER ENTRY'
    AND    T.CUSTOMER_TRX_ID =TL.CUSTOMER_TRX_ID
    AND  IL.INTERFACE_LINE_CONTEXT = TL.INTERFACE_LINE_CONTEXT
    AND IL.INTERFACE_LINE_ATTRIBUTE1 = TL.INTERFACE_LINE_ATTRIBUTE1
    AND IL.INTERFACE_LINE_ATTRIBUTE2 = TL.INTERFACE_LINE_ATTRIBUTE2
    AND IL.INTERFACE_LINE_ATTRIBUTE3 = TL.INTERFACE_LINE_ATTRIBUTE3
    AND IL.INTERFACE_LINE_ATTRIBUTE4 = TL.INTERFACE_LINE_ATTRIBUTE4
    AND IL.INTERFACE_LINE_ATTRIBUTE5 = TL.INTERFACE_LINE_ATTRIBUTE5
    AND IL.INTERFACE_LINE_ATTRIBUTE6 = TL.INTERFACE_LINE_ATTRIBUTE6
    AND IL.INTERFACE_LINE_ATTRIBUTE7 = TL.INTERFACE_LINE_ATTRIBUTE7
    AND IL.INTERFACE_LINE_ATTRIBUTE8 = TL.INTERFACE_LINE_ATTRIBUTE8
    AND IL.INTERFACE_LINE_ATTRIBUTE9 = TL.INTERFACE_LINE_ATT
    Wed Jan  2 21:26:22 2013
    Beginning log switch checkpoint up to RBA [0x211b3.2.10], SCN: 0x056e.7062ba10
    Thread 1 advanced to log sequence 135603
      Current log# 2 seq# 135603 mem# 0: /u02/oracle/oaproddata/log02a.dbf
      Current log# 2 seq# 135603 mem# 1: /u02/oracle/oaproddata/log02b.dbf
    Wed Jan  2 21:26:22 2013
    How can I configure automatic undo management in 9i?

    Thank you

    If UNDO_RETENTION has no impact on the ORA-01555 error, what error message would you get in this scenario?

    UNDO_RETENTION = 1800
    Query running time = 1900

    The SQL runs and needs to read an undo block for data that was updated 1900 seconds ago. Meanwhile, other users are using the system and the expired undo (anything older than 1800 seconds) has been overwritten. The user's query will fail, and I would say this will be an ORA-01555? Correct me if I'm wrong.

    As pointed out by Viswarayar Maran, your UNDO_RETENTION must be longer than the duration of the query; that is what it is there for. Yes, ORA-01555 can happen when you commit inside a loop, but it can happen in this case too, I think.
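
    A minimal sketch of checking the longest observed query against UNDO_RETENTION (the retention value shown is only an example):

        -- longest-running query (seconds) observed in the v$undostat history
        SELECT MAX(maxquerylen) AS longest_query_seconds FROM v$undostat;
        -- set retention comfortably above the longest query
        ALTER SYSTEM SET undo_retention = 3600;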

    Actions:

    1. Increase the size of the REDO logs.
    2. Take another statspack report for the same period of time on a comparable day (in terms of workload, running processes, etc.) and see what it looks like.

    Question: your statspack report is from 22:00 to midnight; when are your load peaks? Do you have a batch working window during the night and then the interactive load during the day? If so, we will need to look at that time window for the statspack report too.
