Controlfile and Redo Log on a disk group

Hello!

Controlfile and redo logs on a single disk group - is that good or bad?

Do you have any best practices or a Doc ID on Metalink?

918027 wrote:
You must have two control files on two separate disks, and at least 2 members in each redo log group, each on a separate drive.

http://docs.oracle.com/cd/B10500_01/server.920/a96521/control.htm#4578

No, there is no "must have" here.

You do not "have to" have more than one control file.
You do not "have to" have more than 2 redo log groups, and those groups need have only one member each.

That is the "must have".

Now, "should" is something else.

You "should" have a minimum of 2 control files, and they "should" (not "must") be on separate disks.
You "should" have at least 2 members in each redo log group, and they "should" (not "must") be on separate disks.

And FWIW, I see no reason to necessarily keep the redo log files completely separate from the control files. The key is to keep the multiplexed copies of a particular file type separated from their 'twin'.
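As a minimal sketch of that multiplexing (all paths, the group number, and the DB name are hypothetical examples, not your actual layout):

```sql
-- Multiplex the control file: takes effect at the next restart, after you
-- copy the existing control file to the new path at the OS level.
ALTER SYSTEM SET control_files =
  '/u01/oradata/orcl/control01.ctl',
  '/u02/oradata/orcl/control02.ctl'
  SCOPE = SPFILE;

-- Add a second member to an existing redo log group on another disk:
ALTER DATABASE ADD LOGFILE MEMBER
  '/u02/oradata/orcl/redo01b.log' TO GROUP 1;
```

The point is only that each copy of a given type lives on a different device than its twin.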

Tags: Database

Similar Questions

  • datafiles and redo logs on the same drive

    Hi guys,

    http://docs.oracle.com/cd/E11882_01/server.112/e25494/onlineredo002.htm#i1306224

    >

    Datafiles should also be placed on different disks from redo log files, to reduce contention between writes of data blocks and redo records.
    >

    I really wonder whether there is actually any contention, when first of all Oracle only writes to 1 redo log file at a time.
    http://docs.oracle.com/cd/E11882_01/server.112/e25494/onlineredo001.htm
    >
    Oracle Database uses only one redo log file at a time to store redo records written from the redo log buffer. The redo log file that LGWR is actively writing to is called the current redo log file.
    >

    The process flow I understood after reading the chapters is:

    When LGWR fills a redo log file, there is a log switch plus a checkpoint, at which point the data blocks get written. That seems to be a serial flow rather than a simultaneous one. So I don't really understand the claim that contention occurs when data blocks and redo records are written to files on the same disks.

    Just to confirm with you guys: whenever there is a log switch, a checkpoint occurs too, right?
    http://docs.oracle.com/cd/E11882_01/server.112/e25494/onlineredo002.htm (you can search for "checkpoint" to reach the section of the documentation that mentions this).
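    One way to observe this yourself is the following sketch (it assumes you have ALTER SYSTEM privilege on a test instance):

    ```sql
    -- Record every checkpoint in the alert log, then force a switch:
    ALTER SYSTEM SET log_checkpoints_to_alert = TRUE;
    ALTER SYSTEM SWITCH LOGFILE;

    -- Compare the checkpoint SCN before and after the switch:
    SELECT checkpoint_change# FROM v$database;
    ```

    Note that the log switch *triggers* a checkpoint, but the checkpoint completes asynchronously; the switch does not wait for it.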

    Think about this:

    Redo means keeping information around in case you need to re-do it (recovery, or a standby database, i.e. continuous application of redo). Updates, in a broad sense, are therefore potentially written twice: once to the datafiles and once to redo. I'm sure you understand that data is modified in memory, and only later written to the datafiles by the db writer; those writes may be batched in large groups, or even long delayed. In addition, datafiles are read randomly, so you can't really think of data I/O as serial. Redo is serial, archiving is serial, but data access is random, and how random depends on how your system is used.

    So the reads and writes for redo and archive logs are fundamentally different in kind from those for data and undo. For the former, you want to be able to stream out as much I/O as you can; for the latter, you want to be able to read or write randomly at different times, with Oracle being smart enough to do some of it in memory, optimistic enough to make assumptions about when to do things, and lazy enough not to do everything right away. Redo is critical.

    A while ago, someone pointed out that with modern I/O buffered in memory you don't really need to worry about this, because after all the work required to set up and maintain separate devices, the payoff isn't much better than just striping and mirroring everything (you can google SAMI). This is true up to a point, and we can debate endlessly about RAID types, their effects on performance, and whether their buffering makes BAARF ([url http://www.baarf.com/]) pointless. But the real question is: where is the point at which you should use separate devices for redo and data? In the real world we are often handed a standard hardware configuration, which works very well until it doesn't. A disk or controller going poof in a RAID-5 can make "doesn't" happen real quick.

    You should probably hold two thoughts:

    The docs are pretty general, and some old advice no longer applies; it may have morphed into myth, or be too general to be meaningful.

    There is always something else going on in the db, and the more things are going on, the less you can generalize about serialization.

  • Standby database is down and redo logs is not transmitted

    Hi all

    I have a question about redo log transmission from the primary database to the standby database.
    What happens if the transmission of redo logs is stopped for a period of time (1 month)? If I then want to rebuild the standby database, what steps must be applied?
    Will the redo logs be queued, and once the standby database is back, will those logs be sent again? Or do I have to rebuild my standby database from scratch?

    Kind regards

    Hello;

    If I were stopping it for a month, I would change the 'log_archive_dest_state_n' parameter on the primary to DEFER and leave it that way.

    I would then delete the standby database, and if the folder structure was not the same, I would correct that.

    When the month was up, I would use RMAN to duplicate a new standby:

    http://www.Visi.com/~mseberg/duprman2.html

    And then I would set "log_archive_dest_state_n" back to ENABLE.
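    Assuming the standby is archive destination 2 (the destination number is an example, check your own log_archive_dest_n settings), the defer/enable steps would look like:

    ```sql
    -- On the primary: stop shipping redo to the standby destination.
    ALTER SYSTEM SET log_archive_dest_state_2 = DEFER;

    -- ... later, after the standby has been rebuilt with RMAN ...

    -- Resume shipping redo.
    ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;
    ```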

    Best regards

    mseberg

  • Get the former locations of the data files and Redo logs

    Version: 11.2
    Platform: Solaris 10

    When managing hundreds of DBs, we do not know the file locations of all those DBs. Say a DB goes down and you have all the required RMAN backups.

    When you restore the DB to a new path on a new server, you must run the commands below for the datafiles and online redo logs (ORLs). But how do we know:

    A. the old location of the datafiles, and

    B. the old location of the online redo logs, so that I can run

    run
    alter database rename file 'oldPath_of_OnlineRedoLogs' to 'newPath_of_OnlineRedoLogs' ;  --- Without this command, the restored control file will still point at the old online redo log locations
    run {
    set newname for datafile 1 to '/u04/oradata/lmnprod/lmnprod_system01.dbf' ;
    set newname for datafile 2 to '/u04/oradata/lmnprod/lmnprod_sysaux01.dbf' ;
    set newname for datafile 3 to '/u04/oradata/lmnprod/lmnprod_undotbs101.dbf' ;
    set newname for datafile 4 to '/u04/oradata/lmnprod/lmnprod_audit_ts01.dbf' ;
    set newname for datafile 5 to '/u04/oradata/lmnprod/lmnprod_quest_ts01.dbf' ;
    set newname for datafile 6 to '/u04/oradata/lmnprod/lmnprod_yelxr_ts01.dbf' ;
    .
    .
    .
    .
    .
    }

    Hello

    With Oracle 11.2, you can use the 'SET NEWNAME FOR DATABASE' feature together with OMF.

    SET NEWNAME FOR DATABASE TO '/oradata/%U';
    RESTORE DATABASE;
    SWITCH DATAFILE ALL;
    SWITCH TEMPFILE ALL;
    RECOVER DATABASE;
    

    After the restore and recover of the database (i.e. before the OPEN RESETLOGS), you can rename the redo logs. Just query the MEMBER column of v$logfile and issue: alter database rename file 'oldPath_of_OnlineRedoLogs' to 'newPath_of_OnlineRedoLogs';
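    A small sketch for generating those rename statements from v$logfile ('/u04/oradata/lmnprod' is a placeholder for the new path; adjust to your layout):

    ```sql
    -- Emit one ALTER DATABASE RENAME FILE statement per redo log member,
    -- keeping the original file name but swapping in the new directory.
    SELECT 'ALTER DATABASE RENAME FILE ''' || member ||
           ''' TO ''/u04/oradata/lmnprod/' ||
           SUBSTR(member, INSTR(member, '/', -1) + 1) || ''';'
      FROM v$logfile;
    ```

    Spool the output and run it against the mounted database before the RESETLOGS open.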

    When we use ASM, it is much easier to use OMF because Oracle automatically creates the directory structure.
    But when we use a file system, OMF often goes unused because DBAs dislike system-generated file names on a file system.

    If you don't like OMF on a file system, you can use the script in the thread below to help you restore using readable names for datafiles, tempfiles, and redo logs.

    {message: id = 9866752}

    Kind regards
    Levi Pereira

  • Redo Log and Supplemental Logging doubts related

    Hi friends,

    I am studying supplemental logging in detail, reading many articles and the Oracle documentation on this topic and on redo logs, but I could not find answers to some doubts...
    Please help me clear them up.

    Scenario: we have a table with a primary key, and we execute an UPDATE on the table that does not use the primary key column in the WHERE clause...
    Question: in this case, will the redo records generated for the changes made by that UPDATE contain the primary key column values?

    Question: if we have a table with a primary key, do we need to enable supplemental logging on the primary key column of this table? If so, under what circumstances?

    Question: if we set up Streams replication on this (primary-keyed) table, why do we really need to enable supplemental logging on it? (I read documentation saying that Streams requires some additional information, but what information exactly? Again, this question is closely related to the first one.)

    Please also suggest any good article/site that provides inside details of redo logs and supplemental logging, if you know one.

    Kind regards
    Lifexisxnotxsoxbeautiful...

    (1) Assuming that you do not update the primary key column and supplemental logging is not enabled, Oracle does not have to log the primary key column in the redo log, just the ROWID.

    (2) is rather difficult to answer without being tautological. You need to enable supplemental logging if and only if you have some downstream use for the additional columns in the redo logs. Streams, and the technologies built on top of it, are the most common reason to enable supplemental logging.

    (3) If you run an update such as

    UPDATE some_table
      SET some_column = new_value
     WHERE primary_key = some_key_value
       AND <>
    

    and look at the UPDATE statement that LogMiner reconstructs from the redo logs, then in the absence of supplemental logging it would basically be something like

    UPDATE some_table
      SET some_column = new_value
     WHERE rowid = rowid_of_the_row_you_updated
    

    Oracle has no need to replay the exact SQL statement you issued, so it doesn't have to write the SQL statement into the redo log, and it doesn't have to worry about the UPDATE taking a long time to run (otherwise it would take as much time to apply an archived log as it took to generate it, which would be disastrous in a recovery situation). It just needs to be able to rebuild an equivalent statement from the information contained in the redo, which is just the ROWID and the columns that changed.

    If you try to execute that statement on a different database (via Streams, for example), the ROWID may be totally different on the destination database, since a ROWID is just the physical address of a row on disk. So enabling supplemental logging tells Oracle to also log the primary key columns in the redo, which allows LogMiner / Streams / etc. to rebuild the statement using the primary key values of the changed rows, and those are the same on the source and destination databases.
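    The DDL being described would look something like this (the table name is a hypothetical example):

    ```sql
    -- Log the primary key columns in the redo whenever a row changes,
    -- even when the statement itself never references them:
    ALTER TABLE some_table ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

    -- Database-wide minimal supplemental logging, the usual prerequisite
    -- for LogMiner / Streams:
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    ```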

    Justin

  • restore the data files and redo

    Hi,
    in a recovery scenario:

    I have lost the datafiles and redo log files, but still have the controlfile and spfile.
    H:\>rman target /
    
    Recovery Manager: Release 10.2.0.4.0 - Production on Thu May 21 11:16:08 2009
    
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    
    connected to target database: ORCL (DBID=1215151677, not open)
    
    RMAN> startup nomount
    
    database is already started
    
    RMAN> restore database;
    
    Starting restore at 21-MAY-09
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=155 devtype=DISK
    
    channel ORA_DISK_1: starting datafile backupset restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF
    restoring datafile 00002 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\UNDOTBS01.DBF
    restoring datafile 00003 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSAUX01.DBF
    restoring datafile 00004 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
    channel ORA_DISK_1: reading from backup piece C:\ORACLE\PRODUCT\10.2.0\FLASH_REC
    OVERY_AREA\ORCL\BACKUPSET\2009_05_21\O1_MF_NNNDF_TAG20090521T111219_51BB84N3_.BK
    P
    channel ORA_DISK_1: restored backup piece 1
    piece handle=C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\BACKUPSET\2009_05
    _21\O1_MF_NNNDF_TAG20090521T111219_51BB84N3_.BKP tag=TAG20090521T111219
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:55
    Finished restore at 21-MAY-09
    
    RMAN> recover database;
    
    Starting recover at 21-MAY-09
    using channel ORA_DISK_1
    
    starting media recovery
    media recovery failed
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 05/21/2009 11:17:38
    ORA-00283: recovery session canceled due to errors
    RMAN-11003: failure during parse/execution of SQL statement: alter database reco
    ver if needed
     start
    ORA-00283: recovery session canceled due to errors
    ORA-00313: open failed for members of log group 3 of thread 1
    ORA-00312: online log 3 thread 1: 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.
    LOG'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    
    RMAN>
    It seems that RMAN has not backed up the redo logs? Is that correct?

    How can I recover from this situation?

    regards

    There is an associated bug (it can still affect 10.2.0.4) with RMAN recovery using the current controlfile together with UNTIL TIME.

    Try the process with UNTIL SEQUENCE instead of UNTIL TIME.
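    A sketch of the UNTIL SEQUENCE approach (the sequence and thread numbers are placeholders; query V$ARCHIVED_LOG for the last sequence you actually have):

    ```sql
    RUN {
      -- Recover up to, but not including, this sequence. 100 here is a
      -- hypothetical value; use the last good archived log sequence + 1.
      SET UNTIL SEQUENCE 100 THREAD 1;
      RESTORE DATABASE;
      RECOVER DATABASE;
    }
    -- Then, since the online redo is lost, open with:
    -- ALTER DATABASE OPEN RESETLOGS;
    ```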

  • How to create Redo Log as a good way to avoid the problem of performance

    Hi Experts,

    I work in the following environment.

    OS - Windows server 2012

    version - 11.2.0.1.0

    Server: production server

    From the 1st to the 10th of each month we have a huge processing load, involving a lot of DML and DDL.

    I have implemented RMAN as well.

    My alert log entries are as below:

    Tue Sep 08 17:05:34 2015

    Thread 1 cannot allocate new log, sequence 88278

    Private strand flush not complete

    Current log# 1 seq# 88277 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO01.LOG

    Thread 1 advanced to log sequence 88278 (LGWR switch)

    Current log# 2 seq# 88278 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO02.LOG

    Tue Sep 08 17:05:38 2015

    Archived Log entry 81073 added for thread 1 sequence 88277 ID 0x38b3fdf6 dest 1:

    Tue Sep 08 17:05:52 2015

    Thread 1 cannot allocate new log, sequence 88279

    Checkpoint not complete

    Current log# 2 seq# 88278 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO02.LOG

    Thread 1 advanced to log sequence 88279 (LGWR switch)

    Current log# 3 seq# 88279 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO03.LOG

    Tue Sep 08 17:05:58 2015

    Archived Log entry 81074 added for thread 1 sequence 88278 ID 0x38b3fdf6 dest 1:

    Tue Sep 08 17:06:16 2015

    Thread 1 cannot allocate new log, sequence 88280

    Checkpoint not complete

    When I checked on the internet I found a few points, and I need clarity on the following:

    - It is recommended that the redo log switch at most 5 times per hour, but at peak time I get about 100 redo log switches per hour.

    - It is recommended to have large redo logs, but I have 3 redo groups and each is the default size of 50 MB.

    - My redo log groups are not multiplexed; by default each group has only a single member:

    Group 1 - D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO01.LOG

    Group 2 - D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO02.LOG

    Group 3 - D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO03.LOG

    And experts advise that adding redo groups in another location will give better performance. Does that mean I need to add new redo log groups on a different drive, or move the existing redo logs to another location?

    In short, my live server runs with the default redo location and size. I'm trying to set this up in a better way and am getting confused about how to do it properly.

    Experts, please share your comments, a recommended weblink, or a document on configuring the redo logs for my environment, based on the needs above.

    Thanks in advance. If you need more information to help me, just ask and I'll share it.

    Hello

    You can't resize an existing redo log group.

    You will need to create 3 new, larger redo log groups (200 MB each, say) and drop the old 50 MB groups.

    Here you can find a good explanation: Oracle DBA tips Archives
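    A sketch of that replace-and-drop cycle (group numbers, paths, and the 200 MB size are examples to adapt):

    ```sql
    -- Create new, larger groups; ideally each on a separate disk:
    ALTER DATABASE ADD LOGFILE GROUP 4
      ('D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO04.LOG') SIZE 200M;
    ALTER DATABASE ADD LOGFILE GROUP 5
      ('D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO05.LOG') SIZE 200M;
    ALTER DATABASE ADD LOGFILE GROUP 6
      ('D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO06.LOG') SIZE 200M;

    -- Switch and checkpoint until no old group is CURRENT or ACTIVE
    -- (check v$log.status between commands):
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;

    -- Then drop the old 50 MB groups:
    ALTER DATABASE DROP LOGFILE GROUP 1;
    ALTER DATABASE DROP LOGFILE GROUP 2;
    ALTER DATABASE DROP LOGFILE GROUP 3;
    ```

    Remember to remove the old files at the OS level afterwards; DROP LOGFILE GROUP does not delete them.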

  • Disk groups creation fails

    After manually selecting the desired HDDs and SSD for creating a disk group, the operation completed, but no disk group was created.

    There are currently two possible problems that can cause this behavior. In most cases, it has to do with Virtual SAN not being licensed correctly. License the cluster for Virtual SAN via the vSphere Web Client. The Virtual SAN feature is not automatically enabled on the cluster when a Virtual SAN license is added to the vCenter Server license inventory.

    The second possibility is a vSphere Web Client refresh timeout. Depending on the number of disk groups and the number of disks, the operation can take a long time to finish. Log out of the client and log back in.

  • optimize the size of the temp and redo files

    Hi gurus,

    are there formulas for finding the optimal size of the temporary tablespace and of the redo log groups?

    If the redo log groups fill up often, what steps should be taken?

    I know the formula for the optimal size of the undo tablespace: transaction rate * undo retention * block size.

    Please let me know how to approach these problems.

    Thank you!

    In general, I try to shoot for redo log files switching once every 20 minutes. However, on many systems there can be a wide range of switch times, depending on the workload.

    As for temp, it's pretty easy. Set it at about 32 MB with autoextend on and let it grow as large as it needs to be. Once that happens, it will stay virtually stable at that size, until something new comes along and it needs to grow even more.
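    For the redo sizing question, 10g and later also expose the instance's own suggestion, which can serve as a starting point (a hint, not a formula; it assumes FAST_START_MTTR_TARGET is set):

    ```sql
    -- OPTIMAL_LOGFILE_SIZE (in MB) is Oracle's estimate of a redo log size
    -- consistent with the current FAST_START_MTTR_TARGET; NULL if unset.
    SELECT optimal_logfile_size FROM v$instance_recovery;
    ```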

  • Redo log buffer question

    Hi master,

    This seems to be very basic, but I would like to know internal process.

    We all know that LGWR writes redo entries from the redo log buffer to the online redo logs on disk. On commit, an SCN is generated and tagged to the transaction, and LGWR writes this to the online redo log files.

    But my question is: how do those redo entries get into the redo log buffer in the first place? As I see it, all the necessary data blocks are read into the buffer cache by the server process, modified there, and committed. DBWR writes them to the datafiles; but at what point, and by which process, is this committed transaction (the redo entries, I mean) written into the redo log buffer?

    Does LGWR do that? What exactly happens internally?

    If you can shed some light on the internals, I will be grateful...


    Thanks and greetings
    VD

    Vikrant,
    I will write briefly because I'm using a PDA. In general, this happens:
    1. A calculation of how much space is required in the log buffer.
    2. The server process acquires a redo copy latch, to signal that some redo is about to be copied in.
    3. A redo allocation latch is taken to allocate space in the log buffer.
    4. The redo allocation latch is released once the space is allocated.
    5. The redo copy latch is held while the redo content is copied into the log buffer.
    6. The redo copy latch is released.

    HTH
    Aman

  • Disk groups are not visible in the cluster. The vSAN datastore exists, but 2 hosts (of 8) in the cluster do not see the vSAN datastore. Their storage is not recognized.

    http://i.imgur.com/pqAXtFl.PNG

    http://i.imgur.com/BnztaDD.PNG

    I don't know how to even tear it down and rebuild it if the disk groups are not visible. The disks show as healthy in the storage adapters on each host.

    Currently on the latest version of vCenter 5.5. Hosts running 5.5 build 2068190.

    Just built. Happy to demolish and rebuild. I just don't know why it is not visible on those two hosts, and why the disk groups are only recognized on 3 hosts when more are contributing. Also strange that I can't get the disk groups to populate in vCenter. I tried two different browsers (Chrome and IE).

    I have it working now.

    All hosts are identical, running ESXi 5.5. All hosts are homogeneous in CPU / disk controller / installed RAM / storage.

    To get it working, I had to manually destroy all traces of vSAN on each single host node:

    (1) Put the hosts in maintenance mode and remove them from the cluster. I was unable to disable vSAN on the cluster, so I did it on each host node (manually via the CLI below), then logged out of the vCenter web client and back in, to finally refresh the ability to disable it on the cluster.

    esxcli vsan cluster get - to check the status of each host.

    esxcli vsan cluster leave - to drop the host from the vSAN cluster.

    esxcli vsan storage list - to view the disks in the individual host's disk group.

    esxcli vsan storage remove -d naa.id_of_magnetic_disks_here - to remove each of the magnetic disks in the disk group (you can skip this by using the following command to remove the SSD, which drops every disk in that host's group with it).

    esxcli vsan storage remove -s naa.id_of_solid_state_disks_here - this removes the SSD and all the magnetic disks in a given disk group.

    After that, I was able to manually add the hosts back to the cluster, leave maintenance mode, and configure the disk groups. The aggregated capacity of the vSAN datastore is correct now, and everything is functional.

    Another question for those of you still reading... How do I configure things so that a VM migrated to (or created on) the vSAN datastore immediately picks up the default storage policy I built for vSAN?

    Thanks for anyone who has followed.

  • What do the redo log files hold?

    Hello Experts,

    I have been reading articles on redo log files and undo segments. I was wondering something very simple: what do the redo log files hold? Do they store the SQL statements?

    Let's say my update statement modifies 800 blocks of data. A single update statement can modify 800 different data blocks, right? Yes, that can be true. I think those data blocks cannot be held in the redo log buffer, right? I mean, I know what the redo log buffer and the redo log files do, and I know the task of the LGWR background process. But I wonder: does the buffer hold the data blocks? It is not supposed to hold data blocks the way the buffer cache does, right?

    My second question is: doesn't a rollback affect the redo log buffer? Because a rollback does not need the redo log buffer to take effect. Conversely, the ROLLBACK statement itself is recorded in the redo log buffer when someone issues it, am I right?

    As far as I know, rollback interacts directly with the UNDO TABLESPACE?

    I hope that I have to express myself clearly.

    Thanks in advance.

    Here's my question:

    My second question is: doesn't a rollback affect the redo log buffer? Because a rollback does not need the redo log buffer to take effect. Conversely, the ROLLBACK statement itself is recorded in the redo log buffer when someone issues it, am I right?

    As far as I know, rollback interacts directly with the UNDO TABLESPACE?

    Yes, where else would the undo data come from? The undo tablespace contains the undo segments, which contain the undo data required for the rollback of your transaction.

    I can say that rollback does not alter the data already in the redo log buffer. In other words, the change vectors remain the same after the rollback. Conversely, the ROLLBACK command itself is also recorded in the redo log file. As the name says, all commands are saved in the REDO LOGS.

    I hope I am not wrong so far?

    Not sure why you even bring the redo log buffer into rollback? This is why I asked you what you thought it was for, and where the rollback actually occurs. The answer is that it happens in the buffer cache. Before you worry about change vectors, you must understand that it doesn't matter what is stored where, as long as the transaction is recorded in the transaction table of the undo segment. If the transaction table indicates that the transaction is no longer there, there must have been a rollback of the transaction. Change vectors are saved in the redo log file, while the rollback itself happens on the data blocks stored in the datafiles, using the undo blocks stored in the undo datafiles.

    A while ago I read an article about redo and undo in which the transaction process is explained. Here is the link: http://pavandba.files.wordpress.com/2009/11/undo_redo1.pdf

    I found some interesting information in this article as follows.

    It is worth noting that during the rollback process, the redo logs never participate. The only time redo logs are read is during recovery and archiving. This is a key tuning concept: redo logs are written to; Oracle does not read them during normal processing. As long as you have sufficient devices so that when ARCH is reading a file, LGWR is writing to a different device, there is no contention for redo logs.

    If redo logs are never involved in the rollback process, how does Oracle know the order of the transactions? As far as I know, that is only written in the redo logs.

    I would be very interested in Aman's thoughts on this.

    Why do you ask?

    Now, before giving a response, let me say two things. One, I know Pavan and he is a regular contributor to this forum and to several others, including Facebook groups; and two, with all due respect to him, a little advice for you: when you try to understand a concept, stick to the Oracle documentation and do not read and merge articles and blog posts from all over the web. Everyone who publishes on the web has their own way of expressing things, and many times the context of the writing makes things more confusing. Then we can try to clear up any doubts you are left with after reading the various search results on the web.

    Redo logs are used for rolling forward (recovery), not for rollback. The reason is that redo log files are applied in sequential order, and that is not what a rollback needs; a rollback only has to be done for a few blocks. Basically, what happens in a rollback is that the undo records required for a data block are applied in the reverse order of their creation. The transaction's entry is in the ITL slot of the data block, which points to the required Undo Byte Address (UBA), from which Oracle also knows which undo blocks are needed for the rollback of your transaction. As the data blocks are rolled back, the ITL slots are cleared as well.

    In addition, you must remember that until the transaction is finished, with either a commit or a rollback, its undo data remains intact. The reason is that Oracle must guarantee that the undo data is available to roll the transaction back. The reason undo data is also recorded in the redo logs is to ensure that, in the event of the loss of the undo datafile, recovering it is possible. Because that recovery also needs the changes that happened to the undo blocks, the change vectors for the undo blocks are saved in the redo log buffer and, from there, in the redo log files.

    HTH

    Aman...

  • Redo log and members groups

    Hi, I am very confused by the 11g documentation on redo log groups and members.

    Please could someone help me with that?

    - What is the difference between a redo log group and the members within the group?

    - How does adding a member help protect / multiplex a group?

    - I am sure I've read that if all members of the current redo log group are damaged, then the 2 other groups cannot bring the database back to how it was... so what's the point of having them?

    Any help would be appreciated

    806595 wrote:
    OK thanks. So, for example, if we had only one log group and the database was NOT in archivelog mode, once that redo log filled, it would be overwritten and the previous changes would be lost?

    Before we start, remember that whenever you need redo log groups, the mandatory minimum requirement is 2:1, meaning 2 log groups with 1 member each. Now, a log group is a logical thing; no such object actually exists on disk. It is a way to club/join/merge/combine/group the physical redo log files that are written together. A minimum of one physical member is a must in a group, hence 2:1.

    Now, the log groups (or rather the log files within them, which are what actually exist) are written sequentially by LGWR. That means LGWR writes to one group, to all of its members at the same time, fills it completely, and then makes a switch (called a LOG SWITCH) to the next inactive log group and carries on there. This is why a minimum of two log groups is needed for LGWR to work: the work is done on a single group, and when the switch happens, the previous, filled group is checkpointed to the datafiles and marked inactive thereafter, which makes it eligible again for LGWR to write to.

    HTH
    Aman...

  • [Account] redo logs groups and its members?

    Hello gurus,

    Well, the theory written in books may differ from the actual situation; different companies have different configurations...

    How do we determine how many redo log groups there should be? And how many members are best for each group?

    What are the considerations?

    Kind regards

    NIA...

    What are the considerations?

    - How frequently does the log switch under a 'normal' workload (time in minutes)?
    - Can the archiver complete its work before that log file is needed for reuse?
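    Switch frequency under normal load can be measured from v$log_history, for example:

    ```sql
    -- Log switches per hour over the last day:
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS switches
      FROM v$log_history
     WHERE first_time > SYSDATE - 1
     GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
     ORDER BY 1;
    ```

    If the busiest hours show far more switches than you want, the groups are too small for the workload.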

  • Redo logs and crash of the disk where they are stored

    What happens when the disk where the redo logs are written fails (e.g., a disk crash)? I read somewhere that the database shuts down. Is this right? Can you tell Oracle to copy the redo logs to 2 locations in order to improve the safety of the system?

    Thanks in advance,
    Alexandre Bailly

    user8716187 wrote:
    What happens when the disk where the redo logs are written fails (e.g., a disk crash)? I read somewhere that the database shuts down. Is this right? Can you tell Oracle to copy the redo logs to 2 locations in order to improve the safety of the system?

    Thanks in advance,
    Alexandre Bailly

    Online redo logs can and should be multiplexed, with copies on separate physical devices. The details are in the ALTER DATABASE command, found in the SQL Reference Guide. Additional information can be found by going to tahiti.oracle.com, drilling down to your product and version (you didn't name them), then using the "search" feature to look for "redo". There is a lot of information in the Administrator's Guide.

    In addition, by default Oracle will name the redo logs "redo_*.log". ".log" is an open invitation to SAs to open them with a text editor, or to delete them; after all, "it's just a log file". I name mine by the older convention of "redo_*.rdo" just to reduce this kind of human error.
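    A minimal multiplexing sketch per the ALTER DATABASE syntax mentioned above (paths and group numbers are hypothetical):

    ```sql
    -- Put a second member of each group on a different physical device:
    ALTER DATABASE ADD LOGFILE MEMBER
      '/disk2/oradata/orcl/redo_g1_m2.rdo' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER
      '/disk2/oradata/orcl/redo_g2_m2.rdo' TO GROUP 2;

    -- Verify the layout:
    SELECT group#, member FROM v$logfile ORDER BY group#;
    ```

    With at least two members per group on separate devices, the loss of one disk no longer takes down the instance.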
