Redo Log and Supplemental Logging related doubts

Hi friends,

I am studying supplemental logging in detail. I have been reading a lot of articles and the Oracle documentation on this topic and on redo logs, but could not find answers to some doubts...
Please help me clear them up.

Scenario: we have a table with a primary key, and we execute an UPDATE on the table that does not use the primary key column in its WHERE clause...
Question: in this case, do the redo records generated for the changes made by the UPDATE contain the primary key column values?

Question: If we have a table with a primary key, do we need to enable supplemental logging on the primary key column of this table? If so, under what circumstances do we need to?

Question: If we set up Streams replication on this table (which has a primary key), why do we really need to enable supplemental logging on it? (I read in the documentation that Streams requires some additional information, but what information exactly? Once again this question is closely related to the first question.)

Please also suggest any good article/site that provides inside details of redo logs and supplemental logging, if you know one.

Kind regards
Lifexisxnotxsoxbeautiful...

(1) Assuming that you do not update the primary key column and supplemental logging is not enabled, Oracle doesn't have to log the primary key column in the redo, just the ROWID.

(2) is rather difficult to answer without being tautological. You need to enable supplemental logging if and only if you have some downstream use for additional columns in the redo logs. Streams, and technologies built on top of it, are the most common reason to enable supplemental logging.
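For reference, this is roughly how primary key supplemental logging is enabled, either database-wide or for a single table (a minimal sketch; some_table is just a placeholder, not a name from the original question):

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- or limited to one table:
ALTER TABLE some_table ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;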

(3) If you run an update such as

UPDATE some_table
  SET some_column = new_value
 WHERE primary_key = some_key_value
   AND <>

and look at the UPDATE statement that LogMiner reconstructs from the redo in the absence of supplemental logging, it would basically be something like

UPDATE some_table
  SET some_column = new_value
 WHERE rowid = rowid_of_the_row_you_updated

Oracle has no need to replay the exact SQL statement you issued (and so it doesn't have to write the SQL statement in the redo log, it doesn't have to worry if the UPDATE takes a long time to run (otherwise it would take as much time to apply an archived log as it took to generate it, which would be disastrous in a recovery situation), etc.). It just needs to rebuild a SQL statement from the information contained in the redo, which is just the ROWID and the columns that changed.

If you try to execute that statement on a different database (via Streams, for example), the ROWID on the destination database may be totally different (since a ROWID is just the physical address of a row on disk). So adding supplemental logging tells Oracle to also log the primary key column, and allows LogMiner / Streams / etc. to rebuild the statement using the primary key values for the changed rows, which will be the same on the source and destination databases.
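If you want to see what LogMiner actually reconstructs, something along these lines can be used (a hedged sketch only; the archived log path and table name are hypothetical):

-- register an archived log and start LogMiner with the online dictionary
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/u01/arch/arch_1_123_456.arc');
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- SQL_REDO shows the rebuilt statement: ROWID-based without supplemental
-- logging, primary-key-based once supplemental logging is enabled
SELECT sql_redo
  FROM v$logmnr_contents
 WHERE table_name = 'SOME_TABLE';

EXEC DBMS_LOGMNR.END_LOGMNR;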

Justin

Tags: Database

Similar Questions

  • Questions about database parameters when using a fast recovery area and writing two copies of archived redo logs.

    My databases are 11.2.0.3.7 Enterprise Edition. My OS is AIX 7.1.

    I am about to convert the databases to use individual fast recovery areas and have two questions about what values to assign to the database parameters related to archived redo logs. This example refers to one database.

    I read that if I specify

    log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'

    the names of the archived redo logs written to the fast recovery area default to '%t_%s_%r.dbf'.

    In the past my archived redo logs have been named based on the parameter

    log_archive_format='GPAIDT_archive_log_%t_%s_%r.arc'

    I think log_archive_format will be ignored for archived redo logs written to the fast recovery area.

    I am planning to write a second copy of the archived redo logs based on the parameter

    alter system set log_archive_dest_2='LOCATION=/t07/admin/GPAIDT/arch'

    If I do this, will the copy of the logs placed in /t07 be named '%t_%s_%r.dbf' or 'GPAIDT_archive_log_%t_%s_%r.arc'?

    Before using a fast recovery area, I used the OEM 12c Console to specify database backup settings that deleted archived redo logs after 1 backup. The Oracle manuals say to instead specify a deletion policy of 'none' and let Oracle delete logs in the fast recovery area as needed. Since I am keeping a second copy of these log files in /t07, should I keep the policy that deletes logs after 1 backup? If I don't, how will they be removed from /t07?

    Thank you

    Bill

    If I do this, will the copy of the logs placed in /t07 be named '%t_%s_%r.dbf' or 'GPAIDT_archive_log_%t_%s_%r.arc'?

    They will be "GPAIDT_archive_log_%t_%s_%r.arc". LOG_ARCHIVE_FORMAT is only ignored for directories under OMF.
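    Putting the pieces together, the setup being discussed would look roughly like this (a sketch based only on the values quoted above, not a verified configuration):

    -- archived logs go both to the fast recovery area and to /t07
    ALTER SYSTEM SET log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_2='LOCATION=/t07/admin/GPAIDT/arch' SCOPE=BOTH;

    -- static parameter, takes effect after restart; applies to the non-OMF destination
    ALTER SYSTEM SET log_archive_format='GPAIDT_archive_log_%t_%s_%r.arc' SCOPE=SPFILE;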

    Since I am keeping a second copy of these log files in /t07, should I keep the policy that deletes logs after 1 backup? If I don't, how will they be removed from /t07?

    You can keep the deletion policy as it is. From the Oracle documentation on configuring the ARCHIVELOG DELETION POLICY: "the archived redo log deletion policy applies to all archiving destinations, including the fast recovery area."
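    In RMAN terms, the policy in question is set like this (illustrative sketch; '1 times' matches the "deleted after 1 backup" behaviour described above):

    CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE DISK;

    -- or fall back to the default and let Oracle manage the fast recovery area itself:
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;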

  • Clusterware and/or RAC on separate storage, synchronized by applying redo logs?

    Hello Experts,

    I am researching high-availability architectures to meet high service level requirements (>= 99.7 percent uptime and "no loss of important data") for a client.

    I have limited resources for implementing this architecture: two physical database servers running 11g Standard Edition (so Data Guard is not an option; Enterprise Edition is not an option because of the price). Data storage will be on a SAN.

    The ideal solution would be an architecture with both node redundancy (Clusterware / RAC) and data redundancy (as in a physical standby: applying redo logs rather than mirroring potentially corrupt physical data files).

    I did research Clusterware and RAC, but they use shared storage. I will use a SAN for storage, but that would not prevent physically mirroring corrupt data files.

    Is it possible to set up a RAC/Clusterware architecture with separate storage for each node, where the two databases are synchronized by applying redo logs?

    Is it possible to apply redo logs instantly, to minimize data loss in case of automatic failover?

    If you need more information, I'll gladly provide it.

    Thanks in advance,

    Peter

    A RAC cluster still needs shared storage for the database files: each cluster node cannot have its own separate storage.

    You need at least a third physical database server for the standby database, which can work without Data Guard as long as you use your own scripts to ship and apply archived redo logs, or use a product like Dbvisit.

    I don't think it's possible to apply redo immediately without Data Guard.
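    For completeness, a script-based standby of this kind boils down to something like the following (a rough sketch under the assumption of a manually maintained standby, not a tested procedure):

    -- 1. on the primary: archived logs are shipped to the standby host by your own
    --    scripts (scp, rsync, etc.)
    -- 2. on the standby instance, mounted as a standby, apply whatever has arrived:
    RECOVER AUTOMATIC STANDBY DATABASE;
    -- repeat on a schedule; at failover time, activate and open the standby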

  • Standby redo logs and the directory structure on the standby site

    Hi guru

    I just want to confirm: I know that if the directory structure is different, I need to set these 2 parameters in the pfile

    on the primary site:

    DB_FILE_NAME_CONVERT = 'standby', 'primary'

    LOG_FILE_NAME_CONVERT = 'standby', 'primary'

    On the standby site:

    DB_FILE_NAME_CONVERT = 'primary', 'standby'

    LOG_FILE_NAME_CONVERT = 'primary', 'standby'

    But I want to confirm whether I have to give the full directory path in the two parameters above:

    as:

    DB_FILE_NAME_CONVERT = '/u01/oracle/app/oracle/oradata/standby', '/u01/oracle/app/oracle/oradata/primary'

    LOG_FILE_NAME_CONVERT = '/u01/oracle/app/oracle/oradata/standby', '/u01/oracle/app/oracle/oradata/primary'

    Second confusion:

    After the standby redo logs created on the primary are transferred to the standby under the directory structure mentioned above, and after the primary DB backup is restored along with the standby control file, will that control file affect the standby redo logs placed at the above-mentioned location?

    Thanks in advance for your help

    vk82 wrote:

    In fact, I create the standby by using the RMAN DUPLICATE command. But where I am confused is this: I take the backup of the primary on the C:\backup_files path, and after that I transfer the backups to the standby under C:\backup_files. When I then restore, will it create the datafiles and other files in the other directory I mentioned using DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT? I think yes, but I need your advice on this.

    Hello

    Yes, the files will be created under the directories mentioned in the paths of the parameters 'db_file_name_convert' and 'log_file_name_convert'.
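    As an illustration only (the paths are made up, not taken from this thread), the two parameters are typically set on the standby before running the DUPLICATE:

    -- both are static parameters, so they go into the spfile and need a restart
    ALTER SYSTEM SET db_file_name_convert='/u01/oradata/primary','/u01/oradata/standby' SCOPE=SPFILE;
    ALTER SYSTEM SET log_file_name_convert='/u01/oradata/primary','/u01/oradata/standby' SCOPE=SPFILE;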

    Kind regards
    Shivananda

  • datafiles and redo logs on the same drive

    Hi guys,

    http://docs.Oracle.com/CD/E11882_01/server.112/e25494/onlineredo002.htm#i1306224

    >

    Data files should also be placed on different disks from redo log files to reduce contention in writing data blocks and redo records.
    >

    I really wonder whether there is actually any contention, when first of all Oracle only writes to 1 redo log file at a time.
    http://docs.Oracle.com/CD/E11882_01/server.112/e25494/onlineredo001.htm
    >
    Oracle Database uses only one redo log file at a time to store redo records written from the redo log buffer. The redo log file that LGWR is actively writing to is called the current redo log file.
    >

    So the process flow I got after reading the chapters is:

    When LGWR fills a redo log file (with redo records), there is a log switch plus a checkpoint, at which point the writing of data blocks occurs. That seems to be a serial flow rather than a simultaneous one. So I don't really understand the claim that contention will take place when data blocks and redo records (i.e. the redo log file) are written on the same disks.

    Just to confirm with you guys: whenever there is a log switch, a checkpoint occurs too, right?
    http://docs.Oracle.com/CD/E11882_01/server.112/e25494/onlineredo002.htm - you can search for 'checkpoint' to find the section of this documentation where it is mentioned.

    Think about this:

    Redo means keeping the information around in case you need to re-do it (recovery, or a standby, i.e. continuous recovery). Updates, in the broad sense, will therefore potentially be written twice: once to the data files and once to the redo. I'm sure you understand that data is written in memory and only later to the data files by the db writer, and there could be many writes of data grouped together, or they may even be long delayed. In addition, the data files are read randomly, so you can't really think of redo as serial compared to the data. Redo is serial, archiving is serial, but data is random, and how random depends on how your system is used.

    So that means the pattern of reads and writes for redo and archive is fundamentally different from data and undo. For the former, you want to be able to stream out as much I/O as you can; for the latter, you want to be able to read or write randomly at different times, with Oracle being smart enough to do some of it in memory, optimistic enough to make assumptions about when to do things, and lazy enough not to do everything right away. Redo is critical.

    A while ago, someone pointed out that with modern I/O buffered in memory you don't really need to worry about this, because all the work required to set it up and maintain it, after spending the money, is not much better than striping and mirroring everything (you can google SAME). This is true up to a point, and we can debate endlessly about RAID types and their effects on performance and how their buffering makes BAARF ([url http://www.baarf.com/]) pointless. But the real question is: at what point should you use separate devices for redo and data? In the real world, we often receive a standard hardware configuration, which works very well until it doesn't. A disk or controller going poof in a RAID-5 can make "doesn't" happen real quick.

    You should probably take away two thoughts:

    The docs are pretty general, and some old advice no longer applies, may have turned into myth, or is perhaps too general to be meaningful.

    There is always something going on in the db, and the more things are underway, the less you can generalize about serialization.

  • Controlfile and Redo Log on a disk group

    Hello!

    Controlfile and redo logs in a single disk group - is that good or bad?

    Do you have best practices or a Doc ID on Metalink?

    918027 wrote:
    You must have two control files on two separate disks and at least 2 members in each redo log group, each on a separate drive.

    http://docs.Oracle.com/CD/B10500_01/server.920/a96521/control.htm#4578

    No, you do not "must have".

    You do not even "must have" more than one control file.
    You do not "must have" more than 2 redo log groups, and those groups don't "must have" more than one file each.

    That is the "must have".

    Now, "should have" is something else.

    You "should" have a minimum of 2 control files, and they "should" (not "must") be on separate disks.
    You "should" have at least 2 member files in each redo log group, and they "should" (not "must") be on separate disks.

    And FWIW, I see no reason to necessarily keep control files completely separate from redo files. The key is to have the multiplexed copies of a particular type separated from their "twin".
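    For the control file side of that, multiplexing is just a matter of listing more than one copy (a sketch with hypothetical paths; the file has to be copied to the new location while the instance is down):

    ALTER SYSTEM SET control_files=
      '/disk1/oradata/ORCL/control01.ctl',
      '/disk2/oradata/ORCL/control02.ctl' SCOPE=SPFILE;
    -- shutdown, copy control01.ctl to the second location, then restart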

  • Question on creating redo logs after restore and recover

    Does RMAN back up redo log files? The blog below describes the steps to restore and recover an RMAN backup on a new host.
    http://blog.CSDN.NET/sweethouse1128/article/details/6948273


    RMAN seems to have backed up and restored the redo log files


    After the recovery, the blogger issues
    select * from v$logfile;
    and the result shows some log files. The rows returned are the restored redo log files, right? Does RMAN back up and restore redo log files?

    Does RMAN back up and restore redo log files?

    No, never. Not even when you do a 'consistent' backup (i.e. with the database in MOUNT mode rather than OPEN).

    "When you restore a full RMAN backup on a new server, RMAN restores the REDO LOG files into their respective directories"

    Not so, because RMAN does not back up online redo logs.

    "Isn't it better to manually create the redo log files after the restore?"

    When you perform incomplete recovery, you must issue an ALTER DATABASE OPEN RESETLOGS.
    The RESETLOGS creates the online redo log files if they are not present, OR resets them if they are present.
    In a complete recovery scenario, the online redo log files are already present on disk (i.e. they are not lost or destroyed, and they are not restored from a backup).
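    A minimal sketch of the incomplete-recovery case being described (the target time is invented for illustration):

    -- in RMAN, database mounted:
    RUN {
      SET UNTIL TIME "TO_DATE('2014-01-01 12:00:00','YYYY-MM-DD HH24:MI:SS')";
      RESTORE DATABASE;
      RECOVER DATABASE;
    }
    -- incomplete recovery, so the open must reset the logs,
    -- which (re)creates the online redo log files on disk:
    ALTER DATABASE OPEN RESETLOGS;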

    Hemant K Collette

  • Standby database is down and redo logs are not transmitted

    Hi all

    I have a question about the transmission of redo logs from the primary database to the standby database.
    What happens if the transmission of redo logs is stopped for a period of time (1 month)? If I want to rebuild the standby database, what steps must be taken?
    Will the redo logs be queued, and once the standby database is back up, will these logs be sent again? Or do I have to rebuild my standby database from scratch?

    Kind regards

    Hello;

    Since the standby will be down for the month, I would set the parameter 'log_archive_dest_state_n' on the primary to DEFER and leave it like that.

    I would delete the standby database, and if the folder structure was not the same I would correct it.

    When the month is up, I would use RMAN to duplicate a new standby:

    http://www.Visi.com/~mseberg/duprman2.html

    And then I would set 'log_archive_dest_state_n' back to ENABLE.
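    In SQL that amounts to something like this (a sketch; destination 2 is only an example of which 'n' applies):

    -- while the standby is down:
    ALTER SYSTEM SET log_archive_dest_state_2='DEFER';

    -- after the standby has been re-created with RMAN DUPLICATE:
    ALTER SYSTEM SET log_archive_dest_state_2='ENABLE';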

    Best regards

    mseberg

  • REDO LOG BUFFER DOUBT

    Hi EXPERTS,

    I'm on 11gR2, RHEL 5. My question is: when data changes in the database buffer cache, how immediately is it copied into the redo log buffer? I mean, which process does the copy? I read in the Oracle documentation that, for the wait event [log file switch (private strand flush incomplete)], LGWR waits for DBWR to complete flushing the IMU buffers into the redo log buffer. Once DBWR completes, LGWR can then finish writing the log in progress and then switch the log files.
  • Relationship between redo log buffer, redo log files and undo tablespace

    What is the relationship between the redo log buffer, redo log files and undo tablespace?

    What I understand is:

    the redo log buffer is the memory area where all the redo information is stored until it is flushed by LGWR to the online redo log files... but is there any relationship between the undo tablespace and these two?

    Please correct me if I'm wrong

    Thanks in advance

    the redo log buffer is the memory area where all the redo information is stored until it is flushed by LGWR to the online redo log files... but is there any relationship between the undo tablespace and these two?

    There is a link between the redo log files and the redo log buffer, but the undo tablespace is something else entirely.

    Go through these links:

    REDO LOG FILES
    http://www.DBA-Oracle.com/concepts/redo_log_files.htm

    BUFFER REDOLOG
    http://www.DBA-Oracle.com/concepts/redo_log_buffer_concepts.htm

    UNDO TABLESPACE
    The undo tablespace holds the undo data used to undo, or roll back, uncommitted changes in the database.

    Hope this helps you understand.

  • Redo log groups and members

    Hi, I got very confused looking at the 11g documentation on redo log groups and members.

    Please could someone help me with that?

    - What is the difference between a redo log group and the members within the group?

    - How does adding a member help protect / multiplex a group?

    - I am sure I've read that if all members of the current redo log group are damaged, then the 2 other groups cannot bring the database back to how it was... so what's the point of having them?

    Any help would be appreciated

    806595 wrote:
    OK thanks, so as an example, if we had only one log group and the database was NOT in archivelog mode, once that redo log filled up, it would be overwritten and previous changes would be lost?

    Before you start, remember that whenever you need redo log groups, the mandatory minimum requirement is 2:1, meaning 2 log groups with 1 member each. Now, a log group is a logical thing; it doesn't actually exist on disk. It is a way to club/join/merge/combine/group multiple physical redo log files that are written together. A minimum of one physical member is a must in a group, hence 2:1.

    Now, log groups (and within them, the actual log files) are written sequentially by LGWR, which means that LGWR writes to one group and all its members at the same time, fills it completely, and then makes a switch (called a LOG SWITCH) to the next inactive log group and keeps processing. This is why a minimum of two log groups is needed for LGWR to work. Work is done on a single group, and when the switch happens, the work of the previous, filled group is checkpointed to the data files and the group is marked inactive afterwards, which makes it eligible again for LGWR to write to.
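    For illustration, adding a group with two members (on two different disks) looks like this (paths and size are hypothetical):

    ALTER DATABASE ADD LOGFILE GROUP 3
      ('/disk1/oradata/ORCL/redo03a.log',
       '/disk2/oradata/ORCL/redo03b.log') SIZE 100M;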

    HTH
    Aman...

  • [Count] how many redo log groups and members?

    Hello gurus,

    Well, the theory written in books may differ from the actual situation, with different configurations at different companies...

    How do we determine how many redo log groups there should be? And how many members per group is best?

    What are the considerations?

    Kind regards

    NIA...

    What are the considerations?

    How frequent are the log switches under a 'normal' workload (time in minutes)?
    Can the archiver complete its work before this log file is required to be used again?
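    A quick way to look at the first consideration is to count log switches per hour from v$log_history (a sketch, not from the original reply):

    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS switches
      FROM v$log_history
     GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
     ORDER BY 1;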

  • Redo logs and a crash of the disk where they are stored

    What happens when the disk where the redo logs are written goes out of service (e.g., a disk crash)? I read somewhere that the database shuts down. Is this right? Can you tell Oracle to copy the redo logs to 2 locations in order to improve the safety of the system?

    Thanks in advance,
    Alexandre Bailly

    user8716187 wrote:
    What happens when the disk where the redo logs are written goes out of service (e.g., a disk crash)? I read somewhere that the database shuts down. Is this right? Can you tell Oracle to copy the redo logs to 2 locations in order to improve the safety of the system?

    Thanks in advance,
    Alexandre Bailly

    Online redo logs can and should be multiplexed, with copies on separate physical devices. The details are in the ALTER DATABASE command, found in the SQL Reference Guide. Additional information can be found by going to tahiti.oracle.com, drilling down to your (unnamed) product and version, then using the "search" feature to look for "redo". A lot of the information is in the Administrator's Guide.

    In addition, by default Oracle will name the redo logs "redo_*.log". ".log" is an open invitation to sysadmins to open them with a text editor or delete them; after all, "it's just a log file". I name mine with the older "redo_*.rdo" convention just to reduce this kind of human error.
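    Concretely, multiplexing means adding a second member on another physical device to each group, roughly like this (a sketch with hypothetical paths, using the .rdo naming mentioned above):

    ALTER DATABASE ADD LOGFILE MEMBER '/disk2/oradata/ORCL/redo01b.rdo' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER '/disk2/oradata/ORCL/redo02b.rdo' TO GROUP 2;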

  • Two questions relating to archived redo logs and RMAN backup

    DB version: 11g

    I am new to RMAN.

    My database is in ARCHIVELOG mode. I intend to make a weekly backup of my db (at 02:00 every Monday). There will not be any incremental backups between these backup windows (Monday to Monday); I would rely on the archived redo logs for recovery.


    Question 1:
    I want to back up the archived logs every day (for example at 23:00). How can I configure that?

    These are the configuration settings that I intend to implement. I don't know how to set up the archive log backup:
    configure default device type to disk;
    configure retention policy to redundancy;
    configure device type disk parallelism 1;
    configure channel 1 device type disk clear;
    configure channel 2 device type disk clear;
    configure channel 1 device type disk format '/u05/rman1/datafiles/rmnabackup1_%U';
    configure channel 2 device type disk format '/u05/rman2/datafiles/rmnabackup2_%U';
    configure controlfile autobackup on;
    configure controlfile autobackup format for device type disk to '/u05/rman1/control_files/rmnabackup1_%U';
    Question 2:
    After a new full backup is taken at 02:00 on Monday, the archived redo logs accumulated over the previous 7 days become unnecessary. How can I automate the removal of archived redo logs with RMAN?

    The command 'backup archivelog all delete input' will back up the archived logs from the log archive destination and then delete them from that destination.

    If the log archive destination has archived logs with sequences 1 to 100, it will back them up and delete them from the destination (Monday 23:00).

    If the log archive destination has archived logs with sequences 101 to 150, it will back them up and remove those from the destination (Tuesday 23:00).

    If the log archive destination has archived logs with sequences 151 to 180, it will back them up and delete them from the destination (Wednesday 10:00).

    It will continue like that.
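    So the daily job (scheduled at 23:00 via cron, DBMS_SCHEDULER or OEM) only needs to run something like this in RMAN (a sketch; keep or drop DELETE INPUT depending on whether you want RMAN to clean the destination itself):

    BACKUP ARCHIVELOG ALL DELETE INPUT;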

    Regards
    Asif Kabir

    - If this helps, please mark the response as correct/helpful.

  • What do redo log files hold?

    Hello Experts,

    I have read articles on redo log files and undo segments. I was wondering something very simple: what do redo log files actually hold? Do they store the SQL statements?

    Let's say my update statement modifies 800 blocks of data. A single update statement can modify 800 different data blocks, right? Yes, that may be true. I think those data blocks cannot be held in the redo log buffer, right? I mean, I know what the redo log buffer and redo log file do, and I know the task of the LGWR background process. But I wonder: does it hold the data blocks? It is not supposed to hold data blocks the way the buffer cache does, right?

    My second question is: a rollback doesn't affect the redo log buffer, right? Because it does not need the redo log buffer to take effect. Conversely, the rollback statement itself is recorded in the redo log buffer when someone issues it, am I right?

    As far as I know, rollback interacts directly with the UNDO TABLESPACE?

    I hope I have expressed myself clearly.

    Thanks in advance.

    Here's my question:

    My second question is: a rollback doesn't affect the redo log buffer, right? Because it does not need the redo log buffer to take effect. Conversely, the rollback statement itself is recorded in the redo log buffer when someone issues it, am I right?

    As far as I know, rollback interacts directly with the UNDO TABLESPACE?

    Yes, where else would the undo data come from? The undo tablespace contains the undo segments that hold the undo data required for the rollback of your transaction.

    I can say that rollback does not alter the data already in the redo log buffer. In other words, the change vectors remain the same as before the rollback. Conversely, the rollback command is itself also recorded in the redo log. As the name suggests, all commands are saved in the REDO LOGS.

    I hope I am not wrong so far?

    Not sure why you even bring the redo log buffer into rollback? That is why I asked what it is for: where does the undo actually happen? And the answer is that it happens in the buffer cache. Before you worry about the change vectors, you must understand that it does not matter what is contained where, as long as the transaction is no longer recorded in the transaction table of the undo segment. If the transaction table indicates that the transaction is no longer there, there must have been a rollback of the transaction. Change vectors are saved in the redo log file, while the rollback happens on the data blocks stored in the data files and the undo blocks stored in the undo "data" files.

    Meanwhile, I read an article about redo and undo in which the transaction process is explained. Here is the link: http://pavandba.files.wordpress.com/2009/11/undo_redo1.pdf

    I found some interesting information in this article, as follows:

    It is worth noting that during the rollback process, redo logs never participate. The only times redo logs are read are during recovery and archiving. This is a key tuning concept: redo logs are written to; Oracle does not read them during normal processing. As long as you have sufficient devices so that when ARC is reading a file, LGWR is writing to a different device, there is no contention for redo logs.

    If redo logs are never involved in the rollback process, how will Oracle then know the order of the transactions? As far as I know that is only written in the redo logs.

    I am very keen to hear Aman's thoughts on this.

    Why do you ask?

    Now, before giving a response, let me say two things. One, I know Pavan and he is a regular contributor to this forum and to several other forums and Facebook. Two, with all due respect to him, a little advice for you: when you try to understand a concept, stick to the Oracle documentation and do not read and merge articles/blog posts from all over the web. Everyone who publishes on the web has their own way of expressing things, and many times the context of the writing makes things more confusing. Maybe we can clear up the doubts you may get after reading the various search results on the web.

    Redo logs are used for recovery, not for rollback. The reason is that redo log files are applied in sequential order, and that is not what we want for a rollback. A rollback only needs to be done for a few blocks. Basically, what happens in a rollback is that the undo records required for a data block are looked up in the reverse order of their creation. The transaction's entry is in the ITL slot of the data block, which points to the required Undo Byte Address (UBA), from which Oracle also knows which undo blocks are needed for the rollback of your transaction. As soon as the data blocks are rolled back, the ITL slots are cleared as well.

    In addition, you must remember that until the transaction is marked as finished, by either a commit or a rollback, its undo data remains intact. The reason for this is that Oracle must ensure the undo data stays available to roll the transaction back. The reason why undo data is also recorded in the redo logs is to ensure that, in the event of the loss of the undo data file, recovering it is possible. Because that recovery would also require the changes that happened to the undo blocks, the change vectors associated with the undo blocks are saved in the redo log buffer as well and, in turn, in the redo log files.

    HTH

    Aman...
