What exactly is contained in redo logs?
Hello experts,
Can someone tell me what exactly is contained in the redo logs? Is it mainly the SQL statements that were executed?
Thank you and best regards,
Tong Ning
Redo records describe changes for many kinds of operations (identified by an opcode), for example:
Operation code: KDICBPU = 16 Upd Fctn: FADDR (kdxbpu) Prt Fctn: FADDR (kdvbpu)
Descript: row block of branch purge
Operation code: KDICBNE = 17 Upd Fctn: FADDR (kdxbne) Prt Fctn: FADDR (kdvbne)
Descript: initialize the new bundle branch block
Operation code: KDICLUP = 18 Upd Fctn: FADDR (kdxlup) Prt Fctn: FADDR (kdvlup)
Descript: keydata update online
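For reference, redo records like these can be inspected by dumping a log file into a trace file. A sketch (the file name and SCN range below are placeholders, not values from this thread):

```sql
-- Dump an online redo log file into the session's trace file.
-- Pick a real member name from V$LOGFILE first.
ALTER SYSTEM DUMP LOGFILE '/oradata/testdb/redo01.log';

-- The dump can also be limited to a range of SCNs (example values):
ALTER SYSTEM DUMP LOGFILE '/oradata/testdb/redo01.log'
  SCN MIN 1000000 SCN MAX 1000500;
```

The trace file then shows the individual change vectors with their OP codes, as in the listing above.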
Similar Questions
-
Relationship between the redo log buffer, redo log files and the undo tablespace
What is the relationship between the redo log buffer, redo log files and the undo tablespace?
What I understand is that the redo log buffer is the memory area where all redo entries are stored until they are written by LGWR to the online redo log files... but does the undo tablespace have any relationship with these two?
Please correct me if I'm wrong.
Thanks in advance.
There is a link between the redo log files and the redo log buffer, but the undo tablespace is something else entirely.
Have a look at these links:
REDO LOG FILES
http://www.DBA-Oracle.com/concepts/redo_log_files.htm
REDO LOG BUFFER
http://www.DBA-Oracle.com/concepts/redo_log_buffer_concepts.htm
UNDO TABLESPACE
The undo tablespace holds the undo data needed to undo or roll back uncommitted changes in the database. Hope that helps.
-
What is the role of the archived redo log files in an incremental backup?
I want to know how RMAN determines which data blocks have changed since the last incremental backup,
and what is the role of the archived redo log files in an incremental backup?
Please guide.
Hello,
Answer to the first question ->
Each data block in a data file contains a system change number (SCN), which is the SCN of the last change to the block. During an incremental backup, RMAN reads the SCN of each data block in the input file and compares it to the checkpoint SCN of the parent incremental backup. If the SCN of the input data block is greater than or equal to the checkpoint SCN of the parent, then RMAN copies the block. 2 - The archived redo log files are required to recover the restored RMAN incremental backup to a consistent state.
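A minimal sketch of the commands this answer describes (assuming an RMAN session connected to the target database):

```sql
-- Level 0: the parent/baseline backup whose checkpoint SCN later levels compare against.
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Level 1: copies only blocks whose SCN is >= the parent's checkpoint SCN.
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Archived redo logs are still needed to bring the restored files to a consistent state.
RMAN> BACKUP ARCHIVELOG ALL;
```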
Thank you & best regards
Rahul Sharma -
What is the purpose of standby redo log files?
Hello
What is the purpose of the standby redo log files in DR?
What happens if the standby redo log files are created? And what if they are not created?
Please explain.
Thank you
Re: what is the difference between onlinelog and standbylog
I covered the purpose of the standby redo log files in DR in the above thread.
Regards
Girish Sharma -
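For context, standby redo logs are created with a statement like the following (the file name and size are examples; SRLs should be the same size as the online redo logs, usually with one more group per thread than the online groups):

```sql
-- Run on the standby (or on the primary before a switchover).
ALTER DATABASE ADD STANDBY LOGFILE
  '/oraredo/testdb/stby01.log' SIZE 4G;
```

Without standby redo logs the standby cannot receive redo in real time, and falls back to applying complete archived logs only.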
For QMI (quick multi-row insert), I can run the following SQL to generate QMI redo:
Insert into t1 select * from t2;
or use a FORALL statement to achieve the same goal.
But I do not know how to generate QMD redo; can someone help me?
Hello
When I roll back after an insert ... select, I get the following type of redo record:
REDO RECORD - Thread:1 RBA: 0x000e78.00005fb5.00cc LEN: 0x00e8 VLD: 0x01 CON_UID: 0
SCN: 0x0000.0b8cc83a SUBSCN: 1 08/12/2015 23:07:08
CHANGE #1 CON_ID:0 TYP:0 CLS:1 AFN:6 DBA:0x018264c2 OBJ:116277 SCN:0x0000.0b8cc837 SEQ:4 OP:11.12 ENC:0 RBL:0 FLG:0x0000
KTB Redo
op: 0x02 ver: 0x01
compat bit: 4 (post-11) padding: 1
op: C uba: 0x01001a86.0d68.0c
KDO Op code: QMD row dependencies Disabled
xtype: XR flags: 0x00000000 bdba: 0x018264c2 hdba: 0x01825972
itli: 1 ispac: 0 maxfr: 4858
tabn: 0 lock: 0 nrow: 16
slot [0]: 54
slot [1]: 55
slot [2]: 56
slot [3]: 57
slot [4]: 58
slot [5]: 59
slot [6]: 60
slot [7]: 61
slot [8]: 62
slot [9]: 63
slot [10]: 64
slot [11]: 65
slot [12]: 66
slot [13]: 67
slot [14]: 68
slot [15]: 69
CHANGE #2 CON_ID:0 TYP:0 CLS:24 AFN:4 DBA:0x01001a86 OBJ:4294967295 SCN:0x0000.0b8cc83a SEQ:13 OP:5.6 ENC:0 RBL:0 FLG:0x0000
ktubu redo: slt: 0 rci: 12 opc: 11.1 objn: 116277 objd: 116277 tsn: 4
Undo type: Regular undo  Undo type: User undo done  Last buffer split: No
Tablespace Undo: No
0x00000000
ktuxvoff: 0x17cc ktuxvflg: 0x0002
Kind regards
Franck.
PS: this is for the replication product, I presume?
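Building on Franck's dump: the OP:11.12 (QMD, quick multi-row delete) record above came from rolling back a bulk insert, so one hedged way to provoke QMD redo is simply (table names as in the question):

```sql
-- QMI (OP 11.11): bulk insert into t1.
INSERT INTO t1 SELECT * FROM t2;

-- Rolling it back removes the freshly inserted rows in bulk,
-- which can appear as QMD (OP 11.12) change vectors in a redo dump.
ROLLBACK;
```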
-
11gR2 on RHEL 6.2
4 GB is our online redo log size.
SQL > select bytes/1024/1024/1024 GB from v$log;
GB
----------
4
4
4
4
But the archived logs are 3.55 GB in size instead of 4 GB. Some of the archived logs below that are smaller than 3.55 GB must be caused by the
RMAN archive log backup jobs at 10:00, 16:00 and 22:00, which initiate log switches (I guess).
SQL > select (blocks * block_size)/1024/1024/1024 GB from v$archived_log where status = 'A';
GB
----------
3.55978966
3.31046581
3.55826092
3.55963707
1.39474106
3.561553
3.55736685
3.55881786
.135155678
3.55546999
.054887295
1.88027525
.078295708
1.97425985
3.55703735
3.55765438
.421986103
3.55839968
< snipped >
Does it have something to do with the parameter FAST_START_MTTR_TARGET? It is set to zero in this DB anyway.
SQL > show parameter mtt
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fast_start_mttr_target               integer     0
SQL >
It could be due to public and private redo threads (strands), as Jonathan Lewis points out in the other discussion that Mihael provided a link to.
It could relate to the parameter ARCHIVE_LAG_TARGET.
It could be the result of RMAN BACKUP commands.
It may be a scheduled script / cron job that issues ALTER SYSTEM ARCHIVE LOG commands.
Oracle can create an archivelog that is smaller than the redo log if a switch or archive command occurs before the redo log is full.
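Two quick checks for the causes Hemant lists, as a sketch:

```sql
-- 0 means no forced time-based log switches are configured.
SQL> show parameter archive_lag_target

-- Correlate the small archived logs with their completion times
-- (e.g. the 10:00/16:00/22:00 RMAN jobs that force a switch).
SELECT sequence#,
       blocks * block_size / 1024 / 1024 / 1024 AS gb,
       completion_time
FROM   v$archived_log
WHERE  status = 'A'
ORDER  BY sequence#;
```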
Hemant K Collette
-
What do redo log files hold?
Hello Experts,
I have read articles on redo log files and undo segments. I was wondering something very simple: what do redo log files actually hold? Do they store the SQL statements?
Let's say my update statement modifies 800 data blocks. A single update statement can modify 800 different data blocks, right? Yes, that may be true. I think those data blocks cannot be held in the redo log buffer, right? I mean, I know what the redo log buffer and redo log file are for, and I know the task of the LGWR background process. But I wonder: does the buffer hold the data blocks themselves? It is not supposed to hold data blocks the way the buffer cache does, right?
My second question is: doesn't a rollback affect the redo log buffer? Because a rollback does not need the redo log buffer to take effect. Conversely, the rollback statement itself is recorded in the redo log buffer when someone issues it, am I right?
As far as I know, rollback interacts directly with the UNDO TABLESPACE?
I hope that I have expressed myself clearly.
Thanks in advance.
Here's my question:
My second question is: doesn't a rollback affect the redo log buffer? Because a rollback does not need the redo log buffer to take effect. Conversely, the rollback statement itself is recorded in the redo log buffer when someone issues it, am I right?
As far as I know, rollback interacts directly with the UNDO TABLESPACE?
Yes, where else would the undo data come from? The undo tablespace contains the undo segments that hold the undo data required for rolling back your transaction.
I can say that a rollback does not alter the data already in the redo log buffer. In other words, the change vectors remain the same after the rollback. Conversely, the rollback command itself is also recorded in the redo log. As the name implies, all changes are recorded in the REDO LOGS.
I hope I am not wrong so far?
Not sure why you even need the redo log buffer for a rollback? This is why I asked what it was for: where does the undo actually happen? And the answer is that it happens in the buffer cache. Before you worry about the change vectors, you must understand that it does not matter what is stored where, as long as there is no transaction recorded in the transaction table of the undo segment. If the transaction table indicates that the transaction is no longer there, there must have been a rollback of the transaction. Change vectors are saved in the redo log files, while the rollback happens on the data blocks stored in the data files, using the undo blocks stored in the undo tablespace's data files.
Some time ago I read an article about redo and undo in which the transaction process is explained. Here is the link http://pavandba.files.wordpress.com/2009/11/undo_redo1.pdf
I found some interesting information in this article, as follows:
It is worth noting that during the rollback process, the redo logs never participate. The only time redo logs are read is during recovery and archiving. This is a key tuning concept: redo logs are written to; Oracle does not read them during normal processing. As long as you have sufficient devices so that when ARCn is reading a file, LGWR is writing to a different device, there is no contention for the redo logs.
If redo logs are never involved in the rollback process, how does Oracle know the order of the transactions? As far as I know, that is only written in the redo logs.
I would very much like Aman's thoughts on this.
Why do you ask?
Now, before giving a response, let me say two things. One, I know Pavan and he is a regular contributor to this forum, several other forums and Facebook, and two, with all due respect to him, a little advice for you: when you try to understand a concept, stick to the Oracle documentation and do not read and merge articles/blog posts from all over the web. Everyone who publishes on the web has their own way of expressing things, and many times the context of the writing makes things more confusing. Maybe we can clear up the doubts you got after reading the various search results on the web.
Redo logs are used for recovery, not for rollback. The reason is that the redo log files are applied in sequential order, and that is not what we want for a rollback. A rollback only needs to be done for a few blocks. Basically, what happens in a rollback is that the undo records required for a data block are applied in the reverse order of their creation. The entry for the transaction is in the ITL slot of the data block, which points to the necessary Undo Byte Address (UBA), through which Oracle also knows which undo blocks are needed for rolling back your transaction. As soon as the data blocks are rolled back, the ITL slots are cleared as well.
In addition, you must remember that until the transaction is finished, with either a commit or a rollback, its undo data remains intact. The reason for this is that Oracle must ensure the undo data is available to roll the transaction back. The reason undo data is also recorded in the redo logs is to ensure that, in the event of the loss of the undo data file, recovering it is still possible. Because that recovery would also require the changes that happened to the undo blocks, the change vectors for the undo blocks are saved in the redo log buffer and then in the redo log files as well.
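The behaviour described above can be observed with a small experiment (table and column names are examples only):

```sql
-- An uncommitted change creates undo; the transaction, its undo segment
-- and the undo blocks/records it uses are visible in V$TRANSACTION.
UPDATE emp SET sal = sal * 2 WHERE deptno = 10;

SELECT xidusn, used_ublk, used_urec FROM v$transaction;

-- ROLLBACK applies those undo records to the data blocks in the buffer cache;
-- the redo logs are not read, and the entry disappears from V$TRANSACTION.
ROLLBACK;
```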
HTH
Aman...
-
Hi masters,
This seems to be very basic, but I would like to know the internal process.
We all know that LGWR writes redo entries to the online redo logs on disk. On commit, an SCN is generated and tagged to the transaction, and LGWR writes this to the online redo log files.
But my question is: how do these redo entries get into the redo log buffer in the first place? Look, all the necessary data is read into the buffer cache by the server process, where it is modified and committed. DBWR writes this to the data files, but at what time, and by which process, is this committed transaction (the redo entry, I mean) written into the redo log buffer?
Does LGWR do that? What exactly happens internally?
If you can please shed some light on the internals, I will be grateful...
Thanks and greetings
VD
Vikrant,
I will write briefly because I am using a PDA. In general, this is what happens:
1. A calculation is made of how much space is required in the log buffer.
2. The server process acquires a redo copy latch to indicate that some redo will be copied. A redo allocation latch is used to allocate space.
3. The redo allocation latch is released after the space is allocated.
4. The redo copy latch is held while the redo content is copied into the log buffer.
5. The redo copy latch is released.
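Activity on the latches mentioned in the steps above can be seen in V$LATCH, for example:

```sql
SELECT name, gets, misses, immediate_gets
FROM   v$latch
WHERE  name IN ('redo copy', 'redo allocation', 'redo writing');
```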
HTH
Aman -
Redo Log and Supplemental Logging related doubts
Hi friends,
I am studying supplemental logging in detail, reading a lot of articles and the Oracle documentation on this topic and on redo logs. But I could not find answers to some doubts...
Please help me clear them.
Scenario: we have a table with a primary key, and we execute an update query on the table that does not use the primary key column in the WHERE clause...
Question: in this case, will the redo records generated for the changes made by the update statement contain the primary key column values...?
Question: if we have a table with a primary key, do we need to enable supplemental logging on the primary key column of this table? If so, under what circumstances do we need to?
Question: if we set up Streams replication on this table (which has the primary key), why do we really need to enable supplemental logging on it? (I read documentation saying that Streams requires some extra information, but what information exactly? Again, this question is closely related to the first question.)
Please also suggest any good article/site that provides inside details of redo logs and supplemental logging, if you know one.
Kind regards
Lifexisxnotxsoxbeautiful
(1) Assuming that you do not update the primary key column and supplemental logging is not enabled, Oracle does not have to log the primary key column in the redo log, just the ROWID.
(2) is rather difficult to answer without being tautological. You need to enable supplemental logging if and only if you have some downstream use for the additional columns in the redo logs. Streams, and the technologies built on top of it, are the most common reason to enable supplemental logging.
(3) If you run an update such as
UPDATE some_table SET some_column = new_value WHERE primary_key = some_key_value AND <
> and look at the update statement that LogMiner reconstructs from the redo logs in the absence of supplemental logging, it would basically be something like
UPDATE some_table SET some_column = new_value WHERE rowid = rowid_of_the_row_you_updated
Oracle has no need to replay the exact SQL statement you issued (and so it does not have to write the SQL statement into the redo log, it does not have to worry about whether the UPDATE takes a long time to run (otherwise it would take as much time to apply an archived log as it did to generate it, which would be disastrous in a recovery situation), etc.). It just needs to rebuild an equivalent SQL statement from the information contained in the redo, which is just the ROWID and the columns that changed.
If you try to execute that statement on a different database (via Streams, for example), the ROWID may be totally different on the destination database (since a ROWID is just the physical address of a row on disk). So adding supplemental logging tells Oracle to also log the primary key columns in the redo, and allows LogMiner / Streams / etc. to rebuild the statement using the primary key values for the changed rows, which are the same on the source and destination databases.
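For reference, primary key supplemental logging is enabled with statements like these (the table name is an example):

```sql
-- Database-wide: also log the primary key columns of any updated row.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- Or per table:
ALTER TABLE some_table ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```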
Justin
-
Here's my question, after tons of research and testing without finding the right solution.
Target:
(1) I have a 12.1.0.2 Enterprise Edition single-instance primary database 'testdb' running on server "node1".
(2) I created a physical standby database "stbydb" on server "node2".
(3) Data Guard runs in MaxAvailability (SYNC) mode with the 12c default real-time redo apply.
(4) The primary database has 3 single-member redo log groups. (/oraredo/testdb/redo01.log redo02.log redo03.log)
(5) I've created 4 standby redo logfiles (/oraredo/testdb/stby01.log stby02.log stby03.log stby04.log)
(6) I take RMAN backups (database and archivelog) on the standby site only.
(7) I want to use this backup for a full restore of the database on the primary.
It is a DR test to simulate the scenario of losing both the primary & standby servers entirely.
Here is how I back up, on the standby database:
(1) run 'alter database recover managed standby database cancel' to ensure consistent data files
(2) RMAN > backup database;
(3) RMAN > backup archivelog all;
I got the backup pieces and copied them to the primary db server, something like:
/home/oracle/backupset/o1_mf_nnndf_TAG20151002T133329_c0xq099p_.bkp (data files)
/home/oracle/backupset/o1_mf_ncsnf_TAG20151002T133329_c0xq0sgz_.bkp (spfile & controlfile)
/home/oracle/backupset/o1_mf_annnn_TAG20151002T133357_c0xq15xf_.bkp (archivelogs)
So here's how I restore, on the primary site:
I cleaned out all the files (data files, control files all gone).
(1) restore the spfile to a pfile
RMAN > startup nomount
RMAN > restore spfile to pfile '/home/oracle/pfile.txt' from '/home/oracle/backupset/o1_mf_ncsnf_TAG20151002T133329_c0xq0sgz_.bkp';
(2) modify the pfile to convert the content for the primary db. The pfile is shown below:
*.audit_file_dest='/opt/Oracle/DB/admin/testdb/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='/oradata/testdb/control01.ctl','/orafra/testdb/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_file_name_convert='/testdb/','/testdb/'
*.db_name='testdb'
*.db_recovery_file_dest='/orafra'
*.db_recovery_file_dest_size=10737418240
*.db_unique_name='testdb'
*.diagnostic_dest='/opt/Oracle/DB'
*.fal_server='stbydb'
*.log_archive_config='dg_config=(testdb,stbydb)'
*.log_archive_dest_2='service=stbydb SYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=stbydb'
*.log_archive_dest_state_2='ENABLE'
*.log_file_name_convert='/testdb/','/testdb/'
*.memory_target=1800m
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='AUTO'
*.undo_tablespace='UNDOTBS1'
(3) restart the db with the updated pfile
SQLPLUS > create spfile from pfile='/home/oracle/pfile.txt'
SQLPLUS > shutdown
SQLPLUS > startup nomount
(4) restore the controlfile
RMAN > restore primary controlfile from '/home/oracle/backupset/o1_mf_ncsnf_TAG20151002T133329_c0xq0sgz_.bkp';
RMAN > alter database mount;
(5) catalog all backup pieces
RMAN > catalog start with '/home/oracle/backupset/';
(6) restore and recover the database
RMAN > restore database;
RMAN > recover database until scn XXXXXX; (this SCN is the maximum in the archivelog backups, which extends beyond the SCN of the data file backup)
(7) open resetlogs
RMAN > alter database open resetlogs;
Everything seems perfect, except that one of the standby redo log files is not created:
SQL > select * from v$standby_log;
ERROR:
ORA-00308: cannot open archived log ' / oraredo/testdb/stby01.log'
ORA-27037: unable to get file status
Linux-x86_64 error: 2: no such file or directory
Additional information: 3
no rows selected
I intend to use the same backup to restore both the primary & the standby, to save on transfer traffic and the downtime between them in a real-world outage.
So I did exactly the same steps (except RESTORE STANDBY CONTROLFILE and no recover after the database restore) to restore the standby database.
And I got the same missing log file.
The problem is:
(1) the alert.log fills up with this error; that is not the concern here
(2) real-time redo apply now won't work, since the standby always shows "WAITING_FOR_LOG"
(3) I can't drop and re-create this log file
Then I tried several things and found:
The missing standby logfile was still 'ACTIVE' at the time the RMAN backup was taken.
For example, on the standby db, group #4 below (stby01.log) would be lost after the restore.
SQL > select GROUP#, SEQUENCE#, USED, STATUS from v$standby_log;
GROUP#  SEQUENCE#       USED STATUS
------- ---------- --------- ----------
      4         19    133632 ACTIVE
      5          0         0 UNASSIGNED
      6          0         0 UNASSIGNED
      7          0         0 UNASSIGNED
So before taking the backup, I tried this on the primary database:
SQL > alter system set log_archive_dest_state_2 = defer;
After this, standby_log group #4 on the standby side was released:
SQL > select GROUP#, SEQUENCE#, USED, STATUS from v$standby_log;
GROUP#  SEQUENCE#       USED STATUS
------- ---------- --------- ----------
      4          0         0 UNASSIGNED
      5          0         0 UNASSIGNED
      6          0         0 UNASSIGNED
      7          0         0 UNASSIGNED
Then the backup restored correctly, with no missing standby logfile.
However, changing this on the primary database means breaking Data Guard protection while performing the backup. That is not acceptable in a production environment.
Finally, my real questions:
(1) what am I doing wrong, or what should I not do, regarding the parameter change?
(2) I know I can re-create the controlfile, or drop and re-create the standby redo logs afterwards. Is there any simple/fast way to avoid losing the standby logfile, or to recreate the lost one?
I understand that there are a number of ways to work around this - keeping a copy of the standby redo log file before the restore and copying the missing one back, etc., etc...
And yes, I could always fall back to non-real-time apply 'using archived logfile', but that is also not an acceptable protection mode for production.
I just want proof that the design (which is shown in a few Oracle docs; Doc ID 602299.1 is one of them) of using the standby's backup to restore both sites actually works, without spending more time taking backups or putting load on the primary database to create the standby.
Your ideas are very much appreciated.
Thank you!
Hello
1st --> When I take a backup via RMAN, RMAN does not back up the redo log (ORL or SRL) files, so we cannot expect the ORLs or SRLs to be restored.
2nd --> When we open the database, the ORLs will be dropped and re-created.
3rd --> So a missing SRL should not be an issue; we should be able to drop and re-create it.
DR sys@cdb01 SQL > select THREAD#, SEQUENCE#, GROUP#, STATUS from v$standby_log;
THREAD#  SEQUENCE#  GROUP# STATUS
------- ---------- ------- ----------
      1        233       4 ACTIVE
      1        238       5 ACTIVE
DR sys@cdb01 SQL > select * from v$logfile;
GROUP# STATUS TYPE    MEMBER                         IS_ CON_ID
------ ------ ------- ------------------------------ --- ------
     3        ONLINE  /u03/cdb01/cdb01/redo03.log    NO       0
     2        ONLINE  /u03/cdb01/cdb01/redo02.log    NO       0
     1        ONLINE  /u03/cdb01/cdb01/redo01.log    NO       0
     4        STANDBY /u03/cdb01/cdb01/stdredo01.log NO       0
     5        STANDBY /u03/cdb01/cdb01/stdredo02.log NO       0
DR sys@cdb01 SQL > ! ls -ltr /u03/cdb01/cdb01/stdredo01.log
ls: cannot access /u03/cdb01/cdb01/stdredo01.log: No such file or directory
DR sys@cdb01 SQL > ! ls -ltr /u03/cdb01/cdb01/stdredo02.log
-rw-r-----. 1 oracle oinstall 52429312 Oct 17 15:32 /u03/cdb01/cdb01/stdredo02.log
DR sys@cdb01 SQL > alter database clear logfile group 4;
alter database clear logfile group 4
*
ERROR at line 1:
ORA-01156: recovery or flashback in progress may need access to files
DR sys@cdb01 SQL > alter database recover managed standby database cancel;
Database altered.
DR sys@cdb01 SQL > alter database clear logfile group 4;
Database altered.
DR sys@cdb01 SQL > ! ls -ltr /u03/cdb01/cdb01/stdredo01.log
-rw-r-----. 1 oracle oinstall 52429312 Oct 17 15:33 /u03/cdb01/cdb01/stdredo01.log
DR sys@cdb01 SQL >
If you want, you can also recreate the controlfile without the standby redo log entries...
If you still think something is not acceptable here, you should open an SR with support to analyze why the SRL is not dropped while the controlfile_type is 'CURRENT'.
Thank you
-
How to replace missing redo log group members after a media failure
Hello
I'm testing a disk failure scenario. I had redo logs multiplexed across three locations (Windows drive letters); one of the three locations also contained the data files. I deleted the Windows drive that contained the data files and did a restore with RMAN. All is well, except that the redo log members that were on the wiped Windows disk are of course still missing; the database opens, etc., but puts messages like this in the alert.log:
ORA-00321: log 2 of thread 1, cannot update log file header
ORA-00312: online log 2 thread 1: 'F:\APP\NGORA12CR2SRV\ORADATA\ORATEST5\ONLINELOG\O1_MF_2_BW4B5MY0_.LOG'
What is the 'right' way to replace the missing members? Just create all-new log groups with new members on all three disks? Carefully copy the LOG files representing the missing members from one of the other locations? Or add new members at this location and remove the existing (missing) members?
How important is it?
Thank you!
Martin
See 'Recovering After Losing a Member of a Multiplexed Online Redo Log Group'
at http://docs.oracle.com/cd/E11882_01/backup.112/e10642/osadvsce.htm#sthref2205
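In outline, the documented approach is to drop the lost member and add a fresh one in a surviving location while the group is inactive (paths and group number are placeholders, not Martin's actual file names):

```sql
-- The group must not be CURRENT/ACTIVE; switch logs first if needed.
ALTER SYSTEM SWITCH LOGFILE;

ALTER DATABASE DROP LOGFILE MEMBER 'F:\ORADATA\ORATEST5\REDO_G2_M1.LOG';
ALTER DATABASE ADD LOGFILE MEMBER 'G:\ORADATA\ORATEST5\REDO_G2_M1.LOG'
  TO GROUP 2;
```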
Hemant K Collette
-
OCR redundancy - can we have an OCR mirror together with redo logs in the REDO01 diskgroup?
Hello
I have a 2-node Oracle 11gR2 (11.2.0.3) RAC on AIX 6.1. Currently there is only one OCR and one voting disk (VD), residing in the OCRVD diskgroup, which has external redundancy.
## STATE   File Universal Id                File Name       Disk group
-- -----   -----------------                ---------       ----------
 1. ONLINE a27d6a5df2db4f31bf0c9c9d31870b1f (/dev/rhdisk11) [OCRVD]
Located 1 voting disk(s).
$ ocrcheck
Status of Oracle Cluster Registry is as follows:
Version: 3
Total space (kbytes): 262120
Used space (kbytes): 3152
Available space (kbytes): 258968
ID: 899322289
Device/File Name: +OCRVD
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
The OCRVD diskgroup is created using 5 disks of size 1 GB with RAID 1.
1. Is it possible to have the OCR and VD multiplexed in the same diskgroup?
2. If it is not possible, can I add the OCR and VD to other diskgroups (e.g. REDO01 and REDO02 - used for the redo logs)?
Published by: Azam on 20 May 2013 14:46
Azam says:
1. Is it possible to have the OCR and VD multiplexed in the same diskgroup?
No, this isn't possible.
2. If it is not possible, can I add the OCR and VD to other diskgroups (e.g. REDO01 and REDO02 - used for the redo logs)?
Yes, you can add other diskgroups. As root, run the following commands to add an OCR location on Oracle ASM:
ocrconfig -add +REDO01
ocrconfig -add +REDO02
As you know, mirroring at ASM level is between the ASM disks that the ASM diskgroup contains.
Regards
Mr. Mahir Quluzade -
ORA-16038, ORA-00354, ORA-00312 corrupt redo log block header
Hi all,
I'm Ann. I'm not an Oracle DBA, so please bear with me. However, I have to get up to speed as a DBA while the dedicated DBA is on maternity leave.
As usual, we have a live site and a test site.
She gave me some notes about how to take care of the live database and application (Oracle 10g).
But so far, I haven't had to run many commands on the live site, because we have plenty of space on that server and most of the tasks have been scripted, so it runs automatically.
However, the test database is not like that. There are no automatic restart scripts as on the live system.
Recently I cannot access the test database. So I connected to the test server and found the file system that holds the archive log files is nearly 98% full.
So I removed some of the old dbf files (I did this based on the advice the boss gave me the last time this happened, a long time ago).
After clearing some old files, df -h shows usage at 58%.
However, the database is still not available (it cannot open, I think).
I connected to the database and issued a shutdown immediate, but the server hung.
So I asked a network engineer to restart the database server.
Of course, when the machine stops, the database has to come down too.
After the machine restarted, I connected as sysdba but still cannot open the database.
The error is as below:
Database mounted.
ORA-16038: log 1 sequence# 1013 cannot be archived
ORA-00354: corrupt redo log block header
ORA-00312: online log 1 thread 1:
'/Data/oradata/barn/onlinelog/o1_mf_1_2658nhd4_.log'
ORA-00312: online log 1 thread 1:
'/arclogs/oradata/BARNTEST/onlinelog/o1_mf_1_2658nhd4_.log'
I searched and found this:
ORA-16038, ORA-00354, ORA-00312 corrupt redo log block header
(http://arjudba.blogspot.co.nz/2008/05/ora-16038ora-00354ora-00312-corrupt.html)
Error description:
------------------------
Normal users could not connect to the database. It reported ORA-00257: archiver error. Connect internal only, until freed. When trying to archive the redo log, it returned the messages:
ORA-16038: log %s sequence # %s cannot be archived
ORA-00354: corrupted redo log block header
ORA-00312: online log %s thread %s: '%s'
Explanation of the problem:
-------------------------------
Whenever a normal user tried to connect to the database, it returned the error as designed in
ORA-00257. But you noticed there is enough space in V$RECOVERY_FILE_DEST. Whenever you look at the alert log, you will see the ORA-16038, ORA-00354, ORA-00312 errors in series. The error is produced because the online redo log cannot be archived due to corruption in the online redo file.
Solution of the problem:
--------------------------------
Step 1) While your database is running, clear the unarchived redo log.
SQL > alter database clear unarchived logfile 'logfilename';
This makes the corruption disappear, at the cost of the contents of the online redo file being deleted.
Step 2) Take a full backup of the database.
My question here is: is it safe to apply the steps above?
We have 2 online logs that cause the error, so do I need to clear both files?
For step 2, how do I make a backup with RMAN?
Can you suggest a safe command line for me to use? (The EM console currently does not work on the test database server.)
I really need that test database up, as my APEX apps running on it need to be exported to the live system.
I'd really appreciate any help here; once again, it's an Oracle 10g Release 1.
Kind regards,
Published by: Ann586341 on April 30, 2012 15:40
Your problem is with redo log 1.
SQL > select * from v$log;
Check the status of group 1, whether it is ACTIVE or INACTIVE.
If it is INACTIVE, then run the command below:
SQL > ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;
then
SQL > alter database open;
Published by: Vishwanath on April 30, 2012 10:11
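On the RMAN question that was left unanswered: a minimal full backup after clearing the logs could look like this (a sketch; it assumes the database is mounted or open and a backup destination such as the flash recovery area is configured):

```sql
-- From the OS shell, as the oracle user:  rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
```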
-
Question about the size of the redo log buffer
Hello
I am a student in Oracle and the book I use says that having a bigger than the buffer log by default, the size is a bad idea.
It sets out the reasons for this are:
>
The problem is that when a statement COMMIT is issued, part of the batch validation means to write the contents of the buffer log for redo log to disk files. This entry occurs in real time, and if it is in progress, the session that issued the VALIDATION will be suspended.
>
I understand that if the redo log buffer is too large, memory is wasted, and in some cases this could result in disk I/O.
What I'm not clear on is this: the book makes it sound as if a large log buffer would cause additional work or I/O. I would have thought that the amount of work or I/O would be substantially the same (if not identical), because writing the log buffer to the redo log files is driven by the commits issued, and not by the size of the buffer itself (or how full it is).
Is the book's description misleading, or did I miss something about having a larger-than-necessary log buffer?
Thank you for your help,
John.
Edited by: 440bx - 11gR2 on August 1st, 2010 09:05 - edited for formatting of the citation

A commit flushes everything in the redo log buffer to the redo log files.
The redo log buffer contains redo records describing the modified data.
But it is not only a commit that flushes the redo log buffer to the redo log files.
LGWR writes every time that:
(1) a commit occurs
(2) the redo log buffer is one-third full
(3) 3 seconds have passed since the last write
So it is not guaranteed that a redo log file contains only committed data.
If no commit occurs within 3 seconds, the redo log file may well contain uncommitted data.

Best,
Wissem -
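To see whether the log buffer is in fact sized adequately, one common check (a sketch; the statistic names are standard in V$SYSSTAT) is to watch 'redo buffer allocation retries' — a value that keeps growing means sessions are waiting for LGWR to free space in the buffer:

```sql
-- Steadily increasing retries suggest the log buffer is too small;
-- a static, near-zero value suggests it is already large enough
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo buffer allocation retries', 'redo entries');

-- Current buffer size in bytes (SQL*Plus command)
SHOW PARAMETER log_buffer
```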
Hi all
Recently we migrated from 9.2.0.4 to 10.2.0.4, and the performance of the database is slower in the newer version. Checking the alert log, we found this:
Thread 1 cannot allocate new log, sequence 1779
Checkpoint not complete
  Current log# 6 seq# 1778 mem# 0: /oradata/lipi/redo6.log
  Current log# 6 seq# 1778 mem# 1: /oradata/lipi/redo06a.log
Wed Mar 10 15:19:27 2010
Thread 1 advanced to log sequence 1779 (LGWR switch)
  Current log# 1 seq# 1779 mem# 0: /oradata/lipi/redo01.log
  Current log# 1 seq# 1779 mem# 1: /oradata/lipi/redo01a.log
Wed Mar 10 15:20:45 2010
Thread 1 advanced to log sequence 1780 (LGWR switch)
  Current log# 2 seq# 1780 mem# 0: /oradata/lipi/redo02.log
  Current log# 2 seq# 1780 mem# 1: /oradata/lipi/redo02a.log
Wed Mar 10 15:21:44 2010
Thread 1 advanced to log sequence 1781 (LGWR switch)
  Current log# 3 seq# 1781 mem# 0: /oradata/lipi/redo03.log
  Current log# 3 seq# 1781 mem# 1: /oradata/lipi/redo03a.log
Wed Mar 10 15:23 2010
Thread 1 advanced to log sequence 1782 (LGWR switch)
  Current log# 4 seq# 1782 mem# 0: /oradata/lipi/redo04.log
  Current log# 4 seq# 1782 mem# 1: /oradata/lipi/redo04a.log
Wed Mar 10 15:24:48 2010
Thread 1 advanced to log sequence 1783 (LGWR switch)
  Current log# 5 seq# 1783 mem# 0: /oradata/lipi/redo5.log
  Current log# 5 seq# 1783 mem# 1: /oradata/lipi/redo05a.log
Wed Mar 10 15:25 2010
Thread 1 cannot allocate new log, sequence 1784
Checkpoint not complete
  Current log# 5 seq# 1783 mem# 0: /oradata/lipi/redo5.log
  Current log# 5 seq# 1783 mem# 1: /oradata/lipi/redo05a.log
Wed Mar 10 15:25:27 2010
Thread 1 advanced to log sequence 1784 (LGWR switch)
  Current log# 6 seq# 1784 mem# 0: /oradata/lipi/redo6.log
  Current log# 6 seq# 1784 mem# 1: /oradata/lipi/redo06a.log
Wed Mar 10 15:28:11 2010
Thread 1 advanced to log sequence 1785 (LGWR switch)
  Current log# 1 seq# 1785 mem# 0: /oradata/lipi/redo01.log
  Current log# 1 seq# 1785 mem# 1: /oradata/lipi/redo01a.log
Wed Mar 10 15:29:56 2010
Thread 1 advanced to log sequence 1786 (LGWR switch)
  Current log# 2 seq# 1786 mem# 0: /oradata/lipi/redo02.log
  Current log# 2 seq# 1786 mem# 1: /oradata/lipi/redo02a.log
Wed Mar 10 15:31:22 2010
Thread 1 cannot allocate new log, sequence 1787
Private strand flush not complete
  Current log# 2 seq# 1786 mem# 0: /oradata/lipi/redo02.log
  Current log# 2 seq# 1786 mem# 1: /oradata/lipi/redo02a.log
Wed Mar 10 15:31:29 2010
Thread 1 advanced to log sequence 1787 (LGWR switch)
  Current log# 3 seq# 1787 mem# 0: /oradata/lipi/redo03.log
  Current log# 3 seq# 1787 mem# 1: /oradata/lipi/redo03a.log
Wed Mar 10 15:31:40 2010
Thread 1 cannot allocate new log, sequence 1788
Checkpoint not complete
  Current log# 3 seq# 1787 mem# 0: /oradata/lipi/redo03.log
  Current log# 3 seq# 1787 mem# 1: /oradata/lipi/redo03a.log
Wed Mar 10 15:31:47 2010
Thread 1 advanced to log sequence 1788 (LGWR switch)
  Current log# 4 seq# 1788 mem# 0: /oradata/lipi/redo04.log
  Current log# 4 seq# 1788 mem# 1: /oradata/lipi/redo04a.log
So my point is: should we increase the redo log size to get rid of the "Checkpoint not complete" message, and if so, what should be the optimal size of the redo log files?
Piyush

As a best practice, a redo log file should hold at least 20 minutes of redo, i.e. a log switch should happen no more often than about every 20 minutes.
Otherwise there will be frequent log switches, increasing I/O and waits.
The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column of the V$INSTANCE_RECOVERY view.

Edited by: adnanKaysar on March 11, 2010 17:03
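A sketch of the check described above (note that OPTIMAL_LOGFILE_SIZE is only populated when FAST_START_MTTR_TARGET is set; the group number, path and size in the last statement are illustrative):

```sql
-- Suggested redo log file size, in MB
SELECT optimal_logfile_size
  FROM v$instance_recovery;

-- Compare with the current log file sizes
SELECT group#, bytes/1024/1024 AS size_mb, status
  FROM v$log;

-- Redo logs cannot be resized in place: add larger groups,
-- switch until the old groups are INACTIVE, then drop them
ALTER DATABASE ADD LOGFILE GROUP 7 ('/oradata/lipi/redo07.log') SIZE 512M;
```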