Archived redo logs in ASM or OS file system?
DB version: 10.2.0.4. Operating system version: Sun Solaris SPARC 5.10.
We are on a two-node RAC.
Currently we store archived logs on OS cross-mounted file systems. Is this the recommended method?
What would we gain if we keep archived logs in ASM?
Ideally, the archive log location should be on a shared volume. On Solaris, you can use an ASM diskgroup as that shared location.
When you use a common diskgroup, say +ARCH (you can choose your own diskgroup name; I picked this one for simplicity), both instances can write their archived logs to the same location.
When a restore and recovery operation is needed, you can simply restore the archived logs from ASM and run the recovery.
Well, there are some other advantages, well explained by the experts here. I have just given one.
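As a sketch of that setup (the +ARCH diskgroup name is only an example, as above; adapt it to your own diskgroup):

```sql
-- Point every RAC instance at the shared ASM diskgroup (SID='*').
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=+ARCH' SCOPE=BOTH SID='*';

-- Verify where each instance is archiving.
SELECT inst_id, destination, status
FROM   gv$archive_dest
WHERE  dest_name = 'LOG_ARCHIVE_DEST_1';
```

With both instances archiving into the same diskgroup, a restore on either node can see every thread's archived logs.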
Kind regards
Mahesh.
Tags: Database
Similar Questions
-
A PL/SQL block to add archived redo log files to LogMiner - please help debug
Hello
I have a PL/SQL block that adds one day's archived redo log files to the LogMiner utility, in order to capture all the DML that happened for a particular schema at a particular time,
as below:
-- logmnr.sql
set serveroutput on
declare
  v_redo_dictionary_file VARCHAR2(100);
  v_archived_log_file    VARCHAR2(100);
  -- catch all of today's archived redo logs in a cursor
  CURSOR logfile_cur IS
    SELECT name FROM v$archived_log
    WHERE  to_char(completion_time, 'dd_MM_yyyy') > '18_01_2010'
    AND    sequence# NOT IN (SELECT sequence# FROM v$archived_log
                             WHERE dictionary_begin = 'YES'
                             AND   dictionary_end   = 'YES');
begin
  -- create the DICTIONARY in the redo log files of the SOURCE database
  sys.dbms_logmnr_d.build(options => sys.dbms_logmnr_d.store_in_redo_logs);
  SELECT name INTO v_redo_dictionary_file
  FROM   v$archived_log
  WHERE  dictionary_begin = 'YES' AND dictionary_end = 'YES'
  AND    sequence# = (SELECT MAX(sequence#) FROM v$archived_log
                      WHERE dictionary_begin = 'YES'
                      AND   dictionary_end   = 'YES');
  -- add the dictionary log file
  sys.dbms_logmnr.add_logfile(logfilename => v_redo_dictionary_file,
                              options     => sys.dbms_logmnr.new);
  -- add the archived log files
  OPEN logfile_cur;
  LOOP
    FETCH logfile_cur INTO v_archived_log_file;
    EXIT WHEN logfile_cur%NOTFOUND;
    sys.dbms_logmnr.add_logfile(logfilename => v_archived_log_file);
  END LOOP;
  CLOSE logfile_cur;
  -- start LogMiner, also enable DDL tracking
  sys.dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_redo_logs
                                        + dbms_logmnr.ddl_dict_tracking);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(SQLERRM);
END;
/
###############
SQL> @logmnr.sql
ORA-01291: missing logfile
PL/SQL procedure successfully completed.
What did I do wrong?
Thank you very much
Roy
[Duplicate thread | http://forums.oracle.com/forums/message.jspa?messageID=4035521#4035521]
-
My databases are 11.2.0.3.7 Enterprise Edition. My OS is AIX 7.1.
I am converting the databases to use individual fast recovery areas and have two questions about what values to assign to the database parameters related to archived redo logs. This example refers to one database.
I have read that if I specify
log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
the names of the archived redo logs written to the default fast recovery area take the form '%t_%s_%r.dbf'.
In the past my archived redo logs have been named based on the parameter
log_archive_format='GPAIDT_archive_log_%t_%s_%r.arc'
I believe log_archive_format will be ignored for archived redo logs written to the fast recovery area.
I am planning to write a second copy of the archived redo logs based on the parameter
alter system set log_archive_dest_2='LOCATION=/t07/admin/GPAIDT/arch'
If I do this, will the copy of the logs placed in /t07 be named '%t_%s_%r.dbf' or 'GPAIDT_archive_log_%t_%s_%r.arc'?
Before my use of a fast recovery area, I used the OEM 12c console to specify database backup settings that deleted archived redo logs after 1 backup. The Oracle manuals say to instead specify a deletion policy of 'none' and let Oracle delete logs in the fast recovery area as needed. Since I have to keep a second copy of these log files in /t07, should I keep the policy that deletes logs after 1 backup? If I don't do that, how will they be removed from /t07?
Thank you
Bill
If I do this, will the copy of the logs placed in /t07 be named '%t_%s_%r.dbf' or 'GPAIDT_archive_log_%t_%s_%r.arc'?
They will be 'GPAIDT_archive_log_%t_%s_%r.arc'. LOG_ARCHIVE_FORMAT is only ignored for OMF directories.
Since I have to keep a second copy of these log files in /t07, should I keep the policy that deletes logs after 1 backup? If I don't do that, how will they be removed from /t07?
You can keep the deletion policy as it is. From the Oracle documentation, Configuring the Archived Redo Log Deletion Policy: "The archived redo log deletion policy applies to all archiving destinations, including the fast recovery area."
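In RMAN terms, keeping that policy looks like the sketch below (the TO DISK clause is an assumption; adjust it to your actual backup device type):

```
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
RMAN> SHOW ARCHIVELOG DELETION POLICY;
```

Because the policy applies to all archiving destinations, the logs in /t07 also become eligible for deletion once backed up, for example via DELETE ARCHIVELOG ALL, which honours the configured policy.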
-
Best location for the archived redo logs
Hello
I am drawing up instructions and I want to make life easier for the DBA, which is what my job amounts to.
So, as the title says: what is the "standard/best practice" location for archived redo logs? Particularly dest_1, which is usually local on the same server.
Thank you.
Hello
For you, I recommend using the flash/fast recovery area.
Configuring Archived Redo Log Locations
Oracle recommends that you use the fast recovery area as an archiving location, because the archived logs there are managed automatically by the database. The file names generated for the archived logs in the fast recovery area follow the Oracle Managed Files convention and are not determined by the LOG_ARCHIVE_FORMAT parameter. Whatever archiving scheme you choose, it is always advisable to create multiple copies of archived redo logs.
Ref 1:
http://docs.Oracle.com/CD/E11882_01/backup.112/e10642/rcmconfb.htm#CHDEHHDH
Ref 2:
http://docs.Oracle.com/CD/E11882_01/server.112/e17157/unplanned.htm#BABEEEFH
http://docs.Oracle.com/CD/E11882_01/backup.112/e10642/rcmconfb.htm#BRADV89418
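As a hedged sketch of enabling the fast recovery area (the size and path below are placeholders; size it to your own retention needs):

```sql
ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra' SCOPE=BOTH;
-- Send the first archive destination to the FRA:
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=BOTH;
```

Note that db_recovery_file_dest_size must be set before db_recovery_file_dest.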
Kind regards
Juan M
-
Two questions on backing up archived redo logs with RMAN
DB version: 11g
I am new to RMAN.
My database is in ARCHIVELOG mode. I intend to take a weekly full backup of my db (02:00 every Monday). There will be no incremental backups between these (Monday-to-Monday) backup windows; what I would rely on for recovery is the archived redo logs.
Question 1.
I want to back up the archived logs every day (for example at 23:00). How can I configure that?
These are the configuration settings I intend to implement. I don't know how to set up the archived log backup:
configure default device type to disk;
configure retention policy to redundancy;
configure device type disk parallelism 1;
configure channel 1 device type disk clear;
configure channel 2 device type disk clear;
configure channel 1 device type disk format '/u05/rman1/datafiles/rmnabackup1_%U';
configure channel 2 device type disk format '/u05/rman2/datafiles/rmnabackup2_%U';
configure controlfile autobackup on;
configure controlfile autobackup format for device type disk to '/u05/rman1/control_files/rmnabackup1_%U';
Question 2.
After a new full backup is taken at 02:00 on Monday, the archived redo logs accumulated over the previous 7 days become unnecessary. How can I automate the removal of archived redo logs with RMAN? Will the 'backup archivelog all delete all input' command back up the archived logs from the archive destination and then delete them from that destination?
If the archive destination holds archived logs with sequences 1 to 100, will it take the backup and delete them from the destination (Monday 23:00)?
If the archive destination holds archived logs with sequences 101 to 150, will it take the backup and delete them from the destination (Tuesday 23:00)?
If the archive destination holds archived logs with sequences 151 to 180, will it take the backup and delete them from the destination (Wednesday 10:00)?
It will continue like that.
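A hedged sketch of the daily archived log backup job the questions describe (the tag name and format path are assumptions; with delete all input, RMAN removes each log from every archiving destination after backing it up):

```
run {
  backup archivelog all
    tag 'DAILY_ARCH_BKP'
    format '/u05/rman1/archlogs/arch_%U'
    delete all input;
}
```

Scheduled nightly, this answers the sequence questions above: whatever archived logs have accumulated since the previous run are backed up and then deleted from the destination.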
Regards
Asif Kabir - If this helped, mark the response as correct/helpful.
-
PL/SQL to add archived redo log files to LogMiner
Hello
I am writing a program to automatically add all of the archived redo log files to the LogMiner utility, as below:
-- logmnr.sql
set serveroutput on
declare
  v_redo_dictionary_file VARCHAR2(100);
  v_arihived_log_file    VARCHAR2(100);
  CURSOR logfile_cur IS
    SELECT name FROM v$archived_log
    WHERE  to_char(completion_time, 'dd_MM_yyyy') > '18_01_2010'
    AND    sequence# NOT IN (SELECT sequence# FROM v$archived_log
                             WHERE dictionary_begin = 'YES'
                             AND   dictionary_end   = 'YES');
begin
  -- create the DICTIONARY in the redo log files of the SOURCE database
  sys.dbms_logmnr_d.build(options => sys.dbms_logmnr_d.store_in_redo_logs);
  SELECT name INTO v_redo_dictionary_file
  FROM   v$archived_log
  WHERE  dictionary_begin = 'YES' AND dictionary_end = 'YES'
  AND    sequence# = (SELECT MAX(sequence#) FROM v$archived_log
                      WHERE dictionary_begin = 'YES'
                      AND   dictionary_end   = 'YES');
  -- add the dictionary log file
  sys.dbms_logmnr.add_logfile(logfilename => v_redo_dictionary_file,
                              options     => sys.dbms_logmnr.new);
  -- add the log files
  OPEN logfile_cur;
  LOOP
    FETCH logfile_cur INTO v_arihived_log_file;
    sys.dbms_logmnr.add_logfile(logfilename => v_arihived_log_file);
    EXIT WHEN logfile_cur%NOTFOUND;
  END LOOP;
  CLOSE logfile_cur;
  -- start LogMiner, also enable DDL tracking
  sys.dbms_logmnr.start_logmnr(options => sys.dbms_logmnr.dict_from_redo_logs
                                        + sys.dbms_logmnr.ddl_dict_tracking);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(SQLERRM);
END;
/
##
SQL > @logmnr.sql
ORA-01013: user requested cancel of current operation
ORA-01289: cannot add duplicate logfile /usr/tmp/arch/1_4026_573231463.dbf
##
SQL> select name from v$archived_log where to_char(completion_time, 'dd_MM_yyyy') > '18_01_2010' and sequence# not in (select sequence# from v$archived_log where dictionary_begin = 'YES' and dictionary_end = 'YES');
NAME
--------------------------------------------------------------------------------
/usr/tmp/arch/1_3973_573231463.dbf
/usr/tmp/arch/1_3974_573231463.dbf
/usr/tmp/arch/1_3975_573231463.dbf
/usr/tmp/arch/1_3977_573231463.dbf
/usr/tmp/arch/1_3978_573231463.dbf
/usr/tmp/arch/1_3979_573231463.dbf
/usr/tmp/arch/1_3981_573231463.dbf
/usr/tmp/arch/1_3982_573231463.dbf
/usr/tmp/arch/1_3983_573231463.dbf
/usr/tmp/arch/1_3984_573231463.dbf
/usr/tmp/arch/1_3986_573231463.dbf
/usr/tmp/arch/1_3987_573231463.dbf
/usr/tmp/arch/1_3988_573231463.dbf
/usr/tmp/arch/1_3989_573231463.dbf
/usr/tmp/arch/1_3990_573231463.dbf
/usr/tmp/arch/1_3991_573231463.dbf
/usr/tmp/arch/1_3992_573231463.dbf
/usr/tmp/arch/1_3994_573231463.dbf
/usr/tmp/arch/1_3995_573231463.dbf
/usr/tmp/arch/1_3996_573231463.dbf
/usr/tmp/arch/1_3997_573231463.dbf
/usr/tmp/arch/1_3999_573231463.dbf
/usr/tmp/arch/1_4001_573231463.dbf
/usr/tmp/arch/1_4003_573231463.dbf
/usr/tmp/arch/1_4004_573231463.dbf
/usr/tmp/arch/1_4005_573231463.dbf
/usr/tmp/arch/1_4006_573231463.dbf
/usr/tmp/arch/1_4007_573231463.dbf
/usr/tmp/arch/1_4009_573231463.dbf
/usr/tmp/arch/1_4010_573231463.dbf
/usr/tmp/arch/1_4011_573231463.dbf
/usr/tmp/arch/1_4012_573231463.dbf
/usr/tmp/arch/1_4013_573231463.dbf
/usr/tmp/arch/1_4015_573231463.dbf
/usr/tmp/arch/1_4017_573231463.dbf
/usr/tmp/arch/1_4018_573231463.dbf
/usr/tmp/arch/1_4019_573231463.dbf
/usr/tmp/arch/1_4020_573231463.dbf
/usr/tmp/arch/1_4022_573231463.dbf
/usr/tmp/arch/1_4023_573231463.dbf
/usr/tmp/arch/1_4025_573231463.dbf
/usr/tmp/arch/1_4026_573231463.dbf
Why do I get the error 'ORA-01289: cannot add duplicate logfile /usr/tmp/arch/1_4026_573231463.dbf'?
Please suggest.
Thank you
Roy

sys.dbms_logmnr.add_logfile(logfilename => v_arihived_log_file);
EXIT WHEN logfile_cur%NOTFOUND;
Not as above, but as below:
EXIT WHEN logfile_cur%NOTFOUND;
sys.dbms_logmnr.add_logfile(logfilename => v_arihived_log_file);
Reverse the order of those 2 lines: when the final FETCH finds no more rows, the variable still holds the previous file name, so add_logfile runs once more with the same file before the loop exits, which raises the duplicate-logfile error.
-
Database failover - from ASM to file system
Hello
Based on this article: http://manchev.org/2012/01/building-a-failover-database-using-oracle-database-11g-standard-edition-and-file-watchers/
I would like to know if it would be possible to perform the same operation, but in a case where my source database uses ASM and the failover target does not.
I think that this (http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmasmmi.htm#i1014926) should apply, but I don't have time to test it myself right now, so I thought I would ask here and maybe get some valuable information/feedback on it.
Thanks in advance.
Greetings,
NACEUR

Yes, that's correct.
The control file can be restored as in the article. Generate a control file backup of the primary (which is in the FRA) and restore it:
restore controlfile from '/backup/controlfile.bkp' to '/u01/control_file_path.ctl';
And the new location of the data files can be defined during the restore, as you pointed out, with SWITCH DATAFILE at the end:
SET NEWNAME FOR DATAFILE 1 TO NEW;
SET NEWNAME FOR DATAFILE 2 TO NEW;
SET NEWNAME FOR DATAFILE 3 TO NEW;
SET NEWNAME FOR DATAFILE 4 TO NEW;
(...)
restore database;
SWITCH DATAFILE ALL;
The 'TO NEW' clause is for when you plan to use OMF files, via the db_create_file_dest parameter. If the data files are to be spread over different mount points, you can name them individually, as in:
SET NEWNAME FOR DATAFILE 1 TO '/path/to/datafile1.dbf';
(...)
SWITCH DATAFILE ALL;
Also don't forget to clear (rebuild) your redo logs after restoring the files, with DB_CREATE_ONLINE_LOG_DEST_n set correctly.
-
RMAN to ASM and a network file system
Hi all
I use Oracle RAC 11gR2 + ASM.
I am new to RMAN, but I am trying to build daily jobs that do the following:
1. back up my database to the FRA located on ASM diskgroup +FRA.
2. at the same time, back up my database to a Network File System (NFS) mount, to be taken by the system administrators to an archive location that I can't access. Since, to my knowledge, files can't be copied out of ASM externally, for this I am planning to use the FORMAT clause pointing at the NFS mount point.
3. on a daily basis, the file on NFS will be taken by a job (that I am not responsible for) to the archive, and then it will be purged from this location.
4. I am planning a 15-day retention window strategy.
5. every day, I am planning to crosscheck my backups and then delete the expired ones, so that the piece taken from NFS to the archive is removed from the catalog.
The RMAN commands I am planning to run on a daily basis are:
backup incremental level 0 cumulative device type disk tag 'FULL_DB_INCR_LV0' database; -- this goes to the default location (+FRA)
backup incremental level 0 cumulative device type disk tag 'FULL_DB_INCR_LV0' database format 'NFS/oracle/backup%U'; -- another copy to be taken to the archive location
crosscheck backupset;
delete expired backupset; -- this deletes the backups taken to the archive location
Now my questions:
1. Is this a good solution? Take into account the fact that I need a backup on the OS file system so that the system administrators' jobs can take it to their archive.
2. If a restore is needed, I would use CATALOG START WITH to re-catalog and then recover.
3. Because my retention period is 15 days, and the backup files on NFS will be deleted on a daily basis, this retention will effectively apply only to the backups in ASM; is this a problem?

Hello
The easy way is to use tags. Finding the backupset number is a difficult task.
Create a template for your tag with a date.
How to use Substitution Variables in RMAN commands [ID 427229.1]
How to use the RMAN tag name with different attributes or variables [ID 580283.1]
You can check which backups don't have a copy on NFS using the query below:
SELECT bs.recid bskey, bp.recid key, bp.recid recid, bs.pieces piececount,
       bp.handle filename, bp.tag tag, bp.copy# copynumber,
       bp.status status, bp.piece# piecenumber
FROM   v$backup_piece bp, v$backup_set bs
WHERE  (bp.set_count = bs.set_count AND bp.set_stamp = bs.set_stamp)
ORDER  BY bskey, copynumber;
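One hedged way to make the NFS copy date-stamped, per the advice above (the tag and format strings are examples; %d and %T are RMAN substitution variables for the database name and the date):

```
backup incremental level 0 cumulative device type disk
  tag 'FULL_DB_LV0_NFS'
  database format '/nfs/oracle/backup_%d_%T_%U';
```

The date can also be worked into the tag itself from the calling shell script, as the MOS notes cited above describe, which makes it easy to tie a backupset to the day it was shipped to the archive.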
-
How to delete standby archived log files in ASM?
Hi Experts
We have a downstream real-time replication that uses a location in ASM for the shipped logs,
implemented by
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'LOCATION=+BOBOASM/NANPUT/standbyarchs/
VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)' SCOPE=BOTH;
What should I do to clean up these files?
Any procedure or script to do this?
Thank you

Hello Haggylein
Check this out, it seems to work.
-- which redo logs are still needed or not?
-- when purgeable, we can remove them
COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A25
COLUMN FIRST_SCN HEADING 'First SCN' FORMAT 99999999999
COLUMN NEXT_SCN HEADING 'Next SCN' FORMAT 99999999999
COLUMN PURGEABLE HEADING 'Purgeable?' FORMAT A10
SELECT r.CONSUMER_NAME,
       r.NAME,
       r.FIRST_SCN,
       r.NEXT_SCN,
       r.PURGEABLE
FROM   DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
WHERE  r.CONSUMER_NAME = c.CAPTURE_NAME and PURGEABLE = 'YES';
-- Now the script
-- must be run on the downstream database
-- generates the list of purgeable logs, to be executed as a ksh script
-- sqlplus "/ as sysdba" @$HOME/bin/generate_list.sql
SET NEWPAGE 0
SET SPACE 0
SET LINESIZE 150
SET PAGESIZE 0
SET TERMOUT OFF
SET ECHO OFF
SET FEEDBACK OFF
SET HEADING OFF
SET MARKUP HTML OFF
spool list_purgeable_arch_redologs.ksh
SELECT 'asmcmd rm ' || r.NAME
FROM   DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
WHERE  r.CONSUMER_NAME = c.CAPTURE_NAME and PURGEABLE = 'YES';
spool off
exit
# Finally we can call it from a script
#!/bin/ksh
# remove the shipped redo logs
# to be executed on node 2
# cannot be used on
export ORACLE_SID=+ASM2
ksh $HOME/bin/list_purgeable_arch_redologs.ksh
exit
-
Clusterware and/or RAC on separate storage, synchronized by applying redo logs?
Hello Experts,
I am researching high-availability architectures to meet high service level requirements (>= 99.7 percent uptime and "no loss of important data") for a client.
I have few resources for implementing this architecture: two physical database servers running 11g Standard Edition (so Data Guard is not an option; Enterprise Edition is not an option because of the price). Data storage will be on a SAN.
The ideal solution would be an architecture with both node redundancy (Clusterware/RAC) and data redundancy (as with a physical standby: applying redo logs instead of mirroring (possibly corrupt) physical data files).
I have researched Clusterware and RAC, but they use shared storage. I will use a SAN for storage, but this will not prevent physical mirroring of corrupt data files.
Is it possible to set up a RAC/Clusterware architecture where each node has separate storage, and the two databases are synchronized by applying redo logs?
Is it possible to apply redo logs instantly, to minimize data loss in case of automatic failover?
If you need more information, I'll gladly provide it.
Thanks in advance,
Peter
A RAC cluster still needs shared storage for database files: each cluster node cannot have its own separate storage.
You need at least a third physical database server for the standby database, which can function without Data Guard as long as you use your own scripts to ship and apply archived redo logs, or use a product like dbvisit.
I don't think it's possible to apply redo immediately without Data Guard.
-
ORA-16038, ORA-00354, ORA-00312 corrupt redo log block header
Hi all
I'm Ann. I'm not an Oracle DBA. Please bear with me. However, I have to get up to speed as a DBA while the dedicated DBA is on maternity leave.
As usual, we have the live site and the test site.
She gave me some notes on how to take care of the live database and application (Oracle 10g).
So far I haven't had to intervene much on the live site because we have plenty of space on the server, and the biggest part of the work has been scripted, so it runs automatically.
However, the test database is not like that. There are no automatic restart scripts as on the live system.
Recently I could not access the test database. So I connected to the test server and found the file system that holds the archive log files was nearly 98% full.
So I removed some of the old dbf files (I did this based on the advice the chief gave the last time this happened, a long time ago).
After clearing some old files, df -h put the usage at 58%.
However, the database was still not available (it can't open, I think).
I connected to the database and did a shutdown immediate, but the server hung.
So I asked a network engineer to restart the database server.
Of course, when the machine stops, the database must come down too.
After the machine restarted, I connected as sysdba but still could not open the database.
The error is as below:
Database mounted.
ORA-16038: log 1 sequence# 1013 cannot be archived
ORA-00354: corrupt redo log block header
ORA-00312: online log 1 thread 1:
'/Data/oradata/barn/onlinelog/o1_mf_1_2658nhd4_.log'
ORA-00312: online log 1 thread 1:
'/arclogs/oradata/BARNTEST/onlinelog/o1_mf_1_2658nhd4_.log'
I searched and found this:
ORA-16038, ORA-00354, ORA-00312 corrupt redo log block header
(http://arjudba.blogspot.co.nz/2008/05/ora-16038ora-00354ora-00312-corrupt.html)
Error description:
------------------------
Normal users could not connect to the database. It reported ORA-00257: archiver error, connect internal only until freed. When trying to archive the redo log, it returned the messages:
ORA-16038: log %s sequence# %s cannot be archived
ORA-00354: corrupt redo log block header
ORA-00312: online log %s thread %s: '%s'
Explanation of the problem:
-------------------------------
Whenever a normal user tried to connect to the database, it returned the error as designed for ORA-00257. But you may notice that there is enough space in V$RECOVERY_FILE_DEST. Whenever you look at the alert log, you will see the ORA-16038, ORA-00354, ORA-00312 error series. The error is produced because the online redo log cannot be archived, due to corruption in the online redo file.
Solution of the problem:
--------------------------------
Step 1) With your database running, clear the unarchived redo log:
SQL> alter database clear unarchived logfile 'logfilename';
This makes the corruption disappear, at the cost of deleting the contents of the online redo file.
Step 2) Take a full backup of the database.
My question here: is it safe to apply the above steps?
We have 2 online logs that cause the error, so I would need to clear both files.
For step 2, how do I take the backup with RMAN?
Can you suggest a safe command line for me to use? (The EM console currently does not work on the test database server.)
I really need this test database running, as my APEX apps run on it and need to be exported to the live system.
I'd really appreciate any help here; once again, it's Oracle 10g Release 1.
Kind regards
Published by: Ann586341 on April 30, 2012 15:40
Your problem is with redo log group 1.
SQL> select * from v$log;
Check the status of group 1, whether it is ACTIVE or INACTIVE.
If it is INACTIVE, then run the command below:
SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;
then
SQL> alter database open;
Published by: Vishwanath on April 30, 2012 10:11
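The RMAN part of the question was not answered above. A minimal sketch of the full backup for step 2 (assuming OS authentication on the database server, and that the database is open and in ARCHIVELOG mode):

```
$ rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
```

Taking this backup immediately after clearing the unarchived logs matters: the cleared redo is gone, so earlier backups can no longer be rolled forward through that gap.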
-
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
Hello
I am getting the error below when executing flashback operations. Currently we have backups, so when I tried the FLASHBACK command in RMAN my issue was solved.
But if I want to do the same thing using SQL*Plus, what steps should I take?
ERROR:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 20062842926 to SCN 20062842936
ORA-38761: redo log sequence 4 in thread 2, incarnation 65 could not be accessed
DATABASE: 11.2.0.3.0 (UAT)
RMAN will automatically restore the archived redo logs required for the flashback operation when they are not present on disk, whereas (as meyssoun said) when using SQL*Plus the required archived redo logs must already be available on disk.
Regards
AJ
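For the SQL*Plus route the poster asked about, a hedged sketch (the SCN is taken from the error message above as a placeholder; all required archived logs must already be on disk):

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO SCN 20062842926;
ALTER DATABASE OPEN RESETLOGS;
```

If a required log is missing, SQL*Plus raises ORA-38754 as shown; restoring the archived logs first (for example with RMAN) clears the condition.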
-
Increased repository DB redo logging after EM12cR2 upgrade?
Has anyone else noticed increased redo logging in their repository database after upgrading to EM12cR2?
I'm running on Linux x86-64 with an 11.2.0.3 repository database. Before the upgrade, I was running 12.1.0.1.0 with BP1, and my repository database produced, on average, about 80-120 MB of archived redo logs per hour. Since the upgrade to 12.1.0.2.0, my repository database produces, on average, 450-500 MB of archived redo logs per hour.
Upgrading the deployed agents from 12.1.0.1.0 to 12.1.0.2.0 has not seemed to help; I have upgraded about 10 of the 15 without any apparent drop in redo logging.
I was wondering if this is comparable to what others see, or if I have a local issue I should investigate.

The following fix has been released for this problem:
Patch 14726136 - INCREASED REDO LOGGING ON REPOSITORY DB AFTER UPGRADING TO OEM 12.1.0.2
Kind regards
Vincent
-
Redo Log Buffer at 32.8 MB, does that look right?
I just took over a database (mainly used for OLTP, on 11gR1) and I see the log_buffer parameter at 34412032 (32.8 MB). Not sure why it is so high.
(database had been up for 7.5 days)
select NAME, VALUE from SYS.V_$SYSSTAT
 where NAME in ('redo buffer allocation retries', 'redo log space wait time');
redo buffer allocation retries      185
redo log space wait time           5180
Any advice on that? I normally try to stay below 3 MB and have not really seen it above 10 MB.

Sky13 wrote:
I just took over a database (mainly used for OLTP, on 11gR1) and I see the log_buffer parameter at 34412032 (32.8 MB). Not sure why it is so high.
On 11g, you should not set the log_buffer; let Oracle set the default.
The value is derived from the setting for the number of CPUs and the transactions parameter, which can be derived from sessions, which can be derived from processes. In addition, Oracle will allocate at least one granule (which can be 4 MB, 8 MB, 16 MB, 64 MB, or 256 MB depending on the size of the SGA), so you are not likely to save memory by reducing the size of the log buffer.
Here is a link to a discussion that shows you how to find out what is really behind this figure.
Re: Archived redo log size more than online redo logs
Regards
Jonathan Lewis
http://jonathanlewis.WordPress.com
Author: Oracle Core
-
Hello all,
For a while I have wondered what the correct redo log size would be for a database.
I found the note
The new 10g feature: REDO LOG SIZING ADVISORY [ID 274264.1]
which says:
"Rule of thumb: switch logs at most once every fifteen minutes."
Well, I came across a DB with about 3 log switches per minute (OEL 4.6, RAC 10gR2 environment).
Do you think that this number is WAY over what it should be? What are your redo log sizes?
The note also mentions using the v$instance_recovery view to give advice on the redo log size, which takes the MTTR into account.
Thank you.
BR
Pinela.
Published by: Pinela on November 7, 2011 14:42

Pinela wrote:
Thank you all for your comments.
jgarry,
Yes, it is a good idea. For the moment, no, there are no errors, messages, or complaints. In this case, the impact of the redo log size relates to restore and cloning operations.
Aman,
"more than 4-5 switches an hour is, in general, considered correct". Interestingly, that is a good baseline.
To give more context: the DB in question is about 800 GB, has 50 MB redo logs, switches 3-4 times each minute (as mentioned earlier), and serves 1,200 users.
With these new values, do you keep or change your opinion?
I would be inclined to change the redo log size to 1 or 2 GB.

50 MB is very small for a database of ~800 GB. About 2-3 GB for the redo log file size should be a good start. But please note that setting the redo log file size to some value X alone cannot solve the problem: would the logs still fill too quickly? So IMO, as well as setting the redo log file size to about 3 GB, you should consider setting the ARCHIVE_LAG_TARGET parameter to the log switch interval you want to maintain (as already suggested by others). This would let you control the switching frequency rather than being dependent on the size of the log file or the database activity.
HTH
Aman...
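A hedged sketch of the resize discussed above (group numbers, path, and size are placeholders; the usual approach is to add larger groups, switch out of the old ones, and drop them once INACTIVE):

```sql
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/DB/redo05.log') SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/DB/redo06.log') SIZE 2G;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
-- Drop an old group only once v$log shows it INACTIVE:
ALTER DATABASE DROP LOGFILE GROUP 1;
```

On RAC, repeat for each thread's groups; ARCHIVE_LAG_TARGET then caps the switch interval independently of activity.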