Size of archive log backups
Hello,
I want to know whether each archived log is saved in a single backup file. Is there a view that shows this? I can't see it in RC_BACKUP_ARCHIVELOG_DETAILS.
My script is:
BACKUP
FORMAT 'arch-s%s-p%p-t%t'
FILESPERSET 4
SETSIZE 300000
ARCHIVELOG ALL
DELETE INPUT;
Our archived logs are 250 MB each.
Thanks in advance.
Hello
You must increase the size of the backup pieces. Currently each backup set you create is 300 MB (SETSIZE 300000) while each archived log is 250 MB, so only one log fits per set. You will need to make each backup piece about 1 GB to fit 4 archived logs.
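As a sketch of the fix (assuming SETSIZE is specified in KB, and that the 1048576 value shown is a hypothetical figure to adjust for your storage), a script sized so that four 250 MB archived logs fit in one backup set might look like:

```sql
BACKUP
  FORMAT 'arch-s%s-p%p-t%t'
  FILESPERSET 4
  SETSIZE 1048576      # 1 GB expressed in KB; room for 4 x 250 MB archived logs
  ARCHIVELOG ALL
  DELETE INPUT;
```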
Please take a look at the first link below for more information on this.
Regards,
Francisco Munoz Alvarez
http://www.oraclenz.com
Tags: Database
Similar Questions
-
The redo log file size is 16 MB, but the generated archive log files are only about 10 MB each. What could be the reason?
Oracle Version: 10.2.0.4
OS: Windows
Published by: Deccan charger, 19 April 2010 23:31
First of all:
a redo log contains a lot of things needed for instance recovery etc.; an archived log is used only for media recovery, so it does not need all the stuff that is in a redo log. When the archiver (ARCn) writes the archive log, it does not write everything, only what is required for recovery.
Second:
Archive logs are created with smaller, irregular sizes than the original redo logs. Why? [ID 388627.1]
--------------------------------------------------------------------------------
Last updated 2 June 2007. Status: MODERATED. Type: HOWTO
In this Document
Goal
Solution
References
--------------------------------------------------------------------------------
This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review.
Applies to:
Oracle Server - Enterprise Edition - Version: 8.1.7.4 to 11.1
Information in this document applies to any platform.
Goal
Archive logs are created with smaller, irregular sizes than the original redo logs.
Commands like:
ALTER SYSTEM SWITCH LOGFILE
or
ALTER SYSTEM ARCHIVE LOG...
are not being used to force log switches or archiving, and no ARCHIVE_LAG_TARGET parameter is set.
What else could cause this behaviour?
Solution
From:
Bug 5450861: ARCHIVE LOGS ARE GENERATED WITH A SMALLER SIZE THAN THE REDO LOG FILES
the explanation of this situation has 2 main points:
1. Archive logs do not have to be the same size. This was decided very long ago, when blank padding of archive logs was stopped, for a very good reason: to save disk space.
2. A log switch does not occur when a redo log file is 100% full. There is an internal algorithm that determines when to switch logs. It also has a very good reason: switching logs at the last moment may incur performance problems (for various reasons, outside the scope of this note).
So, after a log switch occurs, the archivers only copy the used information from the redo log files. Since the redo logs are not 100% full after the switch, and the archive logs are not blank-padded after the copy operation completes, the result is unequally sized files, smaller than the original redo log files.
This is very apparent with very small (less than 10 MB) log files; 2.5 MB archive logs produced from 5 MB redo logs are very noticeable.
Just note that nowadays the default log files are 100 MB in size; if the archive log files came out between 98 and 100 MB, nobody would notice.
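One can observe this on any instance by comparing the configured redo log size with the actual archived log sizes; a sketch, assuming SELECT access on v$log and v$archived_log:

```sql
-- configured online redo log size
SELECT group#, bytes/1024/1024 AS redo_mb FROM v$log;

-- actual size of each archived log produced from those logs
SELECT sequence#, blocks * block_size / 1024 / 1024 AS arch_mb
FROM   v$archived_log
ORDER  BY sequence#;
```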
The main concern to have about archived log files is possible corruption. This can easily be verified by attempting a test recovery. When that is OK, the uneven archive log sizes should be of no concern, as they are expected. -
Dear experts,
I have to test the archive log size generated by each transaction. The following steps need to be done for this test:
1. Run DML scripts in bulk.
2. Find the size of the archive log files created in the archive location.
Based on that, we have to report how much archived redo a particular transaction generated. Can you please provide a script to do this? Thanks in advance.
Oragg wrote:
Dear experts, I have to test the archive log size generated by each transaction: 1. run DML scripts in bulk; 2. find the size of the archive log files created in the archive location. Based on that, we have to report how much archived redo a particular transaction generated. Can you please provide a script to do this? Thanks in advance.
Let's assume that you have loaded data for 1 day or 1 hour, as below:
alter session set nls_date_format = 'YYYY-MM-DD HH24';

select trunc(COMPLETION_TIME,'HH24') TIME,
       SUM(BLOCKS * BLOCK_SIZE)/1024/1024 SIZE_MB
from   V$ARCHIVED_LOG
group by trunc(COMPLETION_TIME,'HH24')
order by 1;
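To measure redo for a single transaction rather than per hour, a common sketch (assuming SELECT access on v$mystat and v$statname) is to sample the session's 'redo size' statistic before and after the DML and take the difference:

```sql
-- redo generated by the current session so far; run before and after the DML
SELECT n.name, m.value
FROM   v$mystat m
JOIN   v$statname n ON n.statistic# = m.statistic#
WHERE  n.name = 'redo size';
```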
-
I have 5 redo log files, each 4 GB. There are huge transaction volumes (1800 to 2000 per second), so we have 5 or 6 log switches every hour. That is 20 to 24 GB of archive logs per hour, and the daily growth is about 480 GB.
I had planned to take a level 0 incremental backup weekly and a level 1 daily, with the option to delete archive logs older than 'sysdate-1'.
But we have little storage; it is impossible to store 480 GB per day and keep backups of those 480 GB for 7 days ((480 * 7) GB, about 3.4 TB). How can I correct this situation? Help, please.
1. Disk is cheap.
2. Put the onus on those who designed the application.
3. You can use LogMiner to check what is happening.
You need to address this with the application developers and/or the clients, and/or buy disk.
There are no other solutions.
-----------
Sybrand Bakker
Senior Oracle DBA
Published by: sybrand_b on July 16, 2012 11:11
-
Purging archive logs on primary and standby in an RMAN Data Guard configuration
Hi, I saw a couple of commands regarding purging archive logs in a Data Guard configuration:
CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY;
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
Q1. Does the above only remove archive logs on the primary, or on both primary and standby?
Q2. If the above deletes archive logs on the primary, does it really remove them (immediately), or does the FRA delete them when space is needed?
I also saw
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP;
Q3. What does the above do, and again, is it something you run on the primary side?
I saw the following advice in the Data Guard Concepts and Administration manual:
Configure DB_UNIQUE_NAME in RMAN for each database (primary and standby) so RMAN can connect to it remotely.
Q4. Why would I want my primary to connect to the standby's RMAN repository (I use the local control file)? (Is this so I can define RMAN configuration settings on the standby?)
Q5. Should I only work with the RMAN repository on the primary, or should I also set things in the RMAN repositories (i.e. control files) of the standbys?
Q6. If I have a physical standby (usually mounted but not open), am I able to connect to its own local RMAN repository (i.e. control-file based) while the standby is just mounted?
Q7. Similarly, if I have a logical standby (i.e. effectively read-only), can I even connect to the standby's local RMAN repository?
Q8. What is the most common way to schedule an RMAN backup in such an environment? E.g. cron, the OEM scheduler, DBMS_SCHEDULER? My instinct is a cron script, since the OEM scheduler requires the OEM service to be running and DBMS_SCHEDULER requires the database to be running.
Any idea greatly appreciated,
Jim
Does the above only remove archive logs on the primary, or on both primary and standby?
When you CONFIGURE a deletion policy, the configuration applies to all archiving destinations,
including the flash recovery area. BACKUP ... DELETE INPUT and DELETE ARCHIVELOG obey this configuration, as does the flash recovery area.
You can also CONFIGURE an archived redo log deletion policy so that logs become eligible for deletion only after being applied to, or transferred to, standby database destinations.
If the above deletes archive logs on the primary, does it really remove them (immediately), or does the FRA delete them when space is needed?
It's a configuration; it will not delete anything by itself.
If you want to use the FRA for automatic removal of archive logs on a physical standby database, do this first:
1. Make sure that DB_RECOVERY_FILE_DEST is set to the FRA; see the parameters DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE.
2. Have the RMAN deletion policy set on both primary and standby: CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
If you want to keep archives longer, you can control how long logs are kept by adjusting the size of the FRA.
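A minimal sketch of the setup described in steps 1 and 2 above (the '+FRA' location and 200G size are hypothetical values; adjust for your storage):

```sql
-- 1. point the FRA at your recovery area and size it (hypothetical values)
ALTER SYSTEM SET db_recovery_file_dest_size = 200G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH;

-- 2. then, in RMAN, on both primary and standby:
-- CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
```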
Great example:
http://emrebaransel.blogspot.com/2009/03/delete-applied-archivelogs-on-standby.html
The whole Oracle blog is worth a peek:
http://emrebaransel.blogspot.com/
What does the above do, and again, is it something you run on the primary side?
I would never use it. I have always set it this way:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
I would only use 'BACKED UP' on a system without Data Guard. It basically tells Oracle how many times an archive log must be backed up before it can be removed.
Why would I want my primary to connect to the RMAN repository?
Because if the primary goes down, you want to be able to back up there, too.
Also, if it stays in the standby role for a while, you will want that.
Should I only work with the RMAN repository on the primary, or should I also set things in the RMAN repositories (i.e. control files) of the standbys?
Always use an RMAN catalog database with Data Guard.
If I have a physical standby (usually mounted but not open), am I able to connect to its own local RMAN repository (i.e. control-file based) while the standby is just mounted?
Same answer as 5, use a catalog database.
Similarly, if I have a logical standby (i.e. effectively read-only), can I even connect to the standby's local RMAN repository?
Same answer as 5, use a catalog database.
What is the most common way to schedule an RMAN backup in such an environment? E.g. cron, the OEM scheduler, DBMS_SCHEDULER? My instinct is a cron script, since the OEM scheduler requires the OEM service to be running and DBMS_SCHEDULER requires the database to be running.
I think cron is still the most common, but they all work. I like cron because it always reports its results, even when the database has a problem.
Best regards
mseberg
Summary
Always use an RMAN catalog database.
Always use the FRA with RMAN.
Always set the deletion policy to 'APPLIED ON ALL STANDBY'.
DB_RECOVERY_FILE_DEST_SIZE determines how long the archives are kept with this configuration.
Post edited by: mseberg
-
Hi Oracle Community,
I have a few databases (11.2.0.4 and 12.1.0.2) with Data Guard. We back up these databases with RMAN to tape, only on the primaries.
During the backup, I sometimes get RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process.
Because of this warning, the overall backup status is "COMPLETED WITH WARNINGS". Is it possible to suppress this warning, or perhaps change the RMAN configuration to avoid it appearing?
Regards
J
What is the problem with these warnings? If you do not want to see them, why not simply perform the removal of the archive logs in a separate step? For example:
RMAN> backup database plus archivelog;
RMAN> delete noprompt archivelog all completed before 'sysdate-1';
-
Delete archive logs > 1 day
Hi all
9i
RHEL5
I posted a thread here about how to delete archive logs older than 1 day on both the PRIMARY and the standby database.
But I can't find it anymore.
Is it possible to search the contents of all my threads using the keyword "delete archive logs"?
Thank you all,
JC
Hello;
Your old thread:
Remove the archivelogs and old backups
Best regards
mseberg
-
Archive logs for RAC databases on ASM
Hello
I have a question about archive logs for an ASM database on RAC. I created a database orcl which has instance orcl1 on node1 and orcl2 on node2. To back up this database, I enabled archivelog mode.
After a few transactions and backups, I noticed that two sets of archive logs are created, one on each node, in the folder $ORACLE_HOME/dbs. On node1 they start with arch1_* and on node2 with arch2_*.
Why does it create archive logs on local disks, when ideally it should create them on ASM disks shared between the nodes? My backup fails with an "archived log not found" error because it looks for the archive logs on the other node.
Any input on this will be helpful.
Amith
Hello,
I believe the archive destination is missing from your database configuration, so Oracle uses the default location (i.e. your "$ORACLE_HOME/dbs").
The ARCHIVELOG destination must be on shared storage.
You need the configuration parameters below:
SQL> show parameter db_recovery_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string
db_recovery_file_dest_size           big integer
Or the default configuration location:
SQL> show parameter log_archive_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest                     string
Why does it create archive logs on local disks, when ideally it should create them on ASM disks shared between the nodes? My backup fails with an "archived log not found" error because it looks for the archive logs on the other node.
To resolve this problem, see this example:
SQL> show parameter recover

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string
db_recovery_file_dest_size           big integer 1

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/product/10.2.0/db_1/dbs/
Oldest online log sequence     2
Next log sequence to archive   3
Current log sequence           3

SQL> alter system set db_recovery_file_dest_size=20G scope=both sid='*';
System altered.

SQL> alter system set db_recovery_file_dest='+FRA' scope=both sid='*';
System altered.

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     5
Next log sequence to archive   6
Current log sequence           6
With RMAN
RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT 'sys/oracle@db10g1';
new RMAN configuration parameters:
CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT '*';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN> CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT 'sys/oracle@db10g2';
new RMAN configuration parameters:
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT '*';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN> list archivelog all;
using target database control file instead of recovery catalog
List of Archived Log Copies
Key     Thrd Seq     S Low Time  Name
------- ---- ------- - --------- ----
1       1    3       A 28-FEB-11 /u01/app/oracle/product/10.2.0/db_1/dbs/arch1_3_744216789.dbf
2       2    2       A 27-FEB-11 /u01/app/oracle/product/10.2.0/db_1/dbs/arch2_2_744216789.dbf

RMAN> crosscheck archivelog all;
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=127 instance=db10g1 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=135 instance=db10g2 devtype=DISK
validation succeeded for archived log
archive log filename=/u01/app/oracle/product/10.2.0/db_1/dbs/arch1_3_744216789.dbf recid=1 stamp=744292116
Crosschecked 1 objects
validation succeeded for archived log
archive log filename=/u01/app/oracle/product/10.2.0/db_1/dbs/arch2_2_744216789.dbf recid=2 stamp=743939327
Crosschecked 1 objects

RMAN> backup archivelog all delete input;
Starting backup at 28-FEB-11
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=3 recid=1 stamp=744292116
channel ORA_DISK_1: starting piece 1 at 28-FEB-11
channel ORA_DISK_2: starting archive log backupset
channel ORA_DISK_2: specifying archive log(s) in backup set
input archive log thread=2 sequence=2 recid=2 stamp=743939327
channel ORA_DISK_2: starting piece 1 at 24-FEB-11
channel ORA_DISK_1: finished piece 1 at 28-FEB-11
piece handle=+FRA/db10g/backupset/2011_02_28/annnf0_tag20110228t120354_0.265.744293037 tag=TAG20110228T120354 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: deleting archive log(s)
archive log filename=/u01/app/oracle/product/10.2.0/db_1/dbs/arch1_3_744216789.dbf recid=1 stamp=744292116
channel ORA_DISK_2: finished piece 1 at 24-FEB-11
piece handle=+FRA/db10g/backupset/2011_02_24/annnf0_tag20110228t120354_0.266.743940249 tag=TAG20110228T120354 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: deleting archive log(s)
archive log filename=/u01/app/oracle/product/10.2.0/db_1/dbs/arch2_2_744216789.dbf recid=2 stamp=743939327
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=4 recid=4 stamp=744293023
input archive log thread=2 sequence=3 recid=3 stamp=743940232
channel ORA_DISK_1: starting piece 1 at 28-FEB-11
channel ORA_DISK_1: finished piece 1 at 28-FEB-11
piece handle=+FRA/db10g/backupset/2011_02_28/annnf0_tag20110228t120354_0.267.744293039 tag=TAG20110228T120354 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: deleting archive log(s)
archive log filename=+FRA/db10g/archivelog/2011_02_28/thread_1_seq_4.264.744293023 recid=4 stamp=744293023
archive log filename=+FRA/db10g/archivelog/2011_02_24/thread_2_seq_3.263.743940231 recid=3 stamp=743940232
Finished backup at 28-FEB-11

Starting Control File and SPFILE Autobackup at 28-FEB-11
piece handle=+FRA/db10g/autobackup/2011_02_28/s_744293039.263.744293039 comment=NONE
Finished Control File and SPFILE Autobackup at 28-FEB-11

SQL> alter system archive log current;
System altered.

RMAN> list archivelog all;
using target database control file instead of recovery catalog
List of Archived Log Copies
Key     Thrd Seq     S Low Time  Name
------- ---- ------- - --------- ----
5       1    5       A 28-FEB-11 +FRA/db10g/archivelog/2011_02_28/thread_1_seq_5.264.744293089
6       2    4       A 24-FEB-11 +FRA/db10g/archivelog/2011_02_24/thread_2_seq_4.268.743940307

RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE DISK CLEAR;
old RMAN configuration parameters:
CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT '*';
old RMAN configuration parameters are successfully deleted

RMAN> CONFIGURE CHANNEL 2 DEVICE TYPE DISK CLEAR;
old RMAN configuration parameters:
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT '*';
old RMAN configuration parameters are successfully deleted

RMAN> exit
Recovery Manager complete.
Kind regards
Levi Pereira
Published by: Levi Pereira on February 28, 2011 12:16
-
Unable to back up archive logs with RMAN
Hello
Oracle Version: 10.2.0.2
Operating system: linux
I was unable to back up the archive logs with RMAN. I also changed the archive log destination, but I still get the error.
Published by: SIDDABATHUNI on December 4, 2009 02:59

RMAN> backup database plus archivelog;
Starting backup at 04-DEC-09
current log archived
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup plus archivelog command at 12/04/2009 16:23:18
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file /u01/app/oracle/flash_recovery_area/VSMIG/archivelog/2009_09_10/o1_mf_1_3_5bkc1o5q_.arc
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
RMAN> delete expired archivelog all;
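When archived logs have been removed at the OS level, as the ORA-19625 above suggests, the usual sequence (a sketch) is to let RMAN mark the missing logs as expired first, then delete the expired records:

```sql
RMAN> crosscheck archivelog all;               # marks logs missing on disk as EXPIRED
RMAN> delete noprompt expired archivelog all;  # removes the expired entries from the RMAN repository
```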
-
Which log sequences are in each archive log backup?
Hello
In 10g, I have backups of archived logs (taken by RMAN with a repository database). Is it possible to know which sequences are in each backup?
Thank you.
We do not currently use a repository, but you can check what is available through v$backup_archivelog_details.
Otherwise the data should always be available through the original database view v$archived_log.
HTH - Mark D Powell.
-
How to reduce the log buffer size with Linux huge pages
Data sheet:
Database: Oracle Standard Edition 11.2.0.4
OS: Oracle Linux 6.5
Processor: AMD Opteron
Sockets: 2
Cores / socket: 16
MEM: 252 GB
Current SGA: 122 GB, automatic shared memory management (ASMM)
Special configuration: Linux huge pages for 190 GB of memory, with a 2 MB page size.
Special configuration II: LUKS encryption on all drives.
Question:
1. How can I reduce the size of the log buffer? Currently it shows as 208 MB. I tried setting log_buffer, and it does not change a thing. I checked that the granule size is 256 MB at the current SGA size.
Reason to reduce:
With the larger log buffer, log file parallel write and log file sync average over 45 ms most of the time, because a lot of redo is flushed at once.
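The granule size and the memory actually given to the log buffer can be checked with a short query; a sketch, assuming SELECT access on v$sgainfo:

```sql
-- how big are the SGA granules, and how much memory did the log buffer really get?
SELECT name, bytes/1024/1024 AS mb
FROM   v$sgainfo
WHERE  name IN ('Granule Size', 'Redo Buffers');
```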
Post edited by: CsharpBsharp
You have 32 CPUs and 252 GB of memory, hence 168 private redo threads, so 45 MB as the public redo strand size is not excessive. My example came from a machine running 32-bit Oracle (as indicated by the 64 KB size of its private strands relative to the 128 KB size of yours), with (I think) 4 GB of RAM and 2 CPUs, so a much smaller scale.
Your instance was almost idle in the interval, so I'd probably look outside Oracle; still, checking the OS stats may be informative (is something outside Oracle using a lot of CPU?), and I would want to ask a few questions about the encrypted filesystems (LUKS). It is always possible that there is an accounting error; I remember a version of Oracle that sometimes reported in milliseconds while claiming to report centiseconds, so I'd try to find a way to validate the log file parallel write time.
Check the activity statistics over again (there are many new ones in your version).
Check the wait-event histogram for log file parallel write (and log file sync; the absence of log file sync from the top 5 looks odd given the appearance of log file parallel write and the number of transactions and the redo generated).
Check the log writer trace file for reports of slow writes.
Try to create a controlled test that might show whether or not the reported write time is to be trusted (if you can safely repeat the same operation with extended tracing enabled for the log writer, that would be a very good 30-minute test).
My first impression (which should, of course, be checked carefully) is that the numbers should not be trusted.
Regards
Jonathan Lewis
-
Redo log size with and without MPIT
According to the vSphere Replication architecture, the changed-block information sent by the vSphere Replication agent is captured by the vSphere Replication server in the form of redo logs; once all the blocks are captured, the redo logs get collapsed, but with MPIT (multiple points in time) they are kept. My question is what the size of the redo logs is in both cases, with MPIT and without MPIT.
It depends on the size of your virtual machines, their rate of change, how many points in time you want to keep, how far back, etc. There is no formula for it; there are too many variables.
If you want to get an idea, take an average VM for your environment (rate of change, size, etc.), set the MPIT retention where you want it, run it for the retention period (e.g. if you keep 6 points per day for 4 days, you will have to wait 4 days) and see what size all the snapshots are on the recovery site.
Does that answer your question?
-
Reg: applying archive logs after moving data files
Hi all
I moved the primary server's data files from the D: drive to the E: drive using the command line:
C:\> Move <source_path> <destination_path>
Will the redo for this move operation be applied on the standby database?
In addition, what happens if the data files are moved manually on the primary database (i.e. without using the command prompt)?
Thank you
Madhu
See this doc; search for the keyword "Rename a data file in the primary database":
http://docs.Oracle.com/CD/B28359_01/server.111/b28294/manage_ps.htm#i1034172
Also, you need to update the primary database control file once the file move is done...
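The procedure from that doc, as a hedged sketch (the tablespace and file names below are hypothetical; the tablespace must be offline while the OS-level move happens):

```sql
ALTER TABLESPACE users OFFLINE NORMAL;
-- move the file at the OS level, e.g.:
--   C:\> move D:\oradata\users01.dbf E:\oradata\users01.dbf
ALTER TABLESPACE users RENAME DATAFILE
  'D:\oradata\users01.dbf' TO 'E:\oradata\users01.dbf';
ALTER TABLESPACE users ONLINE;
```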
And also, please close this other thread:
Reg: applying archive logs after moving data files
as that helps keep the forum clean.
-
Thin provisioning does not reduce backup size
I use ghettoVCB to back up some of my virtual machines. I changed some of these VMs to use thin provisioning in an attempt to reduce backup size. When I browse my datastore using the vSphere Client, I see that the thin-provisioned disks take less space. However, when I back up to an NFS store on a Windows Server 2008 R2 machine, the backup sizes are the same as they were with thick provisioning. Is this normal on a Windows NFS share? I did specify thin provisioning in ghettoVCB.
I ran sdelete and VMware Converter to try to shrink the disks; this decreases the disk size I see in the datastore browser, but the size of the backups on the NFS share stayed the same.
I don't have access to an NFS server ATM, but testing on a ZFS volume with de-dupe exported as NFS, I can confirm that on the ESX host the virtual machine appears as thin provisioned; when you do ls -lha on the VMDK it is 8 GB and not 0 GB, but when you look at the datastore no free space was consumed by creating this new virtual machine. It all depends on your NFS server and its configuration. I agree with RParker: you'll want to enable deduplication on your NFS server if it is supported. Thin provisioning is handled differently on an NFS server vs a VMFS volume, and it is dictated by the NFS server itself and how it is configured to handle thin provisioning.
=========================================================================
William Lam
VMware vExpert 2009
Scripts for VMware ESX/ESXi and resources at: http://engineering.ucsb.edu/~duonglt/vmware/
Introduction to the vMA (tips/tricks)
Getting started with vSphere SDK for Perl
VMware Code Central - Scripts/code samples for developers and administrators
If you find this information useful, please give points to "correct" or "useful".
-
Redo log size in Data Guard configurations
DB version: 11.2
Platform: Solaris 10
We currently have a production DB which is not Data Guard. It has a mixed workload: some OLTP and some batch processing.
Its redo log size is 100 MB.
We will create a database with a very similar requirement, but this DB will have a physical standby (Data Guard) with real-time apply.
To adapt to the Data Guard requirements, should we reduce the size of the online redo logs? I.e., transporting small pieces of redo is better than transporting bigger ones. Right?
Hello;
If you use real-time apply, the key is not the size but the standby redo logs.
In most cases, 100 MB is fine. The standby redo logs must be the same size as the online redo logs.
With real-time apply, the SRLs act as a buffer.
Unless you have a real problem with the redo size, I would not change it.
An excellent source of information on this is "Redo Transport Services" in "Data Guard Concepts and Administration 11g Release 2 (11.2)", E10700-02.
If you believe that your logs are too big to begin with, see "Troubleshooting performance problems" [ID 100964.1].
Best regards
mseberg
Published by: mseberg on May 31, 2012 11:33