Switch redo logs every hour to provide point-in-time recovery?
Hi all
Oracle 11g on Windows 2008 R2
I am learning the details of backup/restore management and want to be able to provide a good point-in-time recovery capability in the event of a disaster. Is it advisable to schedule an hourly job that runs ALTER SYSTEM SWITCH LOGFILE;? This forces the redo log to be archived every hour, so if I need to restore, I know I can restore to at least the previous hour (using the archived logs), and earlier?
Thanks in advance
From time to time this may happen anyway as a result of excessive workload. Do you have Enterprise Edition, and have you set FAST_START_MTTR_TARGET? If yes, you can use the Redo Logfile Size Advisor to suggest an adequate size for your redo logs. To be on the safe side, you can also increase the size of the logs (to 1 GB or larger) and control the switching rate by using the ARCHIVE_LAG_TARGET parameter.
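If the goal is simply "at least one archived log per hour", the parameter approach mentioned above can replace the scheduled job; a minimal sketch (assuming an spfile and that hourly granularity is wanted):

```sql
-- Force a log switch whenever the current log has been active for
-- 3600 seconds (1 hour), even if it is not yet full.
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 3600 SCOPE = BOTH;
```

Unlike a Scheduler job, this is enforced by the database itself, so a missed job run cannot leave a gap longer than the target.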
HTH
Aman...
Tags: Database
Similar Questions
-
How do I know the redo apply delay on 11g Active Data Guard?
Hi all
How do I know the redo apply delay on Active Data Guard 11g...
Do we need to wait until the log switch occurs?
Or is it recommended to force a log switch every 15 min, regardless of whether data is updated/inserted on the primary or not?
Please suggest...
Oracle: oracle 11g Release 2
OS: RHEL 5.4
Thank you
Published by: user1687821 on February 23, 2012 12:02

apply finish time   +00 00:00:00.0   day(2) to second(1) interval   February 23, 2012 03:02:16
Here the apply finish time is zero; if there are gaps in the log files, this metric shows the time it will take to resolve the gap.
apply lag           +00 01:50:49     day(2) to second(0) interval   February 23, 2012 03:02:16
transport lag       +00 01:50:49     day(2) to second(0) interval   February 23, 2012 03:02:16
The transport lag is the time for which redo data is not yet available on the standby, i.e. how far the standby database is behind the primary database.
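For reference, figures like those above come from V$DATAGUARD_STATS; a query along these lines (a sketch, run on the standby) returns them:

```sql
-- Run on the standby database: transport lag, apply lag, apply finish time
SELECT name, value, time_computed
  FROM v$dataguard_stats
 WHERE name IN ('transport lag', 'apply lag', 'apply finish time');
```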
SQL> !date
Thu Feb 23 03:03:34 CST 2012
If you compare your current system date with the last apply finish time, the difference is not even a second, so I would say there is probably no delay.
You can test this by setting the DELAY attribute on log_archive_dest_2 (or disabling log_archive_dest_2 for a while), doing two or three log switches, and watching how the gap between primary and standby builds up and is resolved.
-
How to size redo logs correctly to avoid performance problems
Hi Experts,
I work in the following environment.
OS - Windows server 2012
version - 11.2.0.1.0
Server: production server
From the 1st to the 10th of each month we have huge processing, with a lot of DML and DDL involved.
I have implemented RMAN as well.
My alert log entries are as below:
Tue Sep 08 17:05:34 2015
Thread 1 cannot allocate new log, sequence 88278
Private strand flush not complete
  Current log# 1 seq# 88277 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO01.LOG
Thread 1 advanced to log sequence 88278 (LGWR switch)
  Current log# 2 seq# 88278 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO02.LOG
Tue Sep 08 17:05:38 2015
Archived Log entry 81073 added for thread 1 sequence 88277 ID 0x38b3fdf6 dest 1:
Tue Sep 08 17:05:52 2015
Thread 1 cannot allocate new log, sequence 88279
Checkpoint not complete
  Current log# 2 seq# 88278 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO02.LOG
Thread 1 advanced to log sequence 88279 (LGWR switch)
  Current log# 3 seq# 88279 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO03.LOG
Tue Sep 08 17:05:58 2015
Archived Log entry 81074 added for thread 1 sequence 88278 ID 0x38b3fdf6 dest 1:
Tue Sep 08 17:06:16 2015
Thread 1 cannot allocate new log, sequence 88280
Checkpoint not complete
When I checked on the internet I found a few points, and I need clarity on the following:
- It is recommended that redo logs switch at most 5 times per hour, but at peak times I see about 100 redo log switches per hour.
- It is recommended to have large redo logs, but I have 3 redo groups, each 50 MB (the default).
- My redo log groups are not multiplexed; by default each group has only a single member:
Group 1 - D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO01.LOG
Group 2 - D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO02.LOG
Group 3 - D:\APP\ADMINISTRATOR\ORADATA\AWSPROCESS\REDO03.LOG
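To confirm the current layout (sizes, states, and members per group), a query along these lines is a useful starting point (a sketch; run as a DBA user):

```sql
-- One row per redo log member, with the group's size and state
SELECT l.group#, l.thread#, l.bytes/1024/1024 AS size_mb, l.status, f.member
  FROM v$log l
  JOIN v$logfile f ON f.group# = l.group#
 ORDER BY l.group#;
```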
Experts also advise that adding redo log groups in another location will give better performance. Does that mean I need to add new redo log groups on a different drive, or move the existing redo logs themselves to another location?
In the end, my live server runs with the default redo location and size. I'm trying to set this up in a better way and am getting confused.
How do I do it properly?
Experts, please share your recommendations for the redo logs in my environment, or a recommended weblink or document, based on my needs above.
Thanks in advance; if you need more information to help me, just ask and I'll share it.
Hello
You can't resize an existing redo log group.
You will need to create 3 new redo log groups (with a 200 MB size, say) and drop the old redo log groups.
You can find a good explanation here: Oracle DBA tips archives
-
Loss/corruption of one member of a multiplexed redo log group
Grid infrastructure: 11.2.0.4
DB version: 11.2.0.4
OS: RHEL 6.5
All redo logs are multiplexed once, i.e. each has one mirror copy.
I have written a document on how to cope with the loss of one member of a multiplexed redo log group. It applies to DBs on ASM or a Linux filesystem.
If a member of a multiplexed redo log group is lost/damaged while the group is in the ACTIVE or CURRENT state (v$log.status), will the following work?
(Assuming the DB does not crash - well, that's the whole point of multiplexing.)
Step 1. Switch the redo log group and bring it to the INACTIVE state, i.e. LGWR is no longer writing to it.
Step 2. Drop the lost/damaged member:
ALTER DATABASE DROP LOGFILE MEMBER '+DATA_DG1/mbhsprd/onlinelog/group_1.256.834497203';
Apparently this command does not actually remove the log file at the ASM/OS level; it only updates the control file.
Step 3. If it is a corrupted log file, then physically remove the damaged file at the ASM/OS level.
Step 4. Hopefully the command below will create a mirror copy of the survivor:
ALTER DATABASE ADD LOGFILE MEMBER '+DATA_DG1/mbhsprd/onlinelog/group_1.256.834497203' TO GROUP 3;
Will the above steps work?
There is no point in writing the document if you do not actually test what you write.
It seems that you are working with Oracle Managed Files. In that case:
you will find that your "apparently" in step 2 is not correct, and that your step 4 will throw an ORA-01276.
-
Only one standby redo log group is ever ACTIVE
Hi guys,
I have successfully configured Data Guard between a primary database (oradb) and a physical standby database (oradb_s8).
However, I have noticed that in V$STANDBY_LOG only one standby redo log group is ever ACTIVE, regardless of how many times I switch logs on the primary database.
The following is stated in the documentation:
"When a log switch occurs on the redo source database, incoming redo is then written to the next standby redo log group, and the previously used standby redo log group is archived by an ARCn foreground process."
So I expected a different standby redo log group to become ACTIVE whenever the redo log is switched on the primary database.
Could you please clarify it for me?
This is Oracle 11gR2 (11.2.0.1) on Red Hat Server 5.2.
On the standby:
SQL> SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;

    GROUP#    THREAD#  SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
         4          1        248 YES ACTIVE     <- this is the only group that is ever ACTIVE
         5          1          0 NO  UNASSIGNED
         6          0          0 YES UNASSIGNED
         7          0          0 YES UNASSIGNED

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED
       FROM V$ARCHIVED_LOG
      ORDER BY SEQUENCE#;
 SEQUENCE# FIRST_TIM NEXT_TIME APPLIED
---------- --------- --------- ---------
       232 06-SEP-15 06-SEP-15 YES
       233 06-SEP-15 06-SEP-15 YES
       234 06-SEP-15 06-SEP-15 YES
       235 06-SEP-15 06-SEP-15 YES
       236 06-SEP-15 06-SEP-15 YES
       237 06-SEP-15 06-SEP-15 YES
       238 06-SEP-15 06-SEP-15 YES
       239 06-SEP-15 07-SEP-15 YES
       240 07-SEP-15 07-SEP-15 YES
       241 07-SEP-15 07-SEP-15 YES
       242 07-SEP-15 07-SEP-15 YES
       243 07-SEP-15 07-SEP-15 YES
       244 07-SEP-15 07-SEP-15 YES
       245 07-SEP-15 07-SEP-15 YES
       246 07-SEP-15 08-SEP-15 YES
       247 08-SEP-15 08-SEP-15 IN-MEMORY

16 rows selected.
On the primary:
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED
       FROM V$ARCHIVED_LOG
      WHERE NAME = 'ORADB_S8'
      ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIM NEXT_TIME APPLIED
---------- --------- --------- ---------
       232 06-SEP-15 06-SEP-15 YES
       233 06-SEP-15 06-SEP-15 YES
       234 06-SEP-15 06-SEP-15 YES
       235 06-SEP-15 06-SEP-15 YES
       236 06-SEP-15 06-SEP-15 YES
       237 06-SEP-15 06-SEP-15 YES
       238 06-SEP-15 06-SEP-15 YES
       239 06-SEP-15 07-SEP-15 YES
       240 07-SEP-15 07-SEP-15 YES
       241 07-SEP-15 07-SEP-15 YES
       242 07-SEP-15 07-SEP-15 YES
       243 07-SEP-15 07-SEP-15 YES
       244 07-SEP-15 07-SEP-15 YES
       245 07-SEP-15 07-SEP-15 YES
       246 07-SEP-15 08-SEP-15 YES
       247 08-SEP-15 08-SEP-15 NO
If you have a lot of DML activity on the primary, you will see more than one group# ACTIVE in v$standby_log.
RFS will always try to allocate the next available standby log; because the redo from your group #4 has already been applied, it allocates this group again after the switch.
Check the MOS documents: bug 2722195 and note 219344.1.
-
Standby database is down and redo logs are not being transmitted
Hi all,
I have a question about the transmission of the primary database's redo logs to the standby database.
What happens if the transmission of redo logs is stopped for a period of time (1 month)? If I then want to rebuild the standby database, what steps must be applied?
Will the redo logs be put on hold, and once the standby database is back will these redo logs be sent again? Or do I have to rebuild my standby database from scratch?
Kind regards

Hello;
If the standby will be down for the month, I would change the "log_archive_dest_state_n" parameter on the primary to DEFER and leave it like that.
I would then delete the standby database, and if the folder structure was not the same I would correct it.
When the month is up I would use RMAN to duplicate a new standby:
http://www.Visi.com/~mseberg/duprman2.html
And then I would set "log_archive_dest_state_n" back to ENABLE.
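A minimal sketch of the two parameter changes described above, assuming the standby is served by destination 2:

```sql
-- While the standby is down: stop shipping redo to it
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = 'DEFER' SCOPE = BOTH;

-- ... later, after rebuilding the standby with RMAN DUPLICATE ...

-- Resume shipping redo
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = 'ENABLE' SCOPE = BOTH;
```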
Best regards
mseberg
-
Does a redo log switch always trigger a checkpoint?
Hello
Does a redo log switch always invoke the checkpoint process?
I have a test database on which nothing is running, but when I switch the redo log file, it takes time for the redo log state to go from ACTIVE to INACTIVE.
But when I issue alter system checkpoint; the state immediately changes from ACTIVE to INACTIVE.
This gives me the feeling that on a redo log switch it is not mandatory for a checkpoint to occur.
DB version is 11.2.0.1
Kindly comment.
An 11.2.0.4 alert log after 3 "alter system switch logfile" commands in a row:
ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE = BOTH;
Wed Apr 16 22:17:09 2014
Beginning log switch checkpoint up to RBA [0xd8.2.10], SCN: 12670277595657   *
Thread 1 advanced to log sequence 216 (LGWR switch)
  Current log# 3 seq# 216 mem# 0: /u01/app/oracle/oradata/TEST/onlinelog/o1_mf_3_938s43lb_.log
  Current log# 3 seq# 216 mem# 1: /u01/app/oracle/fast_recovery_area/TEST/onlinelog/o1_mf_3_938s49xz_.log
Wed Apr 16 22:17:25 2014
Beginning log switch checkpoint up to RBA [0xd9.2.10], SCN: 12670277595726   *
Thread 1 advanced to log sequence 217 (LGWR switch)
  Current log# 1 seq# 217 mem# 0: /u01/app/oracle/oradata/TEST/onlinelog/o1_mf_1_938s3lbv_.log
  Current log# 1 seq# 217 mem# 1: /u01/app/oracle/fast_recovery_area/TEST/onlinelog/o1_mf_1_938s3nmc_.log
Wed Apr 16 22:17:36 2014
Thread 1 cannot allocate new log, sequence 218
Checkpoint not complete
  Current log# 1 seq# 217 mem# 0: /u01/app/oracle/oradata/TEST/onlinelog/o1_mf_1_938s3lbv_.log
  Current log# 1 seq# 217 mem# 1: /u01/app/oracle/fast_recovery_area/TEST/onlinelog/o1_mf_1_938s3nmc_.log
Wed Apr 16 22:17:40 2014
Completed checkpoint up to RBA [0xd8.2.10], SCN: 12670277595657   +++
Beginning log switch checkpoint up to RBA [0xda.2.10], SCN: 12670277596242   *
Thread 1 advanced to log sequence 218 (LGWR switch)
Notice the lines (marked *) that say "Beginning log switch checkpoint".
However, note that these checkpoints are not "urgent" until we hit a "Checkpoint not complete" problem, and the first checkpoint in the batch completes only some time after the second one has begun. The log switch checkpoint still takes place (or, at least, its necessity is recorded) - but it is not the urgent event it used to be in early versions of Oracle.
Concerning
Jonathan Lewis
-
Many redo log switches within seconds, at irregular intervals
Hello
We are on 11gR2 RAC. We migrated a 10g DB that used to switch logs every 20-25 minutes
while generating about 160 to 200 GB of redo per day; I noticed this newly migrated DB now shows
peaks of rapid log switches from time to time, for example yesterday (excerpt):
SQL> select thread#, first_time from v$log_history order by 2;
...
1 20140524 10:19.36
1 20140524 11:24.34
2 20140524 11:24.35
2 20140524 12:00.02
1 20140524 12:00.03
1 20140524 12:00.05
2 20140524 12:00.08
1 20140524 12:00.14
1 20140524 13:06.19
2 20140524 13:09.44
2 20140524 14:15.46
1 20140524 15:00.08
2 20140524 15:00.50
2 20140524 15:53.56
2 20140524 16:45.37
1 20140524 16:45.39
2 20140524 17:43.26
Ignore the lines around lunchtime, because at that time I start a backup of archive logs; but how do you explain
the switches at 11:24.34 and .35 for example, or at 15:00.08 and .50, then at 16:45? Where
do you think I should look? I see no reason why at these moments my DB would suddenly
generate on the order of 3 GB of redo, and then wait another hour before doing so again (e.g. between 16:45 and 17:43)...
Does it have anything to do with the fact that we are on RAC now?
Thank you very much.
Kind regards
SEB
Nothing to worry about.
Each node in a RAC has its own redo logs and performs its own log switches.
If you look at each thread (1 and 2, corresponding to the two nodes) separately, everything is OK.
-
11gR2
Log switches are occurring every few minutes (2-3 minutes) at peak periods. I'll make a recommendation for larger redo logs so that switches happen every 20 to 30 minutes.
I wanted to know what the downside of very large redo logs would be.
Do you have FAST_START_MTTR_TARGET set? If you do, you can use the Redo Logfile Size Advisor, which can tell you the right size for your redo log files. In addition, you can set the ARCHIVE_LAG_TARGET parameter while choosing a large size for your redo log files.
Aman...
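When FAST_START_MTTR_TARGET is set, the advisor's suggestion mentioned above can be read directly from V$INSTANCE_RECOVERY (a sketch; the column is NULL when the target is not set):

```sql
-- Suggested redo log size, in MB, from the Redo Logfile Size Advisor
SELECT optimal_logfile_size FROM v$instance_recovery;
```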
-
Bandwidth of the network for the redo log switches
Hi all
I'm using Oracle 11gR2. I want to determine the necessary bandwidth from my primary site to the secondary site (using Oracle Data Guard).
My database is a 2-node Oracle RAC; each node has a 2 GB redo log and switches it roughly every 5 minutes during operation. That means each node produces a 2 GB redo log file every 5 minutes, i.e. 4 GB in total every 5 minutes.
I'm planning to set up Oracle Data Guard in maximum performance mode. Does that mean I need a link between the primary and secondary sites that is able to transfer 2 GB + 2 GB every 5 minutes?
I did the calculation below and need your advice:
4 GB / (5 x 60 s) = 13,981 KByte/s
This means I need a link of 13,981 x 8 bit/s = 111,848 Kbit/s?
Is the above correct? Your advice please.
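For what it's worth, the arithmetic above checks out; it can be reproduced from SQL*Plus (binary units assumed, i.e. 4 GB = 4 x 1024 x 1024 KB):

```sql
-- 4 GB of redo every 5 minutes, expressed as KByte/s and Kbit/s
SELECT ROUND(4 * 1024 * 1024 / (5 * 60))     AS kbyte_per_s,  -- ~13981
       ROUND(4 * 1024 * 1024 / (5 * 60) * 8) AS kbit_per_s    -- ~111848
  FROM dual;
```

Note this is the peak rate during switches, not an average; the MAA white paper linked below discusses headroom on top of this figure.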
Kind regards
Please check the two resources below:
http://www.Oracle.com/au/products/database/MAA-WP-10gR2-dataguardnetworkbestpr-134557.PDF
How to calculate the network bandwidth required to transfer redo in Data Guard (Doc ID 736755.1)
-
How do I know if there's bottleneck in the redo logs?
Hi all
EBS R12.2
11gR2
OL 6.5
We have two redo log groups of 1 GB each,
and our archiving generates 13 logs every hour.
-rw-r-. 1 oraprod s/n  956896768 Jan 25 14:00 1_3372_898613761.dbf
-rw-r-. 1 oraprod s/n 1004083200 Jan 25 14:03 1_3373_898613761.dbf
-rw-r-. 1 oraprod s/n  928530432 Jan 25 14:10 1_3374_898613761.dbf
-rw-r-. 1 oraprod s/n  928728576 Jan 25 14:12 1_3375_898613761.dbf
-rw-r-. 1 oraprod s/n  967805952 Jan 25 14:20 1_3376_898613761.dbf
-rw-r-. 1 oraprod s/n  916065792 Jan 25 14:22 1_3377_898613761.dbf
-rw-r-. 1 oraprod s/n  951790592 Jan 25 14:30 1_3378_898613761.dbf
-rw-r-. 1 oraprod s/n  978358272 Jan 25 14:32 1_3379_898613761.dbf
-rw-r-. 1 oraprod s/n  974519808 Jan 25 14:40 1_3380_898613761.dbf
-rw-r-. 1 oraprod s/n  960421376 Jan 25 14:42 1_3381_898613761.dbf
-rw-r-. 1 oraprod s/n  917438976 Jan 25 14:49 1_3382_898613761.dbf
-rw-r-. 1 oraprod s/n  920794624 Jan 25 14:51 1_3383_898613761.dbf
-rw-r-. 1 oraprod s/n  920704000 Jan 25 14:59 1_3384_898613761.dbf
I got these alert log messages:
Mon Jan 25 15:08:37 2016
Completed checkpoint up to RBA [0xd3b.2.10], SCN: 5978324588151
Mon Jan 25 15:10:57 2016
Thread 1 cannot allocate new log, sequence 3388
Private strand flush not complete
  Current log# 1 seq# 3387 mem# 0: /home/oraprod/PROD/data/log01a.dbf
  Current log# 1 seq# 3387 mem# 1: /home/oraprod/PROD/data/log01b.dbf
Beginning log switch checkpoint up to RBA [0xd3c.2.10], SCN: 5978324634623
Thread 1 advanced to log sequence 3388 (LGWR switch)
  Current log# 2 seq# 3388 mem# 0: /home/oraprod/PROD/data/log02a.dbf
  Current log# 2 seq# 3388 mem# 1: /home/oraprod/PROD/data/log02b.dbf
Mon Jan 25 15:11:01 2016
LNS: Standby redo logfile selected for thread 1 sequence 3388 for destination LOG_ARCHIVE_DEST_2
Mon Jan 25 15:11:04 2016
Archived Log entry 6791 added for thread 1 sequence 3387 ID 0x12809081 dest 1:
Mon Jan 25 15:11:17 2016
Completed checkpoint up to RBA [0xd3c.2.10], SCN: 5978324634623
Mon Jan 25 15:13 2016
Incremental checkpoint up to RBA [0xd3c.18d210.0], current log tail at RBA [0xd3c.1a8e82.0]
Mon Jan 25 15:13:04 2016
Thread 1 cannot allocate new log, sequence 3389
Private strand flush not complete
  Current log# 2 seq# 3388 mem# 0: /home/oraprod/PROD/data/log02a.dbf
  Current log# 2 seq# 3388 mem# 1: /home/oraprod/PROD/data/log02b.dbf
Beginning log switch checkpoint up to RBA [0xd3d.2.10], SCN: 5978324673444
Thread 1 advanced to log sequence 3389 (LGWR switch)
  Current log# 1 seq# 3389 mem# 0: /home/oraprod/PROD/data/log01a.dbf
  Current log# 1 seq# 3389 mem# 1: /home/oraprod/PROD/data/log01b.dbf
Mon Jan 25 15:13:07 2016
LNS: Standby redo logfile selected for thread 1 sequence 3389 for destination LOG_ARCHIVE_DEST_2
Mon Jan 25 15:13:09 2016
Archived Log entry 6793 added for thread 1 sequence 3388 ID 0x12809081 dest 1:
Mon Jan 25 15:13:11 2016
Completed checkpoint up to RBA [0xd3d.2.10], SCN: 5978324673444
Is this a sign of a bottleneck? Users complain that every day at 15:00 they encounter performance degradation.
Kind regards
JC
Jenna_C wrote:
We have two redo log groups of size 1 GB and our archiving generates 13 logs every hour.
Is this a sign of a bottleneck? Users complain that every day at 15:00 they encounter performance degradation.
If your users are complaining about slowness at 15:00, then look at what is happening on the system at that point in time. Use Active Session History, provided you have the license for it, at 3 pm to see if there is a problem. Or take AWR snapshots manually around 15:00 in an attempt to capture any anomaly. Or capture snapshots of V$SESSION yourself manually around 15:00 and see how many sessions are waiting, and on what.
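A manual V$SESSION snapshot can be as simple as the following (a sketch; run it a few times around 15:00 and compare the results):

```sql
-- Count non-idle sessions by wait event at this instant
SELECT event, COUNT(*) AS sessions
  FROM v$session
 WHERE wait_class <> 'Idle'
 GROUP BY event
 ORDER BY sessions DESC;
```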
I have seen problems like this on systems sharing physical infrastructure, where activity on one system interferes with another. Watch out for this kind of thing, because if it is caused by another system then you will never find the cause of the problem inside Oracle. Examples include backups that run at a specific time, causing large amounts of sequential disk I/O and slowing down disk access for everything else. Such a scenario could cause redo log write times to increase significantly. Or the backup could be on another system entirely that shares the same disks in the same drive array or SAN, and the drive array or SAN is flooded by the backup's I/O. But because it's another system, you will not see what is happening from your Oracle database system.
Or a virtualized environment with shared CPU and memory, where another virtual machine runs a CPU-intensive job and steals CPU capacity from the Oracle system. I have seen the same thing with memory too - VMware has a strange way of virtualizing memory, which means it can overcommit the memory on the physical hardware, run out, and then have to intervene to somehow free up a little memory, which ends up slowing things down.
It could be a network problem - does something flood the network at 15:00? Again, backups are a common cause of this, when data is transferred over the network to another system. Is there a regular data extract job that runs at 15:00, for example an extract of OLTP data to feed into a data warehouse, which is copied over the network?
It could be many different things causing the slowdown. I would definitely recommend looking at session waits and seeing whether they get worse around 15:00 or stay the same, in which case the problem is elsewhere. You must also rule out everything else - the disks, network, etc.
Good luck
John Brady
-
11gR2 on RHEL 6.2
Our online redo log size is 4 GB.
SQL> select bytes/1024/1024/1024 GB from v$log;
GB
----------
4
4
4
4
But the archive logs are 3.55 GB in size instead of 4 GB. Some of the archive logs below that are even smaller than 3.55 GB must be caused by
the RMAN archive log backup jobs at 10:00, 16:00 and 22:00, which initiate log switches (I guess).
SQL> select (blocks * block_size/1024/1024/1024) GB from v$archived_log where status = 'A';
GB
----------
3.55978966
3.31046581
3.55826092
3.55963707
1.39474106
3.561553
3.55736685
3.55881786
.135155678
3.55546999
.054887295
1.88027525
.078295708
1.97425985
3.55703735
3.55765438
.421986103
3.55839968
< snipped >
Does it have something to do with the FAST_START_MTTR_TARGET parameter? It is set to zero in this DB anyway.
SQL> show parameter mtt

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fast_start_mttr_target               integer     0

SQL>
It could relate to public and private redo strands, as Jonathan Lewis points out in the other discussion that Mihael provided a link to.
It could relate to the ARCHIVE_LAG_TARGET parameter.
It could be the result of RMAN BACKUP commands.
It could be a scheduled script/job that issues ALTER SYSTEM ARCHIVE LOG commands.
Oracle can create an archive log that is smaller than the redo log if a switch or archive command occurs before the redo log is full.
Hemant K Chitale
-
Bottleneck during redo log file switches
Hi all,
I'm using Oracle 11.2.0.3.
The application team has indicated that they face slowness at certain times.
I monitored the database and found that at some redo log file switches (not always), there is slowness at the application level.
I have 2 threads since my database is RAC; each thread has 3 multiplexed redo log groups, with a size of 300 MB each.
Is it possible to optimize the switching of the redo log files, given that my database runs in ARCHIVELOG mode?
Kind regards

Hello
Yes, Oracle recommends 1 commit per 30 user calls or fewer. Of course, every database is different, so this rule cannot be taken too literally, but in your case it seems to apply. In any case, 900 commits per second looks like a very large number, and the need for such a high transaction rate should be questioned. You should talk to your application analysts/management/enterprise architect about whether it is warranted - that is to say, whether the application really does perform almost 2,000 business transactions per second.
Regarding DB CPU: here is a link to a blog post I wrote on this subject; it should help you understand this number better:
http://Savvinov.com/2012/04/06/AWR-reports-interpreting-CPU-usage/
But briefly: DB CPU isn't a real wait event; it is simply an indication that sessions are on CPU (or waiting for CPU) rather than waiting on I/O requests or other database events. That is not necessarily a bad thing, because the database must perform its work and cannot do so without using CPU. It may indicate a problem in two cases: when CPU usage is close to the limit of the host (the OS stats section indicates that you are very far from that), or when DB CPU is a large percentage of DB time - in the latter case this could mean that you are doing too many logical reads due to inefficient plans, or too much parsing. In any case, this does not apply to you, because 20 percent is not a very high number.
Other items in the top-5 list deserve attention too - gc buffer busy acquire, gc current block busy, enq: TX - row lock contention.
To summarize, your database is under a lot of stress - check whether it is legitimate workload, and if so, you may need to upgrade your hardware later. There is a chance that it isn't - for example, a high number of executions may indicate that instead of bulk operations the database code uses PL/SQL loops, which is a big performance killer. Check "Top SQL by executions" to see whether or not this is the case.
Good luck!
Best regards
Nikolai
-
Hello all,
For a while I have wondered what the correct redo log size for a database would be.
I found the note
"New 10g feature: REDO LOG SIZING ADVISORY [ID 274264.1]"
which says:
"+ Rule of thumb: switch logs at most once every fifteen minutes. +"
Well, I came across a DB with about 3 log switches per minute (OEL 4.6, RAC 10gR2 environment).
Do you think that number is WAY over what it should be? What are your redo log sizes?
The note also mentions using the v$instance_recovery view to give advice on the redo log size, which takes the MTTR into account.
Thank you.
ARO
Pinela.
Published by: Pinela on November 7, 2011 14:42

Pinela wrote:
Thank you all for your comments.
jgarry,
Yes, it is a good idea. For the moment, no, there are no errors, messages or complaints. In this case the impact of the redo log size relates to restore and cloning operations.
Aman,
"more than 4-5 switches an hour - in general is considered correct". Interestingly, that is a good baseline. To give more context: the DB in question is about 800 GB, has 50 MB redo logs, switches logs 3-4 times per minute (as mentioned earlier), and has 1,200 users.
With these new values, do you keep or change your opinion?
I would recommend changing the redo log size to 1 or 2 GB.

500 MB is very small for a database of ~800 GB. A redo log file size of about 2-3 GB should be a good start. But please note that merely setting the redo log file size to some value cannot solve the problem by itself - wouldn't the logs still fill too quickly? So IMO, as well as setting the redo log file size to about 3 GB, you should consider setting the ARCHIVE_LAG_TARGET parameter to the switching interval you want to maintain (as already suggested by others). This would let you control the switching rate rather than being dependent on the size of the log file or database activity.
HTH
Aman...
-
ADDM - redo log size increase on ASM
Hi,
I ran ADDM and got the following message:
FINDING 4: 5.8% impact (762 seconds)
------------------------------------
Log file switch operations were consuming significant database time while
waiting for checkpoint completion.
RECOMMENDATION 1: DB Configuration, 5.8% benefit (762 seconds)
ACTION: Verify whether incremental shipping was used for standby
databases.
RECOMMENDATION 2: DB Configuration, 5.8% benefit (762 seconds)
ACTION: Increase the size of the log files to 1839 M to hold at least 20
minutes of redo information.
My database is in archivelog mode, and I use ASM storage. How do I increase the size of the redo log files? Can I do it while the instance is open,
or should I go to MOUNT mode to do this?

To increase the size of redo log files online, you will need to drop them and create them again. But you can't drop the CURRENT or ACTIVE group; you can only drop an INACTIVE or UNUSED group. To arrange that, use alter system switch logfile; and alter system checkpoint; as needed. For example:
SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
D:\ORACLE\PRODUCT\10.2.0\ORADATA\SB\REDO3.LOG
D:\ORACLE\PRODUCT\10.2.0\ORADATA\SB\REDO2.LOG
D:\ORACLE\PRODUCT\10.2.0\ORADATA\SB\REDO1.LOG

SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         2 CURRENT
         3 INACTIVE

SQL> alter database drop logfile group 1;

Database altered.

SQL> alter database add logfile group 1 ('D:\ORACLE\PRODUCT\10.2.0\ORADATA\SB\REDO1.LOG') size 60M reuse;

Database altered.

SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 UNUSED
         2 CURRENT
         3 INACTIVE

SQL> alter system switch logfile;

System altered.

SQL> alter system checkpoint;

System altered.

SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 CURRENT
         2 INACTIVE
         3 INACTIVE

SQL> alter database drop logfile group 2;

Database altered.

SQL> alter database add logfile group 2 ('D:\ORACLE\PRODUCT\10.2.0\ORADATA\SB\REDO2.LOG') size 60M reuse;

Database altered.

SQL>