long log file sync waits
Hello, I am trying to improve the performance of a 10.2.0.5 database that is suffering from high log file sync waits. AWR reports show that this event almost never accounts for less than 20% of database time, normally 20-30%, and it is consistently the top non-idle wait event. For many sessions it takes 80-90% of their time or more.
Log file parallel write is an order of magnitude lower, so it probably isn't an I/O problem. The database performs about 100 commits per second, the user-calls-to-commits ratio is about 20, redo generation is about 500 KB/s, and the log buffer is large at 14 MB. There are about 6-7 redo log switches per hour on average.
There are several oddities about the log file sync waits here, and I'd appreciate any help unravelling them:
1) There are almost two log file sync waits per commit on average (while the number of log file parallel writes is about the same as the number of commits). Why?
2) According to ASH, nearly half of the log file sync waits take close to 97.7 ms. Is there some special reason for this spike?
3) Approximately 0.5% of the log file sync waits captured by ASH show multisecond wait times. I have no idea where they may come from or how to diagnose this kind of problem.
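For anyone wanting to reproduce these ratios outside AWR, a couple of diagnostic queries against the standard 10.2 dynamic views can help. This is only a sketch (view and column names as documented, untested here); figures are cumulative since instance startup:

```sql
-- Ratio of log file sync waits to commits.
-- A value well above 1 means sessions wait more than once per
-- commit (e.g. re-waits after wait timeouts).
SELECT e.total_waits AS sync_waits,
       s.value       AS user_commits,
       ROUND(e.total_waits / s.value, 2) AS syncs_per_commit
FROM   v$system_event e, v$sysstat s
WHERE  e.event = 'log file sync'
AND    s.name  = 'user commits';

-- Distribution of log file sync wait times sampled by ASH.
-- time_waited is in microseconds; only completed waits are > 0.
SELECT ROUND(time_waited / 1000) AS wait_ms, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  event = 'log file sync'
AND    time_waited > 0
GROUP  BY ROUND(time_waited / 1000)
ORDER  BY wait_ms;
```

Remember that ASH is sampled, so the second query is biased toward longer waits.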
Some parts of an AWR report are below; please let me know if anything else is needed.
Best regards
Nikolai
Cache Sizes
~~~~~~~~~~~ Begin End
---------- ----------
Buffer Cache: 44,592M 44,672M Std Block Size: 16K
Shared Pool Size: 3,104M 3,024M Log Buffer: 14,288K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 545,833.77 6,659.29
Logical reads: 225,711.23 2,753.73
Block changes: 1,788.11 21.82
Physical reads: 1,195.98 14.59
Physical writes: 119.02 1.45
User calls: 2,368.64 28.90
Parses: 737.35 9.00
Hard parses: 94.58 1.15
Sorts: 261.75 3.19
Logons: 5.93 0.07
Executes: 1,796.12 21.91
Transactions: 81.97
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
CPU time 23,580 19.8
log file sync 491,840 21,976 45 18.5 Commit
db file sequential read 1,902,069 12,604 7 10.6 User I/O
read by other session 743,414 4,159 6 3.5 User I/O
log file parallel write 220,772 3,069 14 2.6 System I/O
-------------------------------------------------------------
Instance Activity Stats ******************
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
Cached Commit SCN referenced 11,130 3.1 0.0
Commit SCN cached 3 0.0 0.0
DB time 12,547,531 3,483.3 42.5
DBWR checkpoint buffers written 145,437 40.4 0.5
DBWR checkpoints 6 0.0 0.0
...
IMU CR rollbacks 2,566 0.7 0.0
IMU Flushes 36,411 10.1 0.1
IMU Redo allocation size 187,480,124 52,045.3 635.0
IMU commits 255,350 70.9 0.9
IMU contention 11,998 3.3 0.0
IMU ktichg flush 11 0.0 0.0
IMU pool not allocated 4,295 1.2 0.0
IMU recursive-transaction flush 175 0.1 0.0
IMU undo allocation size 2,029,937,952 563,519.5 6,875.1
IMU- failed to get a private str 4,295 1.2 0.0
...
background checkpoints completed 6 0.0 0.0
background checkpoints started 6 0.0 0.0
background timeouts 11,400 3.2 0.0
...
change write time 9,398 2.6 0.0
cleanout - number of ktugct call 54,233 15.1 0.2
cleanouts and rollbacks - consis 9,436 2.6 0.0
cleanouts only - consistent read 2,028 0.6 0.0
...
Instance Activity Stats *******************
Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
commit batch performed 61 0.0 0.0
commit batch requested 61 0.0 0.0
commit batch/immediate performed 94 0.0 0.0
commit batch/immediate requested 94 0.0 0.0
commit cleanout failures: block 51 0.0 0.0
commit cleanout failures: buffer 14 0.0 0.0
commit cleanout failures: callba 8,632 2.4 0.0
commit cleanout failures: cannot 10,575 2.9 0.0
commit cleanouts 991,186 275.2 3.4
commit cleanouts successfully co 971,914 269.8 3.3
commit immediate performed 33 0.0 0.0
commit immediate requested 33 0.0 0.0
commit txn count during cleanout 61,737 17.1 0.2
...
redo blocks written 4,078,889 1,132.3 13.8
redo buffer allocation retries 177 0.1 0.0
redo entries 1,924,106 534.1 6.5
redo log space requests 174 0.1 0.0
redo log space wait time 2,333 0.7 0.0
redo ordering marks 4,553 1.3 0.0
redo size 1,966,229,700 545,833.8 6,659.3
redo subscn max counts 38,227 10.6 0.1
redo synch time 2,233,166 619.9 7.6
redo synch writes 352,935 98.0 1.2
redo wastage 56,259,980 15,618.0 190.5
redo write time 316,495 87.9 1.1
redo writer latching time 19 0.0 0.0
redo writes 220,866 61.3 0.8
rollback changes - undo records 134 0.0 0.0
rollbacks only - consistent read 11,242 3.1 0.0
...
...
transaction rollbacks 94 0.0 0.0
transaction tables consistent re 25 0.0 0.0
transaction tables consistent re 8,704 2.4 0.0
undo change vector size 1,176,156,772 326,506.2 3,983.5
user I/O wait time 2,039,881 566.3 6.9
user calls 8,532,422 2,368.6 28.9
user commits 295,139 81.9 1.0
user rollbacks 122 0.0 0.0
...
-------------------------------------------------------------
High "log file sync" waits can also arise when LGWR is unable to get onto the CPU to post the foreground process that the log write is complete. The foreground waits on this event until LGWR posts it, but if LGWR cannot get the CPU it cannot signal back quickly enough.
8 processors for 1 hour would be 28,800 seconds. 7 CPUs is an odd number, but that works out to 25,200 seconds available. Your AWR shows that Oracle accounted for 23,580 seconds of CPU time. So your server is probably encountering situations where processes are unable to get onto the CPU.
You can pin or renice LGWR - but you need to check with Oracle Support whether this is doable and supported on your platform.
What you need to do is tune the sessions that perform very high logical reads (a simple rule of thumb is 10K blocks per CPU per second, and you are hitting 225K blocks/s on 7 processors!) to reduce the logical reads and CPU consumption.
Or add more CPU.
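The arithmetic above can be replayed as a quick sanity check; the CPU count (7) and the AWR CPU time (23,580 s) are taken from this discussion, so substitute your own figures:

```sql
-- Back-of-envelope CPU headroom for a 1-hour AWR interval.
SELECT 7 * 3600                           AS cpu_secs_available,
       23580                              AS cpu_secs_used,
       ROUND(23580 / (7 * 3600) * 100, 1) AS pct_cpu_busy
FROM   dual;
-- ~93.6% busy: close enough to saturation that LGWR can sit in the
-- run queue, inflating log file sync without any I/O problem.
```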
Hemant K Collette
http://hemantoracledba.blogspot.com
Tags: Database
Similar Questions
-
Hello all,
DB 11.2.0.1 - 32 cores - 64 GB of RAM
I have a database into which 1.5 million records are inserted every hour. This database is suffering badly from log file sync waits. Googling the issue, I found that the reason is the way the application inserts data: each single-row insert is followed by a commit. Currently we are unable to change the insertion method to use batch inserts instead. The database has 22 redo log files of 200 MB each, and the log_buffer is about 150 MB.
Is there a solution to reduce the number of log file sync waits?
I tried increasing the redo log files to 700 MB each, but then some buffer busy waits appeared for a while. The SGA is 38 GB.
Thanks for any guidance
Regards
You could try the asynchronous commit setting method (COMMIT_WRITE). In the example here,
"One of the ways to eliminate log file sync waits",
I eliminate log file sync.
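For reference, the asynchronous commit mechanism mentioned above can be enabled per session or per statement; this is a sketch of the 10.2/11g syntax. Note the durability trade-off: with NOWAIT the session no longer waits for LGWR, so an instance crash can lose transactions that appeared committed.

```sql
-- Session-level: all commits in this session become batched and
-- asynchronous (no log file sync wait, at the cost of durability).
ALTER SESSION SET COMMIT_WRITE = 'BATCH,NOWAIT';

-- Statement-level: relax durability for a single commit only.
COMMIT WRITE BATCH NOWAIT;
```

This is usually acceptable for rebuildable data feeds, but not for anything where a confirmed commit must survive a crash.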
-
Wait events "log file parallel write" / "log file sync" during CREATE INDEX
Hello guys,
In my current project I'm running some performance tests for Oracle Data Guard. The question is: "How does LGWR SYNC transfer influence the performance of the system?"
To get baseline values that I can compare against, I first built a normal Oracle database.
Now I perform various tests such as creating "wide" indexes, massive parallel inserts/commits, etc., to get the benchmark figures.
My database is Oracle 10.2.0.4 with multiplexed log files on AIX.
I create an index on a "normal" table... I run "dbms_workload_repository.create_snapshot()" before and after the CREATE INDEX to get an equivalent period for the AWR report.
Once the index was built (roughly 9 GB), I ran awrrpt.sql for the AWR report.
Now take a look at these values from the AWR:
How can this be possible?

                                        %Time  Total Wait    Avg wait    Waits
Event                           Waits   -outs    Time (s)        (ms)     /txn
---------------------------- --------  ------  ----------  ----------  -------
log file parallel write        10,019      .0         132          13     33.5
log file sync                     293      .7           4          15      1.0
According to the documentation:
-> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
"Wait Time: The wait time includes the writing of the log buffer and the post."
-> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
"Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk."
This was also my understanding... the "log file sync" wait time should be higher than the "log file parallel write" time, because it includes the I/O plus the response time back to the user's session.
I could accept it if the values were near each other (perhaps within about 1 second altogether)... but the difference between 132 and 4 seconds is too striking.
Is the behavior of log file sync/write different when you do DDL such as CREATE INDEX (maybe async... like you can influence it with the COMMIT_WRITE initialization parameter)?
Do you have any idea how these values arise?
Ideas/thoughts are welcome.
Thanks and greetings
-
log file sync much larger than log file parallel write
Hi all
The average log file sync wait is 30 ms while log file parallel write is only 10 ms. What does this mean? What are the main reasons for this difference?
Sincerely yours.
A. U.
Hello,
"The average log file sync wait is 30 ms while log file parallel write is only 10 ms. What are the main reasons for this difference?"
Essentially, when the log writer writes, several sessions may be waiting on it. During a single 10 ms write you can have one lgwr write and 3 user sessions waiting on 'log file sync'.
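This piggybacking effect can be checked directly: dividing the two wait counts gives the average number of sessions posted per LGWR write. A hedged sketch against v$system_event (standard view, untested here):

```sql
-- If several sessions piggyback on each LGWR write, the average
-- 'log file sync' time includes time queued behind the write that
-- was already in flight when the session committed.
SELECT sync.total_waits AS sync_waits,
       wrt.total_waits  AS lgwr_writes,
       ROUND(sync.total_waits / wrt.total_waits, 1) AS sessions_per_write
FROM   v$system_event sync, v$system_event wrt
WHERE  sync.event = 'log file sync'
AND    wrt.event  = 'log file parallel write';
```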
Kind regards
Franck.
-
Extended waits on log file sync while log file parallel write is fine
We have a 9.2.0.8 database that experiences long waits on log file sync (average wait time = 46 ms) while log file parallel write is very good (average wait time is less than 1 millisecond).
The application is middleware; it connects several other applications together. A single user action in one application triggers several requests back and forth through this middleware, so it needs db response times in milliseconds.
The database is quite simple:
- It has a few config tables that the application reads but rarely updates.
- It has a TRANSACTION_HISTORY table: the application inserts records into this table using single-row inserts (about 100 rows per second); each insert is followed by a commit.
Records are kept for several months and then purged. The table has only VARCHAR2/NUMBER/DATE columns, no LOBs, LONGs, etc. The table has 4 non-unique single-column indexes.
The average row length is 100 bytes.
The load profile does not show anything unusual; the main figures: 110 transactions per second, average transaction size = 1.5 KB.
The data below are for a 1-hour interval (the purge wasn't running during this interval); the physical read and physical write rates are low:
Load Profile
~~~~~~~~~~~~                 Per Second  Per Transaction
                            -----------  ---------------
Redo size:                   160,164.75         1,448.42
Logical reads:                57,675.25           521.58
Block changes:                   934.90             8.45
Physical reads:                   76.27             0.69
Physical writes:                  86.10             0.78
User calls:                      491.69             4.45
Parses:                          321.24             2.91
Hard parses:                       0.09             0.00
Sorts:                           126.96             1.15
Logons:                            0.06             0.00
Executes:                      1,956.91            17.70
Transactions:                    110.58
The Top 5 events are dominated by log file sync:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                            % Total
Event                                       Waits    Time (s)  Ela Time
--------------------------------------  ------------ --------- --------
log file sync                               401,608    18,448     59.94
db file parallel write                      124,044     3,404     11.06
CPU time                                                3,097     10.06
enqueue                                      10,476     2,916      9.48
db file sequential read                     261,947     2,435      7.91
Wait Events section:
                                                              Avg
                                          Total Wait   wait   Waits
Event                            Waits  Timeouts  Time (s)   (ms)   /txn
---------------------------- ---------- -------- ---------- ------ ------
log file sync                   401,608        0     18,448     46    1.0
db file parallel write          124,044        0      3,404     27    0.3
enqueue                          10,476      277      2,916    278    0.0
db file sequential read         261,947        0      2,435      9    0.7
buffer busy waits                11,467       67        173     15    0.0
SQL*Net more data to client   1,565,619        0         79      0    3.9
row cache lock                    2,800        0         52     18    0.0
control file parallel write       1,294        0         45     35    0.0
log file switch completion          261        0         36    138    0.0
latch free                        2,087    1,446         24     12    0.0
PL/SQL lock timer                     1        1         20  19531    0.0
log file parallel write         143,739        0         17      0    0.4
db file scattered read            1,644        0         17     10    0.0
log file sequential read            636        0          8     13    0.0
The log buffer is about 1.3 MB. We could increase the log buffer, but there are no log buffer space waits, so I doubt this will help.
The redo logs have their own file systems, not shared with the data files. This explains the difference between the average wait on log file parallel write (less than 1 ms) and db file parallel write (27 ms).
The redo logs are 100 MB; there are about 120 log switches per day.
What has changed: the insert/commit rate grew. Several months ago there were 25 inserts/commits per second into the TRANSACTION_HISTORY table; now we get 110 inserts/commits per second.
What problem it causes the application: due to the slowed database response, the (Java-based) application requires more and more threads.
MOS documents on log file sync (for example, 1376916.1 "Troubleshooting: log file sync waits") recommend comparing the average wait time on log file sync with the average log file parallel write.
If the values are close (for example log file sync = 20 ms and log file parallel write = 10 ms) then the waits are IO-related. However, that is not the case here.
There was a bug (2669566) in 9.2 which resulted in lgwr underreporting log file parallel write time. I wrote about it in September 2005, at which point the bug was present in 9.2.0.6 and reported as fixed in 10.1: log file parallel write (JL Comp). It is possible that your problem IS the log file writes.
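To look at the raw figures on a live 9.2 system, where v$system_event reports time_waited in centiseconds, something like this sketch can be used (untested here; on an affected platform the write time itself may be underreported):

```sql
-- Average wait per event in milliseconds. A suspiciously low
-- 'log file parallel write' on 9.2 may be the underreporting bug
-- rather than genuinely fast I/O.
SELECT event,
       total_waits,
       ROUND(time_waited * 10 / total_waits, 2) AS avg_ms
FROM   v$system_event
WHERE  event IN ('log file parallel write', 'log file sync');
```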
Regards
Jonathan Lewis
-
log file sync problem
Oracle 11.2.0.1
IBM AIX 6.1 on a POWER7 machine with 64 GB of memory.
32 logical processors on 8 physical cores...
Here is an excerpt from a 20-minute AWR report; DB time is 132.95 (mins).
The machine is not swapping processes... From what I understand, log file parallel write should account for a significant part of the log file sync time.

Top 5 Timed Foreground Events
Event           Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
log file sync   1,769,997    3,939              2      52.38  Commit

Currently log file sync takes a total of 3,939 seconds while log file parallel write takes a total of 299 seconds.
I went through different blogs on this event and checked the CPU usage, machine swap, memory, etc... All thoughts on settling this one are welcome because I'm out of ideas.
Thanks for your time...
It might be worthwhile to set batch commits on, if you accept the possible risk, of course...
-
Does "redo write time" include "log file parallel write"?
IO performance guru Christian said in his blog:
http://christianbilien.WordPress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-IO/
The log file sync wait can be divided into:
1. The "redo write time": this is the total elapsed time of writing the redo log buffer to the redo log file (in centiseconds).
2. The "log file parallel write" is the actual time for the log write I/O to complete.
3. LGWR may have some post-processing to do, then signals the waiting foreground process that the write is complete. The foreground process is finally woken up by the system dispatcher. This completes the "log file sync" wait.
In his view, there is no overlap between "redo write time" and "log file parallel write".
But in MetaLink note 34592.1:
The log file sync wait may be broken down into the following:
1. Waking up LGWR if idle
2. LGWR gathering the redo to be written and issuing the I/O
3. The time for the log write I/O to complete
4. LGWR I/O post-processing
...
Notes on tuning each log file sync component from the breakdown above:
Steps 2 and 3 are accumulated in the "redo write time" statistic (found in the Statspack and AWR instance activity statistics).
Step 3 is the wait event "log file parallel write". (Note 34583.1: "log file parallel write" Reference Note)
MetaLink says there is overlap, since "redo write time" includes steps 2 and 3 while "log file parallel write" covers only step 3, so the "log file parallel write" time is only part of the "redo write time". Is the MetaLink note wrong, or have I missed something?
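One way to test the two interpretations empirically is to compare the two figures on a live system: if "redo write time" is consistently a little larger than the "log file parallel write" time, the MetaLink breakdown (steps 2+3 versus step 3 alone) fits. A hedged sketch, with both figures converted to milliseconds ("redo write time" and time_waited are in centiseconds):

```sql
SELECT s.value * 10       AS redo_write_ms,  -- steps 2+3 per MetaLink
       e.time_waited * 10 AS lfpw_ms         -- step 3 only
FROM   v$sysstat s, v$system_event e
WHERE  s.name  = 'redo write time'
AND    e.event = 'log file parallel write';
```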
-
log file sync event
Hi all,
We use Oracle 9.2.0.4 on SUSE Linux 10. In the statspack report, one of the top timed events is log file sync.
We are in production. We do not use any storage array. Is this a bug in 9.2.0.4, or what is the solution for it?
STATSPACK report for

DB Name      DB Id        Instance  Inst Num  Release      Cluster  Host
------------ -----------  --------  --------  -----------  -------  ------------
ai           1495142514   ai               1  9.2.0.4.0    NO       ai-oracle

              Snap Id  Snap Time           Sessions  Curs/Sess  Comment
              -------  ------------------  --------  ---------  -------
Begin Snap:       241  03-Sep-09 12:17:17       255       63.2
  End Snap:       242  03-Sep-09 12:48:50       257       63.4
   Elapsed:     31.55 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
     Buffer Cache:  1,280M   Std Block Size:      8K
 Shared Pool Size:    160M       Log Buffer:  1,024K

Load Profile
~~~~~~~~~~~~               Per Second  Per Transaction
                          -----------  ---------------
          Redo size:         7,881.17         8,673.87
      Logical reads:        14,016.10        15,425.86
      Block changes:            44.55            49.04
     Physical reads:         3,421.71         3,765.87
    Physical writes:             8.97             9.88
         User calls:           254.50           280.10
             Parses:            27.08            29.81
        Hard parses:             0.46             0.50
              Sorts:             8.54             9.40
             Logons:             0.12             0.13
           Executes:           139.47           153.50
       Transactions:             0.91

  % Blocks changed per Read:    0.32    Recursive Call %:     42.75
 Rollback per transaction %:   13.66       Rows per Sort:    120.84

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
            Buffer  Hit   %:   75.59    In-memory Sort %:   99.99
            Library Hit   %:   99.55        Soft Parse %:   98.31
         Execute to Parse %:   80.58         Latch Hit %:  100.00
Parse CPU to Parse Elapsd %:   67.17     % Non-Parse CPU:   99.10

 Shared Pool Statistics        Begin   End
                               ------  ------
             Memory Usage %:    95.32   96.78
    % SQL with executions>1:    74.91   74.37
  % Memory for SQL w/exec>1:    68.59   69.14

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                            % Total
Event                                     Waits    Time (s)  Ela Time
------------------------------------  -----------  --------  --------
log file sync                              11,558    10,488     67.52
db file sequential read                   611,828     3,214     20.69
control file parallel write                   436       541      3.48
buffer busy waits                             626       522      3.36
CPU time                                                395      2.54
-------------------------------------------------------------

Wait Events for DB: ai  Instance: ai  Snaps: 241-242
-> s - second; cs - centisecond (100th of a second);
   ms - millisecond (1000th of a second); us - microsecond (1000000th of a second)
-> ordered by wait time desc, waits desc (idle events last)

                                                               Avg
                                              Total Wait      wait   Waits
Event                            Waits  Timeouts  Time (s)    (ms)    /txn
---------------------------- ---------- -------- ---------- ------ --------
log file sync                    11,558    9,981     10,488    907      6.7
db file sequential read         611,828        0      3,214      5    355.7
control file parallel write         436        0        541   1241      0.3
buffer busy waits                   626      518        522    834      0.4
control file sequential read        661        0        159    241      0.4
BFILE read                          734        0        110    151      0.4
db file scattered read          595,462        0         81      0    346.2
enqueue                              15        5         19   1266      0.0
latch free                          109       22          1      8      0.1
db file parallel read               102        0          1      6      0.1
log file parallel write           1,498    1,497          1      0      0.9
BFILE get length                    166        0          0      3      0.1
SQL*Net break/reset to clien        199        0          0      1      0.1
SQL*Net more data to client       5,139        0          0      0      3.0
BFILE open                           76        0          0      0      0.0
row cache lock                        5        0          0      0      0.0
BFILE internal seek                 734        0          0      0      0.4
BFILE closure                        76        0          0      0      0.0
db file parallel write              173        0          0      0      0.1
direct path read                     18        0          0      0      0.0
direct path write                     4        0          0      0      0.0
SQL*Net message from client     480,888        0    284,247    591    279.6
virtual circuit status               64       64      1,861  29072      0.0
wakeup time manager                  59       59      1,757  29781      0.0
Your elapsed time is about 2,000 seconds (31:55 rounded up), and your log file sync time is 10,488 seconds - which is about 5 seconds of sync wait per elapsed second for the duration. Your session count is about 250 at both the beginning and end of the snapshot - so if we assume the number of sessions was stable for the duration, each session suffered about 40 seconds of log file sync in the interval. You recorded roughly 1,500 transactions in the interval (0.91 per second, about 13 per cent of them rollbacks) - so your log file sync time averaged more than 6.5 seconds per commit.
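The arithmetic can be replayed directly from the Statspack figures quoted (31.55 minutes elapsed, 10,488 s of log file sync, ~250 sessions, ~1,500 transactions); a quick sketch:

```sql
SELECT ROUND(10488 / (31.55 * 60), 1) AS sync_secs_per_elapsed_sec,
       ROUND(10488 / 250)             AS sync_secs_per_session,
       ROUND(10488 / 1500, 1)         AS sync_secs_per_commit
FROM   dual;
-- Roughly 5.5 s of sync per elapsed second, ~42 s per session,
-- ~7 s per commit: not physically plausible unless something stalled.
```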
Whichever way you look at it, this suggests that either the log file sync numbers are false or you had a temporary outage. Given that you also had some buffer busy waits and control file writes averaging roughly 900 ms each, a hardware glitch seems likely.
Check the log file parallel write times to see if this helps confirm the hypothesis. (Unfortunately some platforms did not report log file parallel write time properly in earlier versions of 9.2 - so this may not help.)
You also have 15 enqueue waits averaging about 1.2 seconds - check the enqueue statistics section of the report to see which enqueue it was: if it was, for example, CF (control file), then that also helps confirm the hardware hypothesis.
It is possible that you had a couple of hardware resets or something like that in the interval, which stalled your system quite dramatically for a minute or two.
Regards
Jonathan Lewis
http://jonathanlewis.WordPress.com
http://www.jlcomp.demon.co.uk
"Science is more than a body of knowledge; it's a way of thinking." - Carl Sagan
-
Here's my question, after tons of research and testing without finding the right solution.
The setup:
(1) I have a 12.1.0.2 single-instance Enterprise Edition primary database "testdb" running on server "node1".
(2) I created a physical standby database "stbydb" on server "node2".
(3) Data Guard runs in MaxAvailability (SYNC) mode with real-time apply, the 12c default.
(4) The primary database has 3 single-member redo log groups. (/oraredo/testdb/redo01.log, redo02.log, redo03.log)
(5) I created 4 standby redo logfiles (/oraredo/testdb/stby01.log, stby02.log, stby03.log, stby04.log)
(6) I take RMAN backups (database and archivelog) on the standby site only.
(7) I want to use this backup for a full restore of the database on the primary.
This is a DR test to simulate the scenario of losing both the primary and standby servers entirely.
Here is how I back up, on the standby database:
(1) Run "alter database recover managed standby database cancel" to ensure consistent data files
(2) RMAN > backup database;
(3) RMAN > backup archivelog all;
I took the backup pieces and copied them to the primary db server, something like:
/home/oracle/backupset/o1_mf_nnndf_TAG20151002T133329_c0xq099p_.bkp (datafiles)
/home/oracle/backupset/o1_mf_ncsnf_TAG20151002T133329_c0xq0sgz_.bkp (spfile & controlfile)
/home/oracle/backupset/o1_mf_annnn_TAG20151002T133357_c0xq15xf_.bkp (archivelogs)
So here's how I restore, on the primary site:
I cleaned out all the files (data files, control files, everything gone).
(1) Restore the spfile to a pfile
RMAN > startup nomount
RMAN > restore spfile to pfile '/home/oracle/pfile.txt' from '/home/oracle/backupset/o1_mf_ncsnf_TAG20151002T133329_c0xq0sgz_.bkp';
(2) Modify the pfile to convert its content for a primary db. The pfile is shown below:
*.audit_file_dest='/opt/Oracle/DB/admin/testdb/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='/oradata/testdb/control01.ctl','/orafra/testdb/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_file_name_convert='/testdb/','/testdb/'
*.db_name='testdb'
*.db_recovery_file_dest='/orafra'
*.db_recovery_file_dest_size=10737418240
*.db_unique_name='testdb'
*.diagnostic_dest='/opt/Oracle/DB'
*.fal_server='stbydb'
*.log_archive_config='dg_config=(testdb,stbydb)'
*.log_archive_dest_2='service=stbydb SYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=stbydb'
*.log_archive_dest_state_2='ENABLE'
*.log_file_name_convert='/testdb/','/testdb/'
*.memory_target=1800m
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='AUTO'
*.undo_tablespace='UNDOTBS1'
(3) Restart the db with the updated pfile
SQLPLUS > create spfile from pfile='/home/oracle/pfile.txt'
SQLPLUS > shutdown
SQLPLUS > startup nomount
(4) Restore the controlfile
RMAN > restore primary controlfile from '/home/oracle/backupset/o1_mf_ncsnf_TAG20151002T133329_c0xq0sgz_.bkp';
RMAN > alter database mount;
(5) Catalog all backup pieces
RMAN > catalog start with '/home/oracle/backupset/';
(6) Restore and recover the database
RMAN > restore database;
RMAN > recover database until scn XXXXXX;  (this SCN is the maximum in the archivelog backups, which extends beyond the SCN of the datafile backup)
(7) Open resetlogs
RMAN > alter database open resetlogs;
Everything seems perfect, except that one of the standby redo log files is not regenerated:
SQL > select * from v$standby_log;
ERROR:
ORA-00308: cannot open archived log '/oraredo/testdb/stby01.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
no rows selected
I intend to use the same backup to restore both the primary and the standby, to save tape traffic and downtime in a real-world outage.
So I did exactly the same steps (except RESTORE STANDBY CONTROLFILE, and no recover after the database restore) to restore the standby database.
And I got the same missing log file.
The problem is:
(1) The alert.log fills up with this error; that is not the concern here.
(2) Real-time apply won't work, since the standby always shows "WAITING_FOR_LOG".
(3) I can't drop and re-create this log file.
Then I tried several things and found:
The missing standby logfile was still "ACTIVE" at the time the RMAN backup was taken.
For example, on the standby db below, Group #4 (stby01.log) would be lost after the restore.
SQL > select GROUP#, SEQUENCE#, USED, STATUS from v$standby_log;

GROUP#  SEQUENCE#    USED  STATUS
------  ---------  ------  ----------
     4         19  133632  ACTIVE
     5          0       0  UNASSIGNED
     6          0       0  UNASSIGNED
     7          0       0  UNASSIGNED
So before taking the backup, I tried this on the primary database:
SQL > alter system set log_archive_dest_state_2 = defer;
After this, Group #4 of the standby_log on the standby side was released:
SQL > select GROUP#, SEQUENCE#, USED, STATUS from v$standby_log;

GROUP#  SEQUENCE#    USED  STATUS
------  ---------  ------  ----------
     4          0       0  UNASSIGNED
     5          0       0  UNASSIGNED
     6          0       0  UNASSIGNED
     7          0       0  UNASSIGNED
Then the backup restored correctly, with no missing standby logfile.
However, changing this on the primary database means breaking Data Guard protection while the backup is taken. That is not acceptable in a production environment.
Finally, my real questions:
(1) Is there anything I can do other than the parameter change?
(2) I know I can drop the standby redo logs before the controlfile is recreated and recreate them afterwards. Is there any simple/fast way to avoid losing the standby logfile, or to recreate the lost one?
I understand there are a number of ways to work around this: keep a copy of the standby log file before the restore and copy the missing one back, etc., etc...
And yes, I could always fall back to non-real-time apply "using archived logfile", but that is also not an acceptable protection mode for production.
I just want proof that the design (which appears in a few Oracle docs; Doc ID 602299.1 is one of them) of backing up on the standby actually works and can be used to restore both sites, and that it can be done without spending more time taking extra backups or putting the backup load on the primary database.
Your ideas are very much appreciated.
Thank you!
Hello,
1st --> When we take a backup via RMAN, RMAN does not back up the redo log files (ORL or SRL), so we cannot expect ORLs or SRLs to be restored.
2nd --> When we open the database, the ORLs are deleted and re-created.
3rd --> Likewise, an SRL should not be an issue; we should be able to drop and recreate it.
DR sys@cdb01 SQL > select THREAD#, SEQUENCE#, GROUP#, STATUS from v$standby_log;

THREAD#  SEQUENCE#  GROUP#  STATUS
-------  ---------  ------  ----------
      1        233       4  ACTIVE
      1        238       5  ACTIVE
DR sys@cdb01 SQL > select * from v$logfile;

GROUP#  STATUS  TYPE     MEMBER                          IS_  CON_ID
------  ------  -------  ------------------------------  ---  ------
     3          ONLINE   /u03/cdb01/cdb01/redo03.log     NO        0
     2          ONLINE   /u03/cdb01/cdb01/redo02.log     NO        0
     1          ONLINE   /u03/cdb01/cdb01/redo01.log     NO        0
     4          STANDBY  /u03/cdb01/cdb01/stdredo01.log  NO        0
     5          STANDBY  /u03/cdb01/cdb01/stdredo02.log  NO        0
DR sys@cdb01 SQL > ! ls -ltr /u03/cdb01/cdb01/stdredo01.log
ls: cannot access /u03/cdb01/cdb01/stdredo01.log: No such file or directory
DR sys@cdb01 SQL > ! ls -ltr /u03/cdb01/cdb01/stdredo02.log
-rw-r-----. 1 oracle oinstall 52429312 Oct 17 15:32 /u03/cdb01/cdb01/stdredo02.log
DR sys@cdb01 SQL > alter database clear logfile group 4;
alter database clear logfile group 4
*
ERROR at line 1:
ORA-01156: recovery or flashback in progress may need access to files
DR sys@cdb01 SQL > alter database recover managed standby database cancel;
Database altered.
DR sys@cdb01 SQL > alter database clear logfile group 4;
Database altered.
DR sys@cdb01 SQL > ! ls -ltr /u03/cdb01/cdb01/stdredo01.log
-rw-r-----. 1 oracle oinstall 52429312 Oct 17 15:33 /u03/cdb01/cdb01/stdredo01.log
DR sys@cdb01 SQL >
If you need to, you can also recreate the controlfile without the standby redo log entries...
If you still think something is not acceptable, you should open an SR with Support to analyze why it does not drop the SRL when the controlfile_type is "CURRENT".
Thank you
-
What do redo log files hold?
Hello Experts,
I have been reading articles on redo log files and undo segments. I was wondering something very simple: what do redo log files hold? Do they store the SQL statements?
Let's say my update statement modifies 800 blocks of data. A single update statement can modify 800 different data blocks, right? Yes, that can be true. I think those data blocks cannot be held in the redo log buffer, right? I mean, I know what the redo log buffer and redo log file do, and I know the task of the LGWR background process. But I wonder: does it hold the data blocks? It is not supposed to hold data blocks the way the buffer cache does, right?
My second question: doesn't a rollback affect the redo log buffer? Because a rollback does not need the redo log buffer to take effect, does it? On the contrary, the rollback statement itself is recorded in the redo log buffer when someone issues it, am I right?
As far as I know, rollback interacts directly with the UNDO TABLESPACE?
I hope I have expressed myself clearly.
Thanks in advance.
Here's my question:
"My second question: doesn't a rollback affect the redo log buffer? Because a rollback does not need the redo log buffer to take effect, does it? On the contrary, the rollback statement itself is recorded in the redo log buffer when someone issues it, am I right?
As far as I know, rollback interacts directly with the UNDO TABLESPACE?"
Yes, where else would the undo data come from? The undo tablespace contains the undo segments that hold the undo data required for rolling back your transaction.
I can say that rollback does not alter the past contents of the redo log buffer. In other words, the change vectors remain the same after the rollback. Conversely, the ROLLBACK command itself is also recorded in the redo log files. As the name implies, all changes are saved in the REDO LOGS.
I hope I am not wrong so far?
Not sure why you even bring the redo log buffer into rollback? That is why I asked what it is for: where does the undo actually happen? And the answer is that it happens in the buffer cache. Before you worry about change vectors, you must understand that it does not really matter what is contained where, as long as the transaction is recorded in the transaction table of the undo segment. As long as the transaction table shows the transaction as still open, a rollback of the transaction is possible. Change vectors are saved in the redo log files, while the rollback itself happens on the data blocks stored in the data files, using the undo blocks stored in the undo datafile.
Meanwhile, I read an article about redo and undo. In this article the transaction process is explained. Here is the link: http://pavandba.files.wordpress.com/2009/11/undo_redo1.pdf
I found some interesting information in this article, as follows:
"It is worth noting that during the rollback process, redo logs never participate. The only time redo logs are read is during recovery and archiving. This is a key tuning concept: redo logs are written to; Oracle does not read them during normal processing. As long as you have sufficient devices so that when ARCn is reading a file, LGWR is writing to a different device, there is no contention for redo logs."
If redo logs are never involved in the rollback process, how does Oracle then know the order of the transactions? As far as I know, that is only written in the redo logs.
I would be very interested in Aman's thoughts on this.
Why do you ask?
Now, before giving an answer, let me say two things. One, I know Pavan and he is a regular contributor to this forum, several other forums, and Facebook groups. Two, with all due respect to him, a little advice for you: when you try to understand a concept, stick to the Oracle documentation and do not read and merge articles and blog posts from all over the web. Everyone who publishes on the web has their own way of expressing things, and many times the context of the writing makes things more confusing. Then we end up having to clear up the doubts you pick up after reading various search results on the web.
Redo logs are used for recovery (rolling forward), not for rollback. The reason is that redo log files are applied in sequential order, and that is not what we want for a rollback. A rollback may be required for only a few blocks. Basically, what happens in a rollback is that the undo records required for a data block are applied in the reverse order of their creation. The transaction's entry is in the ITL slot of the data block, which points to the required Undo Byte Address (UBA), from which Oracle also knows which undo blocks are needed for the rollback of your transaction. As soon as the data blocks are rolled back, the ITL slots are cleared as well.
In addition, you must remember that until the transaction is qualified as finished, by either a commit or a rollback, the undo data of that transaction remains intact. The reason is that Oracle must ensure the undo data stays available to perform a rollback of the transaction. The reason undo data is also recorded in the redo logs is to ensure that, in the event of the loss of the undo datafile, recovering it is possible. Because that recovery also requires the changes that happened to the undo blocks, the change vectors associated with the undo blocks are saved in the redo log buffer and, in turn, in the redo log files.
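A quick way to see this for yourself is to watch the session-level 'redo size' statistic around an update and its rollback. This is only a hedged sketch: the table name SCRATCH_T is an assumption for illustration (any small table you own will do), and it assumes you have SELECT access to V$MYSTAT and V$STATNAME.

```sql
-- Before: note the session's cumulative redo bytes.
SELECT n.name, s.value
  FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';

-- SCRATCH_T is a hypothetical scratch table, not from this thread.
UPDATE scratch_t SET val = val + 1;  -- generates redo for both the data-block
                                     -- and the undo-block change vectors
ROLLBACK;                            -- applies the undo; the rollback itself
                                     -- is also logged, so redo grows again

-- After: 'redo size' has increased across BOTH statements, showing
-- that rollback is logged in redo even though redo is never read back
-- to perform the rollback.
SELECT n.name, s.value
  FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';
```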
HTH
Aman...
-
My files do not save. It is a technical problem; can you look at this quickly, please? I have been waiting for a long time.
It does not work.
-
Do I have to create new groups for standby redo log files?
I have 10 redo log groups with 2 members each on my primary database. Do I need to create new groups for the standby redo log files on the standby database?
Group#  Members
===============
  1        2
  2        2
  3        2
  4        2
  5        2
  6        2
  7        2
  8        2
  9        2
 10        2
If so, is the following statement correct or not?
ALTER DATABASE ADD STANDBY LOGFILE GROUP 1 ('D:\Databases\epprod\StandbyRedoLog\REDO01.LOG', 'D:\Databases\epprod\StandbyRedoLog\REDO01_1.LOG');
Please correct me if I am doing it wrong,
because when I run the statement I get an error message saying the group is already created.
Thanks John
I just found the answer.
Yes, it is recommended to add new groups; for instance, if the online groups are 1 to 10, then the standby groups should be 11 to 20.
Thanks, I found the answer.
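For anyone finding this thread later, the pattern described above might look like the following. This is only a sketch under stated assumptions: the file paths and the SIZE clause are illustrative (not from the post), and the standby log size should match your online redo log size.

```sql
-- Hedged sketch: add standby redo log groups numbered ABOVE the ten
-- existing online groups, so there is no clash with groups 1-10.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 11
  ('D:\Databases\epprod\StandbyRedoLog\SRL11_1.LOG',
   'D:\Databases\epprod\StandbyRedoLog\SRL11_2.LOG') SIZE 50M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 12
  ('D:\Databases\epprod\StandbyRedoLog\SRL12_1.LOG',
   'D:\Databases\epprod\StandbyRedoLog\SRL12_2.LOG') SIZE 50M;

-- ...repeat through GROUP 20, then verify what was created:
SELECT group#, bytes, status FROM v$standby_log;
```

The key point is the one John discovered: reusing an existing online group number (such as GROUP 1) produces the "group already created" error, while numbers above the online range succeed.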
-
What is the purpose of standby redo log files?
Hello
What is the purpose of standby redo log files in a DR (standby) configuration?
What happens if the standby redo log files are created? And what if they are not created?
Please explain
Thank you
Re: what is the difference between onlinelog and standbylog
I explained the purpose of standby redo log files in a DR configuration in the above thread.
Regards
Girish Sharma -
Can someone help me interpret the results of a CBS.log file?
My laptop has been acting a little strange - it sometimes freezes and runs excessively slowly. I ran chkdsk with /f and /r. Both went well. I then ran sfc /scannow. The results of that scan are displayed below. I'm not a PC expert, so I don't know if I'm all set or if there is something else I have to do. Specifically, the last line of the CBS.log file says, "Verify and Repair Transaction completed. All files and registry keys listed in this transaction have been successfully repaired."
Any help would be appreciated! Sorry for the long copy/pasted text. I cut about 99% of it out and kept only about 1% of the log (due to a 60,000-character limit). I couldn't find a way to attach a file.
Thank you!
POQ 64 ends.
2012-10-09 22:06:25, Info CSI 00000171 [SR] Verify complete
2012-10-09 22:06:26, Info CSI 00000172 [SR] Verifying 100 components (0x0000000000000064)
2012-10-09 22:06:26, Info CSI 00000173 [SR] Beginning Verify and Repair transaction
2012-10-09 22:06:37, Info CSI 00000174 Hashes for file member \SystemRoot\WinSxS\amd64_microsoft-windows-sidebar_31bf3856ad364e35_6.0.6002.18005_none_2ce6c04cdc275758\settings.ini do not match actual file [l:24{12}]"settings.ini":
Found: {l:32 b:sKFy6962+2YBWdYMZ6Z/UOVMGpEOdEczYmmYd2o9CE4=} Expected: {l:32 b:v6OQf2AJO5FVbRBJuIwXxkdkCoOaSk3y0ol6uTH491o=}
2012-10-09 22:06:37, Info CSI 00000175 [SR] Cannot repair member file [l:24{12}]"settings.ini" of Microsoft-Windows-Sidebar, Version = 6.0.6002.18005, pA = PROCESSOR_ARCHITECTURE_AMD64 (9), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
2012-10-09 22:06:40, Info CSI 00000176 Hashes for file member \SystemRoot\WinSxS\amd64_microsoft-windows-sidebar_31bf3856ad364e35_6.0.6002.18005_none_2ce6c04cdc275758\settings.ini do not match actual file [l:24{12}]"settings.ini":
Found: {l:32 b:sKFy6962+2YBWdYMZ6Z/UOVMGpEOdEczYmmYd2o9CE4=} Expected: {l:32 b:v6OQf2AJO5FVbRBJuIwXxkdkCoOaSk3y0ol6uTH491o=}
2012-10-09 22:06:40, Info CSI 00000177 [SR] Cannot repair member file [l:24{12}]"settings.ini" of Microsoft-Windows-Sidebar, Version = 6.0.6002.18005, pA = PROCESSOR_ARCHITECTURE_AMD64 (9), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
2012-10-09 22:06:40, Info CSI 00000178 [SR] This component was referenced by [l:162{81}]"Package_17_for_KB948465~31bf3856ad364e35~amd64~~6.0.1.18005.948465-60_neutral_GDR"
2012-10-09 22:06:41, Info CSI 00000179 Repair results created:
POQ 65 begins:
2012-10-09 22:17:24, Info CSI 00000301 [SR] Cannot repair member file [l:24{12}]"settings.ini" of Microsoft-Windows-Sidebar, Version = 6.0.6002.18005, pA = PROCESSOR_ARCHITECTURE_AMD64 (9), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
2012-10-09 22:17:24, Info CSI 00000302 Hashes for file member \SystemRoot\WinSxS\amd64_microsoft-windows-sidebar_31bf3856ad364e35_6.0.6002.18005_none_2ce6c04cdc275758\settings.ini do not match actual file [l:24{12}]"settings.ini":
Found: {l:32 b:sKFy6962+2YBWdYMZ6Z/UOVMGpEOdEczYmmYd2o9CE4=} Expected: {l:32 b:v6OQf2AJO5FVbRBJuIwXxkdkCoOaSk3y0ol6uTH491o=}
2012-10-09 22:17:24, Info CSI 00000303 [SR] Cannot repair member file [l:24{12}]"settings.ini" of Microsoft-Windows-Sidebar, Version = 6.0.6002.18005, pA = PROCESSOR_ARCHITECTURE_AMD64 (9), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
2012-10-09 22:17:24, Info CSI 00000304 [SR] This component was referenced by [l:162{81}]"Package_17_for_KB948465~31bf3856ad364e35~amd64~~6.0.1.18005.948465-60_neutral_GDR"
2012-10-09 22:17:24, Info CSI 00000305 Hashes for file member \??\C:\Windows\PolicyDefinitions\inetres.ADMX do not match actual file [l:24{12}]"inetres.admx":
Found: {l:32 b:DjclSPQ+c3ju7E53XXW47eR94SH7ICruHSUKg8YAkO0=} Expected: {l:32 b:3T/Xc+0k/wBxJ4k/vlPd86jLOYtWOjRsHrz0hHH9H8s=}
2012-10-09 22:13:42, Info CSI 0000027e [SR] Repairing corrupted file [ml:520{260},l:64{32}]"\??\C:\Windows\PolicyDefinitions"\[l:24{12}]"inetres.ADMX" from store
2012-10-09 22:13:42, Info CSI 0000027f WARNING: file [l:24{12}]"inetres.admx" in [l:64{32}]"\??\C:\Windows\PolicyDefinitions" switching ownership
Old: Microsoft-Windows-InetRes-Adm, Version = 9.1.8112.16421, pA = PROCESSOR_ARCHITECTURE_AMD64 (9), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral
New: Microsoft-Windows-InetRes-Adm, Version = 8.0.6001.18702, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral
2012-10-09 22:13:44, Info CSI 00000280 Hashes for file member \??\C:\Windows\PolicyDefinitions\en-US\InetRes.adml do not match actual file [l:24{12}]"InetRes.adml":
Found: {l:32 b:8uqfOni5TmKQ2+wymJKX9uLDOmUV2H1RKpYV3gacaRw=} Expected: {l:32 b:f2Ca02GHu2Yr3ccXiLvfpdfLkfeeDX2UExmZb6pQm2U=}
2012-10-09 22:13:44, Info CSI 00000281 [SR] Repairing corrupted file [ml:520{260},l:76{38}]"\??\C:\Windows\PolicyDefinitions\en-US"\[l:24{12}]"InetRes.adml" from store
2012-10-09 22:13:44, Info CSI 00000282 WARNING: file [l:24{12}]"InetRes.adml" in [l:76{38}]"\??\C:\Windows\PolicyDefinitions\en-US" switching ownership
Old: Microsoft-Windows-InetRes-Adm.Resources, Version = 9.1.8112.16421, pA = PROCESSOR_ARCHITECTURE_AMD64 (9), Culture = [l:10{5}]"en-US", VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral
New: Microsoft-Windows-InetRes-Adm.Resources, Version = 8.0.6001.18702, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture = [l:10{5}]"en-US", VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral
2012-10-09 22:17:25, Info CSI 00000306 [SR] Repairing corrupted file [ml:520{260},l:64{32}]"\??\C:\Windows\PolicyDefinitions"\[l:24{12}]"inetres.ADMX" from store
2012-10-09 22:17:25, Info CSI 00000307 WARNING: file [l:24{12}]"inetres.admx" in [l:64{32}]"\??\C:\Windows\PolicyDefinitions" switching ownership
Old: Microsoft-Windows-InetRes-Adm, Version = 9.1.8112.16421, pA = PROCESSOR_ARCHITECTURE_AMD64 (9), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral
New: Microsoft-Windows-InetRes-Adm, Version = 8.0.6001.18702, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral
2012-10-09 22:17:25, Info CSI 00000308 Hashes for file member \??\C:\Windows\PolicyDefinitions\en-US\InetRes.adml do not match actual file [l:24{12}]"InetRes.adml":
Found: {l:32 b:8uqfOni5TmKQ2+wymJKX9uLDOmUV2H1RKpYV3gacaRw=} Expected: {l:32 b:f2Ca02GHu2Yr3ccXiLvfpdfLkfeeDX2UExmZb6pQm2U=}
2012-10-09 22:17:25, Info CSI 00000309 [SR] Repairing corrupted file [ml:520{260},l:76{38}]"\??\C:\Windows\PolicyDefinitions\en-US"\[l:24{12}]"InetRes.adml" from store
2012-10-09 22:17:25, Info CSI 0000030a WARNING: file [l:24{12}]"InetRes.adml" in [l:76{38}]"\??\C:\Windows\PolicyDefinitions\en-US" switching ownership
Old: Microsoft-Windows-InetRes-Adm.Resources, Version = 9.1.8112.16421, pA = PROCESSOR_ARCHITECTURE_AMD64 (9), Culture = [l:10{5}]"en-US", VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral
New: Microsoft-Windows-InetRes-Adm.Resources, Version = 8.0.6001.18702, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture = [l:10{5}]"en-US", VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral
2012-10-09 22:17:25, Info CSI 0000030b Repair results created:
POQ 127 ends.
2012-10-09 22:17:25, Info CSI 0000030c [SR] Repair complete
2012-10-09 22:17:25, Info CSI 0000030d [SR] Committing transaction
2012-10-09 22:17:25, Info CSI 0000030e Creating NT transaction (seq 1), objectname [6]"(null)"
2012-10-09 22:17:25, Info CSI 0000030f Created NT transaction (seq 1) result 0x00000000, handle @0x14c4
2012-10-09 22:17:25, Info CSI 00000310@2012/10/10:02:17:25.662 CSI perf trace:
CSIPERF:TXCOMMIT;143298
2012-10-09 22:17:25, Info CSI 00000311 [SR] Verify and Repair Transaction completed. All files and registry keys listed in this transaction have been successfully repaired
Hello
As noted at the end of your message, SFC /scannow indicates that there are no remaining issues it can fix. Below is more information on how to easily read the important information that SFC /scannow adds to CBS.log. Many of the files that SFC cannot fix are not important.
Start - type CMD in the search box -> find CMD at the top - RIGHT-CLICK it - RUN AS ADMIN.
Put the command below (copy and paste) into that window and then press ENTER:
findstr /c:"[SR]" %windir%\logs\cbs\cbs.log > sfcdetails.txt
That creates the file sfcdetails.txt in the folder you are in when you run it.
So if you're at C:\Windows\System32>, then you will need to look in that folder for the file.
How to analyze the log file entries that the Microsoft Windows Resource Checker (SFC.exe) program generates
in Windows Vista:
http://support.Microsoft.com/kb/928228
This creates sfcdetails.txt in C:\Windows\System32; find it and you can post the errors in a message here. NOTE: there are probably duplicates, so please post each error section only once.
You can read the log/txt files more easily if you right-click Notepad or WordPad and choose RUN AS ADMIN; then you can navigate to sfcdetails.txt (in C:\Windows\System32) or CBS.log (in C:\Windows\Logs) as needed.
(You may need to search for sfcdetails.txt if it is not created in the default folders.)
=======================================================
Troubleshooting:
Use clean boot and the other methods below to try to determine the cause of the issues and eliminate them.
---------------------------------------------------------------
What antivirus/antispyware/security products do you have on the machine? Include any you have EVER had
on this machine, including those you have uninstalled (they leave leftovers behind which can cause
strange problems).
----------------------------------------------------
Follow these steps:
Start - type this in the search box -> find COMMAND at the top and RIGHT-CLICK - RUN AS ADMIN
Enter this at the command prompt: sfc /scannow
How to analyze the log file entries that the Microsoft Windows Resource Checker (SFC.exe) program
generates in Windows Vista (CBS.log):
http://support.Microsoft.com/kb/928228
Also run CheckDisk, so we can rule out corruption as much as possible.
How to run Check Disk at startup in Vista:
http://www.Vistax64.com/tutorials/67612-check-disk-Chkdsk.html
==========================================
After the foregoing:
How to troubleshoot a problem by performing a clean boot in Windows Vista:
http://support.Microsoft.com/kb/929135
How to troubleshoot performance issues in Windows Vista:
http://support.Microsoft.com/kb/950685
Optimize the performance of Microsoft Windows Vista:
http://support.Microsoft.com/kb/959062
To see everything that loads at startup - wait a few minutes while doing nothing - then right-click the
Taskbar - Task Manager - Processes tab - take a look at those - and Services - this is a quick
reference (if you have a small box at the bottom left - Show processes from all users - then check that).
How to check and change Vista startup programs:
http://www.Vistax64.com/tutorials/79612-startup-programs-enable-disable.html
A second quick way to check what loads is MSCONFIG; then post a list of
those here.
--------------------------------------------------------------------
Tools that should help you:
Process Explorer - free - find out which files, registry keys, and other objects processes have open,
which DLLs they have loaded, and more. This exceptionally efficient utility will even show you who owns
each process.
http://TechNet.Microsoft.com/en-us/Sysinternals/bb896653.aspx
Autoruns - free - see what programs are configured to start automatically when you start your system
and log in. Autoruns also shows you the full list of registry and file locations where applications can
configure auto-start settings.
http://TechNet.Microsoft.com/en-us/sysinternals/bb963902.aspx
Process Monitor - free - monitor file system, registry, process, thread, and DLL activity in real time.
http://TechNet.Microsoft.com/en-us/Sysinternals/bb896645.aspx
There are many excellent free tools at Sysinternals:
http://TechNet.Microsoft.com/en-us/Sysinternals/default.aspx
WhatInStartup - free - this utility displays the list of all applications that are loaded automatically
when Windows starts. For each application, the following information is displayed: startup type (registry/startup folder), command-line string, product name, file version, company name,
location in the registry or file system, and more. It allows you to easily disable or delete an unwanted
program that runs at Windows startup.
http://www.NirSoft.NET/utils/what_run_in_startup.html
There are many excellent free tools at NirSoft:
http://www.NirSoft.NET/utils/index.html
Window Watcher - free - do you know what is running on your computer? Maybe not. Window
Watcher tells all, reporting every window created by running programs, whether the window
is visible or not.
http://www.KarenWare.com/PowerTools/ptwinwatch.asp
Many excellent free tools and an excellent newsletter at Karenware:
http://www.KarenWare.com/
===========================================
Vista and Windows 7 love updated drivers, so here's how to update the most important ones.
This is my generic guide to appropriate driver updates:
This utility makes it easy to see which versions are loaded:
DriverView - free - displays the list of all device drivers currently loaded on your system.
For each driver in the list, additional useful information is displayed: load address of the driver,
description, version, product name, company that created the driver, and more.
http://www.NirSoft.NET/utils/DriverView.html
For drivers, visit the system maker's and the device maker's sites; those are the most common sources.
Control Panel - Device Manager - Display Adapters - note the make and complete model of
your video card - double-click - Driver tab - write down the version information. Now click Update
Driver (this may do nothing, as MS is far behind on certifying drivers) - then right-click -
Uninstall - REBOOT. This will refresh the driver stack.
Repeat this for Network Card (NIC), WiFi, Sound, Mouse, and Keyboard if 3rd-party
with their own software and drivers, and any other major drivers that you have.
Now go to the system maker's site (Dell, HP, Toshiba as examples) and the device
maker's site (Realtek, Intel, Nvidia, ATI, for example) and get their latest versions. (Look for
BIOS, chipset, and software updates at the system maker's site as well.)
Download - SAVE - go to where you put them - right-click - RUN AS ADMIN - REBOOT after
each installation.
Always check in Device Manager - Driver tab - to be sure the version you actually installed
shows up. This is because some drivers revert before the most recent is installed (sound card drivers
in particular do that), so install a driver - reboot - check that it is installed, and repeat as
necessary.
Repeat at the device makers' sites - BTW, DO NOT RUN THEIR SCANNERS - check
manually by model.
Look at the system maker's site for drivers, and check the device maker's site manually:
http://pcsupport.about.com/od/driverssupport/HT/driverdlmfgr.htm
How to install a device driver in Vista Device Manager:
http://www.Vistax64.com/tutorials/193584-Device-Manager-install-driver.html
If you update drivers manually, then it is a good idea to disable driver installation through Windows
Updates. That leaves Windows Updates working, but it will not install drivers, which are generally
older and cause problems. If Updates offers a new driver, HIDE it (right-click on it), then
get the new one manually if you wish.
How to disable automatic driver installation in Windows Vista:
http://www.AddictiveTips.com/Windows-Tips/how-to-disable-automatic-driver-installation-in-Windows-Vista/
http://TechNet.Microsoft.com/en-us/library/cc730606 (WS.10) .aspx
===========================================
Refer to these threads as well for much more excellent advice; however, don't forget to check your antivirus
programs, update the main drivers and BIOS, and troubleshoot with the clean boot method
first.
Problems with the overall speed and performance of the system:
http://support.Microsoft.com/GP/slow_windows_performance/en-us
Performance and Maintenance Tips:
http://social.answers.Microsoft.com/forums/en-us/w7performance/thread/19e5d6c3-BF07-49ac-a2fa-6718c988f125
Windows Explorer stopped working:
http://social.answers.Microsoft.com/forums/en-us/w7performance/thread/6ab02526-5071-4DCC-895F-d90202bad8b3
I hope this helps.
Rob Brown - Microsoft MVP - Windows Expert - Consumer
-
Can log files be removed automatically in an HA environment?
Hi BDB experts,
I am writing an HA application based on BDB version 4.6.21. Two processes run on two machines: one master that reads and writes the db, and one client/backup that only reads the db. There is a daemon thread in the master that performs a checkpoint every 1 second: dbenv->txn_checkpoint(dbenv, 1, 1, 0), and dbenv->log_archive(dbenv, NULL, DB_ARCH_REMOVE) is called after running the checkpoint each time. The env was opened with the flags DB_CREATE | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_LOG | DB_REGISTER | DB_RECOVER | DB_INIT_MPOOL | DB_THREAD | DB_INIT_REP. The auto-remove flag was set with envp->set_flags(uid_dbs.envp, DB_LOG_AUTOREMOVE, 1) before opening the env.
I found this thread, which discussed a non-HA environment, and I tested my code in a non-HA env without DB_INIT_REP; it worked. However, in the HA env these log files are never deleted. Could you help with this? Does the client need to run checkpoints? Could it be a bug in BDB?
Thank you
Min
There is a daemon thread in the master that performs a checkpoint every 1 second: dbenv->txn_checkpoint(dbenv, 1, 1, 0), and dbenv->log_archive(dbenv, NULL, DB_ARCH_REMOVE) is called after running the checkpoint each time. The env was opened with the flags DB_CREATE | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_LOG | DB_REGISTER | DB_RECOVER | DB_INIT_MPOOL | DB_THREAD | DB_INIT_REP. The auto-remove flag was set with envp->set_flags(uid_dbs.envp, DB_LOG_AUTOREMOVE, 1) before opening the env.
I am not saying this is causing a problem, but calling DB_ENV->log_archive(DB_ARCH_REMOVE) in your thread and setting DB_ENV->set_flags(DB_LOG_AUTOREMOVE) are redundant. In your thread, you control the timing. The DB_ENV->set_flags(DB_LOG_AUTOREMOVE) option checks for and removes unneeded log files whenever a new log file is created.
Have you seen in the documentation for DB_ENV->set_flags(DB_LOG_AUTOREMOVE) that we do not recommend automatic removal of log files with replication? While this warning is not repeated for DB_ENV->log_archive(DB_ARCH_REMOVE), it applies to that option as well. You should reconsider the use of these options, especially if it is possible that your client could go down for a long time.
But it is only a warning, and automatic log removal should work. My first thought here is to ask whether your client has recently gone through a synchronization. Internally, we block log archiving during certain parts of a client synchronization with the master, to improve the chances that we keep all the log records required by the syncing client. We block archiving for 30 seconds after the client synchronization.
I found this thread https://forums.oracle.com/message/10945602#10945602 that discussed a non-HA environment, and I tested my code in a non-HA env without DB_INIT_REP; it worked. However, in the HA env these log files are never deleted.
That thread discusses a different issue. The reason for our BDB 4.6 warning against using automatic log removal with replication is that it does not take into account all the sites in your replication group, so we could remove a log from the master that a client still needs.
We added group-aware automatic log removal supporting Replication Manager in BDB 5.3, and that discussion is about a change in the behavior of that addition. With that addition, we no longer need to recommend against using automatic log removal with replication in BDB 5.3 and later releases.
Could you help with this? Does the client need to run checkpoints? Could it be a bug in BDB?
I am not sure the client needs to run its own checkpoints, because it performs checkpoints when it receives checkpoint log records from the master.
But none of these log-removal options on the master does anything to remove the logs on the client. You will need to archive logs separately on the client and on the master.
Paula Bingham
Oracle