"log file parallel write" / "log file sync" wait events during CREATE INDEX
Hello guys, in my current project I'm running a few performance tests for Oracle Data Guard. The question is: how does LGWR SYNC transport influence the performance of the system?
To get baseline values I can compare against, I first built a normal Oracle database.
Now I'm running various tests, such as creating 'broad' indexes, massive parallel inserts/updates, etc., to get the benchmarks.
My database is Oracle 10.2.0.4 on AIX, with multiplexed log files.
I created an index on a 'normal' table... I ran dbms_workload_repository.create_snapshot() before and after the CREATE INDEX so the AWR report covers the matching period.
Once the index was built (roughly 9 GB), I ran awrrpt.sql to generate the AWR report.
Now take a look at these values from the AWR report:
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
......
......
log file parallel write 10,019 .0 132 13 33.5
log file sync 293 .7 4 15 1.0
......
......
How can this be possible? According to the documentation:
-> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
Wait Time: The wait time includes the writing of the log buffer and the post.
-> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
This was also my understanding... the 'log file sync' wait time should be higher than the 'log file parallel write' time, because it includes the I/O plus posting the user's session. I could accept it if the values were close to each other (within perhaps a second overall)... but the difference between 132 and 4 seconds is too blatant.
Is the behavior of log file sync/write different when you run DDL such as CREATE INDEX (maybe asynchronous... the way you can influence it with the COMMIT_WRITE initialization parameter)?
Does anyone have an idea how these values come about?
Ideas/thoughts are welcome.
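A quick sanity check on the AWR numbers above (the figures are taken from the report; the interpretation is my own): the per-wait averages of the two events are actually consistent, and only the totals differ, because a serial CREATE INDEX session commits very rarely (few 'log file sync' waits) while LGWR still flushes the log buffer continuously (many 'log file parallel write' waits).

```python
# Recompute the AWR averages from the totals shown above.
lfpw_waits, lfpw_total_s = 10_019, 132   # log file parallel write
lfs_waits,  lfs_total_s  = 293,    4     # log file sync

avg_lfpw_ms = lfpw_total_s / lfpw_waits * 1000
avg_lfs_ms  = lfs_total_s  / lfs_waits  * 1000

print(f"avg log file parallel write: {avg_lfpw_ms:.1f} ms")  # ~13.2 ms
print(f"avg log file sync:           {avg_lfs_ms:.1f} ms")   # ~13.7 ms

# Per wait, sync is NOT cheaper than the write -- the 132 s vs 4 s gap
# comes from the wait *counts* (10,019 writes vs only 293 syncs),
# i.e. from how rarely a CREATE INDEX session commits.
```

So the totals are not contradictory: each 'log file sync' still costs at least one write, there are just very few of them during a bulk DDL operation.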
Thanks and greetings
Tags: Database
Similar Questions
-
"redo the write time" includes "log file parallel write".
IO performance Guru Christian said in his blog:
http://christianbilien.WordPress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-IO/
The 'log file sync' wait can be broken down into:
1. the 'redo write time': the total elapsed time of writing the redo log buffer to the redo log file (in centiseconds).
2. the 'log file parallel write': effectively the time for the log write I/O to complete.
3. LGWR may have some post-processing to do, then signals the waiting foreground process that the write is complete. The foreground process is finally woken up by the system dispatcher. This completes the 'log file sync' wait.
In his view, there is no overlap between 'redo write time' and 'log file parallel write'.
But in MetaLink note 34592.1:
The 'log file sync' wait may be broken down into the following components:
1. Wake up LGWR if idle
2. LGWR gathers the redo to be written and issues the I/O
3. Time for the log write I/O to complete
4. LGWR I/O post-processing
...
Notes on tuning, based on the log file sync component breakdown above:
Steps 2 and 3 are accumulated in the statistic "redo write time" (found in the instance statistics of Statspack and AWR).
Step 3 is the wait event "log file parallel write" (see Note 34583.1, the "log file parallel write" reference note).
So MetaLink says there IS an overlap, since "redo write time" includes steps 2 and 3 while "log file parallel write" covers only step 3, which makes "log file parallel write" just a part of "redo write time". Is the MetaLink note wrong, or did I miss something?
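The MetaLink decomposition can be written out numerically. This is a toy model with made-up per-write timings, only to show which components contain which (the step names come from the note; the numbers are assumptions):

```python
# Hypothetical per-write timings (ms) for the four MetaLink steps.
wake_lgwr   = 1.0    # step 1: wake up LGWR if idle
gather_redo = 2.0    # step 2: gather redo and issue the I/O
io_complete = 10.0   # step 3: wait for the log write I/O to finish
post_fg     = 1.5    # step 4: LGWR post-processing / posting the foreground

redo_write_time         = gather_redo + io_complete   # steps 2 + 3
log_file_parallel_write = io_complete                 # step 3 only
log_file_sync           = wake_lgwr + redo_write_time + post_fg  # all steps

# Per the note, 'log file parallel write' is contained in 'redo write time',
# and both are contained in 'log file sync':
assert log_file_parallel_write <= redo_write_time <= log_file_sync
print(redo_write_time, log_file_parallel_write, log_file_sync)  # 12.0 10.0 14.5
```

Under this reading, the blog and the note disagree only on whether step 2 overlaps: MetaLink folds the gather+issue time into 'redo write time', so 'log file parallel write' is strictly a subset of it.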
-
log file sync much larger than log file parallel write
Hi all
The average log file sync wait is 30 ms while log file parallel write is only 10 ms. What does this mean? What are the main reasons for this difference?
Sincerely yours.
A. U.
Hello
The average log file sync wait is 30 ms while log file parallel write is only 10 ms. What does this mean? What are the main reasons for this difference?
Essentially, when the log writer writes, several sessions may be waiting. During 10 ms of write time, you can have one LGWR write and 3 user sessions waiting on 'log file sync'.
Kind regards
Franck.
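Franck's point can be illustrated with a small piggy-backing model (the numbers are assumed for illustration, not taken from the thread): sessions commit while an LGWR write is already in flight, so each session's 'log file sync' includes queueing time behind the current write, and the average sync wait exceeds the average write time.

```python
# Toy model: LGWR writes take a fixed 10 ms each, back to back.
# A session that commits at time t must wait for the write currently in
# flight to finish, then for one full write that carries its own redo.
WRITE_MS = 10.0

def sync_wait(commit_t):
    # End of the write in flight at commit_t...
    current_write_end = (commit_t // WRITE_MS + 1) * WRITE_MS
    # ...plus the full write that actually contains this commit's redo.
    return current_write_end + WRITE_MS - commit_t

# Sessions committing at various points inside the write windows:
waits = [sync_wait(t) for t in (1.0, 4.0, 7.0, 12.0, 18.0)]
avg = sum(waits) / len(waits)
print(f"avg log file sync: {avg:.1f} ms vs write time {WRITE_MS} ms")
# Every sync wait is strictly longer than one 10 ms write.
assert all(w > WRITE_MS for w in waits)
```

Add scheduling delay and LGWR post-processing on top of this and a 30 ms sync average over a 10 ms write average is unsurprising.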
-
Hello
On 11g R2 I have the following:

SQL> select total_waits, time_waited from v$system_event
     where event='log file parallel write';

TOTAL_WAITS TIME_WAITED
----------- -----------
      74144       28100

Is this too much or not? Which values should be compared?
Thank you.
Hello,
You can refer to http://docs.oracle.com/cd/E11882_01/server.112/e16638/instance_tune.htm#PFGRF94563 .
What is the current size of the redo log files?
Anand
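To judge whether those v$system_event numbers are high, note that TIME_WAITED in v$system_event is reported in centiseconds (there is also a TIME_WAITED_MICRO column for microseconds). Converting gives an average of only a few milliseconds per write, which is a healthy figure for redo I/O (the unit conversion is standard; the "healthy" reading is my own rule of thumb):

```python
total_waits    = 74_144
time_waited_cs = 28_100   # v$system_event.TIME_WAITED is in centiseconds

total_s = time_waited_cs / 100                 # 281 s waited in total
avg_ms  = time_waited_cs / total_waits * 10    # centiseconds -> ms per wait

print(f"total: {total_s:.0f} s, average: {avg_ms:.2f} ms per write")
# ~3.8 ms per 'log file parallel write' -- typically fine for redo I/O.
assert 3 < avg_ms < 5
```

The averages, not the raw totals, are what should be compared against the usual expectations for log writes.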
-
Hello
I am trying to improve the performance of a 10.2.0.5 database that is suffering from high 'log file sync' waits. AWR reports show it almost never drops below 20% of DB time, normally 20-30%, and it is consistently the top non-idle wait event. For many sessions it takes 80-90% of their time or more.
Log file parallel write is an order of magnitude lower, so it probably isn't an I/O problem. The database performs about 100 commits per second, the user-calls-to-commits ratio is about 20, redo generation is 500 KB/s, and the log buffer is a large 14 MB. There are about 6-7 log file switches per hour on average.
There are several oddities about the log file sync waits here, and I'd appreciate any help unraveling them:
1) There are almost two log file sync waits per commit on average (the number of log file parallel write waits is about the same as the number of commits). Why?
2) According to ASH, nearly half of the log file sync waits take close to 97.7 ms. Is there a special reason for this peak?
3) Approximately 0.5% of the log file sync waits captured by ASH show multi-second wait times. Any idea where they may come from, or how to diagnose this kind of problem?
Some parts of an AWR report are below; please let me know if anything else is needed.
Best regards
Nikolai
Cache Sizes
~~~~~~~~~~~                  Begin      End
                        ---------- ----------
          Buffer Cache:    44,592M    44,672M   Std Block Size:     16K
      Shared Pool Size:     3,104M     3,024M       Log Buffer: 14,288K

Load Profile
~~~~~~~~~~~~                 Per Second   Per Transaction
                        ---------------   ---------------
             Redo size:      545,833.77          6,659.29
         Logical reads:      225,711.23          2,753.73
         Block changes:        1,788.11             21.82
        Physical reads:        1,195.98             14.59
       Physical writes:          119.02              1.45
            User calls:        2,368.64             28.90
                Parses:          737.35              9.00
           Hard parses:           94.58              1.15
                 Sorts:          261.75              3.19
                Logons:            5.93              0.07
              Executes:        1,796.12             21.91
          Transactions:           81.97

Top 5 Timed Events                                       Avg %Total
~~~~~~~~~~~~~~~~~~                                      wait   Call
Event                               Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ---------- ----------- ------ ------ ----------
CPU time                                       23,580          19.8
log file sync                     491,840      21,976     45   18.5 Commit
db file sequential read         1,902,069      12,604      7   10.6 User I/O
read by other session             743,414       4,159      6    3.5 User I/O
log file parallel write           220,772       3,069     14    2.6 System I/O
-------------------------------------------------------------

Instance Activity Stats
Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
Cached Commit SCN referenced                 11,130            3.1           0.0
Commit SCN cached                                 3            0.0           0.0
DB time                                  12,547,531        3,483.3          42.5
DBWR checkpoint buffers written             145,437           40.4           0.5
DBWR checkpoints                                  6            0.0           0.0
...
IMU CR rollbacks                              2,566            0.7           0.0
IMU Flushes                                  36,411           10.1           0.1
IMU Redo allocation size                187,480,124       52,045.3         635.0
IMU commits                                 255,350           70.9           0.9
IMU contention                               11,998            3.3           0.0
IMU ktichg flush                                 11            0.0           0.0
IMU pool not allocated                        4,295            1.2           0.0
IMU recursive-transaction flush                 175            0.1           0.0
IMU undo allocation size              2,029,937,952      563,519.5       6,875.1
IMU- failed to get a private str              4,295            1.2           0.0
...
background checkpoints completed                  6            0.0           0.0
background checkpoints started                    6            0.0           0.0
background timeouts                          11,400            3.2           0.0
...
change write time                             9,398            2.6           0.0
cleanout - number of ktugct call             54,233           15.1           0.2
cleanouts and rollbacks - consis              9,436            2.6           0.0
cleanouts only - consistent read              2,028            0.6           0.0
...
Instance Activity Stats
Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
commit batch performed                           61            0.0           0.0
commit batch requested                           61            0.0           0.0
commit batch/immediate performed                 94            0.0           0.0
commit batch/immediate requested                 94            0.0           0.0
commit cleanout failures: block                  51            0.0           0.0
commit cleanout failures: buffer                 14            0.0           0.0
commit cleanout failures: callba              8,632            2.4           0.0
commit cleanout failures: cannot             10,575            2.9           0.0
commit cleanouts                            991,186          275.2           3.4
commit cleanouts successfully co            971,914          269.8           3.3
commit immediate performed                       33            0.0           0.0
commit immediate requested                       33            0.0           0.0
commit txn count during cleanout             61,737           17.1           0.2
...
redo blocks written                       4,078,889        1,132.3          13.8
redo buffer allocation retries                  177            0.1           0.0
redo entries                              1,924,106          534.1           6.5
redo log space requests                         174            0.1           0.0
redo log space wait time                      2,333            0.7           0.0
redo ordering marks                           4,553            1.3           0.0
redo size                             1,966,229,700      545,833.8       6,659.3
redo subscn max counts                       38,227           10.6           0.1
redo synch time                           2,233,166          619.9           7.6
redo synch writes                           352,935           98.0           1.2
redo wastage                             56,259,980       15,618.0         190.5
redo write time                             316,495           87.9           1.1
redo writer latching time                        19            0.0           0.0
redo writes                                 220,866           61.3           0.8
rollback changes - undo records                 134            0.0           0.0
rollbacks only - consistent read             11,242            3.1           0.0
...
transaction rollbacks                            94            0.0           0.0
transaction tables consistent re                 25            0.0           0.0
transaction tables consistent re              8,704            2.4           0.0
undo change vector size               1,176,156,772      326,506.2       3,983.5
user I/O wait time                        2,039,881          566.3           6.9
user calls                                8,532,422        2,368.6          28.9
user commits                                295,139           81.9           1.0
user rollbacks                                  122            0.0           0.0
...
-------------------------------------------------------------
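A few ratios from the AWR excerpt above bear directly on question 1 (this is my arithmetic on the reported figures; the "re-wait" reading at the end is one plausible interpretation, not established fact):

```python
lfs_waits         = 491_840   # log file sync waits
user_commits      = 295_139
redo_synch_writes = 352_935
redo_writes       = 220_866
lfpw_waits        = 220_772   # log file parallel write waits

print(f"sync waits per commit:           {lfs_waits / user_commits:.2f}")
print(f"redo synch writes per commit:    {redo_synch_writes / user_commits:.2f}")
print(f"parallel-write waits per write:  {lfpw_waits / redo_writes:.3f}")

# 'log file parallel write' tracks redo writes almost 1:1, as expected,
# while sync waits outnumber commits (~1.67 per commit) -- consistent with
# sessions timing out on 'log file sync' and re-waiting, with each re-wait
# counted as a separate wait.
```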
High "log file synchronization" waits arise also when LGWR is unable to get the CPU to republish on the boy that the log write is complete. The boy expected this event until LGWR may be a sign but if LGWR is unable to get the CPU, it is unable to signal quite quickly.
8 processors for 1 hour is 28 800 seconds. 7 UC is an odd number. This is equivalent to 25 200 seconds available. Your AWR shows that Oracle has represented time seconds 23 580 CPU. So, your server is probably encounter situations where processes are unable to get the CPU.
You can pin or renice LGWR, but you need to check with Oracle Support whether this is doable and supported on your platform.
What you really need to do is tune the sessions doing very high logical reads (a simple rule of thumb is 10K blocks per CPU per second, and you hit 225K blocks on 7 processors!) to reduce the logical reads and CPU consumption.
OR add more CPUs.
Hemant K Chitale
http://hemantoracledba.blogspot.com
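Hemant's CPU arithmetic, written out explicitly (the 10K-blocks-per-CPU figure is his rule of thumb; the other numbers come from the AWR excerpt above):

```python
cpus, hours = 7, 1
available_s = cpus * hours * 3600      # 25,200 CPU-seconds in the interval
used_s = 23_580                        # 'CPU time' from Top 5 Timed Events
print(f"CPU busy: {used_s / available_s:.0%}")   # ~94% -- near saturation

logical_reads_per_s = 225_711.23       # from the Load Profile
per_cpu = logical_reads_per_s / cpus
print(f"logical reads per CPU per second: {per_cpu:,.0f}")  # ~32,244
# Roughly 3x over the ~10,000-per-CPU rule of thumb, so LGWR can plausibly
# struggle to get on CPU in time to post the waiting foregrounds.
```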
-
Hello world
DB 11.2.0.1 - 32 cores - 64 GB of RAM
I have a database into which 1.5 million records are inserted every hour. This database is suffering badly from log file sync waits. Googling the question, I found that the cause is the way the application inserts the data: each single insert is followed by a commit. Currently, we are unable to change the insertion method to use batch inserts instead. The database has 22 redo log files of 200 MB each, and log_buffer is about 150 MB.
Is there a solution to reduce the number of log file sync waits?
I tried increasing the redo log files to 700 MB each, but then there was again some buffer-wait contention. The SGA is 38 GB.
Thanks for any guidance
Regards
You could try the asynchronous commit (COMMIT WRITE) setting. See the example here:
One of the ways to eliminate the log file sync waits
How I eliminated log file sync
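Since the application cannot batch its inserts, asynchronous commit (COMMIT WRITE NOWAIT, or the COMMIT_WRITE parameter) trades durability-on-commit for the removal of the foreground sync wait. A rough model of what batching would have saved makes clear why commit frequency, not redo log sizing, is the lever here (all timing numbers below are assumptions for illustration):

```python
# Assumed workload: 1.5M single-row inserts per hour, each commit
# paying an average 'log file sync' of 5 ms.
rows_per_hour = 1_500_000
sync_ms = 5.0

per_row_commit_s = rows_per_hour * sync_ms / 1000          # one sync per row
batched_s = (rows_per_hour / 1000) * sync_ms / 1000        # commit per 1000 rows

print(f"per-row commits:  {per_row_commit_s:,.0f} s of sync wait per hour")
print(f"1000-row batches: {batched_s:,.1f} s of sync wait per hour")
# 7,500 s vs 7.5 s of foreground wait -- which is why batching (or async
# commit) is the usual fix, rather than resizing redo logs or log_buffer.
```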
-
How to remove the event files that were installed on my computer
Not too long ago I must have allowed an upgrade which installed event files on my computer. Now my filing system is becoming littered with event files. I don't want them... they are a total nuisance, and removing one does not solve the underlying problem.
I want to get rid of whatever this change... update... program... is. I'm afraid that the updates on my other computers will create the same problem on them.
I am running windows 7 Ultimate on all computers.
I have partially solved the problem.
It seems the files were all created on one day, at one time. I don't know why this happened, but it occurred after a Windows Update.
No additional files have been created since... at least not yet.
I deleted all the files by using Search, and because all of the files had the same date and time, I was able to delete them as a group from the search results... nearly 2,000 of them.
Hope this helps anyone else with this peculiar problem.
Thanks for the suggestions.
-
Assuming I have 5 PCs as my hosts, I want users to be unable to delete any files, but still able to create a file or folder and make changes. Is that possible?
Because I want all transactions to be recorded. No version of Windows supports this type of file-access restriction and logging. You may be able to find a third-party program that can, but I highly doubt it.
-
I am trying to use the PSR (Problem Steps Recorder) as a domain user, but get the following error message when trying to save my recording:
"You don't have the file system permissions needed to create the specified output file."
If I run it while logged in as a domain administrator, I can record without problems. But when logged in as a domain user, I get the error. I tried "Run as Administrator" as well as "Run as different user" with the domain administrator credentials, but still get the error. Does anyone know about this problem?
Hello Giblits,
Because the computer is joined to a domain, this would be better asked on the TechNet forums. I suggest you post your question at the following link to get the exact resolution.
http://social.technet.Microsoft.com/forums/en-us/category/w7itpro
-
Broker configuration files not being created at the OS level
DB version: 12.1.0.2 on Oracle Linux 6.7
Type: Physical standby
I'm trying to set up the Data Guard broker for my DBs. The primary and physical standby are standalone DBs.
As the first step, I tried to create the broker configuration files in the following locations. But they are not being created at the OS level.
Any idea why?
SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1='/oradata/DG_BROKER/dr1APGCMS.dat' scope=both;

System altered.

SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2='/datastore/DG_BROKER/dr2APGCMS.dat' scope=both;

System altered.

-- The configuration files should now exist at the OS level

$ ls -l /oradata/DG_BROKER/dr1APGCMS.dat
ls: cannot access /oradata/DG_BROKER/dr1APGCMS.dat: No such file or directory
$
$
$ ls -l /datastore/DG_BROKER/dr2APGCMS.dat
ls: cannot access /datastore/DG_BROKER/dr2APGCMS.dat: No such file or directory
So, I thought the configuration files would be created only once the broker (DMON) is started. So I started it using:
SQL> alter system set dg_broker_start = TRUE scope = both;

System altered.

But the configuration files are still not created; the directories are empty. I can see that the DMON process has started on the primary:
$ ps -ef | grep dmon
oracle 7577 1 0 22:43 ? 00:00:00 ora_dmon_APGCMS
-- Extract from the primary alert log
ALTER SYSTEM SET dg_broker_config_file1='/oradata/DG_BROKER/dr1APGCMS.dat' SCOPE = BOTH;
Sat Dec 26 22:41:49 2015
ALTER SYSTEM SET dg_broker_config_file2='/datastore/DG_BROKER/dr2APGCMS.dat' SCOPE = BOTH;
Sat Dec 26 22:43:39 2015
Starting background process DMON
Sat Dec 26 22:43:39 2015
ALTER SYSTEM SET dg_broker_start = TRUE SCOPE = BOTH;
Sat Dec 26 22:43:39 2015
DMON started with pid = 40, OS id = 7577
Sat Dec 26 22:43:42 2015
Starting Data Guard Broker (DMON)
Starting background process INSV
Sat Dec 26 22:43:47 2015
INSV started with pid = 43, OS id = 7579
Sorry... I just removed the post for security reasons (the host names were a concern). But I got there in the end: the configuration files are created once you create the configuration in DGMGRL.
- Jonathan Rolland
-
Unable to turn on Creative Cloud File Sync
I can't turn on Creative Cloud File Sync. On a Mac running Mavericks, I go to the Creative Cloud icon and click on it, then on Assets. It takes me to the Files tab and gives me a big blue button that says "Start Sync". If I click it, the app blinks briefly, and then I'm back on the Files tab with the big blue "Start Sync" button. If I click it again, I get a spinning icon for about five minutes, and then I'm back at the app once again, with the big blue button.
Please check the full path that is not found if you see "no such file or directory". I would expect the message on the command line to look like this:
rm: /Users/…/Library/Application Support/Adobe/CoreSync/options.tix: No such file or directory
If you do not see /Users/… at the start, you're missing the tilde character ("~") from the instructions. If the path is similar to the above and options.tix is not found, the problem you are experiencing is distinct from the one described in the knowledge base article. In that case, could you send us your log files so that we can investigate further? Please zip the entire "CoreSync" folder at the following location and send it to me at [email protected].
Mac: /Users/…/Library/Application Support/Adobe/CoreSync
Windows: C:\Users\…\AppData\Roaming\Adobe\CoreSync
"Library" on Mac and "AppData" on Windows are both hidden folders. Please read this help page on showing these folders:
http://helpx.Adobe.com/x-productkb/global/show-hidden-files-folders-extensions.html
Thank you
Ben
-
How to view the spool file that is created in SQL*Plus
Hello
Oracle version: 10g
OS version: linux 4
I'm upgrading from 10.2.0.1.0 to 10.2.0.4.0.
For the process, I followed a document.
In it, it said:
spool track.log
- - -- -
- -- - -- - -
spool off
Now, I want to view the track.log file.
Where is it created?
Is it created at the OS level, or is it just a logical file for the duration of the SQL*Plus session?
Please help.
Thanks.
It's created on the operating system where you ran the spool command.
That is, if you run the spool command from a given directory, the file will be created in that directory.
The file is created immediately after you run the spool command, but what ends up written to it is determined by the commands you run before 'spool off'. If you want to generate the file in a specific location/path, you can specify the full path when executing the spool command, for example:
spool /abc/track.log
Here the track.log file will be created in the /abc folder.
PS: mark the post as answered to clean up the forum. Thank you.
-
I have two portable devices that both produce the same error: one is Windows Mobile 5, the second is Windows Mobile 6.1.
When I download and install an application, the odb files (the obs file and the two data files) are not created. Even with the transport demo that is published on the mobile server: when it is downloaded to the Pocket PC and properly installed, the odb files are not created. I tried to create my own application, creating the tables, snapshots, sequences, etc. in the software library, then published it and downloaded it to the device. The odb database was not created.
Please assist. Thank you for your time.
PS: I do not know if it has to do with my error, but if I choose the Packaging Wizard to create a SQL file, does it record only some .sql files with the SQL statements that implement the database?
If so, how can I run them from my Pocket PC?
Published by: user2955130 on Sep 17, 2009 07:41
For the first question: the database file is not downloaded. Since it worked when you created a new user, my guess would be that the original user is not instantiated correctly.
In Mobile Manager > Data Synchronization > Publications > Users you should see a list of users associated with the application, and there is a flag showing whether the user has been successfully instantiated. If it has NOT, then there is a problem with the user being added to the publication, perhaps in the specified data-subsetting parameters.
If you query the database:
select * from c$all_client_items where clientid = <your client id>;
then you should see all of the items in the publication; if you do not, then again something is wrong. If everything seems to be OK, then turn on the synchronization trace for the user in Mobile Manager and look at the generated logs; they tend to point you to the problem.
Regarding the access violation: for MSQL for the conscli data, do not change the user or password, simply connect (the user name and password is actually system/manager, which is the default). For an application database, the user name is still system, but the password is that application user's password.
So, if you have a user test1 with password pass1:
for conscli - system/manager
for the application - system/pass1
Why does Dreamweaver want to replace my existing css file when creating a new page?
I created a Web site and I cannot get Dreamweaver to accept my existing css file when I create a new page in the same Web site. I click File > New Page, select a fluid page, and in the lower right I click the link to an existing css file and click OK, but Dreamweaver tells me my css file does not exist and wants to replace my existing css file. What's happening?
Thank you guys for reporting this to us.
It's a little surprising that it has been this way for a while now. Rather than creating a fluid-grid (FG) html page with the attached css, DW prompts you to create a new css / replace the existing one. Why?
The confusing part here is that "Attach css file" offers two options:
1. attach as a normal CSS
2. attach as an FG css
If you check the preference "Attach as fluid grid CSS" (see attachment), you should not see this question.
That is, unless you attach the css as an FG css, DW treats it like any other css and expects you to attach another FG css.
I, too, was confused at first until I realized the difference. Hope this helps.
And thank you for bringing this to our notice.
.. Henin :)
-
Impdp and parallel index creation
This may be a very basic question for most of you, but...
Can someone explain whether impdp with parallel=n will create 1 index at a time using n PX slaves (suppose I have n dump files)? Or will it create n indexes at the same time, each with n PX processes? Or should impdp with parallel=n just issue the index DDL as written (parallel 1)?
How is the result different if I have only 1 dump file? Will impdp use n processes to read the single dump file?
I'm talking about 11gR2 EE.
Indexes are created serially, but built using parallel slaves.
Let's say you have index foo that was created like this:
create index foo on foo_tab (a) parallel 1;
If you run impdp with parallel=5, Data Pump will do this:
create index foo on foo_tab (a) parallel 5;
alter index foo parallel 1;
Thus, the index will be built using up to 5 parallel slaves. When the build is finished, the parallel value is reset to its original value.
Dean