DB file sequential read and direct path read
Hello. Could someone please clear up my doubts about 'db file sequential read' and 'direct path read', and help me interpret the tkprof report correctly?
Please tell me if my understanding of the scenario below is correct.
We have an 11.2.0.1 two-node RAC cluster with ASM in production, and its test environment, which is a stand-alone database.
The query performs poorly in production compared to the test database.
The table has more than 254 columns (264), with many LOB columns; however, no LOB is selected in the query.
I read in Metalink that a table with more than 254 columns causes intra-row chaining, which produces 'db file sequential read' waits during a full table scan.
Here are some details on the table, which is similar in prod and test; the block size is 8K:
TABLE UNUSED BLOCKS TOTAL BLOCKS HIGH WATER MARK
------------------------------ --------------- --------------- ---------------
PROBSUMMARYM1 0 17408 17407
What I understand from the tkprof output in the production environment for a given session is:
1 - the query resulted in 19378 disk reads and 145164 consistent reads.
2 - Of the 19378 disk reads, 2425 gave rise to the wait event 'db file sequential read'.
Is it correct that the remaining disk reads were also 'db file sequential reads', but so fast that no wait event was tied to them?
3 - There are also 183 'direct path read' waits. Is that because of the ORDER BY clause of the query?
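For reference, the arithmetic behind questions 2 and 3 can be checked mechanically (a sketch using the tkprof figures quoted in this thread; the function name is my own):

```python
# Sketch: if (disk reads - 'db file sequential read' waits) is close to the
# number of blocks fetched by direct path reads, it suggests the remaining
# disk I/O was done via direct path multiblock reads, not untimed waits.
def direct_path_block_estimate(disk_reads: int, sequential_read_waits: int) -> int:
    """Blocks presumably fetched by direct path (multiblock) reads."""
    return disk_reads - sequential_read_waits

# Production tkprof: disk=19378, 'db file sequential read' waits=2425
prod_blocks = direct_path_block_estimate(19378, 2425)
print(prod_blocks)  # 16953

# Test tkprof: disk=17154, 'db file sequential read' waits=3
test_blocks = direct_path_block_estimate(17154, 3)
print(test_blocks)  # 17151
```

The two estimates are close to each other, while the counts of timed 'db file sequential read' waits differ enormously between the systems.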
SQL ID: 72tvt5h4402c9
Plan Hash: 1127048874
select "NUMBER" num
from
smprd.probsummarym1 where flag ='f' and affected_item = 'PAUSRWVP39486'
order by num asc
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.53 4.88 19378 145164 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.53 4.88 19378 145164 0 0
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Rows Row Source Operation
------- ---------------------------------------------------
0 SORT ORDER BY (cr=145164 pr=19378 pw=0 time=0 us cost=4411 size=24 card=2)
0 TABLE ACCESS FULL PROBSUMMARYM1 (cr=145164 pr=19378 pw=0 time=0 us cost=4410 size=24 card=2)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
ges message buffer allocation 3 0.00 0.00
enq: KO - fast object checkpoint 2 0.00 0.00
reliable message 1 0.00 0.00
KJC: Wait for msg sends to complete 1 0.00 0.00
Disk file operations I/O 1 0.00 0.00
kfk: async disk IO 274 0.00 0.00
direct path read 183 0.01 0.72
db file sequential read 2425 0.05 3.71
SQL*Net message from client 1 2.45 2.45
The same query, when run in the non-RAC, non-ASM stand-alone test database, gave the tkprof output below. Does this mean that:
1 - here too, reads happened through 'db file sequential read', but they were so fast that no wait event was recorded?
2 - the 'direct path read' waits are because of the ORDER BY clause in the query?
SQL ID: 72tvt5h4402c9
Plan Hash: 1127048874
select "NUMBER" num
from
smprd.probsummarym1 where flag ='f' and affected_item = 'PAUSRWVP39486'
order by num asc
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.06 0 0 0 0
Fetch 1 0.10 0.11 17154 17298 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.10 0.18 17154 17298 0 0
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Rows Row Source Operation
------- ---------------------------------------------------
0 SORT ORDER BY (cr=17298 pr=17154 pw=0 time=0 us cost=4694 size=12 card=1)
0 TABLE ACCESS FULL PROBSUMMARYM1 (cr=17298 pr=17154 pw=0 time=0 us cost=4693 size=12 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
Disk file operations I/O 1 0.00 0.00
db file sequential read 3 0.00 0.00
direct path read 149 0.00 0.03
SQL*Net message from client 1 2.29 2.29
In the trace files from both the production and test databases, I see that 'direct path read' occurs against the same data file in which the table is stored. Then how can the 'direct path read' be due to the ORDER BY clause of the query, which would have sorted in the sort area of the PGA?
Or does 'direct path read' pull data from disk straight into the PGA, while 'db file sequential read' does not?
As far as I know, 'direct path read' is the wait event recorded when data is read from disk directly into the PGA, or when a sort segment in the temp tablespace is used.
Here is an example from the trace file in the production database:
*** 2013-01-04 13:49:15.109
WAIT #1: nam='SQL*Net message from client' ela= 11258483 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278555109496
CLOSE #1:c=0,e=9,dep=0,type=1,tim=1357278555109622
=====================
PARSING IN CURSOR #1 len=113 dep=0 uid=0 oct=3 lid=0 tim=1357278555109766 hv=138414473 ad='cfc02ab8' sqlid='72tvt5h4402c9'
select "NUMBER" num from smprd.probsummarym1 where flag ='f' and affected_item = 'PAUSRWVP39486' order by num asc
END OF STMT
PARSE #1:c=0,e=98,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278555109765
EXEC #1:c=0,e=135,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278555109994
WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278555110053
WAIT #1: nam='ges message buffer allocation' ela= 3 pool=0 request=1 allocated=0 obj#=-1 tim=1357278555111630
WAIT #1: nam='enq: KO - fast object checkpoint' ela= 370 name|mode=1263468550 2=65610 0=1 obj#=-1 tim=1357278555112098
WAIT #1: nam='reliable message' ela= 1509 channel context=3691837552 channel handle=3724365720 broadcast message=3692890960 obj#=-1 tim=1357278555113975
WAIT #1: nam='ges message buffer allocation' ela= 2 pool=0 request=1 allocated=0 obj#=-1 tim=1357278555114051
WAIT #1: nam='enq: KO - fast object checkpoint' ela= 364 name|mode=1263468550 2=65610 0=1 obj#=-1 tim=1357278555114464
WAIT #1: nam='KJC: Wait for msg sends to complete' ela= 9 msg=3686348728 dest|rcvr=65536 mtype=8 obj#=-1 tim=1357278555114516
WAIT #1: nam='ges message buffer allocation' ela= 2 pool=0 request=1 allocated=0 obj#=-1 tim=1357278555114680
WAIT #1: nam='Disk file operations I/O' ela= 562 FileOperation=2 fileno=6 filetype=2 obj#=85520 tim=1357278555115710
WAIT #1: nam='kfk: async disk IO' ela= 5 count=1 intr=0 timeout=4294967295 obj#=85520 tim=1357278555117332
*** 2013-01-04 13:49:15.123
WAIT #1: nam='direct path read' ela= 6243 file number=6 first dba=11051 block cnt=5 obj#=85520 tim=1357278555123628
WAIT #1: nam='db file sequential read' ela= 195 file#=6 block#=156863 blocks=1 obj#=85520 tim=1357278555123968
WAIT #1: nam='db file sequential read' ela= 149 file#=6 block#=156804 blocks=1 obj#=85520 tim=1357278555124216
WAIT #1: nam='db file sequential read' ela= 155 file#=6 block#=156816 blocks=1 obj#=85520 tim=1357278555124430
WAIT #1: nam='db file sequential read' ela= 4826 file#=6 block#=156816 blocks=1 obj#=85520 tim=1357278555129317
WAIT #1: nam='db file sequential read' ela= 987 file#=6 block#=156888 blocks=1 obj#=85520 tim=1357278555130427
WAIT #1: nam='db file sequential read' ela= 3891 file#=6 block#=156888 blocks=1 obj#=85520 tim=1357278555134394
WAIT #1: nam='db file sequential read' ela= 155 file#=6 block#=156912 blocks=1 obj#=85520 tim=1357278555134645
WAIT #1: nam='db file sequential read' ela= 145 file#=6 block#=156920 blocks=1 obj#=85520 tim=1357278555134866
WAIT #1: nam='db file sequential read' ela= 234 file#=6 block#=156898 blocks=1 obj#=85520 tim=1357278555135332
WAIT #1: nam='db file sequential read' ela= 204 file#=6 block#=156907 blocks=1 obj#=85520 tim=1357278555135666
WAIT #1: nam='kfk: async disk IO' ela= 4 count=1 intr=0 timeout=4294967295 obj#=85520 tim=1357278555135850
WAIT #1: nam='direct path read' ela= 6894 file number=6 first dba=72073 block cnt=15 obj#=85520 tim=1357278555142774
WAIT #1: nam='db file sequential read' ela= 4642 file#=6 block#=156840 blocks=1 obj#=85520 tim=1357278555147574
WAIT #1: nam='db file sequential read' ela= 162 file#=6 block#=156853 blocks=1 obj#=85520 tim=1357278555147859
WAIT #1: nam='db file sequential read' ela= 6469 file#=6 block#=156806 blocks=1 obj#=85520 tim=1357278555154407
WAIT #1: nam='db file sequential read' ela= 182 file#=6 block#=156826 blocks=1 obj#=85520 tim=1357278555154660
WAIT #1: nam='db file sequential read' ela= 147 file#=6 block#=156830 blocks=1 obj#=85520 tim=1357278555154873
WAIT #1: nam='db file sequential read' ela= 145 file#=6 block#=156878 blocks=1 obj#=85520 tim=135727855515
Here is the trace file from the test database:
*** 2013-01-04 13:46:11.354
WAIT #1: nam='SQL*Net message from client' ela= 10384792 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278371354075
CLOSE #1:c=0,e=3,dep=0,type=3,tim=1357278371354152
=====================
PARSING IN CURSOR #1 len=113 dep=0 uid=0 oct=3 lid=0 tim=1357278371363427 hv=138414473 ad='c7bd8d00' sqlid='72tvt5h4402c9'
select "NUMBER" num from smprd.probsummarym1 where flag ='f' and affected_item = 'PAUSRWVP39486' order by num asc
END OF STMT
PARSE #1:c=0,e=9251,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278371363426
EXEC #1:c=0,e=63178,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278371426691
WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278371426766
WAIT #1: nam='Disk file operations I/O' ela= 1133 FileOperation=2 fileno=55 filetype=2 obj#=93574 tim=1357278371428069
WAIT #1: nam='db file sequential read' ela= 51 file#=55 block#=460234 blocks=1 obj#=93574 tim=1357278371428158
WAIT #1: nam='direct path read' ela= 31 file number=55 first dba=460235 block cnt=5 obj#=93574 tim=1357278371428956
WAIT #1: nam='direct path read' ela= 47 file number=55 first dba=136288 block cnt=8 obj#=93574 tim=1357278371429099
WAIT #1: nam='direct path read' ela= 80 file number=55 first dba=136297 block cnt=15 obj#=93574 tim=1357278371438529
WAIT #1: nam='direct path read' ela= 62 file number=55 first dba=136849 block cnt=15 obj#=93574 tim=1357278371438653
WAIT #1: nam='direct path read' ela= 17 file number=55 first dba=136881 block cnt=7 obj#=93574 tim=1357278371438750
WAIT #1: nam='direct path read' ela= 35 file number=55 first dba=136896 block cnt=8 obj#=93574 tim=1357278371438855
WAIT #1: nam='direct path read' ela= 22 file number=55 first dba=136913 block cnt=7 obj#=93574 tim=1357278371438936
WAIT #1: nam='direct path read' ela= 19 file number=55 first dba=137120 block cnt=8 obj#=93574 tim=1357278371439029
WAIT #1: nam='direct path read' ela= 36 file number=55 first dba=137145 block cnt=7 obj#=93574 tim=1357278371439114
WAIT #1: nam='direct path read' ela= 18 file number=55 first dba=137192 block cnt=8 obj#=93574 tim=1357278371439193
WAIT #1: nam='direct path read' ela= 16 file number=55 first dba=137201 block cnt=7 obj#=93574 tim=1357278371439252
WAIT #1: nam='direct path read' ela= 17 file number=55 first dba=137600 block cnt=8 obj#=93574 tim=1357278371439313
WAIT #1: nam='direct path read' ela= 15 file number=55 first dba=137625 block cnt=7 obj#=93574 tim=1357278371439369
WAIT #1: nam='direct path read' ela= 22 file number=55 first dba=137640 block cnt=8 obj#=93574 tim=1357278371439435
WAIT #1: nam='direct path read' ela= 702 file number=55 first dba=801026 block cnt=126 obj#=93574 tim=1357278371440188
WAIT #1: nam='direct path read' ela= 1511 file number=55 first dba=801154 block cnt=126 obj#=93574 tim=1357278371441763
WAIT #1: nam='direct path read' ela= 263 file number=55 first dba=801282 block cnt=126 obj#=93574 tim=1357278371442547
WAIT #1: nam='direct path read' ela= 259 file number=55 first dba=801410 block cnt=126 obj#=93574 tim=1357278371443315
WAIT #1: nam='direct path read' ela= 294 file number=55 first dba=801538 block cnt=126 obj#=93574 tim=1357278371444099
WAIT #1: nam='direct path read' ela= 247 file number=55 first dba=801666 block cnt=126 obj#=93574 tim=1357278371444843
WAIT #1: nam='direct path read' ela= 266 file number=55 first dba=801794 block cnt=126 obj#=93574 tim=1357278371445619
Thanks & Rgds,
Vijay
911786 wrote:
Direct path reads can be used for serial tablescans in your version of Oracle, but if you have chained rows in the table then Oracle can read the start of a row by direct path read, yet has to do a single-block read into the cache (the 'db file sequential read') to pick up the next piece of the row.
It is possible that your production system has a lot of chained rows while your test system does not. As a corroborating (though not conclusive) indicator, you might note that if you take (disk reads - db file sequential reads) - which ought to get you close to the total blocks read by direct path - the numbers are very similar.
I'm not 100% convinced that this is the answer for the difference in behaviour, but it's worth a check. If you can force an indexed access path into the table, do something like "select /*+ index(t {pk}) */ max(last_column_in_table) from t" and check the statistic 'table fetch continued row' - it could be close to your number of db file sequential reads. (There are other options for counting the chained rows that might be faster.)
Regards
Jonathan Lewis
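A sketch of the chained-row checks suggested above (the table name is taken from the thread; the CHAINED_ROWS table comes from $ORACLE_HOME/rdbms/admin/utlchain.sql, and both steps should be run on prod and test for comparison):

```sql
-- Option 1: list the chained/migrated rows explicitly.
ANALYZE TABLE smprd.probsummarym1 LIST CHAINED ROWS INTO chained_rows;
SELECT COUNT(*) FROM chained_rows;

-- Option 2: after visiting every row through an indexed access path,
-- check how often the session had to follow a chained-row pointer.
SELECT sn.name, ms.value
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name = 'table fetch continued row';
```

If the production count is large and the test count is small, that supports the chained-row explanation for the extra 'db file sequential read' waits.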
Tags: Database
Similar Questions
-
DB file sequential reads during a full table scan, and the LRU (new)
I would like to add a question on this subject.
According to MOS document 1457693.1:
«... scans around cached blocks can be divided into a number of small, self-contained multiblock reads.»
The question is whether the db file sequential reads issued by a FULL SCAN operation will be cached at the LRU or the MRU end of the list.
I'm afraid of heating up/flooding the buffer cache with a large number of FULL-SCAN db file sequential reads.
For various reasons serial direct path reads are not applicable here, so using that new 11g feature is out of scope for this question.
Thank you for your interest,
Rainer Stenzel
There are a few different patterns of behaviour depending on the size of the table (relative to the size of the buffer cache), but the key question is probably "will the reads increment the touch count" - because if they do not, the blocks will fall off the LRU list fairly quickly; if they do, the blocks could (after a few tablescans) eventually be promoted into the hot half of the cache.
I did some quick tests (which require a little care in the setup) suggesting that the touch count was not incremented, and therefore the reads had no effect on whether the blocks would get preferential treatment when they reached the end of the LRU.
I'm a little puzzled by your expression "cached on the LRU or MRU list" - there are not two different lists; people talk about "the MRU end of the LRU list".
Regards
Jonathan Lewis
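One way to repeat the touch-count test mentioned above (a sketch; x$bh is an internal structure visible only to SYS, and its layout can change between versions):

```sql
-- Distribution of touch counts (TCH) for the buffers holding one object;
-- run before and after a tablescan. If TCH stays at 0 or 1, the blocks
-- will not be promoted towards the hot end of the cache.
SELECT tch, COUNT(*)
FROM   x$bh
WHERE  obj = (SELECT data_object_id
              FROM   dba_objects
              WHERE  owner = 'SMPRD'
              AND    object_name = 'PROBSUMMARYM1')
GROUP  BY tch
ORDER  BY tch;
```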
-
'db file sequential read' wait event slows down an application
It is a rather strange problem. There is an UPDATE statement that hangs on the wait event 'db file sequential read'; once you restart the database, the query works fine again. It happens once a week, usually on Monday or after several days of heavy workload.
I checked the CPU and it is fine; memory is fine too, although the SGA and PGA have reached their maximum sizes. Disk throughput seems to be OK, since every other session on the database looks fine.
I guess there is some missing configuration that would avoid having to restart the database each week.
Any help is greatly appreciated.
Hello
If you want the same join order of the tables as the plan uses after a restart, just go with the ORDERED hint:
UPDATE item_work_step SET user_name = :b1, terminal = SYS_CONTEXT ('USERENV', 'TERMINAL'), status_cd = 'IN PROCESS' WHERE item_work_step_route_id = (SELECT item_work_step_route_id FROM (SELECT /*+ORDERED */ iws.item_work_step_route_id FROM user_role ur, work_step_role wsr, work_step ws, app_step aps, item_work_step iws, item_work iw, item i WHERE wsr.role_cd = ur.role_cd AND ws.work_step_id = wsr.work_step_id AND aps.step_cd = ws.step_cd AND iws.work_step_id = ws.work_step_id AND iws.work_id = ws.work_id AND iws.step_cd = ws.step_cd AND iws.status_cd = 'READY' AND iw.item_work_id = iws.item_work_id AND iw.item_id = iws.item_id AND iw.work_id = iws.work_id AND i.item_id = iws.item_id AND i.item_id = iw.item_id AND i.deleted = 'N' AND i.item_type_master_cd = :b3 AND ur.user_name = :b1 AND aps.app_name = :b2 AND ( iws.assignment_user_or_role IS NULL OR ( iws.assignment_user_or_role IN ( SELECT ur.role_cd FROM user_role ur WHERE ur.user_name = :b1 UNION ALL SELECT :b1 FROM dual) AND iws.assignment_expiration_time > SYSDATE ) OR ( iws.assignment_user_or_role IS NOT NULL AND iws.assignment_expiration_time <= SYSDATE ) ) AND (iws.pend_date IS NULL OR iws.pend_date <= SYSDATE ) ORDER BY aps.priority, LEAST (NVL (iw.priority, 9999), NVL ((SELECT NVL (priority, 9999) FROM item_work WHERE item_id = i.parent_id AND work_id = 42), 9999 ) ), DECODE (i.a3, NULL, 0, 1), NVL (iw.sla_deadline, (SELECT sla_deadline FROM item_work WHERE item_id = i.parent_id AND work_id = 42) ), i.parent_id, i.item_id) unclaimed_item_work_step WHERE ROWNUM <= 1)
If you want to get rid of the nested loops use USE_HASH
UPDATE item_work_step SET user_name = :b1, terminal = SYS_CONTEXT ('USERENV', 'TERMINAL'), status_cd = 'IN PROCESS' WHERE item_work_step_route_id = (SELECT item_work_step_route_id FROM (SELECT /*+ORDERED USE_HASH(ur wsr ws aps iws iw i) */ iws.item_work_step_route_id FROM user_role ur, work_step_role wsr, work_step ws, app_step aps, item_work_step iws, item_work iw, item i WHERE wsr.role_cd = ur.role_cd AND ws.work_step_id = wsr.work_step_id AND aps.step_cd = ws.step_cd AND iws.work_step_id = ws.work_step_id AND iws.work_id = ws.work_id AND iws.step_cd = ws.step_cd AND iws.status_cd = 'READY' AND iw.item_work_id = iws.item_work_id AND iw.item_id = iws.item_id AND iw.work_id = iws.work_id AND i.item_id = iws.item_id AND i.item_id = iw.item_id AND i.deleted = 'N' AND i.item_type_master_cd = :b3 AND ur.user_name = :b1 AND aps.app_name = :b2 AND ( iws.assignment_user_or_role IS NULL OR ( iws.assignment_user_or_role IN ( SELECT ur.role_cd FROM user_role ur WHERE ur.user_name = :b1 UNION ALL SELECT :b1 FROM dual) AND iws.assignment_expiration_time > SYSDATE ) OR ( iws.assignment_user_or_role IS NOT NULL AND iws.assignment_expiration_time <= SYSDATE ) ) AND (iws.pend_date IS NULL OR iws.pend_date <= SYSDATE ) ORDER BY aps.priority, LEAST (NVL (iw.priority, 9999), NVL ((SELECT NVL (priority, 9999) FROM item_work WHERE item_id = i.parent_id AND work_id = 42), 9999 ) ), DECODE (i.a3, NULL, 0, 1), NVL (iw.sla_deadline, (SELECT sla_deadline FROM item_work WHERE item_id = i.parent_id AND work_id = 42) ), i.parent_id, i.item_id) unclaimed_item_work_step WHERE ROWNUM <= 1)
and for the small tables you can try adding, for example, FULL(ur) FULL(wsr).
It can be rewritten in a different way, but this is the fastest way to try how the query will behave if you rewrite it. Check the execution plan in case some of the tables forced into order have no join condition between them, because then you can get a Cartesian join - but it looks like it will be OK.
You can view the query plan in the EM console.
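The plan check mentioned above can also be done from SQL*Plus with EXPLAIN PLAN and DBMS_XPLAN (a sketch; substitute the full hinted statement, then look for MERGE JOIN CARTESIAN steps in the output):

```sql
-- Replace <hinted update> with the full hinted statement from above.
EXPLAIN PLAN FOR <hinted update>;

SELECT * FROM TABLE(dbms_xplan.display);
```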
Concerning
-
Reason for 'control file sequential read' waits?
Hello
We have a 10.2.0.4.0 two-node RAC database on Windows 2003 (all 64-bit).
Looking at the 'Top 5 Timed Events' section of AWR reports (for 1 hour), we still see 'CPU time' as the number one event (due to our application; certain queries are hopefully being looked at by the developers now...), but recently I see 'control file sequential read' as the number two event, with 3,574,633 waits and 831 s of waited time. I was hoping to find out what was causing this high number of waits. I started by trying to find a particular query that experienced this wait often, so I ran this SQL:
select sql_id, count(*) from dba_hist_active_sess_history where event_id = (select event_id from v$event_name where name = 'control file sequential read') group by sql_id order by 2 desc;
As I hoped, the top sql_id returned really stands out, with a count of 14,182 (the next sql_id has a count of 68). This is the SQL text for that id (OK, the code is a little strange, but I'm already having them change it):
WITH unit AS( SELECT UNIQUE S.unit_id FROM STOCK S, HOLDER H WHERE H.H_ID = S.H_ID AND H.LOC_ID = :B2 AND S.PROD_ID = :B1 ) SELECT DECODE((SELECT COUNT(*) FROM unit), 1, unit_ID, NULL) FROM unit WHERE ROWNUM = 1 ;
My question is:
Why / what in this code would need to read from the control file?
Kind regards
ADOS
PS - I also checked the block number (p2) in dba_hist_active_sess_history for this sql_id and event_id, and it is always one of the same 5 blocks in the controlfile. I've dumped the controlfile, but don't see anything interesting (though, to be fair, it's the first time I've dumped a controlfile, so I have no real idea what to look for!).
Hello
ADO wrote:
WITH unit AS( SELECT UNIQUE S.unit_id FROM STOCK S, HOLDER H WHERE H.H_ID = S.H_ID AND H.LOC_ID = :B2 AND S.PROD_ID = :B1 ) SELECT DECODE((SELECT COUNT(*) FROM unit), 1, unit_ID, NULL) FROM unit WHERE ROWNUM = 1 ;
This query contains a subquery factoring clause; and as it refers to 'unit' twice in the main part, the chances are that the factored subquery is being materialized into a global temporary table in the SYS schema with a SYS_TEMP_% name, which is then accessed twice later in the execution plan (check for the presence of a TEMP TABLE TRANSFORMATION step). The step that populates this intermediate table has to write to the temporary tablespace - and that is done via direct writes. Each direct write to a segment (i.e. directly to the file on disk) requires a data file status check - is it online or offline - and that check is done by accessing the control file. So you see these waits. This is one of the many reasons why subquery factoring is not always good for production OLTP environments.
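If the TEMP TABLE TRANSFORMATION (and its control file accesses) is indeed the problem, one option worth testing is the INLINE hint (undocumented but widely used), which asks Oracle not to materialize the factored subquery - a sketch based on the poster's query:

```sql
WITH unit AS (
    SELECT /*+ INLINE */ UNIQUE S.unit_id
    FROM   STOCK S, HOLDER H
    WHERE  H.H_ID   = S.H_ID
    AND    H.LOC_ID = :B2
    AND    S.PROD_ID = :B1
)
SELECT DECODE((SELECT COUNT(*) FROM unit), 1, unit_id, NULL)
FROM   unit
WHERE  ROWNUM = 1;
```

Verify with the execution plan that the TEMP TABLE TRANSFORMATION step has gone; whether the inline version is cheaper overall depends on the cost of running the subquery twice.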
-
Query on dba_free_space hangs waiting on 'db file sequential read'
Hi all,
Env: 10gR2 on Windows NT
I issued the query:
select tablespace_name, sum(bytes)/1024/1024 from dba_free_space group by tablespace_name;
and it just keeps waiting.
I checked the wait event in v$session and it is 'db file sequential read'.
I put a trace on the session before launching the above query:
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
-----
Parse 1 0.06 0.06 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
-----
total 2 0.06 0.06 0 0 0 0
Misses in library cache during parse: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
----------------------------------------
db file sequential read 13677 0.16 151.34
SQL*Net message to client 1 0.00 0.00
db file scattered read 281 0.01 0.53
latch: cache buffers lru chain 2 0.00 0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
------
Parse 13703 0.31 0.32 0 0 0 0
Execute 14009 0.75 0.83 0 0 0 0
Fetch 14139 0.48 0.74 26 56091 0 15496
------
total 41851 1.54 1.89 26 56091 0 15496
Misses in library cache during parse: 16
Misses in library cache during execute: 16
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
----------------------------------------
db file sequential read 26 0.00 0.12
1 user SQL statements in session.
14010 internal SQL statements in session.
14011 SQL statements in session.
The PHYRDS (from dba_hist_filestatxs) on my system01.dbf is 161,028,980 for the final node.
Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
------
db file sequential read 1,134,643 7,580 7 56.8 User I/O
db file scattered read 940,880 5,025 5 37.7 User I/O
CPU time 967 7.2
control file sequential read 4,987 3 1 0.0 System I/O
control file parallel write 2,408 1 1 0.0 System I/O
Could someone shed some light on what is happening here?
TIA,
JJ
In certain circumstances, querying the dictionary can be slow, usually due to bad execution plans caused by stale statistics. Try gathering statistics using dbms_stats.gather_fixed_objects_stats; it has worked for me before.
You can also read Note 414256.1, 'Poor Performance For Tablespace Page in Grid Control Console', which also indicates a possible problem with the recycle bin. HTH
Enrique
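The two suggestions above can be sketched as follows (gathering fixed-object statistics needs suitable privileges, and purging the recycle bin permanently removes dropped objects, so check the count first):

```sql
-- Refresh optimizer statistics on the X$ fixed tables that underlie
-- dictionary views such as dba_free_space:
EXEC dbms_stats.gather_fixed_objects_stats;

-- dba_free_space also has to account for space held by dropped objects,
-- so a large recycle bin can make it very slow to query:
SELECT COUNT(*) FROM dba_recyclebin;
PURGE DBA_RECYCLEBIN;   -- as a privileged user, if the count is large
```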
-
How can I get rid of the file name, date and size of the photo in the upper left corner of the Develop module when I'm not working on the photos?
All the shortcuts are listed in the View > Loupe Info menu.
-
When I try to open a game (Chess Titans, InkBall, etc.), I get the following message: "c:\Program Files\Microsoft Games\Chess\Chess.exe The specified path does not exist. Check the path, and then try again." What should I do to get my games back so I can play them? I've tried enabling/disabling Windows features, but it did nothing. I also tried going through Run, but that did not work. And I tried the command prompt, but it says that I am not the administrator, which I am.
Games not accessible
I haven't consciously made any changes, and the games do not work in any user account. When I ran sfc /scannow (command prompt) it said that I was not the administrator. I learned from another source how to solve this problem: in the Start menu, right-click the Command Prompt icon, and then click Run as administrator. Hope this helps anybody with this problem. If you do not have Command Prompt in your Start menu, just type "command prompt" in the search box and it should appear.
Anyway, my games work now. Thank you.
-
All MXF files are read as the same clip?
Hello
I recorded a few interviews with a Canon XF100, which records raw files in MXF format; I'm not a big fan of this codec, but I had no other camera. I imported all the files into Premiere Pro CC and began to edit them, but the next time I opened the project, it read all the files as the same clip. The Canon XF100 automatically cuts one long take into 5-minute clips if it runs longer than 5 minutes. Thus, all the clips for an interview now look the same as the first 5 minutes of that interview. It is only in Premiere; the actual files are fine, I checked.
So, what is the problem and how do I fix it? Thanks in advance
Ante
Did you start this project in CC 7.1 or an earlier version and then upgrade to 7.2 or 7.2.1? If so, then you have probably hit the issue explained by Wil Renczes in this thread: http://forums.adobe.com/thread/1358012?tstart=0
-
[Database Toolbox] Possibility to load from an XML file into LabVIEW and then into the database
Before writing to my database, I want to save it and reload it if the user cancels the new load, which can last several minutes.
If he cancels the load, I get my previous database data back.
I managed to save my database to XML through the "DB Tools Save Recordset To File" VI. This VI writes my database table directly into the XML file. I expected the corresponding load VI to do the same thing in reverse - load the file and save it into my database - but it just gives a recordset in LabVIEW.
Question: is it possible to load the XML file into the database directly through LabVIEW?
Why, then, do these VIs not have symmetrical behaviour?
I don't know, but I just thought I would chime in: if you can't do it with LabVIEW, you might want to look at doing your write inside a database "transaction", if your DBMS supports it (most do, except MS Access).
-
Well, I'm puzzled! I was offline for a week, and I recently installed RealPlayer; I also have iTunes and Media Center. Well, someone removed my music! From the Bee Gees to Mr. Bungle, the files are gone! Not in the trash. What's up? No one else has access to my computer. It's an HP Pavilion dv9620 with Vista.
I installed BitComet and just installed Malwarebytes Anti-Malware; I'm already running the McAfee suite.
ALSO, I got a message that one of my drives is going to die; it's my DATA D: drive (the disc drive itself seems OK). I'm trying to do a backup but cannot; it gives me an error code: 0x80070002. I plan on replacing it soon with a bigger drive.
None of my videos are deleted; the stuff from YouTube is untouched.
I can't understand where my music went! Help! And since I downloaded RealPlayer, I noticed that when I burn a disc with Windows Media Center, it burns in CDA instead of MP3 format. Just what happened here?
Thanks; I was able to make an HP recovery disk, which I hope is what will be needed for the reinstall. My D: drive is failing, so I will pull it out, install a 500 GB drive as my C: drive, and put the old 120 GB C: drive in as the D: drive.
Any advice is welcome.
About the deleted MUSIC files: it appears only full albums and ripped CDs were affected; none of the songs from YouTube were touched. Weird...
I checked under my PC USERS and deleted a user called ASP.NET; I have no idea where it came from. I'll be blocking BitComet in the future.
It sucks having to rip my CDs into my media library again... have I been hacked, huh? -
File saved incorrectly and parts of the image are missing
I was working in Photoshop CC and saved as I made changes. I added teeth, a skull and the eyes, and saved - no problems. I added the background and saved - no problem. I added the flame and saved... the photo is destroyed. When I clicked Save it changed the image, and now the background is only partially there, the flames have missing parts, the dog has missing pieces, and my picture, done for a class assignment, is destroyed. If there is a solution, please help me; otherwise I'll have to go back and redo everything, which wastes far too much time, because I have 3 other assignments to do, and that doesn't count making posts on the student site and watching the video tutorials for the other assignments.
Here is the incorrectly saved version, exported as a PNG, showing how wrong it was when saved:
Help, please!
Julie S.
Please see "File saved incorrectly and some parts of the image are missing"
(Double Post)
-
Is it really true that I can't view file creation date and time as a bin column?
I need to sort my material by the file's creation date and time.
It seems this most basic piece of metadata is missing in Premiere Pro.
I hope someone can point out my error here - otherwise, it's a MAJOR FAILURE by Adobe. Surely, after file name, it is the most basic bit of metadata required from a file.
Otherwise there is no workaround, other than batch renaming the clips at the Finder level. (I tried the tags and comments at the Finder level, but can't seem to access those either.)
Hi Jim,
Unfortunately, even when these data are present for the clips, the ability to sort by them is broken.
It caused a lot of performance issues, so yes, it has been disabled,
and they still haven't fixed it yet.
This is fixed in the next version. However, you must add the columns in the metadata display.
Thank you
Kevin
-
Hello
I am currently using APEX 4.2 and I want to know whether you still have to import images, CSS and JavaScript files, respectively, into a workspace (as shared components) to use them in an application. Alternatively, could they be referenced from a directory on the file system? I'm still not familiar with CSS or JavaScript technology, and I'm trying to find out as much as possible about how and where APEX can use them. All comments gratefully received.
Kind regards
Kevin.
Since you're using the EPG, you will not be able to reference files directly on your operating system. You can load them into XDB using FTP, WebDAV or PL/SQL. The PL/SQL method is described in these posts:
A random overview of Oracle: loading images in Oracle XDB
Loading images custom when using EPG
I don't know if there is any benefit to uploading them into XDB compared to the APEX workspace. I guess it would make sense if you had a file that you wanted referenced by more than one workspace and did not want to upload the file into each workspace.
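As a minimal PL/SQL sketch of the upload route (the directory object, path and file name here are hypothetical; DBMS_XDB.CREATERESOURCE stores the file as an XDB repository resource that the EPG can then serve):

```sql
DECLARE
  ok BOOLEAN;
BEGIN
  -- 'IMG_DIR' is an assumed Oracle directory object pointing at the
  -- file-system folder that holds logo.png.
  ok := DBMS_XDB.CREATERESOURCE('/images/logo.png',
                                BFILENAME('IMG_DIR', 'logo.png'));
  IF NOT ok THEN
    RAISE_APPLICATION_ERROR(-20001, 'createResource failed');
  END IF;
  COMMIT;
END;
/
```

The file should then be reachable at /images/logo.png under the EPG's HTTP port and can be referenced from APEX templates.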
-
Disabling storage paths on ESX hosts and guest VMs
Hi all
Hope that this question has not been asked before. (I did a search but could not find anything specific).
I'm looking for reassurance about a proposed change, we seek to bring to our system.
We need to upgrade one of our Brocade switches, which will cause some interruption of service on the said switch. (We have two Brocades, and each ESX host has 2 x single-port HBAs, with a connection to each Brocade.)
From the literature I have read, it seems that I have to force datastore traffic over the other switch to allow me to remove the Brocade for its upgrade.
- Proposed method: for each datastore attached to a host, I intend to disable the relevant path and force a failover to the remaining path that leads to the secondary Brocade.
I have a few questions about this approach.
(1) I assume that manually disabling a path should result in no downtime or loss of connectivity while the failover occurs in an operational environment? (We will do it off-hours, but it's a 24/7 environment.)
(2) Although the majority of our servers use VMFS, some use raw LUNs. Would these servers encounter difficulties if I force a failover by disabling a path?
Thanks in advance!
Hello
Welcome to the community
Hope that this question has not been asked before. (I did a search but could not find anything specific).
It has been asked before, several times; in any case:
(1) I assume that manually disabling a path should result in no downtime or loss of connectivity while the failover occurs in an operational environment? (We will do it off-hours, but it's a 24/7 environment.)
(2) Although the majority of our servers use VMFS, some use raw LUNs. Would these servers encounter difficulties if I force a failover by disabling a path?
- Correct; even if you removed an HBA cable, ESX would shift all traffic to the next available path. Don't forget to have the VMware Tools installed in the virtual machines.
- No problem with raw LUNs; they would behave just like VMFS.
-
Can't see handles (or grab them) with the Direct Selection tool in 5.5?
In previous versions of Illustrator (including 5.5), clicking on a path with the Direct Selection tool "selected" that path or shape and showed all the anchors and handles, so I could grab them. Now, when I click on a path or shape with Direct Selection, I get nothing. If I know where an anchor is, I can grab it directly (with a guess) and move it - which shows me its handles, but I can't grab those.
I've never had this problem before. Is there some preference that I need to set? Do I have a bad installation? Does anyone else have this problem?
Thank you
ND
Does View > Show Edges help?
Maybe you are looking for
-
G560 Upgrade (CPU, GPU)
Hello, I have a Lenovo G560 with an Intel Core i3-330M processor, an nVidia GeForce 310M and 4 GB of RAM. Is there a list of CPUs I can use (preferably an i5)? Is it possible to upgrade the GPU too? Thank you very much!
-
Is Football Manager 2011 compatible with Windows XP?
-
Windows Home premium - Portuguese
-
Hi Forum, my radio feature suddenly stopped working. Other features using the card and internal memory work fine. Any ideas? Thank you! Mike
-
E200R playlists don't synchronize
Hello, I loaded my E200R the other day and synced everything that is new in my music collection, but when I went to look for a playlist, there were none at all! Except the OnTheGo playlist, of course. There used to be about 15 or so a couple of m