Direct path read event
Hi all,
We use 10.2.0.3 on SPARC Solaris 10, 64-bit OS.
I've been wondering about two wait events in my environment that take a long time. Yes, we do a lot of grouping, sorting and hash joins that spill data to temp.
Most of the time goes to "direct path read temp". I traced it and saw that it always reads just 1 block at a time (p3=1), versus other operations such as scattered reads or direct path writes, which read/write 32 blocks at a time.
Event                      Waits     Max. Wait  Total Waited
direct path read temp      241389    4.19       2511.08
direct path write          2         0.00       0.00
So, is there a reason why direct path read temp reads only a single block at a time? Can we set something so that it reads multiple blocks in one operation?
Nico wrote:
Thank you Greg for the information. Have you found any workaround for 10.2.0.3?
It is not a workaround. The change in 10.2.0.4 fixed a bug in how temp blocks are read for sorts. It should all but eliminate the single-block temp reads. It is much more efficient with this fix. I saw about a 2x increase in performance in many cases in actual tests using this fix.
You can try and see if Oracle Support will backport it to 10.2.0.3, but I recommend you go to 10.2.0.4 anyway due to the considerable number of bug fixes. It won't be long before 10.2.0.5 comes out as well.
--
Kind regards
Greg Rahn
http://structureddata.org
Tags: Database
Similar Questions
-
About the 'direct path read' wait event
As we know, serial direct path reads for full table scans are a new feature of Oracle 11g. According to the Oracle documentation and other sources, when a full table scan happens, direct path reads take place. But why does it happen? I don't clearly understand.
Here is the description of the Oracle Document:
http://docs.Oracle.com/CD/E18283_01/server.112/e17110/waitevents003.htm#sthref3849
During Direct Path operations, the data is asynchronously read from the database files. At some stage the session needs to make sure that all outstanding asynchronous I/Os have been completed to disk. This can also happen if, during a direct read, no more slots are available to store outstanding load requests (a load request could consist of multiple I/Os).
Question:
1. "During direct path operations the data is asynchronously read from the database files." What does this statement mean? What is an "asynchronous read"?
2. Please describe the above in detail.
3. Can someone clearly explain why the direct path read wait happens?
Thanks in advance.
Lonion
Lonion wrote:
Question:
1. "During direct path operations the data is asynchronously read from the database files." What does this statement mean? What is an "asynchronous read"?
2. Please describe the above in detail.
3. Can someone clearly explain why the direct path read wait happens?
If you want to get very technical, Frits Hoogland wrote a lot about the implementation:
http://fritshoogland.WordPress.com/2013/05/09/direct-path-read-and-fast-full-index-scans/
http://fritshoogland.WordPress.com/2013/01/04/Oracle-11-2-and-the-direct-path-read-event/
http://fritshoogland.WordPress.com/2012/12/27/Oracle-11-2-0-1-and-the-KFK-async-disk-IO-wait-event/
http://www.UKOUG.org/what-we-offer/library/about-Multiblock-reads/about-Multiblock-reads.PDF
Regards
Jonathan Lewis
-
Wait events "direct path write" and "direct path read".
Hello
We have a query that takes more than 2 minutes. It's a 9.2.0.7 database. We traced the query with trace/tkprof and found many 'direct path write' and 'direct path read' wait events in the trace file.
WAIT #3: nam='direct path write' ela= 5 p1=201 p2=70710 p3=15
WAIT #3: nam='direct path read' ela= 170 p1=201 p2=71719 p3=15
In the above, p1=201 should be the file id, but we could not find any data file, temp file, or control file with id 201.
Can you please tell us what "p1=201" means here, and how to identify the file that is causing the problem?
Thank you
Sravan

What does:

show parameter db_files

return? I suspect it returns 200.
Direct path read and direct path write events are reads and writes to the TEMP tablespace. In the wait events, the file# is reported as db_files + the temp file id. So, 201 means temp file #1.
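A quick sanity check of that arithmetic (a sketch; it assumes db_files really is 200, as above, and that you can query the v$ views):

```sql
-- Confirm the db_files setting the wait event numbering is based on.
select to_number(value) as db_files
from   v$parameter
where  name = 'db_files';

-- Map the wait event's p1 back to a temp file:
-- temp file id = p1 - db_files.
select file#, name
from   v$tempfile
where  file# = 201 - (select to_number(value)
                      from   v$parameter
                      where  name = 'db_files');
```

If the second query returns temp file #1, the p1 value in your trace is explained.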
Now, as to your real performance issue.
Without seeing the SQL and the corresponding execution plan, it is impossible to be sure. However, the most frequent causes of temp I/O are sort operations and GROUP BY operations.
If you decide to post your SQL and execution plan, please be sure to make them readable by formatting them. Information on how to do that can be found here.
Hope that helps,
-Mark
Published by: mbobak on May 1st, 2011 01:50
-
Parallel insert append sessions wait on "direct path read temp"
Hi all.
The database is 11.2.0.3 on a linux machine.
I issued the following query, and I found that the parallel query sessions were waiting on "direct path read temp".
And the table SDPCSM.SDP_CHILD_SVC_PROFATTR_HIS has NO indexes and is in nologging mode.
Why do the PQ sessions wait on "direct path read/write temp"?
Thanks in advance.

insert /*+ append parallel (8) */ into SDPCSM.SDP_CHILD_SVC_PROFATTR_HIS
select /*+ parallel (SDP_CHILD_SVC_PROFATTR_HIS_E, 8) */ *
from SDPCSM.SDP_CHILD_SVC_PROFATTR_HIS_E@tb_link;
Best regards.

Please check this blog:
http://www.Confio.com/blog/Oracle-wait-event-explained-direct-path-read-temp -
Is it reasonable to force a full table scan (direct path read) on a large table?
Hello
I have Oracle 11.2.0.3 on a Sun Solaris 10 SPARC machine with a 25 GB SGA.
One of my SQL statements does an index scan on a 45 GB table, where the size of the index is 14 GB and the throughput for sequential (single-block) reads is 2 MB/s.
So: index = 14,000 MB; 14 GB / 2 MB/s = 7000 sec, roughly 2 hours to scan the index.
Direct path read throughput is 500 MB/s for another SQL statement that reads everything via direct path.
At that rate, a direct path read (FTS) of a 7 GB table completes in 12-13 seconds. So in my case it should take about 100 seconds to scan the entire table. It is a monthly report and does not run often.
Should I use a trick to force the SQL to do a full scan, exploit direct path reads, and be done in a few minutes? Assuming it does an FTS (direct path read).
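Something like this minimal sketch is what I have in mind (the table and column names are placeholders):

```sql
-- The FULL hint requests a tablescan, which 11g can satisfy with
-- serial direct path reads for a large segment.
select /*+ full(t) */ max(last_updated)
from   big_table t
where  report_period = '2013-01';
```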
Of course it will be tested and verified, but the question is: is this a sensible approach? Does anyone have experience forcing it?
Any suggestions would be helpful... I'll test this on a different machine tomorrow.
Best regards
Maran
82 million rows and a clustering factor of 18 million? And really 17 million rows to retrieve? Yet your result set after the hash join is 3500 rows, although the optimizer expects 16 rows.
I would say that the statistics are not representative, or are not being used for this SQL. Certainly the index does not match the query predicates.
The fact that the index also uses virtual columns only adds to the confusion.
Hemant K Chitale
-
Why does a parallel query use direct path read?
I think it is because access to the buffer cache requires latches and locks on the buffer blocks; if parallel query did not use direct path reads, parallel queries would suffer from that serialization, i.e. Oracle's latch and lock mechanisms, so direct path read is chosen to avoid it.
Does someone have a good explanation?
Edited by: Jeremiah on December 8, 2008 07:52

Jinyu wrote:
I think it is because access to the buffer cache requires latches and locks on the buffer blocks; if parallel query did not use direct path reads, parallel queries would suffer from that serialization, i.e. Oracle's latch and lock mechanisms, so direct path read is chosen to avoid it.

Jinyu,
actually, yes, I think that's it. Parallel query is designed for scanning very large segments, because the overhead of inter-process communication and of setting up and maintaining the parallel slaves makes the operation inefficient for small segments.
So I guess the assumption is that the segment to scan is probably very large, the fraction of its blocks in the buffer cache is very low compared to the number of blocks to scan, and therefore the overall cost reduction from reading the blocks directly, without going through all the serialization issues of the buffer cache, should outweigh the drawback of the "unbuffered" blocks, and it preserves the buffer cache for objects that benefit more from caching.
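As a rough way to see this in practice, a sketch (big_table is a placeholder; note that the slaves' direct path reads show up instance-wide rather than in the coordinator's own session statistics):

```sql
-- Snapshot the instance-wide count before the parallel scan ...
select event, total_waits, time_waited
from   v$system_event
where  event = 'direct path read';

-- ... run a parallel tablescan ...
select /*+ full(t) parallel(t 4) */ count(*) from big_table t;

-- ... then re-run the v$system_event query; the delta approximates
-- the direct path reads done by the PX slaves.
```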
Kind regards
Randolf

Oracle related blog stuff:
http://Oracle-Randolf.blogspot.com/

SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.NET/projects/SQLT-pp/ -
db file sequential read and direct path read
Hello
Could someone please clear up my doubts about 'db file sequential read' and 'direct path read', and help me understand the tkprof report correctly?
Please suggest if my understanding for scenario below is correct.
We have an 11.2.0.1 two-node RAC + ASM production environment, and its test environment, which is a stand-alone database.
The query performs well in production compared to the test database.
The table has 254+ columns (264) with many LOB columns; however, no LOB column is currently selected in the query.
I read in Metalink that a table with more than 254 columns has intra-row chaining, which causes 'db file sequential read' waits during a full table scan.
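If you want to confirm the chaining, one sketch (run it against a test copy of the table, since ANALYZE overwrites DBMS_STATS statistics):

```sql
-- ANALYZE (unlike DBMS_STATS) populates CHAIN_CNT, the count of
-- chained/migrated rows.
analyze table probsummarym1 compute statistics;

select num_rows, chain_cnt
from   user_tables
where  table_name = 'PROBSUMMARYM1';
```

A high CHAIN_CNT relative to NUM_ROWS would support the intra-row chaining explanation.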
Here are some details on the table, which is similar in prod and test; the block size is 8k:

TABLE                          UNUSED BLOCKS   TOTAL BLOCKS    HIGH WATER MARK
------------------------------ --------------- --------------- ---------------
PROBSUMMARYM1                  0               17408           17407

What I understand from the tkprof in the production environment for a given session is:
1 - The query resulted in 19378 disk reads and 145164 consistent gets.
2 - Of the 19378 disk reads, 2425 gave rise to the wait event 'db file sequential read'.
Is it correct that the remaining disk reads were also 'db file sequential reads', but so fast that no wait event was recorded for them?
3 - There are also 183 'direct path read' waits. Are they because of the ORDER BY clause of the query?
The same query, when run in the non-RAC, non-ASM stand-alone test database, gave the tkprof below.

SQL ID: 72tvt5h4402c9 Plan Hash: 1127048874

select "NUMBER" num from smprd.probsummarym1 where flag ='f'
and affected_item = 'PAUSRWVP39486' order by num asc

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.53       4.88      19378     145164          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.53       4.88      19378     145164          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  SORT ORDER BY (cr=145164 pr=19378 pw=0 time=0 us cost=4411 size=24 card=2)
      0   TABLE ACCESS FULL PROBSUMMARYM1 (cr=145164 pr=19378 pw=0 time=0 us cost=4410 size=24 card=2)

Elapsed times include waiting on following events:
  Event waited on                          Times Waited  Max. Wait  Total Waited
  ---------------------------------------- ------------ ---------- ------------
  SQL*Net message to client                           1       0.00         0.00
  ges message buffer allocation                       3       0.00         0.00
  enq: KO - fast object checkpoint                    2       0.00         0.00
  reliable message                                    1       0.00         0.00
  KJC: Wait for msg sends to complete                 1       0.00         0.00
  Disk file operations I/O                            1       0.00         0.00
  kfk: async disk IO                                274       0.00         0.00
  direct path read                                  183       0.01         0.72
  db file sequential read                          2425       0.05         3.71
  SQL*Net message from client                         1       2.45         2.45
Does this mean that:
1 - Here too, the reads happened through 'db file sequential read', but they were so fast that no wait event was recorded?
2 - The 'direct path read' waits are because of the ORDER BY clause in the query?
In the trace files from both the Production and Test databases, I see that the 'direct path read' waits are against the same data file that stores the table.

SQL ID: 72tvt5h4402c9 Plan Hash: 1127048874

select "NUMBER" num from smprd.probsummarym1 where flag ='f'
and affected_item = 'PAUSRWVP39486' order by num asc

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.06          0          0          0           0
Fetch        1      0.10       0.11      17154      17298          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.10       0.18      17154      17298          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  SORT ORDER BY (cr=17298 pr=17154 pw=0 time=0 us cost=4694 size=12 card=1)
      0   TABLE ACCESS FULL PROBSUMMARYM1 (cr=17298 pr=17154 pw=0 time=0 us cost=4693 size=12 card=1)

Elapsed times include waiting on following events:
  Event waited on                          Times Waited  Max. Wait  Total Waited
  ---------------------------------------- ------------ ---------- ------------
  SQL*Net message to client                           1       0.00         0.00
  Disk file operations I/O                            1       0.00         0.00
  db file sequential read                             3       0.00         0.00
  direct path read                                  149       0.00         0.03
  SQL*Net message from client                         1       2.29         2.29
Then how come these 'direct path read' waits, if they are caused by the ORDER BY clause of the query, are not against the sort segment or the PGA?
Or does a direct path read pull data from disk straight into the PGA, while a 'db file sequential read' does not?
As far as I know, 'direct path read' is the wait event seen when data is read from disk directly into the PGA, or when a sort segment or the temp tablespace is used.
Here is the example trace file from the Production database:
Here is the trace file for the test database:

*** 2013-01-04 13:49:15.109
WAIT #1: nam='SQL*Net message from client' ela= 11258483 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278555109496
CLOSE #1:c=0,e=9,dep=0,type=1,tim=1357278555109622
=====================
PARSING IN CURSOR #1 len=113 dep=0 uid=0 oct=3 lid=0 tim=1357278555109766 hv=138414473 ad='cfc02ab8' sqlid='72tvt5h4402c9'
select "NUMBER" num from smprd.probsummarym1 where flag ='f' and affected_item = 'PAUSRWVP39486' order by num asc
END OF STMT
PARSE #1:c=0,e=98,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278555109765
EXEC #1:c=0,e=135,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278555109994
WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278555110053
WAIT #1: nam='ges message buffer allocation' ela= 3 pool=0 request=1 allocated=0 obj#=-1 tim=1357278555111630
WAIT #1: nam='enq: KO - fast object checkpoint' ela= 370 name|mode=1263468550 2=65610 0=1 obj#=-1 tim=1357278555112098
WAIT #1: nam='reliable message' ela= 1509 channel context=3691837552 channel handle=3724365720 broadcast message=3692890960 obj#=-1 tim=1357278555113975
WAIT #1: nam='ges message buffer allocation' ela= 2 pool=0 request=1 allocated=0 obj#=-1 tim=1357278555114051
WAIT #1: nam='enq: KO - fast object checkpoint' ela= 364 name|mode=1263468550 2=65610 0=1 obj#=-1 tim=1357278555114464
WAIT #1: nam='KJC: Wait for msg sends to complete' ela= 9 msg=3686348728 dest|rcvr=65536 mtype=8 obj#=-1 tim=1357278555114516
WAIT #1: nam='ges message buffer allocation' ela= 2 pool=0 request=1 allocated=0 obj#=-1 tim=1357278555114680
WAIT #1: nam='Disk file operations I/O' ela= 562 FileOperation=2 fileno=6 filetype=2 obj#=85520 tim=1357278555115710
WAIT #1: nam='kfk: async disk IO' ela= 5 count=1 intr=0 timeout=4294967295 obj#=85520 tim=1357278555117332
*** 2013-01-04 13:49:15.123
WAIT #1: nam='direct path read' ela= 6243 file number=6 first dba=11051 block cnt=5 obj#=85520 tim=1357278555123628
WAIT #1: nam='db file sequential read' ela= 195 file#=6 block#=156863 blocks=1 obj#=85520 tim=1357278555123968
WAIT #1: nam='db file sequential read' ela= 149 file#=6 block#=156804 blocks=1 obj#=85520 tim=1357278555124216
WAIT #1: nam='db file sequential read' ela= 155 file#=6 block#=156816 blocks=1 obj#=85520 tim=1357278555124430
WAIT #1: nam='db file sequential read' ela= 4826 file#=6 block#=156816 blocks=1 obj#=85520 tim=1357278555129317
WAIT #1: nam='db file sequential read' ela= 987 file#=6 block#=156888 blocks=1 obj#=85520 tim=1357278555130427
WAIT #1: nam='db file sequential read' ela= 3891 file#=6 block#=156888 blocks=1 obj#=85520 tim=1357278555134394
WAIT #1: nam='db file sequential read' ela= 155 file#=6 block#=156912 blocks=1 obj#=85520 tim=1357278555134645
WAIT #1: nam='db file sequential read' ela= 145 file#=6 block#=156920 blocks=1 obj#=85520 tim=1357278555134866
WAIT #1: nam='db file sequential read' ela= 234 file#=6 block#=156898 blocks=1 obj#=85520 tim=1357278555135332
WAIT #1: nam='db file sequential read' ela= 204 file#=6 block#=156907 blocks=1 obj#=85520 tim=1357278555135666
WAIT #1: nam='kfk: async disk IO' ela= 4 count=1 intr=0 timeout=4294967295 obj#=85520 tim=1357278555135850
WAIT #1: nam='direct path read' ela= 6894 file number=6 first dba=72073 block cnt=15 obj#=85520 tim=1357278555142774
WAIT #1: nam='db file sequential read' ela= 4642 file#=6 block#=156840 blocks=1 obj#=85520 tim=1357278555147574
WAIT #1: nam='db file sequential read' ela= 162 file#=6 block#=156853 blocks=1 obj#=85520 tim=1357278555147859
WAIT #1: nam='db file sequential read' ela= 6469 file#=6 block#=156806 blocks=1 obj#=85520 tim=1357278555154407
WAIT #1: nam='db file sequential read' ela= 182 file#=6 block#=156826 blocks=1 obj#=85520 tim=1357278555154660
WAIT #1: nam='db file sequential read' ela= 147 file#=6 block#=156830 blocks=1 obj#=85520 tim=1357278555154873
WAIT #1: nam='db file sequential read' ela= 145 file#=6 block#=156878 blocks=1 obj#=85520 tim=135727855515
Thanks & Rgds,

*** 2013-01-04 13:46:11.354
WAIT #1: nam='SQL*Net message from client' ela= 10384792 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278371354075
CLOSE #1:c=0,e=3,dep=0,type=3,tim=1357278371354152
=====================
PARSING IN CURSOR #1 len=113 dep=0 uid=0 oct=3 lid=0 tim=1357278371363427 hv=138414473 ad='c7bd8d00' sqlid='72tvt5h4402c9'
select "NUMBER" num from smprd.probsummarym1 where flag ='f' and affected_item = 'PAUSRWVP39486' order by num asc
END OF STMT
PARSE #1:c=0,e=9251,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278371363426
EXEC #1:c=0,e=63178,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278371426691
WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278371426766
WAIT #1: nam='Disk file operations I/O' ela= 1133 FileOperation=2 fileno=55 filetype=2 obj#=93574 tim=1357278371428069
WAIT #1: nam='db file sequential read' ela= 51 file#=55 block#=460234 blocks=1 obj#=93574 tim=1357278371428158
WAIT #1: nam='direct path read' ela= 31 file number=55 first dba=460235 block cnt=5 obj#=93574 tim=1357278371428956
WAIT #1: nam='direct path read' ela= 47 file number=55 first dba=136288 block cnt=8 obj#=93574 tim=1357278371429099
WAIT #1: nam='direct path read' ela= 80 file number=55 first dba=136297 block cnt=15 obj#=93574 tim=1357278371438529
WAIT #1: nam='direct path read' ela= 62 file number=55 first dba=136849 block cnt=15 obj#=93574 tim=1357278371438653
WAIT #1: nam='direct path read' ela= 17 file number=55 first dba=136881 block cnt=7 obj#=93574 tim=1357278371438750
WAIT #1: nam='direct path read' ela= 35 file number=55 first dba=136896 block cnt=8 obj#=93574 tim=1357278371438855
WAIT #1: nam='direct path read' ela= 22 file number=55 first dba=136913 block cnt=7 obj#=93574 tim=1357278371438936
WAIT #1: nam='direct path read' ela= 19 file number=55 first dba=137120 block cnt=8 obj#=93574 tim=1357278371439029
WAIT #1: nam='direct path read' ela= 36 file number=55 first dba=137145 block cnt=7 obj#=93574 tim=1357278371439114
WAIT #1: nam='direct path read' ela= 18 file number=55 first dba=137192 block cnt=8 obj#=93574 tim=1357278371439193
WAIT #1: nam='direct path read' ela= 16 file number=55 first dba=137201 block cnt=7 obj#=93574 tim=1357278371439252
WAIT #1: nam='direct path read' ela= 17 file number=55 first dba=137600 block cnt=8 obj#=93574 tim=1357278371439313
WAIT #1: nam='direct path read' ela= 15 file number=55 first dba=137625 block cnt=7 obj#=93574 tim=1357278371439369
WAIT #1: nam='direct path read' ela= 22 file number=55 first dba=137640 block cnt=8 obj#=93574 tim=1357278371439435
WAIT #1: nam='direct path read' ela= 702 file number=55 first dba=801026 block cnt=126 obj#=93574 tim=1357278371440188
WAIT #1: nam='direct path read' ela= 1511 file number=55 first dba=801154 block cnt=126 obj#=93574 tim=1357278371441763
WAIT #1: nam='direct path read' ela= 263 file number=55 first dba=801282 block cnt=126 obj#=93574 tim=1357278371442547
WAIT #1: nam='direct path read' ela= 259 file number=55 first dba=801410 block cnt=126 obj#=93574 tim=1357278371443315
WAIT #1: nam='direct path read' ela= 294 file number=55 first dba=801538 block cnt=126 obj#=93574 tim=1357278371444099
WAIT #1: nam='direct path read' ela= 247 file number=55 first dba=801666 block cnt=126 obj#=93574 tim=1357278371444843
WAIT #1: nam='direct path read' ela= 266 file number=55 first dba=801794 block cnt=126 obj#=93574 tim=1357278371445619
Vijay911786 wrote:
Direct path reads can be done for serial tablescans in your version of Oracle, but if you have chained rows in the table, then Oracle can read the start of the row via direct path but must do a single-block read into the cache (the db file sequential read) to get the next piece of the row.
It is possible that your production system has a lot of chained rows, while your test system does not. As a corroborating (though not conclusive) indicator, you might notice that if you take (disk reads - db file sequential reads) - which might get you close to the number of blocks read by direct path - the numbers are very similar.
I'm not 100% convinced that it's the answer for the difference in behaviour, but it's worth a look. If you can force an indexed access path into the table, do something like "select /*+ index({table} {pk}) */ max(last_column_in_table)" and check whether the number of "table fetch continued row" is close to your number of db file sequential reads. (There are other options for counting the chained rows that could be faster.)
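That check could look like this sketch (t, the index name t_pk and last_column_in_table are placeholders):

```sql
-- Force an index-driven walk of every row, which has to follow any
-- chained-row pointers.
select /*+ index(t t_pk) */ max(last_column_in_table) from t;

-- Then read the session statistic and compare it with the number of
-- db file sequential reads in the trace.
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name = 'table fetch continued row';
```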
Regards
Jonathan Lewis -
Redo generation for Direct path Inserts
Hello, I'm trying to understand some confusing test results I saw this morning on redo generation for direct path inserts. Based on my understanding from several Ask Tom discussions, a direct path insert of a data set, with logging forced, should generate much less redo than a conventional insert, because a direct path insert does not generate as much undo, which in turn should generate less redo to protect that undo.
https://asktom.Oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:5280714813869
Of course, the actual inserted rows are always logged, but I expected the redo to be smaller because less undo was generated. Instead the redo is actually bigger and I don't know why.
Here's a test case to prove my example.
set autotrace traceonly;
create table test_redo as select * from all_tables where 1=0;
insert into test_redo select * from all_tables;
rollback;
insert /*+ append */ into test_redo select * from all_tables;
rollback;
Stats without Append Hint

Statistics
----------------------------------------------------------
        387  recursive calls
       1275  db block gets
      19604  consistent gets
          9  physical reads
    2409204  redo size
        501  bytes sent via SQL*Net to client
        897  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          4  sorts (memory)
          0  sorts (disk)
       9031  rows processed

Stats with Append Hint

Statistics
----------------------------------------------------------
         59  recursive calls
        162  db block gets
      18675  consistent gets
          0  physical reads
    2596904  redo size
        490  bytes sent via SQL*Net to client
        911  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
       9031  rows processed
Any ideas on what I'm missing?
Thank you
The /*+ append */ copies whole Oracle blocks into the redo, with a little extra for record headers etc.
The conventional insert creates change vectors which save odd little bits of space and add little bits of linking information (as well as some redo to describe a very small amount of undo)...
The difference between the two has always been pretty small (assuming you are running in archivelog mode, or force logging) on a straight append. It is possible that odd little efficiency details in newer versions of Oracle mean the conventional insert has become slightly more efficient - it used to be the other way around in earlier versions.
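If you want to see the redo difference without autotrace, a sketch using the session statistic 'redo size' (test_redo comes from the test case above):

```sql
-- Sample 'redo size' before the insert ...
select ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name = 'redo size';

insert /*+ append */ into test_redo select * from all_tables;

-- ... then re-run the v$mystat query; the difference between the two
-- samples is the redo generated by this statement.
```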
Regards
Jonathan Lewis
-
Multithreading works only with the direct path load
Oracle DB version: Oracle Database 11 g Enterprise Edition Release 11.2.0.4.0 - 64 bit Production
I'm using a direct path load to load data from a flat file into a table with SQL*Loader. I also set it to run in parallel. However, I can't see multithreading being used at all, based on the log file report.
I use the following sqlldr settings, all set to true:
parallel=true , multithreading=true , skip_index_maintenance=true
Output in the sqlldr log:

Path used: Direct
Insert option in effect for this table: APPEND
Trigger DEV."R_TM_BK_BORROWER" was disabled before the load.
DEV."R_TM_BK_BORROWER" was re-enabled.
The following index(es) on table "YO"."TM_BK_BORROWER" were processed:
index DEV.I_NK_TM_BK_BORR_1 loaded successfully with 1554238 keys
index DEV.I_NK_TM_BK_BORR_2 loaded successfully with 1554238 keys
index DEV.I_NK_TM_BK_BORR_3 loaded successfully with 1554238 keys
index DEV.I_NK_TM_BK_BORR_31 loaded successfully with 1554238 keys
Bind array size not used in direct path.
Column array rows : 5000
Stream buffer bytes: 256000
Read buffer bytes: 1048576
Total logical records skipped: 1
Total logical records read: 1554241
Total logical records rejected: 48
Total logical records discarded: 2
Total stream buffers loaded by SQL*Loader main thread: 7695
Total stream buffers loaded by SQL*Loader load thread: 0
So, I can see from the sqlldr log that all stream buffers were loaded by the main thread, and the load thread is never used.
The SQL*Loader load thread is not offloading the SQL*Loader main thread. If the load thread handled the current stream buffer, it would allow the main thread to build the next stream buffer while the load thread loads the current stream to the server. We have a 24-CPU server.
I'm not able to find a clue on Google either. Any help is appreciated.
People, Tom Kyte has finally responded to my message. Here's the thread on asktom-
http://asktom.Oracle.com/pls/Apex/f?p=100:11:0:P11_QUESTION_ID:1612304461350 #7035459900346550399
-
Archiving log / nologging / direct path insert
Could you please confirm whether the following are true, or correct me if my interpretation is wrong:
1) Archivelog mode and logging are needed for media recovery; they are not needed for instance recovery.
2) If the insert is in non-APPEND (conventional) mode, redo is generated even if the table is in nologging mode and the database is in noarchivelog mode. This redo is necessary for instance recovery.
3) Direct path insert skips undo generation, and can skip redo generation if the object is in nologging mode.
Thank you.
In case it is relevant, I'm using Oracle 11.2.0.3.

1) Yes, archived logs are required for media recovery.
2 and 3) Even if the table is in nologging mode, a little redo is generated for data dictionary and index maintenance. After restarting from a failure, Oracle reads the online redo logs and replays whatever transactions it finds there. That's the "roll forward". The binary redo information is used to redo everything that did not make it to the data files. This replay includes rebuilding the UNDO information (UNDO is protected by redo). After the roll forward has been applied, the database is generally available for use - and rollback begins. For any transaction that was in flight when the instance failed, we have to revert its changes, i.e. roll back. We do this by applying the undo of all uncommitted transactions.
The database is now fully recovered.
Also read the following links:
http://docs.Oracle.com/CD/B19306_01/server.102/b14220/startup.htm
http://asktom.Oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:5280714813869 -
HOW TO: Change the default "Save As" path in Reader 8.1.2
Thank you!
Not possible, unfortunately.
-
I do not understand why, during a direct path load insert, an exclusive lock on the table is required.
Suppose I have a table T without any indexes or constraints; why can't I update the table in one session and bulk load above the HWM in another session?
Thanks in advance.

Claire wrote:
I do not understand why, during a direct path load insert, an exclusive lock on the table is required. Suppose I have a table T without any indexes or constraints; why can't I update the table in one session and bulk load above the HWM in another session?
What would happen if/when another session's DML required the HWM to move BEFORE the direct path load had completed?
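A related sketch that makes the exclusivity visible even within a single session (t and src are placeholders):

```sql
insert /*+ append */ into t select * from src;

-- Until the transaction commits, even the loading session cannot read
-- the table: this raises ORA-12838 ("cannot read/modify an object
-- after modifying it in parallel").
select count(*) from t;

commit;
```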
-
Direct-path insert, (NO)ARCHIVELOG and recovery
Hello
Our database is a data warehouse environment, in NOARCHIVELOG mode. When we do an insert /*+ APPEND */ into a table with the LOGGING attribute, minimal redo is generated.
We want to move the database to ARCHIVELOG mode and change the table's attribute to NOLOGGING.
My question is: will recovery still work?
I made a matrix of the effect of a crash in the middle of a direct path insert operation:
Database mode    Table mode    Instance recovery    Media recovery
------------------------------------------------------------------
NOARCHIVELOG     LOGGING       OK                   NOT OK
ARCHIVELOG       LOGGING       OK                   OK
ARCHIVELOG       NOLOGGING     OK                   NOT OK

Do you agree with this matrix?

Regards
Yes, instance recovery will work regardless of the use of direct-path access and nologging.
Here is an example with Oracle XE on Windows (database is running in ARCHIVELOG mode and FORCE_LOGGING is not defined):
c:\tmp>rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Jeu. Janv. 12 20:05:32 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: XE (DBID=2642463371)

RMAN> report unrecoverable;
using target database control file instead of recovery catalog
Report of files that need backup due to unrecoverable operations
File Type of Backup Required Name
---- ----------------------- -----------------------------------

RMAN> exit
Recovery Manager complete.

c:\tmp>sqlplus xxx/xxx @nolog.sql

SQL*Plus: Release 11.2.0.2.0 Production on Jeu. Janv. 12 20:05:48 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production

SQL> select log_mode, force_logging from v$database;

LOG_MODE     FOR
------------ ---
ARCHIVELOG   NO

SQL> drop table tnl purge;
Table dropped.

SQL> create table tnl(x int) nologging tablespace users;
Table created.

SQL> insert /*+ APPEND */ into tnl select object_id from all_objects;
17971 rows created.

SQL> commit;
Commit complete.

SQL> connect / as sysdba
Connected.
SQL> startup force
ORACLE instance started.

Total System Global Area 1071333376 bytes
Fixed Size                  1388352 bytes
Variable Size             658505920 bytes
Database Buffers          406847488 bytes
Redo Buffers                4591616 bytes
Database mounted.
Database opened.
SQL> exit
Disconnected from Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production

c:\tmp>rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Jeu. Janv. 12 20:07:34 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: XE (DBID=2642463371)

RMAN> report unrecoverable;
using target database control file instead of recovery catalog
Report of files that need backup due to unrecoverable operations
File Type of Backup Required Name
---- ----------------------- -----------------------------------
4    full or incremental     C:\ORACLEXE\APP\ORACLE\ORADATA\XE\USERS.DBF

RMAN> exit
Recovery Manager complete.

c:\tmp>
Edited by: P. Forstmann on 12 Jan. 2012 20:08
-
Insert /*+ APPEND */ and direct-path INSERT
Hi guys
Does the insert /*+ APPEND */ hint cause Oracle 10g to use a direct-path INSERT?
And if the insert /*+ APPEND */ hint does cause Oracle to use a direct-path INSERT, is insert /*+ APPEND */ subject to the same restrictions as direct-path, such as "the target table cannot have any triggers or referential integrity constraints defined on it"?
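A sketch of how one could check this empirically (t and src are placeholders):

```sql
insert /*+ append */ into t select * from src;
commit;

-- Inspect the plan of the insert just executed: a direct-path insert
-- shows a LOAD AS SELECT operation; if that operation is absent, the
-- APPEND hint was silently ignored and a conventional insert was done.
select * from table(dbms_xplan.display_cursor(null, null));
```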
Thank you

How difficult would it be for you to look up the answer in the documentation, rather than abuse this forum by asking doc questions and flaming fellow posters?
------------
Sybrand Bakker
Senior Oracle DBA -
Direct path SQLLDR allows duplicates in the primary key
I want to use sqlldr direct path to load millions of records into the table, but direct path allows duplicates on the primary key constraint.
This inserts duplicates:
sqlldr deploy_ctl/deploy_ctl@dba01mdm control=ctl_test.ctl direct=true
primary key is enabled
I do not understand this behaviour: why is the primary key still shown as enabled? (Logically it should have been disabled, since duplicates were inserted.)
This inserts no duplicates:
sqlldr deploy_ctl/deploy_ctl@dba01mdm control=ctl_test.ctl
primary key is enabled
Please can you tell me if there is any workaround to use direct path with the primary key constraint in place?

The only solution is not to use direct load if your dataset contains duplicate records. From the documentation:
/*
A record that violates a UNIQUE constraint is not rejected (the row is not available in memory when the constraint violation is detected).
*/
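If duplicates do get loaded, the usual symptom is that the unique index is left in an UNUSABLE state. A cleanup sketch (t, pk_col and the index name t_pk are placeholders):

```sql
-- Find the duplicated key values the direct path load let through.
select pk_col, count(*)
from   t
group  by pk_col
having count(*) > 1;

-- Keep one row per key value, delete the rest.
delete from t
where  rowid not in (select min(rowid)
                     from   t
                     group  by pk_col);

-- With the duplicates gone, the unique index can be rebuilt.
alter index t_pk rebuild;
```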