The DIRECT path load... Please advise!
I want to load a table called STAGING. Approximately 72 million rows need to be loaded, and the table has a UNIQUE index on
(reg_no, rel_no, hip_member_id, ind). The control file that I use in my script is shown below.
Control file:
options (silent=feedback, errors=999999)
load data
append
into table STAGING
WHEN status!='30' AND status!='31' AND status!='32' AND status!='33' AND status!='34' AND status!='40' AND status!='41' AND status!='42' AND status!='43'
(
reg_no position (01:13) char,
rel_no position (290:298) char "decode(ascii(:rel_no), 0,NULL,OPS_ARW.ALTID_CERT_CONVERSION(:rel_no,'A'))",
HIP_MEMBER_ID position (14:24) char,
PLAN_TYPE position (25:42) char,
START_DATE position (43:50) char,
status position (51:54) char,
status_DATE position (55:62) char,
TAX_ID position (63:71) char,
CHECK_NO position (72:83) char "decode(:check_no,'99999999','00000000',nvl(:check_no,'00000000'))",
CHECK_DATE position (84:91) char "nvl(:check_date,'00000000')",
AMT_PAID position (92:104) DECIMAL EXTERNAL,
AMT_BILLED position (105:117) DECIMAL EXTERNAL,
DATE_SETTLED position (118:125) char "nvl(:date_settled,'00000000')",
PAYEE position (126:126) char,
PREVIOUS_reg_no position (127:139) char,
PAY_TO_ID position (149:163) char,
ACCOUNT_NO position (164:183) char,
PAYEE_ADDRESS position (184:283) char,
DIAGNOSIS_CODE position (284:289) char,
pst_id position (290:303) char,
ALT_ID position (290:298) char,
MEM_LASTNAME position (304:318) char,
MEM_FIRSTNAME position (319:333) char,
MEM_DOB position (334:341) char,
ind constant 'E'
)
My question is: can I use the DIRECT path load for this job? Does direct path load work with a table that has a unique index? And if I use DIRECT, how do I fit it into the script?
Thank you
Hena
Can I use direct path load if the table has a unique index on it?
Yes, you can:
SQL> create table t (a int)
/
Table created.
SQL> create unique index t_idx on t(a) compute statistics
/
Index created.
SQL> explain plan for insert /*+ append */ into t select empno from emp
/
Explain complete.
SQL> select * from table(dbms_xplan.display)
/
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2133360128
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 14 | 56 | 1 (0)| 00:00:01 |
| 1 | LOAD AS SELECT | T | | | | |
| 2 | INDEX FULL SCAN| PK_EMP | 14 | 56 | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------
9 rows selected.
Note the LOAD AS SELECT step, which indicates a direct-path insert.
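To answer the "how do I fit it into the script" part: direct path is requested on the sqlldr command line, or in the control file's OPTIONS clause, not in the SQL itself. A sketch, where the credentials and file names are placeholders:

```text
sqlldr userid=scott/tiger control=staging.ctl log=staging.log direct=true
```

Equivalently, it can be added to the existing OPTIONS clause of the control file, e.g. options (direct=true, silent=feedback, errors=999999). Note that a direct path load does maintain a unique index, but duplicate-key rows are not rejected during the load itself, so the index can be left in an unusable state that has to be dealt with afterwards.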
Similar Questions
-
Multithreading not working with the direct path load
Oracle DB version: Oracle Database 11 g Enterprise Edition Release 11.2.0.4.0 - 64 bit Production
I'm using direct path load to load data from a flat file into a table with SQL*Loader. I have also set it to run in parallel. However, I can see that multithreading is not being used at all, based on the log file.
I pass the following options to sqlldr:
parallel=true, multithreading=true, skip_index_maintenance=true
Output in the sqlldr log:
Path used: Direct
Insert option in effect for this table: APPEND
Trigger DEV."R_TM_BK_BORROWER" was disabled before the load.
DEV."R_TM_BK_BORROWER" was re-enabled.
The following index(es) on table "YO"."TM_BK_BORROWER" were processed:
index DEV.I_NK_TM_BK_BORR_1 loaded successfully with 1554238 keys
index DEV.I_NK_TM_BK_BORR_2 loaded successfully with 1554238 keys
index DEV.I_NK_TM_BK_BORR_3 loaded successfully with 1554238 keys
index DEV.I_NK_TM_BK_BORR_31 loaded successfully with 1554238 keys
Bind array size not used in direct path.
Column array rows : 5000
Stream buffer bytes: 256000
Read buffer bytes: 1048576
Total logical records skipped: 1
Total logical records read: 1554241
Total logical records rejected: 48
Total logical records discarded: 2
Total stream buffers loaded by SQL*Loader main thread: 7695
Total stream buffers loaded by SQL*Loader load thread: 0
So I can see from the sqlldr log that all stream buffers were loaded by the main thread, and the load thread is never used.
The SQL*Loader load thread is supposed to offload the SQL*Loader main thread: if the load thread handles the current stream buffer, the main thread can build the next stream buffer while the load thread loads the current stream to the server. We have a 24-CPU server.
I was not able to find a clue on Google either. Any help is appreciated.
Folks, Tom Kyte has finally responded to my post. Here's the thread on AskTom:
http://asktom.Oracle.com/pls/Apex/f?p=100:11:0:P11_QUESTION_ID:1612304461350 #7035459900346550399
-
I do not understand why an exclusive lock on the table is required during a direct path load insert.
Suppose I have a table T without any indexes or constraints; why can't I update the table in one session and bulk load above the HWM in another session?
Thanks in advance
Claire wrote:
I do not understand why an exclusive lock on the table is required during a direct path load insert. Suppose I have a table T without any indexes or constraints; why can't I update the table in one session and bulk load above the HWM in another session?
What would happen if/when another session's DML needed the HWM to be moved BEFORE the direct path load completed?
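The behaviour is easy to demonstrate; a sketch of the locking involved, where the table names t and t_source are placeholders and the two halves run in two concurrent sessions:

```sql
-- Session 1: a direct path insert takes an exclusive TM lock on T and
-- writes the new rows into blocks above the high-water mark.
INSERT /*+ APPEND */ INTO t SELECT * FROM t_source;
-- (no COMMIT yet)

-- Session 2: any DML against T now queues on that TM lock
-- until session 1 commits or rolls back.
UPDATE t SET val = val + 1 WHERE id = 1;
```

The newly loaded blocks are only published by moving the HWM at commit time, which is part of why concurrent DML is excluded for the duration of the load.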
-
db file sequential read and direct path read
Hello
Could someone please clear up my doubts about 'db file sequential read' and 'direct path read', and help me understand the tkprof report correctly?
Please tell me if my understanding of the scenario below is correct.
We have an 11.2.0.1 two-node RAC + ASM production environment, and its test environment, which is a stand-alone database.
The query performs well in production compared to the test database.
The table has 254+ columns (264) with many LOB columns; however, no LOB column is selected in the query.
I read on Metalink that such a 254+ column table has intra-row chaining, causing 'db file sequential read' waits during a full table scan.
Here are some details on the table, which is similar in prod and test; the block size is 8K:

TABLE                          UNUSED BLOCKS   TOTAL BLOCKS    HIGH WATER MARK
------------------------------ --------------- --------------- ---------------
PROBSUMMARYM1                                0           17408            17407

What I understand from the tkprof in the production environment for a given session is:
1 - the query resulted in 19378 disk reads and 145164 consistent reads.
2 - of the 19378 disk reads, 2425 gave rise to the wait event 'db file sequential read'.
Is it correct that the remaining disk reads were also 'db file sequential reads', but so fast that no wait event was tied to them?
3 - there were also 183 'direct path read' waits. Is that because of the ORDER BY clause of the query?
The same query, when run in the non-RAC, non-ASM stand-alone test database, gave the tkprof below.

SQL ID: 72tvt5h4402c9 Plan Hash: 1127048874

select "NUMBER" num
from smprd.probsummarym1
where flag ='f' and affected_item = 'PAUSRWVP39486'
order by num asc

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.53       4.88      19378     145164          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.53       4.88      19378     145164          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  SORT ORDER BY (cr=145164 pr=19378 pw=0 time=0 us cost=4411 size=24 card=2)
      0   TABLE ACCESS FULL PROBSUMMARYM1 (cr=145164 pr=19378 pw=0 time=0 us cost=4410 size=24 card=2)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  ges message buffer allocation                   3        0.00          0.00
  enq: KO - fast object checkpoint                2        0.00          0.00
  reliable message                                1        0.00          0.00
  KJC: Wait for msg sends to complete             1        0.00          0.00
  Disk file operations I/O                        1        0.00          0.00
  kfk: async disk IO                            274        0.00          0.00
  direct path read                              183        0.01          0.72
  db file sequential read                      2425        0.05          3.71
  SQL*Net message from client                     1        2.45          2.45
Does this mean that:
1 - here too, reads happened through 'db file sequential read', but they were so fast that no wait event was recorded?
2 - the 'direct path read' waits are because of the ORDER BY clause in the query?
For the trace files in the Production and Test databases, I see that 'direct path read' is against the same data file in which the table is stored.

SQL ID: 72tvt5h4402c9 Plan Hash: 1127048874

select "NUMBER" num
from smprd.probsummarym1
where flag ='f' and affected_item = 'PAUSRWVP39486'
order by num asc

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.06          0          0          0           0
Fetch        1      0.10       0.11      17154      17298          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.10       0.18      17154      17298          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  SORT ORDER BY (cr=17298 pr=17154 pw=0 time=0 us cost=4694 size=12 card=1)
      0   TABLE ACCESS FULL PROBSUMMARYM1 (cr=17298 pr=17154 pw=0 time=0 us cost=4693 size=12 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  Disk file operations I/O                        1        0.00          0.00
  db file sequential read                         3        0.00          0.00
  direct path read                              149        0.00          0.03
  SQL*Net message from client                     1        2.29          2.29
Then how can the 'direct path read' be due to the ORDER BY clause of the query? Wouldn't that sorting have happened in the sort area of the PGA?
Or does 'direct path read' actually read the data from disk into the PGA, while 'db file sequential read' does not read data for the query?
As I understand it, 'direct path read' is the wait event seen when data is read from disk directly into the PGA, or when a sort segment or temp tablespace is used.
Here is the example trace file from the Production database:

*** 2013-01-04 13:49:15.109
WAIT #1: nam='SQL*Net message from client' ela= 11258483 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278555109496
CLOSE #1:c=0,e=9,dep=0,type=1,tim=1357278555109622
=====================
PARSING IN CURSOR #1 len=113 dep=0 uid=0 oct=3 lid=0 tim=1357278555109766 hv=138414473 ad='cfc02ab8' sqlid='72tvt5h4402c9'
select "NUMBER" num from smprd.probsummarym1 where flag ='f' and affected_item = 'PAUSRWVP39486' order by num asc
END OF STMT
PARSE #1:c=0,e=98,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278555109765
EXEC #1:c=0,e=135,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278555109994
WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278555110053
WAIT #1: nam='ges message buffer allocation' ela= 3 pool=0 request=1 allocated=0 obj#=-1 tim=1357278555111630
WAIT #1: nam='enq: KO - fast object checkpoint' ela= 370 name|mode=1263468550 2=65610 0=1 obj#=-1 tim=1357278555112098
WAIT #1: nam='reliable message' ela= 1509 channel context=3691837552 channel handle=3724365720 broadcast message=3692890960 obj#=-1 tim=1357278555113975
WAIT #1: nam='ges message buffer allocation' ela= 2 pool=0 request=1 allocated=0 obj#=-1 tim=1357278555114051
WAIT #1: nam='enq: KO - fast object checkpoint' ela= 364 name|mode=1263468550 2=65610 0=1 obj#=-1 tim=1357278555114464
WAIT #1: nam='KJC: Wait for msg sends to complete' ela= 9 msg=3686348728 dest|rcvr=65536 mtype=8 obj#=-1 tim=1357278555114516
WAIT #1: nam='ges message buffer allocation' ela= 2 pool=0 request=1 allocated=0 obj#=-1 tim=1357278555114680
WAIT #1: nam='Disk file operations I/O' ela= 562 FileOperation=2 fileno=6 filetype=2 obj#=85520 tim=1357278555115710
WAIT #1: nam='kfk: async disk IO' ela= 5 count=1 intr=0 timeout=4294967295 obj#=85520 tim=1357278555117332
*** 2013-01-04 13:49:15.123
WAIT #1: nam='direct path read' ela= 6243 file number=6 first dba=11051 block cnt=5 obj#=85520 tim=1357278555123628
WAIT #1: nam='db file sequential read' ela= 195 file#=6 block#=156863 blocks=1 obj#=85520 tim=1357278555123968
WAIT #1: nam='db file sequential read' ela= 149 file#=6 block#=156804 blocks=1 obj#=85520 tim=1357278555124216
WAIT #1: nam='db file sequential read' ela= 155 file#=6 block#=156816 blocks=1 obj#=85520 tim=1357278555124430
WAIT #1: nam='db file sequential read' ela= 4826 file#=6 block#=156816 blocks=1 obj#=85520 tim=1357278555129317
WAIT #1: nam='db file sequential read' ela= 987 file#=6 block#=156888 blocks=1 obj#=85520 tim=1357278555130427
WAIT #1: nam='db file sequential read' ela= 3891 file#=6 block#=156888 blocks=1 obj#=85520 tim=1357278555134394
WAIT #1: nam='db file sequential read' ela= 155 file#=6 block#=156912 blocks=1 obj#=85520 tim=1357278555134645
WAIT #1: nam='db file sequential read' ela= 145 file#=6 block#=156920 blocks=1 obj#=85520 tim=1357278555134866
WAIT #1: nam='db file sequential read' ela= 234 file#=6 block#=156898 blocks=1 obj#=85520 tim=1357278555135332
WAIT #1: nam='db file sequential read' ela= 204 file#=6 block#=156907 blocks=1 obj#=85520 tim=1357278555135666
WAIT #1: nam='kfk: async disk IO' ela= 4 count=1 intr=0 timeout=4294967295 obj#=85520 tim=1357278555135850
WAIT #1: nam='direct path read' ela= 6894 file number=6 first dba=72073 block cnt=15 obj#=85520 tim=1357278555142774
WAIT #1: nam='db file sequential read' ela= 4642 file#=6 block#=156840 blocks=1 obj#=85520 tim=1357278555147574
WAIT #1: nam='db file sequential read' ela= 162 file#=6 block#=156853 blocks=1 obj#=85520 tim=1357278555147859
WAIT #1: nam='db file sequential read' ela= 6469 file#=6 block#=156806 blocks=1 obj#=85520 tim=1357278555154407
WAIT #1: nam='db file sequential read' ela= 182 file#=6 block#=156826 blocks=1 obj#=85520 tim=1357278555154660
WAIT #1: nam='db file sequential read' ela= 147 file#=6 block#=156830 blocks=1 obj#=85520 tim=1357278555154873
WAIT #1: nam='db file sequential read' ela= 145 file#=6 block#=156878 blocks=1 obj#=85520 tim=135727855515

Here is the trace file for the test database:

*** 2013-01-04 13:46:11.354
WAIT #1: nam='SQL*Net message from client' ela= 10384792 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278371354075
CLOSE #1:c=0,e=3,dep=0,type=3,tim=1357278371354152
=====================
PARSING IN CURSOR #1 len=113 dep=0 uid=0 oct=3 lid=0 tim=1357278371363427 hv=138414473 ad='c7bd8d00' sqlid='72tvt5h4402c9'
select "NUMBER" num from smprd.probsummarym1 where flag ='f' and affected_item = 'PAUSRWVP39486' order by num asc
END OF STMT
PARSE #1:c=0,e=9251,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278371363426
EXEC #1:c=0,e=63178,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1127048874,tim=1357278371426691
WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1357278371426766
WAIT #1: nam='Disk file operations I/O' ela= 1133 FileOperation=2 fileno=55 filetype=2 obj#=93574 tim=1357278371428069
WAIT #1: nam='db file sequential read' ela= 51 file#=55 block#=460234 blocks=1 obj#=93574 tim=1357278371428158
WAIT #1: nam='direct path read' ela= 31 file number=55 first dba=460235 block cnt=5 obj#=93574 tim=1357278371428956
WAIT #1: nam='direct path read' ela= 47 file number=55 first dba=136288 block cnt=8 obj#=93574 tim=1357278371429099
WAIT #1: nam='direct path read' ela= 80 file number=55 first dba=136297 block cnt=15 obj#=93574 tim=1357278371438529
WAIT #1: nam='direct path read' ela= 62 file number=55 first dba=136849 block cnt=15 obj#=93574 tim=1357278371438653
WAIT #1: nam='direct path read' ela= 17 file number=55 first dba=136881 block cnt=7 obj#=93574 tim=1357278371438750
WAIT #1: nam='direct path read' ela= 35 file number=55 first dba=136896 block cnt=8 obj#=93574 tim=1357278371438855
WAIT #1: nam='direct path read' ela= 22 file number=55 first dba=136913 block cnt=7 obj#=93574 tim=1357278371438936
WAIT #1: nam='direct path read' ela= 19 file number=55 first dba=137120 block cnt=8 obj#=93574 tim=1357278371439029
WAIT #1: nam='direct path read' ela= 36 file number=55 first dba=137145 block cnt=7 obj#=93574 tim=1357278371439114
WAIT #1: nam='direct path read' ela= 18 file number=55 first dba=137192 block cnt=8 obj#=93574 tim=1357278371439193
WAIT #1: nam='direct path read' ela= 16 file number=55 first dba=137201 block cnt=7 obj#=93574 tim=1357278371439252
WAIT #1: nam='direct path read' ela= 17 file number=55 first dba=137600 block cnt=8 obj#=93574 tim=1357278371439313
WAIT #1: nam='direct path read' ela= 15 file number=55 first dba=137625 block cnt=7 obj#=93574 tim=1357278371439369
WAIT #1: nam='direct path read' ela= 22 file number=55 first dba=137640 block cnt=8 obj#=93574 tim=1357278371439435
WAIT #1: nam='direct path read' ela= 702 file number=55 first dba=801026 block cnt=126 obj#=93574 tim=1357278371440188
WAIT #1: nam='direct path read' ela= 1511 file number=55 first dba=801154 block cnt=126 obj#=93574 tim=1357278371441763
WAIT #1: nam='direct path read' ela= 263 file number=55 first dba=801282 block cnt=126 obj#=93574 tim=1357278371442547
WAIT #1: nam='direct path read' ela= 259 file number=55 first dba=801410 block cnt=126 obj#=93574 tim=1357278371443315
WAIT #1: nam='direct path read' ela= 294 file number=55 first dba=801538 block cnt=126 obj#=93574 tim=1357278371444099
WAIT #1: nam='direct path read' ela= 247 file number=55 first dba=801666 block cnt=126 obj#=93574 tim=1357278371444843
WAIT #1: nam='direct path read' ela= 266 file number=55 first dba=801794 block cnt=126 obj#=93574 tim=1357278371445619

Thanks & Rgds,
Vijay911786 wrote:
Direct path reads can be used for serial tablescans in your version of Oracle, but if you have chained rows in the table then Oracle can read the start of the row via direct path, yet must do a single-block read into the cache (the db file sequential read) to get the next piece of the row.
It is possible that your production system has a lot of chained rows while your test system does not. As a corroborating (though not conclusive) indicator, you might notice that if you take (disk reads - db file sequential reads) - which might get you close to the number of blocks read by direct path - the numbers are very similar.
I'm not 100% convinced that this is the answer for the difference in behavior, but it's worth a look. If you can force an indexed access path into the table, do something like "select /*+ index({table} {pk}) */ max(last_column_in_table)" and check whether the number of 'table fetch continued row' is close to the number of db file sequential reads you see. (There are other options for counting the chained rows that could be faster.)
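For reference, one of the "other options for counting the chained rows" is the classic ANALYZE approach; a sketch, assuming a target table T and the standard CHAINED_ROWS table created by $ORACLE_HOME/rdbms/admin/utlchain.sql:

```sql
-- Populate the standard CHAINED_ROWS table, then count:
ANALYZE TABLE t LIST CHAINED ROWS INTO chained_rows;

SELECT COUNT(*) FROM chained_rows WHERE table_name = 'T';
```

Comparing that count (or the 'table fetch continued row' session statistic during the indexed probe described above) with the number of db file sequential reads should show whether row chaining explains the difference.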
Regards,
Jonathan Lewis
-
Disabling storage paths on ESX hosts and VM guests
Hi all
Hope that this question has not been asked before. (I did a search but could not find anything specific).
I'm looking for reassurance about a proposed change, we seek to bring to our system.
We need to upgrade one of our Brocade switches, which will cause some interruption in service on that switch. (We have two Brocades, and each ESX host has 2 x single-port HBAs with a connection to each Brocade.)
From the documentation I have read, it seems I need to force datastore traffic over the other switch to allow me to remove the Brocade for its upgrade.
- Proposed method: for each datastore attached to a host, I intend to disable the relevant path and force a failover to the remaining path that leads to the secondary Brocade.
I have a few questions about this approach.
(1) I assume that manually disabling a path should result in no downtime or loss of connectivity while failover occurs in an operational environment? (We will do it out of hours, but it's a 24/7 environment.)
(2) Although the majority of our servers use VMFS, some use raw LUNs. Would these servers encounter difficulties if I force a failover by disabling a path?
Thanks in advance!
Hello
Welcome to the community
Hope that this question has not been asked before. (I did a search but could not find anything specific).
It has been asked before several times; in any case:
(1) I assume that manually disabling a path should result in no downtime or loss of connectivity while failover occurs in an operational environment? (We will do it out of hours, but it's a 24/7 environment.)
(2) Although the majority of our servers use VMFS, some use raw LUNs. Would these servers encounter difficulties if I force a failover by disabling a path?
- Correct: even if you pulled an HBA cable, ESX would shift all traffic to the next available path. Just make sure you have the VMware Tools installed in the virtual machines.
- No problem with raw LUNs; they would behave like VMFS.
-
iOS 10: devices now seem to require charging directly from the outlet
Hi, is anyone else (still) having charging problems with their devices since installing iOS 10? I now have to connect the iPad and iPhone directly to an outlet (rather than an extension cord), and even then not all cables will charge. Dear Apple, we need longer cables if this is an ongoing problem.
It seems to be something new with iOS 10. They seem to have tweaked the minimum accepted input power levels.
If your power outlet is awkwardly placed, a budget workaround is to buy a simple household extension lead and use that to gain more length.
-
Cannot locate the Patch tool. Please advise.
Hello
I just installed Photoshop CS6 Extended and I can't find the Patch tool in it. I can find the Lasso tool, but not the Patch tool in the tools list.
Kindly help.
Thank you
Hello and welcome to the Adobe Forums.
The Patch tool is grouped with the healing tools. Click and hold on the Healing Brush tool icon and you will see a menu of the grouped tools. The Patch tool is there.
You can also use Adobe Help to look up the basic tool features.
-
With regard to the 'direct path read' wait
As we know, the 'direct path read' wait for serial full table scans is a new feature of Oracle 11g. According to the Oracle documentation and other sources, when a full table scan is done, direct path reads can take place. But why does it happen? I'm not clear on this.
Here is the description of the Oracle Document:
http://docs.Oracle.com/CD/E18283_01/server.112/e17110/waitevents003.htm#sthref3849
During Direct Path Read operations, the data is read asynchronously from the database files. At some stage the session needs to make sure that all outstanding asynchronous I/Os have completed to disk. This can also happen if, during a direct read, no more slots are available to store outstanding load requests (a load request could consist of multiple I/Os).
Questions:
1 - "During direct path read operations, the data is read asynchronously from the database files." What does this statement mean? What is an 'asynchronous read'?
2 - please describe the above passage for me in detail.
3 - can someone clearly explain to me why the 'direct path read' wait happens?
Thanks in advance.
Lonion
Lonion wrote:
Question:
1 - "During direct path read operations, the data is read asynchronously from the database files." What does this statement mean? What is an 'asynchronous read'?
2 - please describe the above passage for me in detail.
3 - can someone clearly explain to me why the 'direct path read' wait happens?
If you want to get very technical, Frits Hoogland has written a lot about the implementation:
http://fritshoogland.WordPress.com/2013/05/09/direct-path-read-and-fast-full-index-scans/
http://fritshoogland.WordPress.com/2013/01/04/Oracle-11-2-and-the-direct-path-read-event/
http://fritshoogland.WordPress.com/2012/12/27/Oracle-11-2-0-1-and-the-KFK-async-disk-IO-wait-event/
http://www.UKOUG.org/what-we-offer/library/about-Multiblock-reads/about-Multiblock-reads.PDF
Regards,
Jonathan Lewis
-
How to get directions to an address using Google Maps
Hi all
Please help me solve this problem.
I want to show the directions (route) from the office to the address on the account in a Google Map, below each account's contact details.
I created a Web Applet on the account; it displays the account's address correctly:
http://maps.Google.com/?q=%%%Bill_To_ADDR_Address1%%%, + % Bill_To_CITY_City %, + % Bill_To_COUNTRY_Country %.
But I want to see the directions from the office...
Thanks in advance.
[email protected]
Vivian
Hello
Here is what you need to use, as an example:
http://maps.Google.com/maps?f=d&source=s_d&saddr=Collins+Street,+Melbourne+Victoria,+Australia&daddr=Elizabeth+Street,+Melbourne,+Victoria,+Australia
This is the start address:
Collins+Street,+Melbourne,+Victoria,+Australia
This is the end location:
Elizabeth+Street,+Melbourne,+Victoria,+Australia
All you'll need to do is replace these values with %Bill_To_Addr_Address1% and so on.
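Putting that together for the applet, the templated directions URL would presumably look something like this (the office address literal and the exact field placeholders are assumptions to adapt):

```text
http://maps.google.com/maps?f=d&source=s_d&saddr=Your+Office+Street,+Your+City,+Your+Country&daddr=%%%Bill_To_ADDR_Address1%%%,+%%%Bill_To_CITY_City%%%,+%%%Bill_To_COUNTRY_Country%%%
```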
-
SQL * LOADER skip the header and the footer while loading
Hi, how can I skip both the header and the footer of a flat file while loading a table using SQL*Loader?
Also, can I use the direct path method if my target table has a primary key and NOT NULL constraints defined on it?
Hello
To skip the header you can use SKIP; for the footer there is no specific way to skip it (it would presumably just be rejected).
For more information on the loading modes, you can check [Chapter 11, Conventional and Direct Path Loads | http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_modes.htm#g1023818].
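For illustration, a sketch of skipping a one-line header and filtering out a footer record by its content (the file name, table name, columns and the 'EOF' footer marker are placeholders):

```text
options (skip=1)                -- skip the first (header) record
load data
infile 'myfile.dat'
append
into table my_table
when (1:3) != 'EOF'             -- discard a footer record starting with 'EOF'
(
  col1 position (01:10) char,
  col2 position (11:20) char
)
```

The footer is not truly skipped here: it is discarded by the WHEN clause and shows up in the discard totals, which matches the point above that there is no footer equivalent of SKIP.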
Kind regards
-
Why does a parallel query use direct path read?
I think it is because access to the buffer cache requires latches and pins on buffer blocks; if parallel queries did not use direct path reads, they would be hit by that serialization on latches and locks, so Oracle's mechanism chooses direct path read to avoid it.
Does anyone have a better explanation?
Published by: Jeremiah on December 8, 2008 07:52
Jinyu wrote:
I think it is because access to the buffer cache requires latches and pins on buffer blocks; if parallel queries did not use direct path reads, they would be hit by that serialization on latches and locks, so Oracle's mechanism chooses direct path read to avoid it.
Jinyu,
Actually, yes, I think that's essentially it. Parallel query is designed for very large scans, because the overhead of inter-process communication and of setting up and tearing down the parallel slaves makes it inefficient for small segments.
So I guess the assumption is that the segment to scan is probably very large, and the fraction of its blocks in the buffer cache is very low compared to the number of blocks to scan; the overhead saved by reading blocks directly, without going through all the serialization issues of the buffer cache, should therefore outweigh the cost of the "unbuffered" reads, and it also preserves the buffer cache for objects that benefit more from caching.
Kind regards
Randolf

Oracle related blog stuff:
http://Oracle-Randolf.blogspot.com/
SQLTools++ for Oracle (open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.NET/projects/SQLT-pp/
-
Is it reasonable to force a Full Table Scan (direct path read) on a large table?
Hello
I have Oracle 11.2.0.3 on a Sun Solaris 10 SPARC machine with a 25 GB SGA.
One of my SQL statements does an index scan on a 45 GB table, where the index size is 14 GB and the throughput for sequential reads is 2 MB/s.
So that is 14,000 MB for the index: 14 GB / 2 MB/s = 7000 sec, roughly 2 hours to scan the index.
The throughput of direct path read is 500 MB/s for another SQL statement whose reads are all direct path.
At that throughput, a direct path read (FTS) of a 7 GB table finishes in 12-13 seconds, so in my case it would probably take about 100 seconds to scan the entire table. It is a monthly report and does not run often.
Should I try to use a hint to force the SQL to do a full scan and exploit direct path reads, so that it finishes in a few minutes? (Assuming it then does an FTS with direct path reads.)
Of course it will be tested and verified, but the question is: is this a sensible approach? Does anyone have experience with forcing it?
Any suggestions would be helpful... I'll test this on a different machine tomorrow.
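For what it's worth, the kind of hint involved would presumably look like this (table name, alias and predicate are placeholders):

```sql
SELECT /*+ FULL(t) */ t.col1, t.col2
FROM   my_big_table t
WHERE  t.report_month = :b1;
```

Whether the resulting full scan is actually done with direct path reads is still decided at run time in 11g (serial direct reads depend on the segment size relative to the buffer cache), so it is worth confirming with a trace.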
Best regards
Maran
82 million rows and a clustering factor of 18 million? And really 17 million rows to retrieve? Yet your result set after the hash join is 3500 rows, although the optimizer expects 16 rows.
I would say that the statistics are either not representative or not being used for this SQL. Certainly the index does not match the query predicates.
The fact that the index also uses virtual columns only adds to the confusion.
Hemant K Collette
-
Hi all
We use 10.2.0.3 on sparc solaris 10 64 bit OS.
I was wondering about two events in my environment that take a long time. Yes, we do a lot of grouping, sorting and hash joins that spill data to temp.
For the most part, all the time goes to 'direct path read temp'. I took a trace and saw it is reading just 1 block at a time (p3), vs other reads like scattered reads or direct path writes, which read/write 32 blocks.
So, is there a reason why 'direct path read temp' reads only a single block at a time? Can we set something so that it can read multiple blocks in one go?

direct path read temp    241389   4.19   2511.08
direct path write             2   0.00      0.00
Nico wrote:
Thank you Greg for the information. Have you found any workaround for 10.2.0.3?
It is not a workaround. The fix changed how temp blocks are read for positional sorts. It should all but eliminate the single-block temp reads; things are much more effective with this fix. I saw about a 2x increase in performance in many actual test cases using it.
You can try and see if Oracle Support will backport it to 10.2.0.3, but I recommend you go to 10.2.0.4 anyway, due to the considerable number of bug fixes. It won't be long before 10.2.0.5 comes out as well.
--
Kind regards
Greg Rahn
http://structureddata.org
-
Direct path SQLLDR allows duplicates in the primary key
I would like to use sqlldr direct path to load millions of records into the table, but direct path allows duplicates past the primary key constraint.
This inserts duplicates:
sqlldr deploy_ctl/deploy_ctl@dba01mdm control=ctl_test.ctl direct=true
primary key is enabled
I do not understand the behavior: why is the primary key still shown as enabled? (Logically it should have been disabled, given that duplicates were inserted.)
This inserts no duplicates:
sqlldr deploy_ctl/deploy_ctl@dba01mdm control=ctl_test.ctl
primary key is enabled
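As a side note, a sketch (with hypothetical names: t is the loaded table, t_pk its primary-key index, id its key column) of how one might verify and clean up after such a load:

```sql
-- After a direct path load that inserted duplicate keys, the unique index
-- is typically left UNUSABLE even though the constraint still shows ENABLED:
SELECT index_name, status FROM user_indexes WHERE index_name = 'T_PK';

-- Find the duplicated key values:
SELECT id, COUNT(*) FROM t GROUP BY id HAVING COUNT(*) > 1;

-- After removing the duplicates, rebuild the index:
ALTER INDEX t_pk REBUILD;
```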
Please can you tell me if there is any workaround to use direct path with primary key constraints in place?
The only solution is to not use direct load if your dataset contains duplicate records. From the documentation:
/*
A record that violates a UNIQUE constraint is not rejected (the rows are not available in memory when the constraint violation is detected).
*/
-
Table validation - advice on the best approach please? :)
Hi all - I would appreciate some advice please...
I have a (static) table in my form along the lines of the image below:
Each drop-down list is the same; the user can choose "Minor", "Moderate", "Significant", "Major" or "Intolerable". When they make a selection I need several things to happen:
- The background color of the cell needs to change: 'Minor' = green, 'Moderate' = amber, 'Significant' = amber, 'Major' = red, 'Intolerable' = red.
- The selection should auto-fill all the cells to its right with the same value - for example, if a user selects 'Major' in the second cell of the top row (Finance), all the cells to its right should default to 'Major' and change color accordingly.
- The selection must limit the choices available in the drop-downs to its right, to ensure the impact assessment can only increase over time - for example, if a user selects 'Major', all the remaining drop-downs on that row (to the right of the selected one) should only offer 'Major' or 'Intolerable'. The rule is broken if the user then changes a selection so that the rating decreases over time - which means the row must be redone.
The drop-down values are bound to specific numbers ('Minor' = 1, 'Significant' = 2, etc.) and that must remain the case, as the values are used in the XML at a later stage.
I had managed to get the color change working, and had tried to implement a "switch" statement to set the drop-down lists (item 3 above). This worked if the user made selections according to the rule of impacts worsening over time, but I got very odd behavior when the user subsequently changed a selection and "broke" the rule. I have since deleted all the JavaScript code to start over.
My question is this: what would be the best way to achieve what I want to do?
Could you do it in an efficient manner using a function / script object or something else?
I would be very grateful if someone could suggest a method, provide a snippet of code, or even write the code for a single cell for me to paste into every other one.
Here is the form: https://www.dropbox.com/s/ncvq9cyoqolh2tn/BIA%20Impacts%20Table.pdf?dl=0
Thank you very much
Ellis
Hi Ellis,
Sorry, I misread the requirement. If the date should be copied across regardless, then you can delete the if statement altogether:

var dateField = contextItem.parent.resolveNode('#field.[ui.oneOfChild.className == "dateTimeEdit"]');
dateField.rawValue = ImpactsSubForm.DateTimeField1.rawValue;

This version also handles date fields with different names (it references the date field by the class name of its child ui element rather than by name... just don't add another date field to the row).
Here's another link: https://sites.google.com/site/livecycledesignercookbooks/BIA%20Impacts%20Table.date.1.pdf?attredirects=0&d=1
Regards,
Bruce