QUERY OPTIMIZER, LOGMINER
Hi friends, I have very little knowledge about the following topics and I want to improve it. Please give me some good links, notes and PDFs on them.
1 QUERY OPTIMIZER
2 STATSPACK
3 SQLTRACE
4 TKPROF
I have no knowledge of these two topics
1 LOGMINER
4 DATAGUARD
Thank you
You have received the documentation links already. I'll give you links to some books that supplement the documentation.
susdba wrote:
Hi friends
I have very little knowledge about the following topics, I want to improve it. Please give me some good links, notes, & PDFs on them.
1 QUERY OPTIMIZER
2 STATSPACK
3 SQLTRACE
4 TKPROF
http://www.amazon.com/cost-based-Oracle-fundamentals-experts-voice/dp/1590596366
http://www.amazon.com/Optimizing-Oracle-performance-Cary-Millsap/dp/059600527X
http://www.amazon.com/effective-Oracle-design-Osborne-Oracle/dp/0072230657
I have no knowledge of these two topics
1 LOGMINER
4 DATAGUARD
Thank you
There is no book about LogMiner; it's just a package, nothing else. You can read the documentation for it, play with it, and you should be fine. For Data Guard, buy this book:
http://www.amazon.com/Oracle-guard-Handbook-Osborne-Oracle/dp/0071621113
HTH
Aman...
Tags: Database
Similar Questions
-
Query performance optimization
Hi, I need to know if there is another way to write this query.
The REVIEWS table has this structure:
EXAM_DATE DATE;
SUBJECT VARCHAR2(50);
GRADE NUMBER;
The idea is to get statistics of exams.
SELECT exa.exam_date,
       exa.subject,
       (SELECT COUNT(1)
          FROM reviews
         WHERE grade IN (9, 10)
           AND subject = exa.subject
           AND exam_date = exa.exam_date) outstanding,
       (SELECT COUNT(1)
          FROM reviews
         WHERE grade IN (4, 5, 6, 7, 8)
           AND subject = exa.subject
           AND exam_date = exa.exam_date) approved,
       (SELECT COUNT(1)
          FROM reviews
         WHERE grade IN (0, 1, 2, 3)
           AND subject = exa.subject
           AND exam_date = exa.exam_date) failed
  FROM reviews exa
 GROUP BY exam_date, subject;
Thank you!!
Hello
If all the data is in the same table, you shouldn't need any subqueries.
Maybe something like this:
SELECT exam_date, subject
     , COUNT (CASE WHEN grade IN (9, 10) THEN 1 END) AS outstanding
     , COUNT (CASE WHEN grade IN (4, 5, 6, 7, 8) THEN 1 END) AS approved
     , COUNT (CASE WHEN grade IN (0, 1, 2, 3) THEN 1 END) AS failed
FROM reviews
GROUP BY exam_date, subject
;
If you would care to post a small example of data (CREATE TABLE and INSERT statements) and the results desired from these sample data, then I could test this.
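The CASE-inside-COUNT rewrite above can be run end to end; here is a minimal sketch using Python's sqlite3 module (the table name and sample rows are invented for illustration — the same conditional-aggregation SQL works on Oracle):

```python
import sqlite3

# Hypothetical exam data for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reviews (exam_date TEXT, subject TEXT, grade INTEGER)")
conn.executemany(
    "INSERT INTO reviews VALUES (?, ?, ?)",
    [
        ("2024-01-10", "math", 9),
        ("2024-01-10", "math", 5),
        ("2024-01-10", "math", 2),
        ("2024-01-10", "math", 10),
    ],
)

# One pass over the table: each COUNT only counts rows where the CASE
# expression is non-NULL, replacing the three correlated subqueries.
row = conn.execute("""
    SELECT exam_date, subject,
           COUNT(CASE WHEN grade IN (9, 10)         THEN 1 END) AS outstanding,
           COUNT(CASE WHEN grade IN (4, 5, 6, 7, 8) THEN 1 END) AS approved,
           COUNT(CASE WHEN grade IN (0, 1, 2, 3)    THEN 1 END) AS failed
    FROM reviews
    GROUP BY exam_date, subject
""").fetchone()
print(row)  # ('2024-01-10', 'math', 2, 1, 1)
```

The table is scanned once instead of once per bucket per output row, which is the point of the rewrite.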
See the FAQ Forum:
-
Hello, I'm on 11.1.2.1.0.
In SQL, we use /*+ FIRST_ROWS(10) */.
a)
To use it in the VO, we can go to
VO.xml -> General -> Tuning
Query Optimizer Hint: FIRST_ROWS(10)
Access Mode: Scrollable
Range Size: 1
or
b)
VO.xml -> General -> Tuning
Query Optimizer Hint: FIRST_ROWS
Access Mode: Scrollable
Range Size: 10
Should I use option a) or b)?
Thank you
Kiran
(b). It is not sensible to try to get the first 10 rows as quickly as possible and then page back and forth through the data with a range size of 1.
Timo
-
Query slow using an index, fast with a full table scan
Hi,
(Thanks for the links.)
Here's my question, correctly formatted.
The query:
It runs in 32 seconds:

SELECT count(1) from ehgeoconstru ec
where ec.TYPE='BAR'
AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') )
and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
Same query, but with an extra where clause:
It takes 400 seconds:

SELECT count(1) from ehgeoconstru ec
where ec.TYPE='BAR'
and ( (ec.contextVersion = 'REALWORLD') --- ADDED HERE
AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') ) )
and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
It should return data from a table, given the conditions.
The database version is Oracle9i Release 9.2.0.7.0
These are the parameters relevant for the optimizer:
SQL> show parameter optimizer

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_dynamic_sampling           integer     1
optimizer_features_enable            string      9.2.0
optimizer_index_caching              integer     99
optimizer_index_cost_adj             integer     10
optimizer_max_permutations           integer     2000
optimizer_mode                       string      CHOOSE
SQL>

Here is the EXPLAIN PLAN output for the first, fast query:

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
| Id | Operation          | Name   | Rows | Bytes | Cost |
--------------------------------------------------------------------------------
|  0 | SELECT STATEMENT   |        |      |       |      |
|  1 |  SORT AGGREGATE    |        |      |       |      |
|* 2 |   TABLE ACCESS FULL| EHCONS |      |       |      |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(SUBSTR("EC"."strgfd",1,8)<>'[CIMText' AND "EC"."DEATHDATE" IS NULL
       AND "EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy-mm-dd hh24:mi:ss')
       AND "EC"."TYPE"='BAR')

Note: rule based optimization

Here is the EXPLAIN PLAN output for the slow query:

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
|  1 |  SORT AGGREGATE              |                  |      |       |      |
|* 2 |   TABLE ACCESS BY INDEX ROWID| ehgeoconstru     |      |       |      |
|* 3 |    INDEX RANGE SCAN          | ehgeoconstru_VSN |      |       |      |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(SUBSTR("EC"."strgfd",1,8)<>'[CIMText' AND "EC"."DEATHDATE" IS NULL
       AND "EC"."TYPE"='BAR')
   3 - access("EC"."CONTEXTVERSION"='REALWORLD' AND
       "EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy-mm-dd hh24:mi:ss'))
       filter("EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy-mm-dd hh24:mi:ss'))

Note: rule based optimization

Published by: PauloSMO on November 17, 2009 04:21

The TKPROF output for this slow statement is:

TKPROF: Release 9.2.0.7.0 - Production on Tue Nov 17 14:46:32 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: gen_ora_3120.trc
Sort options: prsela exeela fchela
********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

SELECT count(1) from ehgeoconstru ec where ec.TYPE='BAR' and
( (ec.contextVersion = 'REALWORLD') AND
( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') ) )
and deathdate is null and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.00     538.12     162221    1355323          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.00     538.12     162221    1355323          0          1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 153

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE
  27747   TABLE ACCESS BY INDEX ROWID OBJ#(73959)
2134955    INDEX RANGE SCAN OBJ#(73962) (object id 73962)

********************************************************************************

alter session set sql_trace=true

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0          0          0
Execute      1      0.00       0.02          0          0          0          0
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        1      0.00       0.02          0          0          0          0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 153

********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      2      0.00       0.02          0          0          0          0
Fetch        2      0.00     538.12     162221    1355323          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        5      0.00     538.15     162221    1355323          0          1

Misses in library cache during parse: 0
Misses in library cache during execute: 1

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0          0          0
Execute      0      0.00       0.00          0          0          0          0
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        0      0.00       0.00          0          0          0          0

Misses in library cache during parse: 0

2 user SQL statements in session.
0 internal SQL statements in session.
2 SQL statements in session.
********************************************************************************
Trace file: gen_ora_3120.trc
Trace file compatibility: 9.02.00
Sort options: prsela exeela fchela
2 sessions in tracefile.
2 user SQL statements in trace file.
0 internal SQL statements in trace file.
2 SQL statements in trace file.
2 unique SQL statements in trace file.
94 lines in trace file.
Published by: PauloSMO on November 17, 2009 07:07
Published by: PauloSMO on November 17, 2009 07:38 - title changed to be more correct.

Although your optimizer_mode is CHOOSE, it seems that no statistics have been gathered on ehgeoconstru. The absence of estimated costs and estimated row counts for each step of the plan, and the "Note: rule based optimization" at the end of both plans, would tend to confirm this.
optimizer_mode = CHOOSE means that if statistics have been gathered, the CBO will be used; but if no statistics are present on any of the tables in the query, the rule-based optimizer will be used. The RBO tends to favour an index whenever one can be used. I guess the index ehgeoconstru_VSN has contextversion as its leading column and also includes birthdate.
You can either gather statistics on the table (if all the other tables have statistics) using dbms_stats.gather_table_stats, or hint the query to use a full scan instead of the index. Another solution would be to apply a function or an operation to contextversion to prevent use of the index, something like this:
SELECT COUNT(*) FROM ehgeoconstru ec
WHERE ec.type='BAR'
and ec.contextVersion||'' = 'REALWORLD'
and ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS')
and deathdate is null
and SUBSTR(ec.strgfd, 1, LENGTH('[CIMText')) <> '[CIMText'
or maybe UPPER(ec.contextVersion), if that would not change the rows returned.
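The effect of wrapping an indexed column in an expression can be sketched with Python's sqlite3 module (the table and data are invented; SQLite's optimizer, like Oracle's, will not use a plain column index once the column is wrapped):

```python
import sqlite3

# Toy stand-in for ehgeoconstru; names echo the thread, data is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ehgeo (contextversion TEXT, birthdate TEXT)")
conn.execute("CREATE INDEX ehgeo_vsn ON ehgeo (contextversion, birthdate)")
conn.executemany("INSERT INTO ehgeo VALUES (?, ?)",
                 [("REALWORLD", "2009-01-01"),
                  ("DRAFT", "2009-01-01"),
                  ("REALWORLD", "2010-01-01")])

# Plain predicate: the optimizer can drive the query through the index.
indexed = conn.execute(
    "SELECT * FROM ehgeo WHERE contextversion = 'REALWORLD'").fetchall()

# Appending an empty string keeps the predicate logically identical but
# turns the column into an expression, which disqualifies the index.
wrapped = conn.execute(
    "SELECT * FROM ehgeo WHERE contextversion || '' = 'REALWORLD'").fetchall()

assert sorted(indexed) == sorted(wrapped)  # same rows either way

plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM ehgeo "
                    "WHERE contextversion || '' = 'REALWORLD'").fetchall()
print(plan[0][3])  # a full scan of the table, not an index search
```

Same result set, different access path — which is exactly why this trick steers a misbehaving optimizer away from an index.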
John
-
Can anyone define the difference between hard parse and soft parse
Can anyone define the difference between hard parse and soft parse as they appear in SQL Tuning Advisor and ADDM recommendations.
Please advise on what needs to be done to improve these.
Thank you
What is parsing?
-> any SQL query goes through validation in the shared pool
Validation: syntax check, semantic check, etc...
A hard parse takes the SQL through all of the steps below in the shared pool:
Syntax
Semantics
Query transformation
Optimization
Creating an executable
I/O
A soft parse (time saved by skipping the redundant steps) skips:
Query transformation
Optimization
Creating an executable
I/O
Thank you.
-
Need information about dynamic SQL
Hello
I have a requirement to pass the table name and column names into dynamic SQL statements, to retrieve data from the given table and columns. But here we cannot use bind variables, because we cannot bind the names of tables and columns in dynamic SQL. We can only bind values, and reuse the same query several times with different values for the binds. I know that using bind variables in dynamic SQL is more effective, because the SQL engine parses the query string once, optimizes it, and then binds the values to obtain the correct data.
But my question is: even if we concatenate the table name and column name into the dynamic SQL statement, will the SQL engine reuse the same query string, optimize it once and then just bind the values we give it? Or will it run the optimizer again for the query every time? Suppose I always pass five table names with the same column names. On each execution, will it still be the same SQL string, or will the optimizer run again for each table?
Please clarify my doubt.
Thank you

user212310 wrote:
But my question is: even if we concatenate the table name and column name into the dynamic SQL statement, will the SQL engine reuse the same query string, optimize it once and then just bind the values we give it? Or will it run the optimizer again for the query every time? Suppose I always pass five table names with the same column names. On each execution, will it still be the same SQL string, or will the optimizer run again for each table?
How many unique SQL statements does your code create? That is the fundamental question with regard to shareable SQL.
Object names (table names, column names, function names and so on) cannot be variables in a SQL cursor. You cannot use bind variables for them. The following is not supported:
UPDATE :tablename SET col2 = :newvalue WHERE col1 = :pkey
So if your code produces the above statement for 5 different tables, 5 unique SQL statements will be created. For example:
UPDATE tab1 SET col2 = :newvalue WHERE col1 = :pkey
..
UPDATE tab5 SET col2 = :newvalue WHERE col1 = :pkey
Creating dynamic SQL statements at best means a hard parse the first time, followed by a soft parse for each subsequent execution.
That is still not good enough for performance - best is no soft parse either. The most optimal approach is for the application code to create the cursor once (via a hard or soft parse) and then reuse the cursor handle again and again, each time with new bind variable values.
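The cursor-reuse idea above can be sketched with Python's sqlite3 module (table and values invented): the identifier is interpolated once, from a whitelist, while the values remain binds, so one statement text serves every execution:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab1 (col1 INTEGER PRIMARY KEY, col2 TEXT)")
conn.executemany("INSERT INTO tab1 VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])

# The table name cannot be a bind variable, so it is interpolated once,
# guarded by a whitelist, while the values stay as binds. One SQL text
# then serves every execution, which is what makes it shareable.
ALLOWED = {"tab1"}
table = "tab1"
assert table in ALLOWED
sql = f"UPDATE {table} SET col2 = ? WHERE col1 = ?"

cur = conn.cursor()
for pkey, newvalue in [(1, "x"), (2, "y"), (3, "z")]:
    cur.execute(sql, (newvalue, pkey))  # same statement, new bind values

rows = conn.execute("SELECT col2 FROM tab1 ORDER BY col1").fetchall()
print(rows)  # [('x',), ('y',), ('z',)]
```

With 5 whitelisted tables you would get 5 statement texts, each parsed once and then reused — matching the hard-parse-then-reuse pattern described above.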
By the sounds of it, your reasons for a dynamic SQL approach are not valid. Dynamic SQL is the exception. Always. And the vast majority of the time it is used, it is used incorrectly and for the wrong reasons.
The most appropriate interface between a client and a server is a formal application programming interface. In Oracle, this means creating the server API (for clients to call) using PL/SQL packages and ref cursors.
Dynamic SQL is at the opposite extreme of what an API is. How robust would systems and applications be if the core API did not exist and a "dynamic code" approach had been used instead? Performance and robustness would be non-existent.
And now you want to use this approach with Oracle? How does that make any sense?
-
I think I know the answers to these, but I'd simply like confirmation of my assumptions.
I have a table that looks, for example, like this:

create table root.table (
  id binary(21) not null,
  state integer,
  state_change_time integer,
  new_login_name char(64)
);
I have the following indexes:

create unique index root.table_unique_idx on root.table (id);
create index root.table_login_name_idx on root.table (new_login_name);
The following query shows this plan:
Command > select id from table where new_login_name like 'unnom' order by id;
Query optimizer plan:
STEP: 1
LEVEL: 1
OPERATION: TblLkTtreeScan
TABLENAME: TABLE
IXNAME: TABLE_UNIQUE_IDX
INDEXED CONDITION: <NULL>
NOT INDEXED: TABLE.NEW_LOGIN_NAME LIKE 'unnom'
< 010000000043DD170D298BB2A41C3F589B494D619A >
< 01000000005A0A1DA939D09440080C4CA3E89A4A63 >
< 01000000005BADB6E29884909B4DC765407DF7A209 >
3 rows found.
Execution time (SQLExecute + Fetch loop) = 0.000360 seconds.
Command >
I expected table_login_name_idx to be used. I suspect that the "order by id" clause changes which index is used, so I change the table_login_name_idx index:
create unique index root.table_unique_idx on root.table (id);
create index root.table_login_name_idx on root.table (new_login_name, id);
Now the same query shows:
Command > select id from table where new_login_name like 'unnom' order by id;
Query optimizer plan:
STEP: 1
LEVEL: 1
OPERATION: TblLkTtreeScan
TABLENAME: TABLE
IXNAME: TABLE_LOGIN_NAME_IDX
INDEXED CONDITION: TABLE.NEW_LOGIN_NAME = 'ABC'
NOT INDEXED: < NULL >
< 010000000043DD170D298BB2A41C3F589B494D619A >
< 01000000005A0A1DA939D09440080C4CA3E89A4A63 >
< 01000000005BADB6E29884909B4DC765407DF7A209 >
3 rows found.
Execution time (SQLExecute + Fetch loop) = 0.000320 seconds.
Command >
In the execution plan, what do "INDEXED CONDITION" and "NOT INDEXED" mean? I guess the first query does not use the index I hoped it would, while the second one does. Although there is no difference in timing now, I guess that if I had several thousand rows in the table instead of 3, the second query would give much better times than the first.
Are my assumptions correct?

INDEXED CONDITION shows which predicates, for that step in the plan, are evaluated via the index (good) and NOT INDEXED which are not (less good). The first rule here is to be sure you have up-to-date optimizer statistics for all the tables involved before you start looking at plans (the statsupdate command in ttIsql, for example), otherwise the optimizer may be working with inaccurate/limited information.
In the case you've given here, for the first query the optimizer has a choice between (a) using the index table_unique_idx to order the returned data (thus avoiding a separate sort step) and (b) using the index table_login_name_idx to filter the rows and then performing an explicit sort. Based on the information it had available, it chose option (a).
For the second case, with the revised index, it can use table_login_name_idx to do both the filtering and the sorting, which is nice!
As you note, the difference in performance between the two will become much greater as the number of rows in the table increases.
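The effect of the composite (new_login_name, id) index can be sketched with Python's sqlite3 module (toy table and data invented; SQLite's EXPLAIN QUERY PLAN stands in for the TimesTen plan output, and an equality predicate stands in for the LIKE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logins (id BLOB, new_login_name TEXT)")
conn.executemany("INSERT INTO logins VALUES (?, ?)",
                 [(b"c", "unnom"), (b"a", "unnom"), (b"b", "other")])

# Index on (new_login_name, id): the equality predicate uses the leading
# column, and the matching rows then come out of the index already
# ordered by id, so no separate sort step is needed.
conn.execute("CREATE INDEX login_name_idx ON logins (new_login_name, id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM logins "
    "WHERE new_login_name = 'unnom' ORDER BY id").fetchall()
detail = " ".join(r[3] for r in plan)
print(detail)  # mentions login_name_idx; no temp b-tree sort step

rows = conn.execute("SELECT id FROM logins WHERE new_login_name = 'unnom' "
                    "ORDER BY id").fetchall()
print(rows)  # [(b'a',), (b'c',)]
```

With an index on new_login_name alone, the same plan would show an extra "USE TEMP B-TREE FOR ORDER BY" step — the explicit sort the composite index avoids.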
Chris
-
Help optimizing an UPDATE query
Hi gurus,
I'm trying to optimize the UPDATE query below on a large table, TT_TERM_HIST (table size is 13 GB).
The update statement is expected to update ~7M rows. The total number of rows is ~9M.
The TT_TERM table is also large (table size is 9.5 GB), with a PK on column DEAL_NUM.
UPDATE tt_term_hist hist SET LOCAL_BANKING_SYSTEM19 = (SELECT LOCAL_BANKING_SYSTEM19 FROM tt_term tt WHERE tt.deal_num = hist.deal_num) WHERE hist.deal_num IN (SELECT deal_num FROM tt_term WHERE SUBSTR (LOCAL_BANKING_SYSTEM19, 1, 5) IN ('FT7FC', 'FT7MC', 'FT7TM')) ;
Performance plan is as follows:
----------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ----------------------------------------------------------------------------------------------- | 0 | UPDATE STATEMENT | | 266K| 6763K| 1756K (16)| 05:51:23 | | 1 | UPDATE | TT_TERM_HIST | | | | | | 2 | NESTED LOOPS | | 266K| 6763K| 691K (1)| 02:18:16 | |* 3 | TABLE ACCESS FULL | TT_TERM | 44729 | 742K| 333K (1)| 01:06:41 | |* 4 | INDEX RANGE SCAN | IRTERM_HIST_PK | 6 | 54 | 2 (0)| 00:00:01 | | 5 | TABLE ACCESS BY INDEX ROWID| TT_TERM | 1 | 17 | 3 (0)| 00:00:01 | |* 6 | INDEX UNIQUE SCAN | IRTERM_PK | 1 | | 2 (0)| 00:00:01 | ----------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - filter(SUBSTR("LOCAL_BANKING_SYSTEM19",1,5)='FT7FC' OR SUBSTR("LOCAL_BANKING_SYSTEM19",1,5)='FT7MC' OR SUBSTR("LOCAL_BANKING_SYSTEM19",1,5)='FT7TM') 4 - access("HIST"."DEAL_NUM"="DEAL_NUM") 6 - access("TT"."DEAL_NUM"=:B1)
Then I created a function-based index on table TT_TERM using the expression SUBSTR(LOCAL_BANKING_SYSTEM19, 1, 5), and the plan changed as follows:
------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------- | 0 | UPDATE STATEMENT | | 89688 | 2364K| 480K (19)| 01:36:06 | | 1 | UPDATE | TT_TERM_HIST | | | | | | 2 | NESTED LOOPS | | 89688 | 2364K| 121K (1)| 00:24:21 | | 3 | INLIST ITERATOR | | | | | | | 4 | TABLE ACCESS BY INDEX ROWID| TT_TERM | 15060 | 264K| 1225 (0)| 00:00:15 | |* 5 | INDEX RANGE SCAN | CS_TERM_LBS19 | 6024 | | 17 (0)| 00:00:01 | |* 6 | INDEX RANGE SCAN | IRTERM_HIST_PK | 6 | 54 | 2 (0)| 00:00:01 | | 7 | TABLE ACCESS BY INDEX ROWID | TT_TERM | 1 | 17 | 3 (0)| 00:00:01 | |* 8 | INDEX UNIQUE SCAN | IRTERM_PK | 1 | | 2 (0)| 00:00:01 | ------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 5 - access(SUBSTR("LOCAL_BANKING_SYSTEM19",1,5)='FT7FC' OR SUBSTR("LOCAL_BANKING_SYSTEM19",1,5)='FT7MC' OR SUBSTR("LOCAL_BANKING_SYSTEM19",1,5)='FT7TM') 6 - access("HIST"."DEAL_NUM"="DEAL_NUM") 8 - access("TT"."DEAL_NUM"=:B1)
Trying the PARALLEL hint shoots the cost up into the millions.
UPDATE /*+ PARALLEL */ tt_term_hist hist SET LOCAL_BANKING_SYSTEM19 = (SELECT LOCAL_BANKING_SYSTEM19 FROM tt_term tt WHERE tt.deal_num = hist.deal_num) WHERE hist.deal_num IN (SELECT deal_num FROM tt_term WHERE SUBSTR (LOCAL_BANKING_SYSTEM19, 1, 5) IN ('FT7FC', 'FT7MC', 'FT7TM')) ;
---------------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib | ---------------------------------------------------------------------------------------------------------------------------------- | 0 | UPDATE STATEMENT | | 6096K| 156M| 24M (25)| 81:18:18 | | | | | 1 | UPDATE | TT_TERM_HIST | | | | | | | | | 2 | PX COORDINATOR | | | | | | | | | | 3 | PX SEND QC (RANDOM) | :TQ10002 | 6096K| 156M| 4482 (1)| 00:00:54 | Q1,02 | P->S | QC (RAND) | |* 4 | HASH JOIN BUFFERED | | 6096K| 156M| 4482 (1)| 00:00:54 | Q1,02 | PCWP | | | 5 | BUFFER SORT | | | | | | Q1,02 | PCWC | | | 6 | PX RECEIVE | | 1023K| 17M| 1225 (0)| 00:00:15 | Q1,02 | PCWP | | | 7 | PX SEND HASH | :TQ10000 | 1023K| 17M| 1225 (0)| 00:00:15 | | S->P | HASH | | 8 | INLIST ITERATOR | | | | | | | | | | 9 | TABLE ACCESS BY INDEX ROWID| TT_TERM | 1023K| 17M| 1225 (0)| 00:00:15 | | | | |* 10 | INDEX RANGE SCAN | CS_TERM_LBS19 | 6024 | | 17 (0)| 00:00:01 | | | | | 11 | PX RECEIVE | | 9007K| 77M| 3257 (1)| 00:00:40 | Q1,02 | PCWP | | | 12 | PX SEND HASH | :TQ10001 | 9007K| 77M| 3257 (1)| 00:00:40 | Q1,01 | P->P | HASH | | 13 | PX BLOCK ITERATOR | | 9007K| 77M| 3257 (1)| 00:00:40 | Q1,01 | PCWC | | | 14 | TABLE ACCESS FULL | TT_TERM_HIST | 9007K| 77M| 3257 (1)| 00:00:40 | Q1,01 | PCWP | | | 15 | TABLE ACCESS BY INDEX ROWID | TT_TERM | 1 | 17 | 3 (0)| 00:00:01 | | | | |* 16 | INDEX UNIQUE SCAN | IRTERM_PK | 1 | | 2 (0)| 00:00:01 | | | | ---------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 4 - access("HIST"."DEAL_NUM"="DEAL_NUM") 10 - access(SUBSTR("LOCAL_BANKING_SYSTEM19",1,5)='FT7FC' OR SUBSTR("LOCAL_BANKING_SYSTEM19",1,5)='FT7MC' OR SUBSTR("LOCAL_BANKING_SYSTEM19",1,5)='FT7TM') 16 - 
access("TT"."DEAL_NUM"=:B1)
By the way, I am running on a 2-node RAC. DB version details are as follows:
SQL> select banner from v$version; Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production PL/SQL Release 11.2.0.4.0 - Production CORE 11.2.0.4.0 Production TNS for Linux: Version 11.2.0.4.0 - Production NLSRTL Version 11.2.0.4.0 - Production
Please let us know your opinion on how to optimize this query, and let me know in case you need any other inputs.
Hello
"The update statement is expected to update ~7M rows. The total number of rows is ~9M."
Could you specify the total number of rows in each table? Does it make sense to use a hash join to join the tables?
Also try replacing IN with EXISTS. You can try updating a join; it might help to eliminate one join step, as in:
UPDATE ( SELECT HIST.LOCAL_BANKING_SYSTEM19 OLD_VAL , TT.LOCAL_BANKING_SYSTEM19 NEW_VAL FROM TT_TERM_HIST HIST, TT_TERM TT WHERE TT.DEAL_NUM = HIST.DEAL_NUM AND SUBSTR (LOCAL_BANKING_SYSTEM19, 1, 5) IN ('FT7FC', 'FT7MC', 'FT7TM') ) SET OLD_VAL = NEW_VAL ;
WARNING! It is just an untested sample.
WBR,
-
Optimizing a query with a function in the WHERE clause
Hello
I have a query like this:
SELECT * FROM table_1 t WHERE ( -- Clause A (very long clause that filters a lot of rows) ) AND f(t.field) = 'Y' -- This function is heavy but it should filter few rows
This query is very slow, I think because it tries to evaluate f() for all rows in table_1.
However, if I run this query:
SELECT f(t.field) FROM table_1 t WHERE ( -- very long clause that filters a lot of rows )
It's very fast.
How can I rewrite the query so it filters the rows with clause A first, and only then by the function?
Thanks in advance!
If you wrap the function in a scalar subquery, the optimizer can use scalar subquery caching:
SELECT * FROM table_1 t
WHERE ( -- Clause A (very long clause that filters a lot of rows) )
AND (Select f(t.field) From Dual) = 'Y' -- This function is heavy but it should filter few rows
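The caching effect can be illustrated outside the database: scalar subquery caching is essentially memoization on the function's input, as in this Python sketch (the function f and its inputs are invented):

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def f(field):
    # Stand-in for the heavy PL/SQL function; the counter shows how
    # often the body actually runs.
    global calls
    calls += 1
    return "Y" if field % 2 == 0 else "N"

# 1000 rows but only 10 distinct inputs: with caching, the heavy body
# runs once per distinct value, which is what scalar subquery caching
# achieves for (SELECT f(t.field) FROM dual).
rows = [i % 10 for i in range(1000)]
kept = [r for r in rows if f(r) == "Y"]
print(len(kept), calls)  # 500 10
```

The win depends on the number of distinct inputs being small relative to the row count — the same condition under which the scalar-subquery wrapper pays off.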
-
How to optimize this query?
Hello
I have a query like this:
MERGE INTO table1 st1
USING (select * from (select pk, value_match, diff_value, m_date,
                             row_number() over (partition by pk order by diff_value) rnk
       from (select distinct /*+ FULL(t1) FULL(t2) */ t1.pk, t2.m_date,
                    case when t1.m_date = t2.m_date then 'MATCHED'
                         when t2.m_date between t1.m_date - 1 and t1.m_date + 1 then 'MATCHED WITH +/-1 DAY'
                         when t2.m_date between t1.m_date - 2 and t1.m_date + 2 then 'MATCHED WITH +/-2 DAYS'
                         else null
                    end value_match,
                    case when t1.m_date = t2.m_date then 0
                         when t2.m_date between t1.m_date - 1 and t1.m_date + 1 then 1
                         when t2.m_date between t1.m_date - 2 and t1.m_date + 2 then 2
                         else null
                    end diff_value
             from table2 t2, table1 t1
             where t1.value is null
             and t1.id = t2.id)
       where value_match is not null)
       where rnk = 1) s
ON (st1.pk = s.pk)
WHEN MATCHED THEN
  UPDATE SET st1.value = s.value_match, st1.diff_value = s.diff_value, st1.up_date = s.m_date
  WHERE st1.value is null;
Explain plan:
Table1 has 3 million records and table 2 has 1 million records.
I gathered stats before running this query and used the 'FULL' hints; even so, it runs for 45 minutes.
Please suggest the best solution to optimize this query.
Thanks in advance.
Remove the hints.
No need for the DISTINCT.
Get the diff with ceil(abs(t2.m_date - t1.m_date)) and filter on that where diff_value <>
Assign the "... MATCHED" string late, in the UPDATE clause.
Maybe if you state your exact requirements with a small example, the query can be simplified further - or it may turn out it is not doing what you want.
-
How to optimize this XMLDB query
Hello dear community, we have two XMLType tables. They are very similar but not identical, and we do not have an XSD to validate against for this exercise.
We need to do a join between these tables; the example data and code go like this:
create table xmltst ( xmldata xmltype); create table xmltst2 ( xmldata xmltype); declare idata varchar2(4000); idata2 varchar2(4000); begin idata := '<?xml version="1.0" encoding="UTF-8"?> <SWs> <SW s_ID="T6B890.00-01" t_ID="T6B890.00"> <Ds> <De sX="59" sY="-57" rX="7" rY="22" m_ID="L" eTime_s="2014-12-12T02:22:11+08:00" eTime_e="2014-12-12T02:22:42+08:00" mst="0.631"/> <De sX="70" sY="-57" rX="7" rY="23" m_ID="L" eTime_s="2014-12-12T02:22:12+08:00" eTime_e="2014-12-12T02:22:33+08:00" mst="0.217"/> <De sX="69" sY="-57" rX="47" rY="1" m_ID="R" eTime_s="2014-12-12T02:22:16+08:00" eTime_e="2014-12-12T02:22:56+08:00" mst="0.974"/> </Ds> </SW> <SW s_ID="T6B890.00-02" t_ID="T6B890.00"> <Ds> <De sX="56" sY="-1" rX="72" rY="19" m_ID="R" eTime_s="2014-12-12T02:36:01+08:00" eTime_e="2014-12-12T02:36:29+08:00" mst="0.541"/> <De sX="57" sY="-1" rX="39" rY="42" m_ID="L" eTime_s="2014-12-12T02:22:12+08:00" eTime_e="2014-12-12T02:23:01+08:00" mst="0.426"/> <De sX="58" sY="-1" rX="72" rY="20" m_ID="R" eTime_s="2014-12-12T02:36:07+08:00" eTime_e="2014-12-12T02:36:18+08:00" mst="0.716"/> </Ds> </SW> </SWs>'; idata2 := '<?xml version="1.0" encoding="UTF-8"?> <SWs> <SW s_ID="T6B890.00-01" t_ID="T6B890.00"> <Ds> <De sX="59" sY="-57" rX="7" rY="22" m_ID="L" eTime_s="2014-12-12T02:22:11+08:00" eTime_e="2014-12-12T02:22:42+08:00"/> <De sX="70" sY="-57" rX="7" rY="23" m_ID="L" eTime_s="2014-12-12T02:22:12+08:00" eTime_e="2014-12-12T02:22:33+08:00"/> <De sX="69" sY="-57" rX="47" rY="1" m_ID="R" eTime_s="2014-12-12T02:22:16+08:00" eTime_e="2014-12-12T02:22:56+08:00"/> <De sX="72" sY="-57" rX="47" rY="2" armID="R" eTime_s="2014-12-12T02:22:18+08:00" eTime_e="2014-12-12T02:23:28+08:00"/> <De sX="82" sY="-57" rX="7" rY="25" armID="L" eTime_s="2014-12-12T02:22:19+08:00" eTime_e="2014-12-12T02:22:58+08:00"/> </Ds> </SW> <SW s_ID="T6B890.00-02" t_ID="T6B890.00"> <Ds> <De sX="56" sY="-1" rX="72" rY="19" m_ID="R" eTime_s="2014-12-12T02:36:01+08:00" eTime_e="2014-12-12T02:36:29+08:00"/> <De sX="57" 
sY="-1" rX="39" rY="42" m_ID="L" eTime_s="2014-12-12T02:22:12+08:00" eTime_e="2014-12-12T02:23:01+08:00"/> <De sX="58" sY="-1" rX="72" rY="20" m_ID="R" eTime_s="2014-12-12T02:36:07+08:00" eTime_e="2014-12-12T02:36:18+08:00"/> </Ds> </SW> </SWs>'; insert into xmltst values (idata); insert into xmltst2 values (idata2); end; commit;
The SQL code we are trying to optimize:
with tt as ( SELECT /*+ materialize */ x.* FROM xmltst t, XMLTABLE ('/SWs/SW[@s_ID="T6B890.00-01"]/Ds/De' PASSING t.xmldata COLUMNS sX number PATH '@sX', sY number PATH '@sY', rX number PATH '@rX', rY number PATH '@rY', eTime_s varchar2(30) PATH '@eTime_s', eTime_e varchar2(30) PATH '@eTime_e', mst number PATH '@mst' ) x ) ,tt2 as ( SELECT /*+ materialize */ x.* FROM xmltst2 t, XMLTABLE ('/SWs/SW[@s_ID="T6B890.00-01"]/Ds/De' PASSING t.xmldata COLUMNS sX number PATH '@sX', sY number PATH '@sY', rX number PATH '@rX', rY number PATH '@rY', eTime_s varchar2(30) PATH '@eTime_s', eTime_e varchar2(30) PATH '@eTime_e' ) x ) select tt2.*,tt.mst from tt2 left outer join tt on (tt2.sX = tt.sX and tt2.sY = tt.sY and tt2.rX = tt.rX and tt.rY=tt.rY)
CREATE INDEX xmltst_idx ON xmltst (xmldata) INDEXTYPE IS XDB.XMLIndex PARAMETERS ( 'XMLTable SW_tab ''/SWs/Sw'' COLUMNS s_ID VARCHAR2(100) PATH ''@s_ID'', sX NUMBER PATH ''Ds/De/@sX'', sY NUMBER PATH ''Ds/De/@sY'', rX NUMBER PATH ''Ds/De/@rX'', rY NUMBER PATH ''Ds/De/@rY''');
I create an index as above, but it does not seem to be used in the explain plan for the XML part of the query.
A lot of the time I also get the error below, but for some reason I cannot reproduce it now.
I thought it was because I cannot index after branching on s_ID.
SQL Error: ORA-29879: cannot create multiple domain indexes on a column list using same indextype 29879. 00000 - "cannot create multiple domain indexes on a column list using same indextype" *Cause: An attempt was made to define multiple domain indexes on the same column list using identical indextypes. *Action: Check to see if a different indextype can be used or if the index can be defined on another column list.
The index below, however, does seem to be chosen, as shown in the explain plan.
Why can't I see the index above in the explain plan?
CREATE INDEX "OE"."XMLTST_INDX01" ON "OE"."XMLTST" ("XMLDATA") INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS ('paths (include (/SWs/SW/@s_ID))');
However, it still uses a nested loops join when joining the two tables after the XML processing... Is it possible to tell Oracle to use an index, or some kind of faster join, after the XML select part?
My real case has far more rows to join on X - Y, and it would be nice to have an index so they can be reached quickly.
When doing some small tests, the WITH clause will eventually make Oracle core dump. That should not happen: even though it is a virtual machine with 3 GB of memory max and memory_max_target = 800M, all my data is not even 50 MB.
We are the analyst team, and the dev team suggested it would take a little too much time to contact Oracle Support, so I finally created 3 global temporary tables with ON COMMIT PRESERVE ROWS and got much better performance that way.
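That workaround could be sketched like this (the table name gtt_tt is hypothetical; the idea is one staging table per XMLTABLE projection, loaded once per session, so the final join works on plain relational rows):

```sql
-- Hypothetical sketch: stage the shredded XML rows in a global
-- temporary table, then join ordinary rows instead of re-evaluating
-- XMLTABLE inside the join.
CREATE GLOBAL TEMPORARY TABLE gtt_tt (
    sX      NUMBER,
    sY      NUMBER,
    rX      NUMBER,
    rY      NUMBER,
    eTime_s VARCHAR2(30),
    eTime_e VARCHAR2(30),
    mst     NUMBER
) ON COMMIT PRESERVE ROWS;

INSERT INTO gtt_tt (sX, sY, rX, rY, eTime_s, eTime_e, mst)
SELECT x.sX, x.sY, x.rX, x.rY, x.eTime_s, x.eTime_e, x.mst
FROM   xmltst t,
       XMLTABLE ('/SWs/SW[@s_ID="T6B890.00-01"]/Ds/De'
                 PASSING t.xmldata
                 COLUMNS sX      NUMBER       PATH '@sX',
                         sY      NUMBER       PATH '@sY',
                         rX      NUMBER       PATH '@rX',
                         rY      NUMBER       PATH '@rY',
                         eTime_s VARCHAR2(30) PATH '@eTime_s',
                         eTime_e VARCHAR2(30) PATH '@eTime_e',
                         mst     NUMBER       PATH '@mst') x;
```

A regular index on (sX, sY, rX, rY) can also be declared on the temporary table if the join still needs help.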
-
Query optimization with join lookups, and details on an extra column
I have the following SQL, used for a report that outputs some stats (with some name lookups). There is a good chance it can be optimized with better SQL, but I also hope to add an additional column, which I'm not sure how to do.
I want the extra column to be a percentage: each row's Units value as a % of the total for its category/group combination.
Oracle SQL is v11.2.0
Here's the SQL code, as it is currently:
select    a.date_adjusted,
          a.task_name,
          sum (case when a.units_adjusted is not null
                    then a.units_adjusted
                    else a.units_original
               end) Units,
          b.group_name,
          b.category_name
from      actuals_intake a
left join -- lookups to obtain group and category names from their ID's in the groupings table
          (select  c.task_id,
                   d.group_name,
                   e.category_name,
                   c.business_unit_id
           from    task_groupings c,
                   task_groups d,
                   task_categories e
           where   c.group_id = d.id
           and     c.business_unit_id = d.business_unit_id
           and     c.category_id = e.id
           and     c.business_unit_id = e.business_unit_id
          ) b
on        a.task_id = b.task_id
and       a.business_unit_id = b.business_unit_id
where     a.business_unit_id = :P10_SELECT_BUSINESS_UNIT
and       a.date_adjusted between to_date (:P10_DATE_START, 'dd-mon-yyyy')
                          and     to_date (:P10_DATE_END,   'dd-mon-yyyy')
group by  a.date_adjusted, a.task_name, b.group_name, b.category_name
order by  a.date_adjusted, b.category_name, b.group_name
This will set up the tables and data:
CREATE TABLE ACTUALS_INTAKE (
    ID               NUMBER,
    DATE_ORIGINAL    DATE,
    TASK_NAME        VARCHAR2(500 CHAR),
    TASK_ID          NUMBER,
    UNITS_ORIGINAL   NUMBER,
    BUSINESS_UNIT_ID NUMBER,
    SUB_UNIT_ID      NUMBER,
    DATE_ADJUSTED    DATE,
    UNITS_ADJUSTED   NUMBER
);

CREATE TABLE TASK_CATEGORIES (
    ID               NUMBER,
    CATEGORY_NAME    VARCHAR2(100 CHAR),
    BUSINESS_UNIT_ID NUMBER
);

CREATE TABLE TASK_GROUPS (
    ID               NUMBER,
    GROUP_NAME       VARCHAR2(100 CHAR),
    BUSINESS_UNIT_ID NUMBER
);

CREATE TABLE TASK_GROUPINGS (
    TASK_ID          NUMBER,
    GROUP_ID         NUMBER,
    CATEGORY_ID      NUMBER,
    BUSINESS_UNIT_ID NUMBER
);

INSERT ALL
    INTO ACTUALS_INTAKE (ID, DATE_ORIGINAL, TASK_NAME, TASK_ID, UNITS_ORIGINAL, BUSINESS_UNIT_ID, SUB_UNIT_ID, DATE_ADJUSTED, UNITS_ADJUSTED) VALUES (1, '03/15/2014', 'Task One',   1, 200, 10, null, '03/15/2014', null)
    INTO ACTUALS_INTAKE (ID, DATE_ORIGINAL, TASK_NAME, TASK_ID, UNITS_ORIGINAL, BUSINESS_UNIT_ID, SUB_UNIT_ID, DATE_ADJUSTED, UNITS_ADJUSTED) VALUES (2, '03/15/2014', 'Task Two',   2, 30,  10, null, '03/15/2014', null)
    INTO ACTUALS_INTAKE (ID, DATE_ORIGINAL, TASK_NAME, TASK_ID, UNITS_ORIGINAL, BUSINESS_UNIT_ID, SUB_UNIT_ID, DATE_ADJUSTED, UNITS_ADJUSTED) VALUES (3, '03/15/2014', 'Task Three', 3, 650, 10, null, '03/15/2014', null)
    INTO ACTUALS_INTAKE (ID, DATE_ORIGINAL, TASK_NAME, TASK_ID, UNITS_ORIGINAL, BUSINESS_UNIT_ID, SUB_UNIT_ID, DATE_ADJUSTED, UNITS_ADJUSTED) VALUES (4, '03/15/2014', 'Task Four',  4, 340, 10, null, '03/15/2014', null)
    INTO ACTUALS_INTAKE (ID, DATE_ORIGINAL, TASK_NAME, TASK_ID, UNITS_ORIGINAL, BUSINESS_UNIT_ID, SUB_UNIT_ID, DATE_ADJUSTED, UNITS_ADJUSTED) VALUES (5, '03/14/2014', 'Task Four',  4, 60,  10, null, '03/15/2014', null)
    INTO ACTUALS_INTAKE (ID, DATE_ORIGINAL, TASK_NAME, TASK_ID, UNITS_ORIGINAL, BUSINESS_UNIT_ID, SUB_UNIT_ID, DATE_ADJUSTED, UNITS_ADJUSTED) VALUES (6, '03/15/2014', 'Task Five',  5, 15,  10, null, '03/15/2014', null)
    INTO ACTUALS_INTAKE (ID, DATE_ORIGINAL, TASK_NAME, TASK_ID, UNITS_ORIGINAL, BUSINESS_UNIT_ID, SUB_UNIT_ID, DATE_ADJUSTED, UNITS_ADJUSTED) VALUES (7, '03/15/2014', 'Task Six',   6, 40,  10, null, '03/15/2014', null)
SELECT 1 FROM DUAL;

INSERT ALL
    INTO TASK_GROUPS (ID, GROUP_NAME, BUSINESS_UNIT_ID) VALUES (1, 'Group One',   10)
    INTO TASK_GROUPS (ID, GROUP_NAME, BUSINESS_UNIT_ID) VALUES (2, 'Group Two',   10)
    INTO TASK_GROUPS (ID, GROUP_NAME, BUSINESS_UNIT_ID) VALUES (3, 'Group Three', 10)
SELECT 1 FROM DUAL;

INSERT ALL
    INTO TASK_CATEGORIES (ID, CATEGORY_NAME, BUSINESS_UNIT_ID) VALUES (1, 'Category A', 10)
    INTO TASK_CATEGORIES (ID, CATEGORY_NAME, BUSINESS_UNIT_ID) VALUES (2, 'Category A', 10)
    INTO TASK_CATEGORIES (ID, CATEGORY_NAME, BUSINESS_UNIT_ID) VALUES (3, 'Category B', 10)
SELECT 1 FROM DUAL;

INSERT ALL
    INTO TASK_GROUPINGS (TASK_ID, GROUP_ID, CATEGORY_ID, BUSINESS_UNIT_ID) VALUES (1, 1, 1, 10)
    INTO TASK_GROUPINGS (TASK_ID, GROUP_ID, CATEGORY_ID, BUSINESS_UNIT_ID) VALUES (2, 1, 1, 10)
    INTO TASK_GROUPINGS (TASK_ID, GROUP_ID, CATEGORY_ID, BUSINESS_UNIT_ID) VALUES (3, 2, 2, 10)
    INTO TASK_GROUPINGS (TASK_ID, GROUP_ID, CATEGORY_ID, BUSINESS_UNIT_ID) VALUES (4, 2, 3, 10)
    INTO TASK_GROUPINGS (TASK_ID, GROUP_ID, CATEGORY_ID, BUSINESS_UNIT_ID) VALUES (5, 3, 3, 10)
    INTO TASK_GROUPINGS (TASK_ID, GROUP_ID, CATEGORY_ID, BUSINESS_UNIT_ID) VALUES (6, 3, 3, 10)
SELECT 1 FROM DUAL;
Results will look like this. The last column is what I want the extra column to look like:
Date_Adjusted  Task_Name   Units  Group_Name   Category_Name  Units %
15/03/2014     Task One    200    Group One    Category A     87
15/03/2014     Task Two    30     Group One    Category A     13
15/03/2014     Task Three  650    Group Two    Category A     100
15/03/2014     Task Five   15     Group Three  Category B     27
15/03/2014     Task Six    40     Group Three  Category B     73
15/03/2014     Task Four   400    Group Two    Category B     100

Hope all that makes sense... Anyone able to help me do this efficiently?
Hello
Use the analytic RATIO_TO_REPORT function to compute the units % column.
If you're serious about query performance, see this forum thread:
Re: 3. How to improve the performance of my query? / My query is slow.
Do you really need an outer join? Inner joins are faster. With the given sample data, they produce the same results.
COALESCE may be a little faster than the CASE.
Try this:
WITH got_units AS
(
    SELECT    a.date_adjusted,
              a.task_name,
              SUM (COALESCE (a.units_adjusted, a.units_original)) AS units,
              b.group_name,
              b.category_name
    FROM      actuals_intake a
    LEFT JOIN -- or just JOIN
              -- lookup for the group and category names from their IDs in the groupings table
              (
                  SELECT  c.task_id,
                          d.group_name,
                          e.category_name,
                          c.business_unit_id
                  FROM    task_groupings c,
                          task_groups d,
                          task_categories e
                  WHERE   d.id = c.group_id
                  AND     c.business_unit_id = d.business_unit_id
                  AND     c.category_id = e.id
                  AND     c.business_unit_id = e.business_unit_id
              ) b
    ON        a.task_id = b.task_id
    AND       a.business_unit_id = b.business_unit_id
--  WHERE     a.business_unit_id = :P10_SELECT_BUSINESS_UNIT   -- if needed
--  AND       a.date_adjusted BETWEEN to_date (:P10_DATE_START, 'dd-mon-yyyy')
--                            AND     to_date (:P10_DATE_END,   'dd-mon-yyyy')
    GROUP BY  a.date_adjusted, a.task_name, b.group_name, b.category_name
)
SELECT    u.*,
          ROUND (100 * RATIO_TO_REPORT (units) OVER (PARTITION BY group_name,
                                                                  category_name
                                                    )
                ) AS units_pct
FROM      got_units u
ORDER BY  date_adjusted, category_name, group_name
;
Thanks for posting the sample data; it is very helpful. Don't try to insert strings, e.g. '03/15/2014', into DATE columns. Use TO_DATE to convert strings to DATEs.
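For example, one row of the sample data above could be inserted unambiguously like this (a sketch; the 'mm/dd/yyyy' format mask is an assumption that must match the actual literal):

```sql
INSERT INTO actuals_intake (id, date_original, task_name, task_id,
                            units_original, business_unit_id,
                            date_adjusted)
VALUES (1,
        TO_DATE ('03/15/2014', 'mm/dd/yyyy'),  -- explicit mask: no NLS dependence
        'Task One', 1, 200, 10,
        TO_DATE ('03/15/2014', 'mm/dd/yyyy'));
```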
-
What is the best way to optimize a SQL query: calling a function or doing a join?
Hi, I want to know the best way to optimize a SQL query: calling a function inside the SELECT statement, or doing a simple join?
It depends on. Could be a. Could be the other. Could be no difference. You would need to compare with your tables in your environment with your settings.
If you put a gun to my head, I was given no other information and required that I answered the question, I would tend to wait that the join would be more effective. In general, if you can do something in pure SQL, it will be more effective than if you call PL/SQL.
Justin
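To illustrate the trade-off (the orders/customers tables and the get_customer_name function here are hypothetical):

```sql
-- 1) Scalar PL/SQL function call: runs once per row, with a
--    SQL-to-PL/SQL context switch on each call.
SELECT o.order_id,
       get_customer_name (o.customer_id) AS customer_name
FROM   orders o;

-- 2) Pure SQL join: the optimizer is free to pick a hash join
--    and avoid the per-row context switches entirely.
SELECT o.order_id,
       c.customer_name
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id;
```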
-
Query tuning: optimizer does not use the function-based index
Hello
I have a request written by a developer that I can't change.
Here is the WHERE condition:
WHERE ( UPPER (TRIM (CODFSC)) = UPPER (TRIM ('01923980500'))
     OR UPPER (TRIM (CODUIC)) = UPPER (TRIM ('01923980500')))
There is an index on CODFSC and one on CODUIC.
the plan is:
Plan
SELECT STATEMENT ALL_ROWS  Cost: 9,194  Bytes: 3,206,502  Cardinality: 15,054
  1 TABLE ACCESS FULL TABLE ANAGRAFICA  Cost: 9,194  Bytes: 3,206,502  Cardinality: 15,054
So I created two new indexes, on UPPER(TRIM(CODFSC)) and UPPER(TRIM(CODUIC)), but the plan
still shows a full scan.
Modifying the where condition to:
WHERE ( CODFSC = UPPER (TRIM ('01923980500'))
     OR CODUIC = UPPER (TRIM ('01923980500')))
the plan is:
SELECT STATEMENT ALL_ROWS  Cost: 157  Bytes: 426  Cardinality: 2
  5 CONCATENATION
    2 TABLE ACCESS BY INDEX ROWID TABLE ANAGRAFICA  Cost: 5  Bytes: 213  Cardinality: 1
      1 INDEX RANGE SCAN INDEX ANAGRAFICA_IDX01  Cost: 3  Cardinality: 1
    4 TABLE ACCESS BY INDEX ROWID TABLE ANAGRAFICA  Cost: 152  Bytes: 213  Cardinality: 1
      3 INDEX SKIP SCAN INDEX ANAGRAFICA_IDX02  Cost: 1  Cardinality: 151
Why doesn't the optimizer use my function-based index?
Thank you.
Franck,
I always forget that the default behaviour of OR expansion depends on there being an indexed access path for each branch.
The 2 in your use of or_predicates(2) depends on the position of the complex predicate that has to be expanded. If you change the order of the predicates so that 'state = 0' appears AFTER the complex predicate, you have to change the hint to or_predicates(1).
Apart from the hint's current undocumented state, this also introduces the disturbing thought that, for a more complex query, a change in the transformation may result in a different set of generated query blocks with a different ordering of the predicates. Yet another case for making sure that if you hint anything, you hint everything (or create an SQL plan baseline).
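The manual equivalent of that OR expansion can be written with UNION ALL (a sketch based on the poster's predicates; each branch can then use its own function-based index, and LNNVL keeps rows matching both branches from being returned twice, which is what the CONCATENATION plan does internally):

```sql
SELECT *
FROM   anagrafica
WHERE  UPPER (TRIM (codfsc)) = UPPER (TRIM ('01923980500'))
UNION ALL
SELECT *
FROM   anagrafica
WHERE  UPPER (TRIM (coduic)) = UPPER (TRIM ('01923980500'))
AND    LNNVL (UPPER (TRIM (codfsc)) = UPPER (TRIM ('01923980500')));
```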
Regards
Jonathan Lewis
-
Need help on query optimization
Hi experts,
I have the following query that takes more than 30 minutes to retrieve data. We are using Oracle 11g.
If I run the inline queries (subqueries) A and B separately, each returns in a few seconds, but when I join the two it sometimes runs close to an hour. In my specific case, query A returns 52 records and query B returns only 120 records.
SELECT B.serv_item_id,
       B.document_number,
       DECODE (B.activity_cd, 'I', 'C', B.activity_cd) activity_cd,
       DECODE (B.activity_cd, 'N', 'New',
                              'I', 'Change',
                              'C', 'Change',
                              'D', 'Disconnect',
                              B.activity_cd) order_activity,
       B.due_date,
       A.order_due_date,
       A.activity_cd order_activty_cd
FROM  (SELECT SRSI2.serv_item_id,
              NVL (to_date (TO_CHAR (asap.PKG_GMT.sf_gmt_as_local (TASK2.revised_completion_date), 'J'), 'J'),
                   SR2.desired_due_date) order_due_date,
              'D' activity_cd
       FROM   asap.serv_req_si SRSI2,
              asap.serv_req SR2,
              asap.task TASK2
       WHERE  SRSI2.document_number = 10685440
       AND    SRSI2.document_number = SR2.document_number
       AND    SRSI2.document_number = TASK2.document_number (+)
       AND    SRSI2.activity_cd = 'D'
       AND    TASK2.task_type (+) = 'DD'
      ) A,
      (SELECT SRSI1.serv_item_id,
              SR1.document_number,
              SRSI1.activity_cd,
              NVL (to_date (TO_CHAR (asap.PKG_GMT.sf_gmt_as_local (TASK1.revised_completion_date), 'J'), 'J'),
                   SR1.desired_due_date) due_date
       FROM   asap.serv_req_si SRSI1,
              asap.serv_req SR1,
              asap.task TASK1,
              asap.serv_req_si CURORD
       WHERE  CURORD.document_number = 10685440
       AND    SRSI1.document_number = SR1.document_number
       AND    SRSI1.document_number != CURORD.document_number
       AND    SRSI1.serv_item_id = CURORD.serv_item_id
       AND    SRSI1.document_number = TASK1.document_number (+)
       AND    TASK1.task_type (+) = 'DD'
       AND    SR1.type_of_sr = 'SO'
       AND    SR1.service_request_status < 801
       AND    SRSI1.activity_cd IN ('I', 'C', 'N')
      ) B
WHERE B.serv_item_id = A.serv_item_id;
To me, it looks like the optimizer fails to estimate how much data each subquery will return. I feel I need to nudge the optimizer with some workaround to get the result more quickly, but I am not able to find one. If any of you can shed some light on this, it would be really helpful.
Thank you very much
GAF
Edited by: user780504 on August 7, 2012 02:16
Edited by: BluShadow on August 7, 2012 10:17
addition of {noformat}{noformat} tags for readability and replace <> with != to circumvent forum issue. Please read {message:id=9360002}
Perhaps using the /*+ materialize */ hint? See above.
Regards
Etbin
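That suggestion could be sketched like this (the subquery bodies are the A and B inline views from the question, unchanged; /*+ materialize */ is undocumented, so treat it as something to test rather than rely on):

```sql
WITH a AS (SELECT /*+ materialize */ SRSI2.serv_item_id, ...  -- body of inline view A, unchanged
          ),
     b AS (SELECT /*+ materialize */ SRSI1.serv_item_id, ...  -- body of inline view B, unchanged
          )
SELECT b.serv_item_id,
       b.document_number,
       -- ... remaining select list from the original query ...
       a.order_due_date
FROM   b, a
WHERE  b.serv_item_id = a.serv_item_id;
```

Materializing each subquery first forces Oracle to evaluate the small row sources (52 and 120 rows here) before the join, instead of letting a bad cardinality estimate drive a slow join strategy.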