Join cardinality estimate
I am using Oracle version 11.2.0.4.0.
I have the following statistics for the two tables, with no histograms on any column:
Table T1 - NUM_ROWS = 8900759
------------------------------
column_name  num_nulls  num_distinct  density
C1           0          100800        9.92063492063492E-6
C2           0          7184          0.000139198218262806

Table T2 - NUM_ROWS = 28835
---------------------------------
column_name  num_nulls  num_distinct  density
C1           0          101           0.0099009900990099
C2           0          39            0.0256410256410256
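With no histogram in place, the density reported for each column is simply 1/num_distinct, which the figures above bear out. A quick sanity check (plain arithmetic, no Oracle required):

```python
# With no histogram, column density = 1 / num_distinct.
# Check the reported densities against the reported NDVs.
stats = {
    ("T1", "C1"): (100800, 9.92063492063492e-6),
    ("T1", "C2"): (7184, 0.000139198218262806),
    ("T2", "C1"): (101, 0.0099009900990099),
    ("T2", "C2"): (39, 0.0256410256410256),
}
for (table, col), (ndv, density) in stats.items():
    assert abs(1.0 / ndv - density) < 1e-12, (table, col)
    print(f"{table}.{col}: 1/{ndv} matches density {density:.6g}")
```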
Query:
------
Select * from T1, T2
WHERE t1.c1 = t2.c1;
Execution plan
----------------------------------------------------------
Plan hash value: 4149194932

--------------------------------------------------------------------------------------------------------------
| Id | Operation                    | Name | Rows  | Bytes | TempSpc | Cost (%CPU) | Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |      | 2546K | 675M  |         | 65316 (1)   | 00:13:04 |       |       |
|* 1 |  HASH JOIN                   |      | 2546K | 675M  | 5944K   | 65316 (1)   | 00:13:04 |       |       |
|  2 |   TABLE ACCESS STORAGE FULL  | T2   | 28835 | 5603K |         | 239 (1)     | 00:00:03 |       |       |
|  3 |   PARTITION RANGE ALL        |      | 8900K | 670M  |         | 26453 (1)   | 00:05:18 | 1     | 2     |
|  4 |    TABLE ACCESS STORAGE FULL | T1   | 8900K | 670M  |         | 26453 (1)   | 00:05:18 | 1     | 2     |
--------------------------------------------------------------------------------------------------------------
As the standard rule says:

Join selectivity =
((num_rows(t1) - num_nulls(t1.c1)) / num_rows(t1)) *
((num_rows(t2) - num_nulls(t2.c1)) / num_rows(t2)) /
greater(num_distinct(t1.c1), num_distinct(t2.c1))

Join selectivity = (((28835-0)/28835) * ((8900759-0)/8900759)) / 100800

Join cardinality = join selectivity * num_rows(t1) * num_rows(t2)
= ((((28835-0)/28835) * ((8900759-0)/8900759)) / 100800) * (8900759 * 28835)
= 2546164.54

which corresponds to the 2546K row estimate in the plan above.
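The arithmetic above can be checked with a short script, using only the statistics quoted earlier (num_nulls = 0 on both join columns, greater NDV = 100800):

```python
# Single-column join cardinality, standard CBO formula (no histograms):
#   sel = (1 - nulls1/rows1) * (1 - nulls2/rows2) / greater(ndv1, ndv2)
rows_t1, nulls_t1_c1, ndv_t1_c1 = 8900759, 0, 100800
rows_t2, nulls_t2_c1, ndv_t2_c1 = 28835, 0, 101

sel = ((rows_t1 - nulls_t1_c1) / rows_t1) \
    * ((rows_t2 - nulls_t2_c1) / rows_t2) \
    / max(ndv_t1_c1, ndv_t2_c1)
card = sel * rows_t1 * rows_t2
print(round(card, 2))  # 2546164.54 -> shown as 2546K in the plan
```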
But when I add a second join condition, as below, I cannot understand how the join cardinality becomes 28835. Also, how would the behaviour differ if histograms were present?
Select * from T1, T2
WHERE t1.c1 = t2.c1
and t1.c2 = t2.c2;
Execution plan
----------------------------------------------------------
Plan hash value: 1645075573

---------------------------------------------------------------------------------------------------------------------------
| Id | Operation                     | Name    | Rows  | Bytes | TempSpc | Cost (%CPU) | Time     | Pstart  | Pstop   |
---------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT              |         | 28835 | 7828K |         | 65316 (1)   | 00:13:04 |         |         |
|* 1 |  HASH JOIN                    |         | 28835 | 7828K | 5944K   | 65316 (1)   | 00:13:04 |         |         |
|  2 |   PART JOIN FILTER CREATE     | :BF0000 | 28835 | 5603K |         | 239 (1)     | 00:00:03 |         |         |
|  3 |    TABLE ACCESS STORAGE FULL  | T2      | 28835 | 5603K |         | 239 (1)     | 00:00:03 |         |         |
|  4 |   PARTITION RANGE JOIN-FILTER |         | 8900K | 670M  |         | 26453 (1)   | 00:05:18 | :BF0000 | :BF0000 |
|  5 |    TABLE ACCESS STORAGE FULL  | T1      | 8900K | 670M  |         | 26453 (1)   | 00:05:18 | :BF0000 | :BF0000 |
---------------------------------------------------------------------------------------------------------------------------
Total selectivity = selectivity(c1) * selectivity(c2)
= ((((28835-0)/28835) * ((8900759-0)/8900759)) / 100800) * ((((28835-101)/28835) * ((8900759-0)/8900759)) / 7184)

Total cardinality = total selectivity * num_rows(t1) * num_rows(t2)
= ((((28835-0)/28835) * ((8900759-0)/8900759)) / 100800) * ((((28835-101)/28835) * ((8900759-0)/8900759)) / 7184) * (8900759 * 28835)
= 353.18

but this does not match the 28835 in the plan above.
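One explanation consistent with these numbers is the optimizer's multi-column join "sanity check": when the product of the per-column join selectivities falls below 1/greater(num_rows(t1), num_rows(t2)), 11g floors the selectivity at that value, which makes the join cardinality equal to the smaller table's row count. That is my reading of the behaviour, not something the plan itself states. A sketch of both calculations:

```python
# Multi-column join estimate: product of per-column join selectivities,
# then (hypothesis) floored at 1/greater(num_rows) -- the "sanity check".
rows_t1, rows_t2 = 8900759, 28835

sel_c1 = (rows_t1 / rows_t1) * (rows_t2 / rows_t2) / 100800         # / max(100800, 101)
sel_c2 = (rows_t1 / rows_t1) * ((rows_t2 - 101) / rows_t2) / 7184   # / max(7184, 39)
naive = sel_c1 * sel_c2 * rows_t1 * rows_t2
print(round(naive, 2))  # 353.18 -- the hand-calculated figure

floor_sel = 1.0 / max(rows_t1, rows_t2)
card = max(sel_c1 * sel_c2, floor_sel) * rows_t1 * rows_t2
print(round(card))      # 28835 -- matches the plan
```

Because the floored selectivity is 1/num_rows(t1), the product collapses to num_rows(t2), i.e. the smaller table's cardinality, regardless of how selective the combined join columns really are.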
--> C2 is the partitioning column of table T2. T1 is not partitioned.
--> T2 has two range partitions. One of them is empty; all the data resides in a single partition.
--> Since one partition is empty, only one partition needs to be visited for the final result.
--> I use "set autotrace traceonly explain" to get the plan for the query.
--> Here are the max and min values of c1 and c2 for T2:
Min(C1)  Max(C1)  Max(C2)              Min(C2)
86       383759   28/02/2011 23:59:38  28/02/2011 12:00:02 AM
Here are the max and min values of c1 and c2 for T1:
Min(C1)  Max(C1)  Max(C2)              Min(C2)
4860     354087   28/02/2011 23:55:47  28/02/2011 12:07:49 AM
--> Given below is the plan with the predicate section:
Execution plan
----------------------------------------------------------
Plan hash value: 1645075573

---------------------------------------------------------------------------------------------------------------------------
| Id | Operation                     | Name    | Rows  | Bytes | TempSpc | Cost (%CPU) | Time     | Pstart  | Pstop   |
---------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT              |         | 28835 | 8166K |         | 70364 (1)   | 00:14:05 |         |         |
|* 1 |  HASH JOIN                    |         | 28835 | 8166K | 5944K   | 70364 (1)   | 00:14:05 |         |         |
|  2 |   PART JOIN FILTER CREATE     | :BF0000 | 28835 | 5603K |         | 239 (1)     | 00:00:03 |         |         |
|  3 |    TABLE ACCESS STORAGE FULL  | T1      | 28835 | 5603K |         | 239 (1)     | 00:00:03 |         |         |
|  4 |   PARTITION RANGE JOIN-FILTER |         | 8900K | 772M  |         | 26453 (1)   | 00:05:18 | :BF0000 | :BF0000 |
|  5 |    TABLE ACCESS STORAGE FULL  | T2      | 8900K | 772M  |         | 26453 (1)   | 00:05:18 | :BF0000 | :BF0000 |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."C2"="T1"."C2" AND "T2"."C1"="T1"."C1")
--> I see the below (C1, C2) values in the current data for T2; the top three have count(*) > 10,000:
C1      C2                   Count(*)
171966  28/02/2011 07:21:14  14990
41895   28/02/2011 08:41:36  12193
7408    28/02/2011 06:16:20  12158
53120   28/02/2011 06:16:13  7931
51724   28/02/2011 18:03:22  6783
51724   28/02/2011 18:02:58  6757
51724   28/02/2011 16:02:22  6451
51724   28/02/2011 16:02:01  6388
51724   28/02/2011 14:01:29  5979
234233  28/02/2011 07:21:14  5975
51724   28/02/2011 14:01:09  5917
7408    28/02/2011 06:16:13  5355
51724   28/02/2011 20:04:18  5074
51724   28/02/2011 20:03:54  5058
I see the below (C1, C2) values in the current data set for T1 having count(*) > 75:
C1     C2                   Count(*)
4860   28/02/2011 19:33:45
31217  28/02/2011 23:27:54
31217  28/02/2011 23:48:14
4860   28/02/2011 17:36:07
4860   28/02/2011 20:00:11
4860   28/02/2011 18:20:13
4860   28/02/2011 14:35:39
4860   28/02/2011 19:48:06
4860   28/02/2011 12:30:29
4860   28/02/2011 15:32:31
4860   28/02/2011 17:48:05
4860   28/02/2011 17:02:26
4860   28/02/2011 22:27:02
--> Yes, the join is targeted at the larger partition, because the other one is empty.
--> Here are the stats and the plan after gathering extended statistics on the column group (c1, c2) of T1 (after converting it to a physical table), with no histogram. It now gives a better estimate, much closer to the real cardinality. But the problem is that in reality T1 is a global temporary table, so I am not able to gather extended statistics on it. Is there any other workaround for this?
column_name                     num_distinct  density               histogram
SYS_STUMW3X8MDKZEJOG$AHPEND1W$  2699          0.000370507595405706  NONE
Execution plan
----------------------------------------------------------
Plan hash value: 1645075573

---------------------------------------------------------------------------------------------------------------------------
| Id | Operation                     | Name    | Rows  | Bytes | TempSpc | Cost (%CPU) | Time     | Pstart  | Pstop   |
---------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT              |         | 432K  | 124M  |         | 70380 (1)   | 00:14:05 |         |         |
|* 1 |  HASH JOIN                    |         | 432K  | 124M  | 6280K   | 70380 (1)   | 00:14:05 |         |         |
|  2 |   PART JOIN FILTER CREATE     | :BF0000 | 28835 | 5941K |         | 239 (1)     | 00:00:03 |         |         |
|  3 |    TABLE ACCESS STORAGE FULL  | T1      | 28835 | 5941K |         | 239 (1)     | 00:00:03 |         |         |
|  4 |   PARTITION RANGE JOIN-FILTER |         | 8900K | 772M  |         | 26453 (1)   | 00:05:18 | :BF0000 | :BF0000 |
|  5 |    TABLE ACCESS STORAGE FULL  | T2      | 8900K | 772M  |         | 26453 (1)   | 00:05:18 | :BF0000 | :BF0000 |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."C2"="T1"."C2" AND "T2"."C1"="T1"."C1")
Similar question: Subquery Factoring - cardinality estimate good but bad SQL response times
This is an Exadata 11.2.0.4.0 database, and all tables have up-to-date statistics. The cardinality estimates are good compared to the actual cardinalities. Is there a way to tune this SQL to reduce its response time?
Sorry for the long SQL and the execution plan.
WITH SUBWITH0 AS (SELECT D1.c1 AS c1 FROM ( (SELECT D1.c1 AS c1 FROM (SELECT DISTINCT T7171.CH_ID_SYM AS c1 FROM DW.TM_R_REP T7171 WHERE ( T7171.CHILD_REP_ID = 939 ) ) D1 UNION SELECT D1.c1 AS c1 FROM (SELECT DISTINCT T7167.MEMBER_KEY_SYM AS c1 FROM DW.PC_T_REP T7167 WHERE ( T7167.ANCESTOR_KEY = 939 ) ) D1 ) ) D1 ), SUBWITH1 AS (SELECT D1.c1 AS c1 FROM ( (SELECT D1.c1 AS c1 FROM (SELECT DISTINCT T7171.CH_ID_SYM AS c1 FROM DW.TM_R_REP T7171 WHERE ( T7171.CHILD_REP_ID = 939 ) ) D1 UNION SELECT D1.c1 AS c1 FROM (SELECT DISTINCT T7167.MEMBER_KEY_SYM AS c1 FROM DW.PC_T_REP T7167 WHERE ( T7167.ANCESTOR_KEY = 939 ) ) D1 ) ) D1 ), SUBWITH2 AS (SELECT DISTINCT T7171.CH_ID_SYM AS c1 FROM ( DW.PC_T_REP T7167 LEFT OUTER JOIN DW.TM_R_REP T7171 ON T7167.ANCESTOR_KEY = T7171.CHILD_REP_ID AND T7167.SALESYEARMONTH = T7171.SALES_YEAR_MONTH) LEFT OUTER JOIN DW.TM_REP T6715 ON T7171.CHILD_REP_ID_N = T6715.REP_ID WHERE ( CASE WHEN T7171.CHILD_REP_ID_N LIKE '9999%' THEN concat(concat('UNASSIGNED', lpad(' ', 2)), CAST(T7167.TERRITORY_ID AS VARCHAR ( 20 ) )) ELSE concat(concat(concat(concat(T6715.FIRST_NAME, lpad(' ', 2)), T6715.MIDDLE_NAME), lpad(' ', 2)), T6715.LAST_NAME) END = 'JOES CRAMER' AND T7171.SALES_YEAR_MONTH BETWEEN '201505' AND '201505' AND T7171.CH_ID_SYM IN (SELECT DISTINCT D1.c1 AS c1 FROM SUBWITH0 D1 ) ) ), SUBWITH3 AS (SELECT MEMBER_KEY_SYM AS c1 FROM DW.PC_T_REP T7167 WHERE ( IS_LEAF = 1 ) ), SAWITH0 AS (SELECT DISTINCT CASE WHEN T7171.CHILD_REP_ID_N LIKE '9999%' THEN concat(concat('UNASSIGNED', lpad(' ', 2)), CAST(T7167.TERRITORY_ID AS VARCHAR ( 20 ) )) ELSE concat(concat(concat(concat(T6715.FIRST_NAME, lpad(' ', 2)), T6715.MIDDLE_NAME), lpad(' ', 2)), T6715.LAST_NAME) END AS c1, T6715.REP_NUM AS c2, T7171.SALES_YEAR_MONTH AS c3, T7315.MONTH_NUMERIC AS c4, CASE WHEN T7171.CH_ID_SYM IN (SELECT D1.c1 AS c1 FROM SUBWITH3 D1 ) THEN 1 ELSE 0 END AS c5, CAST(T7171.PARENT_REP_ID AS CHARACTER ( 30 ) ) AS c6, T7171.CH_ID_SYM AS c7, T7171.PARENT_REP_ID_SYM AS c8 FROM DW.TIM_MON 
T7315 , ( ( DW.PC_T_REP T7167 LEFT OUTER JOIN ( (SELECT TO_NUMBER(TO_CHAR(L_OPP.CloseDate,'YYYYMM')) AS Sales_Year_Month, Tm_Rep.Rep_Id AS Rep_Id, L_OPP.Account_Name__C AS Account_Name__C, L_OPP.Closedate AS Closedate, L_OPP.Forecastcategory AS Forecastcategory, L_OPP.Forecastcategoryname AS Forecastcategoryname, L_User.NAME AS Opp_Owner_S_Sales_Org__C, L_OPP.Opportunity_Id__C AS Opportunity_Id__C, L_OPP.Renewal_Date__C AS Renewal_Date__C, L_OPP.Total_Incremental__C AS Total_Incremental__C, L_OPP.Offer_Code__C AS Offer_Code__C, L_OPP.ID AS Opportunity_ID, L_OPP.TERRITORYID AS TERRITORYID, L_OPP.ACCOUNTID AS ACCOUNTID, L_OPP.OWNERID AS OWNERID, L_OPP.TOTAL_RENEWAL__C AS TOTAL_RENEWAL__C, L_OPP.NAME AS NAME, L_OPP.STAGENAME AS STAGE_NAME, L_OPP.STAGE_DESCRIPTION__C AS STAGE_DESCRIPTION, NVL( CASE WHEN L_OPP.Forecastcategory = 'Closed' AND( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN L_OPP.Total_Incremental__C END , 0) AS Closed_Oppurtunity, CASE WHEN L_OPP.Forecastcategory = 'Closed' AND ( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN 'Closed_Oppurtunity_Drill' END AS Closed_Oppurtunity_Drill, NVL( CASE WHEN L_OPP.Forecastcategoryname IN ('Pipeline', 'Potential', 'Commit') AND ( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN L_OPP.Total_Incremental__C END , 0) AS OPEN_Oppurtunity, CASE WHEN L_OPP.Forecastcategoryname IN ('Pipeline', 'Potential', 'Commit') AND ( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN 'OPEN_Oppurtunity_Drill' END AS OPEN_Oppurtunity_Drill, NVL( CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year1_Closed_Opp, CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN 'Renewal_Year1_Closed_Opp_Drill' END AS Renewal_Year1_Closed_Opp_Drill, NVL( CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year1_OPEN_Opp, 
CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN 'Renewal_Year1_OPEN_Opp_Drill' END AS Renewal_Year1_OPEN_Opp_Drill, NVL( CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year2_Closed_Opp, CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN 'Renewal_Year2_Closed_Opp_Drill' END AS Renewal_Year2_Closed_Opp_Drill, NVL( CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year2_OPEN_Opp, CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN 'Renewal_Year2_OPEN_Opp_Drill' END AS Renewal_Year2_OPEN_Opp_Drill FROM DW.OPP_C_DIM RIGHT OUTER JOIN RT.L_OPP ON (TO_CHAR(OPP_C_DIM.OFFER_CODE) =TO_CHAR(L_OPP.Offer_Code__C) AND (TO_CHAR(L_OPP.CloseDate,'YYYYMM')) = TO_CHAR(OPP_C_DIM.PERIOD)) LEFT OUTER JOIN RT.L_User ON (L_OPP.Ownerid=L_User.Id) LEFT OUTER JOIN DW.Tm_Rep ON (Tm_Rep.Rep_Num='0' ||L_User.Rep_Employee_Number__C) )) T774110 ON T7167.MEMBER_KEY = T774110.Rep_Id AND T7167.SALESYEARMONTH = T774110.Sales_Year_Month) LEFT OUTER JOIN DW.TM_R_REP T7171 ON T7167.ANCESTOR_KEY = T7171.CHILD_REP_ID AND T7167.SALESYEARMONTH = T7171.SALES_YEAR_MONTH) LEFT OUTER JOIN DW.TM_REP T6715 ON T7171.CHILD_REP_ID_N = T6715.REP_ID WHERE ( T774110.Sales_Year_Month = T7315.YEAR_MONTH AND T7171.CH_ID_SYM IN (SELECT DISTINCT D1.c1 AS c1 FROM SUBWITH2 D1 ) AND T7171.SALES_YEAR_MONTH BETWEEN '201505' AND '201505' AND T7171.CH_ID_SYM IN (SELECT DISTINCT D1.c1 AS c1 FROM SUBWITH1 D1 ) ) ), SAWITH1 AS (SELECT SUM(T774110.Renewal_Year2_OPEN_Opp) AS c9, SUM(T774110.Renewal_Year2_Closed_Opp) AS c10, SUM(T774110.Renewal_Year1_OPEN_Opp) AS c11, SUM(T774110.Renewal_Year1_Closed_Opp) AS c12, SUM(T774110.OPEN_Oppurtunity) AS 
c13, SUM(T774110.Closed_Oppurtunity) AS c14, T7315.MONTH_NUMERIC AS c15, T7171.CH_ID_SYM AS c16 FROM DW.TIM_MON T7315 , ( RT.L_ACCOUNT T765190 LEFT OUTER JOIN ( DW.PC_T_REP T7167 LEFT OUTER JOIN ( (SELECT TO_NUMBER(TO_CHAR(L_OPP.CloseDate,'YYYYMM')) AS Sales_Year_Month, Tm_Rep.Rep_Id AS Rep_Id, L_OPP.Account_Name__C AS Account_Name__C, L_OPP.Closedate AS Closedate, L_OPP.Forecastcategory AS Forecastcategory, L_OPP.Forecastcategoryname AS Forecastcategoryname, L_User.NAME AS Opp_Owner_S_Sales_Org__C, L_OPP.Opportunity_Id__C AS Opportunity_Id__C, L_OPP.Renewal_Date__C AS Renewal_Date__C, L_OPP.Total_Incremental__C AS Total_Incremental__C, L_OPP.Offer_Code__C AS Offer_Code__C, L_OPP.ID AS Opportunity_ID, L_OPP.TERRITORYID AS TERRITORYID, L_OPP.ACCOUNTID AS ACCOUNTID, L_OPP.OWNERID AS OWNERID, L_OPP.TOTAL_RENEWAL__C AS TOTAL_RENEWAL__C, L_OPP.NAME AS NAME, L_OPP.STAGENAME AS STAGE_NAME, L_OPP.STAGE_DESCRIPTION__C AS STAGE_DESCRIPTION, NVL( CASE WHEN L_OPP.Forecastcategory = 'Closed' AND( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN L_OPP.Total_Incremental__C END , 0) AS Closed_Oppurtunity, CASE WHEN L_OPP.Forecastcategory = 'Closed' AND ( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN 'Closed_Oppurtunity_Drill' END AS Closed_Oppurtunity_Drill, NVL( CASE WHEN L_OPP.Forecastcategoryname IN ('Pipeline', 'Potential', 'Commit') AND ( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN L_OPP.Total_Incremental__C END , 0) AS OPEN_Oppurtunity, CASE WHEN L_OPP.Forecastcategoryname IN ('Pipeline', 'Potential', 'Commit') AND ( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN 'OPEN_Oppurtunity_Drill' END AS OPEN_Oppurtunity_Drill, NVL( CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year1_Closed_Opp, CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN 'Renewal_Year1_Closed_Opp_Drill' END AS Renewal_Year1_Closed_Opp_Drill, NVL( CASE WHEN L_OPP.Forecastcategory IN 
('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year1_OPEN_Opp, CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN 'Renewal_Year1_OPEN_Opp_Drill' END AS Renewal_Year1_OPEN_Opp_Drill, NVL( CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year2_Closed_Opp, CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN 'Renewal_Year2_Closed_Opp_Drill' END AS Renewal_Year2_Closed_Opp_Drill, NVL( CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year2_OPEN_Opp, CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN 'Renewal_Year2_OPEN_Opp_Drill' END AS Renewal_Year2_OPEN_Opp_Drill FROM DW.OPP_C_DIM RIGHT OUTER JOIN RT.L_OPP ON (TO_CHAR(OPP_C_DIM.OFFER_CODE) =TO_CHAR(L_OPP.Offer_Code__C) AND (TO_CHAR(L_OPP.CloseDate,'YYYYMM')) = TO_CHAR(OPP_C_DIM.PERIOD)) LEFT OUTER JOIN RT.L_User ON (L_OPP.Ownerid=L_User.Id) LEFT OUTER JOIN DW.Tm_Rep ON (Tm_Rep.Rep_Num='0' ||L_User.Rep_Employee_Number__C) )) T774110 ON T7167.MEMBER_KEY = T774110.Rep_Id AND T7167.SALESYEARMONTH = T774110.Sales_Year_Month) ON T765190.ID = T774110.ACCOUNTID) LEFT OUTER JOIN DW.TM_R_REP T7171 ON T7167.ANCESTOR_KEY = T7171.CHILD_REP_ID AND T7167.SALESYEARMONTH = T7171.SALES_YEAR_MONTH WHERE ( T774110.Sales_Year_Month = T7315.YEAR_MONTH AND T7171.CH_ID_SYM IN (SELECT DISTINCT D1.c1 AS c1 FROM SUBWITH2 D1 ) AND T7171.SALES_YEAR_MONTH BETWEEN '201505' AND '201505' AND T7171.CH_ID_SYM IN (SELECT DISTINCT D1.c1 AS c1 FROM SUBWITH1 D1 ) ) GROUP BY T7171.CH_ID_SYM, T7315.MONTH_NUMERIC ) SELECT DISTINCT D2.c9 AS c1, D2.c10 AS c2, D2.c11 AS c3, D2.c12 AS c4, 
D2.c13 AS c5, D2.c14 AS c6, D1.c1 AS c7, D1.c2 AS c8, D1.c3 AS c9, D1.c4 AS c10, D1.c5 AS c11, D1.c6 AS c12, D1.c7 AS c13, D1.c8 AS c14 FROM SAWITH0 D1 INNER JOIN SAWITH1 D2 ON SYS_OP_MAP_NONNULL(D2.c15) = SYS_OP_MAP_NONNULL(D1.c4) AND SYS_OP_MAP_NONNULL(D2.c16) = SYS_OP_MAP_NONNULL(D1.c7) ORDER BY c10, c13
Real-Time SQL Monitoring report, followed by the Predicate Section from dbms_xplan.display_cursor:

Global Stats
==========================================================================================================================
| Elapsed |   Cpu   |    IO    | Application | Cluster |  Other   | Fetch | Buffer | Read | Read  | Write | Write |  Cell   |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Waits(s)| Waits(s) | Calls |  Gets  | Reqs | Bytes | Reqs  | Bytes | Offload |
==========================================================================================================================
|     152 |     146 |     3.73 |        0.08 |    0.04 |     2.04 |     2 |    16M | 5223 |   1GB |     1 | 200KB |  95.11% |
==========================================================================================================================
SQL Plan Monitoring Details (Plan Hash Value=442312180)
===============================================================================================================================================================================================================================================
| Id | Operation | Name | Rows (Estim) | Cost | Time Active(s) | Start Active | Execs | Rows (Actual) | Read Reqs | Read Bytes | Cell Offload | Mem (Max) | Activity (%) | Activity Detail (# samples) |
===============================================================================================================================================================================================================================================
| 0 | SELECT STATEMENT | | | | 1 | +152 | 1 | 0 | | | | | 0.65 | Cpu (1) |
| 1 | PARTITION RANGE ALL | | 1 | 3892 | | | 1 | | | | | | | |
| 2 | TABLE ACCESS STORAGE FULL FIRST ROWS | PC_T_REP | 1 | 3892 | | | 37 | | 74 | 19MB | 78.45% | 17M | | |
| 3 | TEMP TABLE TRANSFORMATION | | | | 1 | +152 | 1 | 1 | | | | | | |
| 4 | LOAD AS SELECT | | | | 1 | +5 | 1 | 1 | | | | 278K | | |
| 5 | VIEW | | 105 | 3980 | 1 | +5 | 1 | 13637 | | | | | | |
| 6 | SORT UNIQUE | | 105 | 3980 | 1 | +5 | 1 | 13637 | | | | 757K | | |
| 7 | UNION-ALL | | | | 1 | +5 | 1 | 14033 | | | | | | |
| 8 | TABLE ACCESS STORAGE FULL | TM_R_REP | 22 | 88 | 1 | +5 | 1 | 36 | | | | | | |
| 9 | PARTITION RANGE ALL | | 83 | 3890 | 1 | +5 | 1 | 13997 | | | | | | |
| 10 | TABLE ACCESS STORAGE FULL | PC_T_REP | 83 | 3890 | 6 | +0 | 37 | 13997 | | | | 2M | 0.65 | cell smart table scan (1) |
| 11 | LOAD AS SELECT | | | | 1 | +5 | 1 | 1 | | | | 278K | | |
| 12 | HASH UNIQUE | | 1 | 4166 | 1 | +5 | 1 | 1 | | | | 479K | | |
| 13 | HASH JOIN | | 1 | 4165 | 1 | +5 | 1 | 444 | | | | 1M | | |
| 14 | PART JOIN FILTER CREATE | :BF0000 | 3 | 4075 | 1 | +5 | 1 | 549 | | | | | | |
| 15 | HASH JOIN OUTER | | 3 | 4075 | 1 | +5 | 1 | 549 | | | | 1M | | |
| 16 | HASH JOIN | | 3 | 4068 | 1 | +5 | 1 | 549 | | | | 2M | | |
| 17 | VIEW | | 105 | 3980 | 1 | +5 | 1 | 13637 | | | | | | |
| 18 | SORT UNIQUE | | 105 | 3980 | 1 | +5 | 1 | 13637 | | | | 757K | | |
| 19 | UNION-ALL | | | | 1 | +5 | 1 | 14033 | | | | | | |
| 20 | TABLE ACCESS STORAGE FULL | TM_R_REP | 22 | 88 | 1 | +5 | 1 | 36 | | | | | | |
| 21 | PARTITION RANGE ALL | | 83 | 3890 | 1 | +5 | 1 | 13997 | | | | | | |
| 22 | TABLE ACCESS STORAGE FULL | PC_T_REP | 83 | 3890 | 1 | +5 | 37 | 13997 | | | | 2M | | |
| 23 | TABLE ACCESS STORAGE FULL | TM_R_REP | 1884 | 88 | 1 | +5 | 1 | 1929 | | | | | | |
| 24 | TABLE ACCESS STORAGE FULL | TM_REP | 7136 | 7 | 1 | +5 | 1 | 7137 | | | | | | |
| 25 | PARTITION RANGE SINGLE | | 7449 | 90 | 1 | +5 | 1 | 7449 | | | | | | |
| 26 | TABLE ACCESS STORAGE FULL | PC_T_REP | 7449 | 90 | 1 | +5 | 1 | 7449 | | | | | | |
| 27 | SORT UNIQUE | | 1 | 26032 | 1 | +152 | 1 | 1 | | | | 2048 | | |
| 28 | HASH JOIN OUTER | | 1 | 26031 | 72 | +81 | 1 | 8238 | | | | 4M | | |
| 29 | FILTER | | | | 74 | +79 | 1 | 8238 | | | | | 1.96 | Cpu (3) |
| 30 | NESTED LOOPS OUTER | | 1 | 26027 | 72 | +81 | 1 | 15M | | | | | 3.27 | Cpu (5) |
| 31 | HASH JOIN | | 1 | 26026 | 72 | +81 | 1 | 15M | | | | 447K | 18.95 | Cpu (29) |
| 32 | HASH JOIN OUTER | | 1 | 13213 | 1 | +81 | 1 | 332 | | | | 452K | | |
| 33 | HASH JOIN | | 1 | 13206 | 1 | +81 | 1 | 332 | | | | 1M | | |
| 34 | HASH JOIN | | 1 | 13199 | 1 | +81 | 1 | 444 | | | | 434K | | |
| 35 | HASH JOIN | | 1 | 13197 | 1 | +81 | 1 | 444 | | | | 290K | | |
| 36 | JOIN FILTER CREATE | :BF0000 | 1 | 13195 | 1 | +81 | 1 | 444 | | | | | | |
| 37 | HASH JOIN | | 1 | 13195 | 1 | +81 | 1 | 444 | | | | 2M | | |
| 38 | MERGE JOIN CARTESIAN | | 27 | 13107 | 1 | +81 | 1 | 7449 | | | | | | |
| 39 | HASH JOIN | | 1 | 13017 | 77 | +5 | 1 | 1 | | | | 750K | | |
| 40 | TABLE ACCESS STORAGE FULL | TIM_MON | 1 | 4 | 1 | +5 | 1 | 1 | | | | | | |
| 41 | VIEW | | 1 | 13013 | 1 | +81 | 1 | 1 | | | | | | |
| 42 | HASH GROUP BY | | 1 | 13013 | 1 | +81 | 1 | 1 | | | | 482K | | |
| 43 | HASH JOIN OUTER | | 1 | 13012 | 77 | +5 | 1 | 8238 | | | | 4M | | |
| 44 | NESTED LOOPS | | 1 | 13008 | 77 | +5 | 1 | 8238 | | | | | | |
| 45 | FILTER | | | | 77 | +5 | 1 | 8238 | | | | | 2.61 | Cpu (4) |
| 46 | NESTED LOOPS OUTER | | 1 | 13007 | 77 | +5 | 1 | 15M | | | | | 4.58 | Cpu (7) |
| 47 | HASH JOIN | | 1 | 13006 | 77 | +5 | 1 | 15M | | | | 424K | 11.76 | Cpu (18) |
| 48 | HASH JOIN | | 1 | 193 | 1 | +5 | 1 | 332 | | | | 1M | | |
| 49 | HASH JOIN | | 1 | 186 | 1 | +5 | 1 | 444 | | | | 420K | | |
| 50 | HASH JOIN | | 4 | 184 | 1 | +5 | 1 | 444 | | | | 290K | | |
| 51 | JOIN FILTER CREATE | :BF0002 | 1 | 94 | 1 | +5 | 1 | 1 | | | | | | |
| 52 | PART JOIN FILTER CREATE | :BF0001 | 1 | 94 | 1 | +5 | 1 | 1 | | | | | | |
| 53 | HASH JOIN | | 1 | 94 | 1 | +5 | 1 | 1 | | | | 290K | | |
| 54 | JOIN FILTER CREATE | :BF0003 | 1 | 6 | 1 | +5 | 1 | 1 | | | | | | |
| 55 | MERGE JOIN CARTESIAN | | 1 | 6 | 1 | +5 | 1 | 1 | | | | | | |
| 56 | TABLE ACCESS STORAGE FULL | TIM_MON | 1 | 4 | 1 | +5 | 1 | 1 | | | | | | |
| 57 | BUFFER SORT | | 1 | 2 | 1 | +5 | 1 | 1 | | | | 2048 | | |
| 58 | VIEW | VW_NSO_1 | 1 | 2 | 1 | +5 | 1 | 1 | | | | | | |
| 59 | HASH UNIQUE | | 1 | | 1 | +5 | 1 | 1 | | | | 485K | | |
| 60 | VIEW | | 1 | 2 | 1 | +5 | 1 | 1 | | | | | | |
| 61 | TABLE ACCESS STORAGE FULL | SYS_TEMP_0FD9D71E1_B445AE36 | 1 | 2 | 1 | +5 | 1 | 1 | | | | | | |
| 62 | JOIN FILTER USE | :BF0003 | 1884 | 88 | 1 | +5 | 1 | 1 | | | | | | |
| 63 | TABLE ACCESS STORAGE FULL | TM_R_REP | 1884 | 88 | 1 | +5 | 1 | 1 | | | | | | |
| 64 | JOIN FILTER USE | :BF0002 | 7449 | 90 | 1 | +5 | 1 | 444 | | | | | | |
| 65 | PARTITION RANGE SINGLE | | 7449 | 90 | 5 | +1 | 1 | 444 | | | | | 0.65 | Cpu (1) |
| 66 | TABLE ACCESS STORAGE FULL | PC_T_REP | 7449 | 90 | 1 | +5 | 1 | 444 | | | | | | |
| 67 | VIEW | | 105 | 2 | 1 | +5 | 1 | 13637 | | | | | | |
| 68 | TABLE ACCESS STORAGE FULL | SYS_TEMP_0FD9D71E0_B445AE36 | 105 | 2 | 1 | +5 | 1 | 13637 | | | | | | |
| 69 | TABLE ACCESS STORAGE FULL | TM_REP | 7136 | 7 | 1 | +5 | 1 | 7137 | | | | | | |
| 70 | TABLE ACCESS STORAGE FULL | L_OP | 19382 | 12813 | 77 | +5 | 1 | 43879 | 565 | 551MB | 98.18% | 15M | | |
| 71 | TABLE ACCESS BY INDEX ROWID | L_US | 1 | 1 | 79 | +3 | 15M | 15M | 26 | 208KB | | | 19.61 | Cpu (30) |
| 72 | INDEX UNIQUE SCAN | L_US_PK | 1 | | 77 | +5 | 15M | 15M | 2 | 16384 | | | 9.15 | Cpu (14) |
| 73 | INDEX UNIQUE SCAN | L_A_PK | 1 | 1 | 151 | +2 | 8238 | 8238 | 3269 | 26MB | | | 2.61 | Cpu (1) |
| | | | | | | | | | | | | | | cell single block physical read (3) |
| 74 | TABLE ACCESS STORAGE FULL | OPP_C_DIM | 2304 | 4 | 1 | +81 | 1 | 2304 | 3 | 112KB | | | | |
| 75 | BUFFER SORT | | 7449 | 13107 | 1 | +81 | 1 | 7449 | | | | 370K | | |
| 76 | PARTITION RANGE SINGLE | | 7449 | 90 | 1 | +81 | 1 | 7449 | | | | | | |
| 77 | TABLE ACCESS STORAGE FULL | PC_T_REP | 7449 | 90 | 1 | +81 | 1 | 7449 | | | | | | |
| 78 | TABLE ACCESS STORAGE FULL | TM_R_REP | 1884 | 88 | 1 | +81 | 1 | 1929 | | | | | | |
| 79 | VIEW | | 1 | 2 | 1 | +81 | 1 | 1 | | | | | | |
| 80 | JOIN FILTER USE | :BF0000 | 1 | 2 | 1 | +81 | 1 | 1 | | | | | | |
| 81 | TABLE ACCESS STORAGE FULL | SYS_TEMP_0FD9D71E1_B445AE36 | 1 | 2 | 1 | +81 | 1 | 1 | | | | | | |
| 82 | VIEW | | 105 | 2 | 1 | +81 | 1 | 13637 | | | | | | |
| 83 | TABLE ACCESS STORAGE FULL | SYS_TEMP_0FD9D71E0_B445AE36 | 105 | 2 | 1 | +81 | 1 | 13637 | | | | | | |
| 84 | TABLE ACCESS STORAGE FULL | TM_REP | 7136 | 7 | 1 | +81 | 1 | 7137 | | | | | | |
| 85 | TABLE ACCESS STORAGE FULL | TM_REP | 7136 | 7 | 1 | +81 | 1 | 7137 | | | | | | |
| 86 | TABLE ACCESS STORAGE FULL | L_OP | 19382 | 12813 | 72 | +81 | 1 | 43879 | 593 | 577MB | 98.44% | 15M | | |
| 87 | TABLE ACCESS BY INDEX ROWID | L_US | 1 | 1 | 72 | +81 | 15M | 15M | | | | | 13.73 | Cpu (21) |
| 88 | INDEX UNIQUE SCAN | L_US_PK | 1 | | 73 | +80 | 15M | 15M | | | | | 9.80 | Cpu (15) |
| 89 | TABLE ACCESS STORAGE FULL | OPP_C_DIM | 2304 | 4 | 1 | +152 | 1 | 2304 | | | | | | |
===============================================================================================================================================================================================================================================
Predicate Information (identified by operation id):
---------------------------------------------------
 2 - filter(("MEMBER_KEY_SYM"=:B1 AND "IS_LEAF"=1))
 8 - storage("T7171"."CHILD_REP_ID"=939)
     filter("T7171"."CHILD_REP_ID"=939)
10 - storage("T7167"."ANCESTOR_KEY"=939)
     filter("T7167"."ANCESTOR_KEY"=939)
13 - access("T7167"."SALESYEARMONTH"="T7171"."SALES_YEAR_MONTH" AND "T7167"."ANCESTOR_KEY"="T7171"."CHILD_REP_ID")
     filter(CASE WHEN TO_CHAR("T7171"."CHILD_REP_ID_N") LIKE '9999%' THEN 'UNASSIGNED  '||CAST("T7167"."TERRITORY_ID" AS VARCHAR(20)) ELSE "T6715"."FIRST_NAME"||'  '||"T6715"."MIDDLE_NAME"||'  '||"T6715"."LAST_NAME" END='JOES CRAMER')
15 - access("T7171"."CHILD_REP_ID_N"="T6715"."REP_ID")
16 - access("T7171"."CH_ID_SYM"="D1"."C1")
20 - storage("T7171"."CHILD_REP_ID"=939)
     filter("T7171"."CHILD_REP_ID"=939)
22 - storage("T7167"."ANCESTOR_KEY"=939)
     filter("T7167"."ANCESTOR_KEY"=939)
23 - storage("T7171"."SALES_YEAR_MONTH"=201505)
     filter("T7171"."SALES_YEAR_MONTH"=201505)
26 - storage("T7167"."SALESYEARMONTH"=201505)
     filter("T7167"."SALESYEARMONTH"=201505)
28 - access(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM')=TO_CHAR("OPP_C_DIM"."PERIOD") AND
     TO_CHAR("OPP_C_DIM"."OFFER_CODE")="L_OP"."OFFER_CODE__C")
29 - filter("TM_REP"."REP_NUM"='0'||"L_US"."REP_EMPLOYEE_NUMBER__C")
31 - access("T7315"."YEAR_MONTH"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM')) AND
     "T7167"."SALESYEARMONTH"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM')))
32 - access("T7171"."CHILD_REP_ID_N"="T6715"."REP_ID")
33 - access("T7167"."MEMBER_KEY"="TM_REP"."REP_ID")
34 - access("T7171"."CH_ID_SYM"="D1"."C1")
35 - access("T7171"."CH_ID_SYM"="D1"."C1")
37 - access(SYS_OP_MAP_NONNULL("D2"."C16")=SYS_OP_MAP_NONNULL("T7171"."CH_ID_SYM") AND "T7167"."SALESYEARMONTH"="T7171"."SALES_YEAR_MONTH" AND
     "T7167"."ANCESTOR_KEY"="T7171"."CHILD_REP_ID")
39 - access(SYS_OP_MAP_NONNULL("D2"."C15")=SYS_OP_MAP_NONNULL("T7315"."MONTH_NUMERIC"))
40 - storage("T7315"."YEAR_MONTH"=201505)
     filter("T7315"."YEAR_MONTH"=201505)
43 - access(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM')=TO_CHAR("OPP_C_DIM"."PERIOD") AND
     TO_CHAR("OPP_C_DIM"."OFFER_CODE")="L_OP"."OFFER_CODE__C")
45 - filter("TM_REP"."REP_NUM"='0'||"L_US"."REP_EMPLOYEE_NUMBER__C")
47 - access("T7315"."YEAR_MONTH"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM')) AND
     "T7167"."SALESYEARMONTH"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM')))
48 - access("T7167"."MEMBER_KEY"="TM_REP"."REP_ID")
49 - access("T7171"."CH_ID_SYM"="D1"."C1")
50 - access("T7167"."SALESYEARMONTH"="T7171"."SALES_YEAR_MONTH" AND "T7167"."ANCESTOR_KEY"="T7171"."CHILD_REP_ID")
53 - access("T7171"."CH_ID_SYM"="C1")
56 - storage("T7315"."YEAR_MONTH"=201505)
     filter("T7315"."YEAR_MONTH"=201505)
63 - storage(("T7171"."SALES_YEAR_MONTH"=201505 AND SYS_OP_BLOOM_FILTER(:BF0000,"T7171"."CH_ID_SYM")))
     filter(("T7171"."SALES_YEAR_MONTH"=201505 AND SYS_OP_BLOOM_FILTER(:BF0000,"T7171"."CH_ID_SYM")))
66 - storage(("T7167"."SALESYEARMONTH"=201505 AND SYS_OP_BLOOM_FILTER(:BF0000,"T7167"."SALESYEARMONTH","T7167"."ANCESTOR_KEY")))
     filter(("T7167"."SALESYEARMONTH"=201505 AND SYS_OP_BLOOM_FILTER(:BF0000,"T7167"."SALESYEARMONTH","T7167"."ANCESTOR_KEY")))
70 - storage((TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))>=201505 AND
     TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))<=201505))
     filter((TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))>=201505 AND
     TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))<=201505))
72 - access("L_OP"."OWNERID"="L_US"."ID")
73 - access("T765190"."ID"="L_OP"."ACCOUNTID")
77 - storage("T7167"."SALESYEARMONTH"=201505)
     filter("T7167"."SALESYEARMONTH"=201505)
78 - storage("T7171"."SALES_YEAR_MONTH"=201505)
     filter("T7171"."SALES_YEAR_MONTH"=201505)
81 - storage(SYS_OP_BLOOM_FILTER(:BF0000,"C0"))
     filter(SYS_OP_BLOOM_FILTER(:BF0000,"C0"))
86 - storage((TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))>=201505 AND
     TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))<=201505))
     filter((TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))>=201505 AND
     TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))<=201505))
88 - access("L_OP"."OWNERID"="L_US"."ID")
Note
-----
- dynamic sampling used for this statement (level=4)
- automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
The table has statistics, so why is dynamic sampling being used? And why are the plan operations with Id 71, 72, 87 and 88 executed 15 million times? I'm curious because that is where most of the time is spent. How can we reduce these 15 million probes?
Any suggestions to reduce the response time of this SQL would be useful.
Post edited by: Yasu - masked sensitive information (literal values)
YASU wrote:
For educational purposes, could you please clarify why the optimizer evaluated the join condition in the FILTER operation against 15 million rows?
This is unusual, but TM_REP is joined to another table, PC_T_REP, using an outer join, so maybe that explains why it was moved to a deferred FILTER operation - it could also be a side effect of the combination of ANSI join processing (Oracle internally transforms ANSI joins into Oracle join syntax) and the internal transformation of the outer join.
Very curious to know how this is possible - could you please give us a hint/pointer that can be used to push the join down the execution plan so that it is evaluated as early as possible, to reduce the data to be processed? I have another SQL where the situation is almost similar - a HASH JOIN operation filters 2 million rows and returns 0 rows. I can post the details of that SQL here, but I don't want to mix different SQL questions in the same post. Please let me know whether you would like the details of that SQL in this same thread or in a different one. I searched for this type of information but to no avail, so could you please suggest how this is possible? If not for this long SQL, then at least a few examples/suggestions?
Normally you can influence this through an appropriate join order, for example with the help of the LEADING hint; for filter subqueries the PUSH_SUBQ hint can also be useful to filter early. But here Franck's comment is particularly important - by addressing the Cartesian join, this problem should become less relevant.
As I already said, I would recommend starting from scratch here and thinking about what this query is actually supposed to mean, and about the question why most of the outer joins are actually converted into inner joins - does the current query return the correct result?
Randolf
-
Difference in cardinality estimates between explain plan and execution plan
I think some of you know the 5% rule, which is explained in MOS note 68992.1.
In short, this means that (with bind peeking disabled):
- c1 > :b1 : 5% selectivity
- c1 >= :b1 : 5% selectivity
- c1 between :b1 and :b2 : 0.25% selectivity (5% * 5%)
The fundamentals of the CBO are also well explained by Jonathan Lewis.
But I found a few odd cases where the 5% rule is broken in the run-time estimate.
The most interesting part is that explain plan still follows the 5% rule.
Why the difference?
I think that with bind peeking disabled, explain plan and the run-time plan should show the same thing.
(Assuming that all environment settings are identical.)
Am I wrong?
It's a long story to tell, but a simple test case will show what I mean.
UKJA@ukja102> @version

BANNER
---------------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE   10.2.0.1.0   Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

UKJA@ukja102> drop table t1 purge;

Table dropped.

UKJA@ukja102> create table t1(c1 int, c2 int);

Table created.

UKJA@ukja102> insert into t1
  select 1, level from dual connect by level <= 10000
  union all
  select 2, level from dual connect by level <= 1000
  union all
  select 3, level from dual connect by level <= 100
  union all
  select 4, level from dual connect by level <= 10
  union all
  select 5, level from dual connect by level <= 1
  ;

11111 rows created.

UKJA@ukja102> exec dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1');

PL/SQL procedure successfully completed.

* Disable bind peeking *
UKJA@ukja102> alter session set "_optim_peek_user_binds" = false;

In the following result, explain plan follows the 5% rule (11111 * 0.05 = 556):

UKJA@ukja102> explain plan for
  select count(*) from t1 where c1 > :b1;

Explained.

UKJA@ukja102> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     3 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     3 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |   556 |  1668 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("C1">TO_NUMBER(:B1))

But the run-time plan doesn't follow the 5% rule. It uses the column density instead
(11111 * density(c1) = 11111 * 0.2 = 2223):

UKJA@ukja102> select /*+ gather_plan_statistics */ count(*) from t1 where c1 > :b1;

  COUNT(*)
----------
      1111

UKJA@ukja102> select * from table(dbms_xplan.display_cursor(null,null,'allstats cost last'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  0nmqsysmr3ap9, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where c1 > :b1

Plan hash value: 3724264953

--------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE    |      |      1 |      1 |            |      1 |00:00:00.01 |      23 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |   2223 |     6   (0)|   1111 |00:00:00.01 |      23 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("C1">:B1)

So the 5% rule seems to be:
- always applied to explain plan
- applied at run time only when the density < 5%; when the density > 5%, the run-time estimate uses the density, not 5%
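The two rules above can be sketched numerically. This is a minimal illustration of the arithmetic only; the "use the density once it exceeds 5%" cutoff is the behaviour observed in this 10.2.0.1 test, not documented CBO logic:

```python
import math

def explain_plan_estimate(num_rows):
    # EXPLAIN PLAN: fixed 5% guess for "col > :bind" with bind peeking disabled
    return math.ceil(num_rows * 0.05)

def runtime_estimate(num_rows, density):
    # Behaviour observed at run time on 10.2.0.1: the column density is used
    # instead of the 5% guess once the density exceeds 5%
    sel = density if density > 0.05 else 0.05
    return math.ceil(num_rows * sel)

print(explain_plan_estimate(11111))   # 556, as in the explain plan output
print(runtime_estimate(11111, 0.2))   # 2223, as in the E-Rows column
```

Both values match the Rows/E-Rows columns shown in the transcripts above.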
I'm not sure whether it's a designed feature or a bug.
But different cardinality estimates between explain plan and run time (with bind peeking disabled) are not a desirable thing.
Anyone's opinion on this?
Dion Cho

Sorry to take some time to get back to this one.
I can reproduce your results in 10.2.0.1, but the anomaly is not present in 9.2.0.8, 10.2.0.3 or 11.1.0.6.
As Charles noted, the calculation has a boundary condition when num_distinct falls below 20
(i.e. when a value is, on average, more than 5% of the total data set). However, the fact that explain plan and run time give you different cardinality estimates is a bug.
Whatever else they do, they should say the same thing - unless the introduction of the bind variable
introduced a possible type conversion or NLS conversion feature that changed the
calculation of the expected cardinality. In this case there is no reason why the use of binds should
cause confusion - so we can reasonably assume that it is a bug.

Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." (Stephen Hawking)
-
Hi, I'm using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production. Here are the parameters, which have default values:
OPTIMIZER_INDEX_COST_ADJ 100
OPTIMIZER_INDEX_CACHING 0

I have a simple query, similar to the one below:
select c1, c2, c3 from tab1 where c1 = 'abc';

It is using the plan below:

---------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |      0 |00:00:00.27 |   26816 |
|*  1 |  TABLE ACCESS FULL| TAB1 |      1 |   1171K|      0 |00:00:00.27 |   26816 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("TAB1"."c1"='abc')

Now my concern is: even though the query returns 0 rows, how is the expected cardinality calculated to be 1171K? This is perhaps tilting the optimizer's decision towards a full table scan of the table, even though an index ID1 exists on TAB1(C1) and it sometimes returns few records. Below are the stats for the table and column C1:

select num_rows from dba_tables where table_name='TAB1';

  NUM_ROWS
----------
   1171095

select density, num_distinct from dba_tab_col_statistics
where table_name='TAB1' and column_name='C1';

             DENSITY NUM_DISTINCT
-------------------- ------------
 4.26950845149198E-7      1171095

As there is an existing frequency histogram on this column, the expected cardinality estimate should be density * num_distinct = 0.5.
select endpoint_number, endpoint_value from dba_tab_histograms
where table_name='TAB1' and column_name='C1';

ENDPOINT_NUMBER      ENDPOINT_VALUE
--------------- -------------------
         234219 3.80421485912222E35

When I force the index, the plan becomes as below:

Elapsed: 00:00:01.50

Execution Plan
----------------------------------------------------------
Plan hash value: 1013434088

---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |  1171K|    93M|   870K  (1)| 01:13:50 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TAB1 |  1171K|    93M|   870K  (1)| 01:13:50 |
|*  2 |   INDEX RANGE SCAN          | ID1  |  1171K|       | 21124   (1)| 00:01:48 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("TAB1"."C1"='abc')
You have a frequency histogram that has captured a single value - there is a known bug for that case (although I can't quote numbers - check Randolf Geist's blog). I'm a little surprised that it is still there in 11.2.0.3, but there are a few edge cases, maybe not all fixed. I note that the histogram value you captured begins "ID:414" - given that you used a 20% sample, I wonder if the column has a million distinct values that all start with the same 15 characters (select min(), max() from the table to satisfy my curiosity) - Oracle gets a bit lost with statistics on strings after (approximately) the first 6 characters.
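For reference, the "ID:414" prefix Jonathan reads off can be recovered from the numeric ENDPOINT_VALUE: for character columns Oracle stores the leading bytes of the string as a base-256 number padded to 15 bytes (as described in Cost-Based Oracle Fundamentals), and only the first ~6 bytes are significant. A sketch of the decode (the function name is mine):

```python
def decode_endpoint_value(endpoint_value, n=6):
    """Recover the leading characters of a string column from the numeric
    ENDPOINT_VALUE in DBA_TAB_HISTOGRAMS (15-byte, base-256 encoding)."""
    v = int(endpoint_value)
    chars = []
    for i in range(14, 14 - n, -1):   # peel off the high-order bytes first
        byte, v = divmod(v, 256 ** i)
        chars.append(chr(byte))
    return ''.join(chars)

# The endpoint from the post decodes to the prefix Jonathan quotes:
print(decode_endpoint_value(3.80421485912222E35))   # ID:414
```

Bytes beyond the sixth are rounding noise, which is exactly why a million values sharing a long common prefix collapse into a single histogram endpoint.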
Concerning
Jonathan Lewis
-
Join cardinality statistics - interesting result
Hello Experts,
I just read an article on Oracle JOIN cardinality related things: table functions and join cardinality estimates. Without gathering statistics (with the help of dynamic sampling), the optimizer calculates the JOIN cardinality badly - why? I mean, I'm trying to understand JOIN cardinality: must it rely on the statistics? Although I don't gather statistics, as you can see below the optimizer estimates the table cardinalities correctly, but the JOIN cardinality is completely wrong. What do you think about this behavior? Did dynamic sampling mislead the optimizer?
drop table t1 purge;
drop table t2 purge;
create table t1
as
select rownum as id, mod(rownum, 10) + 1 as fk, rpad('X', 10) as filler from dual connect by level <= 1000;
create table t2 as
select rownum + 20 as id, rpad('X', 10) as filler from dual connect by level <= 10;
explain plan for
Select * from t1 join t2 on t1.fk = t2.id;
Select * from table (dbms_xplan.display);
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  1000 | 53000 |     8  (13)| 00:00:01 |
|*  1 |  HASH JOIN         |      |  1000 | 53000 |     8  (13)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T2   |    10 |   200 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   |  1000 | 33000 |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."FK"="T2"."ID")

Note
-----
   - dynamic sampling used for this statement (level=2)
exec dbms_stats.gather_table_stats (user, 'T1');
exec dbms_stats.gather_table_stats (user, 'T2');
explain plan for
Select * from t1 join t2 on t1.fk = t2.id;
Select * from table (dbms_xplan.display);
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    32 |     8  (13)| 00:00:01 |
|*  1 |  HASH JOIN         |      |     1 |    32 |     8  (13)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T2   |    10 |   140 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   |  1000 | 18000 |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."FK"="T2"."ID")
Thanks in advance.
Taking a look at the CBO trace (event 10053) for your example, I see different calculations for the join cardinality:

- without statistics:

Join Card:  1000.000000 = outer (10.000000) * inner (1000.000000) * sel (0.100000)

- with statistics:

Join Card:  0.000000 = outer (10.000000) * inner (1000.000000) * sel (0.000000)

The sel (0.100000) for the run with dynamic sampling corresponds to the standard formula: join selectivity = 1 / greater(num_distinct(t1.fk), num_distinct(t2.id)). In the CBO trace for this run, I also see information on the sampling:
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB)
  opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */
  NVL(SUM(C1),0), NVL(SUM(C2),0), COUNT(DISTINCT C3),
  NVL(SUM(CASE WHEN C3 IS NULL THEN 1 ELSE 0 END),0)
FROM (SELECT /*+ NO_PARALLEL("T1") FULL("T1") NO_PARALLEL_INDEX("T1") */
        1 AS C1, 1 AS C2, "T1"."FK" AS C3
      FROM "T1" "T1") SAMPLESUB
2013-12-13 14:14:19.584
** Executed dynamic sampling query:
    level : 2
    sample pct. : 100.000000
    actual sample size : 1000
    filtered sample card. : 1000
    orig. card. : 572
    block cnt. table stat. : 7
    block cnt. for sampling: 7
    max. sample block cnt. : 64
    sample block cnt. : 7
    ndv C3 : 10
        scaled : 10.00
    nulls C4 : 0
        scaled : 0.00
    min. sel. est. : -1.00000000
** Dynamic sampling col. stats.:
  Column (#2): FK  (Part#: 0)
    AvgLen: 22  NDV: 10  Nulls: 0  Density: 0.100000
** Using dynamic sampling NULLs estimates.
** Using dynamic sampling NDV estimates.
    Scaled NDVs using cardinality = 1000.
** Using dynamic sampling card. : 1000
** Dynamic sampling updated table card.
  Table: T1  Alias: T1
    Card: Original: 1000.000000  Rounded: 1000  Computed: 1000.00  Non Adjusted: 1000.00
  Access Path: TableScan
    Cost: 2.00  Resp: 2.00  Degree: 0
      Cost_io: 2.00  Cost_cpu: 239850
      Resp_io: 2.00  Resp_cpu: 239850
  Best:: AccessPath: TableScan
         Cost: 2.00  Degree: 1  Resp: 2.00  Card: 1000.00  Bytes: 0
The sampling query does not gather any information about the value ranges of the join columns - so the optimizer has no choice but to use the standard formula for the join selectivity.
With statistics, the CBO knows the HIGH_VALUE and LOW_VALUE for the join columns and can deduce that there is no intersection - and thus the selectivity is set to 0 (resulting in a cardinality rounded up to 1).
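The effect can be sketched numerically. This is a simplified model of the range check described above, not the CBO's actual algorithm; the low/high values are taken from the test case, where t1.fk holds 1..10 and t2.id holds 21..30:

```python
def join_selectivity(ndv1, lo1, hi1, ndv2, lo2, hi2):
    # If the column stats show the value ranges cannot intersect,
    # the join selectivity collapses to 0.
    if hi1 < lo2 or hi2 < lo1:
        return 0.0
    # Otherwise, the standard formula: 1 / greater(num_distinct, num_distinct)
    return 1 / max(ndv1, ndv2)

# t1.fk: 10 distinct values in [1, 10]; t2.id: 10 distinct values in [21, 30]
sel = join_selectivity(10, 1, 10, 10, 21, 30)
card = max(round(1000 * 10 * sel), 1)   # join cardinality, rounded up to 1
print(sel, card)   # 0.0 1
```

With dynamic sampling the low/high values are unknown, so the `else` branch fires and the estimate comes out as 1000 * 10 * 0.1 = 1000 - exactly the two Join Card lines in the 10053 trace.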
-
Estimates of cardinality for index range scan with bind variables
Oracle 11.2.0.4
I am struggling to explain the cardinality estimates for an index range scan when using a bind variable.
Consider the following query:
SELECT /*+ INDEX(t1) */ * FROM t1 WHERE source_id <= ?;
The cardinalities for the INDEX RANGE SCAN and the TABLE ACCESS are the same for different literal predicates, for example source_id <= 5:
------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |    50 |   350 |    12   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1   |    50 |   350 |    12   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IX1  |    50 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("SOURCE_ID"<=5)
If a bind variable is used instead of a literal, the overall selectivity is 5%. However, why does the cost-based optimizer give an estimated cardinality of 11 for the index range scan? As with the literal predicates, surely the cardinalities of the index range scan and the table access should be the same?
------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |    50 |   350 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1   |    50 |   350 |     5   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IX1  |    11 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("SOURCE_ID"<=TO_NUMBER(:A))
Unit test code:
CREATE TABLE t1 (
  id        NUMBER,
  source_id NUMBER
);

CREATE INDEX ix1 ON t1 (source_id);

INSERT INTO t1
SELECT level, ora_hash(level, 99) + 1
FROM   dual
CONNECT BY level <= 1000;

exec DBMS_STATS.GATHER_TABLE_STATS(user, 'T1')

EXPLAIN PLAN FOR SELECT /*+ INDEX(t1) */ * FROM t1 WHERE source_id <= 5;
SELECT * FROM TABLE(dbms_xplan.display);

EXPLAIN PLAN FOR SELECT /*+ INDEX(t1) */ * FROM t1 WHERE source_id <= :a;
SELECT * FROM TABLE(dbms_xplan.display);
There are various places where the optimizer uses an assumption, and unpeekable binds (and other "unknowable values") introduce guesses.
For unpeekable binds, the guess for column <= {unknown} is 5% for the table access (hence 50 rows out of 1,000), but it's 0.009 for index_column <= {unknown}, which means I was expecting to see 9 as the row estimate on the index range scan.
I just ran some quick tests, and EXPLAIN PLAN seems to just use 0.011 selectivity in this case (in various versions of Oracle), although if we make the bind variable unpeekable at run time (and block dynamic sampling etc.) the run-time optimization uses 0.009.
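Numerically, the three guesses come out as follows (a sketch only; the 5%, 0.011 and 0.009 figures are the ones quoted in the discussion above, applied to the 1,000-row test table):

```python
NUM_ROWS = 1000

# table-level guess for "col <= :bind" with an unpeekable bind
table_est = round(NUM_ROWS * 0.05)                 # 50 rows

# index-level guesses: what EXPLAIN PLAN used vs. the run-time optimization
explain_plan_index_est = round(NUM_ROWS * 0.011)   # 11 rows
runtime_index_est = round(NUM_ROWS * 0.009)        # 9 rows

print(table_est, explain_plan_index_est, runtime_index_est)   # 50 11 9
```

The 11 in the INDEX RANGE SCAN line of the plan above is the 0.011 guess; the mismatch with the 50-row table estimate is the anomaly being discussed.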
Regards
Jonathan Lewis
Update: and here is a very old reference to the 0.009 (and 0.0045 for 'between') when applied to an index: Cost-Based Oracle Fundamentals, Chapter 4, "Simple B-tree Access".
-
Join or subquery (in terms of performance)
Hi all
Could someone please help me find the best-performing of the SQL statements below?
The two SQL statements have the same elapsed time.
Version Oracle 10.2.0.4
Thank you.
Published by: Yasu on November 26, 2010 21:46
Added elapsed time information.
Published by: Yasu on Nov 27, 2010 10:20
Sensitive data deleted.

YASU wrote:
We are in the process of injecting a new module into an existing application, so I think we can go with EXISTS, because we are not rewriting all the SQL. Also, does it matter in any way that the estimated rows in the EXISTS case are high, at about 19103 rows?
I noticed this cardinality problem before posting my previous answer, but didn't mention it. While the cardinality estimate problem has not caused trouble with your relatively simple SQL statement, it might cause changes in the join order if the same problem exists in other queries.
One more doubt.
Why is an index fast full scan used instead of an index range scan for the LIKE operator?
--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name               | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS SEMI                  |                    |      1 |    684 |     64 |00:00:00.02 |   17148 |
|*  2 |   INDEX FAST FULL SCAN              | PK_ORG_ALIAS_NAMES |      1 |    684 |    180 |00:00:01.29 |   16713 |
|*  3 |   TABLE ACCESS BY GLOBAL INDEX ROWID| ORGANIZATIONS      |    170 |  19103 |     59 |00:00:00.01 |     435 |
|*  4 |    INDEX UNIQUE SCAN                | PK_ORGANIZATIONS   |    170 |      1 |    170 |00:00:00.01 |     346 |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((LOWER("ORG_ALIAS_NAME") LIKE 'technology%' AND
               "F_GETFIRSTWORD"("ORG_ALIAS_NAME")='TECHNOLOGY'))
   3 - filter((COALESCE("M"."ORG_COUNTRY_OF_DOMICILE","M"."ORG_COUNTRY_OF_INCORPORATION")='US' AND
               "ORG_DATA_PROVIDER"<100))
   4 - access("M"."ORGID"="AN"."ORGID")

DDL for index PK_ORG_ALIAS_NAMES:

CREATE UNIQUE INDEX "PK_ORG_ALIAS_NAMES" ON "OA"."ORG_ALIAS_NAMES"
  ("ORGID", "ORG_ALIAS_NAME", "ORG_ALIAS_EFF_FROM_DATE")
A scan interval is not possible for the index for two reasons:
1. Based on the supplied index definition, the ORG_ALIAS_NAME column is not the leading (first) column in the index. So at best an index skip scan could occur, rather than an index range scan.
2. The LOWER() function is applied to the indexed ORG_ALIAS_NAME column. To see an index range scan you would probably need a function-based index constructed like this:
CREATE UNIQUE INDEX PK_ORG_ALIAS_NAMES ON OA.ORG_ALIAS_NAMES (LOWER(ORG_ALIAS_NAME), ORGID, ORG_ALIAS_EFF_FROM_DATE)
However, changing the index definition as shown above may have an important negative performance impact on other queries that currently use the PK_ORG_ALIAS_NAMES index.
Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.
-
Hello people, please see the sample run below.
It estimates only '2' rows... a cardinality error... How to avoid this?
SQL> create table t as select object_name from all_objects;

Table created.

SQL> select count(*) from all_objects;

  COUNT(*)
----------
     49731

SQL> select count(*) from t;

  COUNT(*)
----------
     49732

SQL> insert into t select 'SMITH' from t where rownum <= 196;

196 rows created.

SQL> select count(*) from t;

  COUNT(*)
----------
     49928

SQL> create index t_idx on t(object_name);

Index created.

SQL> analyze table t compute statistics;

Table analyzed.

SQL> set autotrace traceonly explain;
SQL> select * from t where object_name = 'SMITH';

Execution Plan
----------------------------------------------------------
Plan hash value: 2946670127

--------------------------------------------------------------------------
| Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT |       |     2 |    48 |     1   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| T_IDX |     2 |    48 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("OBJECT_NAME"='SMITH')
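For what it's worth, with no histogram the estimate for an equality predicate is simply num_rows / num_distinct, so the 196 'SMITH' rows are treated like any other value. A sketch of that arithmetic (the NDV below is hypothetical, chosen to be consistent with the estimate of 2 in the plan; ANALYZE also computes density differently from DBMS_STATS, so your actual figure may differ):

```python
num_rows = 49928        # row count from the transcript above
num_distinct = 25000    # hypothetical NDV for t.object_name, for illustration

# Without a histogram the CBO assumes every value is equally popular,
# so a popular value like 'SMITH' gets the same estimate as a rare one.
estimate = round(num_rows / num_distinct)
print(estimate)   # 2
```

A histogram (or, on this old ANALYZE-based setup, gathering stats with DBMS_STATS and method_opt sizing) is what lets the optimizer see that 'SMITH' is more popular than average.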
Thank you.

If the optimizer's cardinality estimates are reasonably accurate, it is reasonably likely that it has chosen a plan that is at least nearly optimal in light of the SQL statement, the database and optimizer settings, and the structures available to the optimizer to improve performance (indexes, materialized views, etc.).
The fact that the optimizer has probably not made a mistake, however, does not mean that there is no room for improvement. It is quite possible, for example, that the query itself can be reworked to be more efficient, that additional indexes or materialized views could be created, or that a better plan might be available if different settings were in place.
In this case, EMP and DEPT being so small, the plan appears reasonable. You could create a materialized view that pre-computes the result, which would probably improve the query's performance, but it is unlikely that you really need to care about optimizing this query, since it probably takes ~10 consistent gets and probably runs in less than a hundredth of a second.
Justin
-
How does the SQL optimizer prefer one join method over another
Hello
I use Oracle version 10.2.0.3.
I have a question: how does the Oracle SQL optimizer decide which of the three different join methods (hash join, sort-merge join, nested loop join) should be applied in a particular place?
How is this decided before opting for a particular join method?
I read the following in the Oracle documentation:
Because the default goal of the cost-based approach is best throughput, the optimizer chooses either a nested loops operation, a sort-merge operation, or a hash join operation to join these tables, depending on which is likely to return all the rows selected by the query faster.
But how does Oracle decide which join method will return rows "FASTER"?
Thanks in advance.
oratest
Published by: oratest on August 10, 2010 01:57

oratest wrote:
Hello
I am using Oracle version 10.2.0.3.
I have a question: how does the Oracle SQL optimizer decide which of the three different join methods (hash join, sort-merge join, nested loop join) should be applied in a particular place?
How is this decided before opting for a particular join method?
I read the following in the Oracle documentation:
Because the default goal of the cost-based approach is best throughput, the optimizer chooses either a nested loops operation, a sort-merge operation, or a hash join operation to join these tables, depending on which is likely to return all the rows selected by the query faster.
But how does Oracle decide which join method will return rows "FASTER"?
According to me, this quote can be found here:
http://download.oracle.com/docs/cd/A97630_01/server.920/a96533/hintsref.htm#5655

Oracle's optimizer does not necessarily determine which join method will return rows more quickly, but rather which join method will have the lowest calculated (estimated) cost. There is, however, a correlation between the calculated cost and the estimated time. The calculation is visible by reviewing a 10053 trace for a query. For example:
...
NL Join
  Outer table: Card: 255.00  Cost: 61.50  Resp: 61.50  Degree: 1  Bytes: 26
  Inner table: T1  Alias: L
  Access Path: TableScan
    NL Join:  Cost: 94237.11  Resp: 94237.11  Degree: 0
      Cost_io: 89563.00  Cost_cpu: 19435977712
      Resp_io: 89563.00  Resp_cpu: 19435977712
  OPTIMIZER PERCENT INDEX CACHING = 100
  Access Path: index (RangeScan)
    Index: SYS_C004736
    resc_io: 4.00  resc_cpu: 42152
    ix_sel: 1.6903e-004  ix_sel_with_filters: 1.6903e-004
    NL Join: Cost: 266.02  Resp: 266.02  Degree: 1
      Cost_io: 262.00  Cost_cpu: 16710000
      Resp_io: 262.00  Resp_cpu: 16710000
  OPTIMIZER PERCENT INDEX CACHING = 100
  Access Path: index (AllEqJoin)
    Index: T1_IND2
    resc_io: 335.00  resc_cpu: 3580792
    ix_sel: 0.015748  ix_sel_with_filters: 0.015748
    NL Join: Cost: 17190.42  Resp: 17190.42  Degree: 1
      Cost_io: 17143.00  Cost_cpu: 197180657
      Resp_io: 17143.00  Resp_cpu: 197180657
  Best NL cost: 266.02
          resc: 266.02  resc_io: 262.00  resc_cpu: 16710000
          resp: 266.02  resp_io: 262.00  resp_cpu: 16710000
Join Card:  0.09 = outer (255.00) * inner (812.91) * sel (4.2166e-007)
Join cardinality for HJ/SMJ (no post filters): 34.96, outer: 255.00, inner: 812.91, sel: 1.6867e-004
Join Card - Rounded: 1  Computed: 0.09
SM Join
  Outer table:
    resc: 61.50  card 255.00  bytes: 26  deg: 1  resp: 61.50
  Inner table: T1  Alias: L
    resc: 67.77  card: 812.91  bytes: 32  deg: 1  resp: 67.77
    using dmeth: 2  #groups: 1
    SORT resource      Sort statistics
      Sort width: 598  Area size: 1048576  Max Area size: 104857600
      Degree: 1
      Blocks to Sort: 2  Row size: 39  Total Rows: 255
      Initial runs: 1  Merge passes: 0  IO Cost / pass: 0
      Total IO sort cost: 0      Total CPU sort cost: 4250063
      Total Temp space used: 0
    SORT resource      Sort statistics
      Sort width: 598  Area size: 1048576  Max Area size: 104857600
      Degree: 1
      Blocks to Sort: 5  Row size: 46  Total Rows: 813
      Initial runs: 1  Merge passes: 0  IO Cost / pass: 0
      Total IO sort cost: 0      Total CPU sort cost: 4512317
      Total Temp space used: 0
  SM join: Resc: 131.38  Resp: 131.38  [multiMatchCost=0.00]
  SM cost: 131.38
     resc: 131.38  resc_io: 125.60  resc_cpu: 24044481
     resp: 131.38  resp_io: 125.60  resp_cpu: 24044481
SM Join (with index on outer)
  Access Path: index (FullScan)
    Index: SYS_C004653
    resc_io: 441.00  resc_cpu: 17604740
    ix_sel: 1  ix_sel_with_filters: 1
    Cost: 89.05  Resp: 89.05  Degree: 1
  Outer table:
    resc: 89.05  card 255.00  bytes: 26  deg: 1  resp: 89.05
  Inner table: T1  Alias: L
    resc: 67.77  card: 812.91  bytes: 32  deg: 1  resp: 67.77
    using dmeth: 2  #groups: 1
    SORT resource      Sort statistics
      Sort width: 598  Area size: 1048576  Max Area size: 104857600
      Degree: 1
      Blocks to Sort: 5  Row size: 46  Total Rows: 813
      Initial runs: 1  Merge passes: 0  IO Cost / pass: 0
      Total IO sort cost: 0      Total CPU sort cost: 4512317
      Total Temp space used: 0
  SM join: Resc: 157.91  Resp: 157.91  [multiMatchCost=0.00]
HA Join
  Outer table:
    resc: 61.50  card 255.00  bytes: 26  deg: 1  resp: 61.50
  Inner table: T1  Alias: L
    resc: 67.77  card: 812.91  bytes: 32  deg: 1  resp: 67.77
    using dmeth: 2  #groups: 1
    Cost per ptn: 0.53  #ptns: 1
    hash_area: 0 (max=256)
  Hash join: Resc: 129.80  Resp: 129.80  [multiMatchCost=0.00]
  HA cost: 129.80
     resc: 129.80  resc_io: 125.60  resc_cpu: 17480759
     resp: 129.80  resp_io: 125.60  resp_cpu: 17480759
Best:: JoinMethod: Hash
       Cost: 129.80  Degree: 1  Resp: 129.80  Card: 0.09  Bytes: 58
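The selection step in the trace above can be sketched as follows. A simplified illustration only, with the calculated costs copied from the trace, showing that the choice is "lowest computed cost", not a direct measurement of speed:

```python
# Calculated costs for each join method, from the 10053 trace above
costs = {
    "nested loops": 266.02,   # Best NL cost
    "sort merge":   131.38,   # SM cost
    "hash join":    129.80,   # HA cost
}

# The CBO keeps the join method with the lowest calculated cost
best = min(costs, key=costs.get)
print(best)   # hash join
```

This matches the "Best:: JoinMethod: Hash" line at the end of the trace.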
As is apparent from the foregoing, the hash join had the lowest calculated cost at 129.80, the calculated cost for the nested loop join was 266.02, and the sort-merge join had a calculated cost between the two. If you want to learn more about 10053 traces, pick up a copy of the book "Cost-Based Oracle Fundamentals". Interestingly, the author of that book recently proposed that all joins are actually nested loop joins, just with different start-up costs. See the following blog post:
http://jonathanlewis.wordpress.com/2010/08/02/joins/
Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.
-
Order of execution of SQL statements
Hello
I have two SQL statements; the first runs against a table with more than 40 million rows, the second against a table with more than 6 million rows. Run on their own, each takes about 0.15 seconds, but when combined they take 20 minutes to run (the second SQL statement is embedded in the WHERE clause of the first). It would seem that after combining these statements, the first statement goes through all 40 million rows before it evaluates the SELECT in the WHERE clause. I think what is needed is to ensure the SELECT in the WHERE clause is executed first... or something like that! Does anyone have any ideas on how to combine these statements without suffering the performance impact?
The first statement is:
Select csi.instance_id,
       oel.ordered_item
from   apps.csi_item_instances csi,
       apps.oe_order_lines_all oel
where  csi.instance_id in
       (1718000,3698000,48740202)
and    csi.last_oe_order_line_id = oel.line_id;
The second statement is:
Select /*+ INDEX (iea (attribute_id)) */
       iea.instance_id
from   apps.csi_iea_values iea
where  iea.attribute_id = 10004
and    iea.attribute_value is not null;
The combined statement is:
Select csi.instance_id,
       oel.ordered_item
from   apps.csi_item_instances csi,
       apps.oe_order_lines_all oel
where  csi.instance_id in
       (select /*+ INDEX (iea (attribute_id)) */
               iea.instance_id
        from   apps.csi_iea_values iea
        where  iea.attribute_id = 10004
        and    iea.attribute_value is not null)
and    csi.last_oe_order_line_id = oel.line_id;
Thanks for any help,
Mike
Your subquery probably returns just the two values that you originally supplied as constants - but the optimizer thinks you're going to get 564K rows. This is why the hint on the simple query has a beneficial effect: it forces Oracle to use an index when it would otherwise do a full scan.
When the subquery is incorporated, however, the optimizer uses its expected cardinality to decide whether to use a nested loop join or a hash join into CSI_ITEM_INSTANCES; since the estimate is large enough, it uses a hash join with a full scan. That's why I pointed out that telling the optimizer how many rows come out of the subquery should make a difference.
You tried the "common table expression" approach rather than the no_merge approach, but it would not help because it does not change the optimizer's cardinality estimate. If you repeat the CTE method, adding the hints /*+ materialize cardinality(2) */ to the query in the WITH clause, you should get the desired result.
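The effect described here can be illustrated with a toy cost model (purely illustrative numbers, not Oracle's real costing): a nested loop's cost grows with the estimated driving-row count, while a hash join with a full scan pays a roughly fixed cost, so an estimate of 564K rows tips the choice to the hash join even though the subquery really returns only a couple of rows:

```python
# Toy join-method choice (illustrative only - the per-row and fixed costs
# below are made-up numbers, not Oracle's actual cost formulas).

def pick_join(estimated_rows, nl_cost_per_row=4, hj_fixed_cost=50_000):
    """Pick the cheaper join method for a given driving-row estimate."""
    nl_cost = estimated_rows * nl_cost_per_row   # indexed NL: pay per driving row
    return "NESTED LOOPS" if nl_cost < hj_fixed_cost else "HASH JOIN"

print(pick_join(564_000))  # the optimizer's 564K estimate -> HASH JOIN
print(pick_join(2))        # with a cardinality(2) hint    -> NESTED LOOPS
```

This is why correcting the cardinality estimate (rather than hinting the access path directly) is enough to flip the join method.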
Regards
Jonathan Lewis
-
Oracle must have put a lot of thought into histograms, to get accurate cardinalities and hence good query plans. Thank you.
I may be wrong in asking this, but I can't help but ask. Hoping some bright minds will help me understand better.
Why not simply record each value and its respective count, exactly like a frequency histogram, but without the limitation on the number of distinct values?
Simply change the ORDER BY in the example found in the official documentation at this link:
SELECT country_subregion_id, count(*)
FROM sh.countries
GROUP BY country_subregion_id
ORDER BY 2;
COUNTRY_SUBREGION_ID   COUNT(*)
-------------------- ----------
               52792          1
               52795          1
               52796          1
               52794          2
               52797          2
               52798          2
               52793          5
               52799          9
and then, based on the number of rows in the table, the NDV, etc., find a threshold N (maybe 5 in this example) such that at or above that repeat count a full table scan is preferable. Then create two buckets for this table: one with the popular values (52793, 52799) and the other with the non-popular values (all the rest). When doing a lookup, probe the smaller bucket first - in this case the popular bucket. If the value is not in it, assume it is a non-popular value and suggest query plans that use the index.
What happens, then, if a new country_subregion_id is inserted above these values (after you have created the popular and non-popular buckets)? It is the same risk as today: if GATHER_STATS is/was badly timed, then we could get bad histogram suggestions out. Right?
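The idea sketched above - two buckets keyed on a repeat-count threshold - can be written out in a few lines (plain Python; the counts come from the query output above, and N = 5 is the example threshold from the post):

```python
from collections import Counter

# Repeat counts from the sh.countries example above
counts = Counter({52792: 1, 52795: 1, 52796: 1, 52794: 2,
                  52797: 2, 52798: 2, 52793: 5, 52799: 9})

N = 5  # threshold: at or above this repeat count, a full scan is assumed preferable
popular = {value for value, c in counts.items() if c >= N}

def suggested_access(value):
    # Probe the small (popular) bucket first; anything else - including
    # values never seen at stats-gathering time - is treated as non-popular.
    return "FULL TABLE SCAN" if value in popular else "INDEX"

print(sorted(popular))          # [52793, 52799]
print(suggested_access(52793))  # FULL TABLE SCAN
print(suggested_access(52792))  # INDEX
print(suggested_access(99999))  # INDEX - a value inserted after stats were gathered
```

Note that, exactly as the post worries, a value inserted after the buckets were built always falls into the non-popular branch, whether that is right or not.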
There is much more to it than choosing whether to use an index. How would your model determine the correct join order when joining many tables? It would not. It could do no more than extended stats already do. I blogged the other day about cardinality estimates with complex predicates
-
Hi all
This is the explain plan for my query:
{code}
Plan hash value: 1914807434

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name                        | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |                             |  3753 |   747K|         |  3771   (2)| 00:00:29 |
|   1 |  SORT ORDER BY                               |                             |  3753 |   747K|   1016K |  3771   (2)| 00:00:29 |
|   2 |   HASH GROUP BY                              |                             |  3753 |   747K|   1016K |  3771   (2)| 00:00:29 |
|   3 |    VIEW                                      |                             |  3753 |   747K|         |  3432   (2)| 00:00:27 |
|   4 |     WINDOW SORT                              |                             |  3753 |  1491K|   1592K |  3432   (2)| 00:00:27 |
|*  5 |      HASH JOIN OUTER                         |                             |  3753 |  1491K|   1488K |  3103   (2)| 00:00:24 |
|   6 |       VIEW                                   |                             |  3753 |  1436K|         |   927   (2)| 00:00:08 |
|*  7 |        HASH JOIN                             |                             |  3753 |  1513K|         |   927   (2)| 00:00:08 |
|*  8 |         INDEX FAST FULL SCAN                 | PK_REQUIREMENT_YEAR         |   942 |  7536 |         |     3   (0)| 00:00:01 |
|   9 |         VIEW                                 |                             |   580 |   229K|         |   924   (2)| 00:00:08 |
|  10 |          HASH GROUP BY                       |                             |   580 |    99K|         |   924   (2)| 00:00:08 |
|* 11 |           HASH JOIN OUTER                    |                             |   580 |    99K|         |   923   (1)| 00:00:08 |
|  12 |            VIEW                              |                             |   580 | 93380 |         |   558   (1)| 00:00:05 |
|  13 |             NESTED LOOPS                     |                             |   580 |   105K|         |   558   (1)| 00:00:05 |
|  14 |              NESTED LOOPS                    |                             |   580 |   105K|         |   558   (1)| 00:00:05 |
|* 15 |               HASH JOIN OUTER                |                             |   131 | 22532 |         |   165   (2)| 00:00:02 |
|* 16 |                HASH JOIN                     |                             |   131 | 20043 |         |    65   (2)| 00:00:01 |
|  17 |                 VIEW                         | index$_join$_010            |   116 |  1276 |         |     2   (0)| 00:00:01 |
|* 18 |                  HASH JOIN                   |                             |       |       |         |            |          |
|  19 |                   INDEX FAST FULL SCAN       | IDX_AVAILABILITY_TYPE       |   116 |  1276 |         |     1   (0)| 00:00:01 |
|  20 |                   INDEX FAST FULL SCAN       | PK_AVAILABILITY_TYPE        |   116 |  1276 |         |     1   (0)| 00:00:01 |
|* 21 |                 HASH JOIN                    |                             |   131 | 18602 |         |    63   (2)| 00:00:01 |
|  22 |                  TABLE ACCESS FULL           | PE                          |    23 |   276 |         |     3   (0)| 00:00:01 |
|* 23 |                  HASH JOIN                   |                             |   131 | 17030 |         |    60   (2)| 00:00:01 |
|* 24 |                   TABLE ACCESS BY INDEX ROWID| SHIP_SHEETS                 |   131 |  9956 |         |    50   (0)| 00:00:01 |
|* 25 |                    INDEX RANGE SCAN          | NIDX_REQ_ID                 |  1597 |       |         |     7   (0)| 00:00:01 |
|* 26 |                   HASH JOIN                  |                             |   422 | 22788 |         |    10  (10)| 00:00:01 |
|  27 |                    MERGE JOIN                |                             |    46 |  1058 |         |     5  (20)| 00:00:01 |
|  28 |                     TABLE ACCESS BY INDEX ROWID| ENTERPRISE                |     5 |    35 |         |     2   (0)| 00:00:01 |
|  29 |                      INDEX FULL SCAN         | PK_ENTERPRISE               |     5 |       |         |     1   (0)| 00:00:01 |
|* 30 |                     SORT JOIN                |                             |    46 |   736 |         |     3  (34)| 00:00:01 |
|  31 |                      VIEW                    | index$_join$_006            |    46 |   736 |         |     2   (0)| 00:00:01 |
|* 32 |                       HASH JOIN              |                             |       |       |         |            |          |
|  33 |                        INDEX FAST FULL SCAN  | IDX_HULLCLASS_ENT           |    46 |   736 |         |     1   (0)| 00:00:01 |
|  34 |                        INDEX FAST FULL SCAN  | IDX_HULLCLASS               |    46 |   736 |         |     1   (0)| 00:00:01 |
|  35 |                    TABLE ACCESS FULL         | HULL                        |   422 | 13082 |         |     5   (0)| 00:00:01 |
|  36 |                TABLE ACCESS FULL             | TRAVEL                      | 53461 |   991K|         |    99   (2)| 00:00:01 |
|* 37 |               INDEX RANGE SCAN               | PK_SHIPSHEET_ANNUAL_PHASING |     4 |       |         |     2   (0)| 00:00:01 |
|  38 |              TABLE ACCESS BY INDEX ROWID     | SHIP_SHEET_ANNUAL_PHASING   |     4 |    60 |         |     3   (0)| 00:00:01 |
|  39 |            TABLE ACCESS FULL                 | TRAVEL_OVERTIME             |   245K|  3596K|         |   363   (2)| 00:00:03 |
|* 40 |       TABLE ACCESS FULL                      | SHIP_SHEET_ANNUAL_PHASING   |   500K|  7327K|         |  1458   (2)| 00:00:12 |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("P"."FY"(+)="RY"."FY" AND "P"."SHIP_SHEETS_ID"(+)="SQ"."SHIP_SHEETS_ID")
   7 - access("RY"."REQ_ID"="SQ"."REQ_ID")
   8 - filter("RY"."FY">=2014 AND "RY"."FY"<=2020)
  11 - access("TRO"."FY"(+)="P1"."FY" AND "TRO"."TRAVEL_ID"(+)="T"."TRAVEL_ID")
  15 - access("T"."SHIP_SHEETS_ID"(+)="SS"."SHIP_SHEETS_ID")
  16 - access("SS"."AVAILABILITY_TYPE_ID"="A"."AVAILABILITY_TYPE_ID")
  18 - access(ROWID=ROWID)
  21 - access("SS"."PE_ID"="P"."PE_ID")
  23 - access("SS"."HULL_ID"="H"."HULL_ID")
  24 - filter("SS"."NSA_WORKSITE_ID"=11)
  25 - access("SS"."REQ_ID"=248)
  26 - access("H"."HULL_CLASS_ID"="HC"."HULL_CLASS_ID")
  30 - access("HC"."ENT_ID"="E"."ENT_ID")
       filter("HC"."ENT_ID"="E"."ENT_ID")
  32 - access(ROWID=ROWID)
  37 - access("P1"."SHIP_SHEETS_ID"="SS"."SHIP_SHEETS_ID")
  40 - filter("P"."FY"(+)>=2014 AND "P"."FY"(+)<=2020)
{code}
I see several TABLE ACCESS FULL operations; I want to improve the speed of this query because, according to the business manager, it accounts for 47% of the activity.
Can anyone guide me through this?
Thank you!
Does the query take 2 seconds to finish? How many times is the query executed? Do you think that 2 seconds is too long for this operation?
The query plan does not show errors in the optimizer's cardinality estimates. Maybe the FULL TABLE SCANs of steps 39 and 40 could be avoided (and the corresponding HASH joins replaced by NESTED LOOPS) if there were suitable indexes - but this does not mean that an NL join with index access would provide a large performance gain.
Without the query text it is difficult to say, but I think the access on SHIP_SHEET_ANNUAL_PHASING could use an index on SHIP_SHEET_ANNUAL_PHASING (SHIP_SHEETS_ID, FY) and the access on TRAVEL_OVERTIME an index on TRAVEL_OVERTIME (TRAVEL_ID, FY). So the first question is: do these indexes exist? The second question is: can you create additional indexes? (Bearing in mind that these indexes could influence other queries - and not necessarily in a positive way - and that they will have a negative impact on DML operations on the table.)
Perhaps the first question is rather: do you need to improve performance at all? The other questions then become 2 and 3.
-
Hello
After completing my question I see that the preface became quite extensive...
After spending some time attempting to build a simple test case for adaptive plans in 12c, I am no longer sure I understand the concept completely. In her white paper on the 12c optimizer changes (http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-with-oracledb-12c-1963236.pdf), Maria Colgan says:
Adaptive plans enable the optimizer to defer its final plan decision for a statement until execution time. The optimizer instruments its chosen plan (the default plan) with statistics collectors so that at runtime it can detect if its cardinality estimates differ greatly from the actual number of rows seen by the operations in the plan. If there is a significant difference, then the plan, or a portion of it, can be automatically adapted to avoid suboptimal performance on the first execution of a SQL statement.
So I expected that the statistics collector would notice the stale and misleading statistics and change the adaptive plan in my test case:
- 12.1.0.1 (Windows 7)
ALTER session set statistics_level = all;
drop table t1;
drop table t2;
create table t1
as
select rownum id
     , mod(rownum, 10) col1
     , lpad('*', 20, '*') col2
  from dual
connect by level <= 100000;

exec dbms_stats.gather_table_stats(user, 't1')

create index t1_id_idx on t1(col1, id);

create table t2
as
select mod(rownum, 100) id_t1
     , lpad('*', 20, '*') col2
     , rownum col3
  from dual
connect by level <= 100000;

exec dbms_stats.gather_table_stats(user, 't2')

create index t2_idx on t2(id_t1);
select sum(t1.col1) sum_col3
  from t1
     , t2
 where t1.id = t2.id_t1
   and t1.col1 = 1;
SUM_COL3
----------
10000
select *
  from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
SQL_ID  0g3mdd8b1pstt, child number 0
-------------------------------------
select sum(t1.col1) sum_col3 from t1, t2 where t1.id =
t2.id_t1 and t1.col1 = 1

Plan hash value: 1632433607
------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |           |      1 |        |      1 |00:00:00.18 |     232 |   223 |       |       |          |
|   1 |  SORT AGGREGATE         |           |      1 |      1 |      1 |00:00:00.18 |     232 |   223 |       |       |          |
|*  2 |   HASH JOIN             |           |      1 |    100K|  10000 |00:00:00.18 |     232 |   223 |  1888K|  1888K| 2088K (0)|
|*  3 |    INDEX RANGE SCAN     | T1_ID_IDX |      1 |  10000 |  10000 |00:00:00.02 |      29 |    28 |       |       |          |
|   4 |    INDEX FAST FULL SCAN | T2_IDX    |      1 |    100K|    100K|00:00:00.09 |     203 |   195 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."ID"="T2"."ID_T1")
   3 - access("T1"."COL1"=1)
Note
-----
   - this is an adaptive plan
Update t1 set col1 = 1000 where id > 1;
-> 99999 rows updated.
select sum(t1.col1) sum_col3
  from t1
     , t2
 where t1.id = t2.id_t1
   and t1.col1 = 1;
SUM_COL3
----------
1000
select *
  from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
SQL_ID  0g3mdd8b1pstt, child number 0
-------------------------------------
select sum(t1.col1) sum_col3 from t1, t2 where t1.id =
t2.id_t1 and t1.col1 = 1

Plan hash value: 1632433607
-------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |           |      1 |        |      1 |00:00:00.05 |     232 |       |       |          |
|   1 |  SORT AGGREGATE         |           |      1 |      1 |      1 |00:00:00.05 |     232 |       |       |          |
|*  2 |   HASH JOIN             |           |      1 |    100K|   1000 |00:00:00.05 |     232 |  1969K|  1969K|  635K (0)|
|*  3 |    INDEX RANGE SCAN     | T1_ID_IDX |      1 |  10000 |      1 |00:00:00.01 |      29 |       |       |          |
|   4 |    INDEX FAST FULL SCAN | T2_IDX    |      1 |    100K|    100K|00:00:00.01 |     203 |       |       |          |
-------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."ID"="T2"."ID_T1")
   3 - access("T1"."COL1"=1)
Note
-----
   - this is an adaptive plan
- but with a single row from the access on T1, an NL join would be the cheaper option:
select /*+ use_nl(t1, t2) */
       sum(t1.col1) sum_col3
  from t1
     , t2
 where t1.id = t2.id_t1
   and t1.col1 = 1;
SUM_COL3
----------
1000
select *
  from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  3jfztz70m0qtm, child number 0
-------------------------------------
select /*+ use_nl(t1, t2) */ sum(t1.col1) sum_col3 from t1,
t2 where t1.id = t2.id_t1 and t1.col1 = 1

Plan hash value: 1261696607
------------------------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |      1 |        |      1 |00:00:00.01 |      33 |
|   1 |  SORT AGGREGATE    |           |      1 |      1 |      1 |00:00:00.01 |      33 |
|   2 |   NESTED LOOPS     |           |      1 |    100K|   1000 |00:00:00.01 |      33 |
|*  3 |    INDEX RANGE SCAN| T1_ID_IDX |      1 |  10000 |      1 |00:00:00.01 |      29 |
|*  4 |    INDEX RANGE SCAN| T2_IDX    |      1 |     10 |   1000 |00:00:00.01 |       4 |
------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."COL1"=1)
   4 - access("T1"."ID"="T2"."ID_T1")
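Since the test data is completely deterministic, the SUM_COL3 results above can be cross-checked outside the database; this small Python sketch rebuilds the same t1/t2 data as the creation script and reproduces both results:

```python
# Rebuild the test data from the script above (ids are 1-based, like rownum)
t1 = {i: i % 10 for i in range(1, 100001)}   # id -> col1 = mod(rownum, 10)
t2 = [i % 100 for i in range(1, 100001)]     # id_t1 = mod(rownum, 100)

# select sum(t1.col1) from t1, t2 where t1.id = t2.id_t1 and t1.col1 = 1
total = sum(t1[id_t1] for id_t1 in t2 if id_t1 in t1 and t1[id_t1] == 1)
print(total)  # 10000, matching the first SUM_COL3 above

# After "update t1 set col1 = 1000 where id > 1", only id = 1 keeps col1 = 1
t1 = {i: (i % 10 if i == 1 else 1000) for i in range(1, 100001)}
total = sum(t1[id_t1] for id_t1 in t2 if id_t1 in t1 and t1[id_t1] == 1)
print(total)  # 1000, matching the result after the update
```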
Obviously, the plan is an adaptive plan (at least dbms_xplan says so, and in V$SQL the IS_RESOLVED_ADAPTIVE_PLAN column is set to Y) - but the massive change in the actual cardinality of the index range scan (10000 before the update, 1 after the update) is invisible to the CBO. I also created a SQL trace (event 10046) but found no sign of statistics-collector activity - just a strange representation of the adaptive plan (and no sql_id for the query in the trace):
select sum(t1.col1) sum_col3
  from t1
     , t2
 where t1.id = t2.id_t1
   and t1.col1 = 1
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        4      0.12       0.22        223        464          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        8      0.12       0.23        223        464          0           2
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 110
Number of plan statistics captured: 2
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=232 pr=112 pw=0 time=114572 us)
     10000       5500      10000  HASH JOIN (cr=232 pr=112 pw=0 time=113613 us cost=84 size=1100000 card=100000)
     10000       5001      10000  NESTED LOOPS (cr=29 pr=14 pw=0 time=20841 us cost=84 size=1100000 card=100000)
     10000       5001      10000  STATISTICS COLLECTOR (cr=29 pr=14 pw=0 time=18786 us)
     10000       5001      10000  INDEX RANGE SCAN T1_ID_IDX (cr=29 pr=14 pw=0 time=16473 us cost=28 size=80000 card=10000)(object id 92499)
         0          0          0  INDEX RANGE SCAN T2_IDX (cr=0 pr=0 pw=0 time=0 us cost=55 size=30 card=10)(object id 92501)
    100000     100000     100000  INDEX FAST FULL SCAN T2_IDX (cr=203 pr=98 pw=0 time=41156 us cost=55 size=300000 card=100000)(object id 92501)
Compared with the annotated output dbms_xplan provides (format => '+adaptive' shows the additional information: lines marked '-' are inactive), this representation of the plan is quite a mess.
Of course additional dynamic sampling (now called dynamic statistics) gives the CBO the missing details, but I assumed adaptive plans were an independent feature.
So finally here are my questions:
- are there additional prerequisites for the effective use of adaptive plans (did I misread Maria Colgan)?
- can anyone provide a link to a test scenario (including data creation scripts) showing adaptive plans in action?
- or is it indeed difficult to find a situation in which adaptive plans do a decent job (at least so far)?
Regards
Martin Preiss
After playing around with the feature a little more, it seems that:
- it is sometimes a good idea to actually read the white paper...
- the adaptive parts of a plan are indeed only relevant "on the first execution of a SQL statement"
- after the first execution, the IS_RESOLVED_ADAPTIVE_PLAN column in V$SQL is set to Y, saying that "the adaptive parts of the plan have been resolved to the final plan"
- without the format option '+adaptive', explain plan shows the cheapest variant of the adaptive parts prior to execution, while dbms_xplan.display_cursor shows the parts that were actually executed
- the cost values shown in the plan are still the values for the cheaper plan: if there is an adaptive decision between a hash join and an NL join and the runtime statistics favor the hash join, then the NL join would still show the cost of the hash join after execution. This is not a big surprise, given that the statistics are not changed - but it can sometimes cause a little confusion to see an operation with a cost value that does not fit (at least if you ignore the note "this is an adaptive plan")
- adaptive plans are not a solution for volatile data (with significant cardinality changes after the last statistics gathering): that is still a job for dynamic sampling (i.e. dynamic statistics)
- the representation of adaptive plans in a (formatted) sql trace is not very readable - it lacks the annotations marking the inactive parts of the adaptive plan
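The mechanism visible in the trace (the STATISTICS COLLECTOR row source sitting above the initial nested loop) can be sketched roughly as follows; the inflection_point value here is a made-up stand-in for whatever threshold Oracle computes internally, and the one-shot resolution mirrors the IS_RESOLVED_ADAPTIVE_PLAN behavior observed above:

```python
# Rough sketch of an adaptive-plan decision (illustrative, not Oracle internals):
# the default subplan feeds rows through a statistics collector; once the row
# count crosses the inflection point, the final plan flips to the hash join
# subplan, and the choice is then resolved for good.

def resolve_adaptive_plan(driving_rows, inflection_point=100):
    buffered = 0
    for _ in driving_rows:
        buffered += 1
        if buffered > inflection_point:
            # the cardinality estimate was too low -> switch to the hash join
            return "HASH JOIN"
    return "NESTED LOOPS"

# 10000 driving rows on the first execution -> resolves to HASH JOIN;
# the later drop to 1 actual row no longer matters, matching the test case above
print(resolve_adaptive_plan(range(10000)))  # HASH JOIN
print(resolve_adaptive_plan(range(1)))      # NESTED LOOPS
```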
-
How to tune a query - 'A-Rows' and 'E-Rows' are different
Hi all
I am using 11gR2 and I am trying to tune a query. In the execution plan below, my E-Rows and A-Rows are different. As you can see, they are nearly the same from lines 22 to 43, but from line 21 up to line 1 the E-Rows are wrong. For example, at row source 12, E-Rows is (1 * 179) and A-Rows is 8631.
All statistics have been gathered and are up to date. Do you know what could be the problem?
Also, at line 31, my partition is accessed with a local index and E-Rows and A-Rows are correct, but the index access takes 02:33 minutes to get 8758 rows, which seems excessive. What could I check/do to reduce it?

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name                      | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | O/1/M | Max-Tmp |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                           |      1 |        |     50 |00:05:55.08 |     203K|  69975 |   1094 |       |       |       |         |
|   1 |  SORT ORDER BY                      |                           |      1 |    179 |     50 |00:05:55.08 |     203K|  69975 |   1094 |  8307K|  1133K|       |         |
|   2 |   HASH UNIQUE                       |                           |      1 |    179 |   8631 |00:05:54.56 |     203K|  69317 |    180 |  4474K|  1861K|       |   16384 |
|   3 |    NESTED LOOPS                     |                           |      1 |        |   8631 |00:05:54.04 |     203K|  69137 |      0 |       |       |       |         |
|   4 |     NESTED LOOPS                    |                           |      1 |    179 |   8631 |00:05:23.24 |     194K|  63768 |      0 |       |       |       |         |
|   5 |      NESTED LOOPS                   |                           |      1 |    179 |   8631 |00:05:13.51 |     177K|  61396 |      0 |       |       |       |         |
|   6 |       NESTED LOOPS                  |                           |      1 |    179 |   8631 |00:04:03.74 |     151K|  52928 |      0 |       |       |       |         |
|   7 |        NESTED LOOPS                 |                           |      1 |    179 |   8631 |00:03:32.15 |     125K|  46425 |      0 |       |       |       |         |
|*  8 |         HASH JOIN                   |                           |      1 |    179 |   8631 |00:02:59.42 |   99470 |  40095 |      0 |  4379K|  1869K| 1/0/0 |         |
|*  9 |          HASH JOIN                  |                           |      1 |    179 |   8631 |00:02:59.15 |   98919 |  40067 |      0 |   909K|   909K| 1/0/0 |         |
|  10 |           TABLE ACCESS FULL         | DIM_PERFORMANCE           |      1 |      5 |      5 |00:00:00.01 |       2 |      0 |      0 |       |       |       |         |
|  11 |           NESTED LOOPS OUTER        |                           |      1 |    179 |   8631 |00:02:59.06 |   98917 |  40067 |      0 |       |       |       |         |
|  12 |            NESTED LOOPS             |                           |      1 |    179 |   8631 |00:02:58.87 |   98917 |  40067 |      0 |       |       |       |         |
|* 13 |             HASH JOIN               |                           |      1 |    191 |   8758 |00:00:25.47 |   18101 |   2398 |      0 |  1000K|  1000K| 1/0/0 |         |
|  14 |              TABLE ACCESS FULL      | DIM_TYP_FLUIDE            |      1 |      3 |      3 |00:00:00.01 |       2 |      0 |      0 |       |       |       |         |
|* 15 |              HASH JOIN              |                           |      1 |    191 |   8758 |00:00:25.32 |   18099 |   2398 |      0 |   955K|   955K| 1/0/0 |         |
|  16 |               TABLE ACCESS FULL     | DIM_TYP_PDC               |      1 |      3 |      3 |00:00:00.01 |       2 |      0 |      0 |       |       |       |         |
|  17 |               NESTED LOOPS          |                           |      1 |        |   8758 |00:00:25.19 |   18097 |   2398 |      0 |       |       |       |         |
|  18 |                NESTED LOOPS         |                           |      1 |    191 |   8758 |00:00:06.46 |    9339 |    315 |      0 |       |       |       |         |
|* 19 |                 HASH JOIN           |                           |      1 |    191 |   8758 |00:00:00.67 |     565 |      0 |      0 |   880K|   880K| 1/0/0 |         |
|  20 |                  TABLE ACCESS FULL  | DIM_OFFRE                 |      1 |     22 |     22 |00:00:00.01 |       2 |      0 |      0 |       |       |       |         |
|* 21 |                  HASH JOIN          |                           |      1 |    191 |   8758 |00:00:00.53 |     563 |      0 |      0 |   779K|   779K| 1/0/0 |         |
|  22 |                   NESTED LOOPS      |                           |      1 |        |      1 |00:00:00.01 |       5 |      0 |      0 |       |       |       |         |
|  23 |                    NESTED LOOPS     |                           |      1 |      1 |      1 |00:00:00.01 |       4 |      0 |      0 |       |       |       |         |
|* 24 |                     TABLE ACCESS FULL| DIM_CONTRAT              |      1 |      1 |      1 |00:00:00.01 |       3 |      0 |      0 |       |       |       |         |
|* 25 |                     INDEX UNIQUE SCAN| PK_CONTRACTANT_KEY       |      1 |      1 |      1 |00:00:00.01 |       1 |      0 |      0 |       |       |       |         |
|* 26 |                    TABLE ACCESS BY INDEX ROWID| DIM_CONTRACTANT |      1 |      1 |      1 |00:00:00.01 |       1 |      0 |      0 |       |       |       |         |
|  27 |                   TABLE ACCESS FULL | REL_SERVICE_POINT_CONTRAT |      1 |  54923 |  54923 |00:00:00.14 |     558 |      0 |      0 |       |       |       |         |
|* 28 |                 INDEX UNIQUE SCAN   | PK_SP_KEY                 |   8758 |      1 |   8758 |00:00:05.66 |    8774 |    315 |      0 |       |       |       |         |
|  29 |                TABLE ACCESS BY INDEX ROWID | DIM_SERVICE_POINT  |   8758 |      1 |   8758 |00:00:18.58 |    8758 |   2083 |      0 |       |       |       |         |
|  30 |             PARTITION RANGE SINGLE  |                           |   8758 |      1 |   8631 |00:02:33.32 |   80816 |  37669 |      0 |       |       |       |         |
|* 31 |              TABLE ACCESS BY LOCAL INDEX ROWID| FAC_DAILY_TRANS_PERFORM | 8758 |  1 |   8631 |00:02:33.18 |   80816 |  37669 |      0 |       |       |       |         |
|* 32 |               INDEX RANGE SCAN      | IDX3_FAC_DAY_TRAN_PERF    |   8758 |      6 |    103K|00:00:16.05 |   17929 |   4751 |      0 |       |       |       |         |
|  33 |            TABLE ACCESS BY INDEX ROWID | DIM_DIAGNOSYS          |   8631 |      1 |      0 |00:00:00.12 |       0 |      0 |      0 |       |       |       |         |
|* 34 |             INDEX UNIQUE SCAN       | PK_DIAGNOSYS_KEY          |   8631 |      1 |      0 |00:00:00.05 |       0 |      0 |      0 |       |       |       |         |
|  35 |           TABLE ACCESS FULL         | DIM_DATE                  |      1 |  21914 |  21914 |00:00:00.05 |     551 |     28 |      0 |       |       |       |         |
|  36 |          TABLE ACCESS BY INDEX ROWID | DIM_RADIO_MODULE         |   8631 |      1 |   8631 |00:00:32.65 |   25895 |   6330 |      0 |       |       |       |         |
|* 37 |           INDEX UNIQUE SCAN         | PK_RADMODKEY              |   8631 |      1 |   8631 |00:00:08.38 |   17264 |   2122 |      0 |       |       |       |         |
|  38 |         TABLE ACCESS BY INDEX ROWID | DIM_METER_MODULE          |   8631 |      1 |   8631 |00:00:31.51 |   25895 |   6503 |      0 |       |       |       |         |
|* 39 |          INDEX UNIQUE SCAN          | PK_METMODKEY              |   8631 |      1 |   8631 |00:00:10.17 |   17264 |   2212 |      0 |       |       |       |         |
|  40 |        TABLE ACCESS BY INDEX ROWID  | DIM_CUSTOMER              |   8631 |      1 |   8631 |00:01:09.66 |   25895 |   8468 |      0 |       |       |       |         |
|* 41 |         INDEX UNIQUE SCAN           | PK_CUSTOMER               |   8631 |      1 |   8631 |00:00:12.03 |   17264 |   2328 |      0 |       |       |       |         |
|* 42 |      INDEX UNIQUE SCAN              | PK_METER                  |   8631 |      1 |   8631 |00:00:09.63 |   17264 |   2372 |      0 |       |       |       |         |
|  43 |     TABLE ACCESS BY INDEX ROWID     | DIM_METER                 |   8631 |      1 |   8631 |00:00:30.71 |    8631 |   5369 |      0 |       |       |       |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   8 - access("T786"."DATE_KEY"="T1830"."DATE_KEY")
   9 - access("T786"."PERFORMANCE_KEY"="T2158"."PERFORMANCE_KEY")
  13 - access("T160"."FLUIDE_KEY"="T210"."FLUIDE_KEY")
  15 - access("T160"."TYP_PDC_KEY"="T217"."TYP_PDC_KEY")
  19 - access("T122"."OFFRE_KEY"="T298"."OFFRE_KEY")
  21 - access("T24"."CONTRAT_KEY"="T298"."CONTRAT_KEY" AND "T17"."CONTRACTANT_KEY"="T298"."CONTRACTANT_KEY")
  24 - filter("T24"."NOM_CONTRAT"='COMMUNAY ET REGION (SIE)')
  25 - access("T17"."CONTRACTANT_KEY"="T24"."CONTRACTANT_KEY")
  26 - filter("T17"."NOM_CONTRACTANT"='ER Rhône Alpes Auvergne')
  28 - access("T160"."SERVICE_POINT_KEY"="T298"."SERVICE_POINT_KEY")
  31 - filter("T786"."REPORTING_DATE"=TO_DATE(' 2013-02-09 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
  32 - access("T160"."SERVICE_POINT_KEY"="T786"."SERVICE_POINT_KEY")
  34 - access("T786"."DIAGNOSYS_KEY"="T2186"."DIAGNOSYS_KEY")
  37 - access("T786"."RADIO_MODULE_KEY"="T949"."RADIO_MODULE_KEY")
  39 - access("T786"."METER_MODULE_KEY"="T1191"."METER_MODULE_KEY")
  41 - access("T786"."CUSTOMER_KEY"="T2098"."CUSTOMER_KEY")
  42 - access("T786"."METER_KEY"="T1011"."METER_KEY")

Thank you very much!

|  30 |             PARTITION RANGE SINGLE  |                           |   8758 |      1 |   8631 |00:02:33.32 |   80816 |  37669 |      0 |       |       |       |         |
|* 31 |              TABLE ACCESS BY LOCAL INDEX ROWID| FAC_DAILY_TRANS_PERFORM | 8758 |  1 |   8631 |00:02:33.18 |   80816 |  37669 |      0 |       |       |       |         |
Hello
(1) for a significant query, it is rare that the optimizer gets all the cardinalities right
(2) but normally it is not really necessary; in most cases it is enough to be in the right ballpark (1-2 orders of magnitude)
(3) as others have already said, the optimizer in your case makes just one error - in a single step, when evaluating the cardinality of a multi-column hash join; the join steps above it simply propagate this error over and over
(4) getting join cardinality right is always difficult, especially when it comes to multi-column joins, both because you have several opportunities to make a mistake in the selectivities of the individual columns, and because there is a chance that the joined columns are correlated; also, the optimizer applies many adjustments, so-called 'sanity checks', which do not make things simpler
(5) you can try to create column group statistics on the joined columns for a more accurate join cardinality forecast
(6) then again, getting all the cardinalities right does not guarantee that you will get a good plan - for example, you may be limited by the lack of an appropriate index
(7) sometimes, on top of cardinality feedback (or even instead of it), it is better to tackle the problem from a different angle: look at which operations are inefficient (e.g. rejecting rows after spending a lot of I/O on acquiring them), see http://savvinov.com/2013/01/28/efficiency-based-sql-tuning/
(8) in your case, you have noticed yourself that the step
|* 31 | TABLE ACCESS BY LOCAL INDEX ROWID| FAC_DAILY_TRANS_PERFORM | 8758 | 1 | 8631 |00:02:33.18 | 80816 | 37669 | 0 | | | |
is the biggest problem for this plan, even though its cardinality estimate is very precise
(9) you can fix the problem by extending the IDX3_FAC_DAY_TRAN_PERF index with the REPORTING_DATE column: if you do this, then you won't need 63K logical reads to return 103K rows just to reject 90% of them - you will reject them at an earlier stage (the INDEX RANGE SCAN) and only need at most 8K reads to acquire the filtered rows by table rowid
Best regards
Nikolay
-
The identical query, unusable performance in one environment - please help
We are trying to upgrade from 10.2.0.4 to 11.2.0.2, but one of our critical queries has gone completely wrong (30 seconds in our current environment, 4000 in our upgraded environment).
Note the caveat that it will be very difficult for me to edit the SQL (it comes from another IT group, and their position is that it should not have to be changed, given that it works so well in the current production environment).
The query is exactly the same in both environments and carries the same SQL_ID, but the explain plans are different.
The environment in which the query works well is version 10.2.0.4; per the trace, the elapsed time is 30.15 seconds, 841 rows returned.
The new environment is 11.2.0.2; the elapsed time is 4035 seconds, 841 rows returned.
The environments are comparable in terms of CPU/memory/IO (both write to our NetApp NFS mounts).
SGA_MAX/TARGET and PGA_AGGREGATE_TARGET are the same in both environments, as are HASH_AREA_SIZE and SORT_AREA_SIZE.
The database tables are identical, and all of the indexes are the same in both environments. Fresh stats were gathered, and this behavior has persisted through several restarts of the databases.
I ran traces on the statements in both environments, and the performance difference seems to be due to direct path read/write temp waits:
The SQL
New environment (11.2.0.2), takes 4000 seconds according to tkprof:

SELECT DISTINCT a.emplid, a.name, rds.sa_get_stdnt_email_fn (a.emplid), a.req_term, a.req_term_ldesc, CASE WHEN (a.req_acad_plan = 'PKINXXXBBS' AND a.cum_gpa >= d.gpa) THEN NVL (c.num_met, 0) + 1 WHEN (b.gpa >= d.gpa AND a.req_acad_plan <> 'PKINXXXBBS') THEN NVL (c.num_met, 0) + 1 ELSE NVL (c.num_met, 0) END AS "Requirement Status", a.cum_total_passed AS "Cumulative Units", a.admit_term, a.admit_term_ldesc, a.acad_plan, a.acad_plan_ldesc, a.academic_level, a.academic_level_ldesc, TO_CHAR (a.rpt_date, 'MM/DD/YYYY') AS rpt_date, TO_CHAR (NVL (b.gpa, 0), '0.000') AS gpa, TO_CHAR (NVL (a.cum_gpa, 0), '0.000') AS cum_gpa
FROM sa.rec_sm_stdnt_deg_completion a,
( SELECT DISTINCT CASE WHEN SUM (b_sub.units_earned) = 0 THEN 0 ELSE SUM (b_sub.grade_points) / SUM (b_sub.units_earned) END AS gpa, b_sub.emplid, b_sub.acad_career, b_sub.acad_plan, b_sub.req_acad_plan, b_sub.req_term, b_sub.academic_level, b_sub.rqrmnt_group
FROM sa.rec_sm_stdnt_deg_completion b_sub, hrsa_extr.ps_rq_grp_tbl g3, hrsa_extr.ps_rq_main_tbl m3
WHERE b_sub.req_acad_plan IS NOT NULL AND b_sub.acad_career = 'UGRD' AND b_sub.acad_prog = 'UBACH' AND b_sub.acad_plan = b_sub.req_acad_plan AND b_sub.grade <> 'IP' AND b_sub.impact_flag = 'Y' AND g3.effdt = (SELECT MAX (g3_ed.effdt) FROM hrsa_extr.ps_rq_grp_tbl g3_ed WHERE g3_ed.rqrmnt_group = g3.rqrmnt_group AND g3_ed.effdt <= b_sub.req_term_begin_date) AND g3.rqrmnt_group = b_sub.rqrmnt_group AND m3.effdt = (SELECT MAX (m3_ed.effdt) FROM hrsa_extr.ps_rq_main_tbl m3_ed WHERE m3_ed.requirement = m3.requirement AND m3_ed.effdt <= b_sub.req_term_begin_date) AND m3.requirement = b_sub.requirement
GROUP BY b_sub.emplid, b_sub.acad_career, b_sub.acad_plan, b_sub.req_acad_plan, b_sub.req_term, b_sub.academic_level, b_sub.rqrmnt_group) b,
( SELECT c_sub.emplid, c_sub.acad_career, c_sub.acad_plan, c_sub.req_acad_plan, c_sub.req_term, c_sub.academic_level, c_sub.rqrmnt_group, COUNT (*) AS num_met FROM 
sa.rec_sm_stdnt_deg_completion c_sub, hrsa_extr.ps_rq_grp_tbl g2, hrsa_extr.ps_rq_main_tbl m2 WHERE c_sub.rqrmnt_line_status = 'COMP' AND c_sub.grade <> 'IP' AND c_sub.impact_flag = 'Y' AND c_sub.acad_career = 'UGRD' AND c_sub.acad_prog = 'UBACH' AND c_sub.acad_plan = c_sub.req_acad_plan AND g2.effdt = (SELECT MAX (g2_ed.effdt) FROM hrsa_extr.ps_rq_grp_tbl g2_ed WHERE g2_ed.rqrmnt_group = g2.rqrmnt_group AND g2_ed.effdt <= c_sub.req_term_begin_date) AND g2.rqrmnt_group = c_sub.rqrmnt_group AND m2.effdt = (SELECT MAX (m2_ed.effdt) FROM hrsa_extr.ps_rq_main_tbl m2_ed WHERE m2_ed.requirement = m2.requirement AND m2_ed.effdt <= c_sub.req_term_begin_date) AND m2.requirement = c_sub.requirement GROUP BY c_sub.emplid, c_sub.acad_career, c_sub.acad_plan, c_sub.req_acad_plan, c_sub.req_term, c_sub.academic_level, c_sub.rqrmnt_group) c, hrsa_extr.ps_smo_rdr_imp_pln d, hrsa_extr.ps_rq_grp_tbl g, hrsa_extr.ps_rq_main_tbl m WHERE a.acad_career = 'UGRD' AND a.acad_prog = 'UBACH' AND a.req_acad_plan IN (N'NUPPXXXBBS', N'NURPBASBBS', N'NURPXXXBBS') AND a.academic_level IN (N'10', N'20', N'30', N'40', N'50', N'GR') AND a.acad_plan = a.req_acad_plan AND a.impact_flag = 'Y' AND g.effdt = (SELECT MAX (g_ed.effdt) FROM hrsa_extr.ps_rq_grp_tbl g_ed WHERE g_ed.rqrmnt_group = g.rqrmnt_group AND g_ed.effdt <= a.req_term_begin_date) AND g.rqrmnt_group = a.rqrmnt_group AND m.effdt = (SELECT MAX (m_ed.effdt) FROM hrsa_extr.ps_rq_main_tbl m_ed WHERE m_ed.requirement = m.requirement AND m_ed.effdt <= a.req_term_begin_date) AND m.requirement = a.requirement AND a.emplid = b.emplid(+) AND a.acad_career = b.acad_career(+) AND a.acad_plan = b.acad_plan(+) AND a.req_acad_plan = b.req_acad_plan(+) AND a.academic_level = b.academic_level(+) AND a.req_term = b.req_term(+) AND a.rqrmnt_group = b.rqrmnt_group(+) AND a.emplid = c.emplid(+) AND a.acad_career = c.acad_career(+) AND a.acad_plan = c.acad_plan(+) AND a.req_acad_plan = c.req_acad_plan(+) AND a.academic_level = c.academic_level(+) AND a.req_term 
= c.req_term(+) AND a.rqrmnt_group = c.rqrmnt_group(+) AND d.acad_plan = a.req_acad_plan ORDER BY 6 DESC, 2 ASC;
Explain plan:

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
Plan hash value: 4117596694
-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                             |     1 |   314 | 15231   (1)| 00:03:03 |
|   1 |  SORT UNIQUE                     |                             |     1 |   314 | 15230   (1)| 00:03:03 |
|   2 |   NESTED LOOPS OUTER             |                             |     1 |   314 | 15227   (1)| 00:03:03 |
|   3 |    NESTED LOOPS OUTER            |                             |     1 |   285 | 15216   (1)| 00:03:03 |
|   4 |     NESTED LOOPS                 |                             |     1 |   256 | 15205   (1)| 00:03:03 |
|   5 |      NESTED LOOPS                |                             |     1 |   241 | 15204   (1)| 00:03:03 |
|   6 |       NESTED LOOPS               |                             |     1 |   223 | 15203   (1)| 00:03:03 |
|   7 |        NESTED LOOPS              |                             |    17 |   731 | 15186   (1)| 00:03:03 |
|   8 |         VIEW                     | VW_SQ_3                     |   998 | 27944 | 15186   (1)| 00:03:03 |
|   9 |          HASH GROUP BY           |                             |   998 | 62874 | 15186   (1)| 00:03:03 |
|  10 |           MERGE JOIN             |                             | 29060 |  1787K| 15184   (1)| 00:03:03 |
|  11 |            SORT JOIN             |                             |    26 |  1248 | 15180   (1)| 00:03:03 |
|  12 |             TABLE ACCESS BY INDEX ROWID | REC_SM_STDNT_DEG_COMPLETION | 26 | 1248 | 15179 (1)| 00:03:03 |
|* 13 |              INDEX SKIP SCAN     | REC0SM_STDNT_DEG_IDX        |    26 |       | 15168   (1)| 00:03:03 |
|* 14 |            SORT JOIN             |                             |  1217 | 18255 |     4  (25)| 00:00:01 |
|  15 |             INDEX FAST FULL SCAN | PS3RQ_GRP_TBL               |  1217 | 18255 |     3   (0)| 00:00:01 |
|* 16 |         INDEX UNIQUE SCAN        | PS_RQ_GRP_TBL               |     1 |    15 |     0   (0)| 00:00:01 |
|* 17 |        TABLE ACCESS BY USER ROWID | REC_SM_STDNT_DEG_COMPLETION |    1 |   180 |     1   (0)| 00:00:01 |
|* 18 |       INDEX RANGE SCAN           | PS_RQ_MAIN_TBL              |     1 |    18 |     1   (0)| 00:00:01 |
|  19 |        SORT AGGREGATE            |                             |     1 |    18 |            |          |
|  20 |         FIRST ROW                |                             |     1 |    18 |     2   (0)| 00:00:01 |
|* 21 |          INDEX RANGE SCAN (MIN/MAX) | PS_RQ_MAIN_TBL           |     1 |    18 |     2   (0)| 00:00:01 |
|* 22 |      INDEX FULL SCAN             | PS0SMO_RDR_IMP_PLN          |     1 |    15 |     1   (0)| 00:00:01 |
|* 23 |     VIEW PUSHED PREDICATE        |                             |     1 |    29 |    11  (19)| 00:00:01 |
|  24 |      SORT GROUP BY               |                             |     1 |    52 |    11  (19)| 00:00:01 |
|  25 |       VIEW                       | VM_NWVW_5                   |     1 |    52 |    10  (10)| 00:00:01 |
|* 26 |        FILTER                    |                             |       |       |            |          |
|  27 |         SORT GROUP BY            |                             |     1 |   165 |    10  (10)| 00:00:01 |
|* 28 |          FILTER                  |                             |       |       |            |          |
|  29 |           NESTED LOOPS           |                             |     1 |   165 |     7   (0)| 00:00:01 |
|  30 |            NESTED LOOPS          |                             |     1 |   147 |     6   (0)| 00:00:01 |
|  31 |             NESTED LOOPS         |                             |     1 |   117 |     5   (0)| 00:00:01 |
|* 32 |              TABLE ACCESS BY INDEX ROWID | REC_SM_STDNT_DEG_COMPLETION | 1 | 90 |  4   (0)| 00:00:01 |
|* 33 |               INDEX RANGE SCAN   | REC1SM_STDNT_DEG_IDX        |     1 |       |     3   (0)| 00:00:01 |
|* 34 |              INDEX RANGE SCAN    | PS_RQ_GRP_TBL               |     1 |    27 |     1   (0)| 00:00:01 |
|  35 |               SORT AGGREGATE     |                             |     1 |    15 |            |          |
|  36 |                FIRST ROW         |                             |     1 |    15 |     2   (0)| 00:00:01 |
|* 37 |                 INDEX RANGE SCAN (MIN/MAX)| PS_RQ_GRP_TBL      |     1 |    15 |     2   (0)| 00:00:01 |
|* 38 |             INDEX RANGE SCAN     | PS_RQ_MAIN_TBL              |     1 |    30 |     1   (0)| 00:00:01 |
|* 39 |            INDEX RANGE SCAN      | PS_RQ_MAIN_TBL              |     1 |    18 |     1   (0)| 00:00:01 |
|* 40 |    VIEW PUSHED PREDICATE         |                             |     1 |    29 |    11  (19)| 00:00:01 |
|  41 |     SORT GROUP BY                |                             |     1 |    32 |    11  (19)| 00:00:01 |
|  42 |      VIEW                        | VM_NWVW_4                   |     1 |    32 |    10  (10)| 00:00:01 |
|* 43 |       FILTER                     |                             |       |       |            |          |
|  44 |        SORT GROUP BY             |                             |     1 |   166 |    10  (10)| 00:00:01 |
|* 45 |         FILTER                   |                             |       |       |            |          |
|* 46 |          FILTER                  |                             |       |       |            |          |
|  47 |           NESTED LOOPS           |                             |     1 |   166 |     7   (0)| 00:00:01 |
|  48 |            NESTED LOOPS          |                             |     1 |   148 |     6   (0)| 00:00:01 |
|  49 |             NESTED LOOPS         |                             |     1 |   118 |     5   (0)| 00:00:01 |
|* 50 |              INDEX RANGE SCAN    | PS_RQ_GRP_TBL               |     1 |    27 |     2   (0)| 00:00:01 |
|* 51 |              TABLE ACCESS BY INDEX ROWID | REC_SM_STDNT_DEG_COMPLETION | 1 | 91 |  3   (0)| 00:00:01 |
|* 52 |               INDEX RANGE SCAN   | REC1SM_STDNT_DEG_IDX        |     1 |       |     2   (0)| 00:00:01 |
|* 53 |             INDEX RANGE SCAN     | PS_RQ_MAIN_TBL              |     1 |    30 |     1   (0)| 00:00:01 |
|* 54 |            INDEX RANGE SCAN      | PS_RQ_MAIN_TBL              |     1 |    18 |     1   (0)| 00:00:01 |
|  55 |             SORT AGGREGATE       |                             |     1 |    15 |            |          |
|  56 |              FIRST ROW           |                             |     1 |    15 |     2   (0)| 00:00:01 |
|* 57 |               INDEX RANGE SCAN (MIN/MAX) | PS_RQ_GRP_TBL       |     1 |    15 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------------------
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      6.59       6.66          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2   1521.36    4028.91    2256624  240053408          0        841
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4   1527.95    4035.57    2256624  240053408          0        841
Elapsed times include waiting on following events:

  Event waited on                          Times Waited  Max. Wait  Total Waited
  ----------------------------------------  -----------  ---------  ------------
  SQL*Net message to client                           2       0.00          0.00
  Disk file operations I/O                            3       0.07          0.11
  db file sequential read                         10829       0.12         16.62
  direct path write temp                          72445       0.30        293.71
  direct path read temp                           72445       0.58       2234.14
  asynch descriptor resize                           22       0.00          0.00
  SQL*Net more data to client                         9       0.00          0.00
  SQL*Net message from client                         2       0.84          1.25
********************************************************************************

The current production (10.2.0.4), takes 30 seconds:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2178773127
------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                             |     1 |   331 | 89446   (2)| 00:17:54 |
|   1 |  SORT UNIQUE                     |                             |     1 |   331 | 89445   (2)| 00:17:54 |
|   2 |   NESTED LOOPS                   |                             |     1 |   331 | 89440   (2)| 00:17:54 |
|   3 |    NESTED LOOPS                  |                             |     1 |   316 | 89439   (2)| 00:17:54 |
|*  4 |     HASH JOIN OUTER              |                             |     1 |   298 | 89438   (2)| 00:17:54 |
|*  5 |      HASH JOIN OUTER             |                             |     1 |   240 | 59625   (2)| 00:11:56 |
|   6 |       NESTED LOOPS               |                             |     1 |   182 | 29815   (2)| 00:05:58 |
|*  7 |        TABLE ACCESS FULL         | REC_SM_STDNT_DEG_COMPLETION |     1 |   167 | 29814   (2)| 00:05:58 |
|*  8 |        INDEX FULL SCAN           | PS0SMO_RDR_IMP_PLN          |     1 |    15 |     1   (0)| 00:00:01 |
|   9 |       VIEW                       |                             |     1 |    58 | 29809   (2)| 00:05:58 |
|  10 |        HASH GROUP BY             |                             |     1 |    71 | 29809   (2)| 00:05:58 |
|  11 |         VIEW                     |                             |     1 |    71 | 29809   (2)| 00:05:58 |
|* 12 |          FILTER                  |                             |       |       |            |          |
|  13 |           HASH GROUP BY          |                             |     1 |   198 | 29809   (2)| 00:05:58 |
|  14 |            NESTED LOOPS          |                             |     1 |   198 | 29806   (2)| 00:05:58 |
|* 15 |             HASH JOIN            |                             |     1 |   171 | 29805   (2)| 00:05:58 |
|* 16 |              HASH JOIN           |                             |     4 |   572 | 29802   (2)| 00:05:58 |
|* 17 |               TABLE ACCESS FULL  | REC_SM_STDNT_DEG_COMPLETION |     4 |   452 | 29798   (2)| 00:05:58 |
|  18 |               INDEX FAST FULL SCAN | PS2RQ_MAIN_TBL            |  1035 | 31050 |     3   (0)| 00:00:01 |
|  19 |              INDEX FAST FULL SCAN | PS2RQ_MAIN_TBL             |  1035 | 28980 |     3   (0)| 00:00:01 |
|* 20 |             INDEX RANGE SCAN     | PS_RQ_GRP_TBL               |     1 |    27 |     1   (0)| 00:00:01 |
|  21 |              SORT AGGREGATE      |                             |     1 |    15 |            |          |
|  22 |               FIRST ROW          |                             |     1 |    15 |     2   (0)| 00:00:01 |
|* 23 |                INDEX RANGE SCAN (MIN/MAX)| PS_RQ_GRP_TBL       |     1 |    15 |     2   (0)| 00:00:01 |
|  24 |      VIEW                        |                             |     1 |    58 | 29813   (2)| 00:05:58 |
|  25 |       HASH GROUP BY              |                             |     1 |    45 | 29813   (2)| 00:05:58 |
|  26 |        VIEW                      |                             |     1 |    45 | 29813   (2)| 00:05:58 |
|* 27 |         FILTER                   |                             |       |       |            |          |
|  28 |          HASH GROUP BY           |                             |     1 |   199 | 29813   (2)| 00:05:58 |
|  29 |           NESTED LOOPS           |                             |     1 |   199 | 29810   (2)| 00:05:58 |
|* 30 |            HASH JOIN             |                             |     1 |   172 | 29809   (2)| 00:05:58 |
|* 31 |             HASH JOIN            |                             |     8 |  1152 | 29805   (2)| 00:05:58 |
|* 32 |              TABLE ACCESS FULL   | REC_SM_STDNT_DEG_COMPLETION |     7 |   798 | 29802   (2)| 00:05:58 |
|  33 |              INDEX FAST FULL SCAN | PS2RQ_MAIN_TBL             |  1035 | 31050 |     3   (0)| 00:00:01 |
|  34 |             INDEX FAST FULL SCAN | PS2RQ_MAIN_TBL              |  1035 | 28980 |     3   (0)| 00:00:01 |
|* 35 |            INDEX RANGE SCAN      | PS_RQ_GRP_TBL               |     1 |    27 |     1   (0)| 00:00:01 |
|  36 |             SORT AGGREGATE       |                             |     1 |    15 |            |          |
|  37 |              FIRST ROW           |                             |     1 |    15 |     2   (0)| 00:00:01 |
|* 38 |               INDEX RANGE SCAN (MIN/MAX) | PS_RQ_GRP_TBL       |     1 |    15 |     2   (0)| 00:00:01 |
|* 39 |     INDEX RANGE SCAN             | PS_RQ_MAIN_TBL              |     1 |    18 |     1   (0)| 00:00:01 |
|  40 |      SORT AGGREGATE              |                             |     1 |    18 |            |          |
|  41 |       FIRST ROW                  |                             |     1 |    18 |     2   (0)| 00:00:01 |
|* 42 |        INDEX RANGE SCAN (MIN/MAX) | PS_RQ_MAIN_TBL             |     1 |    18 |     2   (0)| 00:00:01 |
|* 43 |    INDEX RANGE SCAN              | PS_RQ_GRP_TBL               |     1 |    15 |     1   (0)| 00:00:01 |
|  44 |     SORT AGGREGATE               |                             |     1 |    15 |            |          |
|  45 |      FIRST ROW                   |                             |     1 |    15 |     2   (0)| 00:00:01 |
|* 46 |       INDEX RANGE SCAN (MIN/MAX) | PS_RQ_GRP_TBL               |     1 |    15 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------------
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      1.49       1.51          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2     18.25      28.63     463672     932215          0        836
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4     19.75      30.15     463672     932215          0        836
Elapsed times include waiting on following events:

  Event waited on                          Times Waited  Max. Wait  Total Waited
  ----------------------------------------  -----------  ---------  ------------
  SQL*Net message to client                           2       0.00          0.00
  db file scattered read                          14262       0.31         13.13
  latch: shared pool                                  1       0.01          0.01
  db file sequential read                             7       0.00          0.00
  direct path write temp                            493       0.00          0.00
  direct path read temp                             493       0.00          0.00
  SQL*Net more data to client                        40       0.00          0.00
  SQL*Net message from client                         2       0.83          1.23
********************************************************************************

Published by: ngilbert on June 26, 2012 16:40
Published by: ngilbert on June 26, 2012 16:41

Hello,
As is almost always the case, your bad plan is the result of messed-up cardinality estimates. The biggest problem seems to be the cardinality in steps 12 and 13 of the bad plan:
|  12 | TABLE ACCESS BY INDEX ROWID | REC_SM_STDNT_DEG_COMPLETION |    26 |  1248 | 15179   (1)| 00:03:03 |
|* 13 |  INDEX SKIP SCAN            | REC0SM_STDNT_DEG_IDX        |    26 |       | 15168   (1)| 00:03:03 |
That is, the estimated cardinality is 26. But if we look at the plan, we can see that the actual number of rows is four orders of magnitude (!) higher than that. So of course everything goes wrong from there: the optimizer picks strange join methods, a wrong join order, etc.
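The quickest way to see exactly where estimates and reality diverge is to run the statement once with rowsource statistics and compare the E-Rows and A-Rows columns step by step. A generic sketch (not from the original post; run it in the session that executed the slow query):

-- Sketch: collect actual row counts per plan step, then compare
-- E-Rows (optimizer estimate) against A-Rows (actual) for each line.
ALTER SESSION SET statistics_level = ALL;

-- ... execute the slow SELECT here ...

-- NULL, NULL picks up the last statement executed in this session
SELECT *
  FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR (NULL, NULL, 'ALLSTATS LAST'));

The first plan line where A-Rows dwarfs E-Rows (here, the skip scan at step 13) is where the trouble starts.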
And if we look at the predicate:
13 - access("A"."ACAD_CAREER"='UGRD' AND "A"."ACAD_PROG"='UBACH' AND "A"."IMPACT_FLAG"='Y')
     filter("A"."ACAD_PLAN"="A"."REQ_ACAD_PLAN" AND "A"."ACAD_PROG"='UBACH' AND "A"."IMPACT_FLAG"='Y' AND "A"."ACAD_CAREER"='UGRD')
We can assume that the problem comes from correlated predicates: you select rows of the table with four equality predicates, yet about 260K rows survive this filtering. The optimizer thinks it is closer to 26, not 260K, most likely because the predicates are not really independent, so multiplying their individual selectivities grossly underestimates the result.
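If correlated columns are indeed the cause, 11g can be told about the correlation with a column-group extension, so the optimizer uses a combined number of distinct values instead of multiplying individual selectivities. A hedged sketch only: the owner and the choice of columns below are my assumptions from the access predicates, not from the thread, and the column-to-column predicate ACAD_PLAN = REQ_ACAD_PLAN is not covered by a simple column group:

-- Sketch (assumed owner/columns): create a column group for the
-- correlated filter columns, then regather stats to populate it.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS (
         ownname   => 'SA',
         tabname   => 'REC_SM_STDNT_DEG_COMPLETION',
         extension => '(ACAD_CAREER, ACAD_PROG, IMPACT_FLAG)')
  FROM DUAL;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS (
    ownname    => 'SA',
    tabname    => 'REC_SM_STDNT_DEG_COMPLETION',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/

After regathering, re-check the estimate for steps 12/13; if it moves from 26 toward the real row count, the correlated-predicate theory is confirmed.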
There is another thing that looks suspicious: the filter predicate is redundant with the access predicates. There was another discussion on OTN not so long ago where that turned out to be a symptom of a bug (and what makes it even more suspect here is that that case also involved a SKIP SCAN). See:
Re: CBO does not consider cheaper NL-Plan without guidance
Hope that was helpful.
Best regards
Nikolai