Bad cardinality estimate
Hi, I'm using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64-bit Production. The following parameters are at their default values:
OPTIMIZER_INDEX_COST_ADJ = 100
OPTIMIZER_INDEX_CACHING  = 0
I have a simple query, similar to the one below:
select c1, c2, c3 from tab1 where c1 = 'abc';

It is using the plan below:

---------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |      0 |00:00:00.27 |   26816 |
|*  1 |  TABLE ACCESS FULL| TAB1 |      1 |  1171K |      0 |00:00:00.27 |   26816 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("TAB1"."C1"='abc')

Now my concern is: even though the query returns 0 rows, how was the expected cardinality calculated to be 1171K? This is probably tilting the optimizer towards a full table scan of the table, even though an index ID1 exists on TAB1(C1) and the predicate sometimes returns very few records. Below are the stats for the table and for column C1:

select num_rows from dba_tables where table_name = 'TAB1';

  NUM_ROWS
----------
   1171095

select density, num_distinct from dba_tab_col_statistics
where table_name = 'TAB1' and column_name = 'C1';

            DENSITY NUM_DISTINCT
------------------- ------------
4.26950845149198E-7      1171095

Since a frequency histogram exists on this column, the expected cardinality should be about density * num_rows = 0.5.
select endpoint_number, endpoint_value from dba_tab_histograms
where table_name = 'TAB1' and column_name = 'C1';

ENDPOINT_NUMBER      ENDPOINT_VALUE
--------------- -------------------
         234219 3.80421485912222E35

When I force the index, the plan becomes:

Elapsed: 00:00:01.50

Execution Plan
----------------------------------------------------------
Plan hash value: 1013434088

---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      | 1171K |   93M |  870K  (1)| 01:13:50 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TAB1 | 1171K |   93M |  870K  (1)| 01:13:50 |
|*  2 |   INDEX RANGE SCAN          | ID1  | 1171K |       | 21124  (1)| 00:01:48 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("TAB1"."C1"='abc')
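For reference, the expectation described above can be replayed from the dictionary values quoted in the post. The following is a minimal sketch in plain Python standing in for the optimizer arithmetic; the formulas are the commonly documented CBO rules for an equality predicate, not something confirmed in this thread:

```python
# Reproduce the expected single-value estimate from the dictionary stats.
# Values copied from the dba_tables / dba_tab_col_statistics output above.
num_rows = 1171095
num_distinct = 1171095
density = 4.26950845149198e-7

# Without a histogram, a non-popular equality predicate is estimated as
# num_rows / num_distinct:
est_no_histogram = num_rows / num_distinct      # 1 row

# With a frequency histogram, a value that is NOT in the histogram is
# estimated using density (which for a frequency histogram is roughly
# 1 / (2 * num_rows)):
est_histogram_miss = density * num_rows         # about 0.5 rows

print(est_no_histogram, est_histogram_miss)
```

Either way the expectation is at most a row or so, nowhere near the 1171K in the plan, which is what makes the estimate look broken.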
You have a frequency histogram that captured a single value - there is a known bug for that case (although I can't quote bug numbers - check out Randolf Geist's blog). I'm a little surprised that it is still there in 11.2.0.3, but there are a few edge cases and maybe not all of them are fixed. I note that the histogram value you captured begins "ID:414" - given that you used a 20% sample, I wonder if the column has a million distinct values that all start with the same 15 characters (a select min(), max() from the table would satisfy my curiosity) - Oracle gets a bit lost with statistics on strings after (approximately) the first 6 characters.
Regards,
Jonathan Lewis
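The point about string statistics can be illustrated numerically. The ENDPOINT_VALUE shown above is (approximately) the first 15 bytes of the string interpreted as a base-256 number, but the stored value only carries about 15 significant decimal digits, so only the first 6 or so characters are recoverable. A sketch in plain Python - the decoding formula is the commonly described one for character-column endpoint values, and is an assumption here, not something stated in the thread:

```python
# Decode the leading characters of a character-column histogram
# ENDPOINT_VALUE. Oracle stores roughly the first 15 bytes of the string
# as a base-256 number; rounding to ~15 significant decimal digits means
# only the first few characters survive.
def decode_endpoint(endpoint_value):
    n = int(endpoint_value)
    chars = []
    for shift in range(14, -1, -1):          # 15 byte positions
        byte, n = divmod(n, 256 ** shift)
        if 32 <= byte < 127:                 # keep printable leading bytes
            chars.append(chr(byte))
        else:
            break
    return "".join(chars)

# The endpoint value quoted in the post decodes to a string beginning "ID:"
# (the later characters are lost to rounding, which is exactly the point):
print(decode_endpoint(3.80421485912222e35))
```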
Tags: Database
Similar Questions
-
Subquery Factoring - cardinality estimate good but bad sql response times
This is an Exadata 11.2.0.4.0 database, and all tables have up-to-date statistics. The cardinality estimates are good compared to the actual cardinalities. Is there a way to tune this SQL to reduce its response time?
Sorry for the long sql and the execution plan.
WITH SUBWITH0 AS (SELECT D1.c1 AS c1 FROM ( (SELECT D1.c1 AS c1 FROM (SELECT DISTINCT T7171.CH_ID_SYM AS c1 FROM DW.TM_R_REP T7171 WHERE ( T7171.CHILD_REP_ID = 939 ) ) D1 UNION SELECT D1.c1 AS c1 FROM (SELECT DISTINCT T7167.MEMBER_KEY_SYM AS c1 FROM DW.PC_T_REP T7167 WHERE ( T7167.ANCESTOR_KEY = 939 ) ) D1 ) ) D1 ), SUBWITH1 AS (SELECT D1.c1 AS c1 FROM ( (SELECT D1.c1 AS c1 FROM (SELECT DISTINCT T7171.CH_ID_SYM AS c1 FROM DW.TM_R_REP T7171 WHERE ( T7171.CHILD_REP_ID = 939 ) ) D1 UNION SELECT D1.c1 AS c1 FROM (SELECT DISTINCT T7167.MEMBER_KEY_SYM AS c1 FROM DW.PC_T_REP T7167 WHERE ( T7167.ANCESTOR_KEY = 939 ) ) D1 ) ) D1 ), SUBWITH2 AS (SELECT DISTINCT T7171.CH_ID_SYM AS c1 FROM ( DW.PC_T_REP T7167 LEFT OUTER JOIN DW.TM_R_REP T7171 ON T7167.ANCESTOR_KEY = T7171.CHILD_REP_ID AND T7167.SALESYEARMONTH = T7171.SALES_YEAR_MONTH) LEFT OUTER JOIN DW.TM_REP T6715 ON T7171.CHILD_REP_ID_N = T6715.REP_ID WHERE ( CASE WHEN T7171.CHILD_REP_ID_N LIKE '9999%' THEN concat(concat('UNASSIGNED', lpad(' ', 2)), CAST(T7167.TERRITORY_ID AS VARCHAR ( 20 ) )) ELSE concat(concat(concat(concat(T6715.FIRST_NAME, lpad(' ', 2)), T6715.MIDDLE_NAME), lpad(' ', 2)), T6715.LAST_NAME) END = 'JOES CRAMER' AND T7171.SALES_YEAR_MONTH BETWEEN '201505' AND '201505' AND T7171.CH_ID_SYM IN (SELECT DISTINCT D1.c1 AS c1 FROM SUBWITH0 D1 ) ) ), SUBWITH3 AS (SELECT MEMBER_KEY_SYM AS c1 FROM DW.PC_T_REP T7167 WHERE ( IS_LEAF = 1 ) ), SAWITH0 AS (SELECT DISTINCT CASE WHEN T7171.CHILD_REP_ID_N LIKE '9999%' THEN concat(concat('UNASSIGNED', lpad(' ', 2)), CAST(T7167.TERRITORY_ID AS VARCHAR ( 20 ) )) ELSE concat(concat(concat(concat(T6715.FIRST_NAME, lpad(' ', 2)), T6715.MIDDLE_NAME), lpad(' ', 2)), T6715.LAST_NAME) END AS c1, T6715.REP_NUM AS c2, T7171.SALES_YEAR_MONTH AS c3, T7315.MONTH_NUMERIC AS c4, CASE WHEN T7171.CH_ID_SYM IN (SELECT D1.c1 AS c1 FROM SUBWITH3 D1 ) THEN 1 ELSE 0 END AS c5, CAST(T7171.PARENT_REP_ID AS CHARACTER ( 30 ) ) AS c6, T7171.CH_ID_SYM AS c7, T7171.PARENT_REP_ID_SYM AS c8 FROM DW.TIM_MON 
T7315 , ( ( DW.PC_T_REP T7167 LEFT OUTER JOIN ( (SELECT TO_NUMBER(TO_CHAR(L_OPP.CloseDate,'YYYYMM')) AS Sales_Year_Month, Tm_Rep.Rep_Id AS Rep_Id, L_OPP.Account_Name__C AS Account_Name__C, L_OPP.Closedate AS Closedate, L_OPP.Forecastcategory AS Forecastcategory, L_OPP.Forecastcategoryname AS Forecastcategoryname, L_User.NAME AS Opp_Owner_S_Sales_Org__C, L_OPP.Opportunity_Id__C AS Opportunity_Id__C, L_OPP.Renewal_Date__C AS Renewal_Date__C, L_OPP.Total_Incremental__C AS Total_Incremental__C, L_OPP.Offer_Code__C AS Offer_Code__C, L_OPP.ID AS Opportunity_ID, L_OPP.TERRITORYID AS TERRITORYID, L_OPP.ACCOUNTID AS ACCOUNTID, L_OPP.OWNERID AS OWNERID, L_OPP.TOTAL_RENEWAL__C AS TOTAL_RENEWAL__C, L_OPP.NAME AS NAME, L_OPP.STAGENAME AS STAGE_NAME, L_OPP.STAGE_DESCRIPTION__C AS STAGE_DESCRIPTION, NVL( CASE WHEN L_OPP.Forecastcategory = 'Closed' AND( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN L_OPP.Total_Incremental__C END , 0) AS Closed_Oppurtunity, CASE WHEN L_OPP.Forecastcategory = 'Closed' AND ( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN 'Closed_Oppurtunity_Drill' END AS Closed_Oppurtunity_Drill, NVL( CASE WHEN L_OPP.Forecastcategoryname IN ('Pipeline', 'Potential', 'Commit') AND ( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN L_OPP.Total_Incremental__C END , 0) AS OPEN_Oppurtunity, CASE WHEN L_OPP.Forecastcategoryname IN ('Pipeline', 'Potential', 'Commit') AND ( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN 'OPEN_Oppurtunity_Drill' END AS OPEN_Oppurtunity_Drill, NVL( CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year1_Closed_Opp, CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN 'Renewal_Year1_Closed_Opp_Drill' END AS Renewal_Year1_Closed_Opp_Drill, NVL( CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year1_OPEN_Opp, 
CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN 'Renewal_Year1_OPEN_Opp_Drill' END AS Renewal_Year1_OPEN_Opp_Drill, NVL( CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year2_Closed_Opp, CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN 'Renewal_Year2_Closed_Opp_Drill' END AS Renewal_Year2_Closed_Opp_Drill, NVL( CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year2_OPEN_Opp, CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN 'Renewal_Year2_OPEN_Opp_Drill' END AS Renewal_Year2_OPEN_Opp_Drill FROM DW.OPP_C_DIM RIGHT OUTER JOIN RT.L_OPP ON (TO_CHAR(OPP_C_DIM.OFFER_CODE) =TO_CHAR(L_OPP.Offer_Code__C) AND (TO_CHAR(L_OPP.CloseDate,'YYYYMM')) = TO_CHAR(OPP_C_DIM.PERIOD)) LEFT OUTER JOIN RT.L_User ON (L_OPP.Ownerid=L_User.Id) LEFT OUTER JOIN DW.Tm_Rep ON (Tm_Rep.Rep_Num='0' ||L_User.Rep_Employee_Number__C) )) T774110 ON T7167.MEMBER_KEY = T774110.Rep_Id AND T7167.SALESYEARMONTH = T774110.Sales_Year_Month) LEFT OUTER JOIN DW.TM_R_REP T7171 ON T7167.ANCESTOR_KEY = T7171.CHILD_REP_ID AND T7167.SALESYEARMONTH = T7171.SALES_YEAR_MONTH) LEFT OUTER JOIN DW.TM_REP T6715 ON T7171.CHILD_REP_ID_N = T6715.REP_ID WHERE ( T774110.Sales_Year_Month = T7315.YEAR_MONTH AND T7171.CH_ID_SYM IN (SELECT DISTINCT D1.c1 AS c1 FROM SUBWITH2 D1 ) AND T7171.SALES_YEAR_MONTH BETWEEN '201505' AND '201505' AND T7171.CH_ID_SYM IN (SELECT DISTINCT D1.c1 AS c1 FROM SUBWITH1 D1 ) ) ), SAWITH1 AS (SELECT SUM(T774110.Renewal_Year2_OPEN_Opp) AS c9, SUM(T774110.Renewal_Year2_Closed_Opp) AS c10, SUM(T774110.Renewal_Year1_OPEN_Opp) AS c11, SUM(T774110.Renewal_Year1_Closed_Opp) AS c12, SUM(T774110.OPEN_Oppurtunity) AS 
c13, SUM(T774110.Closed_Oppurtunity) AS c14, T7315.MONTH_NUMERIC AS c15, T7171.CH_ID_SYM AS c16 FROM DW.TIM_MON T7315 , ( RT.L_ACCOUNT T765190 LEFT OUTER JOIN ( DW.PC_T_REP T7167 LEFT OUTER JOIN ( (SELECT TO_NUMBER(TO_CHAR(L_OPP.CloseDate,'YYYYMM')) AS Sales_Year_Month, Tm_Rep.Rep_Id AS Rep_Id, L_OPP.Account_Name__C AS Account_Name__C, L_OPP.Closedate AS Closedate, L_OPP.Forecastcategory AS Forecastcategory, L_OPP.Forecastcategoryname AS Forecastcategoryname, L_User.NAME AS Opp_Owner_S_Sales_Org__C, L_OPP.Opportunity_Id__C AS Opportunity_Id__C, L_OPP.Renewal_Date__C AS Renewal_Date__C, L_OPP.Total_Incremental__C AS Total_Incremental__C, L_OPP.Offer_Code__C AS Offer_Code__C, L_OPP.ID AS Opportunity_ID, L_OPP.TERRITORYID AS TERRITORYID, L_OPP.ACCOUNTID AS ACCOUNTID, L_OPP.OWNERID AS OWNERID, L_OPP.TOTAL_RENEWAL__C AS TOTAL_RENEWAL__C, L_OPP.NAME AS NAME, L_OPP.STAGENAME AS STAGE_NAME, L_OPP.STAGE_DESCRIPTION__C AS STAGE_DESCRIPTION, NVL( CASE WHEN L_OPP.Forecastcategory = 'Closed' AND( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN L_OPP.Total_Incremental__C END , 0) AS Closed_Oppurtunity, CASE WHEN L_OPP.Forecastcategory = 'Closed' AND ( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN 'Closed_Oppurtunity_Drill' END AS Closed_Oppurtunity_Drill, NVL( CASE WHEN L_OPP.Forecastcategoryname IN ('Pipeline', 'Potential', 'Commit') AND ( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN L_OPP.Total_Incremental__C END , 0) AS OPEN_Oppurtunity, CASE WHEN L_OPP.Forecastcategoryname IN ('Pipeline', 'Potential', 'Commit') AND ( OPP_C_DIM.OPPORTUNITIES_GROUP IS NULL ) THEN 'OPEN_Oppurtunity_Drill' END AS OPEN_Oppurtunity_Drill, NVL( CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year1_Closed_Opp, CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN 'Renewal_Year1_Closed_Opp_Drill' END AS Renewal_Year1_Closed_Opp_Drill, NVL( CASE WHEN L_OPP.Forecastcategory IN 
('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year1_OPEN_Opp, CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL1' THEN 'Renewal_Year1_OPEN_Opp_Drill' END AS Renewal_Year1_OPEN_Opp_Drill, NVL( CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year2_Closed_Opp, CASE WHEN L_OPP.Forecastcategory = 'Closed' AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN 'Renewal_Year2_Closed_Opp_Drill' END AS Renewal_Year2_Closed_Opp_Drill, NVL( CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN L_OPP.TOTAL_RENEWAL__C END , 0) AS Renewal_Year2_OPEN_Opp, CASE WHEN L_OPP.Forecastcategory IN ('Pipeline', 'Forecast', 'BestCase') AND OPP_C_DIM.OPPORTUNITIES_GROUP ='RENEWAL2' THEN 'Renewal_Year2_OPEN_Opp_Drill' END AS Renewal_Year2_OPEN_Opp_Drill FROM DW.OPP_C_DIM RIGHT OUTER JOIN RT.L_OPP ON (TO_CHAR(OPP_C_DIM.OFFER_CODE) =TO_CHAR(L_OPP.Offer_Code__C) AND (TO_CHAR(L_OPP.CloseDate,'YYYYMM')) = TO_CHAR(OPP_C_DIM.PERIOD)) LEFT OUTER JOIN RT.L_User ON (L_OPP.Ownerid=L_User.Id) LEFT OUTER JOIN DW.Tm_Rep ON (Tm_Rep.Rep_Num='0' ||L_User.Rep_Employee_Number__C) )) T774110 ON T7167.MEMBER_KEY = T774110.Rep_Id AND T7167.SALESYEARMONTH = T774110.Sales_Year_Month) ON T765190.ID = T774110.ACCOUNTID) LEFT OUTER JOIN DW.TM_R_REP T7171 ON T7167.ANCESTOR_KEY = T7171.CHILD_REP_ID AND T7167.SALESYEARMONTH = T7171.SALES_YEAR_MONTH WHERE ( T774110.Sales_Year_Month = T7315.YEAR_MONTH AND T7171.CH_ID_SYM IN (SELECT DISTINCT D1.c1 AS c1 FROM SUBWITH2 D1 ) AND T7171.SALES_YEAR_MONTH BETWEEN '201505' AND '201505' AND T7171.CH_ID_SYM IN (SELECT DISTINCT D1.c1 AS c1 FROM SUBWITH1 D1 ) ) GROUP BY T7171.CH_ID_SYM, T7315.MONTH_NUMERIC ) SELECT DISTINCT D2.c9 AS c1, D2.c10 AS c2, D2.c11 AS c3, D2.c12 AS c4, 
D2.c13 AS c5, D2.c14 AS c6, D1.c1 AS c7, D1.c2 AS c8, D1.c3 AS c9, D1.c4 AS c10, D1.c5 AS c11, D1.c6 AS c12, D1.c7 AS c13, D1.c8 AS c14 FROM SAWITH0 D1 INNER JOIN SAWITH1 D2 ON SYS_OP_MAP_NONNULL(D2.c15) = SYS_OP_MAP_NONNULL(D1.c4) AND SYS_OP_MAP_NONNULL(D2.c16) = SYS_OP_MAP_NONNULL(D1.c7) ORDER BY c10, c13
Below is the Real-Time SQL Monitoring report, followed by the Predicate Information section from dbms_xplan.display_cursor.
Global Stats
=============================================================================================================================
| Elapsed |   Cpu   |    IO    | Application | Cluster  |  Other   | Fetch | Buffer | Read | Read  | Write | Write |  Cell   |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Waits(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes | Reqs  | Bytes | Offload |
=============================================================================================================================
|     152 |     146 |     3.73 |        0.08 |     0.04 |     2.04 |     2 |    16M | 5223 |   1GB |     1 | 200KB |  95.11% |
=============================================================================================================================
SQL Plan Monitoring Details (Plan Hash Value=442312180)
===============================================================================================================================================================================================================================================
| Id | Operation | Name | Rows (Estim) | Cost | Time Active(s) | Start Active | Execs | Rows (Actual) | Read Reqs | Read Bytes | Cell Offload | Mem (Max) | Activity (%) | Activity Detail (# samples) |
===============================================================================================================================================================================================================================================
| 0 | SELECT STATEMENT | | | | 1 | +152 | 1 | 0 | | | | | 0.65 | Cpu (1) |
| 1 | PARTITION RANGE ALL | | 1 | 3892 | | | 1 | | | | | | | |
| 2 | TABLE ACCESS STORAGE FULL FIRST ROWS | PC_T_REP | 1 | 3892 | | | 37 | | 74 | 19MB | 78.45% | 17M | | |
| 3 | TEMP TABLE TRANSFORMATION | | | | 1 | +152 | 1 | 1 | | | | | | |
| 4 | LOAD AS SELECT | | | | 1 | +5 | 1 | 1 | | | | 278K | | |
| 5 | VIEW | | 105 | 3980 | 1 | +5 | 1 | 13637 | | | | | | |
| 6 | SORT UNIQUE | | 105 | 3980 | 1 | +5 | 1 | 13637 | | | | 757K | | |
| 7 | UNION-ALL | | | | 1 | +5 | 1 | 14033 | | | | | | |
| 8 | TABLE ACCESS STORAGE FULL | TM_R_REP | 22 | 88 | 1 | +5 | 1 | 36 | | | | | | |
| 9 | PARTITION RANGE ALL | | 83 | 3890 | 1 | +5 | 1 | 13997 | | | | | | |
| 10 | TABLE ACCESS STORAGE FULL | PC_T_REP | 83 | 3890 | 6 | +0 | 37 | 13997 | | | | 2M | 0.65 | Cell smart table scan (1) |
| 11 | LOAD AS SELECT | | | | 1 | +5 | 1 | 1 | | | | 278K | | |
| 12 | HASH UNIQUE | | 1 | 4166 | 1 | +5 | 1 | 1 | | | | 479K | | |
| 13 | HASH JOIN | | 1 | 4165 | 1 | +5 | 1 | 444 | | | | 1M | | |
| 14 | PART JOIN FILTER CREATE | :BF0000 | 3 | 4075 | 1 | +5 | 1 | 549 | | | | | | |
| 15 | HASH JOIN OUTER | | 3 | 4075 | 1 | +5 | 1 | 549 | | | | 1M | | |
| 16 | HASH JOIN | | 3 | 4068 | 1 | +5 | 1 | 549 | | | | 2M | | |
| 17 | VIEW | | 105 | 3980 | 1 | +5 | 1 | 13637 | | | | | | |
| 18 | SORT UNIQUE | | 105 | 3980 | 1 | +5 | 1 | 13637 | | | | 757K | | |
| 19 | UNION-ALL | | | | 1 | +5 | 1 | 14033 | | | | | | |
| 20 | TABLE ACCESS STORAGE FULL | TM_R_REP | 22 | 88 | 1 | +5 | 1 | 36 | | | | | | |
| 21 | PARTITION RANGE ALL | | 83 | 3890 | 1 | +5 | 1 | 13997 | | | | | | |
| 22 | TABLE ACCESS STORAGE FULL | PC_T_REP | 83 | 3890 | 1 | +5 | 37 | 13997 | | | | 2M | | |
| 23 | TABLE ACCESS STORAGE FULL | TM_R_REP | 1884 | 88 | 1 | +5 | 1 | 1929 | | | | | | |
| 24 | TABLE ACCESS STORAGE FULL | TM_REP | 7136 | 7 | 1 | +5 | 1 | 7137 | | | | | | |
| 25 | PARTITION RANGE SINGLE | | 7449 | 90 | 1 | +5 | 1 | 7449 | | | | | | |
| 26 | TABLE ACCESS STORAGE FULL | PC_T_REP | 7449 | 90 | 1 | +5 | 1 | 7449 | | | | | | |
| 27 | SORT UNIQUE | | 1 | 26032 | 1 | +152 | 1 | 1 | | | | 2048 | | |
| 28 | HASH JOIN OUTER | | 1 | 26031 | 72 | +81 | 1 | 8238 | | | | 4M | | |
| 29 | FILTER | | | | 74 | +79 | 1 | 8238 | | | | | 1.96 | Cpu (3) |
| 30 | NESTED LOOPS OUTER | | 1 | 26027 | 72 | +81 | 1 | 15M | | | | | 3.27 | Cpu (5) |
| 31 | HASH JOIN | | 1 | 26026 | 72 | +81 | 1 | 15M | | | | 447K | 18.95 | Cpu (29) |
| 32 | HASH JOIN OUTER | | 1 | 13213 | 1 | +81 | 1 | 332 | | | | 452K | | |
| 33 | HASH JOIN | | 1 | 13206 | 1 | +81 | 1 | 332 | | | | 1M | | |
| 34 | HASH JOIN | | 1 | 13199 | 1 | +81 | 1 | 444 | | | | 434K | | |
| 35 | HASH JOIN | | 1 | 13197 | 1 | +81 | 1 | 444 | | | | 290K | | |
| 36 | JOIN FILTER CREATE | :BF0000 | 1 | 13195 | 1 | +81 | 1 | 444 | | | | | | |
| 37 | HASH JOIN | | 1 | 13195 | 1 | +81 | 1 | 444 | | | | 2M | | |
| 38 | MERGE JOIN CARTESIAN | | 27 | 13107 | 1 | +81 | 1 | 7449 | | | | | | |
| 39 | HASH JOIN | | 1 | 13017 | 77 | +5 | 1 | 1 | | | | 750K | | |
| 40 | TABLE ACCESS STORAGE FULL | TIM_MON | 1 | 4 | 1 | +5 | 1 | 1 | | | | | | |
| 41 | VIEW | | 1 | 13013 | 1 | +81 | 1 | 1 | | | | | | |
| 42 | HASH GROUP BY | | 1 | 13013 | 1 | +81 | 1 | 1 | | | | 482K | | |
| 43 | HASH JOIN OUTER | | 1 | 13012 | 77 | +5 | 1 | 8238 | | | | 4M | | |
| 44 | NESTED LOOPS | | 1 | 13008 | 77 | +5 | 1 | 8238 | | | | | | |
| 45 | FILTER | | | | 77 | +5 | 1 | 8238 | | | | | 2.61 | Cpu (4) |
| 46 | NESTED LOOPS OUTER | | 1 | 13007 | 77 | +5 | 1 | 15M | | | | | 4.58 | Cpu (7) |
| 47 | HASH JOIN | | 1 | 13006 | 77 | +5 | 1 | 15M | | | | 424K | 11.76 | Cpu (18) |
| 48 | HASH JOIN | | 1 | 193 | 1 | +5 | 1 | 332 | | | | 1M | | |
| 49 | HASH JOIN | | 1 | 186 | 1 | +5 | 1 | 444 | | | | 420K | | |
| 50 | HASH JOIN | | 4 | 184 | 1 | +5 | 1 | 444 | | | | 290K | | |
| 51 | JOIN FILTER CREATE | :BF0002 | 1 | 94 | 1 | +5 | 1 | 1 | | | | | | |
| 52 | PART JOIN FILTER CREATE | :BF0001 | 1 | 94 | 1 | +5 | 1 | 1 | | | | | | |
| 53 | HASH JOIN | | 1 | 94 | 1 | +5 | 1 | 1 | | | | 290K | | |
| 54 | JOIN FILTER CREATE | :BF0003 | 1 | 6 | 1 | +5 | 1 | 1 | | | | | | |
| 55 | MERGE JOIN CARTESIAN | | 1 | 6 | 1 | +5 | 1 | 1 | | | | | | |
| 56 | TABLE ACCESS STORAGE FULL | TIM_MON | 1 | 4 | 1 | +5 | 1 | 1 | | | | | | |
| 57 | BUFFER SORT | | 1 | 2 | 1 | +5 | 1 | 1 | | | | 2048 | | |
| 58 | VIEW | VW_NSO_1 | 1 | 2 | 1 | +5 | 1 | 1 | | | | | | |
| 59 | HASH UNIQUE | | 1 | | 1 | +5 | 1 | 1 | | | | 485K | | |
| 60 | VIEW | | 1 | 2 | 1 | +5 | 1 | 1 | | | | | | |
| 61 | TABLE ACCESS STORAGE FULL | SYS_TEMP_0FD9D71E1_B445AE36 | 1 | 2 | 1 | +5 | 1 | 1 | | | | | | |
| 62 | JOIN FILTER USE | :BF0003 | 1884 | 88 | 1 | +5 | 1 | 1 | | | | | | |
| 63 | TABLE ACCESS STORAGE FULL | TM_R_REP | 1884 | 88 | 1 | +5 | 1 | 1 | | | | | | |
| 64 | JOIN FILTER USE | :BF0002 | 7449 | 90 | 1 | +5 | 1 | 444 | | | | | | |
| 65 | PARTITION RANGE SINGLE | | 7449 | 90 | 5 | +1 | 1 | 444 | | | | | 0.65 | Cpu (1) |
| 66 | TABLE ACCESS STORAGE FULL | PC_T_REP | 7449 | 90 | 1 | +5 | 1 | 444 | | | | | | |
| 67 | VIEW | | 105 | 2 | 1 | +5 | 1 | 13637 | | | | | | |
| 68 | TABLE ACCESS STORAGE FULL | SYS_TEMP_0FD9D71E0_B445AE36 | 105 | 2 | 1 | +5 | 1 | 13637 | | | | | | |
| 69 | TABLE ACCESS STORAGE FULL | TM_REP | 7136 | 7 | 1 | +5 | 1 | 7137 | | | | | | |
| 70 | TABLE ACCESS STORAGE FULL | L_OP | 19382 | 12813 | 77 | +5 | 1 | 43879 | 565 | 551MB | 98.18% | 15M | | |
| 71 | TABLE ACCESS BY INDEX ROWID | L_US | 1 | 1 | 79 | +3 | 15M | 15M | 26 | 208KB | | | 19.61 | Cpu (30) |
| 72 | INDEX UNIQUE SCAN | L_US_PK | 1 | | 77 | +5 | 15M | 15M | 2 | 16384 | | | 9.15 | Cpu (14) |
| 73 | INDEX UNIQUE SCAN | L_A_PK | 1 | 1 | 151 | +2 | 8238 | 8238 | 3269 | 26MB | | | 2.61 | Cpu (1) |
| | | | | | | | | | | | | | | cell single block physical read (3) |
| 74 | TABLE ACCESS STORAGE FULL | OPP_C_DIM | 2304 | 4 | 1 | +81 | 1 | 2304 | 3 | 112KB | | | | |
| 75 | BUFFER SORT | | 7449 | 13107 | 1 | +81 | 1 | 7449 | | | | 370K | | |
| 76 | PARTITION RANGE SINGLE | | 7449 | 90 | 1 | +81 | 1 | 7449 | | | | | | |
| 77 | TABLE ACCESS STORAGE FULL | PC_T_REP | 7449 | 90 | 1 | +81 | 1 | 7449 | | | | | | |
| 78 | TABLE ACCESS STORAGE FULL | TM_R_REP | 1884 | 88 | 1 | +81 | 1 | 1929 | | | | | | |
| 79 | VIEW | | 1 | 2 | 1 | +81 | 1 | 1 | | | | | | |
| 80 | JOIN FILTER USE | :BF0000 | 1 | 2 | 1 | +81 | 1 | 1 | | | | | | |
| 81 | TABLE ACCESS STORAGE FULL | SYS_TEMP_0FD9D71E1_B445AE36 | 1 | 2 | 1 | +81 | 1 | 1 | | | | | | |
| 82 | VIEW | | 105 | 2 | 1 | +81 | 1 | 13637 | | | | | | |
| 83 | TABLE ACCESS STORAGE FULL | SYS_TEMP_0FD9D71E0_B445AE36 | 105 | 2 | 1 | +81 | 1 | 13637 | | | | | | |
| 84 | TABLE ACCESS STORAGE FULL | TM_REP | 7136 | 7 | 1 | +81 | 1 | 7137 | | | | | | |
| 85 | TABLE ACCESS STORAGE FULL | TM_REP | 7136 | 7 | 1 | +81 | 1 | 7137 | | | | | | |
| 86 | TABLE ACCESS STORAGE FULL | L_OP | 19382 | 12813 | 72 | +81 | 1 | 43879 | 593 | 577MB | 98.44% | 15M | | |
| 87 | TABLE ACCESS BY INDEX ROWID | L_US | 1 | 1 | 72 | +81 | 15M | 15M | | | | | 13.73 | Cpu (21) |
| 88 | INDEX UNIQUE SCAN | L_US_PK | 1 | | 73 | +80 | 15M | 15M | | | | | 9.80 | Cpu (15) |
| 89 | TABLE ACCESS STORAGE FULL | OPP_C_DIM | 2304 | 4 | 1 | +152 | 1 | 2304 | | | | | | |
===============================================================================================================================================================================================================================================
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("MEMBER_KEY_SYM"=:B1 AND "IS_LEAF"=1))
   8 - storage("T7171"."CHILD_REP_ID"=939)
       filter("T7171"."CHILD_REP_ID"=939)
  10 - storage("T7167"."ANCESTOR_KEY"=939)
       filter("T7167"."ANCESTOR_KEY"=939)
  13 - access("T7167"."SALESYEARMONTH"="T7171"."SALES_YEAR_MONTH" AND
              "T7167"."ANCESTOR_KEY"="T7171"."CHILD_REP_ID")
       filter(CASE WHEN TO_CHAR("T7171"."CHILD_REP_ID_N") LIKE '9999%' THEN
              'UNASSIGNED'||LPAD(' ',2)||CAST("T7167"."TERRITORY_ID" AS VARCHAR(20))
              ELSE "T6715"."FIRST_NAME"||LPAD(' ',2)||"T6715"."MIDDLE_NAME"||LPAD(' ',2)||"T6715"."LAST_NAME"
              END='JOES CRAMER')
  15 - access("T7171"."CHILD_REP_ID_N"="T6715"."REP_ID")
  16 - access("T7171"."CH_ID_SYM"="D1"."C1")
  20 - storage("T7171"."CHILD_REP_ID"=939)
       filter("T7171"."CHILD_REP_ID"=939)
  22 - storage("T7167"."ANCESTOR_KEY"=939)
       filter("T7167"."ANCESTOR_KEY"=939)
  23 - storage("T7171"."SALES_YEAR_MONTH"=201505)
       filter("T7171"."SALES_YEAR_MONTH"=201505)
  26 - storage("T7167"."SALESYEARMONTH"=201505)
       filter("T7167"."SALESYEARMONTH"=201505)
  28 - access(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM')=TO_CHAR("OPP_C_DIM"."PERIOD") AND
              TO_CHAR("OPP_C_DIM"."OFFER_CODE")="L_OP"."OFFER_CODE__C")
  29 - filter("TM_REP"."REP_NUM"='0'||"L_US"."REP_EMPLOYEE_NUMBER__C")
  31 - access("T7315"."YEAR_MONTH"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM')) AND
              "T7167"."SALESYEARMONTH"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM')))
  32 - access("T7171"."CHILD_REP_ID_N"="T6715"."REP_ID")
  33 - access("T7167"."MEMBER_KEY"="TM_REP"."REP_ID")
  34 - access("T7171"."CH_ID_SYM"="D1"."C1")
  35 - access("T7171"."CH_ID_SYM"="D1"."C1")
  37 - access(SYS_OP_MAP_NONNULL("D2"."C16")=SYS_OP_MAP_NONNULL("T7171"."CH_ID_SYM") AND
              "T7167"."SALESYEARMONTH"="T7171"."SALES_YEAR_MONTH" AND
              "T7167"."ANCESTOR_KEY"="T7171"."CHILD_REP_ID")
  39 - access(SYS_OP_MAP_NONNULL("D2"."C15")=SYS_OP_MAP_NONNULL("T7315"."MONTH_NUMERIC"))
  40 - storage("T7315"."YEAR_MONTH"=201505)
       filter("T7315"."YEAR_MONTH"=201505)
  43 - access(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM')=TO_CHAR("OPP_C_DIM"."PERIOD") AND
              TO_CHAR("OPP_C_DIM"."OFFER_CODE")="L_OP"."OFFER_CODE__C")
  45 - filter("TM_REP"."REP_NUM"='0'||"L_US"."REP_EMPLOYEE_NUMBER__C")
  47 - access("T7315"."YEAR_MONTH"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM')) AND
              "T7167"."SALESYEARMONTH"=TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM')))
  48 - access("T7167"."MEMBER_KEY"="TM_REP"."REP_ID")
  49 - access("T7171"."CH_ID_SYM"="D1"."C1")
  50 - access("T7167"."SALESYEARMONTH"="T7171"."SALES_YEAR_MONTH" AND
              "T7167"."ANCESTOR_KEY"="T7171"."CHILD_REP_ID")
  53 - access("T7171"."CH_ID_SYM"="C1")
  56 - storage("T7315"."YEAR_MONTH"=201505)
       filter("T7315"."YEAR_MONTH"=201505)
  63 - storage(("T7171"."SALES_YEAR_MONTH"=201505 AND SYS_OP_BLOOM_FILTER(:BF0000,"T7171"."CH_ID_SYM")))
       filter(("T7171"."SALES_YEAR_MONTH"=201505 AND SYS_OP_BLOOM_FILTER(:BF0000,"T7171"."CH_ID_SYM")))
  66 - storage(("T7167"."SALESYEARMONTH"=201505 AND SYS_OP_BLOOM_FILTER(:BF0000,"T7167"."SALESYEARMONTH","T7167"."ANCESTOR_KEY")))
       filter(("T7167"."SALESYEARMONTH"=201505 AND SYS_OP_BLOOM_FILTER(:BF0000,"T7167"."SALESYEARMONTH","T7167"."ANCESTOR_KEY")))
  70 - storage((TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))>=201505 AND
              TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))<=201505))
       filter((TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))>=201505 AND
              TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))<=201505))
  72 - access("L_OP"."OWNERID"="L_US"."ID")
  73 - access("T765190"."ID"="L_OP"."ACCOUNTID")
  77 - storage("T7167"."SALESYEARMONTH"=201505)
       filter("T7167"."SALESYEARMONTH"=201505)
  78 - storage("T7171"."SALES_YEAR_MONTH"=201505)
       filter("T7171"."SALES_YEAR_MONTH"=201505)
  81 - storage(SYS_OP_BLOOM_FILTER(:BF0000,"C0"))
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"C0"))
  86 - storage((TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))>=201505 AND
              TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))<=201505))
       filter((TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))>=201505 AND
              TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("L_OP"."CLOSEDATE"),'YYYYMM'))<=201505))
  88 - access("L_OP"."OWNERID"="L_US"."ID")
Note
-----
   - dynamic sampling used for this statement (level=4)
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
Although the tables have statistics, why is dynamic sampling being used? And why are plan lines 71, 72, 87 and 88 executed 15 million times? I'm curious because that is where most of the time is spent. How can we reduce those 15 million probes?
Any suggestions to reduce the SQL response time would be helpful.
Post edited by Yasu: masked sensitive information (literal values).
YASU says:
For educational purposes, could you please clarify why the optimizer evaluated the join condition in the FILTER operation at 15 million rows?
This is unusual, but TM_REP is joined to another table (PC_T_REP) via a join expression, so maybe that explains why the join was deferred to a FILTER operation - it could also be a side effect of the combination of ANSI join processing (Oracle internally transforms ANSI joins into Oracle join syntax) and the internal transformation of the outer join.
I'm very curious to know how this is possible - could you please give us the hint/trick that can be used to push the inner join down the execution plan so it is evaluated as early as possible, to reduce the data to be processed? I have another SQL where the situation is almost similar - a HASH JOIN operation filters 2 million rows and returns 0 rows. I can post the details of that SQL here, but I don't want to mix different SQL questions in the same post. Please let me know whether you would like the details of that SQL in this same thread or in a different one. I have searched for this type of information but to no avail, so could you please suggest how this is possible - if not for this long SQL, then at least with a few examples/suggestions?
Normally you can influence this through an appropriate join order, for example with the LEADING hint; for filter subqueries the PUSH_SUBQ hint can also be useful to apply filtering early. But here Franck's comment is particularly important - by addressing the Cartesian join, this problem should become less relevant.
As I already said, I would recommend starting from scratch here and thinking about what this query is actually supposed to mean, and about the question of why most of the outer joins are actually converted into inner joins - does the current query example return the correct result?
Randolf
-
I am using Oracle version 11.2.0.4.0.
I have the following stats for two tables, with no histograms on the columns.
Table T1 - NUM_ROWS - 8900759
-----------------------------
COLUMN_NAME   NUM_NULLS   NUM_DISTINCT   DENSITY
C1            0           100800         9.92063492063492E-6
C2            0           7184           0.000139198218262806
Table T2 - NUM_ROWS - 28835
---------------------------
COLUMN_NAME   NUM_NULLS   NUM_DISTINCT   DENSITY
C1            0           101            0.0099009900990099
C2            0           39             0.0256410256410256

Query:
------
select * from t1, t2
where t1.c1 = t2.c1;
Execution Plan
----------------------------------------------------------
Plan hash value: 4149194932

--------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      | 2546K |  675M |         | 65316  (1)| 00:13:04 |       |       |
|*  1 |  HASH JOIN                  |      | 2546K |  675M |   5944K | 65316  (1)| 00:13:04 |       |       |
|   2 |   TABLE ACCESS STORAGE FULL | T2   | 28835 | 5603K |         |   239  (1)| 00:00:03 |       |       |
|   3 |   PARTITION RANGE ALL       |      | 8900K |  670M |         | 26453  (1)| 00:05:18 |     1 |     2 |
|   4 |    TABLE ACCESS STORAGE FULL| T1   | 8900K |  670M |         | 26453  (1)| 00:05:18 |     1 |     2 |
--------------------------------------------------------------------------------------------------------------
As per the rule below:

Join selectivity =
    ((num_rows(t1) - num_nulls(t1.c1)) / num_rows(t1)) *
    ((num_rows(t2) - num_nulls(t2.c1)) / num_rows(t2)) /
    greater(num_distinct(t1.c1), num_distinct(t2.c1))

Join selectivity = (((28835 - 0) / 28835) * ((8900759 - 0) / 8900759)) / 100800

Join cardinality = join selectivity * num_rows(t1) * num_rows(t2)
                 = ((((28835 - 0) / 28835) * ((8900759 - 0) / 8900759)) / 100800) * (8900759 * 28835)
                 = 2546164.54

which corresponds to the output of the plan above. But when I add another join condition, as below, I cannot understand how the join cardinality becomes 28835. And how would the behaviour differ in the presence of histograms?
Select * from T1, T2
WHERE t1.c1 = t2.c1
and t1.c2 = t2.c2;
Execution plan
----------------------------------------------------------
Plan hash value: 1645075573
---------------------------------------------------------------------------------------------------------------------------------
| Id | Operation                      | Name    | Rows  | Bytes | TempSpc | Cost (%CPU) | Time     | Pstart  | Pstop   |
---------------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT               |         | 28835 | 7828K |         | 65316   (1) | 00:13:04 |         |         |
|* 1 |  HASH JOIN                     |         | 28835 | 7828K |   5944K | 65316   (1) | 00:13:04 |         |         |
|  2 |   PART JOIN FILTER CREATE      | :BF0000 | 28835 | 5603K |         |   239   (1) | 00:00:03 |         |         |
|  3 |    TABLE ACCESS STORAGE FULL   | T2      | 28835 | 5603K |         |   239   (1) | 00:00:03 |         |         |
|  4 |   PARTITION RANGE JOIN-FILTER  |         | 8900K |  670M |         | 26453   (1) | 00:05:18 | :BF0000 | :BF0000 |
|  5 |    TABLE ACCESS STORAGE FULL   | T1      | 8900K |  670M |         | 26453   (1) | 00:05:18 | :BF0000 | :BF0000 |
---------------------------------------------------------------------------------------------------------------------------------

Combined join selectivity = selectivity(c1) * selectivity(c2)
= ((((28835 - 0) / 28835) * ((8900759 - 0) / 8900759)) / 100800) * ((((28835 - 101) / 28835) * ((8900759 - 0) / 8900759)) / 7184)
Combined join cardinality = combined selectivity * num_rows(t1) * num_rows(t2)
= ((((28835 - 0) / 28835) * ((8900759 - 0) / 8900759)) / 100800) * ((((28835 - 101) / 28835) * ((8900759 - 0) / 8900759)) / 7184) * (8900759 * 28835)
= 353.18, but that does not match the 28835 in the plan output above.
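The arithmetic in both calculations above can be checked numerically. Below is a quick sketch (assuming the statistics quoted earlier in the thread: num_rows(T1) = 8900759, num_rows(T2) = 28835, and the num_distinct values shown; the (28835 - 101) term is copied verbatim from the original post):

```python
# Re-run the poster's join cardinality arithmetic with the statistics
# quoted in the thread (assumed values, not fetched from Oracle).
n_t1, n_t2 = 8900759, 28835          # num_rows for T1 and T2
nd_t1_c1, nd_t1_c2 = 100800, 7184    # num_distinct on T1.C1 / T1.C2
nd_t2_c1, nd_t2_c2 = 101, 39         # num_distinct on T2.C1 / T2.C2

# Sanity check: for these columns, density is simply 1 / num_distinct.
assert abs(1 / nd_t1_c1 - 9.92063492063492e-6) < 1e-18
assert abs(1 / nd_t1_c2 - 0.000139198218262806) < 1e-15

# Single-column join: sel = 1 / greater(num_distinct(t1.c1), num_distinct(t2.c1))
sel_c1 = 1 / max(nd_t1_c1, nd_t2_c1)
card_one_col = sel_c1 * n_t1 * n_t2
print(round(card_one_col, 2))        # ~2546164.54, the 2546K of the first plan

# Two-column join, multiplying the per-column selectivities as the poster did.
sel_c2 = ((n_t2 - 101) / n_t2) / max(nd_t1_c2, nd_t2_c2)
card_two_col = sel_c1 * sel_c2 * n_t1 * n_t2
print(round(card_two_col, 2))        # ~353.18, far from the 28835 in the plan
```

Note that the 28835 the plan actually shows happens to equal num_rows(T2), which suggests the optimizer applied some sanity bound to the multi-column estimate rather than the multiplied selectivities, but the thread does not confirm the exact mechanism.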
--> C2 is the partitioning column of table T2. T1 is not partitioned.
--> Table T2 has two range partitions. One of them is empty; the data resides in a single partition.
--> Since one partition is empty, only one partition needs to be visited for the final result.
--> I used "set autotrace traceonly explain" to get the plan for the query.
--> Here are the max and min of c1 and c2 for T2:
max(c1)   min(c1)   max(c2)               min(c2)
383759    86        2/28/2011 23:59:38    2/28/2011 12:00:02 AM
Here are the max and min of c1 and c2 for T1:
max(c1)   min(c1)   max(c2)               min(c2)
354087    4860      2/28/2011 23:55:47    2/28/2011 12:07:49 AM
--> Given below is the plan with the predicate section.
Execution plan
----------------------------------------------------------
Plan hash value: 1645075573
---------------------------------------------------------------------------------------------------------------------------------
| Id | Operation                      | Name    | Rows  | Bytes | TempSpc | Cost (%CPU) | Time     | Pstart  | Pstop   |
---------------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT               |         | 28835 | 8166K |         | 70364   (1) | 00:14:05 |         |         |
|* 1 |  HASH JOIN                     |         | 28835 | 8166K |   5944K | 70364   (1) | 00:14:05 |         |         |
|  2 |   PART JOIN FILTER CREATE      | :BF0000 | 28835 | 5603K |         |   239   (1) | 00:00:03 |         |         |
|  3 |    TABLE ACCESS STORAGE FULL   | T1      | 28835 | 5603K |         |   239   (1) | 00:00:03 |         |         |
|  4 |   PARTITION RANGE JOIN-FILTER  |         | 8900K |  772M |         | 26453   (1) | 00:05:18 | :BF0000 | :BF0000 |
|  5 |    TABLE ACCESS STORAGE FULL   | T2      | 8900K |  772M |         | 26453   (1) | 00:05:18 | :BF0000 | :BF0000 |
---------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."C2"="T1"."C2" AND "T2"."C1"="T1"."C1")
--> Below are the values in the current data set for T2 having count > 10000:
C1       C2                    Count(*)
171966   2/28/2011 07:21:14    14990
41895    2/28/2011 08:41:36    12193
7408     2/28/2011 06:16:20    12158
53120    2/28/2011 06:16:13    7931
51724    2/28/2011 18:03:22    6783
51724    2/28/2011 18:02:58    6757
51724    2/28/2011 16:02:22    6451
51724    2/28/2011 16:02:01    6388
51724    2/28/2011 14:01:29    5979
234233   2/28/2011 07:21:14    5975
51724    2/28/2011 14:01:09    5917
7408     2/28/2011 06:16:13    5355
51724    2/28/2011 20:04:18    5074
51724    2/28/2011 20:03:54    5058
Below are the values in the current data set for T1 having count > 75:
C1      C2                    Count(*)
4860    2/28/2011 19:33:45
31217   2/28/2011 23:27:54
31217   2/28/2011 23:48:14
4860    2/28/2011 17:36:07
4860    2/28/2011 20:00:11
4860    2/28/2011 18:20:13
4860    2/28/2011 14:35:39
4860    2/28/2011 19:48:06
4860    2/28/2011 12:30:29
4860    2/28/2011 15:32:31
4860    2/28/2011 17:48:05
4860    2/28/2011 17:02:26
4860    2/28/2011 22:27:02
--> Yes, the join is targeted at the larger partition, because the other one is just empty.
--> Here are the stats and the plan after gathering extended statistics on the column group (c1, c2) of T1 (by converting it to a physical table), with no histogram. It now gives a better estimate, much closer to the real cardinality. But the problem is that in reality table T1 is a global temporary table, so I cannot gather extended stats on it. Is there any other workaround for this?
column_name                      num_distinct   histogram   density
SYS_STUMW3X8MDKZEJOG$AHPEND1W$   2699           NONE        0.000370507595405706
Execution plan
----------------------------------------------------------
Plan hash value: 1645075573
---------------------------------------------------------------------------------------------------------------------------------
| Id | Operation                      | Name    | Rows  | Bytes | TempSpc | Cost (%CPU) | Time     | Pstart  | Pstop   |
---------------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT               |         |  432K |  124M |         | 70380   (1) | 00:14:05 |         |         |
|* 1 |  HASH JOIN                     |         |  432K |  124M |   6280K | 70380   (1) | 00:14:05 |         |         |
|  2 |   PART JOIN FILTER CREATE      | :BF0000 | 28835 | 5941K |         |   239   (1) | 00:00:03 |         |         |
|  3 |    TABLE ACCESS STORAGE FULL   | T1      | 28835 | 5941K |         |   239   (1) | 00:00:03 |         |         |
|  4 |   PARTITION RANGE JOIN-FILTER  |         | 8900K |  772M |         | 26453   (1) | 00:05:18 | :BF0000 | :BF0000 |
|  5 |    TABLE ACCESS STORAGE FULL   | T2      | 8900K |  772M |         | 26453   (1) | 00:05:18 | :BF0000 | :BF0000 |
---------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."C2"="T1"."C2" AND "T2"."C1"="T1"."C1")
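As a cross-check, the density reported for the extension column above is exactly 1/num_distinct, which is what you expect for a column with no histogram (a quick sketch using the quoted values):

```python
# The extended-stats column group reports num_distinct = 2699 and no
# histogram, so its density should be exactly 1 / 2699.
ndv_group = 2699
density_reported = 0.000370507595405706   # value quoted in the post
assert abs(1 / ndv_group - density_reported) < 1e-15
print(1 / ndv_group)
```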
-
Difference in cardinality estimates between EXPLAIN PLAN and the execution plan
I think some of you know the 5% rule, which is explained in Metalink note #68992.1.
In short, this means that (with bind peeking turned off):
- c1 > :b1 : 5% selectivity
- c1 >= :b1 : 5% selectivity
- c1 between :b1 and :b2 : 0.25% selectivity (5% * 5%)
It is also well explained in Cost-Based Oracle Fundamentals by Jonathan Lewis.
But I found a few odd cases where the 5% rule is broken in the run-time estimate.
The most interesting part is that EXPLAIN PLAN still follows the 5% rule.
Why the difference?
I think that with bind peeking turned off, EXPLAIN PLAN and the execution plan should show the same thing.
(Assuming that all environment settings are identical.)
Am I wrong?
It's a long story to tell, but simple test cases will show what I mean.
UKJA@ukja102> @version BANNER --------------------------------------------------------------------- --------------------------------------------------------------------- Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod PL/SQL Release 10.2.0.1.0 - Production CORE 10.2.0.1.0 Production TNS for 32-bit Windows: Version 10.2.0.1.0 - Production NLSRTL Version 10.2.0.1.0 - Production UKJA@ukja102> UKJA@ukja102> set echo on UKJA@ukja102> UKJA@ukja102> drop table t1 purge; Table dropped. Elapsed: 00:00:00.09 UKJA@ukja102> UKJA@ukja102> create table t1(c1 int, c2 int) 2 ; Table created. Elapsed: 00:00:00.01 UKJA@ukja102> UKJA@ukja102> insert into t1 2 select 1, level 3 from dual 4 connect by level <= 10000 5 union all 6 select 2, level 7 from dual 8 connect by level <= 1000 9 union all 10 select 3, level 11 from dual 12 connect by level <= 100 13 union all 14 select 4, level 15 from dual 16 connect by level <= 10 17 union all 18 select 5, level 19 from dual 20 connect by level <= 1 21 ; 11111 rows created. Elapsed: 00:00:00.32 UKJA@ukja102> UKJA@ukja102> exec dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1'); PL/SQL procedure successfully completed.
Disable bind peeking:
UKJA@ukja102> alter session set "_optim_peek_user_binds" = false;
In the following result, EXPLAIN PLAN follows the 5% rule (11111 * 0.05 = 555, rounded up to 556):
UKJA@ukja102> UKJA@ukja102> explain plan for 2 select count(*) 3 from t1 4 where c1 > :b1 5 ; Explained. Elapsed: 00:00:00.01 UKJA@ukja102> UKJA@ukja102> @plan UKJA@ukja102> select * from table(dbms_xplan.display) 2 / PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------- Plan hash value: 3724264953 --------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | --------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 3 | 6 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | |* 2 | TABLE ACCESS FULL| T1 | 556 | 1668 | 6 (0)| 00:00:01 | --------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - filter("C1">TO_NUMBER(:B1)) 14 rows selected. Elapsed: 00:00:00.01
But the run-time plan doesn't follow the 5% rule; it uses the column's own density:
(11111 * density (c1) = 11111 * 0.2 = 2222)
UKJA@ukja102> select /*+ gather_plan_statistics */ 2 count(*) 3 from t1 4 where c1 > :b1 5 ; COUNT(*) ---------- 1111 Elapsed: 00:00:00.00 UKJA@ukja102> UKJA@ukja102> @stat UKJA@ukja102> select * from table 2 (dbms_xplan.display_cursor(null,null,'allstats cost last')); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------- SQL_ID 0nmqsysmr3ap9, child number 0 ------------------------------------- select /*+ gather_plan_statistics */ count(*) from t1 where c1 > :b1 Plan hash value: 3724264953 -------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | -------------------------------------------------------------------------------- | 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 23 | |* 2 | TABLE ACCESS FULL| T1 | 1 | 2223 | 6 (0)| 1111 |00:00:00.01 | 23 | -------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - filter("C1">:B1) 18 rows selected.
The 5% rule seems to be:
- always applied to EXPLAIN PLAN
- applied at execution time only when the density is < 5%. When the density is > 5%, it uses the density, not 5%.
I'm not sure whether it's a designed feature or a bug.
But getting different cardinality estimates from EXPLAIN PLAN and from run time (with bind peeking turned off) is not a desirable thing.
Any opinions on this?
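The two estimates in the test output above follow directly from the 5% guess versus the column density. A quick arithmetic sketch (11111 rows; density(c1) = 1/5 = 0.2 because c1 has five distinct values and no histogram):

```python
import math

num_rows = 11111
density_c1 = 0.2   # 5 distinct values, no histogram -> density = 1/5

# EXPLAIN PLAN: fixed 5% guess for "c1 > :b1" with an unpeeked bind.
explain_est = math.ceil(num_rows * 0.05)
print(explain_est)   # 556, the E-Rows shown by EXPLAIN PLAN

# Run time: the column density is used instead (the observed anomaly).
runtime_est = math.ceil(num_rows * density_c1)
print(runtime_est)   # 2223, the E-Rows shown in the gather_plan_statistics output
```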
Dion Cho

Sorry to take some time to get back on this one.
I can reproduce your results in 10.2.0.1, but the anomaly is not present in 9.2.0.8, 10.2.0.3, or 11.1.0.6.
As Charles noted, the calculation has a boundary condition when num_distinct falls below 20
(i.e. when a value is, on average, more than 5% of the total data set). However, the fact that EXPLAIN PLAN and run time give you different cardinality estimates is a bug.
Whatever else they do, they should report the same thing, unless the introduction of the bind variable
introduced a possible type conversion or NLS conversion feature that changed the
calculation of the expected cardinality. In this case there is no reason why the use of binds should
cause confusion - so we can reasonably assume that it is a bug. Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." (Stephen Hawking)
-
I have a query performance problem in a production environment that I cannot reproduce in our test environment, even though the test environment is an import of production. Version information:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi PL/SQL Release 10.2.0.3.0 - Production CORE 10.2.0.3.0 Production TNS for Solaris: Version 10.2.0.3.0 - Production NLSRTL Version 10.2.0.3.0 - Production
When I run the query in test, I get the following results from TKPROF:
call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 2 0.03 0.05 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 1 0.02 0.01 241 1033 0 2 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 4 0.05 0.07 241 1033 0 2 Misses in library cache during parse: 1 Optimizer mode: ALL_ROWS
The same query in production:
call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 2 0.03 0.04 0 8 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 1 36.15 35.61 4 8187612 0 2 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 4 36.18 35.65 4 8187620 0 2 Misses in library cache during parse: 1 Optimizer mode: ALL_ROWS
This performance problem started a few weeks ago, and the problem seems to be the number of consistent reads during the fetch. The DBA tried restarting the instance and gathering fresh stats on the tables, but that made no difference. Now he wants to export the tables, drop the schema, and re-import. I'd like to understand why this would help.
Hello
(1) obviously, the performance problem is caused by the wrong choice of driving table
(2) this, in turn, is caused by bad cardinality estimates at steps 51-56 of the production plan: the optimizer expects 4 rows where 120K rows are returned
(3) in order to understand the reason for the inaccurate cardinality estimates, we would need to see the predicates you have not posted, and perhaps also the statistics on the columns involved in the predicates, plus the actual and estimated rowcounts.
Best regards
Nikolai -
Tuning an expensive window-sort query.
I would like some advice on tuning this query.
My database version:
SQL> SELECT * FROM v$version; BANNER ---------------------------------------------------------------- Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi PL/SQL Release 10.2.0.5.0 - Production CORE 10.2.0.5.0 Production TNS for IBM/AIX RISC System/6000: Version 10.2.0.5.0 - Productio NLSRTL Version 10.2.0.5.0 - Production SQL>
The query:
SQL> EXPLAIN PLAN FOR 2 WITH INN AS 3 ( 4 SELECT /*+ PARALLEL(log,8) */ 5 item_no, 6 item_type, 7 bu_code_send, 8 bu_type_send, 9 bu_code_rcv, 10 bu_type_rcv, 11 from_date_rcv, 12 to_date_rcv, 13 lt_val, 14 lt_val_uom_code, 15 upd_dtime, 16 delete_dtime, 17 ROW_NUMBER() OVER(PARTITION BY item_no, item_type, bu_code_send, bu_type_send, 18 bu_code_rcv, bu_type_rcv, from_date_rcv 19 ORDER BY upd_dtime DESC) AS nr 20 FROM log.CEM_TOTAL_LEADTIME_T_LOG log 21 WHERE UPD_DTIME < TO_DATE ('13-11-2012','DD-MM-YYYY') 22 ) 23 SELECT 24 item_no, 25 item_type, 26 bu_code_send, 27 bu_type_send, 28 bu_code_rcv, 29 bu_type_rcv, 30 from_date_rcv, 31 to_date_rcv, 32 lt_val, 33 lt_val_uom_code, 34 SYSDATE 35 FROM inn 36 WHERE DELETE_DTIME IS NULL 37 AND NR=1 38 AND to_DATE ('13-11-2012','DD-MM-YYYY') BETWEEN from_date_rcv AND NVL(to_date_rcv, '31-DEC-9999') 39 ; Explained.
The plan:
PLAN_TABLE_OUTPUT ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- Plan hash value: 3866412310 ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 331M| 26G| | 374K (4)| 00:50:30 | | | | | | | 1 | PX COORDINATOR | | | | | | | | | | | | | 2 | PX SEND QC (RANDOM) | :TQ10001 | 331M| 26G| | 374K (4)| 00:50:30 | | | Q1,01 | P->S | QC (RAND) | |* 3 | VIEW | | 331M| 26G| | 374K (4)| 00:50:30 | | | Q1,01 | PCWP | | |* 4 | WINDOW SORT PUSHED RANK | | 331M| 20G| 29G| 374K (4)| 00:50:30 | | | Q1,01 | PCWP | | | 5 | PX RECEIVE | | 331M| 20G| | 374K (4)| 00:50:30 | | | Q1,01 | PCWP | | | 6 | PX SEND HASH | :TQ10000 | 331M| 20G| | 374K (4)| 00:50:30 | | | Q1,00 | P->P | HASH | |* 7 | WINDOW CHILD PUSHED RANK| | 331M| 20G| | 374K (4)| 00:50:30 | | | Q1,00 | PCWP | | | 8 | PX BLOCK ITERATOR | | 331M| 20G| | 7123 (44)| 00:00:58 | 1 | 167 | Q1,00 | PCWC | | |* 9 | TABLE ACCESS FULL | CEM_TOTAL_LEADTIME_T_LOG | 331M| 20G| | 7123 (44)| 00:00:58 | 1 | 167 | Q1,00 | PCWP | | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - filter("DELETE_DTIME" IS NULL AND "NR"=1 AND NVL("TO_DATE_RCV",TO_DATE(' 9999-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))>=TO_DATE(' 2012-11-13 PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------- 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) 4 - filter(ROW_NUMBER() OVER ( PARTITION BY "ITEM_NO","ITEM_TYPE","BU_CODE_SEND","BU_TYPE_SEND","BU_CODE_RCV","BU_TYPE_RCV","FROM_DATE_RCV" ORDER BY INTERNAL_FUNCTION("UPD_DTIME") DESC )<=1) 7 - filter(ROW_NUMBER() OVER ( PARTITION BY "ITEM_NO","ITEM_TYPE","BU_CODE_SEND","BU_TYPE_SEND","BU_CODE_RCV","BU_TYPE_RCV","FROM_DATE_RCV" ORDER BY INTERNAL_FUNCTION("UPD_DTIME") DESC )<=1) 9 - filter("FROM_DATE_RCV"<=TO_DATE(' 2012-11-13 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "UPD_DTIME"<TO_DATE(' 2012-11-13 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
A few more thoughts/inputs:
1. the CEM_TOTAL_LEADTIME_T_LOG table has 333 million rows; it is a partitioned table, but it is range-partitioned on the LOG_DATE column (which is not used in this query).
2. the window sort takes huge temp space (bad cardinality estimate? How to fix?)
3. in fact, it takes about 56 minutes to complete.
Hello
It is unlikely that you can improve the performance of this query much - instead, I'd take a close look at the data model to see if it can be improved. It seems that this query selects the last snapshot from a history table (it groups all the data by the values of item_no, item_type, bu_code_send, bu_type_send, bu_code_rcv, bu_type_rcv, from_date_rcv, and takes for each such group the last record by upd_dtime) - you should be able to find a cheaper way to select these data. Some possible options are: a materialized view, a redundant table, partitioning.
Best regards
Nikolai -
How to handle stale stats.
Hello
I use Oracle Release 10.2.0.1.0. I have a scenario where I get mediocre execution plans due to stale statistics, and I am wondering how to address it. Here's the part of my main query whose execution path is thrown off by the bad cardinality estimate.
Published by: 930254 on August 30, 2012 04:41
My column c1 of table tab1 holds Java timestamp values, i.e. it is a NUMBER datatype which represents a date and time component. We gather stats each weekend on this table tab1. Below is my query: select /*+gather_plan_statistics*/* from tab1 where c1 BETWEEN 1346300090668 AND 1346325539486 ; Plan hash value: 3167980259 -------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | -------------------------------------------------------------------------------------------------------------------------- | 1 | TABLE ACCESS BY INDEX ROWID| tab1 | 1 | 1 | 167K|00:01:13.72 | 158K| 12390 | |* 2 | INDEX RANGE SCAN | IDX_N1 | 1 | 1 | 167K|00:00:13.27 | 13880 | 1736 | -------------------------------------------------------------------------------------------------------------------------- The above shows a big gap between actual and estimated cardinality, due to the fact that the HIGH_VALUE (1346203206173, which points to 8/29/2012 1:20:06 AM) in DBA_TAB_COLUMNS for column C1 is well below the STARTRANGE (1346300090668, which points to 8/30/2012 4:14:51 AM) and ENDRANGE (1346325539486, which points to 8/30/2012 11:18:59 AM) of the BETWEEN clause. So even gathering stats daily on the table won't help, because by morning it will again need an updated max value for column C1 to estimate properly. So how do I handle this situation? I don't want to go with a hint; I want to make the stats correct so that the optimizer automatically picks the right path.
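The millisecond values quoted above can be decoded to sanity-check the out-of-range claim. A sketch (the constants are the ones from the post, interpreted as UTC):

```python
from datetime import datetime, timezone

def java_ts(ms):
    """Convert a Java-style epoch-millisecond value to a UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

high_value = java_ts(1346203206173)   # HIGH_VALUE from DBA_TAB_COLUMNS
lo = java_ts(1346300090668)           # start of the BETWEEN range
hi = java_ts(1346325539486)           # end of the BETWEEN range

print(high_value)   # falls on 2012-08-29 01:20:06 UTC
print(lo, hi)       # both fall on 2012-08-30, entirely above HIGH_VALUE
assert high_value < lo < hi           # the whole predicate range is out of range
```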
930254 wrote:
Yes, I can think of two options:
1. setting the high_value forward (high_value + 7 days) in the weekend job which gathers statistics on that table.
2. adding a hint to force the optimal path.
Are there any other alternatives for this scenario? If not, which of the above options would be advisable?
Published by: 930254 on August 30, 2012 06:46
If you consider Dom's tip as well, there's a third piece of work: you need to make sure that the optimizer always uses the index, since it cannot cope properly with values far beyond the high_value recorded at the last gathering.
What is the best way to do it? Although you seem reluctant to do it, for me, it's the index hint. You know the index is what you want to use, and it's a small change. All the others, although not terribly difficult to implement, need additional jobs.
-
Get an estimated run time for SQL without running it against the DB? Possible?
Hello
Is there any way to get a query's run time without running it on the database? Even an estimate would be good... I'm on 10.2.0.3... I know it might be impossible... but is there any way to get the estimate beyond setting autotrace traceonly? As I said, they don't want anything but an estimate of how long the query will take to complete... I know we can watch v$session_longops, but like I said... without running it against the DB... is it possible? The EXPLAIN PLAN command shows a Time column - but is that how long it will take to finish?
Select * from table (dbms_xplan.display);----------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ----------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 56 | 3192 | 9 (0)| 00:00:01 | | 1 | MERGE JOIN CARTESIAN| | 56 | 3192 | 9 (0)| 00:00:01 | | 2 | TABLE ACCESS FULL | DEPT | 4 | 80 | 3 (0)| 00:00:01 | | 3 | BUFFER SORT | | 14 | 518 | 6 (0)| 00:00:01 | | 4 | TABLE ACCESS FULL | EMP | 14 | 518 | 2 (0)| 00:00:01 | -----------------------------------------------------------------------------
The theory is that 'Time' in an explain plan is an estimate.
However, it is very dependent on:
a. system statistics and a correct assessment of CPU speed, single-block read time and multi-block read time
b. appropriate table, column and index statistics, and correct estimates of the number of block reads for the execution plan
c. the cardinality ("Rows") estimated at each stage of the particular execution plan
In most cases the "wrongest" input is the cardinality estimate, and that changes everything.
I would only use 'Time' in an explain plan for the simplest SQL statements with good table statistics (and even then multi-column predicates can still cause bad cardinality estimates).
Therefore, I would almost never use the 'Time' estimate. -
Cardinality estimates for an index range scan with bind variables
Oracle 11.2.0.4
I am struggling to explain the cardinality estimates for an index range scan when a bind variable is used.
Consider the following query:
SELECT /*+ INDEX(t1) */ * FROM t1 WHERE source_id <= ?;
The cardinalities for the INDEX RANGE SCAN and the TABLE ACCESS are the same for different literal predicates, for example source_id <= 5:
------------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 50 | 350 | 12 (0)| 00:00:01 | | 1 | TABLE ACCESS BY INDEX ROWID| T1 | 50 | 350 | 12 (0)| 00:00:01 | |* 2 | INDEX RANGE SCAN | IX1 | 50 | | 2 (0)| 00:00:01 | ------------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 2 - access("SOURCE_ID"<=5)
If a bind variable is used instead of a literal, the overall selectivity is 5%. However, why does the optimizer give a cardinality estimate of 11 for the index range scan? As with the literal predicates, surely the cardinalities of the index range scan and the table access should be the same?
------------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 50 | 350 | 5 (0)| 00:00:01 | | 1 | TABLE ACCESS BY INDEX ROWID| T1 | 50 | 350 | 5 (0)| 00:00:01 | |* 2 | INDEX RANGE SCAN | IX1 | 11 | | 2 (0)| 00:00:01 | ------------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 2 - access("SOURCE_ID"<=TO_NUMBER(:A))
Unit test code:
CREATE TABLE t1 ( id NUMBER , source_id NUMBER ); CREATE INDEX ix1 ON t1 (source_id); INSERT INTO t1 SELECT level , ora_hash(level,99)+1 FROM dual CONNECT BY level <= 1000; exec DBMS_STATS.GATHER_TABLE_STATS(user,'T1') EXPLAIN PLAN FOR SELECT /*+ INDEX(t1) */ * FROM t1 WHERE source_id <= 5; SELECT * FROM TABLE(dbms_xplan.display); EXPLAIN PLAN FOR SELECT /*+ INDEX(t1) */ * FROM t1 WHERE source_id <= :a; SELECT * FROM TABLE(dbms_xplan.display);
There are various places where the optimizer falls back on built-in guesses, and unpeekable binds (i.e. "unknowable values") introduce such guesses.
For unpeekable binds the guess for column <= {unknown} is 5% for the table access (hence 50 rows out of 1,000), but it is 0.009 for index_column <= {unknown}, which means I was expecting to see 9 as the row estimate on the index range scan.
I just ran some quick tests, and EXPLAIN PLAN seems to use a selectivity of 0.011 in this case (in different versions of Oracle), although if we make the bind variable unpeekable at run time (and block dynamic sampling etc.) the run-time optimization uses 0.009.
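The figures above can be reproduced arithmetically. A sketch, with assumptions clearly marked: low = 1, high = 100, num_distinct = 100 are inferred from how ora_hash(level,99)+1 populates the column in the test case, the range formula is a common model of the CBO's arithmetic, and the 0.05/0.011/0.009 constants are the guesses discussed above:

```python
num_rows = 1000
low, high, ndv = 1, 100, 100   # assumed column stats for source_id

# Literal "source_id <= 5": linear range selectivity plus 1/NDV for the
# closed bound (a common model of the CBO's range arithmetic).
sel_literal = (5 - low) / (high - low) + 1 / ndv
print(round(num_rows * sel_literal))   # -> 50, matching both plan lines

# Unpeekable bind "source_id <= :a":
print(round(num_rows * 0.05))    # table-level guess: 50 rows
print(round(num_rows * 0.011))   # index guess used by EXPLAIN PLAN: 11
print(round(num_rows * 0.009))   # index guess used at run time: 9
```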
Regards
Jonathan Lewis
Update: and here is a very old reference for the 0.009 (and 0.0045 for 'between') when applied to an index: Cost-Based Oracle Fundamentals, Chapter 4 (Simple B-tree Access).
-
Join cardinality statistics - an interesting result
Hello Experts,
I just read an article on Oracle's JOIN CARDINALITY: table function and join cardinality estimates. Without gathered statistics (i.e. relying on dynamic sampling), the optimizer calculates the JOIN CARDINALITY badly - why? I mean, I'm trying to understand: does the JOIN CARDINALITY have to rely on statistics? Even though I did not gather statistics, as you can see below the optimizer estimates the table cardinalities correctly, but the JOIN CARDINALITY is completely wrong. What do you think about this behavior? Did dynamic sampling mislead the optimizer?
drop table t1 purge;
drop table t2 purge;
create table t1
as
Select rownum id, mod(rownum, 10) + 1 as fk, rpad('X', 10) filler from dual connect by level <= 1000;
create table t2 as
Select rownum + 20 id, rpad('X', 10) filler from dual connect by level <= 10;
explain plan for
Select * from t1 join t2 on t1.fk = t2.id;
Select * from table (dbms_xplan.display);
Plan hash value: 2959412835
---------------------------------------------------------------------------
| Id | Operation          | Name | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|  0 | SELECT STATEMENT   |      | 1000 | 53000 |     8  (13)| 00:00:01 |
|* 1 |  HASH JOIN         |      | 1000 | 53000 |     8  (13)| 00:00:01 |
|  2 |   TABLE ACCESS FULL| T2   |   10 |   200 |     3   (0)| 00:00:01 |
|  3 |   TABLE ACCESS FULL| T1   | 1000 | 33000 |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."FK"="T2"."ID")

Note
-----
   - dynamic sampling used for this statement (level=2)
exec dbms_stats.gather_table_stats (user, 'T1');
exec dbms_stats.gather_table_stats (user, 'T2');
explain plan for
Select * from t1 join t2 on t1.fk = t2.id;
Select * from table (dbms_xplan.display);
Plan hash value: 2959412835
---------------------------------------------------------------------------
| Id | Operation          | Name | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|  0 | SELECT STATEMENT   |      |    1 |    32 |     8  (13)| 00:00:01 |
|* 1 |  HASH JOIN         |      |    1 |    32 |     8  (13)| 00:00:01 |
|  2 |   TABLE ACCESS FULL| T2   |   10 |   140 |     3   (0)| 00:00:01 |
|  3 |   TABLE ACCESS FULL| T1   | 1000 | 18000 |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."FK"="T2"."ID")
Thanks in advance.
Taking a look at the CBO (event 10053) trace for your example, I see different calculations for the join cardinality:
- without statistics:
Join Card: 1000.000000 = outer (10.000000) * inner (1000.000000) * sel (0.100000)
- with statistics:
Join Card: 0.000000 = outer (10.000000) * inner (1000.000000) * sel (0.000000)
Sel (0.100000) for the run with dynamic sampling corresponds to the standard formula: join selectivity = 1 / greater(num_distinct(t1.fk), num_distinct(t2.id)). In the CBO trace for that run, I also see information on the sampling:
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0), COUNT(DISTINCT C3), NVL(SUM(CASE WHEN C3 IS NULL THEN 1 ELSE 0 END),0) FROM (SELECT /*+ NO_PARALLEL("T1") FULL("T1") NO_PARALLEL_INDEX("T1") */ 1 AS C1, 1 AS C2, "T1"."FK" AS C3 FROM "T1" "T1") SAMPLESUB
2013-12-13 14:14:19.584
** Executed dynamic sampling query:
    level : 2
    sample pct. : 100.000000
    actual sample size : 1000
    filtered sample card. : 1000
    orig. card. : 572
    block cnt. table stat. : 7
    block cnt. for sampling: 7
    max. sample block cnt. : 64
    sample block cnt. : 7
    ndv C3 : 10
        scaled : 10.00
    nulls C4 : 0
        scaled : 0.00
    min. sel. est. : -1.00000000
** Dynamic sampling col. stats.:
  Column (#2): FK (Part#: 0)
    AvgLen: 22 NDV: 10 Nulls: 0 Density: 0.100000
** Using dynamic sampling NULLs estimates.
** Using dynamic sampling NDV estimates.
    Scaled NDVs using cardinality = 1000.
** Using dynamic sampling card. : 1000
** Dynamic sampling updated table card.
  Table: T1  Alias: T1
    Card: Original: 1000.000000  Rounded: 1000  Computed: 1000.00  Non-adjusted: 1000.00
  Access Path: TableScan
    Cost: 2.00  Resp: 2.00  Degree: 0
      Cost_io: 2.00  Cost_cpu: 239850
      Resp_io: 2.00  Resp_cpu: 239850
  Best:: AccessPath: TableScan
         Cost: 2.00  Degree: 1  Resp: 2.00  Card: 1000.00  Bytes: 0
The sampling query contains no information about the value ranges of the join columns - so the optimizer has no choice but to use the standard formula for the join selectivity.
With statistics, the CBO knows the HIGH_VALUE and LOW_VALUE of the join columns and can deduce that there is no overlap - so the selectivity is set to 0 (resulting in a cardinality of 1).
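As a sanity check, the two calculations from the trace can be reproduced with the standard formula. The sketch below is plain Python, not Oracle internals; the row counts (10, 1000) and the NDV of 10 come from the trace above, and the function names are illustrative only.

```python
# Reproducing the join cardinality calculation from the 10053 trace.
# Illustrative helper names - not Oracle internals.

def join_selectivity(ndv_outer, ndv_inner):
    # Standard formula: 1 / greater(num_distinct(outer), num_distinct(inner))
    return 1.0 / max(ndv_outer, ndv_inner)

def join_cardinality(outer_card, inner_card, sel):
    # Join Card = outer * inner * sel, as printed in the trace
    return outer_card * inner_card * sel

sel = join_selectivity(10, 10)           # 0.1, as in the dynamic sampling run
card = join_cardinality(10, 1000, sel)   # ~1000, matching "Join Card: 1000"
print(sel, card)
```

With statistics the CBO short-circuits this formula: non-overlapping low/high values give sel = 0, hence a join cardinality rounded up to 1.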
-
Hello people, here is a sample test case.
The estimate is only 2 rows... a cardinality error... How can I avoid this?

SQL> create table t as select object_name from all_objects;

Table created.

SQL> select count(*) from all_objects;

  COUNT(*)
----------
     49731

SQL> select count(*) from t;

  COUNT(*)
----------
     49732

SQL> insert into t select 'SMITH' from t where rownum <= 196;

196 rows created.

SQL> select count(*) from t;

  COUNT(*)
----------
     49928

SQL> create index t_idx on t(object_name);

Index created.

SQL> analyze table t compute statistics;

Table analyzed.

SQL> set autotrace traceonly explain;
SQL> select * from t where object_name = 'SMITH';

Execution Plan
----------------------------------------------------------
Plan hash value: 2946670127

--------------------------------------------------------------------------
| Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT |       |     2 |    48 |     1   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| T_IDX |     2 |    48 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("OBJECT_NAME"='SMITH')

SQL>
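As an aside, the estimate of 2 is consistent with the no-histogram formula num_rows / num_distinct. A minimal sketch, where num_rows comes from the test case above but the NDV is an assumption (roughly one distinct OBJECT_NAME per two rows), not a measured value:

```python
# Without a histogram, the equality-predicate estimate is roughly
# num_rows / num_distinct. The NDV here is a hypothetical value
# consistent with the figures in the post, not a measured one.

num_rows = 49928              # rows in T after the insert
num_distinct = 28000          # hypothetical NDV of OBJECT_NAME
estimate = round(num_rows / num_distinct)
print(estimate)               # 2, even though 'SMITH' actually has ~196 rows
```

A histogram on OBJECT_NAME (gathered with DBMS_STATS rather than ANALYZE) would let the optimizer see that 'SMITH' is a popular value.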
Thank you.

If the optimizer's cardinality estimates are reasonably accurate, it is reasonably likely that it has chosen a plan that is at least nearly optimal, in light of the database settings, the optimizer settings, and the structures available to the optimizer to improve performance (indexes, materialized views, etc.).
The fact that the optimizer probably has not made a mistake, however, does not mean that there is no room for improvement. It is quite possible, for example, that the query itself can be reworked to be more efficient, that we can create additional indexes or materialized views, or that a better plan would be available if different settings were in place.
In this case, with EMP and DEPT being so small, the plan appears reasonable. You could create a materialized view that pre-computes the result, which would probably improve the query performance, but it is unlikely that you really need to worry about optimizing this query, since it probably does ~10 consistent gets and probably runs in less than a hundredth of a second.
Justin
-
Different plan for the same sql id
Hello
Our application team recently reported that a query ran longer than usual.
Searching the AWR tables (dba_hist_active_sess_history & dba_hist_sqlstat), there had been a change of plan_hash_value. The next day, without intervention from anyone (statistics collection did not run, no changes to the tables involved), the SQL used the former plan and completed quickly.
The difference between the plans was the index that was used to access the table.
What could be the reason for the optimizer to choose a bad plan and then go back to the good one?
The SQL in question was run two or three times a day and then aged out of the cursor cache.
Plan changes are normal.
Flip-flopping plans are quite normal.
I'd be willing to bet that a large percentage of your SQL shows variations in execution plans and you never notice.
And there are several reasons why you might get a different plan:
1. bind variable peeking - it is normal for SQL to age out of the cache and then get parsed again with different bind variables, leading to different cardinality estimates and plans
2. statistics changes - it is normal for statistics to change over time, causing different cardinality estimates and different plans, especially if histograms are involved and buckets change
3. data changes - especially if you use dynamic sampling, it is normal to get a different sample of data, leading to different cardinality estimates and plans
4. SYSDATE - it is normal for queries with SYSDATE in them to recognize that time never stands still, which, in conjunction with statistics or data, can lead to different cardinality estimates and plans
5. built-in optimization features - it is normal for features like ACS (adaptive cursor sharing) to kick in, notice that the forecasts in an execution plan were wrong, and force a recalculation to a different plan with an adjusted cardinality.
Using DBMS_XPLAN.DISPLAY_CURSOR or DISPLAY_AWR you can get the different plans (provided they are still in memory or in AWR). The Note section should indicate whether cardinality feedback has been a factor. You can also get the peeked binds with the format mask "+PEEKED_BINDS".
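To illustrate point 2 (histograms and changing buckets), here is a toy model of how a frequency histogram turns ENDPOINT_NUMBER deltas into a cardinality estimate; when the buckets shift, the estimate shifts with them. The function and the histogram data are invented for the example and are not Oracle internals:

```python
# Toy model of frequency-histogram cardinality estimation:
# each endpoint stores a cumulative sample count; the delta between
# consecutive endpoints is the share of rows owned by that value.

def freq_histogram_card(histogram, num_rows, value):
    # histogram: list of (endpoint_value, cumulative_count), ordered by value
    total = histogram[-1][1]          # last cumulative count = sample size
    prev = 0
    for endpoint_value, cum in histogram:
        if endpoint_value == value:
            # this value accounts for (cum - prev) of the sampled rows
            return num_rows * (cum - prev) / total
        prev = cum
    return 0.0  # not in the histogram (the real CBO uses a small non-zero guess)

hist = [(10, 50), (20, 150), (30, 200)]     # value 20 covers 100 of 200 samples
print(freq_histogram_card(hist, 1000, 20))  # 500.0
```

If a re-gather moves rows between buckets, the same predicate gets a different estimate, which is exactly how histogram changes flip plans.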
-
How does statistics feedback really work in 12c?
Hello
I'm studying 12c material and have a question about statistics feedback (formerly called cardinality feedback).
Reading about statistics feedback, I understand that the actual number of rows is recorded in a SQL plan directive:
- On the first execution of a SQL statement, the optimizer generates an execution plan.
- After the first execution, the optimizer disables statistics feedback monitoring.
- If the query runs again, the optimizer uses the corrected cardinality estimates instead of its usual estimates.
Source: http://docs.Oracle.com/database/121/TGSQL/tgsql_optcncpt.htm#TGSQL94983
But what happens if the data distribution changes over the next few hours and the plan should no longer be used?
When and how will the plan become invalid, and will the optimizer hard-parse a new plan?
Is anyone able to explain that to me?
Thank you in advance.
Kind regards
Harry
Hello. This feature kicks in on bad estimates caused by missing statistics, and it fixes the problem once. When the data changes, the statistics gathering job has to do the right thing.
-
How to find the reason for high CPU usage by a shared server process?
Hello
I have a Linux server with an Oracle 11.2.0.3 instance. The database is configured for shared server. I do not know the application using the database, but it has performance problems. When I check the server I see that ora_s001 is using a CPU core at 90 to 100%. Is there a way to find the reason for the high CPU load? I'm not very adept at finding bottlenecks or bad SQL statements. What I found is that some users have very high logical_io, and the same users have high cpu_usage.
What can I do next? Thanks for the tips.
You must have many user sessions to need shared (multi-threaded) server. The main objective of using MTS is to reduce the memory usage on the server for the UGA and PGA of the sessions.
About the CPU usage, you must find the session that is using this shared server at the time, with v$process and v$session, and trace it. From there, check the SQL_ID being run and then get its stats. It is probably one or two SQLs with a large number of logical I/Os. Most of the time this happens due to incorrect cardinality estimates and/or bad statistics, which cause suboptimal plans to be generated.
-
The identical query, unusable performance in one environment - please help
We are trying to upgrade from 10.2.0.4 to 11.2.0.2, but one of our important queries has gone completely wrong (30 seconds in our current environment, 4000 seconds in our upgraded environment).
Note the caveat that it will be very difficult for me to edit the SQL code (it comes from another IT group, and their position is that it should not have to be changed, given that it works so well in the current production environment).
The query is exactly the same in both environments and carries the same SQL_ID, but the explain plans are different.
The environment in which the query works is version 10.2.0.4, and the traced elapsed time is 30.15 seconds, 841 rows returned.
The new environment is 11.2.0.2; the elapsed time is 4035 seconds, 841 rows returned.
The environments are comparable in terms of CPU/memory/IO (both are running on our NetApp NFS mounts).
SGA_MAX/TARGET and PGA_AGGREGATE_TARGET are the same in both environments, as are HASH_AREA_SIZE and SORT_AREA_SIZE.
The table data is identical, and all of the indexes are the same in both environments. Stats were gathered, and this behavior has persisted through several restarts of the databases.
I ran traces on the statements in both environments, and the performance difference seems to be due to direct path read/write temp:
The SQL
New environment (11.2.0.2), takes 4000 seconds according to tkprofSELECT DISTINCT a.emplid, a.name, rds.sa_get_stdnt_email_fn (a.emplid), a.req_term, a.req_term_ldesc, CASE WHEN (a.req_acad_plan = 'PKINXXXBBS' AND a.cum_gpa >= d.gpa) THEN NVL (c.num_met, 0) + 1 WHEN (b.gpa >= d.gpa AND a.req_acad_plan <> 'PKINXXXBBS') THEN NVL (c.num_met, 0) + 1 ELSE NVL (c.num_met, 0) END AS "Requirement Status", a.cum_total_passed AS "Cumulative Units", a.admit_term, a.admit_term_ldesc, a.acad_plan, a.acad_plan_ldesc, a.academic_level, a.academic_level_ldesc, TO_CHAR (a.rpt_date, 'MM/DD/YYYY') AS rpt_date, TO_CHAR (NVL (b.gpa, 0), '0.000') AS gpa, TO_CHAR (NVL (a.cum_gpa, 0), '0.000') AS cum_gpa FROM sa.rec_sm_stdnt_deg_completion a, ( SELECT DISTINCT CASE WHEN SUM (b_sub.units_earned) = 0 THEN 0 ELSE SUM (b_sub.grade_points) / SUM (b_sub.units_earned) END AS gpa, b_sub.emplid, b_sub.acad_career, b_sub.acad_plan, b_sub.req_acad_plan, b_sub.req_term, b_sub.academic_level, b_sub.rqrmnt_group FROM sa.rec_sm_stdnt_deg_completion b_sub, hrsa_extr.ps_rq_grp_tbl g3, hrsa_extr.ps_rq_main_tbl m3 WHERE b_sub.req_acad_plan IS NOT NULL AND b_sub.acad_career = 'UGRD' AND b_sub.acad_prog = 'UBACH' AND b_sub.acad_plan = b_sub.req_acad_plan AND b_sub.grade <> 'IP' AND b_sub.impact_flag = 'Y' AND g3.effdt = (SELECT MAX (g3_ed.effdt) FROM hrsa_extr.ps_rq_grp_tbl g3_ed WHERE g3_ed.rqrmnt_group = g3.rqrmnt_group AND g3_ed.effdt <= b_sub.req_term_begin_date) AND g3.rqrmnt_group = b_sub.rqrmnt_group AND m3.effdt = (SELECT MAX (m3_ed.effdt) FROM hrsa_extr.ps_rq_main_tbl m3_ed WHERE m3_ed.requirement = m3.requirement AND m3_ed.effdt <= b_sub.req_term_begin_date) AND m3.requirement = b_sub.requirement GROUP BY b_sub.emplid, b_sub.acad_career, b_sub.acad_plan, b_sub.req_acad_plan, b_sub.req_term, b_sub.academic_level, b_sub.rqrmnt_group) b, ( SELECT c_sub.emplid, c_sub.acad_career, c_sub.acad_plan, c_sub.req_acad_plan, c_sub.req_term, c_sub.academic_level, c_sub.rqrmnt_group, COUNT (*) AS num_met FROM 
sa.rec_sm_stdnt_deg_completion c_sub, hrsa_extr.ps_rq_grp_tbl g2, hrsa_extr.ps_rq_main_tbl m2 WHERE c_sub.rqrmnt_line_status = 'COMP' AND c_sub.grade <> 'IP' AND c_sub.impact_flag = 'Y' AND c_sub.acad_career = 'UGRD' AND c_sub.acad_prog = 'UBACH' AND c_sub.acad_plan = c_sub.req_acad_plan AND g2.effdt = (SELECT MAX (g2_ed.effdt) FROM hrsa_extr.ps_rq_grp_tbl g2_ed WHERE g2_ed.rqrmnt_group = g2.rqrmnt_group AND g2_ed.effdt <= c_sub.req_term_begin_date) AND g2.rqrmnt_group = c_sub.rqrmnt_group AND m2.effdt = (SELECT MAX (m2_ed.effdt) FROM hrsa_extr.ps_rq_main_tbl m2_ed WHERE m2_ed.requirement = m2.requirement AND m2_ed.effdt <= c_sub.req_term_begin_date) AND m2.requirement = c_sub.requirement GROUP BY c_sub.emplid, c_sub.acad_career, c_sub.acad_plan, c_sub.req_acad_plan, c_sub.req_term, c_sub.academic_level, c_sub.rqrmnt_group) c, hrsa_extr.ps_smo_rdr_imp_pln d, hrsa_extr.ps_rq_grp_tbl g, hrsa_extr.ps_rq_main_tbl m WHERE a.acad_career = 'UGRD' AND a.acad_prog = 'UBACH' AND a.req_acad_plan IN (N'NUPPXXXBBS', N'NURPBASBBS', N'NURPXXXBBS') AND a.academic_level IN (N'10', N'20', N'30', N'40', N'50', N'GR') AND a.acad_plan = a.req_acad_plan AND a.impact_flag = 'Y' AND g.effdt = (SELECT MAX (g_ed.effdt) FROM hrsa_extr.ps_rq_grp_tbl g_ed WHERE g_ed.rqrmnt_group = g.rqrmnt_group AND g_ed.effdt <= a.req_term_begin_date) AND g.rqrmnt_group = a.rqrmnt_group AND m.effdt = (SELECT MAX (m_ed.effdt) FROM hrsa_extr.ps_rq_main_tbl m_ed WHERE m_ed.requirement = m.requirement AND m_ed.effdt <= a.req_term_begin_date) AND m.requirement = a.requirement AND a.emplid = b.emplid(+) AND a.acad_career = b.acad_career(+) AND a.acad_plan = b.acad_plan(+) AND a.req_acad_plan = b.req_acad_plan(+) AND a.academic_level = b.academic_level(+) AND a.req_term = b.req_term(+) AND a.rqrmnt_group = b.rqrmnt_group(+) AND a.emplid = c.emplid(+) AND a.acad_career = c.acad_career(+) AND a.acad_plan = c.acad_plan(+) AND a.req_acad_plan = c.req_acad_plan(+) AND a.academic_level = c.academic_level(+) AND a.req_term 
= c.req_term(+) AND a.rqrmnt_group = c.rqrmnt_group(+) AND d.acad_plan = a.req_acad_plan ORDER BY 6 DESC, 2 ASC;
Explain planPLAN_TABLE_OUTPUT ------------------------------------------------------------------------------------------------------------------------- Plan hash value: 4117596694 ------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 314 | 15231 (1)| 00:03:03 | | 1 | SORT UNIQUE | | 1 | 314 | 15230 (1)| 00:03:03 | | 2 | NESTED LOOPS OUTER | | 1 | 314 | 15227 (1)| 00:03:03 | | 3 | NESTED LOOPS OUTER | | 1 | 285 | 15216 (1)| 00:03:03 | | 4 | NESTED LOOPS | | 1 | 256 | 15205 (1)| 00:03:03 | | 5 | NESTED LOOPS | | 1 | 241 | 15204 (1)| 00:03:03 | | 6 | NESTED LOOPS | | 1 | 223 | 15203 (1)| 00:03:03 | | 7 | NESTED LOOPS | | 17 | 731 | 15186 (1)| 00:03:03 | | 8 | VIEW | VW_SQ_3 | 998 | 27944 | 15186 (1)| 00:03:03 | | 9 | HASH GROUP BY | | 998 | 62874 | 15186 (1)| 00:03:03 | | 10 | MERGE JOIN | | 29060 | 1787K| 15184 (1)| 00:03:03 | | 11 | SORT JOIN | | 26 | 1248 | 15180 (1)| 00:03:03 | | 12 | TABLE ACCESS BY INDEX ROWID | REC_SM_STDNT_DEG_COMPLETION | 26 | 1248 | 15179 (1)| 00:03:03 | |* 13 | INDEX SKIP SCAN | REC0SM_STDNT_DEG_IDX | 26 | | 15168 (1)| 00:03:03 | |* 14 | SORT JOIN | | 1217 | 18255 | 4 (25)| 00:00:01 | | 15 | INDEX FAST FULL SCAN | PS3RQ_GRP_TBL | 1217 | 18255 | 3 (0)| 00:00:01 | |* 16 | INDEX UNIQUE SCAN | PS_RQ_GRP_TBL | 1 | 15 | 0 (0)| 00:00:01 | |* 17 | TABLE ACCESS BY USER ROWID | REC_SM_STDNT_DEG_COMPLETION | 1 | 180 | 1 (0)| 00:00:01 | |* 18 | INDEX RANGE SCAN | PS_RQ_MAIN_TBL | 1 | 18 | 1 (0)| 00:00:01 | | 19 | SORT AGGREGATE | | 1 | 18 | | | | 20 | FIRST ROW | | 1 | 18 | 2 (0)| 00:00:01 | |* 21 | INDEX RANGE SCAN (MIN/MAX) | PS_RQ_MAIN_TBL | 1 | 18 | 2 (0)| 00:00:01 | |* 22 | INDEX FULL SCAN | PS0SMO_RDR_IMP_PLN | 1 | 15 | 1 (0)| 00:00:01 | |* 23 | VIEW PUSHED PREDICATE | 
| 1 | 29 | 11 (19)| 00:00:01 | | 24 | SORT GROUP BY | | 1 | 52 | 11 (19)| 00:00:01 | | 25 | VIEW | VM_NWVW_5 | 1 | 52 | 10 (10)| 00:00:01 | |* 26 | FILTER | | | | | | | 27 | SORT GROUP BY | | 1 | 165 | 10 (10)| 00:00:01 | |* 28 | FILTER | | | | | | | 29 | NESTED LOOPS | | 1 | 165 | 7 (0)| 00:00:01 | | 30 | NESTED LOOPS | | 1 | 147 | 6 (0)| 00:00:01 | | 31 | NESTED LOOPS | | 1 | 117 | 5 (0)| 00:00:01 | |* 32 | TABLE ACCESS BY INDEX ROWID | REC_SM_STDNT_DEG_COMPLETION | 1 | 90 | 4 (0)| 00:00:01 | |* 33 | INDEX RANGE SCAN | REC1SM_STDNT_DEG_IDX | 1 | | 3 (0)| 00:00:01 | |* 34 | INDEX RANGE SCAN | PS_RQ_GRP_TBL | 1 | 27 | 1 (0)| 00:00:01 | | 35 | SORT AGGREGATE | | 1 | 15 | | | | 36 | FIRST ROW | | 1 | 15 | 2 (0)| 00:00:01 | |* 37 | INDEX RANGE SCAN (MIN/MAX)| PS_RQ_GRP_TBL | 1 | 15 | 2 (0)| 00:00:01 | |* 38 | INDEX RANGE SCAN | PS_RQ_MAIN_TBL | 1 | 30 | 1 (0)| 00:00:01 | |* 39 | INDEX RANGE SCAN | PS_RQ_MAIN_TBL | 1 | 18 | 1 (0)| 00:00:01 | |* 40 | VIEW PUSHED PREDICATE | | 1 | 29 | 11 (19)| 00:00:01 | | 41 | SORT GROUP BY | | 1 | 32 | 11 (19)| 00:00:01 | | 42 | VIEW | VM_NWVW_4 | 1 | 32 | 10 (10)| 00:00:01 | |* 43 | FILTER | | | | | | | 44 | SORT GROUP BY | | 1 | 166 | 10 (10)| 00:00:01 | |* 45 | FILTER | | | | | | |* 46 | FILTER | | | | | | | 47 | NESTED LOOPS | | 1 | 166 | 7 (0)| 00:00:01 | | 48 | NESTED LOOPS | | 1 | 148 | 6 (0)| 00:00:01 | | 49 | NESTED LOOPS | | 1 | 118 | 5 (0)| 00:00:01 | |* 50 | INDEX RANGE SCAN | PS_RQ_GRP_TBL | 1 | 27 | 2 (0)| 00:00:01 | |* 51 | TABLE ACCESS BY INDEX ROWID | REC_SM_STDNT_DEG_COMPLETION | 1 | 91 | 3 (0)| 00:00:01 | |* 52 | INDEX RANGE SCAN | REC1SM_STDNT_DEG_IDX | 1 | | 2 (0)| 00:00:01 | |* 53 | INDEX RANGE SCAN | PS_RQ_MAIN_TBL | 1 | 30 | 1 (0)| 00:00:01 | |* 54 | INDEX RANGE SCAN | PS_RQ_MAIN_TBL | 1 | 18 | 1 (0)| 00:00:01 | | 55 | SORT AGGREGATE | | 1 | 15 | | | | 56 | FIRST ROW | | 1 | 15 | 2 (0)| 00:00:01 | |* 57 | INDEX RANGE SCAN (MIN/MAX) | PS_RQ_GRP_TBL | 1 | 15 | 2 (0)| 00:00:01 | 
-------------------------------------------------------------------------------------------------------------------------
call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 1 6.59 6.66 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 2 1521.36 4028.91 2256624 240053408 0 841 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 4 1527.95 4035.57 2256624 240053408 0 841
The current production (10.2.0.4), takes 30 secondsElapsed times include waiting on following events: Event waited on Times Max. Wait Total Waited ---------------------------------------- Waited ---------- ------------ SQL*Net message to client 2 0.00 0.00 Disk file operations I/O 3 0.07 0.11 db file sequential read 10829 0.12 16.62 direct path write temp 72445 0.30 293.71 direct path read temp 72445 0.58 2234.14 asynch descriptor resize 22 0.00 0.00 SQL*Net more data to client 9 0.00 0.00 SQL*Net message from client 2 0.84 1.25 ********************************************************************************
PLAN_TABLE_OUTPUT ------------------------------------------------------------------------------------------------------------------------- Plan hash value: 2178773127 ------------------------------------------------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 331 | 89446 (2)| 00:17:54 | | 1 | SORT UNIQUE | | 1 | 331 | 89445 (2)| 00:17:54 | | 2 | NESTED LOOPS | | 1 | 331 | 89440 (2)| 00:17:54 | | 3 | NESTED LOOPS | | 1 | 316 | 89439 (2)| 00:17:54 | |* 4 | HASH JOIN OUTER | | 1 | 298 | 89438 (2)| 00:17:54 | |* 5 | HASH JOIN OUTER | | 1 | 240 | 59625 (2)| 00:11:56 | | 6 | NESTED LOOPS | | 1 | 182 | 29815 (2)| 00:05:58 | |* 7 | TABLE ACCESS FULL | REC_SM_STDNT_DEG_COMPLETION | 1 | 167 | 29814 (2)| 00:05:58 | |* 8 | INDEX FULL SCAN | PS0SMO_RDR_IMP_PLN | 1 | 15 | 1 (0)| 00:00:01 | | 9 | VIEW | | 1 | 58 | 29809 (2)| 00:05:58 | | 10 | HASH GROUP BY | | 1 | 71 | 29809 (2)| 00:05:58 | | 11 | VIEW | | 1 | 71 | 29809 (2)| 00:05:58 | |* 12 | FILTER | | | | | | | 13 | HASH GROUP BY | | 1 | 198 | 29809 (2)| 00:05:58 | | 14 | NESTED LOOPS | | 1 | 198 | 29806 (2)| 00:05:58 | |* 15 | HASH JOIN | | 1 | 171 | 29805 (2)| 00:05:58 | |* 16 | HASH JOIN | | 4 | 572 | 29802 (2)| 00:05:58 | |* 17 | TABLE ACCESS FULL | REC_SM_STDNT_DEG_COMPLETION | 4 | 452 | 29798 (2)| 00:05:58 | | 18 | INDEX FAST FULL SCAN | PS2RQ_MAIN_TBL | 1035 | 31050 | 3 (0)| 00:00:01 | | 19 | INDEX FAST FULL SCAN | PS2RQ_MAIN_TBL | 1035 | 28980 | 3 (0)| 00:00:01 | |* 20 | INDEX RANGE SCAN | PS_RQ_GRP_TBL | 1 | 27 | 1 (0)| 00:00:01 | | 21 | SORT AGGREGATE | | 1 | 15 | | | | 22 | FIRST ROW | | 1 | 15 | 2 (0)| 00:00:01 | |* 23 | INDEX RANGE SCAN (MIN/MAX)| PS_RQ_GRP_TBL | 1 | 15 | 2 (0)| 00:00:01 | | 24 | VIEW | | 1 | 58 | 29813 (2)| 00:05:58 | | 25 | HASH GROUP BY | | 1 | 45 | 29813 
(2)| 00:05:58 | | 26 | VIEW | | 1 | 45 | 29813 (2)| 00:05:58 | |* 27 | FILTER | | | | | | | 28 | HASH GROUP BY | | 1 | 199 | 29813 (2)| 00:05:58 | | 29 | NESTED LOOPS | | 1 | 199 | 29810 (2)| 00:05:58 | |* 30 | HASH JOIN | | 1 | 172 | 29809 (2)| 00:05:58 | |* 31 | HASH JOIN | | 8 | 1152 | 29805 (2)| 00:05:58 | |* 32 | TABLE ACCESS FULL | REC_SM_STDNT_DEG_COMPLETION | 7 | 798 | 29802 (2)| 00:05:58 | | 33 | INDEX FAST FULL SCAN | PS2RQ_MAIN_TBL | 1035 | 31050 | 3 (0)| 00:00:01 | | 34 | INDEX FAST FULL SCAN | PS2RQ_MAIN_TBL | 1035 | 28980 | 3 (0)| 00:00:01 | |* 35 | INDEX RANGE SCAN | PS_RQ_GRP_TBL | 1 | 27 | 1 (0)| 00:00:01 | | 36 | SORT AGGREGATE | | 1 | 15 | | | | 37 | FIRST ROW | | 1 | 15 | 2 (0)| 00:00:01 | |* 38 | INDEX RANGE SCAN (MIN/MAX) | PS_RQ_GRP_TBL | 1 | 15 | 2 (0)| 00:00:01 | |* 39 | INDEX RANGE SCAN | PS_RQ_MAIN_TBL | 1 | 18 | 1 (0)| 00:00:01 | | 40 | SORT AGGREGATE | | 1 | 18 | | | | 41 | FIRST ROW | | 1 | 18 | 2 (0)| 00:00:01 | |* 42 | INDEX RANGE SCAN (MIN/MAX) | PS_RQ_MAIN_TBL | 1 | 18 | 2 (0)| 00:00:01 | |* 43 | INDEX RANGE SCAN | PS_RQ_GRP_TBL | 1 | 15 | 1 (0)| 00:00:01 | | 44 | SORT AGGREGATE | | 1 | 15 | | | | 45 | FIRST ROW | | 1 | 15 | 2 (0)| 00:00:01 | |* 46 | INDEX RANGE SCAN (MIN/MAX) | PS_RQ_GRP_TBL | 1 | 15 | 2 (0)| 00:00:01 | ------------------------------------------------------------------------------------------------------------------------
call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 1 1.49 1.51 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 2 18.25 28.63 463672 932215 0 836 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 4 19.75 30.15 463672 932215 0 836
Published by: ngilbert on June 26, 2012 16:40Elapsed times include waiting on following events: Event waited on Times Max. Wait Total Waited ---------------------------------------- Waited ---------- ------------ SQL*Net message to client 2 0.00 0.00 db file scattered read 14262 0.31 13.13 latch: shared pool 1 0.01 0.01 db file sequential read 7 0.00 0.00 direct path write temp 493 0.00 0.00 direct path read temp 493 0.00 0.00 SQL*Net more data to client 40 0.00 0.00 SQL*Net message from client 2 0.83 1.23 ********************************************************************************
Published by: ngilbert on June 26, 2012 16:41

Hello
As is almost always the case, your bad plan is the result of messed-up cardinality estimates. The biggest problem seems to be the cardinality in steps 12 and 13 of the bad plan:
| 12 | TABLE ACCESS BY INDEX ROWID | REC_SM_STDNT_DEG_COMPLETION | 26 | 1248 | 15179 (1)| 00:03:03 | |* 13 | INDEX SKIP SCAN | REC0SM_STDNT_DEG_IDX | 26 | | 15168 (1)| 00:03:03 |
That is, the estimated cardinality is 26. But if we look at the actual cardinality, the real number of rows is 4 orders of magnitude (!) higher than that. So of course everything goes wrong from there: the optimizer picks the wrong join methods, the wrong join order, etc.
And if we look at the predicate:
13 - access("A"."ACAD_CAREER"='UGRD' AND "A"."ACAD_PROG"='UBACH' AND "A"."IMPACT_FLAG"='Y') filter("A"."ACAD_PLAN"="A"."REQ_ACAD_PLAN" AND "A"."ACAD_PROG"='UBACH' AND "A"."IMPACT_FLAG"='Y' AND "A"."ACAD_CAREER"='UGRD')
We can assume that the problem is related to correlated predicates: you select rows from the table with 4 equality predicates, but 260k rows survive this filtering. The optimizer thinks it is closer to 26, not 260k, probably because the predicates are not really independent.
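A minimal sketch of the independence assumption being described here, with invented data: when the columns are perfectly correlated, multiplying per-column selectivities underestimates badly.

```python
# Three perfectly correlated columns: every "UGRD" row is also "UBACH"
# and "Y". Data and values are invented for the demonstration.

rows = [("UGRD", "UBACH", "Y")] * 260 + [("GRAD", "UMAST", "N")] * 740

def selectivity(col, value):
    return sum(1 for r in rows if r[col] == value) / len(rows)

# What the independence assumption computes: multiply the selectivities.
independent_est = (len(rows)
                   * selectivity(0, "UGRD")
                   * selectivity(1, "UBACH")
                   * selectivity(2, "Y"))

# What actually happens: the same 260 rows satisfy all three predicates.
actual = sum(1 for r in rows if r == ("UGRD", "UBACH", "Y"))

print(round(independent_est), actual)   # ~18 estimated vs 260 actual
```

Scaled up to the real table, the same effect produces the 26-vs-260k gap in the plan; extended statistics (column groups) or dynamic sampling are the usual remedies.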
There is another thing that seems suspicious: the filter predicate is redundant with the access predicates. There was another discussion on OTN not so long ago where this turned out to be a symptom of a bug (what makes it even more suspect is that that case also involved a SKIP SCAN). See:
Re: CBO does not consider cheaper NL-Plan without guidance
Hope that was helpful.
Best regards
Nikolai