Parallel hash join always spills to TEMP

Hello

I've noticed some strange behavior on Oracle 9.2.0.5 recently: a simple hash join of two tables - the smaller one with 16k rows / 1 MB, the bigger one with 2.5M rows / 1.5 GB - spills to TEMP when run in parallel (4 PQ slaves). What is strange is that serial runs behave as expected - the hash join completes in memory. I should add that both the parallel and serial runs correctly select the smaller table as the build input, but the parallel query always decides to buffer the data source (no matter what its size is).

To be more precise - all table statistics are gathered, I have enough PGA memory assigned to queries (WORKAREA_SIZE_POLICY = AUTO, PGA_AGGREGATE_TARGET = 6 GB) and I have analyzed the results. Even the hidden parameter _smm_px_max_size is set correctly, to about 2 GB; the problem is that the parallel execution still decides to spill (even though the build data per slave is only about 220 KB).

I dug into the traces (event 10104) and found a substantial difference between serial and parallel execution. It seems that some internal flag orders the PQ slaves to always buffer the data; here's what I found in a PQ slave trace:

HASH JOIN STATISTICS (INITIALIZATION)
Original memory: 4428800
Memory after all overhead: 4283220
Memory for slots: 3809280
Calculated overhead for partitions and row/slot managers: 473940
Hash-join fanout: 8
Number of partitions: 9
Number of slots: 15
Multiblock IO: 31
Block size(KB): 8
Cluster (slot) size(KB): 248
Hash-join fanout (manual): 8
Cluster/slot size(KB) (manual): 280
Minimum number of bytes per block: 8160
Bit vector memory allocation(KB): 128
Per partition bit vector length(KB): 16
Maximum possible row length: 1455
Estimated build size (KB): 645
Estimated row length (incl. overhead): 167
Immutable flags:
  BUFFER the output of the join for Parallel Query
kxhfSetPhase: phase=BUILD
kxhfAddChunk: add chunk 0 (sz=32) to slot table
kxhfAddChunk: chunk 0 (lb=800003ff640ebb50, slotTab=800003ff640ebce8) successfully added
kxhfSetPhase: phase=PROBE_1

The highlighted part - the immutable flag that buffers the join output - is not present in serial mode. Unfortunately I can't find anything that would help identify the reason, or the parameter that drives this behavior :(
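For anyone who wants to reproduce this kind of dump: the trace above came from event 10104, which can be enabled per session (a sketch; level 10 is a commonly used verbosity, and each PQ slave writes its own trace file):

```sql
-- Enable hash-join internals tracing (event 10104)
alter session set events '10104 trace name context forever, level 10';

-- ... run the parallel hash join here; check the PQ slave trace files ...

-- Switch the tracing off again
alter session set events '10104 trace name context off';
```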

Best regards
Bazyli

Published by: user10419027 on October 13, 2008 03:53

Buzzylee wrote:
Jonathan,

After today's trials my understanding of the problem has not changed significantly - I still don't understand why Oracle spills the probe table to disc.
The only new thing is that I can see it's not a typical "on disk" hash join, because the inner table is not written to TEMP. Moreover, you confirmed that this immutable flag is not what forces this kind of behavior (BTW, thanks for that!).

So maybe it's a bug? In the meantime, I checked it against a newer version of the DB (9.2.0.8) - same behavior.

I copied your example - the behavior also appears in 10g and 11g.
This probably isn't a bug, but it may be a case where a generic strategy is not appropriate.

The extra partition is NOT the probe table, it is the result of the hash join. The result is buffered until it can be forwarded to the next 'slave set' (which happens to be the query coordinator in this case). Your memory allocation allowed for about 18 slots (multiblock IO batches) of 31 blocks each. You used 8 of them for the hash table; the rest are available to hold the result.

Somewhere in your trace, around the point where you switch from writes to reads, you should see a summary of partition 8 reporting a number of 'in-memory slots' that will tell you the size of the result.

If the difference between the clusters and the in-memory slots is small, you may find that setting '_hash_multiblock_io_count' to a value less than the optimizer-selected 31 frees enough memory beyond the hash table for the result set to be built in memory.

Another option - to circumvent this spill - is to switch to a (broadcast, none) distribution.
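A sketch of what that distribution change might look like as a hint - the table names, aliases and join columns here are hypothetical, not from the original query:

```sql
-- Hypothetical example: broadcast the small build table instead of
-- hash-distributing both sides, so the join result need not be buffered
-- for a redistribution step.
select /*+ leading(s) use_hash(b) pq_distribute(b broadcast none) */
       s.pk_col, b.detail_col
  from small_tab s
  join big_tab   b
    on b.join_col = s.join_col;
```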

Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk

Tags: Database

Similar Questions

  • Hash join spilling to temp space

    I am using Oracle version 11.2.0.4.0. I have the below parameters in GV$PARAMETER:
    pga_aggregate_target - 8GB
    hash_area_size - 128 KB
    sort_area_size - 64 KB

    Now the query plan below has been running for ~1 hr and failing on temp space: "unable to extend temp segment by 128 in tablespace TEMP". We currently have ~200 GB allocated to temp. This query runs fine for the daily run with a nested loop and the required index, but for the monthly run the plan changes - due to the volume, I think - and goes for a HASH join, which I believe is a good decision by the optimizer.

    AFAIK, a hash join spilling to temp will slow the query response time, so I need expert advice: if we increase pga_aggregate_target, will HASH_AREA_SIZE be raised enough to accommodate the driving table? How much should we set, and should it be the same as the size of the driving table? Or is there another workaround? Note - the size of driving table B is ~400 GB.
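    A run-time check may help here (a sketch using the standard V$SQL_WORKAREA view; note that with workarea_size_policy = auto the hash_area_size / sort_area_size values are ignored and PGA_AGGREGATE_TARGET governs the workareas):

```sql
-- See how the hash-join workareas have been executing (optimal / one-pass /
-- multi-pass) and how big the temp segment got on the last execution.
select operation_type,
       policy,
       last_memory_used,
       last_execution,
       last_tempseg_size,
       estimated_optimal_size
  from v$sql_workarea
 where operation_type like 'HASH-JOIN%';
```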

    -----------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                      | Name                     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    -----------------------------------------------------------------------------------------------------------------------------------
    |   0 | INSERT STATEMENT               |                          |       |       |       |    10M(100)|          |       |       |
    |   1 |  LOAD TABLE CONVENTIONAL       |                          |       |       |       |            |          |       |       |
    |   2 |   FILTER                       |                          |       |       |       |            |          |       |       |
    |   3 |    HASH JOIN                   |                          |  8223K|  1811M|       |    10M  (1)| 35:30:55 |       |       |
    |   4 |     TABLE ACCESS STORAGE FULL  | A_GT                     |    82 |   492 |       |     2   (0)| 00:00:01 |       |       |
    |   5 |     HASH JOIN                  |                          |  8223K|  1764M|   737M|    10M  (1)| 35:30:55 |       |       |
    |   6 |      PARTITION RANGE ITERATOR  |                          |  8223K|   643M|       |    10M  (1)| 34:18:55 |   KEY |   KEY |
    |   7 |       TABLE ACCESS STORAGE FULL| B                        |  8223K|   643M|       |    10M  (1)| 34:18:55 |   KEY |   KEY |
    |   8 |      TABLE ACCESS STORAGE FULL | C_GT                     |    27M|  3801M|       |   118K  (1)| 00:23:48 |       |       |
    -----------------------------------------------------------------------------------------------------------------------------------
    
    
    

    Finding plans by trial and error is not an efficient use of time - and if it were a good idea to avoid hash joins, Oracle wouldn't have implemented them in the first place. I can understand your DBA having a yen to avoid them, however, because any spill of a hash join to disc often has a (relatively) much bigger effect than you might expect.  In this case, however, you have a nested loop into A_GT which operates 39M times to access a table of 82 rows through an index - clearly (a) the CPU work could be reduced if you included the table columns in the index definition, but more significantly the CPU cost of the A_GT/C_GT join would drop if you built an in-memory hash table from A_GT - that is, a hash join.

    What you are asking for is a description of how to optimize a data warehouse on an Exadata machine - a forum is not the right place for that discussion; all I can say is that you and your DBAs need to do some testing to find out the best way to match your queries to what Exadata offers, and keep an eye on the queries the application produces in case usage patterns change.  There are a few trivial generalities that anyone could offer:

    (a) partitioning by day is good, so you can ensure that your queries are able to use partition elimination to hit only the days they want; even better is if there is a limited set of partitions that you can

    (b) the I/O for large hash joins spilling to disk can be catastrophic compared to the underlying I/O for the tablescans that first access the data, which means that simple queries can give the impression that Exadata is incredibly fast (especially when the flash cache and storage indexes are effective), while slightly more complex queries are surprisingly slow in comparison.

    (c) once you get past the cell server flash cache, single block reads are slow - queries that do a lot of single-block I/O (viz: big reports using nested loop joins against randomly scattered data) can show very slow IO.

    You need to know the data, know the general structure of your queries, be ready to generate materialized views for complex derived data, and understand the strengths and weaknesses of Exadata.

    Regards

    Jonathan Lewis

  • Hash join

    Hi friends,


    If I have tables T1 and T2, where T1 has 100 rows and T2 has 20 rows: when doing a hash join, which table should be used to build the hash table, the larger or the smaller, and why? If that data set is too small to matter, then consider T1 with 10 million rows and T2 with 1 million rows.




    Thanks as always :)

    If you are a developer: "the database optimizer chooses."

    If you are a student: http://tahiti.oracle.com - read the docs... we are not helping people cheat on tests.
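    For context: the optimizer normally builds the hash table from the smaller row source. To see (or experiment with) the other choice, a sketch along these lines can be used - T1/T2 and the join column `id` are hypothetical:

```sql
-- swap_join_inputs forces the named table to become the build input,
-- so you can compare the resulting plans with dbms_xplan.
select /*+ use_hash(t2) swap_join_inputs(t2) */
       t1.*, t2.*
  from t1
  join t2
    on t2.id = t1.id;
```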

  • Hash join buffered

    Hi all

    What is the row source operation HASH JOIN BUFFERED? I know that it uses hash distribution to send the data, but other than that, what is the difference between a plain HASH JOIN and a HASH JOIN BUFFERED? I would like to know the working mechanism.

    First plan:
    ========
    The query coordinator scans, aggregates and hash-distributes ZIPXXX at line 13 to slave set 2

    Slave set 1 scans in parallel and hash-distributes ZIPXXX from line 17 to slave set 2

    Slave set 2 joins these at line 7, then hash-distributes (probably on another column) back to slave set 1 - that is why the hash join at line 7 must be buffered.

    Slave set 2 scans in parallel and hash-distributes REP_XXX at line 21 to slave set 1

    Slave set 1 hash-joins them at line 4, then passes them to the query coordinator to write to the table.

    It seems that you should 'alter session enable parallel dml' to allow the parallel load of the select at line 4.
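    The 'enable parallel DML' point looks like this in practice (a sketch; target_t / source_t are placeholder names):

```sql
-- Parallel DML is disabled by default and must be enabled per session,
-- otherwise only the query part of the insert..select runs in parallel.
alter session enable parallel dml;

insert /*+ append parallel(t) */ into target_t t
select * from source_t;

commit;  -- a direct-path/parallel insert must be committed before the
         -- session can query the table again
```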

    Second plan:
    =========
    The plan is virtually identical, although gathering statistics seems to have changed the table names.

    The query coordinator scans, aggregates and BROADCASTS GEO_XXX at line 13 to slave set 2

    Because the very small result set is broadcast, slave set 2 can scan and join GEO_SXXX at line 15, then sends the result (probably distributed on another column) back to slave set 1

    Because the very small result set is broadcast, slave set 1 can scan and join REP_XXX at line 15, and then pass the results to the QC to write to the table.

    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • changing the selected columns turns the hash join into a nested loop

    Hi all

    If I change "select * from ..." to "select table.* from ..." then the plan changes.
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID  a4fgvz5w6b0z8, child number 0
    -------------------------------------
    select * from ofertas ofe, ofertas_renting ofer where ofer.codigodeempresa = ofe.codigodeempresa    AND ofer.numerooferta =
    ofe.numerooferta    AND ofe.captacion = '1'
    
    Plan hash value: 3056192218
    
    ----------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation          | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    ----------------------------------------------------------------------------------------------------------------------------------------
    |*  1 |  HASH JOIN         |                 |      1 |  23766 |  4032   (2)|  27421 |00:00:00.96 |    5444 |  9608K|  1887K|   10M (0)|
    |*  2 |   TABLE ACCESS FULL| OFERTAS         |      1 |  23969 |  1324   (2)|  27421 |00:00:00.14 |    2140 |       |       |          |
    |   3 |   TABLE ACCESS FULL| OFERTAS_RENTING |      1 |  71297 |   937   (2)|  72385 |00:00:00.22 |    3304 |       |       |          |
    ----------------------------------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       1 - access("OFER"."CODIGODEEMPRESA"="OFE"."CODIGODEEMPRESA" AND "OFER"."NUMEROOFERTA"="OFE"."NUMEROOFERTA" AND
                  SYS_OP_DESCEND("OFER"."NUMEROOFERTA")=SYS_OP_DESCEND("OFE"."NUMEROOFERTA"))
       2 - filter("OFE"."CAPTACION"='1')
    
    
    22 filas seleccionadas.
    
    PLAN_TABLE_OUTPUT
    ----------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID  2410uqu059fgw, child number 0
    -------------------------------------
    select ofe.* from ofertas ofe, ofertas_renting ofer where ofer.codigodeempresa = ofe.codigodeempresa
    AND ofer.numerooferta = ofe.numerooferta    AND ofe.captacion = '1'
    
    Plan hash value: 4206210976
    
    ----------------------------------------------------------------------------------------------------------------
    | Id  | Operation          | Name               | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
    ----------------------------------------------------------------------------------------------------------------
    |   1 |  NESTED LOOPS      |                    |      1 |  23766 |  1333   (3)|  27421 |00:00:00.58 |   33160 |
    |*  2 |   TABLE ACCESS FULL| OFERTAS            |      1 |  23969 |  1324   (2)|  27421 |00:00:00.27 |    3910 |
    |*  3 |   INDEX UNIQUE SCAN| PK_OFERTAS_RENTING |  27421 |      1 |     0   (0)|  27421 |00:00:00.26 |   29250 |
    ----------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - filter("OFE"."CAPTACION"='1')
       3 - access("OFER"."CODIGODEEMPRESA"="OFE"."CODIGODEEMPRESA" AND
                  "OFER"."NUMEROOFERTA"="OFE"."NUMEROOFERTA")
    
    
    22 filas seleccionadas.
    Why does the plan change if the cost of the full table access to OFERTAS is identical in the two plans?

    Thank you very much.

    Joaquin Gonzalez

    Published by: Joaquín González on November 4, 2008 17:32

    Joaquín González wrote:
    Hello

    Perhaps the reason is that BLEVEL = 0?

    "Here are some common cases that could result in a variation between the basic formula and the actual result:

    ...

    Indexes where the blevel is set to 1 (the index goes directly from the root block to the leaf blocks). The optimizer effectively ignores the blevel if every column in the index appears in an equality predicate."

    Joaquin,

    you're referring to the chapter "Simple B-tree access", but this is a nested loop operation, so that does not apply. You can see that the "Simple B-tree access" case results in a cost of 1, as you have tested yourself.

    I think this is a special case: if you have a nested loop operation that uses a unique index access as the inner row source, then the cost of the unique index access is simply BLEVEL - 1. You might get a different cost if an additional table access by rowid is involved, which is usually the case. But even then the access to the unique index itself is always costed at BLEVEL - 1, and the table access by rowid is usually costed at 1 per iteration.

    You can see on page 313 (chapter "Nested Loops") that Jonathan used an example involving a unique index scan that also has a cost of 0.

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://oracle-randolf.blogspot.com/

    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/SQLT-pp/

  • Indexes and hash join

    Hi all

    A very quick question: can a hash join access both tables by index, the way a nested loop does?  Is this possible?

    For example:

    HASH JOIN

    TABLE ACCESS BY INDEX ROWID

    INDEX RANGE SCAN

    TABLE ACCESS BY INDEX ROWID

    INDEX RANGE SCAN


    * Edited

    Thank you

    Of course you can, if you make it worthwhile:

    orclz> set autot traceonly exp

    orclz> create index emp_ename_i on emp (ename);

    Index created.

    orclz> create index dept_dname_i on dept (dname);

    Index created.

    orclz> select /*+ use_hash(emp dept) */ * from emp natural join dept where dname = 'SALES' and ename = 'MILLER';

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 937889317

    ------------------------------------------------------------------------------------------------------
    | Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                     |              |     1 |   117 |     4   (0)| 00:00:01 |
    |*  1 |  HASH JOIN                           |              |     1 |   117 |     4   (0)| 00:00:01 |
    |   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| EMP          |     1 |    87 |     2   (0)| 00:00:01 |
    |*  3 |    INDEX RANGE SCAN                  | EMP_ENAME_I  |     1 |       |     1   (0)| 00:00:01 |
    |   4 |   TABLE ACCESS BY INDEX ROWID BATCHED| DEPT         |     1 |    30 |     2   (0)| 00:00:01 |
    |*  5 |    INDEX RANGE SCAN                  | DEPT_DNAME_I |     1 |       |     1   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       1 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
       3 - access("EMP"."ENAME"='MILLER')
       5 - access("DEPT"."DNAME"='SALES')

    Note
    -----
       - dynamic statistics used: dynamic sampling (level=4)

  • Hash join anti-NA

    Just another simple day at the office...

    Here's the case.
    A colleague contacted me saying that he had two similar queries. One returned data, the other did not.
    The "simplified" version of both queries looked like:
    SELECT col1
      FROM tab1
     WHERE col1 NOT IN (SELECT col1 FROM tab2);
    This query returned no data; however, it should have - I knew there was an inconsistency in the data, which should have produced rows.
    This was also proved/shown by the second query:
    SELECT col1
      FROM tab1
     WHERE NOT EXISTS
              (SELECT col1
                 FROM tab2
                WHERE tab1.col1 = tab2.col1);
    This query returned the expected difference. And this query is in fact identical to the first!
    Even when we hardcoded an extra WHERE clause, the result stayed the same. No rows for:
    SELECT *
      FROM tab1
     WHERE  tab1.col1 NOT IN (SELECT col1 FROM tab2)
           AND tab1.col1 = 'car';
    and the correct rows for:
    SELECT *
      FROM tab1
     WHERE     NOT EXISTS
                  (SELECT 1
                     FROM tab2
                    WHERE tab1.col1 = tab2.col1)
           AND tab1.col1 = 'car';
    After an hour of searching, trying to reproduce the problem, I was about to give up and send it to Oracle Support, qualifying it as a bug.
    However, there was one difference that I saw that could be the cause of the problem.
    Although the statements are almost the same, the execution plans showed a slight difference. The NOT IN query execution plan looked like:
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 5 Bytes: 808 Cardinality: 2
    3 HASH JOIN ANTI NA Cost: 5 Bytes: 808 Cardinality: 2
    1 TABLE ACCESS FULL TABLE PIM_KRG.TAB1 Cost: 2 Bytes: 606 Cardinality: 3 
    2 TABLE ACCESS FULL TABLE PIM_KRG.TAB2 Cost: 2 Bytes: 404 Cardinality: 2 
    Whereas the execution plan of the query with the NOT EXISTS looked like:
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 5 Bytes: 808 Cardinality: 2
    3 HASH JOIN ANTI Cost: 5 Bytes: 808 Cardinality: 2
    1 TABLE ACCESS FULL TABLE PIM_KRG.TAB1 Cost: 2 Bytes: 606 Cardinality: 3 
    2 TABLE ACCESS FULL TABLE PIM_KRG.TAB2 Cost: 2 Bytes: 404 Cardinality: 2 
    See the difference?
    Not knowing what a "HASH JOIN ANTI NA" was exactly, I entered it into the My Oracle Support knowledge base as a search term. In addition to a few patch-set lists, I also found Document 1082123.1, which explains all about the HASH JOIN ANTI NULL-AWARE.

    In this document the behavior we saw is explained, the most important part being the note:
    "If t2.n2 contains NULL values, do not return any t1 rows and terminate."

    And then it suddenly hit me why I had been unable to reproduce the case using my own test tables.

    In our case, this meant that if tab2.col1 contained any row with a NULL value, the join between the two tables based on a 'NOT IN' clause could not return anything.
    The query would end without any result!
    And that's exactly what we saw.

    The query with the NOT EXISTS does not use a NULL-AWARE ANTI JOIN and therefore does return results.

    There is also the workaround mentioned:
    alter session set "_optimizer_null_aware_antijoin" = false;
    but it does not seem to work. Although the execution plan changes:
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 4 Bytes: 202 Cardinality: 1 
    3 FILTER 
    1 TABLE ACCESS FULL TABLE PIM_KRG.TAB1 Cost: 2 Bytes: 606 Cardinality: 3 
    2 TABLE ACCESS FULL TABLE PIM_KRG.TAB2 Cost: 2 Bytes: 404 Cardinality: 2 
    it still returns no rows!


    And now?

    As there is a document explaining the behavior, I doubt we can classify this as a bug. But in my opinion, if developers don't know about this strange behavior, they will easily call it a bug.
    The 'problem' is easily solved (or worked around) using NOT EXISTS, or an NVL-style solution on the joined columns. However, I would expect the optimizer to sort these things out itself.
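    One such workaround can be written without NVL by simply excluding the NULLs from the subquery, so the anti-join no longer has to be null-aware (using the test tables from this post):

```sql
-- Filter the NULLs out of the subquery: with no NULL possible on the
-- right-hand side, NOT IN behaves the way NOT EXISTS does here.
SELECT col1
  FROM tab1
 WHERE col1 NOT IN (SELECT col1
                      FROM tab2
                     WHERE col1 IS NOT NULL);
```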


    For anyone who wants to reproduce/investigate this case, I have listed my test code below.
    The database version we used was 11.1.0.7 on Windows 2008 R2. I don't know whether it depends on the operating system.
    -- Create two tables, make sure they allow NULL values
    CREATE TABLE tab1 (col1 VARCHAR2 (100) NULL);
    CREATE TABLE tab2 (col1 VARCHAR2 (100) NULL);
    
    INSERT INTO tab1
    VALUES ('bike');
    
    INSERT INTO tab1
    VALUES ('car');
    
    INSERT INTO tab1
    VALUES (NULL);
    
    INSERT INTO tab2
    VALUES ('bike');
    
    INSERT INTO tab2
    VALUES (NULL);
    
    COMMIT;
    
    -- This query returns no results
    SELECT col1
      FROM tab1
     WHERE col1 NOT IN (SELECT col1 FROM tab2);
    
    -- This query returns results
    SELECT col1
      FROM tab1
     WHERE NOT EXISTS
              (SELECT col1
                 FROM tab2
                WHERE tab1.col1 = tab2.col1);
    I have also posted the text above at http://managingoracle.blogspot.com

    If anyone has the true explanation of this behavior - why the HASH JOIN ANTI NA terminates early - please give details.
    Thank you

    Kind regards
    FJFranken

    As you have discovered, NOT IN and NOT EXISTS are NOT the same.

    This is expected behavior when comparing with the value NULL.

    See:
    http://jonathanlewis.wordpress.com/2007/02/25/not-in/

  • 12c parallel execution plans

    Hi all

    I have a bit of a performance problem on 12c that is giving me some headaches. I moved the databases from 11 to 12 and no application changes were made. Our queries are generated somewhat dynamically, so they are not the same every time.

    Let's start with the execution plan I get:

    SQL> select * from table(dbms_xplan.display());


    PLAN_TABLE_OUTPUT

    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Plan hash value: 3567104424

    -----------------------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                                             | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    -----------------------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                                      |                       |    55 |  7095 |  3764   (1)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                                       |                       |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (ORDER)                                  | :TQ10006              |    55 |  7095 |  3764   (1)| 00:00:01 |  Q1,06 | P->S | QC (ORDER) |
    |   3 |    SORT ORDER BY                                      |                       |    55 |  7095 |  3764   (1)| 00:00:01 |  Q1,06 | PCWP |            |
    |   4 |     PX RECEIVE                                        |                       |    55 |  7095 |  3763   (1)| 00:00:01 |  Q1,06 | PCWP |            |
    |   5 |      PX SEND RANGE                                    | :TQ10005              |    55 |  7095 |  3763   (1)| 00:00:01 |  Q1,05 | P->P | RANGE      |
    |   6 |       SORT UNIQUE                                     |                       |    55 |  7095 |  3763   (1)| 00:00:01 |  Q1,05 | PCWP |            |
    |*  7 |        HASH JOIN                                      |                       |    55 |  7095 |  3762   (1)| 00:00:01 |  Q1,05 | PCWP |            |
    |   8 |         PX RECEIVE                                    |                       |   801 | 50463 |  3696   (1)| 00:00:01 |  Q1,05 | PCWP |            |
    |   9 |          PX SEND HASH                                 | :TQ10003              |   801 | 50463 |  3696   (1)| 00:00:01 |  Q1,03 | P->P | HASH       |
    |* 10 |           HASH JOIN                                   |                       |   801 | 50463 |  3696   (1)| 00:00:01 |  Q1,03 | PCWP |            |
    |  11 |            PX RECEIVE                                 |                       |   801 | 40851 |  2333   (1)| 00:00:01 |  Q1,03 | PCWP |            |
    |  12 |             PX SEND BROADCAST                         | :TQ10002              |   801 | 40851 |  2333   (1)| 00:00:01 |  Q1,02 | P->P | BROADCAST  |
    |  13 |              NESTED LOOPS                             |                       |   801 | 40851 |  2333   (1)| 00:00:01 |  Q1,02 | PCWP |            |
    |  14 |               BUFFER SORT                             |                       |       |       |            |          |  Q1,02 | PCWC |            |
    |  15 |                PX RECEIVE                             |                       |       |       |            |          |  Q1,02 | PCWP |            |
    |  16 |                 PX SEND HASH                          | :TQ10000              |       |       |            |          |        | S->P | HASH       |
    |  17 |                  NESTED LOOPS                         |                       |   823 | 31274 |  1509   (1)| 00:00:01 |        |      |            |
    |* 18 |                   TABLE ACCESS BY INDEX ROWID BATCHED | PAGED_LOOKUP_PKS      |   500 |  9500 |     3   (0)| 00:00:01 |        |      |            |
    |* 19 |                    INDEX RANGE SCAN                   | PAGED_LOOKUP_PKS_IDX2 |     1 |       |     2   (0)| 00:00:01 |        |      |            |
    |  20 |                   TABLE ACCESS BY INDEX ROWID BATCHED | BILL_ITEM             |     2 |    38 |     4   (0)| 00:00:01 |        |      |            |
    |* 21 |                    INDEX RANGE SCAN                   | BILL_ITEM_FK2         |     4 |       |     2   (0)| 00:00:01 |        |      |            |
    |* 22 |               INDEX UNIQUE SCAN                       | PK_INSERTION          |     1 |    13 |     1   (0)| 00:00:01 |  Q1,02 | PCWP |            |
    |  23 |            PX BLOCK ITERATOR                          |                       |  1548K|    17M|  1353   (2)| 00:00:01 |  Q1,03 | PCWC |            |
    |  24 |             INDEX FAST FULL SCAN                      | BOOKING_ACCOUNT_1     |  1548K|    17M|  1353   (2)| 00:00:01 |  Q1,03 | PCWP |            |
    |  25 |         PX RECEIVE                                    |                       | 22037 |  1420K|    65   (2)| 00:00:01 |  Q1,05 | PCWP |            |
    |  26 |          PX SEND HASH                                 | :TQ10004              | 22037 |  1420K|    65   (2)| 00:00:01 |  Q1,04 | S->P | HASH       |
    |  27 |           PX SELECTOR                                 |                       |       |       |            |          |  Q1,04 | SCWC |            |
    |  28 |            TABLE ACCESS FULL                          | CONTACT               | 22037 |  1420K|    65   (2)| 00:00:01 |  Q1,04 | SCWP |            |
    -----------------------------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       7 - access("ACCOUNT_ID"="ACCOUNT_ID")
      10 - access("BOOKING"="BOOKING")
      18 - filter("T1"."SEQUENCE_NO"<501 AND "T1"."SEQUENCE_NO">=1)
      19 - access("T1"."SESSION_ID"=123456 AND "T1"."SEARCH_ID"=25)
      21 - access("T1"."N1"="BILL_ID")
      22 - access("BOOKING"="BOOKING" AND "INSERTION_SET"="INSERTION_SET" AND "INSERT"="INSERT")

    Note
    -----
       - dynamic statistics used: dynamic sampling (level=2)
       - this is an adaptive plan
       - 2 SQL plan directives used for this statement

    51 rows selected.

    Elapsed: 00:00:00.15

    SQL> spool off

    OK, now let's go through the problem:

    1. This is a development database running on a virtual server which hosts a few other databases, so parallel execution is not a good thing. parallel_degree_policy is set to MANUAL, parallel_max_servers and all the other parallel_ limits are set to 1, and the tables have been altered to NOPARALLEL. So why is the execution plan still generated with all those parallel execution steps? I can't seem to get rid of them in 12c.
    2. The next mystery is that the explain plan above says it is an adaptive plan, and yet I have set optimizer_adaptive_reporting_only to true.
    3. Now to the actual execution problem, which is why I am playing around with all these settings. The query runs for 3-4 seconds, returning around 500 rows. However, in some cases the same query with the same input variables runs for hours and, if I can believe the AWR and ASH reports, reads a good 180 GB of data. The main wait events are direct path read temp and direct path write temp.


    This is not isolated to that one query. I have a few queries now that all display the same behavior, one of them running overnight. I can't seem to get back to standard nested loop execution plans.


    The whole thing is a pluggable database, and I am not sure whether I just missed something in the new features guide.

    Would appreciate some ideas.

    Thank you

    If you want to disable parallel execution, you must set parallel_max_servers to zero.  Maybe the optimizer thinks he can use a parallel plan because parallel_max_servers is non-zero (even though the number of slaves available means that it will be serialized to a parallel plan).

    Note that you have a note saying dynamic statistics have been used.  Maybe you have optimizer_dynamic_sampling set to 11, which allows Oracle to be very inventive about collecting samples and using parallelism.

    You also have 2 SQL plan directives in play. These are things that get associated with objects rather than with statements, so perhaps someone has been playing with parallelism and managed to associate parallelism with one of the tables in your query (I'm not 100% sure that's possible, just throwing out a suggestion).  Take a look at the SQL that the directives are used for.

    To give us a little more information, you can:

    Pull the execution plan from memory: dbms_xplan.display_cursor({sql_id}, {child_number}, 'ALL')

    Show us all the parallel settings (show parameter parallel)

    Pull the optimizer parameters for the query from memory (select name, value from v$sql_optimizer_env where sql_id = {your sql_id} and child_number = {your child number})

    Regards

    Jonathan Lewis

  • Partition-wise join possible with interval partitions?

    Hello

    I want to know whether a partition-wise join (PWJ) is possible with interval partitioning. I can't find an explicit statement that it isn't, but I can't make it work, so I built a simple test case to illustrate the issue.

    Below are 2 create table scripts: 1 for the interval case and 1 for the hash case. I then run a simple query against these 2 objects which should produce a PWJ.

    In the hash case it works fine (see the 2nd screenshot, with one set of slaves); the first screenshot shows the interval case, where I end up with 2 sets of slaves and no PWJ.

    Any idea whether this is possible and I've just missed something?

    (for the test case, choose schema/storage names appropriate for your system)

    Oh, and the version (I almost forgot... :-)) is 11.2.0.4.1 on SLES 11.

    Cheers,

    Rich

    -- interval case

    CREATE TABLE "SB_DWH_IN"."TEST1"
    TABLESPACE "SB_DWH_INTEGRATION"
    PARTITION BY RANGE ("OBJECT_ID") INTERVAL (10000)
    (PARTITION "LESS_THAN_ZERO" VALUES LESS THAN (0) TABLESPACE "SB_DWH_INTEGRATION")
    as select * from DBA_OBJECTS where object_id is not null;

    CREATE TABLE "SB_DWH_IN"."TEST2"
    TABLESPACE "SB_DWH_INTEGRATION"
    PARTITION BY RANGE ("OBJECT_ID") INTERVAL (10000)
    (PARTITION "LESS_THAN_ZERO" VALUES LESS THAN (0) TABLESPACE "SB_DWH_INTEGRATION")
    as select * from DBA_OBJECTS where object_id is not null;

    -- hash case

    CREATE TABLE "SB_DWH_IN"."TEST1"
    TABLESPACE "SB_DWH_INTEGRATION"
    PARTITION BY HASH ("OBJECT_ID") PARTITIONS 8
    STORE IN ("SB_DWH_INTEGRATION")
    as select * from DBA_OBJECTS where object_id is not null;

    CREATE TABLE "SB_DWH_IN"."TEST2"
    TABLESPACE "SB_DWH_INTEGRATION"
    PARTITION BY HASH ("OBJECT_ID") PARTITIONS 8
    STORE IN ("SB_DWH_INTEGRATION")
    as select * from DBA_OBJECTS where object_id is not null;

    -- query to run

    select /*+ PARALLEL(TEST2,8) PARALLEL(TEST1,8) */ *
    from "SB_DWH_IN"."TEST2", "SB_DWH_IN"."TEST1"
    where TEST1.object_id = test2.object_id;

    nonPWJ.PNG

    pwjenabled.PNG

    It's expected, and a consequence of the arithmetic around the estimated number of parallel slaves.

    At your degree of parallelism each slave made 3 passes (i.e. handled 3 partitions).

    Add a partition (per table), and one slave set would have to handle a 4th pass: the cost of the query using the PWJ would increase by 33 percent even though the data has changed by less than 0.8%.
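That pass arithmetic can be sketched numerically (a toy illustration; the 12-partition/4-slave figures and the function name are assumptions of mine, not from the thread):

```python
import math

def slave_passes(partitions: int, slaves: int) -> int:
    # One partition per slave per pass, so a slave set needs
    # ceil(partitions / slaves) passes to cover every partition.
    return math.ceil(partitions / slaves)

before = slave_passes(12, 4)   # 3 passes
after = slave_passes(13, 4)    # one extra partition forces a 4th pass

# Cost of the partition-wise plan grows roughly with the pass count:
print(before, after, round(100 * (after - before) / before))  # 3 4 33
```

So one extra partition per table bumps the costed work by a third, even though the data barely changed.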

    I guess that's why Oracle redistributes your 1M rows for a hash join in production.

    Because the decision is cost-based, it is possible that a very extreme skew in partition sizes in a billion-row table might push the optimizer into a non-PWJ join, but I have not tested that.

    If you want to force the plan, John Watson's suggestion of a pq_distribute hint is the relevant one.  To cover all the bases, and calling your tables SMALL and LARGE:

    /*+
        leading(small large)
        use_hash(large)
        no_swap_join_inputs(large)
        pq_distribute(large none none)
    */

    If it's legal, that should do it.
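For intuition, the full partition-wise join the hint tries to force simply joins matching partition pairs, with no redistribution of rows between slave sets. A minimal sketch (table shapes, partition count, and function names are invented for illustration, not Oracle internals):

```python
from collections import defaultdict

def hash_partition(rows, key, nparts):
    """Split rows into nparts buckets on the join key (stand-in for
    both tables being hash partitioned on OBJECT_ID)."""
    parts = defaultdict(list)
    for row in rows:
        parts[hash(row[key]) % nparts].append(row)
    return parts

def partition_wise_join(t1, t2, key, nparts):
    """Join partition i of t1 only with partition i of t2: each pair
    is an independent unit of work for one PQ slave, so one slave set
    suffices and no rows are redistributed."""
    p1, p2 = hash_partition(t1, key, nparts), hash_partition(t2, key, nparts)
    out = []
    for i in range(nparts):
        build = defaultdict(list)
        for row in p2.get(i, []):        # build a hash table per partition
            build[row[key]].append(row)
        for row in p1.get(i, []):        # probe with the matching partition
            for match in build.get(row[key], []):
                out.append((row, match))
    return out

t1 = [{"object_id": n} for n in range(100)]
t2 = [{"object_id": n} for n in range(0, 100, 2)]
print(len(partition_wise_join(t1, t2, "object_id", 8)))  # 50
```

The key property is that partition i of one table can only match partition i of the other, which is exactly what identical partitioning on the join key guarantees.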

    Regards

    Jonathan Lewis

  • Query stuck on direct path write temp after 11.1.0.7 to 11.2.0.4 upgrade

    Dear respected community members,

    We have upgraded the database to 11.2 and now see some queries stuck on heavy direct path write temp.
    We have seen queries complete with an error after a few hours:
    ORA-1652: unable to extend temp segment by 16 in tablespace TEMP1
    where TEMP1 fills up at 32G (usage is normally around 10G)

    Our memory parameters are:
    pga_aggregate_target    big integer    4G
    sga_target              big integer    12G
    sga_max_size            big integer    12G

    Physical memory on the operating system is 24G.

    Looking forward to getting this solved.

    Thank you.

    Ali

    This would mean that the execution plan has changed and now uses more temporary space (for example, for a hash join).

    Look at the execution plans from before and after the upgrade and identify the differences.

    See if you can tune the SQL, e.g. change the joins.

    Hemant K Collette

  • When does the join happen?

    Hi experts,

    For any type of join (hash join, merge join), please correct me if I'm wrong, doesn't Oracle first have to store all the data of one data set before joining it to the other? Where is it stored? Is the PGA used for this operation? In other words, say the optimizer uses the hash join method (an in-memory hash table must be built for this operation): is it built in the PGA or the SGA?

    If it's in the PGA, what happens if the data set to be sorted won't fit in memory?

    Thank you

    Basically: yes.

    If the input set is smaller than the (explicitly defined or automatically assigned) value of hash_area_size, then the hash join is done in PGA memory (an "optimal" workarea execution). If the set is bigger, Oracle has to spill the intermediate result to the TEMP tablespace (resulting in "onepass" or even "multipass" executions). Jonathan Lewis describes the mechanism in detail in Cost-Based Oracle Fundamentals.
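As a rough mental model, this can be sketched in a few lines of Python (a toy, not Oracle's implementation: `memory_limit` plays the role of the workarea size, and "spilling to TEMP" is just an in-memory list per partition):

```python
from collections import defaultdict

def hash_join(build_rows, probe_rows, key, memory_limit):
    """Toy hash join: if the build input fits in memory_limit rows,
    join entirely in memory (the 'optimal' case). Otherwise partition
    both inputs first and join partition by partition, the analogue
    of a 'onepass' execution that spills to TEMP."""
    if len(build_rows) <= memory_limit:            # optimal execution
        table = defaultdict(list)
        for r in build_rows:
            table[r[key]].append(r)
        return [(b, p) for p in probe_rows for b in table.get(p[key], [])], "optimal"

    nparts = 8                                     # fan-out, cf. the 10104 trace
    bparts, pparts = defaultdict(list), defaultdict(list)
    for r in build_rows:                           # "write build rows to TEMP"
        bparts[hash(r[key]) % nparts].append(r)
    for r in probe_rows:                           # "write probe rows to TEMP"
        pparts[hash(r[key]) % nparts].append(r)
    out = []
    for i in range(nparts):                        # re-read one partition at a time
        table = defaultdict(list)
        for r in bparts[i]:
            table[r[key]].append(r)
        out += [(b, p) for p in pparts[i] for b in table.get(p[key], [])]
    return out, "onepass"

small = [{"id": n} for n in range(10)]
big = [{"id": n % 10} for n in range(1000)]
rows, mode = hash_join(small, big, "id", memory_limit=100)
print(mode, len(rows))  # optimal 1000
```

With `memory_limit=5` the same call takes the "onepass" path and produces the same 1000 rows, just with the extra partitioning work, which is exactly why spilling hurts performance without changing results.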

  • Reg: Parallel Execution

    Hi Experts,

    I have the query below. I was wondering: without a parallel hint, how does it come to run in parallel naturally?

    Select * from STC
    where
    exists (
    Select 1
    from STI
    where
    STC.LNO = STI.LNO and
    STI.act_id = 1 and
    STI.codec in ('2697', '6697', '2737', '6737', '3886', '7886', '2692', '6692',
    '3483', '7483', '500', '800', '501', '801', '3888', '7888', '3887',
    '7887', '3946', '7946', '3945', '7945', '3944', '7944', '3953',
    '7953', '3954', '7954', '3955', '7955', '3481', '3482', '7481',
    '7482', '3960', '7960', '4072', '8072', '4284', '8284', '4343',
    '8343', '4499', '8499', '10038', '14038', '3959', '11482', '11483',
    '11484', '11485', '11486', '11487', '11488', '11489', '11490',
    '11491', '11492', '11493', '11494', '11495', '11496', '11510',
    '11560', '11561', '11562', '11563', '11564', '11565', '11566',
    '7959', '15482', '15483', '15484', '15485', '15486', '15487',
    '15488', '15489', '15490', '15491', '15492', '15493', '15494',
    '15495', '15496', '15510', '15560', '15561', '15562', '15563',
    '15564', '15565', '15566', '11588', '15588', '12096', '16096')
    );



    PLAN_TABLE_OUTPUT
    Plan hash value: 347973291

    --------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                  | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    --------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT           |          |    86M|  6992M|  2901K  (1)| 09:40:22 |        |      |            |
    |   1 |  PX COORDINATOR            |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)      | :TQ10002 |    86M|  6992M|  2901K  (1)| 09:40:22 |  Q1,02 | P->S | QC (RAND)  |
    |*  3 |    HASH JOIN RIGHT SEMI    |          |    86M|  6992M|  2901K  (1)| 09:40:22 |  Q1,02 | PCWP |            |
    |   4 |     PX RECEIVE             |          |   263K|  4887K|  5456  (14)| 00:01:06 |  Q1,02 | PCWP |            |
    |   5 |      PX SEND HASH          | :TQ10001 |   263K|  4887K|  5456  (14)| 00:01:06 |  Q1,01 | P->P | HASH       |
    |   6 |       PX BLOCK ITERATOR    |          |   263K|  4887K|  5456  (14)| 00:01:06 |  Q1,01 | PCWC |            |
    |*  7 |        INDEX FAST FULL SCAN| STI__IDX |   263K|  4887K|  5456  (14)| 00:01:06 |  Q1,01 | PCWP |            |
    |   8 |     BUFFER SORT            |          |       |       |            |          |  Q1,02 | PCWC |            |
    |   9 |      PX RECEIVE            |          |  1087M|    66G|  2891K  (1)| 09:38:14 |  Q1,02 | PCWP |            |
    |  10 |       PX SEND HASH         | :TQ10000 |  1087M|    66G|  2891K  (1)| 09:38:14 |        | S->P | HASH       |
    |  11 |        TABLE ACCESS FULL   | STC      |  1087M|    66G|  2891K  (1)| 09:38:14 |        |      |            |
    --------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    3 - access("STC"."LOANNUMBER"="STI"."LOANNUMBER")
    7 - filter("STI"."ACTIVEIV"=1 AND ("STI"."codec"='10038' OR "STI"."codec"='11482' OR
        "STI"."codec"='11483' OR "STI"."codec"='11484' OR "STI"."codec"='11485' OR
        "STI"."codec"='11486' OR "STI"."codec"='11487' OR "STI"."codec"='11488' OR
        "STI"."codec"='11489' OR "STI"."codec"='11490' OR "STI"."codec"='11491' OR
        "STI"."codec"='11492' OR "STI"."codec"='11493' OR "STI"."codec"='11494' OR
        "STI"."codec"='11495' OR "STI"."codec"='11496' OR "STI"."codec"='11510' OR
        "STI"."codec"='11560' OR "STI"."codec"='11561' OR "STI"."codec"='11562' OR
        "STI"."codec"='11563' OR "STI"."codec"='11564' OR "STI"."codec"='11565' OR
        "STI"."codec"='11566' OR "STI"."codec"='11588' OR "STI"."codec"='12096' OR
        "STI"."codec"='14038' OR "STI"."codec"='15482' OR "STI"."codec"='15483' OR
        "STI"."codec"='15484' OR "STI"."codec"='15485' OR "STI"."codec"='15486' OR
        "STI"."codec"='15487' OR "STI"."codec"='15488' OR "STI"."codec"='15489' OR
        "STI"."codec"='15490' OR "STI"."codec"='15491' OR "STI"."codec"='15492' OR
        "STI"."codec"='15493' OR "STI"."codec"='15494' OR "STI"."codec"='15495' OR
        "STI"."codec"='15496' OR "STI"."codec"='15510' OR "STI"."codec"='15560' OR
        "STI"."codec"='15561' OR "STI"."codec"='15562' OR "STI"."codec"='15563' OR
        "STI"."codec"='15564' OR "STI"."codec"='15565' OR "STI"."codec"='15566' OR
        "STI"."codec"='15588' OR "STI"."codec"='16096' OR "STI"."codec"='2692' OR
        "STI"."codec"='2697' OR "STI"."codec"='2737' OR "STI"."codec"='3481' OR
        "STI"."codec"='3482' OR "STI"."codec"='3483' OR "STI"."codec"='3886' OR
        "STI"."codec"='3887' OR "STI"."codec"='3888' OR "STI"."codec"='3944' OR
        "STI"."codec"='3945' OR "STI"."codec"='3946' OR "STI"."codec"='3953' OR
        "STI"."codec"='3954' OR "STI"."codec"='3955' OR "STI"."codec"='3959' OR
        "STI"."codec"='3960' OR "STI"."codec"='4072' OR "STI"."codec"='4284' OR
        "STI"."codec"='4343' OR "STI"."codec"='4499' OR "STI"."codec"='500' OR
        "STI"."codec"='501' OR "STI"."codec"='6692' OR "STI"."codec"='6697' OR
        "STI"."codec"='6737' OR "STI"."codec"='7481' OR "STI"."codec"='7482' OR
        "STI"."codec"='7483' OR "STI"."codec"='7886' OR "STI"."codec"='7887' OR
        "STI"."codec"='7888' OR "STI"."codec"='7944' OR "STI"."codec"='7945' OR
        "STI"."codec"='7946' OR "STI"."codec"='7953' OR "STI"."codec"='7954' OR
        "STI"."codec"='7955' OR "STI"."codec"='7959' OR "STI"."codec"='7960' OR
        "STI"."codec"='800' OR "STI"."codec"='801' OR "STI"."codec"='8072' OR
        "STI"."codec"='8284' OR "STI"."codec"='8343' OR "STI"."codec"='8499'))

    What could be the possible reason?

    There is no session-level setting like ALTER SESSION ENABLE PARALLEL QUERY or anything similar in effect either.

    Please advise.

    Thank you and best regards,

    -Nordine

    (on Oracle 11.2.0.3.0)

    nordine B wrote:

    Thanks Hoek. Learned something new.

    BTW, I checked the DOP on this table and there is no PARALLELism present.

    Any other pointers as to why this could happen?

    You have probably only checked the table; check the parallelism of the *index* STI__IDX - that's probably the problem. It's the no. 1 issue with index rebuilds run in parallel to speed up the rebuild process, after which people forget to reset the degree back to 1. This changed in 12c, incidentally: index rebuilds and table moves no longer change/persist the degree of the operation in the dictionary for the rebuilt/moved object.

    Randolf

  • ORDER BY in the subselect, then a join

    I had a difficult time in the research on this or even prove it so I came to the forum asking for advice.

    My question is how Oracle handles the ordering of a query like this one:

    SELECT *
      FROM (  SELECT *
                FROM (    SELECT LEVEL c1,
                                   LEVEL
                                 * TRUNC(DBMS_RANDOM.VALUE(1,
                                                           4))
                                    val_rnd
                            FROM DUAL
                      CONNECT BY LEVEL <= 200)
            ORDER BY c1 ASC) a,
           (    SELECT TRUNC(  LEVEL
                             * DBMS_RANDOM.VALUE(1,
                                                 4))
                          val_rnd2
                  FROM DUAL
            CONNECT BY LEVEL <= 200) b
     WHERE a.c1 = b.val_rnd2(+)
    

    In this case, my subselect orders by c1 ascending. That part I understand, and my subquery 'a' comes out ordered. My next step joins this sorted result to a second row source ('b').

    In this join, does Oracle always preserve my initial sort, or do I need to add a second ORDER BY a.c1 ASC after the join to guarantee that the result comes back in sorted order?

    My example above returns the rows the way I want, but I can't tell whether that's just me being lucky and Oracle happening to return them correctly, or whether it is 100% guaranteed to work this way.

    I'm running on Oracle 11.2.0.4

    Oracle can transform your query into something that is easier to optimize but has the same semantics. Since the ORDER BY is in an inline view, I guess Oracle could even decide to ignore it (although I'm not sure on that point). As Justin and Frank already wrote: only an ORDER BY in the outermost query gives you a 100% guarantee. If I run your query on 11.2.0.1 the plan is:

    ----------------------------------------------------------------------------------------
    | Id  | Operation                        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    ----------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                 |      |     1 |    39 |     6  (34)| 00:00:01 |
    |   1 |  SORT ORDER BY                   |      |     1 |    39 |     6  (34)| 00:00:01 |
    |*  2 |   HASH JOIN OUTER                |      |     1 |    39 |     5  (20)| 00:00:01 |
    |   3 |    VIEW                          |      |     1 |    26 |     2   (0)| 00:00:01 |
    |*  4 |     CONNECT BY WITHOUT FILTERING |      |       |       |            |          |
    |   5 |      FAST DUAL                   |      |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    VIEW                          |      |     1 |    13 |     2   (0)| 00:00:01 |
    |*  7 |     CONNECT BY WITHOUT FILTERING |      |       |       |            |          |
    |   8 |      FAST DUAL                   |      |     1 |       |     2   (0)| 00:00:01 |
    ----------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):

    ---------------------------------------------------

    2 - access("from$_subquery$_002"."C1"="B"."VAL_RND2"(+))

    4 - filter(LEVEL<=200)

    7 - filter(LEVEL<=200)

    So we can see that Oracle applied a transformation and moved the SORT ORDER BY out of the subquery to the top of the plan: with this plan you get the correct order, but there is no guarantee that Oracle will use this plan (and transformation) in different versions / with different settings etc.
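The same point can be illustrated outside SQL with a toy hash join: the output order follows whichever input drives the probe, and the optimizer (cf. swap_join_inputs) is free to choose either side, so a sort inside an inline view is not a guarantee. All names here are invented for illustration:

```python
def hash_join(build, probe, key=lambda x: x):
    """Toy equi-join: output order follows the probe input,
    not the build input."""
    table = {}
    for b in build:
        table.setdefault(key(b), []).append(b)
    return [p for p in probe for _ in table.get(key(p), [])]

a = sorted([3, 1, 2])   # the 'ORDER BY' inside the inline view
b = [2, 3, 1]

# If the engine happens to probe with a, the sorted order survives...
print(hash_join(b, a))  # [1, 2, 3]
# ...but if it swaps the join inputs, it does not:
print(hash_join(a, b))  # [2, 3, 1]
```

Same rows either way; only the final ORDER BY in the outermost query pins down which of the two orders you get.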

  • The Sky is Falling! ORA-01652: unable to extend temp segment by 128

    So, we currently have a production problem, and I'm not that knowledgeable, being a humble Java developer and not an Oracle expert.
    We keep receiving the error below when a certain heavy query hits the DB.
    Our DBA says the "TABLE_SPACE_NAME_HERE" tablespace has 20 GB of space and that the problem is the query.
    The query has been working fine for many, many months, but all of a sudden it's a problem and we need to do something quickly.
    We tried bouncing the application server, but the error came right back as soon as the large select query was run.
    Any thoughts? Help! : )

    java.sql.SQLException: ORA-01652: unable to extend temp segment by 128 in tablespace TABLE_SPACE_NAME_HERE

    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:113)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
    at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:754)
    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:219)
    at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:972)
    at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1074)
    at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:854)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1156)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3415)
    at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3460)
    at org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeQuery(WrappedPreparedStatement.java:296)

    The error indicates that you have run out of space in your temporary tablespace.

    It is quite possible that the query plan has changed and that, for some reason, your query now uses a plan that causes it to use radically more temporary space for things like sorts and hash joins than it did in the past. Depending on the Oracle version, the licensed options, whether Statspack is installed, etc., your DBA can probably check whether the query plan for this statement has changed. If it has, the DBA should be able to force Oracle to use the old query plan, and will probably need to investigate what caused the plan change in the first place and resolve the underlying problem.

    It is also possible that the query itself is not using more temp space than normal, but that the number of users (or their workload, or the volumes of data they work with) grew, causing the application to require more space in the temporary tablespace. If everyone does a 10 MB sort and you have 2,000 concurrent users, you would use a total of 20 GB of temporary space. Add a 2,001st user doing a 10 MB sort and now you're out of temporary space. Similarly, if the data volume has increased a little, you could go from having just enough space to being out of space.
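That arithmetic, as a trivial sketch (the per-user sort size and tablespace size are the hypothetical figures from the paragraph above, in decimal MB):

```python
SORT_MB_PER_USER = 10        # each session spills a ~10 MB sort
TEMP_MB = 20_000             # a 20 GB temporary tablespace

def temp_used_mb(concurrent_users: int) -> int:
    # Aggregate demand: every concurrent sort claims its own temp space.
    return concurrent_users * SORT_MB_PER_USER

print(temp_used_mb(2000) <= TEMP_MB)  # True  - 2,000 users just fit
print(temp_used_mb(2001) <= TEMP_MB)  # False - one more user -> ORA-1652
```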

    Justin

  • Slow queries with WITH and JOIN

    Hello
    This code takes too long to complete:
    WITH rawData AS -- 563 rows in 0.07s OR 59 rows in 0.02s
    (
    SELECT
         date_releve AS x1,
         index_corrige AS y1,
         LEAD(date_releve) OVER (PARTITION BY id_compteur ORDER BY date_releve) AS x2,
         LEAD(index_corrige) OVER (PARTITION BY id_compteur ORDER BY date_releve) AS y2,
         id AS id_releve,
         id_compteur
    FROM V_relevesCorriges
    ),
    
    meteoData AS -- 1082 rows in 1.34s OR 116 rows in 0.16s
    (
    SELECT avg(meteo.valeur) AS meteoValue, x2 AS dateMeteo, id_variable, id_releve, id_compteur
    FROM meteo, rawData
    WHERE date_meteo <= x2 AND date_meteo > x1
    GROUP BY id_releve, id_variable, x2, id_compteur
    ORDER BY x2
    ),
    
    consoData AS -- 1104 rows in 1.43s, 117 rows in 0.2s
    (
    SELECT
    to_char(x1, 'DD.MM.YYYY') || ' - ' || to_char(x2, 'DD.MM.YYYY') AS periode,
         meteoValue AS meteo_moyenne,
         (y2 - y1) / nullif((x2 - x1),0) AS conso_par_jour,
         (y2 - y1) AS conso,
         rawData.id_releve id_releve,
         meteoData.id_variable id_variable,
         meteoData.id_compteur id_compteur
    FROM rawData LEFT OUTER JOIN meteoData ON rawData.id_releve = meteoData.id_releve
    ORDER BY x2
    )
    
    SELECT periode, meteo_moyenne, conso_par_jour, consoData.id_variable id_variable, consoData.id_releve id_releve, id_compteur -- 1104 rows in 1.51s, 116 rows in 1.34s
    FROM consoData LEFT OUTER JOIN diagnostic2 ON consoData.id_releve = diagnostic2.id_releve AND consoData.id_variable = diagnostic2.id_variable
    WHERE Id_compteur = 4
    If I remove "WHERE Id_compteur = 4" on the last line, there is almost no difference in run time: without the WHERE clause it returns 1104 rows in 1.51 s, with it, 116 rows in 1.34 s.

    I say it takes too long because when I put the WHERE clause inside "consoData" instead (WHERE meteoData.id_compteur = 4), it returns the same 116 rows in 0.2 s. If I remove the LEFT OUTER JOIN to diagnostic2 (the second-to-last line), it also takes 0.2 s.

    I think the solution would be to force "WHERE Id_compteur = ..." to take effect before the join to the "diagnostic2" table, but I don't know how to do that.

    The subquery that takes most of the time is "meteoData", when it has to return all the rows.

    This code is supposed to become a VIEW, so "WHERE Id_compteur = ..." will not be included in it but passed when querying the view. I tested it as a VIEW; same problem.

    Explain the plan:
    Plan hash value: 724835998
    -----------------------------------------------------------------------------------------------------------------------
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    -----------------------------------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 16364 | 1342K| | 586 (4)| 00:00:08 |
    | 1 | TEMP TABLE TRANSFORMATION | | | | | | |
    | 2 | LOAD AS SELECT | DIAGNOSTIC2 | | | | | |
    | 3 | WINDOW SORT | | 563 | 15764 | | 12 (25)| 00:00:01 |
    | 4 | VIEW | V_RELEVESCORRIGES | 563 | 15764 | | 11 (19)| 00:00:01 |
    | 5 | SORT GROUP BY | | 563 | 56300 | | 11 (19)| 00:00:01 |
    | 6 | VIEW | | 563 | 56300 | | 10 (10)| 00:00:01 |
    |* 7 | HASH JOIN RIGHT OUTER | | 563 | 25335 | | 10 (10)| 00:00:01 |
    | 8 | TABLE ACCESS FULL | COMPTEURS | 22 | 132 | | 3 (0)| 00:00:01 |
    | 9 | VIEW | | 563 | 21957 | | 7 (15)| 00:00:01 |
    |* 10 | HASH JOIN OUTER | | 563 | 26461 | | 7 (15)| 00:00:01 |
    | 11 | TABLE ACCESS FULL | RELEVES | 563 | 12949 | | 3 (0)| 00:00:01 |
    | 12 | VIEW | V_CORRECTIONDATA | 563 | 13512 | | 3 (0)| 00:00:01 |
    |* 13 | VIEW | | 563 | 28150 | | 3 (0)| 00:00:01 |
    | 14 | WINDOW SORT | | 563 | 67560 | | 3 (0)| 00:00:01 |
    | 15 | VIEW | | 563 | 67560 | | 3 (0)| 00:00:01 |
    | 16 | NESTED LOOPS OUTER| | 563 | 14638 | | 3 (0)| 00:00:01 |
    | 17 | TABLE ACCESS FULL| RELEVES | 563 | 12949 | | 3 (0)| 00:00:01 |
    |* 18 | INDEX UNIQUE SCAN| COMPTEURS_PK | 1 | 3 | | 0 (0)| 00:00:01 |
    |* 19 | HASH JOIN RIGHT OUTER | | 16364 | 1342K| | 573 (4)| 00:00:07 |
    | 20 | INDEX FULL SCAN | DIAGNOSTIC2_PK | 4 | 24 | | 1 (0)| 00:00:01 |
    | 21 | VIEW | | 16364 | 1246K| | 572 (4)| 00:00:07 |
    | 22 | SORT ORDER BY | | 16364 | 1661K| 3864K| 572 (4)| 00:00:07 |
    |* 23 | HASH JOIN | | 16364 | 1661K| | 179 (9)| 00:00:03 |
    | 24 | VIEW | | 1157 | 55536 | | 3 (0)| 00:00:01 |
    | 25 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6657_90A96D1D | 1157 | 55536 | | 3 (0)| 00:00:01 |
    |* 26 | VIEW | | 7963 | 435K| | 175 (8)| 00:00:03 |
    | 27 | SORT GROUP BY | | 7963 | 311K| 1768K| 175 (8)| 00:00:03 |
    | 28 | MERGE JOIN | | 26409 | 1031K| | 23 (48)| 00:00:01 |
    | 29 | SORT JOIN | | 1157 | 28925 | | 4 (25)| 00:00:01 |
    | 30 | VIEW | | 1157 | 28925 | | 3 (0)| 00:00:01 |
    | 31 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6657_90A96D1D | 1157 | 55536 | | 3 (0)| 00:00:01 |
    |* 32 | FILTER | | | | | | |
    |* 33 | SORT JOIN | | 9130 | 133K| | 11 (19)| 00:00:01 |
    | 34 | TABLE ACCESS FULL | METEO | 9130 | 133K| | 9 (0)| 00:00:01 |
    -----------------------------------------------------------------------------------------------------------------------
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    7 - access("RELEVES"."ID_COMPTEUR"="COMPTEURS"."ID"(+))
    10 - access("RELEVES"."ID_COMPTEUR"="V_CORRECTIONDATA"."ID_COMPTEUR"(+))
    filter("RELEVES"."DATE_RELEVE">="V_CORRECTIONDATA"."DATE_CHANGEMENT"(+))
    13 - filter("CHG_COMPTEUR"=1 AND "ID_COMPTEUR"="ID_COMPTEUR_CORR")
    18 - access("RELEVES"."ID_COMPTEUR"="COMPTEURS"."ID"(+))
    19 - access("CONSODATA"."ID_VARIABLE"="DIAGNOSTIC2"."ID_VARIABLE"(+) AND
    "CONSODATA"."ID_RELEVE"="DIAGNOSTIC2"."ID_RELEVE"(+))
    23 - access("RAWDATA"."ID_RELEVE"="METEODATA"."ID_RELEVE")
    26 - filter("METEODATA"."ID_COMPTEUR"=4)
    32 - filter("DATE_METEO">"X1")
    33 - access(INTERNAL_FUNCTION("DATE_METEO")<=INTERNAL_FUNCTION("X2"))
    filter(INTERNAL_FUNCTION("DATE_METEO")<=INTERNAL_FUNCTION("X2"))
    Oracle database version: 10.2.0.4.0; I'm accessing it through APEX version 4.1.1.00.23.

    I hope my question is not too vague...

    Ah, sorry, I missed that bit in your original post. I had a similar problem where I was adamant that the predicate should be pushed into the view (despite the use of analytic functions; the conditions they used would have allowed the predicate to be pushed): http://www.orchestrapit.co.uk/?p=55

    In the end, I solved my problem by using inline views rather than subquery factoring. Maybe you can try converting your WITH clauses to inline views and see if that helps?
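The underlying idea in both the question and this answer is predicate pushing: apply the selective filter before the expensive join rather than after. A toy sketch with invented row counts shows the same rows coming back for a fraction of the work:

```python
def filter_late(rows, join_rows, pred):
    """Join everything, then filter: touches every row pair."""
    work, out = 0, []
    for r in rows:
        for j in join_rows:
            work += 1
            if r["id"] == j["id"] and pred(r):
                out.append((r, j))
    return out, work

def filter_early(rows, join_rows, pred):
    """Filter first, then join: the join only sees surviving rows."""
    keep = [r for r in rows if pred(r)]
    work, out = len(rows), []   # cost of the scan applying the predicate
    for r in keep:
        for j in join_rows:
            work += 1
            if r["id"] == j["id"]:
                out.append((r, j))
    return out, work

rows = [{"id": i, "id_compteur": i % 10} for i in range(1000)]
diag = [{"id": i} for i in range(1000)]
pred = lambda r: r["id_compteur"] == 4

late, w1 = filter_late(rows, diag, pred)
early, w2 = filter_early(rows, diag, pred)
print(len(late) == len(early), w1, w2)  # True 1000000 101000
```

Identical results, roughly 10x less work once the filter runs first, which is what pushing "WHERE Id_compteur = 4" inside the view achieves.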
