Query performance

CURSOR c_exercise_list IS
       SELECT
              DECODE(v_mfd_mask_id ,'Y',' ',o.opt_id) opt_id,
              DECODE(v_mfd_mask_id ,'Y',' ',o.soc_sec) soc_sec,
              P.plan_id plan_id, E.exer_id exer_id, E.exer_num,
              DECODE(G.sar_flag, 0, DECODE(G.plan_type, 0, '1', 1, '2', 2, '3', 3, ' ', 4,'5', 5, '6', 6, '7', 7, '8', 8, '9', '0'), ' ') option_type,
              TO_CHAR(G.grant_dt, 'YYYYMMDD') grant_dt, TO_CHAR(E.exer_dt, 'YYYYMMDD') exer_dt,
              E.opts_exer opts_exer,
              E.mkt_prc   mkt_prc,
              E.swap_prc  swap_prc,
              E.shrs_swap shrs_swap, decode(e.exer_type,2,decode(xe.cash_partial,'Y','A','2'),TO_CHAR(E.exer_type)) exer_type,
              E.sar_shrs  sar_shrs,
              NVL(ROUND(((xe.sar_shrs_withld_optcost - (e.opts_exer * g.opt_prc) / e.mkt_prc) * e.mkt_prc),2),0)+e.sar_cash sar_cash,
              NVL(f.fixed_fee1,0) fixed_fee1,
              NVL(f.fixed_fee2,0) fixed_fee2,
              NVL(f.fixed_fee3,0) fixed_fee3,
              NVL(f.commission,0) commission,
              NVL(f.sec_fee,0)    sec_fee,
              NVL(f.fees_paid,0)  fees_paid,
              NVL(ct.amount,0)     cash_tend,
              E.shrs_tend  shrs_tend, G.grant_id grant_id, NVL(G.grant_cd, ' ') grant_cd,
              NVL(xg.child_symbol,' ') child_symbol,
              NVL(xg.opt_gain_deferred_flag,'N') defer_flag,
              o.opt_num opt_num,
              --XO.new_ssn,
              DECODE(v_mfd_mask_id ,'Y',' ',xo.new_ssn) new_ssn,
                      xo.use_new_ssn
              ,xo.tax_verification_eligible tax_verification_eligible
              ,(SELECT TO_CHAR(MIN(settle_dt),'YYYYMMDD') FROM tb_ml_exer_upload WHERE exer_num = E.exer_num AND user_id=E.user_id AND NVL(settle_dt,TO_DATE('19000101','YYYYMMDD'))>=E.exer_dt) AS settle_dt
              ,xe.rsu_type  AS rsu_type
              ,xe.trfbl_det_name AS trfbl_det_name
              ,o.user_txt1,o.user_txt2,xo.user_txt3,xo.user_txt4,xo.user_txt5,xo.user_txt6,xo.user_txt7
              ,xo.user_txt8,xo.user_txt9,xo.user_txt10,xo.user_txt11,
              xo.user_txt12,
              xo.user_txt13,
              xo.user_txt14,
              xo.user_txt15,
              xo.user_txt16,
              xo.user_txt17,
              xo.user_txt18,
              xo.user_txt19,
              xo.user_txt20,
              xo.user_txt21,
              xo.user_txt22,
              xo.user_txt23,
              xo.user_dt2,
              xo.adj_dt_hire_vt_svc,
              xo.adj_dt_hire_vt_svc_or,
              xo.adj_dt_hire_vt_svc_or_dt,
              xo.severance_plan_code,
              xo.severance_begin_dt,
              xo.severance_end_dt,
              xo.retirement_bridging_dt
              ,NVL(xg.pu_var_price ,0) v_pu_var_price
              ,NVL(xe.ficamed_override,'N') v_ficmd_ovrride
              ,NVL(xe.vest_shrs,0) v_vest_shrs
              ,NVL(xe.client_exer_id,' ') v_client_exer_id
              ,(CASE WHEN xg.re_tax_flag = 'Y' THEN pk_xop_reg_outbound.Fn_GetRETaxesWithheld(g.grant_num, E.exer_num, g.plan_type)
                     ELSE 'N'
                 END) re_tax_indicator -- 1.5V
              ,xe.je_bypass_flag
              ,xe.sar_shrs_withld_taxes   --Added for SAR july 2010 release
              ,xe.sar_shrs_withld_optcost --Added for SAR july 2010 release
        FROM
        (SELECT exer.* FROM exercise exer WHERE NOT EXISTS (SELECT s.exer_num FROM suspense s
            WHERE s.exer_num = exer.exer_num AND s.user_id = exer.user_id AND exer.mkt_prc = 0))E,
            grantz G,  xop_grantz xg, optionee o, xop_optionee xo, feeschgd f, cashtendered ct, planz P,xop_exercise xe
        WHERE
              E.grant_num  = G.grant_num
        AND   E.user_id    = G.user_id
        AND   E.opt_num    = o.opt_num
        AND   E.user_id    = o.user_id
        AND   (G.grant_num = xg.grant_num(+) AND G.user_id=xg.user_id(+))
        AND   (o.opt_num   = xo.opt_num(+)   AND o.user_id=xo.user_id(+))
        AND   E.plan_num = P.plan_num
        AND   E.user_id = P.user_id
        AND   E.exer_num = f.exer_num(+)
        AND   E.user_id = ct.user_id(+)
        AND   E.exer_num = ct.exer_num(+)
        AND   E.exer_num=xe.exer_num(+)
        AND   E.user_id=xe.user_id(+)
        AND   G.user_id = USER
        AND NOT EXISTS (
                    SELECT tv.exer_num
                      FROM tb_xop_tax_verification tv--,exercise ex
                     WHERE tv.exer_num = e.exer_num
                       AND tv.user_id = e.user_id
                       AND tv.user_id = v_cms_user
                       AND tv.status_flag IN (0,1,3,4, 5)) -- Not Processed
        ;
How can I tune this query's performance? Can anyone help me improve it? Thanks in advance.

Published by: BluShadow on February 21, 2013 08:14
corrected {noformat}{noformat} tags. Please read {message:id=9360002} and learn how to post code correctly.

956684 wrote:
The plan shows a CPU cost of 458.50 and a time of 1542.90. Can anything be done to improve performance? No full table scan is applied to any of the tables mentioned, and most of the columns are accessed by index unique scans. Can someone help me find a solution?

That request reads like: "My car doesn't work, the car's colour is grey. Can you solve this problem?"

Please read the FAQ I already posted, and follow the instructions.
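
For reference, what such a tuning request usually needs first is the execution plan. A minimal way to capture one (a generic sketch, not taken from this thread):

    EXPLAIN PLAN FOR
    SELECT e.exer_id FROM exercise e;  -- substitute the full cursor query above

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- for actual vs. estimated row counts, run the statement once with the
    -- /*+ gather_plan_statistics */ hint, then:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));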

Tags: Database

Similar Questions

  • Does BLASTP_ALIGN query performance decrease as the reference table size increases?

    Newbie here.

    I'm using Oracle 11.2.0.3.

    I am currently running, and looping through, the following cursor, which uses Oracle's BLASTP_ALIGN table function:

    FOR MyALIGN_TAB IN
    (
      SELECT a.query_string, H.AA_SEQUENCE target_string, t_seq_id, pct_identity,
             alignment_length, mismatches, positives, gap_openings, gap_list,
             q_seq_start, q_frame, q_seq_end, t_seq_start, t_seq_end, t_frame,
             score, expect
      FROM  (SELECT t_seq_id, pct_identity, alignment_length, mismatches, positives,
                    gap_openings, gap_list, q_seq_start, q_frame, q_seq_end,
                    t_seq_start, t_seq_end, t_frame, score, expect
             FROM   TABLE (BLASTP_ALIGN (
                      (SELECT p_INPUT_SEQUENCE query_string FROM DUAL),
                      CURSOR (SELECT GB_ACCESSION, AA_SEQUENCE FROM HUMAN_DB1.HUMAN_PROTEINS),
                      1, -1, 0, 0, 'PAM30', .1, 10, 1, 2, 0, 0))
            ),
            (SELECT p_INPUT_SEQUENCE query_string FROM DUAL) a,
            HUMAN_DB1.HUMAN_PROTEINS H
      WHERE UPPER (t_seq_id) = UPPER (H.gb_accession)
      AND   gap_openings = 0
    )
    LOOP


    This initial query works relatively well (about 2 seconds) against a target table of approximately 20,000 records (referenced above as the HUMAN_DB1.HUMAN_PROTEINS table). However, if I choose a target table that contains approximately 170,000 records, query performance degrades significantly, to about 45 seconds. The two tables have identical indexes.


    I was wondering if there are ways to improve the performance of BLASTP_ALIGN on large tables? There doesn't seem to be a lot of documentation on BLASTP_ALIGN. I could find this (http://docs.oracle.com/cd/B19306_01/datamine.102/b14340/blast.htm), but it wasn't that useful.


    Any ideas would be greatly appreciated.



    In case anyone is interested... it turned out that the AA_SEQUENCE column in the cursor CURSOR (SELECT GB_ACCESSION, AA_SEQUENCE FROM HUMAN_DB1.HUMAN_PROTEINS) was a CLOB column. In my second target table, the corresponding column was VARCHAR2. One hypothesis is that BLASTP_ALIGN performs a VARCHAR2 -> CLOB conversion internally. I changed the table to have a CLOB column and successfully ran BLASTP_ALIGN against the 170,000 records in about 8 seconds (not great, but better than 45).
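
    To reproduce that fix, a minimal sketch (the table name here is hypothetical):

        ALTER TABLE my_proteins ADD (aa_sequence_clob CLOB);

        UPDATE my_proteins
        SET    aa_sequence_clob = TO_CLOB(aa_sequence);

        COMMIT;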

    I will mark it as answered.

  • Does DIMINFO affect query performance?

    Hi all

    Can a well-defined USER_SDO_GEOM_METADATA.DIMINFO improve query performance?


    For all the tables in my system, the USER_SDO_GEOM_METADATA view looks like this:
    DIMINFO
    X; -2147483648; 2147483648; 5E-5
    Y; -2147483648; 2147483648; 5E-5
    Z; -2147483648; 2147483648; 5E-5




    Thanks to you all

    The simple answer is yes - it provides an alternative and faster I/O path.

    The real question is whether it suits the data model and its use.

    So your question is similar to asking whether indexing a varchar2 column is good or not. The answer is "it depends".
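
    As an illustration, registering realistic bounds and tolerances instead of the full integer range might look like this (a sketch only; the table name, column name, and SRID are hypothetical):

        INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
        VALUES ('PARCELS', 'GEOM',
                SDO_DIM_ARRAY(
                  SDO_DIM_ELEMENT('X', -180, 180, 0.00005),
                  SDO_DIM_ELEMENT('Y',  -90,  90, 0.00005)),
                4326);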

  • Poor query performance when joining CONTAINS to another table

    We just recently started evaluating Oracle Text for a search solution. We must be able to search a table that can have over 20 million rows. Each user may have visibility of only a very small portion of those rows. The goal is to have a single Oracle Text index that covers all the searchable columns in the table (a multi-column datastore) and provides a score for each search result, so that we can sort the results in descending score order. What we see is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we first try to reduce the rows that the CONTAINS query must search by using a join, query performance degrades significantly.

    For example, we can find all the records that a user has access from our base table of the following query:

    SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID;

    This query runs in under 100 ms and, in this example, returns close to 1,200 duns_loc primary-key values.

    Our search query looks like this:

    SELECT SCORE(1), d.*
    FROM duns d
    WHERE CONTAINS (TEXT_KEY, :search, 1) > 0
    ORDER BY SCORE(1) DESC;

    The :search value in this example will be 'highway'. The query can return 246K rows in about 2 seconds.

    2 seconds is good, but we should be able to get a much faster response if the query did not have to search the entire table, right? Since each user can only 'view' the records they are assigned to, if the search operation only had to scan a tiny, tiny percentage of the Text index, we should see faster (and more relevant) results. So we then write the following query:

    WITH subset
    AS
    (SELECT d.duns_loc
     FROM duns d
     JOIN primary_contact pc
     ON d.duns_loc = pc.duns_loc
     AND pc.emp_id = :employeeID
    )
    SELECT SCORE(1), d.*
    FROM duns d
    JOIN subset s
    ON d.duns_loc = s.duns_loc
    WHERE CONTAINS (TEXT_KEY, :search, 1) > 0
    ORDER BY SCORE(1) DESC;

    For reasons we have not been able to identify, this query actually takes longer to run than the sum of the times of its contributing parts: more than 6 seconds. Neither we nor our DBA can understand why this query runs worse than a wide-open search. The open search is not ideal, because the query ends up returning records the user does not have access to view.

    Has anyone ever encountered something like this? Any suggestions on what to look at or where to go? If anyone needs more information to help diagnose it, let me know and I'll be happy to post it here.

    Thank you!!

    Since you're using two tables, you will probably get better performance with an index that uses a section group and a user_datastore that uses a procedure. That way it can retrieve all the data with a simple query and hit a single index. Please see the demo below. Indexing may be slower, but searching should be faster. If you have your primary and foreign keys in place and statistics current before you create the index, it should speed up indexing.

    SCOTT@orcl_11gR2> -- tables:
    SCOTT@orcl_11gR2> CREATE TABLE duns
      2    (duns_loc       NUMBER,
      3       business_name  VARCHAR2 (15),
      4       business_name2 VARCHAR2 (15),
      5       address_line   VARCHAR2 (30),
      6       city            VARCHAR2 (15),
      7       state            VARCHAR2 (2),
      8       business_phone VARCHAR2 (15),
      9       contact_name   VARCHAR2 (15),
     10       contact_title  VARCHAR2 (15),
     11       text_key       VARCHAR2 (1),
     12       CONSTRAINT     duns_pk PRIMARY KEY (duns_loc))
     13  /
    
    Table created.
    
    SCOTT@orcl_11gR2> CREATE TABLE primary_contact
      2    (duns_loc       NUMBER,
      3       emp_id            NUMBER,
      4       CONSTRAINT     primary_contact_pk
      5                   PRIMARY KEY (emp_id, duns_loc),
      6       CONSTRAINT     primary_contact_fk FOREIGN KEY (duns_loc)
      7                   REFERENCES duns (duns_loc))
      8  /
    
    Table created.
    
    SCOTT@orcl_11gR2> -- data:
    SCOTT@orcl_11gR2> INSERT INTO duns (duns_loc, address_line) VALUES (1, 'highway')
      2  /
    
    1 row created.
    
    SCOTT@orcl_11gR2> INSERT INTO duns (duns_loc, address_line) VALUES (2, 'highway')
      2  /
    
    1 row created.
    
    SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
      2  /
    
    1 row created.
    
    SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (2, 2)
      2  /
    
    1 row created.
    
    SCOTT@orcl_11gR2> INSERT INTO duns (duns_loc, address_line)
      2  SELECT object_id, object_name
      3  FROM   all_objects
      4  WHERE  object_id > 2
      5  /
    
    76029 rows created.
    
    SCOTT@orcl_11gR2> INSERT INTO primary_contact
      2  SELECT object_id, namespace
      3  FROM   all_objects
      4  WHERE  object_id > 2
      5  /
    
    76029 rows created.
    
    SCOTT@orcl_11gR2> -- gather statistics:
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> -- procedure:
    SCOTT@orcl_11gR2> CREATE OR REPLACE PROCEDURE duns_proc
      2    (p_rowid IN ROWID,
      3       p_clob     IN OUT NOCOPY CLOB)
      4  AS
      5  BEGIN
      6    FOR d IN
      7        (SELECT duns_loc,
      8             '' ||
      9             business_name     || ' ' ||
     10             business_name2  || ' ' ||
     11             address_line  || ' ' ||
     12             city  || ' ' ||
     13             state     || ' ' ||
     14             business_phone  || ' ' ||
     15             contact_name  || ' ' ||
     16             contact_title ||
     17             ''
     18             AS duns_cols
     19         FROM      duns
     20         WHERE  ROWID = p_rowid)
     21    LOOP
     22        DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (d.duns_cols), d.duns_cols);
     23        FOR pc IN
     24          (SELECT '' || emp_id || '' AS pc_col
     25           FROM   primary_contact
     26           WHERE  duns_loc = d.duns_loc)
     27        LOOP
     28          DBMS_LOB.WRITEAPPEND (p_clob, LENGTH (pc.pc_col), pc.pc_col);
     29        END LOOP;
     30    END LOOP;
     31  END duns_proc;
     32  /
    
    Procedure created.
    
    SCOTT@orcl_11gR2> SHOW ERRORS
    No errors.
    SCOTT@orcl_11gR2> -- user datastore, section group with field section:
    SCOTT@orcl_11gR2> begin
      2    ctx_ddl.create_preference ('duns_store', 'USER_DATASTORE');
      3    ctx_ddl.set_attribute ('duns_store', 'PROCEDURE', 'duns_proc');
      4    ctx_ddl.set_attribute ('duns_store', 'OUTPUT_TYPE', 'CLOB');
      5    ctx_ddl.create_section_group ('duns_sg', 'BASIC_SECTION_GROUP');
      6    ctx_ddl.add_field_section ('duns_sg', 'emp_id', 'emp_id', true);
      7  end;
      8  /
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> -- text index with user datastore and section group:
    SCOTT@orcl_11gR2> CREATE INDEX duns_context_index
      2  ON duns (text_key)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  FILTER BY duns_loc
      5  PARAMETERS
      6    ('DATASTORE     duns_store
      7        SECTION GROUP     duns_sg
      8        SYNC          (ON COMMIT)')
      9  /
    
    Index created.
    
    SCOTT@orcl_11gR2> -- variables:
    SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
    SCOTT@orcl_11gR2> EXEC :employeeid := 1
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
    SCOTT@orcl_11gR2> EXEC :search := 'highway'
    
    PL/SQL procedure successfully completed.
    
    SCOTT@orcl_11gR2> -- query:
    SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11gR2> SELECT SCORE(1), d.*
      2  FROM   duns d
      3  WHERE  CONTAINS
      4             (text_key,
      5              :search || ' AND ' ||
      6              :employeeid || ' WITHIN emp_id',
      7              1) > 0
      8  /
    
      SCORE(1)   DUNS_LOC BUSINESS_NAME   BUSINESS_NAME2  ADDRESS_LINE                   CITY            ST BUSINESS_PHONE
    ---------- ---------- --------------- --------------- ------------------------------ --------------- -- ---------------
    CONTACT_NAME    CONTACT_TITLE   T
    --------------- --------------- -
             3          1                                 highway
    
    1 row selected.
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 2241294508
    
    --------------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |                    |    38 |  1102 |    12   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| DUNS               |    38 |  1102 |    12   (0)| 00:00:01 |
    |*  2 |   DOMAIN INDEX              | DUNS_CONTEXT_INDEX |       |       |     4   (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH||' AND '||:EMPLOYEEID||' WITHIN
                  emp_id',1)>0)
    
    SCOTT@orcl_11gR2>
    
  • SQL query performance

    I have a table that logs user visits to pages on our website. The information takes the following structure within our visit record table:

    VisitID | IDVisiteur | VisitPage | VisitDate
    Index   | UniqueID   | VisitPage | Date/time

    I need to get the IDVisiteur values that visited in a user-defined date range for a report that is being written, and then get a count of the distinct visit days on which each user visited our website. I have attached a working query that gets me the result set I want, but it is so _very_ slow. Query Analyzer shows that 84% of the cost is in table scans. I hope someone has a suggestion on how to optimize it. I am currently working on a MSSQL 8.0 server, so I have no access to the trunc() function that I would prefer to use on the dates, but that's a minor inconvenience.

    Thank you
    -Daniel

    Quote:
    Posted by: Dan Bracuk
    Do you have an index on visitdate?

    Does visitdate contain real times, or are the time parts all 0:00? If they are all 00:00, you don't need the convert function. Otherwise, you might have better luck selecting all the data from your database and using Q of Q for the counts.

    Dan was right on this one. Looking at the table design, the index was absent. Once I added an index, my query performance improved dramatically; enough that I don't have many worries anymore. Thanks for the suggestion.

    -Daniel
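
    For the record, a sketch of the fix being described (the table name and variables are hypothetical; T-SQL for SQL Server 2000):

        CREATE INDEX ix_visits_visitdate ON visits (VisitDate);

        -- keep the range predicate sargable so the new index can be used
        SELECT IDVisiteur,
               COUNT(DISTINCT CONVERT(varchar(8), VisitDate, 112)) AS visit_days
        FROM   visits
        WHERE  VisitDate >= @startDate
        AND    VisitDate <  @endDate
        GROUP  BY IDVisiteur;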

  • Query performance problem

    I have two schemas in two databases.

    When I check the SQL plans, the two schemas behave differently (one is doing a full table scan, the other an index range scan).

    Both schemas have almost the same kind of data, indexes, and load.

    What is causing the difference in SQL performance?

    In the second plan, the optimizer expects the range scan on IDX_TSK_ID at step 5 to return only 14 rows and decides it is a good idea to join to the second table, TB_TRANS_MSTR, with a nested loops join (looping over the 14 TASK_INSTANCE results and doing a lookup on each iteration using the PK_TRANS_ID index).

    In the first plan, the optimizer decides to read TB_TRANS_MSTR (containing 978 rows), build an in-memory hash table from the results, and then probe it with the TASK_INSTANCE row set.

    The next question is: which plan is more suitable and translates into better performance? The chances are high that the better plan is the one based on the more fitting estimate of the cardinalities. These estimates are based on simple arithmetic (more or less), and they depend on the table and column statistics. So the dba_tab_columns entries Swen W. mentioned would be useful. In addition, the text of the query would probably shed some light on the question.

  • Partitioning strategy for OBIEE query performance

    I am using partitioning for the first time and I am having trouble determining whether I have partitioned my fact table in a way that will let partition pruning work with the queries OBIEE generates.  I've set up a simple example using a query I wrote to illustrate my problem.  In this example, I have a star schema with a fact table, and I join in two dimensions.  My fact table is partitioned by LIST on JOB_ID and by RANGE on TIME_ID, and those are the keys that link to the two dimensions I use in this query.


    SELECT SUM(boxbase)
    FROM TEST_RESPONSE_COE_JOB_QTR A
    JOIN DIM_STUDY C ON A.job_id = C.job_id
    JOIN DIM_TIME B ON A.response_time_id = B.time_id
    WHERE C.job_name = 'FY14 CSAT'
    AND B.fiscal_quarter_name = 'Quarter 1';


    As far as I can tell, because the query is actually filtering on columns in the dimensions instead of the columns in the fact table, the pruning isn't actually happening.  I actually see slightly better performance from a non-partitioned table, even though I wrote this query specifically for the partitioning strategy that is now in place.


    If I run the next statement, it runs a lot faster, and the explain plan is very simple; it seems to me that it's pruning down to a subpartition as I hoped.  But this isn't how any OBIEE-generated query will look.


    SELECT SUM(boxbase)
    FROM TEST_RESPONSE_COE_JOB_QTR
    WHERE job_id = 101123480
    AND response_time_id < 20000000;


    Any suggestions?  I get some benefit from partition exchange using this configuration, but if I'm going to sacrifice report performance then maybe it isn't useful; at the very least, I would need to get rid of my subpartitions if they are not providing any benefit.


    Here are the explain plans I got for the two queries in my original post:

    Operation                      | Object name                                   | Rows | Bytes | Cost  | PStart        | PStop
    SELECT STATEMENT Mode=ALL_ROWS |                                               | 1    |       | 20960 |               |
    SORT AGGREGATE                 |                                               | 1    | 13    |       |               |
    VIEW                           | SYS.VW_ST_5BC3A99F                            | 101K | 1M    | 20960 |               |
    NESTED LOOPS                   |                                               | 101K | 3M    | 20950 |               |
    PARTITION LIST SUBQUERY        |                                               | 101K | 2M    | 1281  | KEY(SUBQUERY) | KEY(SUBQUERY)
    PARTITION RANGE SUBQUERY       |                                               | 101K | 2M    | 1281  | KEY(SUBQUERY) | KEY(SUBQUERY)
    BITMAP CONVERSION TO ROWIDS    |                                               | 101K | 2M    | 1281  |               |
    BITMAP AND                     |                                               |      |       |       |               |
    BITMAP MERGE                   |                                               |      |       |       |               |
    BITMAP KEY ITERATION           |                                               |      |       |       |               |
    BUFFER SORT                    |                                               |      |       |       |               |
    INDEX SKIP SCAN                | CISCO_SYSTEMS.DIM_STUDY_UK                    | 1    | 17    | 1     |               |
    BITMAP INDEX RANGE SCAN        | CISCO_SYSTEMS.FACT_RESPONSE_JOB_ID_BMID_12    |      |       |       | KEY           | KEY
    BITMAP MERGE                   |                                               |      |       |       |               |
    BITMAP KEY ITERATION           |                                               |      |       |       |               |
    BUFFER SORT                    |                                               |      |       |       |               |
    VIEW                           | CISCO_SYSTEMS.index$_join$_052                | 546  | 8K    | 9     |               |
    HASH JOIN                      |                                               |      |       |       |               |
    INDEX RANGE SCAN               | CISCO_SYSTEMS.DIM_TIME_QUARTER_IDX            | 546  | 8K    | 2     |               |
    INDEX FULL SCAN                | CISCO_SYSTEMS.TIME_ID_PK                      | 546  | 8K    | 8     |               |
    BITMAP INDEX RANGE SCAN        | CISCO_SYSTEMS.FACT_RESPONSE_TIME_ID_BMIDX_11  |      |       |       | KEY           | KEY
    TABLE ACCESS BY USER ROWID     | CISCO_SYSTEMS.TEST_RESPONSE_COE_JOB_QTR       | 1    | 15    | 19679 | ROWID         | ROW L

    Operation                      | Object name                              | Rows | Bytes | Cost | PStart | PStop
    SELECT STATEMENT Mode=ALL_ROWS |                                          | 1    |       | 1641 |        |
    SORT AGGREGATE                 |                                          | 1    | 13    |      |        |
    PARTITION LIST SINGLE          |                                          | 198K | 2M    | 1641 | KEY    | KEY
    PARTITION RANGE SINGLE         |                                          | 198K | 2M    | 1641 | 1      | 1
    TABLE ACCESS FULL              | CISCO_SYSTEMS.TEST_RESPONSE_COE_JOB_QTR  | 198K | 2M    | 1641 | 36     | 36


    Is it unreasonable to think that relying on our indexes on a non-partitioned table (or one partitioned in a way that only helps ETL) can actually work better than partitioning in a way that gets us some dynamic, but never static, pruning?

    Yes - standard tables with indexes can often outperform partitioned tables. It all depends on the types of queries and query predicates that are typically used and the number of rows generally returned.

    Partition pruning eliminates partitions ENTIRELY - regardless of the number of rows in the partition or table. An index, on the other hand, is skipped if the query predicate needs a significant number of rows, since Oracle can determine that the cost is lower to simply use multiblock reads and do a full scan.

    A table with 1 million rows and a query predicate that wants 100K of them probably will not use an index at all. But the same table with two partitions could easily have one of the partitions pruned, making the 'effective number of rows' only 500K or less.

    If you are partitioning for performance, you should test your critical queries to make sure partitioning/pruning is effective for them.

    SELECT SUM(boxbase)
    FROM TEST_RESPONSE_COE_JOB_QTR A
    JOIN DIM_STUDY C ON A.job_id = C.job_id
    JOIN DIM_TIME B ON A.response_time_id = B.time_id
    WHERE C.job_name = 'FY14 CSAT'
    AND B.fiscal_quarter_name = 'Quarter 1';

    So, what is a typical value for A.response_time_id? What does a B.time_id represent?

    Because one way of providing explicit partition keys may be to use a range of response_time_id from the FACT table rather than a value of fiscal_quarter_name from the DIMENSION table.

    As if 'Quarter 1' corresponded to a range of dates from '01/01/yyyy' to '03/31/yyyy'.

    Also, you said you are partitioning on JOB_ID and TIME_ID.

    But if your queries relate mainly to DATES / TIMES, you might be better off using TIME_ID for the partitions and JOB_ID, if necessary, for the subpartitioning.

    Date range partitioning is one of the most common schemes around, and it serves both performance and ease of maintenance (delete/archive old data).
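
    A sketch of that arrangement (an outline only; column types, partition names, and boundary values are invented):

        CREATE TABLE fact_response (
          job_id           NUMBER NOT NULL,
          response_time_id NUMBER NOT NULL,
          boxbase          NUMBER
        )
        PARTITION BY RANGE (response_time_id)
        SUBPARTITION BY LIST (job_id)
        SUBPARTITION TEMPLATE (
          SUBPARTITION sp_csat  VALUES (101123480),
          SUBPARTITION sp_other VALUES (DEFAULT)
        )
        (
          PARTITION p_q1  VALUES LESS THAN (20000000),
          PARTITION p_max VALUES LESS THAN (MAXVALUE)
        );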

  • Questions after TimesTen first trial: memory footprint and query performance

    Hello!

    I'm testing TimesTen In-Memory Database Cache to see if it could help with some ad hoc reporting queries that take too long to run in our Oracle database.

    Here is the configuration:

    1.) TimesTen server: 2 quad-core CPUs with 32 GB of RAM, running Windows 2003 x64.

    2.) Set up two read-only cache groups: a small one for a quick test, and the real one, which maps to a database table as follows:


    The database table looks like:
      CREATE TABLE "TB_BD" 
       (   
       "VALUE" NUMBER NOT NULL ENABLE, 
       "TIME_UTC" TIMESTAMP (6) NOT NULL ENABLE, 
       "ASSIGNED_TO_ID" NUMBER NOT NULL ENABLE, 
       "EVENT_ID" NUMBER, 
       "ID" NUMBER NOT NULL ENABLE, 
       "ID_LABEL" NUMBER NOT NULL ENABLE, 
       "ID_ALARM" NUMBER, 
        CONSTRAINT "PK_TB_BD" PRIMARY KEY ("ID")  
       );
    The Oracle database table has 1,367,336,329 rows, and the table segments total approximately 61 GB, so an average row takes about 46 bytes.

    Since I have 32 GB in the TimesTen machine, I created the cache group with a WHERE predicate on the ID column so that only the 98,191,284 most recent rows get into the cache group. In the Oracle database, that is around 4.2 GB of data.

    After loading the cache group, dssize returns:
    Command> dssize
    
      PERM_ALLOCATED_SIZE:      26624000
      PERM_IN_USE_SIZE:         19772852
      PERM_IN_USE_HIGH_WATER:   26622892
      TEMP_ALLOCATED_SIZE:      32768
      TEMP_IN_USE_SIZE:         10570
      TEMP_IN_USE_HIGH_WATER:   14192
    
      (Note: the high PERM_IN_USE_HIGH_WATER comes from a first test where I tried to cache too many rows)
    I then ran on the TimesTen machine:
    ttIsql> select avg(value) from tb_bd;
    It is still running after 10 hours, so I can already tell that the query execution time does not really meet my expectations. :-)

    In the Windows Task Manager, I see that ttIsql constantly uses 13% CPU (= 100% / 8 cores), so it is using only one core; but even if it were using all the cores and the execution time were one eighth, it still wouldn't meet my expectation. :-)

    I also see in the Windows Task Manager that the 'MemUsage' of my ttIsql process slowly gets higher and higher, currently 14 GB. I believe this is the shared memory segment being mapped in, which the main TimesTen process already has mapped at approximately 24 GB. The query is probably 53% through, and the total query time may be around 20 hours.


    My questions:

    1.) From what I tested, 1 GB of data in the Oracle table needs about 4-5 gigabytes of memory in the TimesTen database. I read a forum post that explained it with "data is optimized for performance, not space, in TT", but I don't quite buy it. A factor of 4-5 means that the CPU must move 4-5 times the amount of data. The data is not compressed in the Oracle database; it is in its natural binary form. I would like to understand why the data takes so much more space in TT - for example, when you have a NUMBER in Oracle, what does TT do with it to make it 4-5 times bigger, and why does it do that?

    2.) Regarding query performance: how long should it even take to scan about 20 GB of data in memory, counting the rows and summing a NUMBER column, with a division to get avg(<column>)? Is there something flawed in my setup?


    Thanks for the ideas!

    Kind regards
    Marcus

    Published by: user11973438 on 06.09.2012 23:27

    I agree that using 4-5 times more memory than Oracle is far from optimal. Your schema is unfortunately a little pathological; normally we see more like 2-3 times (which is still really too much). There are many internal differences between Oracle and TimesTen in the way data is stored internally. Some are historical, and some are due to optimizing for performance rather than storage efficiency.

    For example:

    1. Oracle rows are always variable length in storage, while TimesTen rows are always fixed length in storage.

    2. In Oracle, a column defined as NUMBER occupies only the space needed by the stored value. In TimesTen, a NUMBER column always occupies the space needed to store the maximum possible precision, and therefore takes up 22 bytes. You can reduce this by explicitly restricting it using NUMBER(n) or NUMBER(n,p).

    3. TimesTen does not support any kind of parallel query within a single data store. Any query will run using at most one CPU core; Oracle DB supports parallel query, and that can make a big difference for certain types of query.

    4. NUMBER is implemented in software and is relatively inefficient. Calculating the average of almost 100M rows will take time... You could try changing this to a native binary type (TT_INTEGER, TT_BIGINT, or BINARY_DOUBLE, depending on your data); this will no doubt give a good improvement (but see point 5 below).

    5. With a database of this size, it is possible that Windows is doing a lot of paging while the query runs. I have also observed that on Windows there seems to be a penalty when a process touches/maps a page for the first time. You should monitor paging activity via Task Manager while the query runs. Any significant paging will really hurt performance. Also, try executing the query a second time without disconnecting ttIsql; this may also show a benefit. On Unix/Linux platforms we provide an option (MemoryLock=4) to lock the entire database in physical memory and prevent any paging, but it is not available on Windows.

    Chris
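
    To illustrate points 2 and 4 above, the kind of column changes being suggested (a sketch showing only three of the original columns):

        CREATE TABLE tb_bd (
          value    BINARY_DOUBLE NOT NULL,  -- was NUMBER (22 bytes); native 8-byte float
          time_utc TIMESTAMP(6)  NOT NULL,
          id       NUMBER(10)    NOT NULL PRIMARY KEY  -- constrained precision instead of bare NUMBER
        );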

  • Question about cube build / query performance (11.2.0.3)

    Hi, I have a stupid question about cube build performance. When choosing the precalc %, is the build time linear (or nearly linear) in it? For example, will selecting 10% be 3 times faster than selecting 30%? Also, is it fair to assume that if only 10% of the values are precalculated, an average end-user query has to hit 3 times more data and will therefore be about 3 times slower?

    Sorry, this is on a virtual machine on my laptop, so testing different build configs takes forever (I have yet to complete a full cube load). I guess I shouldn't be trying a 15-dimension cube on a VM on a laptop, but I'm trying to sell a DBA on the fact that this could improve the performance of our mini-DW.

    Thank you
    Scott

    Cost-based aggregation (aka "precompute percent") was introduced in 11.1 as a simpler alternative to level-based aggregation. Product management's dream was a linear parameter, but the complexity quickly became apparent. What would it be linear against? Build time? Query time? Total disk size? The result balances all of these factors, but is linear against none of them. Fortunately, the behavior of the precompute percentage has been fairly consistent across cubes and schemas in our experience, so I can give you a rough characterization. But keep in mind that this is a guide only - you have to experiment on your own schema and system to see what works for you. In particular, you must balance your own requirements on build time, query time, and disk size.

    * 0% * - this means no precomputation at all, so all data access will be dynamic. This is the recommended setting for the topmost partition of a cube. If, for any reason, you want to use it for the leaf partitions as well, then I advise you to switch to an uncompressed cube.

    * 1% * - this precomputes the smallest part of the cube the algorithm allows, and would certainly take more than 1% of the time taken by a 100% build. For leaf partitions, it is usually best to increase the amount, because you get much better query response time for not much more cost in disk size and build time. It may be a good level for the topmost partition of a cube, but it should be used with caution, because topmost partitions are often too big to precompute.

    * 2%-19% * - these levels do not seem to offer much benefit, since the build time and total disk size are almost identical to a 20% build, but queries are slower.

    * 20%-50% * - this range is probably the best compromise between build time and query time. The AWM default is 35%, which is a good starting point. Lower it toward 20% if you want a faster build, and raise it toward 50% if you prefer faster queries. The setting is closer to linear within this interval than outside it.

    * 51%-99% * - you should probably avoid these levels, although I've seen 60% used in practice. The reason is that while the cube size and the build time increase rapidly, queries do not get proportionally faster. Indeed, you may find that queries get slower, because the code spends more time paging in disk pages.

    * 100% * - this precomputes all (non-NULL) cells in the cube. It may be a surprise after my advice about 51%-99%, but 100% is a reasonable level to choose. This is because the code is much simpler when it knows that everything is precomputed and stored in disk pages.

  • Poor cardinality estimate, bad query performance

    I have a query whose performance is unsatisfactory.

    It produces (on 11.2.0.2) the following trace. The slowest part of the query is the UNION-ALL of two index fast full scans on the PRODUCTS_DATES indexes. These indexes are on the two tables that make up a view, V_SALES_ALL.

    The cardinality estimates for the fast full scans seem to be off - 100 rows against 78,000,000 and 1,703,000 actual, respectively. The estimate of 100 looks strangely like a default, because the two tables are, as I said, inside a view. In fact, if I break the view up into its constituent tables, the queries run in a tenth of the time.


    How can I fix this misestimate, most likely created by the view?

    Am I reading the trace right?

    Regs

    Johnnie


    Rows Row Source operation
    ------- ---------------------------------------------------
    321 SORT GROUP BY (cr=6759441 pr=176970 pw=176955 time=480 us cost=63 size=896 card=14)
    5322875 NESTED LOOPS (cr=6759441 pr=176970 pw=176955 time=109327744 us)
    5322875 NESTED LOOPS (cr=241360 pr=176970 pw=176955 time=55796544 us cost=62 size=896 card=14)
    5322875 HASH JOIN (cr=241049 pr=176970 pw=176955 time=7774711 us cost=48 size=280 card=14)
    80445738 VIEW V_SALES_ALL (cr=241001 pr=0 pw=0 time=569162368 us cost=4 size=1800 card=200)
    80445738 UNION-ALL (cr=241001 pr=0 pw=0 time=404890176 us)
    78742696 INDEX FAST FULL SCAN PRODUCTS_DATES_IDX (cr=235954 pr=0 pw=0 time=85524904 us cost=2 size=900 card=100) (object id 221975)
    1703042 INDEX FAST FULL SCAN PRODUCTS_DATES_IDX_HARD (cr=5047 pr=0 pw=0 time=1850486 us cost=2 size=900 card=100) (object id 241720)
    2238 VIEW index$_join$_003 (cr=48 pr=0 pw=0 time=14474 us cost=44 size=24618 card=2238)
    2238 HASH JOIN (cr=48 pr=0 pw=0 time=9737 us)
    2238 INDEX RANGE SCAN PRODUCTS_GF_INDEX2 (cr=8 pr=0 pw=0 time=2609 us cost=6 size=24618 card=2238) (object id 255255)
    16206 INDEX FAST FULL SCAN PRODUCTS_GF_PK (cr=40 pr=0 pw=0 time=20415 us cost=45 size=24618 card=2238) (object id 255253)
    5322875 INDEX UNIQUE SCAN DATES_PK (cr=311 pr=0 pw=0 time=0 us cost=0 size=0 card=1) (object id 151306)
    5322875 TABLE ACCESS BY INDEX ROWID DATES (cr=6518081 pr=0 pw=0 time=0 us cost=1 size=44 card=1)


    Rows    Execution Plan
    ------- ---------------------------------------------------
    0 SELECT STATEMENT   MODE: FIRST_ROWS
    321  SORT (GROUP BY)
    5322875   HASH JOIN
    5322875    TABLE ACCESS   MODE: ANALYZED (FULL) OF 'DATES' (TABLE)
    5322875    HASH JOIN
    80445738    VIEW OF 'index$_join$_003' (VIEW)
    80445738     HASH JOIN
    78742696      INDEX   MODE: ANALYZED (RANGE SCAN) OF
                  'PRODUCTS_GF_INDEX2' (INDEX)
    1703042      INDEX   MODE: ANALYZED (FULL SCAN) OF
                  'PRODUCTS_GF_PK' (UNIQUE INDEX)
    2238     VIEW OF 'V_SALES_ALL' (VIEW)
    2238      UNION-ALL
    2238       INDEX   MODE: ANALYZED (FULL SCAN) OF
                 'PRODUCTS_DATES_IDX' (INDEX)
    16206       INDEX   MODE: ANALYZED (FULL SCAN) OF
                 'PRODUCTS_DATES_IDX_HARD' (INDEX)

    Johnnie d wrote:
    I have a query whose performance is unsatisfactory.

    It produces (on 11.2.0.2) the following trace. The slowest part of the query is the UNION-ALL of two index fast full scans on the PRODUCTS_DATES indexes. These indexes are on the two tables that make up a view, V_SALES_ALL.

    The cardinality estimates for the fast full scans seem to be off - 100 rows against 78,000,000 and 1,703,000 actual, respectively. The estimate of 100 looks strangely like a default, because the two tables are, as I said, inside a view. In fact, if I break the view up into its constituent tables, the queries run in a tenth of the time.

    Rows    Execution Plan
    ------- ---------------------------------------------------
    0 SELECT STATEMENT MODE: FIRST_ROWS

    Are you running with optimizer_mode = first_rows_100?

    If so, then you may have found a bug in the first_rows_N code. The 100 seems to have been pushed inside the view in a way that made Oracle cost the index fast full scans while "intending" to find only 100 rows in each table. It then takes that limit of 100 as the actual limit when deciding which row source to use as the hash table and which will serve as the probe.

    If you want the entire result set, or a large part of it, you could add the all_rows hint.
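
    For example (a sketch; COUNT(*) stands in for the real select list, which is not shown in the thread):

        ALTER SESSION SET optimizer_mode = ALL_ROWS;

        -- or at statement level:
        SELECT /*+ all_rows */ COUNT(*) FROM v_sales_all;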

    Regards
    Jonathan Lewis
    http://jonathanlewis.WordPress.com
    Author: core Oracle

  • Are there any tools available for query performance tuning?

    Hello

    How do I tune a database query?
    Suppose there is a basic query: select * from emp where empid = '1101';

    Are there any tools available that suggest performance tuning for a query?

    How can I tune this query, and what is the best approach?

    Please share your ideas and thanks.

    Oracle provides tools such as SQL Trace and TKPROF for getting information about a SQL query.

    But basically, you should know how Oracle works. You must be aware of the different access paths and know why they are being used.

    You need to know when to use an index and when not to use one.

    There is much more. The best thing for you would be to start reading the documentation.
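
    A minimal sketch of that workflow (the trace file name is hypothetical; the query comes from the question):

        ALTER SESSION SET sql_trace = TRUE;

        SELECT * FROM emp WHERE empid = '1101';

        ALTER SESSION SET sql_trace = FALSE;

        -- then format the raw trace file on the server, e.g.:
        --   tkprof orcl_ora_12345.trc report.txt sys=no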

  • SQL Developer vs TOAD - query performance issue

    Someone pointed out that the same queries run slower in SQL Developer than in TOAD. I'm curious about this, since I understand Java is 'slow', but I can't find another thread on this point. I don't use TOAD, so I can't compare...

    Could it be related to the amount of data returned by the query? What could be the other reasons for SQL Developer running more slowly on a similar query?

    Thank you
    Attila

    It also occurs to me that TOAD always uses the equivalent of the 'thick' JDBC driver. SQL Developer can use the 'thin' driver or the 'thick' driver, but connections are usually configured with the 'thin' driver, since you need an Oracle client to use the 'thick' one.

    The difference is that 'thin' drivers are written entirely in Java, while 'thick' drivers are written with only a small Java layer that calls a native executable (that's why you need an Oracle client) to do most of the work. In theory, a thick driver is faster because the object code does not have to be interpreted by the Java virtual machine. However, I've heard that the performance difference is not that big. The only way to know for sure is to set up a connection in SQL Developer that uses the thick driver and see if it is faster (I would use a stopwatch).

    Correct me if I'm wrong, but I think that if you use 'TNS' as your connection type, SQL Developer uses the thick driver, whereas the default 'Basic' connection type uses the thin driver. Otherwise, you need to use the 'Advanced' connection type and type in a custom JDBC URL for the thick driver.
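
    For reference, the two URL styles look roughly like this (host, port, service name, and TNS alias are hypothetical):

        thin  (pure Java):                   jdbc:oracle:thin:@//dbhost:1521/ORCL
        thick (OCI, needs an Oracle client): jdbc:oracle:oci:@MY_TNS_ALIAS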

  • Database query performance issues

    I am using database polling to detect changes in an OLTP application database.

    My doubts are:

    (1) will it affect the performance of the OLTP application
    (2) if so, what would be the impact on the OLTP application
    (3) how can we improve performance
    (4) any link to get more ideas about it

    (1) will it affect the performance of the OLTP application

    No, it won't affect it, because an OLTP application has:
    * Transactions that involve small amounts of data
    * Indexed access to data
    * Many users
    * Frequent queries and updates
    * Responsiveness
    (2) if so, what would be the impact on the OLTP application
    N/A
    (3) how can we improve performance

    Don't poll the table at 30-second intervals; use intervals of at least 45 seconds.

    (4) any link to get more ideas about it

    N/A
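
    For what it's worth, a minimal sketch of such a polling query (table, columns, and bind variable are hypothetical; it assumes an index on last_updated):

        SELECT id, status, last_updated
        FROM   orders
        WHERE  last_updated > :last_poll_time
        ORDER  BY last_updated;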

  • SLOW query PERFORMANCE

    We are seeing very slow performance for the query below. The explain plan is attached too. Can anyone help us with suggestions? We expect that the nested loops could be avoided by taking certain steps, perhaps with some changes to the structure of the query.

    SELECT assoc.name_first || ' ' || assoc.name_last AS client_manager
    FROM t1 assoc,
         t2 ce,
         t3 aa,
         t4 act,
         t5 cc
    WHERE ce.ent_id = act.primary_ent_id (+)
    AND ce.ent_id = :p_ent_id
    AND assoc.id = CASE WHEN aa.assoc_id IS NULL
                        THEN (SELECT DISTINCT ca.assoc_id
                              FROM t6 ca
                              WHERE ca.comp_id = :p_ent_id
                              AND ca.cd_code IN
                                  ('CMG', 'GCR', 'BCM', 'CCM', 'BAE'))
                        ELSE aa.assoc_id
                   END
    AND NVL (act.activity_id, 0) = NVL (aa.activity_id, 0)
    AND assoc.bk_code = cc.cpy_no
    AND assoc.center = cc.cct_no
    AND aa.role_code IN ('CMG', 'GCR', 'BCM', 'MAC', 'BAE')


    Operation                        Object owner  Object name            Cardinality  Bytes  Cost
    SELECT STATEMENT, GOAL = CHOOSE                                       29           2      232
    NESTED LOOPS                                                          27           2      232
    NESTED LOOPS                                                          26           2      214
    NESTED LOOPS                                                          25           2      82
    NESTED LOOPS OUTER                                                    3            2      50
    INDEX UNIQUE SCAN                DEC           COREENT_PK             1            1      6
    VIEW                             CBD           T4                     2            2      38
    NESTED LOOPS                                                          2            2      48
    TABLE ACCESS BY INDEX ROWID      CBD           T4                     1            8      128
    INDEX RANGE SCAN                 PEH           PRIMATY_CLIENT_IDX     1            8
    TABLE ACCESS BY INDEX ROWID      PEH           ACTIVITY_TYPE          1            1      8
    INDEX UNIQUE SCAN                PEH           ACTIVITY_TYPE_PK       1            1
    INDEX FULL SCAN                  CDB           ACTIVITY_ASSOCIATE_PK  11           1      16
    TABLE ACCESS BY INDEX ROWID      SEC           T1                     1            1      66
    INDEX UNIQUE SCAN                DEC           ASSOC_PK               1            1
    SORT UNIQUE NOSORT                                                    16           1      2
    INDEX RANGE SCAN                 DEC           COMPASSOC_PK           1            1      16
    INDEX UNIQUE SCAN                DEC           CST_CNTR_PK            1            1      9



    I appreciate your time and efforts.

    Maybe try this

    SELECT assoc.name_first || ' ' || assoc.name_last AS client_manager
      FROM t1 assoc,
           t2 ce,
           t3 aa,
           t4 act,
           t5 cc,
           t6 ca
     WHERE ce.ent_id = act.primary_ent_id(+)
       AND ce.ent_id = :p_ent_id
       AND ca.comp_id = ce.ent_id
       AND ca.cd_code = aa.role_code
       AND assoc.id = nvl(aa.assoc_id, ca.assoc_id)
       AND assoc.bk_code = cc.cpy_no
       AND assoc.center = cc.cct_no
       AND (act.activity_id = aa.activity_id OR (act.activity_id is null and aa.activity_id is null))
       AND aa.role_code IN ('CMG', 'RCM', 'BCM', 'CCM', 'BAE')
    

    Untested code. Please check whether it gives the correct result.

  • Query performance problem. Need help.

    This is essentially a performance problem. I hope someone can help me with it.

    Basically, I have four tables: master (150,000 records), child1 (100,000 records), child2 (50 million records!), child3 (10,000+ records)
    (please forgive the aliases).

    Each record in the master can have more than one matching record in each child table (one to many).
    Also, there may be no record at all in any or all of the child tables for a particular master record.

    Now, I need to get the maximum last_updated_date for each master record in each of the 3 child tables, and then find the maximum of
    the three last_updated_dates obtained from the 3 tables.
    For example: for master ID 100, query child1 for all master ID 100 records and get the max last_updated_date.
    Do the same for the other 2 tables, and get the max of those three values.
    (I also need to deal with cases where no record may be found in a child table for a master ID.)

    Writing a procedure that uses a cursor on each of the child tables performs badly, and I need to know the last_updated_date for each master record (all 150,000 of them). It would probably take days to do this.

    SELECT MAX (C1.LAST_UPDATED_DATE),
           MAX (C2.LAST_UPDATED_DATE),
           MAX (C3.LAST_UPDATED_DATE)
    FROM   CHILD1 C1,
           CHILD2 C2,
           CHILD3 C3
    WHERE  C1.MASTER_ID = 100
    OR     C2.MASTER_ID = 100
    OR     C3.MASTER_ID = 100

    I tried the above, but I got a temp tablespace error. I don't think the query is good at all.
    (The OR clause is there to take care of missing records in a child table. With an AND, the join would return
    nothing if there were no record in one child table, even with valid values in the other 2 tables.)

    Thank you very much.

    Published by: user773489 on December 16, 2008 11:49

    You want to alias that field, then.

    SELECT MAX (C.LAST_UPDATED_DATE)
    FROM
    (select child1_master_id MASTER_ID, field2, field3,... field4 from CHILD1 UNION ALL
     select child2_master_id MASTER_ID, field2, field3,... field4 from CHILD2 UNION ALL
     select child3_master_id MASTER_ID, field2, field3,... field4 from CHILD3) C
    WHERE C.MASTER_ID = 100
    

    So do something like that, and explicitly list the columns you want.

    Edit: for something like a specific query for a MASTER_ID...

    SELECT MAX (C.LAST_UPDATED_DATE)
    FROM
    (select child1_master_id MASTER_ID, LAST_UPDATED_DATE from CHILD1 where child1_master_id = 100 UNION ALL
     select child2_master_id MASTER_ID, LAST_UPDATED_DATE from CHILD2 where child2_master_id = 100 UNION ALL
     select child3_master_id MASTER_ID, LAST_UPDATED_DATE from CHILD3 where child3_master_id = 100) C
    WHERE C.MASTER_ID = 100
    

    That should give you very good performance when retrieving one record. But a better idea, as indicated, would be to get it all at once with one SQL statement:

    SELECT MASTER_ID, MAX(C.LAST_UPDATED_DATE)
    FROM
    (select child1_master_id MASTER_ID, LAST_UPDATED_DATE from CHILD1 UNION ALL
     select child2_master_id MASTER_ID, LAST_UPDATED_DATE from CHILD2 UNION ALL
     select child3_master_id MASTER_ID, LAST_UPDATED_DATE from CHILD3 ) C
    GROUP BY MASTER_ID
    

    This will give you the max for each MASTER_ID in one SQL statement, without a cursor.

    Published by: tk-7381344, December 16, 2008 12:12
