Help query optimization

Just looking to see if anyone has suggestions on how I can optimize this query... it takes about a minute to return the data, and even if that's not terribly long, it would be even better if it were faster (I am hearing reports of users trying to refresh the page after 30 seconds or so). I've been trying to figure out if I can "flush" something, but I don't know if it's possible to flush output during a query. I do flush at the beginning and show a message that says "wait...", which I'd like to get rid of once the data is ready. I don't know how to do this, but I think it would need to be a layer or something that shows and then hides the layer. But without further ado... the query.

SELECT * may be the culprit. Do you really need all the fields in the table? Is your database on the ColdFusion server? Otherwise, what is the bandwidth between the CF and the Informix Server?
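For illustration, a minimal sketch of the idea (the table and column names below are made up, since the original query is not shown): ask Informix only for the columns the report actually displays, and filter as early as possible, instead of SELECT *.

    -- Hypothetical example: "orders" and its columns are placeholders.
    -- Instead of
    --   SELECT * FROM orders
    -- request only what the page renders:
    SELECT order_id, customer_name, order_date, total_amount
    FROM   orders
    WHERE  customer_id = 42;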

Tags: ColdFusion

Similar Questions

  • query optimization

    Hello guys,

    I wrote a query, but the execution time is too long; could you help me optimize it, please?

    Kind regards

    Try changing the return type of your query from the root object to virtual machines, VMWVirtualMachine,

    and the reference path from the root to VMWModel, virtualCenters, virtualMachineCollection, virtualMachines.

    A property on or below the return object would then be your path for the where clause.

    There is no point in returning anything below VMWVirtualMachine, unless you want one row per object in a one-to-many relationship with the root object, such as processors or logical disks.

    In that case, a where-clause path that has to walk back up the object tree from the return object triggers extra, useless work; it can work, but not very efficiently in large environments.

    Note that aggregations will cause some performance issues with large data sets, and how they are built can alter performance.  Always test the performance first without filters or aggregations, then add the filters, then add the aggregations.

    Aggregations are fairly limited in what they can do in queries anyway.  In general, I remove the aggregation, do the filtering there, and then hand the output to a WCF service for whatever aggregation(s) are needed.

  • Need help to optimize this script.

    Hello

    I use ID CS5 on Windows7.

    I have a requirement where I need to type text in one particular text frame and have it copied to all the other text frames.

    I wrote this little script that does the job for me.

    It has a few obvious repetitions and is inefficient.

    Please help me optimize it.

    // find the text frame with the specific script label and store its contents in a variable
    var myDocument = app.activeDocument;
    var myPage = myDocument.pages.item(0);
    var source;

    for (var i = 0; i < myPage.pageItems.length; i++)
    {
        var currItem = myPage.pageItems.item(i);
        if (currItem.label == "ParentText")
        {
            source = currItem.contents;
        }
    }

    // find all the text frames except the one with the script label and set their contents to the source
    for (var i = 0; i < myPage.pageItems.length; i++)
    {
        var currItem = myPage.pageItems.item(i);
        if (currItem.label != "ParentText")
        {
            currItem.contents = source;
        }
    }

    Hello

    You could locate your "parent" textFrame using string functions.

    Text operations are faster and there is no need for extra loops:

    var
              mLabel = "ParentText",
              mBreak = ";_;",
              mString = app.activeDocument.textFrames.everyItem().label.join(mBreak ),
              mID = mString.search(mLabel),
              mFrames = app.activeDocument.textFrames.everyItem().getElements(),
              source, k;
    
    if (mID == -1)  { alert ("Desired label not found"); exit(); }
    
    mID = mString.slice(0,mID).split(mBreak).length - 1;
    source = app.activeDocument.textFrames[mID].contents;
    
    for (k = 0; k < mFrames.length; k++)
              if (mFrames[k].label != mLabel)
                        mFrames[k].contents = source;
    

    join() on the array of labels returns one string,

    the search method returns the index,

    the slice method cuts the string,

    and the split method turns it back into an array whose length (minus 1) equals the index of the textFrame.

    The rest is changing the contents of the frames inside a loop.

    Jarek

  • Need help on query optimization

    Hi experts,

    I have the following query that takes more than 30 minutes to retrieve the data. We use Oracle 11g.
    SELECT B.serv_item_id,
      B.document_number,
      DECODE(B.activity_cd,'I','C',B.activity_cd) activity_cd,
      DECODE(B.activity_cd, 'N', 'New', 'I', 'Change', 'C', 'Change', 'D', 'Disconnect', B.activity_cd ) order_activity,
      b.due_date,
      A.order_due_date ,
      A.activity_cd order_activty_cd
    FROM
      (SELECT SRSI2.serv_item_id ,
        NVL(to_date(TO_CHAR(asap.PKG_GMT.sf_gmt_as_local(TASK2.revised_completion_date),'J'),'J'), SR2.desired_due_date) order_due_date ,
        'D' activity_cd
      FROM asap.serv_req_si SRSI2,
        asap.serv_req SR2,
        asap.task TASK2
      WHERE SRSI2.document_number = 10685440
      AND SRSI2.document_number   = SR2.document_number
      AND SRSI2.document_number   = TASK2.document_number (+)
      AND SRSI2.activity_cd       = 'D'
      AND TASK2.task_type (+)     = 'DD'
      ) A ,
      (SELECT SRSI1.serv_item_id,
        SR1.document_number,
        SRSI1.activity_cd,
        NVL(to_date(TO_CHAR(asap.PKG_GMT.sf_gmt_as_local(TASK1.revised_completion_date),'J'),'J'), SR1.desired_due_date) due_date
      FROM asap.serv_req_si SRSI1,
        asap.serv_req SR1,
        asap.task TASK1,
        asap.serv_req_si CURORD
      WHERE CURORD.document_number   = 10685440
      AND SRSI1.document_number      = SR1.document_number
      AND SRSI1.document_number     != CURORD.document_number
      AND SRSI1.serv_item_id         = CURORD.serv_item_id
      AND SRSI1.document_number      = TASK1.document_number (+)
      AND TASK1.task_type (+)        = 'DD'
      AND SR1.type_of_sr             = 'SO'
      AND SR1.service_request_status < 801
      AND SRSI1.activity_cd         IN ('I', 'C', 'N')
      ) B
    WHERE B.serv_item_id = A.serv_item_id;
    If I run the inline views (subqueries) A and B separately, each comes back in a few seconds, but when I join the two it sometimes takes close to an hour. In my specific case, query A returns 52 records and query B returns only 120 rows.

    To me, it looks like the optimizer fails to determine how much data each subquery will return. I feel I need to fool the optimizer with some workaround to get the result more quickly, but I'm not able to find one. If any of you can shed some light on this, it would be really useful.

    Thank you very much
    GAF

    Published by: user780504 on August 7, 2012 02:16

    Published by: BluShadow on August 7, 2012 10:17
    addition of {noformat}
    {noformat} tags for readability and replaced <> with != to circumvent a forum issue.  Please read {message:id=9360002}

    Perhaps using the /*+ materialize */ hint? See above.
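    For illustration, a rough and untested sketch of how the hint might be applied here, by factoring the two inline views (A and B from the posted query) into WITH subqueries so each is materialized once before the join:

    WITH a AS (
      SELECT /*+ materialize */
             srsi2.serv_item_id,
             NVL(to_date(TO_CHAR(asap.pkg_gmt.sf_gmt_as_local(task2.revised_completion_date),'J'),'J'),
                 sr2.desired_due_date) order_due_date,
             'D' activity_cd
      FROM   asap.serv_req_si srsi2, asap.serv_req sr2, asap.task task2
      WHERE  srsi2.document_number = 10685440
      AND    srsi2.document_number = sr2.document_number
      AND    srsi2.document_number = task2.document_number (+)
      AND    srsi2.activity_cd     = 'D'
      AND    task2.task_type (+)   = 'DD'
    ),
    b AS (
      SELECT /*+ materialize */
             srsi1.serv_item_id,
             sr1.document_number,
             srsi1.activity_cd,
             NVL(to_date(TO_CHAR(asap.pkg_gmt.sf_gmt_as_local(task1.revised_completion_date),'J'),'J'),
                 sr1.desired_due_date) due_date
      FROM   asap.serv_req_si srsi1, asap.serv_req sr1, asap.task task1, asap.serv_req_si curord
      WHERE  curord.document_number     = 10685440
      AND    srsi1.document_number      = sr1.document_number
      AND    srsi1.document_number     != curord.document_number
      AND    srsi1.serv_item_id         = curord.serv_item_id
      AND    srsi1.document_number      = task1.document_number (+)
      AND    task1.task_type (+)        = 'DD'
      AND    sr1.type_of_sr             = 'SO'
      AND    sr1.service_request_status < 801
      AND    srsi1.activity_cd         IN ('I', 'C', 'N')
    )
    SELECT b.serv_item_id,
           b.document_number,
           DECODE(b.activity_cd, 'I', 'C', b.activity_cd) activity_cd,
           b.due_date,
           a.order_due_date,
           a.activity_cd order_activity_cd
    FROM   a, b
    WHERE  b.serv_item_id = a.serv_item_id;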

    Regards,

    Etbin

  • eBooks on SQL query optimization

    Hello.

    can someone point me to technical links or ebooks on SQL query tuning for 10g or 11g?

    Tuning SQL queries is something I want to build my expertise in...

    There may be many advisors, like the SQL Tuning Advisor, Access Advisor and so on... but nothing beats tuning your queries manually for maximum optimization.

    Kindly help me in this!

    Check Oracle docs http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm

    But really, you should also check:

    http://www.asktom.Oracle.com

    http://tkyte.blogspot.com

    and all the other blogs out there (like Jonathan Lewis's site and blog); you will find many interesting, "real world" articles there regarding performance and optimization.

    Published by: hoek on March 24, 2009 10:40

  • Help with descent optimization

    Hi all

    I am trying to solve a single equation with one unknown using optimization.  It's a non-linear equation and I was looking around for something that works like Excel's Solver.  Downhill optimization seems close to what I want to do, but it does not optimize.  Would someone mind looking at this code and trying to determine what I am doing wrong?

    FYI, I checked the answer by plugging 0.02 in as the X in my function, and it gives the right value.  The function is set up as: Calc(X) - actual = 0.  The "= 0" part is supposed to be implied (maybe that is what is wrong?)

    Thank you

    Matt

    I just noticed that... if you compute func - measured, then you are trying to detect its zero crossing rather than its minimum. In that case you need f(x) to have a minimum at the zero crossing. You can still use the minimization function, but you must change f(x) to abs(f(x)). Then finding the minimum becomes a way of detecting where the function crosses zero.

    If that does not help either, it would be good to post your original problem and, if possible, the function you were trying to use in Excel.

  • Need help with optimization

    Hello

    I use Oracle Database 12c Enterprise Edition Release 12.1.0.1.0.

    I would like some help optimizing a script.

    The code takes more than 3 days to run on my developer machine. I think most of the time is spent in the SELECT marked in the code below.

    The HistoryID table will fill up with almost 4 million rows.

    This is the HistoryID table creation code:

    execute immediate 'create table HistoryID
    (
    oldId         NUMBER,
    newId         NUMBER,
    tableName     VARCHAR2(50),
    CONSTRAINT pk_HistoryID_OldId_TableName PRIMARY KEY (oldId, tableName)
    )';
    
    

    This is the script:

    DECLARE
    
    
    l_newID_value     NUMBER;
    l_index             NUMBER;
    
    
    TYPE record_to_update IS RECORD(plan_id TABLE1.PLAN_ID%TYPE, new_plan_id TABLE1.PLAN_ID%TYPE);
    TYPE records_to_update IS TABLE OF record_to_update INDEX BY BINARY_INTEGER;
    
    
    l_records_to_update records_to_update;
    
    
    cursor C1  -- The cursor will be filled with 4 Million rows
    is
    select PLAN_ID from TABLE1
    where PA_INDEX = 0;
    
    
    BEGIN
    
    
    for curl in C1
    loop
    ---------------------------------
    -- Most time spent here.
    -----------------------------------
      SELECT (SELECT newId
      FROM HistoryID
      WHERE oldId = curl.PLAN_ID
      and tableName = 'TABLE1')
      INTO l_newID_value
      FROM dual;
    ----------------------------------
    
      IF(l_newID_value IS NULL) THEN
        select "TABLE2_PKeyCount".nextval into l_newID_value from dual;
        INSERT
            INTO HistoryID
              (
                oldId,
                newId,
                tableName
              )
              VALUES
              (
                curl.PLAN_ID,
                l_newID_value,
                'TABLE1'
              );
        end if;
    
        l_index := l_records_to_update.COUNT + 1;
        l_records_to_update(l_index).plan_id := curl.PLAN_ID;
        l_records_to_update(l_index).new_plan_id := l_newID_value;
       
    end loop;
    
    
    FORALL i IN 1 .. l_records_to_update.COUNT
      update TABLE1
      set PLAN_ID = l_records_to_update(i).new_plan_id
      where PLAN_ID = l_records_to_update(i).plan_id
      and l_records_to_update(i).new_plan_id is not null; 
    
    insert into TABLE2 select * from TABLE1;
    execute immediate 'drop table TABLE1';
    
    
    dbms_output.put_line('successful');
    exception when others then
      dbms_output.put_line('Failed: ' || SQLERRM);
    END;
    /
    
    

    This is a test with 100,000 rows:

    create table tbl_hist (
    old_id         number,
    new_id         number,
    table_name     varchar2(50),
    primary key (old_id, table_name)
    )
    ;
    create index idx_tbl_hist on tbl_hist(old_id, new_id)
    ;
    create table tbl(
      plan_id number not null primary key,
      pa_index number default 0 not null
    );
    create index idx_tbl on tbl(plan_id, pa_index)
    ;
    create sequence id
    ;
    -- start
    set timing on
    
    insert into tbl (plan_id)
    select id.nextval from dual
    connect by level <= 100000
    ;
    insert into
    tbl_hist (
      old_id,
      new_id,
      table_name
    )
    select
      t.plan_id,
      id.nextval,
      'TBL'
    from tbl t left join tbl_hist h on (
      h.old_id = t.plan_id
    )
    where t.pa_index = 0
    ;
    merge into tbl d
    using (
      select
        t.rowid row_id,
        h.old_id,
        h.new_id
      from tbl_hist h join tbl t on
        h.old_id = t.plan_id
    ) s on (
      d.rowid = s.row_id
    )
    when matched then
      update set
        plan_id = s.new_id
    ;
    
    -- stop
    set timing off
    
    --select * from tbl order by plan_id
    --;
    --select * from tbl_hist order by old_id
    --;
    drop sequence id
    ;
    drop table tbl_hist purge
    ;
    drop table tbl purge
    ;
    
    Table TBL_HIST created.
    Index IDX_TBL_HIST created.
    Table TBL created.
    Index IDX_TBL created.
    Sequence ID created.
    
    100 000 rows inserted.
    
    Elapsed: 00:00:08.669
    
    100 000 rows inserted.
    
    Elapsed: 00:00:50.462
    
    100 000 rows merged.
    
    Elapsed: 00:00:31.606
    
    Sequence ID dropped.
    Table TBL_HIST dropped.
    Table TBL dropped.
    

    Run on a cheap laptop with Oracle 12c.

    ~1.5 min * 40, so 4 million rows should take about 60 min.

  • Tuning the query: optimizer does not use the function-based index

    Hello

    I have a query, written by a developer, that I can't change.

    Here is the condition:

    (   UPPER(TRIM(CODFSC)) = UPPER(TRIM('01923980500'))

         OR UPPER(TRIM(CODUIC)) = UPPER(TRIM('01923980500')))

    There is an index on CODFSC and on CODUIC1.

    the plan is:

    Plan

    SELECT STATEMENT  ALL_ROWS  Cost: 9,194  Bytes: 3,206,502  Cardinality: 15,054

        1  TABLE ACCESS FULL  TABLE ANAGRAFICA  Cost: 9,194  Bytes: 3,206,502  Cardinality: 15,054

    So I created two new indexes, on UPPER(TRIM(CODFSC)) and on UPPER(TRIM(CODUIC)), but the plan is still a full table scan.
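    (A hedged, untested sketch of the kind of function-based indexes meant above; the index names are invented. Gathering statistics on the hidden expression columns is often needed before the optimizer will consider such indexes.)

    CREATE INDEX anagrafica_fbi_codfsc ON anagrafica (UPPER(TRIM(codfsc)));
    CREATE INDEX anagrafica_fbi_coduic ON anagrafica (UPPER(TRIM(coduic)));

    -- The expressions become hidden virtual columns; without statistics on them
    -- the optimizer may still prefer a full scan.
    BEGIN
      DBMS_STATS.gather_table_stats(
        ownname    => USER,
        tabname    => 'ANAGRAFICA',
        method_opt => 'FOR ALL HIDDEN COLUMNS SIZE AUTO');
    END;
    /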

    Modifying the where condition to:

    (   CODFSC = UPPER(TRIM('01923980500'))

         OR CODUIC = UPPER(TRIM('01923980500')))

    the plan is:

    SELECT STATEMENT  ALL_ROWS  Cost: 157  Bytes: 426  Cardinality: 2

        5  CONCATENATION

            2  TABLE ACCESS BY INDEX ROWID  TABLE ANAGRAFICA  Cost: 5  Bytes: 213  Cardinality: 1

                1  INDEX RANGE SCAN  INDEX ANAGRAFICA_IDX01  Cost: 3  Cardinality: 1

            4  TABLE ACCESS BY INDEX ROWID  TABLE ANAGRAFICA  Cost: 152  Bytes: 213  Cardinality: 1

                3  INDEX SKIP SCAN  INDEX ANAGRAFICA_IDX02  Cost: 1  Cardinality: 151

    Why doesn't the optimizer use my function-based index?

    Thank you.

    Franck,

    I always forget that, by default, OR expansion depends on there being an indexed access path for each branch.

    The 2 in your use of or_predicates(2) depends on the position of the complex predicate that has to be expanded.  If you change the order so that the predicate 'State = 0' appears AFTER the complex predicate, you would have to change the hint to or_predicates(1).

    Beyond the hint currently being undocumented, this also introduces the disturbing thought that, for a more complex query, a change in transformation may produce a different set of generated query blocks with a different ordering of the predicates. Yet another case for making sure that if you hint anything, you hint everything (or create a SQL baseline).

    Regards,

    Jonathan Lewis

  • ADF VO query optimizer hint


    Hello

    I'm on 11.1.2.1.0.

    In SQL, we use /*+ FIRST_ROWS(10) */.

    a)

    To use it in the VO we can go to

    VO.xml -> General -> Tuning

    Query Optimizer Hint: FIRST_ROWS(10)

    Access Mode: Scrollable

    Range Size: 1

    or

    b)

    VO.xml -> General -> Tuning

    Query Optimizer Hint: FIRST_ROWS

    Access Mode: Scrollable

    Range Size: 10

    Should I use option a) or b)?

    Thank you

    Kiran

    b): it does not make sense to try to get the first 10 rows as quickly as possible and then need 10 round trips to get the data.

    Timo
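    For reference, a minimal, hypothetical sketch of what the FIRST_ROWS(10) hint looks like in plain SQL (the table and columns are invented); the VO-level Query Optimizer Hint setting is meant to inject this kind of hint into the generated statement:

    -- Hypothetical table/columns, for illustration only.
    SELECT /*+ FIRST_ROWS(10) */ emp_id, emp_name
    FROM   employees
    ORDER  BY emp_name;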

  • Performance of SQL query optimization

    SELECT
    CASE WHEN SACA.CTD_TYPE = 2 THEN 'WTD'
         WHEN SACA.CTD_TYPE = 3 THEN 'PTD'
         WHEN SACA.CTD_TYPE = 4 THEN 'QTD'
         WHEN SACA.CTD_TYPE = 5 THEN 'YTD' END AS NAME,
    SACA.TOT_REVENUE,    SACC.TOT_REVENUE    AS LAST_TOT_REVENUE,
    SACA.TOT_MARGIN,     SACC.TOT_MARGIN     AS LAST_TOT_MARGIN,
    SACA.TOT_MARGIN_PCT, SACC.TOT_MARGIN_PCT AS LAST_TOT_MARGIN_PCT,
    SACA.TOT_VISIT_CNT,  SACC.TOT_VISIT_CNT  AS LAST_TOT_VISIT_CNT,
    SACA.AVG_ORDER_SIZE, SACC.AVG_ORDER_SIZE AS LAST_AVG_ORDER_SIZE,
    SACA.TOT_MOVEMENT,   SACC.TOT_MOVEMENT   AS LAST_TOT_MOVEMENT
    FROM AAAAAAAAAAAA SACA JOIN AAAAAAAAAAAA SACC ON SACA.CTD_TYPE = SACC.CTD_TYPE
    WHERE SACA.SUMMARY_CTD_ID = (SELECT SUMMARY_CTD_ID FROM SALES_AGGR_DAILY
            WHERE LOCATION_LEVEL_ID = 5 AND LOCATION_ID = 5656 AND PRODUCT_LEVEL_ID IS NULL AND PRODUCT_ID IS NULL
              AND CALENDAR_ID = (SELECT LAST_AGGR_CALENDARID FROM SALES_AGGR_WEEKLY
                                  WHERE LOCATION_LEVEL_ID = 5 AND LOCATION_ID = 5656 AND CALENDAR_ID = 365
                                    AND PRODUCT_LEVEL_ID IS NULL AND PRODUCT_ID IS NULL))
      AND SACC.SUMMARY_CTD_ID = (SELECT SUMMARY_CTD_ID FROM SALES_AGGR_DAILY
            WHERE LOCATION_LEVEL_ID = 5 AND LOCATION_ID = 5656 AND PRODUCT_LEVEL_ID IS NULL AND PRODUCT_ID IS NULL
              AND CALENDAR_ID = (SELECT LAST_AGGR_CALENDARID FROM SALES_AGGR_WEEKLY
                                  WHERE LOCATION_LEVEL_ID = 5 AND LOCATION_ID = 5656 AND CALENDAR_ID = 365
                                    AND PRODUCT_LEVEL_ID IS NULL AND PRODUCT_ID IS NULL))

    Normally this query runs in 15-17 seconds; my goal is to get it below 6 seconds... Can someone help me with this?

    Edited by: 927853 18 April 2012 10:59
    /* Formatted on 2012/04/17 14:42 (Formatter Plus v4.8.8) */
    SELECT CASE
             WHEN saca.ctd_type = 2
               THEN 'WTD'
             WHEN saca.ctd_type = 3
               THEN 'PTD'
             WHEN saca.ctd_type = 4
               THEN 'QTD'
             WHEN saca.ctd_type = 5
               THEN 'YTD'
           END AS NAME,
           saca.tot_revenue, sacc.tot_revenue AS last_tot_revenue, saca.tot_margin, sacc.tot_margin AS last_tot_margin,
           saca.tot_margin_pct, sacc.tot_margin_pct AS last_tot_margin_pct, saca.tot_visit_cnt, sacc.tot_visit_cnt AS last_tot_visit_cnt,
           saca.avg_order_size, sacc.avg_order_size AS last_avg_order_size, saca.tot_movement, sacc.tot_movement AS last_tot_movement
      FROM sales_aggr_ctd saca JOIN sales_aggr_ctd sacc ON saca.ctd_type = sacc.ctd_type
     WHERE EXISTS (
             SELECT 1
               FROM sales_aggr_daily oops
              WHERE oops.summary_ctd_id = saca.summary_ctd_id
                AND oops.location_level_id = 5
                AND oops.location_id = 5656
                AND oops.product_level_id IS NULL
                AND oops.product_id IS NULL
                AND EXISTS (
                      SELECT 1
                        FROM sales_aggr_weekly xxx
                       WHERE oops.calendar_id = xxx.last_aggr_calendarid
                         AND xxx.location_level_id = 5
                         AND xxx.location_id = 5656
                         AND xxx.calendar_id = 365
                         AND xxx.product_level_id IS NULL
                         AND xxx.product_id IS NULL))
       AND EXISTS (
             SELECT 1
               FROM sales_aggr_daily zzz
              WHERE sacc.summary_ctd_id = zzz.summary_ctd_id
                AND zzz.location_level_id = 5
                AND zzz.location_id = 5656
                AND zzz.product_level_id IS NULL
                AND zzz.product_id IS NULL
                AND EXISTS (
                      SELECT 1
                        FROM sales_aggr_weekly mmm
                       WHERE zzz.calendar_id = mmm.last_aggr_calendarid
                         AND mmm.location_level_id = 5
                         AND mmm.location_id = 5656
                         AND mmm.calendar_id = 365
                         AND mmm.product_level_id IS NULL
                         AND mmm.product_id IS NULL))
    
  • Optimizing the given query

    Hi all

    I have a table CST_ITEM_COSTS that has the columns MATERIAL_COST, MATERIAL_OVERHEAD_COST, RESOURCE_COST, OUTSIDE_PROCESSING_COST, OVERHEAD_COST, INVENTORY_ITEM_ID,
    ORGANIZATION_ID, LAST_UPDATE_DATE and COST_TYPE_ID. I want to take the data in this table and fill a table, items_d, that has the columns COST, COST_ELEMENT, LAST_UPDATE_DATE, INVENTORY_ITEM_ID, COST_TYPE_ID and ORGANIZATION_ID. What is stored as 1 row in cst_item_costs should be split into 5 rows, each of which will have the INVENTORY_ITEM_ID,
    ORGANIZATION_ID, LAST_UPDATE_DATE and COST_TYPE_ID. The cost for row 1 will be the material cost value and the cost element is hardcoded to MATERIAL, row 2 will have the MATERIAL_OVERHEAD_COST value as its cost and the cost element will be hardcoded as MATERIAL OVERHEAD, row 3 will have the resource cost value and cost element RESOURCE... and likewise for the 4th and 5th rows. Here's the code: is it possible to optimize this code, to reduce its length or improve its performance?

    SELECT
      "PIVOT"."INVENTORY_ITEM_ID$1"  "INVENTORY_ITEM_ID",
      "PIVOT"."ORGANIZATION_ID_1$1"  "ORGANIZATION_ID",
      "PIVOT"."COST_TYPE_ID_1$1"     "COST_TYPE_ID_1",
      "PIVOT"."LAST_UPDATE_DATE_1$1" "LAST_UPDATE_DATE",
      "PIVOT"."COST$1"               "COST",
      "PIVOT"."COST_ELEMENT$1"       "COST_ELEMENT"
    FROM
      (SELECT
         CASE WHEN "PIVOT_ROW_GENERATOR"."ID" = 0 THEN "PIVOT_SOURCE"."INVENTORY_ITEM_ID"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 1 THEN "PIVOT_SOURCE"."INVENTORY_ITEM_ID"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 2 THEN "PIVOT_SOURCE"."INVENTORY_ITEM_ID"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 3 THEN "PIVOT_SOURCE"."INVENTORY_ITEM_ID"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 4 THEN "PIVOT_SOURCE"."INVENTORY_ITEM_ID"
         END "INVENTORY_ITEM_ID$1",
         CASE WHEN "PIVOT_ROW_GENERATOR"."ID" = 0 THEN "PIVOT_SOURCE"."ORGANIZATION_ID"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 1 THEN "PIVOT_SOURCE"."ORGANIZATION_ID"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 2 THEN "PIVOT_SOURCE"."ORGANIZATION_ID"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 3 THEN "PIVOT_SOURCE"."ORGANIZATION_ID"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 4 THEN "PIVOT_SOURCE"."ORGANIZATION_ID"
         END "ORGANIZATION_ID_1$1",
         CASE WHEN "PIVOT_ROW_GENERATOR"."ID" = 0 THEN "PIVOT_SOURCE"."COST_TYPE_ID"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 1 THEN "PIVOT_SOURCE"."COST_TYPE_ID"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 2 THEN "PIVOT_SOURCE"."COST_TYPE_ID"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 3 THEN "PIVOT_SOURCE"."COST_TYPE_ID"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 4 THEN "PIVOT_SOURCE"."COST_TYPE_ID"
         END "COST_TYPE_ID_1$1",
         CASE WHEN "PIVOT_ROW_GENERATOR"."ID" = 0 THEN "PIVOT_SOURCE"."LAST_UPDATE_DATE"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 1 THEN "PIVOT_SOURCE"."LAST_UPDATE_DATE"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 2 THEN "PIVOT_SOURCE"."LAST_UPDATE_DATE"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 3 THEN "PIVOT_SOURCE"."LAST_UPDATE_DATE"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 4 THEN "PIVOT_SOURCE"."LAST_UPDATE_DATE"
         END "LAST_UPDATE_DATE_1$1",
         CASE WHEN "PIVOT_ROW_GENERATOR"."ID" = 0 THEN "PIVOT_SOURCE"."MATERIAL_COST"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 1 THEN "PIVOT_SOURCE"."MATERIAL_OVERHEAD_COST"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 2 THEN "PIVOT_SOURCE"."RESOURCE_COST"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 3 THEN "PIVOT_SOURCE"."OUTSIDE_PROCESSING_COST"
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 4 THEN "PIVOT_SOURCE"."OVERHEAD_COST"
         END "COST$1",
         CASE WHEN "PIVOT_ROW_GENERATOR"."ID" = 0 THEN 'MATERIAL'
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 1 THEN 'MATERIAL OVERHEAD'
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 2 THEN 'RESOURCE'
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 3 THEN 'OUTSIDE PROCESSING'
              WHEN "PIVOT_ROW_GENERATOR"."ID" = 4 THEN 'OVERHEAD'
         END "COST_ELEMENT$1"
       FROM
         (SELECT 0 "ID" FROM DUAL
          UNION ALL SELECT 1 "ID" FROM DUAL
          UNION ALL SELECT 2 "ID" FROM DUAL
          UNION ALL SELECT 3 "ID" FROM DUAL
          UNION ALL SELECT 4 "ID" FROM DUAL) "PIVOT_ROW_GENERATOR",
         (SELECT
            "CST_ITEM_COSTS"."MATERIAL_COST"           "MATERIAL_COST",
            "CST_ITEM_COSTS"."MATERIAL_OVERHEAD_COST"  "MATERIAL_OVERHEAD_COST",
            "CST_ITEM_COSTS"."RESOURCE_COST"           "RESOURCE_COST",
            "CST_ITEM_COSTS"."OUTSIDE_PROCESSING_COST" "OUTSIDE_PROCESSING_COST",
            "CST_ITEM_COSTS"."OVERHEAD_COST"           "OVERHEAD_COST",
            "CST_ITEM_COSTS"."INVENTORY_ITEM_ID"       "INVENTORY_ITEM_ID",
            "CST_ITEM_COSTS"."ORGANIZATION_ID"         "ORGANIZATION_ID",
            "CST_ITEM_COSTS"."COST_TYPE_ID"            "COST_TYPE_ID",
            "CST_ITEM_COSTS"."LAST_UPDATE_DATE"        "LAST_UPDATE_DATE"
          FROM "CST_ITEM_COSTS" "CST_ITEM_COSTS") "PIVOT_SOURCE") "PIVOT"

    Thanks for your help.

    This is the simplified and formatted code:

    SELECT c.inventory_item_id inventory_item_id$1,
           c.organization_id organization_id_1$1,
           c.cost_type_id cost_type_id_1$1,
           c.last_update_date last_update_date_1$1,
           CASE
             WHEN pivot_row_generator.id = 0 THEN c.material_cost
             WHEN pivot_row_generator.id = 1 THEN c.material_overhead_cost
             WHEN pivot_row_generator.id = 2 THEN c.resource_cost
             WHEN pivot_row_generator.id = 3 THEN c.outside_processing_cost
             WHEN pivot_row_generator.id = 4 THEN c.overhead_cost
           END cost$1,
           CASE
             WHEN pivot_row_generator.id = 0 THEN 'MATERIAL'
             WHEN pivot_row_generator.id = 1 THEN 'MATERIAL OVERHEAD'
             WHEN pivot_row_generator.id = 2 THEN 'RESOURCE'
             WHEN pivot_row_generator.id = 3 THEN 'OUTSIDE PROCESSING'
             WHEN pivot_row_generator.id = 4 THEN 'OVERHEAD'
           END cost_element$1
    FROM   cst_item_costs c,
          (SELECT level - 1 id
           FROM   dual
           CONNECT BY level <= 5) pivot_row_generator
    

    You should put {code} tags around the code to preserve the formatting.
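    If the database is 11g or later, UNPIVOT is another option worth testing; a hedged, untested sketch using the column names from the post:

    SELECT inventory_item_id,
           organization_id,
           cost_type_id,
           last_update_date,
           cost,
           cost_element
    FROM   cst_item_costs
    UNPIVOT INCLUDE NULLS
           (cost FOR cost_element IN (material_cost           AS 'MATERIAL',
                                      material_overhead_cost  AS 'MATERIAL OVERHEAD',
                                      resource_cost           AS 'RESOURCE',
                                      outside_processing_cost AS 'OUTSIDE PROCESSING',
                                      overhead_cost           AS 'OVERHEAD'));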

  • help query rewrite

    Hello

    could you help me rewrite the query without using the ANSI JOIN syntax?

    Select * from A, B, C
    LEFT OUTER JOIN
    ON (A.account_no = B.account_no
    AND C.id_type = B.id_type
    AND B.value = 1);


    Thank you
    -Raj

    Select * from A, B, C
    WHERE A.account_no = B.account_no (+)
    AND C.id_type = B.id_type
    AND B.value = 1;

  • A query optimization

    Hi all
    I tried to optimize the following query; what is the best method to tune it?
    ---code truncated
    Best regards
    Val

    Published by: debonair Valerie on December 27, 2011 04:02

    Discussion: HOW TO: Post a SQL statement tuning request - template posting
    HOW TO: Validate a SQL statement tuning request - template posting

  • Question about the query optimizer

    A while ago, in my database course, we were given the following table
    CREATE TABLE TASKS
    (
        "ID" NUMBER NOT NULL ENABLE,
        "START_DATE" DATE,
        "END_DATE" DATE,
        "DESCRIPTION" VARCHAR2(50 BYTE)
    ) ;
    with approximately 1.5 million entries. In addition, there was the following query:
    SELECT START_DATE, COUNT(START_DATE) FROM TASKS
    GROUP BY START_DATE
    ORDER BY START_DATE;
    And the Index:
    create index blub on Tasks (start_date asc);
    The main exercise was to speed up the query with indexes. Because the query uses all the data in the table, the optimizer ignores the index and just does a full table scan.
    Here the QEP:
    ----------------------------------------------------------------------------                                                                                                                                                                                                                                 
    | Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |                                                                                                                                                                                                                                 
    ----------------------------------------------------------------------------                                                                                                                                                                                                                                 
    |   0 | SELECT STATEMENT   |       |  9343 | 74744 |  3423   (6)| 00:00:42 |                                                                                                                                                                                                                                 
    |   1 |  SORT GROUP BY     |       |  9343 | 74744 |  3423   (6)| 00:00:42 |                                                                                                                                                                                                                                 
    |   2 |   TABLE ACCESS FULL| TASKS |  1981K|    15M|  3276   (2)| 00:00:40 |                                                                                                                                                                                                                                 
    ----------------------------------------------------------------------------
    Then we tried to force it to use the index with this query:
    ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_1;
    
    SELECT /* + INDEX (TASKS BLUB) */ START_DATE, COUNT(START_DATE) FROM TASKS
    GROUP BY START_DATE
    ORDER BY START_DATE;
    but again it ignored the index. The optimizer documentation makes clear that whenever you will use all the data in a table, it does a full scan.
    So we fooled it into doing a limited index scan with this query:
    create or replace function bla
    return date deterministic is
      ret date;
    begin
      select MIN(start_date) into ret from Tasks;
      return ret;
    end bla;
    
    ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_1;
    
    SELECT /* + INDEX (TASKS BLUB) */ START_DATE, COUNT(START_DATE) FROM TASKS
    where start_date >= bla
    GROUP BY START_DATE
    ORDER BY START_DATE; 
    now, we got the following QEP:
    -----------------------------------------------------------------------------                                                                                                                                                                                                                                
    | Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |                                                                                                                                                                                                                                
    -----------------------------------------------------------------------------                                                                                                                                                                                                                                
    |   0 | SELECT STATEMENT     |      |     1 |     8 |     3   (0)| 00:00:01 |                                                                                                                                                                                                                                
    |   1 |  SORT GROUP BY NOSORT|      |     1 |     8 |     3   (0)| 00:00:01 |                                                                                                                                                                                                                                
    |*  2 |   INDEX RANGE SCAN   | BLUB |     1 |     8 |     3   (0)| 00:00:01 |                                                                                                                                                                                                                                
    ----------------------------------------------------------------------------- 
    So it used the index.

    Now to my two questions:

    1. Why does it always do a full scan (the answer from the optimizer documentation is a bit unsatisfactory)?
    2. Looking at the difference between the costs (FS: 3276, IR: 3) and the time the system needs (FS: 9.6 s, IR: 4.45 s), why did the optimizer refuse the clearly better plan?

    Thanks in advance,

    Kai Gödde

    Published by: Kai Gödde on May 30, 2011 18:54

    Published by: Kai Gödde on May 30, 2011 18:56

    The reason that Oracle is full scanning the table for your query:

    SELECT START_DATE, COUNT(START_DATE) FROM TASKS
    GROUP BY START_DATE
    ORDER BY START_DATE;
    

    and using the index for:

    SELECT /* + INDEX (TASKS BLUB) */ START_DATE, COUNT(START_DATE) FROM TASKS
    where start_date >= bla
    GROUP BY START_DATE
    ORDER BY START_DATE;
    

    has to do with the (possible) null values in the table. Note that the query with a predicate on start_date would probably have used the index even without the hint.

    The optimizer does not know that there is a start_date value in every row of the table, and the GROUP BY expression will include NULL values, but because you count start_date (meaning a count of the non-null values of the expression) the count itself will be zero. For example:

    SQL> with t as (
      2     select trunc(sysdate) dt from dual union all
      3     select trunc(sysdate) dt from dual union all
      4     select trunc(sysdate-1) dt from dual union all
      5     select trunc(sysdate-1) dt from dual union all
      6     select to_date(null) from dual)
      7  select dt, count(dt) from t
      8  group by dt;
    
    DT           COUNT(DT)
    ----------- ----------
                         0
    29-MAY-2011          2
    30-MAY-2011          2
    

    Because Oracle does not create an index entry when all the columns of the index key are null, the optimizer is forced to full scan the table to make sure it returns all rows. In the query with a predicate on start_date the optimizer knows that any start_date >= bla must be non-null.

    To make your first query use the index, you must either declare start_date as NOT NULL (assuming it really is a mandatory field) or, if there may be NULL values you do not care about, add a predicate like:

    where start_date is not null
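    Both options, as a minimal untested sketch against the TASKS table and BLUB index from the post:

    -- Option 1: make the column mandatory, so every row has an index entry
    ALTER TABLE tasks MODIFY (start_date NOT NULL);

    -- Option 2: keep NULLs allowed but exclude them explicitly;
    -- every remaining row is then guaranteed to be present in the index
    SELECT   start_date, COUNT(start_date)
    FROM     tasks
    WHERE    start_date IS NOT NULL
    GROUP BY start_date
    ORDER BY start_date;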
    

    John

  • Beginner: query optimization

    Hello

    Given that two queries:

    1)

    select id_msisdn_a, id_msisdn_b from BH_LINK a
    where not exists
    (select 1 from bh_node b where b.id_mes = a.id_mes and b.id_msisdn = a.id_msisdn_a)
    and not exists
    (select 1 from bh_node b where b.id_mes = a.id_mes and b.id_msisdn = a.id_msisdn_b)
    group by id_msisdn_a, id_msisdn_b


    ----------------------------------------------------------------------------------------------------------------------
    | Id | Operation                    | Name       | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     | Pstart | Pstop |
    ----------------------------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT             |            |  116K | 4990K |         | 320K (12)  | 01:04:09 |        |       |
    |  1 |  HASH GROUP BY               |            |  116K | 4990K |   5928K | 320K (12)  | 01:04:09 |        |       |
    |  2 |   NESTED LOOPS ANTI          |            |  116K | 4990K |         | 319K (12)  | 01:03:53 |        |       |
    |  3 |    NESTED LOOPS ANTI         |            |   11M |  354M |         | 318K (12)  | 01:03:45 |        |       |
    |  4 |     PARTITION RANGE ALL      |            |  556M |   10G |         | 285K (2)   | 00:57:07 |      1 |     6 |
    |  5 |      PARTITION HASH ALL      |            |  556M |   10G |         | 285K (2)   | 00:57:07 |      1 |     4 |
    |  6 |       TABLE ACCESS FULL      | BH_LINK    |  556M |   10G |         | 285K (2)   | 00:57:07 |      1 |    24 |
    |  7 |     PARTITION RANGE ITERATOR |            |   32M |  366M |         | 0 (0)      | 00:00:01 |    KEY |   KEY |
    |  8 |      PARTITION HASH ITERATOR |            |   32M |  366M |         | 0 (0)      | 00:00:01 |    KEY |   KEY |
    |* 9 |       INDEX UNIQUE SCAN      | PK_BH_NODE |   32M |  366M |         | 0 (0)      | 00:00:01 |        |       |
    | 10 |    PARTITION RANGE ITERATOR  |            |   32M |  374M |         | 0 (0)      | 00:00:01 |    KEY |   KEY |
    | 11 |     PARTITION HASH ITERATOR  |            |   32M |  374M |         | 0 (0)      | 00:00:01 |    KEY |   KEY |
    |*12 |      INDEX UNIQUE SCAN       | PK_BH_NODE |   32M |  374M |         | 0 (0)      | 00:00:01 |        |       |
    ----------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    9 - access("B"."ID_MSISDN" = "A"."ID_MSISDN_B" AND "B"."ID_MES" = "A"."ID_MES")
    12 - access("B"."ID_MSISDN" = "A"."ID_MSISDN_A" AND "B"."ID_MES" = "A"."ID_MES")










    2)

    Select id_msisdn_a, id_msisdn_b from BH_LINK a, BH_NODE b
    where b.id_mes = a.id_mes
    and not (b.id_msisdn = a.id_msisdn_a) and b.id_msisdn = a.id_msisdn_b
    group by id_msisdn_a, id_msisdn_b


    --------------------------------------------------------------------------------------------------------------------
    | Id | Operation                 | Name       | Rows  | Bytes | TempSpc | Cost (%CPU)| Time      | Pstart | Pstop |
    --------------------------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT          |            |  580M |   17G |         | 461G (100) | 999:59:59 |        |       |
    |  1 |  HASH GROUP BY            |            |  580M |   17G |    235P | 461G (100) | 999:59:59 |        |       |
    |  2 |   PARTITION RANGE ALL     |            | 5976T |  169P |         | 18G (100)  | 999:59:59 |      1 |     6 |
    |* 3 |    HASH JOIN              |            | 5976T |  169P |    124M | 18G (100)  | 999:59:59 |        |       |
    |  4 |     PARTITION HASH ALL    |            |   32M |  374M |         | 15163 (1)  | 00:03:02  |      1 |     4 |
    |  5 |      INDEX FAST FULL SCAN | PK_BH_NODE |   32M |  374M |         | 15163 (1)  | 00:03:02  |      1 |    24 |
    |  6 |     PARTITION HASH ALL    |            |  556M |   10G |         | 285K (2)   | 00:57:07  |      1 |     4 |
    |  7 |      TABLE ACCESS FULL    | BH_LINK    |  556M |   10G |         | 285K (2)   | 00:57:07  |      1 |    24 |
    --------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    3 - access("B"."ID_MES" = "A"."ID_MES")
        filter("B"."ID_MSISDN" <> "A"."ID_MSISDN_A" OR "B"."ID_MSISDN" <> "A"."ID_MSISDN_B")






    /******************
    ALTER TABLE BH_LINK ADD (
      CONSTRAINT PK_BH_LINK
      PRIMARY KEY
      (ID_MSISDN_A, ID_MSISDN_B, ID_MES, ID_SUBTIPO_SERVICIO, ID_DESTINO_CONCRETO_SER)
      USING INDEX LOCAL);

    ALTER TABLE BH_NODE ADD (
      CONSTRAINT PK_BH_NODE
      PRIMARY KEY
      (ID_MSISDN, ID_MES)
      USING INDEX LOCAL);
    ************************/



    Two questions:

    (a) Do 1) and 2) produce the same result?

    (b) If the answer is yes, can the query be optimized without using NOT EXISTS?


    Thanks in advance,
    Jose Luis

    user455401 wrote:
    Thanks for your help.

    Sorry, the second query has an error; the correct one is:

    Select id_msisdn_a, id_msisdn_b from BH_LINK a, BH_NODE b
    where b.id_mes = a.id_mes
    and not (b.id_msisdn = a.id_msisdn_a OR b.id_msisdn = a.id_msisdn_b)
    group by id_msisdn_a, id_msisdn_b

    This way, I would expect the result to be the same, wouldn't it?

    No, for the same reason.
    Query 1 excludes any group where any member of the group has the wrong kind of match in bh_node. That is, to decide whether a row should be included in the aggregation, you need to know something about the other rows in the same group.
    Your new query, like the original query 2, includes a group where some member of the group has the right kind of match. That is not the same thing. Saying "I drink tea." (which is similar to query 2 and your new query) is not the same as saying "Nobody in my family drinks beer." (which is similar to query 1). These two statements may both happen to be true, or they might both be true in some very specific circumstances (for example, if everyone in a family drinks the same thing, and nobody drinks more than one thing), but in general they are independent statements. Your new query, like query 2, says nothing about the group as a whole.

    By the way, your query doesn't seem to improve on 1):

    It is an improvement over query 1. It scans bh_node once, while query 1 scans bh_node twice.
    It is not significant; it is faster than query 2, but who cares? Query 2 produces wrong results; there is no reason to care how fast it is.

    Looking at your problem once again, I think the solution I posted should use an outer join.

    SELECT    l.id_msisdn_a, l.id_msisdn_b
    FROM               bh_link      l
    LEFT OUTER JOIN       bh_node      n  ON     l.id_mes  = n.ld_mes
    GROUP BY  l.id_msisdn_a, l.id_msisdn_b
    HAVING       COUNT ( CASE
                      WHEN  n.id_msisdn     IN (l.msisdn_a, l.msisdn_b)
                    THEN  1
                  END
                ) = 0
    OR        NVL (l.id_msisdn_a, l.id_msisdn_b)     IS NULL          -- If needed
    ;
    

    Unless you post some sample data (CREATE TABLE and INSERT statements), I can't test anything, and errors like that are very likely.
