XMLAGG query performance degrading exponentially.

There is a serious performance problem with my query using XMLAGG

CREATE TABLE tmp_test_xml
(
  acc_id   NUMBER(12),
  cus_dtls CLOB
)

INSERT INTO tmp_test_xml
SELECT tab.acc_id acc_id,
       XMLSERIALIZE (DOCUMENT
           XMLELEMENT ("holders",
               XMLAGG (XMLELEMENT ("holder",
                   XMLELEMENT ("Gender", tab.sex_cde),
                   XMLELEMENT ("Name", tab.name),
                   XMLFOREST (tab.drivers_licence AS "DL"),
                   XMLFOREST (tab.empr_name AS "emp_name"),
                   XMLELEMENT ("Address", tab.addr),
                   ...
               ))
           )
       ) AS cus_dtls
FROM the_table tab
GROUP BY tab.acc_id

The source table the_table has 3 million records.

The Insert performance degrades as follows:

INSERT
10K recs - 1 sec
30K recs - 45 sec
50K recs - 3 mins
100K recs - 16 mins

Please let me know if I can improve performance in some way. I can't imagine how I am going to insert 3 million records at this rate...

There is no tablespace problem. I tried 1 million records without XMLAGG - 2 minutes.

Is there another way to aggregate my XML data? Essentially, I'm trying to aggregate the data of all customers for a single account.

Version information:

------------------------------------------------------------------------------------------------------------

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

Cheers!

Sofiane

Why do you think the problem is with XMLAgg?

Try with the following table definition

CREATE TABLE tmp_test_xml
(
  acc_ID     NUMBER(12),
  CUS_DTLS  XMLTYPE -- changed storage. Defaults to SECUREFILE BINARY XML in your version
)

and also remove the XMLSERIALIZE from your SQL statement.

The performance degradation you show reads like a memory leak, so this just tests whether it is in the conversion of the XMLType to a CLOB. You could also open an SR with Oracle Support on the issue, as they would be better placed to investigate.
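
For illustration, a minimal sketch of the combined suggestion (a hypothetical reconstruction, not tested; the_table again stands in for the anonymized source table, and the elided child elements are reduced to a comment):

INSERT INTO tmp_test_xml (acc_id, cus_dtls)
SELECT tab.acc_id,
       XMLELEMENT ("holders",  -- no XMLSERIALIZE: the result stays an XMLType
           XMLAGG (XMLELEMENT ("holder",
               XMLELEMENT ("Gender", tab.sex_cde),
               XMLELEMENT ("Name", tab.name),
               XMLELEMENT ("Address", tab.addr)
               -- ... remaining child elements as in the original statement
           ))
       ) AS cus_dtls
FROM the_table tab
GROUP BY tab.acc_id;

If the XMLType column alone restores a flat insert rate, the XMLType-to-CLOB conversion was the bottleneck; serialization can then be deferred to read time with XMLSERIALIZE in the SELECT.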

Tags: Oracle Development

Similar Questions

  • Query performance when getting the most recent price

    Happy new year everyone.

    I have a price table in my system that holds several prices for each product, with the date each price came into force.

    I have queries throughout the system that retrieve the most recent effective price as of the transaction date.

    The easiest way I can find to implement this is a user-defined function that fetches the price.

    My problem is that many of my queries access large amounts of data (for example, transactions) and my price table is also quite big - both have millions of records. Using a user-defined PL/SQL function in my query, I get a lot of context switching between SQL and PL/SQL, and my queries do not perform well.

    Here is an example of code, which simplifies my scenario:

    drop table xo_stock_trans;
    create table xo_stock_trans (item varchar2(25), trans_date date, quantity number(20,4));
    insert into xo_stock_trans values('A',TO_DATE('25-DEC-2014','DD-MON-YYYY'), 4);
    insert into xo_stock_trans values('A',TO_DATE('27-DEC-2014','DD-MON-YYYY'), -2);
    insert into xo_stock_trans values('A',TO_DATE('28-DEC-2014','DD-MON-YYYY'), 5);
    insert into xo_stock_trans values('B',TO_DATE('23-DEC-2014','DD-MON-YYYY'), 20);
    insert into xo_stock_trans values('B',TO_DATE('26-DEC-2014','DD-MON-YYYY'), -6);
    insert into xo_stock_trans values('B',TO_DATE('29-DEC-2014','DD-MON-YYYY'), 15);
    /
    -- Generate lots more data
    BEGIN
        -- Generate more trans dates
        for r in 1..1000
        LOOP
            insert into xo_stock_trans
            select item, trans_date - r - 7 as  trans_date, ROUND(dbms_random.value(1,50),2) as quantity
            from xo_stock_trans
            where trans_date between TO_DATE('23-DEC-2014','DD-MON-YYYY') AND TO_DATE('29-DEC-2014','DD-MON-YYYY')
              and item in ('A','B');
        END LOOP;
        COMMIT;
        -- generate more items
        for lt in 1..12 
        LOOP
            -- generate C,D, E, items
            INSERT into xo_stock_trans
            SELECT chr(ascii(item)+(lt*2)) as item, trans_date, quantity
            from xo_stock_trans
            where item in ('A','B');
            -- generate A1, A2, B1, B2, etc
            for nm in 1..10
            LOOP
                INSERT INTO xo_stock_trans
                select item || to_char(nm), trans_date, quantity
                from xo_stock_trans
                where length(item) = 1;
            END LOOP;
            COMMIT;
        END LOOP;
        COMMIT;
    END;
    /
    create index xo_stock_trans_ix1 on xo_stock_trans (item);
    create index xo_stock_trans_ix2 on xo_stock_trans (trans_date);
    exec dbms_stats.gather_table_stats(ownname =>user, tabname => 'XO_STOCK_TRANS' , estimate_percent => 100, degree => dbms_stats.auto_degree, cascade=>true);
    /
    
    
    drop table xo_prices;
    create table xo_prices (item varchar2(25), price_date date, gross_price number(20,4), net_price number(20,4), special_price number(20,4) );
    insert into xo_prices values ('A', to_date('01-DEC-2014','DD-MON-YYYY'), 10, 8, 6);
    insert into xo_prices values ('A', to_date('25-DEC-2014','DD-MON-YYYY'), 9, 8, 6);
    insert into xo_prices values ('A', to_date('26-DEC-2014','DD-MON-YYYY'), 7, 6, 4);
    insert into xo_prices values ('B', to_date('01-DEC-2014','DD-MON-YYYY'), 5.50, 4.50, 3);
    insert into xo_prices values ('B', to_date('25-DEC-2014','DD-MON-YYYY'), 5.00, 4.00, 3);
    insert into xo_prices values ('B', to_date('26-DEC-2014','DD-MON-YYYY'), 3.50, 2.50, 2);
    /
    -- Generate lots more data
    BEGIN
        -- Generate more price dates
        for r in 1..1000
        LOOP
            insert into xo_prices
            select item, price_date - r - 7 as  price_date,gross_price, net_price, special_price
            from xo_prices
            where price_date between TO_DATE('23-DEC-2014','DD-MON-YYYY') AND TO_DATE('29-DEC-2014','DD-MON-YYYY')
              and item in ('A','B');
        END LOOP;
        COMMIT;
        -- generate more items
        for lt in 1..12 
        LOOP
            -- generate C,D, E, items
            INSERT into xo_prices
            SELECT chr(ascii(item)+(lt*2)) as item, price_date, gross_price + (lt*2), net_price + (lt*2), special_price + (lt*2)
            from xo_prices
            where item in ('A','B');
            -- generate A1, A2, B1, B2, etc
            for nm in 1..10
            LOOP
                INSERT INTO xo_prices
                select item || to_char(nm), price_date, gross_price, net_price, special_price
                from xo_prices
                where length(item) = 1;
            END LOOP;
            COMMIT;
        END LOOP;
    END;
    /
    
    create index xo_prices_ix1 on xo_prices (item, price_date);
    exec dbms_stats.gather_table_stats(ownname =>user, tabname => 'XO_PRICES' , estimate_percent => 100, degree => dbms_stats.auto_degree, cascade=>true);
    /
    
    create or replace function xo_get_price(I_Item in VARCHAR2, I_Date in DATE, i_Price_type IN VARCHAR2) RETURN NUMBER
    IS
        -- Function to get most recent effective price prior to the date
        CURSOR c_get_prices(P_Item VARCHAR2, P_Date DATE)
        IS
        SELECT gross_price, net_price, special_price
        FROM XO_PRICES
        WHERE item = P_Item
         AND price_date <= P_Date
        ORDER BY price_date desc; -- most recent price
        
        l_gross_price NUMBER(20,4);
        l_net_price NUMBER(20,4);
        l_special_price NUMBER(20,4);
    BEGIN
        OPEN c_get_prices(I_Item, I_Date);
        FETCH c_get_prices INTO l_gross_price, l_net_price, l_special_price;
        CLOSE c_get_prices;
        
        IF I_Price_Type='GROSS' then return l_gross_price;
        ELSIF I_Price_Type= 'NET' then return l_net_price;
        ELSIF I_Price_Type= 'SPECIAL' then return l_special_price;
        END IF;
    END xo_get_price;
    /
    
    -- Here is a typical query I am trying to perform
    select tr.item, tr.trans_date, tr.quantity
        , xo_get_price(tr.item, tr.trans_date, 'GROSS') as gross_price
        , xo_get_price(tr.item, tr.trans_date, 'NET') as net_price
        , xo_get_price(tr.item, tr.trans_date, 'SPECIAL') as special_price
    from xo_stock_trans tr
    where tr.trans_date between '01-AUG-2014' and '31-AUG-2014';
    

    I would like to refactor my query so that I do not use user-defined PL/SQL functions, but so far I haven't found anything that works better than the SQL above. For example, the following query takes MUCH longer:

    select tr.item, tr.trans_date, tr.quantity
        , pr.gross_price
        , pr.net_price
        , pr.special_price
    from xo_stock_trans tr
    join xo_prices pr on pr.item = tr.item
                    and pr.price_date = (select max(pr2.price_date)
                                         from xo_prices pr2
                                         where pr2.item = pr.item
                                           and pr2.price_date <= tr.trans_date
                                         )
    where tr.trans_date between '01-AUG-2014' and '31-AUG-2014';
    

    I'm interested to know if anyone has tackled a similar scenario and managed to write more efficient code.

    I looked at making the function deterministic / manually caching it, but the item/date combinations are fairly unique, so it does not benefit.

    Any suggestion considered - parallelism, analytic functions, pipelined functions, etc.

    Alan

    Hi, Alan.

    Alan Lawlor wrote:

    ...

    My problem is that many of my queries access large amounts of data (for example, transactions) and my price table is also quite big - both have millions of records. Using a user-defined PL/SQL function in my query, I get a lot of context switching between SQL and PL/SQL, and my queries do not perform well...

    You got that right!  User-defined functions can be very convenient, but that convenience comes at a price.

    What version of Oracle are you using?  In Oracle 12, there is a new 'Temporal Validity' feature which may help you.
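
    If you do move to 12c, the Temporal Validity variant would look roughly like this (a sketch only, not tested; it reuses the price_date/end_date idea described next):

    create table xo_prices_12c (
      item          varchar2(25),
      price_date    date,
      end_date      date,
      gross_price   number(20,4),
      period for price_validity (price_date, end_date)
    );

    -- return only the rows whose validity period contains the given instant
    select *
      from xo_prices_12c as of period for price_validity
           to_date('25-DEC-2014','DD-MON-YYYY');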

    In any version, it will be much faster if you add a new column to the xo_prices table.  You could call it end_date, although it would actually be the date when some other price took effect.  You might put DATE '9999-12-31' in the end_date column for current prices.  You can calculate end_date using the LEAD analytic function.  Be sure to re-calculate end_date when you insert new rows into xo_prices, or when you update the dates on existing rows.

    Once you have price_date and end_date in the xo_prices table, you can join to this table to get the price in effect on a given date d by including

    AND d >= xo_prices.price_date
    AND d <  xo_prices.end_date

    in the join condition.
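
    A rough sketch of that approach, using the xo_prices test table from the post above (backfilling via MERGE on rowid is just one way to do it; not tested):

    alter table xo_prices add (end_date date);

    -- Backfill end_date as the next price_date for the same item;
    -- rows with no successor get the open-ended date 9999-12-31.
    merge into xo_prices p
    using (select rowid as rid,
                  lead(price_date, 1, date '9999-12-31')
                      over (partition by item order by price_date) as new_end
             from xo_prices) x
    on (p.rowid = x.rid)
    when matched then update set p.end_date = x.new_end;

    -- The transaction query then becomes a plain range join, with no PL/SQL calls:
    select tr.item, tr.trans_date, tr.quantity,
           pr.gross_price, pr.net_price, pr.special_price
      from xo_stock_trans tr
      join xo_prices pr
        on pr.item = tr.item
       and tr.trans_date >= pr.price_date
       and tr.trans_date <  pr.end_date
     where tr.trans_date between date '2014-08-01' and date '2014-08-31';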

    In some situations, especially when you don't have many distinct (item, date) combinations, scalar sub-queries could be faster than joins.
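
    For example, a sketch of the scalar sub-query variant (one sub-query per price column; KEEP (DENSE_RANK LAST) picks the value from the row with the latest price_date; not tested):

    select tr.item, tr.trans_date, tr.quantity,
           (select max(pr.gross_price) keep (dense_rank last order by pr.price_date)
              from xo_prices pr
             where pr.item = tr.item
               and pr.price_date <= tr.trans_date) as gross_price
      from xo_stock_trans tr
     where tr.trans_date between date '2014-08-01' and date '2014-08-31';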

    Either way, it involves no PL/SQL, so there is no context switching.

  • Querying performance of multiple virtual machines in a single call

    Hi all

    I'm a little rusty at this; I used to do it, but that was almost a year ago...

    I'm trying to retrieve the performance data for several virtual machines.

    For now, I do this serially, i.e. running QueryPerf on each virtual machine.

    The problem is that I have about 800 of them connected to one of my VCs. Needless to say, this takes a LOT of time to go through all the VMs.

    Is there some method that would let me supply an array of VMs and query them (performance counters) across all virtual machines at once? I assume this would be faster than going one by one.

    Thanks in advance!

    Hello

    The QueryPerf method accepts an array of PerfQuerySpec data objects, where each PerfQuerySpec object points to a managed entity whose statistics are to be retrieved.

    So in your scenario, before calling queryPerf, you can create a PerfQuerySpec for each virtual machine and add each of them to a PerfQuerySpec array. You can then pass this array of PerfQuerySpec to the queryPerf call to retrieve statistics for all virtual machines.

    Hope the information above resolves your query.

    -Neha

  • Query performance

    Hello
    Maybe this question doesn't include all the details, but I'm looking for some thoughts.
    The query below should return around 100K records, and none of the three tables used in the query has more than 100K records.
    Yet this query on an 11g database takes almost 3 hours to return the data. At a very high level, what needs attention first?
    (The database is already tuned and the table stats/indexes are good; there is no full table scan in the query's explain plan.)

    Thank you


    SELECT EQXDL.LIST_HEADER_ID,
           QP.LIST_LINE_ID,
           EQXDL.LIST_NAME,
           EQXDL.DESCRIPTION,
           EQXDL.INVENTORY_ITEM_ID,
           EQXDL.ITEM,
           EQXDL.ITEM_DESC,
           QL.OPERAND,
           NULL LIST_PRICE,
           EQXDL.CURRENCY_CODE,
           EQXDL.PRIMARY_UOM_FLAG,
           QP.PRODUCT_UOM_CODE UOM_CODE,
           QP.PRICING_ATTRIBUTE IBX_PRICING_ATTRIBUTE,
           QP.PRICING_ATTR_VALUE_FROM IBX_PRICING_ATTRIBUTE_VALUE,
           (SELECT USER_SEGMENT_NAME
              FROM QP_SEGMENTS_V
             WHERE SEGMENT_MAPPING_COLUMN = QP.PRICING_ATTRIBUTE
               AND PRC_CONTEXT_ID IN (SELECT PRC_CONTEXT_ID
                                        FROM QP_PRC_CONTEXTS_V
                                       WHERE PRC_CONTEXT_CODE = 'PRICING ATTRIBUTE'))
               IBX_PRICING_ATT_DISP,
           NULL LOCATION_ID,
           'NEW' RECORD_STATUS,
           'IBXAttr' RECORD_COMMENTS,
           SYSDATE CREATION_DATE,
           SYSDATE LAST_UPDATE_DATE,
           :B3 CREATED_BY,
           :B3 LAST_UPDATED_BY,
           TEST_PRIC_ATT_DISP (PRICING_ATTR_VALUE_FROM) IBX_PRICING_ATT_DISP_VAL,
           :B1 REQUEST_ID
      FROM QP_LIST_LINES QL,
           QP_PRICING_ATTRIBUTES QP,
           EQIX.EQXQP_PRICELIST_DOWNLOAD EQXDL
     WHERE 1 = 1
       AND QL.LIST_HEADER_ID = QP.LIST_HEADER_ID
       AND QL.LIST_LINE_ID = QP.LIST_LINE_ID
       AND QL.LIST_HEADER_ID = EQXDL.LIST_HEADER_ID
       AND QL.LIST_LINE_ID = EQXDL.LIST_LINE_ID
       AND QP.LIST_HEADER_ID = EQXDL.LIST_HEADER_ID
       AND QP.LIST_LINE_ID = EQXDL.LIST_LINE_ID
       AND QP.PRICING_ATTRIBUTE_CONTEXT = 'PRICING ATTRIBUTE'
       AND (SELECT USER_SEGMENT_NAME
              FROM QP_SEGMENTS_V
             WHERE SEGMENT_MAPPING_COLUMN = QP.PRICING_ATTRIBUTE
               AND PRC_CONTEXT_ID IN (SELECT PRC_CONTEXT_ID
                                        FROM QP_PRC_CONTEXTS_V
                                       WHERE PRC_CONTEXT_CODE = 'PRICING ATTRIBUTE')) = 'IBX'
       AND CASE
              WHEN (SELECT USER_SEGMENT_NAME
                      FROM QP_SEGMENTS_V
                     WHERE SEGMENT_MAPPING_COLUMN = QP.PRICING_ATTRIBUTE
                       AND PRC_CONTEXT_ID IN (SELECT PRC_CONTEXT_ID
                                                FROM QP_PRC_CONTEXTS_V
                                               WHERE PRC_CONTEXT_CODE = 'PRICING ATTRIBUTE')) = 'IBX'
              THEN TEST_PRIC_ATT_DISP (QP.PRICING_ATTR_VALUE_FROM)
           END IN (SELECT LOOKUP_CODE
                     FROM FND_LOOKUP_VALUES_VL
                    WHERE LOOKUP_TYPE IN :B4)
       AND QP.LIST_HEADER_ID = :B2
       AND EQXDL.REQUEST_ID = :B1

    Explain Plan (plain English):

    1. Rows were retrieved using the unique index QP.QP_PRC_CONTEXTS_TL_U1.
    2. Rows were retrieved using the unique index QP.QP_PRC_CONTEXTS_B_U1.
    3. Rows in table QP.QP_PRC_CONTEXTS_B were accessed using rowid obtained from an index.
    4. For each row retrieved by step 1, the operation in step 3 was performed to find a matching row.
    5. One or more rows were retrieved using index QP.QP_SEGMENTS_B_U3. The index was scanned in ascending order.
    6. Rows in table QP.QP_SEGMENTS_B were accessed using rowid obtained from an index.
    7. Rows were retrieved using the unique index QP.QP_SEGMENTS_TL_U1.
    8. For each row retrieved by step 6, the operation in step 7 was performed to find a matching row.
    9. Rows in table QP.QP_SEGMENTS_TL were accessed using rowid obtained from an index.
    10. For each row retrieved by step 8, the operation in step 9 was performed to find a matching row.
    11. Rows were retrieved using the unique index QP.QP_PRC_CONTEXTS_TL_U1.
    12. Rows were retrieved using the unique index QP.QP_PRC_CONTEXTS_B_U1.
    13. Rows in table QP.QP_PRC_CONTEXTS_B were accessed using rowid obtained from an index.
    14. For each row retrieved by step 11, the operation in step 13 was performed to find a matching row.
    15. One or more rows were retrieved using index QP.QP_SEGMENTS_B_U3. The index was scanned in ascending order.
    16. Rows in table QP.QP_SEGMENTS_B were accessed using rowid obtained from an index.
    17. Rows were retrieved using the unique index QP.QP_SEGMENTS_TL_U1.
    18. For each row retrieved by step 16, the operation in step 17 was performed to find a matching row.
    19. Rows in table QP.QP_SEGMENTS_TL were accessed using rowid obtained from an index.
    20. For each row retrieved by step 18, the operation in step 19 was performed to find a matching row.
    21. One or more rows were retrieved using index QP.QP_PRICING_ATTRIBUTES_N8. The index was scanned in ascending order.
    22. Rows in table QP.QP_PRICING_ATTRIBUTES were accessed using rowid obtained from an index.
    23. One or more rows were retrieved using index EQIX.EQXQP_PRICELIST_DOWNLOAD_N4. The index was scanned in ascending order.
    24. Rows in table EQIX.EQXQP_PRICELIST_DOWNLOAD were accessed using rowid obtained from an index.
    25. For each row retrieved by step 22, the operation in step 24 was performed to find a matching row.
    26. Rows were retrieved using the unique index QP.QP_PRC_CONTEXTS_TL_U1.
    27. Rows were retrieved using the unique index QP.QP_PRC_CONTEXTS_B_U1.
    28. Rows in table QP.QP_PRC_CONTEXTS_B were accessed using rowid obtained from an index.
    29. For each row retrieved by step 26, the operation in step 28 was performed to find a matching row.
    30. One or more rows were retrieved using index QP.QP_SEGMENTS_B_U3. The index was scanned in ascending order.
    31. Rows in table QP.QP_SEGMENTS_B were accessed using rowid obtained from an index.
    32. Rows were retrieved using the unique index QP.QP_SEGMENTS_TL_U1.
    33. For each row retrieved by step 31, the operation in step 32 was performed to find a matching row.
    34. Rows in table QP.QP_SEGMENTS_TL were accessed using rowid obtained from an index.
    35. For each row retrieved by step 33, the operation in step 34 was performed to find a matching row.
    36. One or more rows were retrieved using index APPLSYS.FND_LOOKUP_VALUES_U1. The index was scanned in ascending order.
    37. For each row retrieved by step 25, the operation in step 36 was performed to find a matching row.
    38. Rows were retrieved using the unique index QP.QP_LIST_LINES_PK.
    39. For each row retrieved by step 37, the operation in step 38 was performed to find a matching row.
    40. Rows in table QP.QP_LIST_LINES were accessed using rowid obtained from an index.
    41. For each row retrieved by step 39, the operation in step 40 was performed to find a matching row.
    42. HASH UNIQUE.
    43. A view definition was processed, either from the stored view VM_NWVW_2 or as defined by step 42.
    44. Rows were returned by the SELECT statement.

    Edited by: bobo on February 12, 2011 23:42

    Rewritten using subquery factoring. Everything seems to indicate that the query returns rows only when factor_1 returns IBX, so there is no need to evaluate factor_1 anywhere else - we can just use 'IBX' as a value. ;)
    But I could be wrong, and it might not even help if the slowness is due to the function, about which we know nothing.
    NOT TESTED!

    with
    factor_1 as
    (SELECT USER_SEGMENT_NAME
       FROM QP_SEGMENTS_V
      WHERE SEGMENT_MAPPING_COLUMN = QP.PRICING_ATTRIBUTE
        AND PRC_CONTEXT_ID IN (SELECT PRC_CONTEXT_ID
                                 FROM QP_PRC_CONTEXTS_V
                                WHERE PRC_CONTEXT_CODE = 'PRICING ATTRIBUTE'
                              )
    )
    SELECT EQXDL.LIST_HEADER_ID,
           QP.LIST_LINE_ID,
           EQXDL.LIST_NAME,
           EQXDL.DESCRIPTION,
           EQXDL.INVENTORY_ITEM_ID,
           EQXDL.ITEM,
           EQXDL.ITEM_DESC,
           QL.OPERAND,
           NULL LIST_PRICE,
           EQXDL.CURRENCY_CODE,
           EQXDL.PRIMARY_UOM_FLAG,
           QP.PRODUCT_UOM_CODE UOM_CODE,
           QP.PRICING_ATTRIBUTE IBX_PRICING_ATTRIBUTE,
           QP.PRICING_ATTR_VALUE_FROM IBX_PRICING_ATTRIBUTE_VALUE,
           'IBX' IBX_PRICING_ATT_DISP,
           NULL LOCATION_ID,
           'NEW' RECORD_STATUS,
           'IBXAttr' RECORD_COMMENTS,
           SYSDATE CREATION_DATE,
           SYSDATE LAST_UPDATE_DATE,
           :B3 CREATED_BY,
           :B3 LAST_UPDATED_BY,
           TEST_PRIC_ATT_DISP (PRICING_ATTR_VALUE_FROM)
           IBX_PRICING_ATT_DISP_VAL,
           :B1 REQUEST_ID
      FROM QP_LIST_LINES QL,
           QP_PRICING_ATTRIBUTES QP,
           EQIX.EQXQP_PRICELIST_DOWNLOAD EQXDL
     WHERE 1 = 1
       AND QL.LIST_HEADER_ID = QP.LIST_HEADER_ID
       AND QL.LIST_LINE_ID = QP.LIST_LINE_ID
       AND QL.LIST_HEADER_ID = EQXDL.LIST_HEADER_ID
       AND QL.LIST_LINE_ID = EQXDL.LIST_LINE_ID
       AND QP.LIST_HEADER_ID = EQXDL.LIST_HEADER_ID
       AND QP.LIST_LINE_ID = EQXDL.LIST_LINE_ID
       AND QP.PRICING_ATTRIBUTE_CONTEXT = 'PRICING ATTRIBUTE'
       AND (SELECT USER_SEGMENT_NAME FROM factor_1) = 'IBX'
       AND TEST_PRIC_ATT_DISP(QP.PRICING_ATTR_VALUE_FROM) IN (SELECT LOOKUP_CODE
                                                                FROM FND_LOOKUP_VALUES_VL
                                                               WHERE LOOKUP_TYPE IN :B4
                                                             )
       AND QP.LIST_HEADER_ID = :B2
       AND EQXDL.REQUEST_ID = :B1
    

    Regards

    Etbin

  • Query performance on the master tables vs. the materialized view

    Hello

    I'm puzzled by strange behavior in the db. On my master tables UDBMOVEMENT_ORIG (26 mil. rows) and UDBIDENTDATA_ORIG (18 mil. rows) I created the materialized view TMP_MS_UDB_MV (UDBMOVEMENT stands for this object below) that applies certain default conditions and the join condition on these master tables. The MV has about 12 million rows. I created the MV in order to query smaller objects: the MV is 3GB, the master tables together 12GB. But I don't understand why physical reads and consistent gets are lower on the MV, yet the final execution time is shorter on the master tables. See my log below.

    Why?

    Thanks for the replies.


    SQL> set echo on
    SQL> @flush
    SQL> alter system flush buffer_cache;

    System altered.

    Elapsed: 00:00:00.20
    SQL> alter system flush shared_pool;

    System altered.

    Elapsed: 00:00:00.65
    SQL> SELECT UDBMovement.zIdDevice, UDBMovement.sDevice, UDBMovement.zIdLocal, UDBMovement.sComputer, UDBMovement.tActionTime, UDBIdentData.sCardSubType, UDBIdentData.sCardType, UDBMovement.cEpan, UDBMovement.cText, UDBMovement.lArtRef, UDBMovement.sArtClassRef, UDBMovement.lSequenz, UDBMovement.sTransMark, UDBMovement.lBlock, UDBMovement.sTransType, UDBMovement.lGlobalID, UDBMovement.sFacility, UDBIdentData.sCardClass, UDBMovement.lSingleAmount, UDBMovement.sVAT, UDBMovement.lVATTot, UDBIdentData.tTarifTimeStart, UDBIdentData.tTarifTimeEnd, UDBIdentData.cLicensePlate, UDBIdentData.lMoneyValue, UDBIdentData.lPointValue, UDBIdentData.lTimeValue, UDBIdentData.tProdTime, UDBIdentData.tExpireDate
    FROM UDBMOVEMENT_orig UDBMovement, Udbidentdata_orig UDBIdentData
    WHERE UDBMovement.lGlobalId = UDBIdentData.lGlobalRef (+) AND UDBMovement.sComputer = UDBIdentData.sComputer (+)
    AND UDBMovement.sTransType > 0 AND UDBMovement.sDevice < 1000 AND UDBMovement.sDevice >= 0 AND UDBIdentData.sCardType IN (2) AND (bitand(UDBMovement.sSaleFlag,1) = 0 AND bitand(UDBMovement.sSaleFlag,4) = 0) AND UDBMovement.sArtClassRef < 100
    AND UDBMovement.tActionTime >= TO_DATE('05/05/2011 00:00:00','dd/mm/yyyy hh24:mi:ss') + 0.25 AND UDBMovement.tActionTime < TO_DATE('05/05/2011 00:00:00','dd/mm/yyyy hh24:mi:ss') + 0.5
    ORDER BY tActionTime, lBlock, lSequenz;

    4947 rows selected.

    Elapsed: 00:00:15.84

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1768406139

    ------------------------------------------------------------------------------------------------------------
    | Id | Operation                       | Name              | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT                |                   |  7166 | 1238K |       | 20670  (1) | 00:04:09 |
    |  1 |  SORT ORDER BY                  |                   |  7166 | 1238K | 1480K | 20670  (1) | 00:04:09 |
    |  2 |   NESTED LOOPS                  |                   |       |       |       |            |          |
    |  3 |    NESTED LOOPS                 |                   |  7166 | 1238K |       | 20388  (1) | 00:04:05 |
    |* 4 |     TABLE ACCESS BY INDEX ROWID | UDBMOVEMENT_ORIG  |  7142 |  809K |       |  7056  (1) | 00:01:25 |
    |* 5 |      INDEX RANGE SCAN           | IDX_UDBMOVARTICLE | 10709 |       |       |    61  (0) | 00:00:01 |
    |* 6 |     INDEX UNIQUE SCAN           | UDBIDENTDATA_PRIM |     1 |       |       |     1  (0) | 00:00:01 |
    |* 7 |    TABLE ACCESS BY INDEX ROWID  | UDBIDENTDATA_ORIG |     1 |    61 |       |     2  (0) | 00:00:01 |
    ------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    4 - filter("UDBMOVEMENT"."STRANSTYPE">0 AND "UDBMOVEMENT"."SDEVICE"<1000 AND
               BITAND("SSALEFLAG",1)=0 AND "UDBMOVEMENT"."SDEVICE">=0 AND BITAND("UDBMOVEMENT"."SSALEFLAG",4)=0)
    5 - access("UDBMOVEMENT"."TACTIONTIME">=TO_DATE(' 2011-05-05 06:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
               "UDBMOVEMENT"."TACTIONTIME"<TO_DATE(' 2011-05-05 12:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
               "UDBMOVEMENT"."SARTCLASSREF"<100)
        filter("UDBMOVEMENT"."SARTCLASSREF"<100)
    6 - access("UDBMOVEMENT"."LGLOBALID"="UDBIDENTDATA"."LGLOBALREF" AND
               "UDBMOVEMENT"."SCOMPUTER"="UDBIDENTDATA"."SCOMPUTER")
    7 - filter("UDBIDENTDATA"."SCARDTYPE"=2)


    Statistics
    ----------------------------------------------------------
    543 recursive calls
    0 db block gets
    84383 consistent gets
    4485 physical reads
    0 redo size
    533990 bytes sent via SQL*Net to client
    3953 bytes received via SQL*Net from client
    331 SQL*Net roundtrips to/from client
    86 sorts (memory)
    0 sorts (disk)
    4947 rows processed

    SQL> @flush
    SQL> alter system flush buffer_cache;

    System altered.

    Elapsed: 00:00:00.12
    SQL> alter system flush shared_pool;

    System altered.

    Elapsed: 00:00:00.74
    SQL> SELECT UDBMovement.zIdDevice, UDBMovement.sDevice, UDBMovement.zIdLocal, UDBMovement.sComputer, UDBMovement.tActionTime, UDBMovement.sCardSubType, UDBMovement.sCardType, UDBMovement.cEpan, UDBMovement.cText, UDBMovement.lArtRef, UDBMovement.sArtClassRef, UDBMovement.lSequenz, UDBMovement.sTransMark, UDBMovement.lBlock, UDBMovement.sTransType, UDBMovement.lGlobalID, UDBMovement.sFacility, UDBMovement.sCardClass, UDBMovement.lSingleAmount, UDBMovement.sVAT, UDBMovement.lVATTot, UDBMovement.tTarifTimeStart, UDBMovement.tTarifTimeEnd, UDBMovement.cLicensePlate, UDBMovement.lMoneyValue, UDBMovement.lPointValue, UDBMovement.lTimeValue, UDBMovement.tProdTime
    FROM UDBMOVEMENT
    WHERE UDBMovement.sTransType > 0 AND UDBMovement.sDevice < 1000 AND UDBMovement.sDevice >= 0 AND UDBMovement.sCardType IN (2) AND (bitand(UDBMovement.sSaleFlag,1) = 0 AND bitand(UDBMovement.sSaleFlag,4) = 0) AND UDBMovement.sArtClassRef < 100
    AND UDBMovement.tActionTime >= TO_DATE('05/05/2011 00:00:00','dd/mm/yyyy hh24:mi:ss') + 0.25
    AND UDBMovement.tActionTime < TO_DATE('05/05/2011 00:00:00','dd/mm/yyyy hh24:mi:ss') + 0.5 ORDER BY tActionTime, lBlock, lSequenz;

    4947 rows selected.

    Elapsed: 00:00:26.46

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 3648898312

    -----------------------------------------------------------------------------------------------------------
    | Id | Operation                        | Name                    | Rows | Bytes | Cost (%CPU)| Time     |
    -----------------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT                 |                         | 2720 |  443K |  2812  (1) | 00:00:34 |
    |  1 |  SORT ORDER BY                   |                         | 2720 |  443K |  2812  (1) | 00:00:34 |
    |* 2 |   MAT_VIEW ACCESS BY INDEX ROWID | TMP_MS_UDB_MV           | 2720 |  443K |  2811  (1) | 00:00:34 |
    |* 3 |    INDEX RANGE SCAN              | EEETMP_MS_ACTTIMEDEVICE | 2732 |       |    89  (0) | 00:00:02 |
    -----------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    2 - filter("UDBMOVEMENT"."STRANSTYPE">0 AND BITAND("UDBMOVEMENT"."SSALEFLAG",4)=0 AND
               BITAND("SSALEFLAG",1)=0 AND "UDBMOVEMENT"."SARTCLASSREF"<100)
    3 - access("UDBMOVEMENT"."TACTIONTIME">=TO_DATE(' 2011-05-05 06:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
               "UDBMOVEMENT"."SDEVICE">=0 AND "UDBMOVEMENT"."SCARDTYPE"=2 AND
               "UDBMOVEMENT"."TACTIONTIME"<TO_DATE(' 2011-05-05 12:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
               "UDBMOVEMENT"."SDEVICE"<1000)
        filter("UDBMOVEMENT"."SCARDTYPE"=2 AND "UDBMOVEMENT"."SDEVICE"<1000 AND
               "UDBMOVEMENT"."SDEVICE">=0)


    Statistics
    ----------------------------------------------------------
    449 recursive calls
    0 db block gets
    6090 consistent gets
    2837 physical reads
    0 redo size
    531987 bytes sent via SQL*Net to client
    3953 bytes received via SQL*Net from client
    331 SQL*Net roundtrips to/from client
    168 sorts (memory)
    0 sorts (disk)
    4947 rows processed

    SQL> spool off

    Edited by: MattSk on February 4, 2013 14:20

    > The second query does a full scan of the materialized view.

    What do you base that statement on?

    I do not see any full table scan in the plan of the second query. All I see is:

    |* 2 |   MAT_VIEW ACCESS BY INDEX ROWID | TMP_MS_UDB_MV | 2720 | 443K | 2811 (1) | 00:00:34 |

  • Oracle performance

    Hi friends,

    10.2.0.4.0 on Linux

    Background:
    We have the database running on an OS cluster.
    Previously we had a performance issue on server A where CPU spiked to 100% utilization. Using top we found that some queries (generally small queries that run in daily batches) were using 100% CPU, and the same queries appeared in AWR. For this reason the database was switched to server B, where the queries executed correctly without problems.
    Servers A and B are OS-clustered and use the same disk storage.

    After the server restart/failover, the daily batch runs successfully without any problems for a few days (about 6 to 10 days). Then the 11th batch run is very slow.

    We are trying to find the solution.
    At this point we do not suspect the issue is on the Oracle side; the problem may be on the OS/hardware side.

    All suggestions/help will be appreciated.

    Thank you
    KSG

    Hello

    KSG wrote:

    Thanks for your help Nikolay.

    If necessary, I'll attach AWR report #2, from when the batch executed properly after the failover to server B.

    Thank you

    KSG

    No, it is not necessary.

    The AWR report you posted was for a period of 9 hours, right? It shows DB CPU = 55,000 seconds. 9 hours = 32,400 seconds. So the database was using (on average) just 1.5 CPUs out of 16, i.e. average CPU usage by the database was only about 10% (and total CPU usage on the server, as I said earlier, was about 40%, i.e. the database is not even the main consumer of CPU on the box).

    So based on this, I'm leaning toward the view that the CPU problem you mentioned was not real. Most likely, you simply misread some of the CPU numbers in the reports (like DB CPU = 91%, which can be confusing). To avoid such confusion, I recommend that you go through my blog post on interpreting AWR CPU figures: AWR reports: interpreting CPU usage | Oracle Diagnostician

    If we leave that part out of your account of events, we have an intermittent performance problem with a batch job. As Jonathan said, SQL plan regression is the most likely explanation. If you have the Diagnostic and Tuning Pack licenses, then you can easily diagnose such problems using the ASH view (not the ASH report), especially if you instrument your batch job code properly:

    (1) in your batch job, call dbms_session.set_identifier()

    (2) once the job has completed, find the SQL which took longest:

    select sql_id, count(*)*10 elapsed_seconds   -- DBA_HIST ASH samples are ~10 seconds apart
    from dba_hist_active_sess_history ash
    where client_id = :client_id_you_set_above
    and sample_time between :job_start_time and :job_finish_time
    group by sql_id
    order by count(*) desc;

    (3) check the plan hash value history for this SQL:

    select begin_interval_time time, plan_hash_value
    from dba_hist_snapshot sn,
         dba_hist_sqlstat st
    where sn.snap_id = st.snap_id
    and sql_id = :sql_id_found_above
    order by begin_interval_time desc;
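
    For step (1), a minimal sketch (the identifier string is arbitrary; it just has to match :client_id_you_set_above in the step-2 query):

    begin
      -- tag this session so its ASH samples can be found later
      dbms_session.set_identifier('nightly_batch');
    end;
    /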

    Best regards

    Nikolay

  • Information about Rules Manager performance

    I am very interested in this product... and yet I find little information about its performance from the Oracle community.

    The absolute silence on this product from the usual Oracle experts is also a bit suspect.

    I'm sure there is some performance cost attached to the flexibility the tool provides... but how big is that cost?

    In addition, there is a "Rules Manager utility" announced here... but where do you acquire it?

    http://www.Oracle.com/technetwork/database/index-088908.html

    Hello

    I was also interested in this in-database rules support, but I was disappointed to read the news of its demise and the suggestion to buy another product:

    Obsolescence Notice: Rules Manager and Expression Filter features of Oracle Database [ID 1244535.1]
    "The Rules Manager and Expression Filter features of Oracle Database are being removed in the next major release of Oracle Database. Support will remain available for the life of Oracle Database 11g Release 2.
    (...)
    Customers should not start new projects with Rules Manager or Expression Filter. All current Rules Manager users should consider migrating to Oracle Business Rules, a component of Oracle Fusion Middleware."

  • Will low power mode affect the performance of the camera?

    I'm taking my iPhone 6 on a backpacking trip as my camera. I intend to put the phone in airplane mode, and I wonder if I should also set low power mode. If I put it in low power mode, will it reduce the performance of the camera?

    I suggest that you test this configuration yourself before you leave.

    You could also consider taking an external backup battery, like those manufactured by mophie.

  • The operation cannot be performed because the "Firefox" item is in use.

    Whenever I try to download the latest version of Firefox on my mac, I get the following error:

    "The operation cannot be performed because the"Firefox"element is in use."

    I close Firefox and still get this error. I have tried this about 100 times and cannot figure out how to solve this problem. Help!

    I have Mac OS X v10.6.8.

    Download the full Firefox installation program and save the file to the desktop
    https://www.Mozilla.org/en-us/Firefox/all.html

    If there are problems with updating or with permissions, then it is best to download the full version, trash the currently installed version, and do a fresh install of the new version.

    Download a new copy of the Firefox application and save the disk image file to the desktop.

    Your personal data is stored in the Firefox profile folder, so you will not lose your bookmarks and other personal data when you uninstall and (re)install Firefox.

  • HP 15-g018dx Notebook PC: can I upgrade my graphics card for high performance in games?

    Serial number: [personal information deleted]

    Model: 15-g018dx

    Product number: J5T41UA #ABA

    I have been trying to figure out whether I can upgrade my graphics card, and if so, which graphics card would be best for my laptop for high gaming performance.

    Hello

    No, it uses the AMD Quad-Core A6-6310 APU (2.4/1.8 GHz) CPU/processor with integrated graphics:

    http://support.HP.com/au-en/document/c04368180

    You would also need to change the motherboard, and that is a very expensive upgrade.

    Kind regards.

  • Increasing system performance (Windows XP, Vista, 7 and 8)

    Hi all

    Most people like to keep their computer working properly for as long as possible. I have provided a few interesting documents below that will help you keep your system as current as possible through regular maintenance. I can provide information for systems as old as Windows XP and as recent as Windows 8. What's even better is that these changes and updates can be performed without modifying or upgrading any hardware.

    Increase the performance of Windows XP

    Increase the performance of Windows Vista

    Increase the performance of Windows 7

    Increase the performance of Windows 8

    Thank you

    I hope this helps everyone.

  • Can I use an SDXC (512GB) card as a primary hard drive (i.e. as the startup HD with El Capitan) on my late-2015 iMac 5K? Will it slow down the performance of the iMac?


    I did it with a 32GB Transcend SDHC UHS-I card.

    I do not recommend running OS X from such media, but it works very well. I was preparing for my Yosemite certification exam and did not want to mess with the internal OS X drive.

  • Performance of the fans on Satellite L30-134

    Hello

    I have a question about the behavior of the CPU fan on my L30-134 (Celeron M 410 @ 1.46 GHz). Some users have posted similar questions for different L-series laptops, but I'd love to hear some comments/advice from L30-134 users.

    The problem is that by default the fan does not run at low speed all the time; instead it switches off and comes back on every 2-3 minutes, hitting high RPM and causing a lot of noise. Sometimes, however, it runs at low speed most of the time, staying really quiet and keeping the temperature at 48-55 degrees, whereas with the on/off cycling the temperature is 58-65.

    There are two cooling options in Toshiba Power Saver (which came preinstalled): one is maximum performance, the other is battery-optimized. The fan seems to work in the on/off cycling mode regardless of the option selected.

    Q1: Do these two options actually determine whether the fan works constantly at low speed versus switching on and off?
    Q2: How can I make the fan run all the time at low revs?
    Q3: Are there safe third-party programs to control the fan speed?

    PS: I don't have any fan speed options in the BIOS.

    I'd appreciate your comments. It is not fun to have a noisy fan :)

    Hello

    As you have already mentioned, the Toshiba Power Saver utility comes preinstalled on your device.
    My notebook also supports this utility, and the power saver controls the CPU speed and the cooling method.
    My unit's fan starts to spin when the temperature rises to a higher level, but it depends on how the laptop is used. I don't play many demanding games, but I use my machine for graphic design. Sometimes the fans run really fast and long, but sometimes the laptop is quiet.
    On my machine the CPU processing speed is set to level 4 and the cooling method is set to maximum performance.
    I think the cooling also depends on the environment.

    Moreover, I have also read a lot of reports about cooling and fan rotation. Sometimes a BIOS update has changed the fan behavior.

    A small comment on Q3: I don't think any 3rd-party application will control the fans properly. If the power saver is not installed on the device, the Microsoft power options govern this instead.

  • Re: Satellite L500-19Z - how to increase game performance?

    Hey

    Basically, I have a Toshiba Satellite L500-19z with the Intel 4 Series Express chipset. I know this isn't a particularly good gaming machine, but surely game performance should be reasonable?

    I get lag 80% of the time - while on my old laptop (which cost no more than £200) I could play World of Warcraft with no problem. (I played the Sims 3 on this laptop and had very few problems; I'm trying to play Dragon Age, but it is almost impossible.) My laptop is no more than 8 months old.

    I just want to know if there is a way to increase game performance. I thought about connecting my laptop to a PC tower, but I was told that could lead to even worse game performance.
    I'm fairly sure I can't replace the graphics card; I've already updated my drivers, defragmented the disks and so on.

    Any help is really appreciated, thanks in advance.

    Hello

    There is not much you can do to increase game performance. It depends on the graphics card, and Intel chips are not designed for games; they are a good choice for mobile use because they don't need a lot of power.
    In addition, the graphics card cannot be exchanged.

    Whether you can play a game always depends on the game itself. Therefore, check the system requirements of every game you want to play and whether your graphics card is supported. Not all games support all graphics cards.

    You should also use low graphics settings in each game; you can't expect to use the highest settings.

    For example, on my Satellite U400, which has the same Intel graphics chip, I can play Half-Life 2, a game that is already 5 years old, but only on low-to-medium graphics settings.

  • Will a blue-light screen protector affect the performance of the Apple Pencil?

    Will a blue-light screen protector affect the performance of the Apple Pencil on my iPad Pro?

    It could.

    With any active Bluetooth stylus, it's better NOT to have a screen shield on an iDevice.
