Huge JavaScript files in 11g?

Hello!

Is there any improvement in 11g regarding the huge ADF Rich Client JavaScript files? Dynamic loading, perhaps?

PaKo

Hi PaKo,

You can try an experimental/internal web.xml context parameter that enables library partitioning (on-demand loading of the JavaScript libraries):

  <context-param>
    <param-name>oracle.adfinternal.view.rich.libraryPartitioning.ENABLED</param-name>
    <param-value>true</param-value>
  </context-param>
Kind regards
Matt

Tags: Java

Similar Questions

  • Does passing a huge parameter to an Oracle procedure cause a performance hit?

    I have attached a script in which I parse XML held in an XMLTYPE column of one table (STAGE_TBL) and insert the parsed data into another table (PROCESSED_DATA_TBL). The XML can be huge, up to 2 MB, which translates into roughly 2000+ parsed rows. The issue I see: when I pass the XML object to a procedure (STAGE_TBL_PROCESS) for parsing, it takes about 10 seconds per XML; but if I instead pass only the ID and let the procedure read the XML directly from the table, it takes about 0.15 seconds. According to the documentation, parameters are passed by reference, so why this performance difference?

    Database details:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    "CORE 11.2.0.3.0 Production"
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production

    Note: I cannot use SQL_TRACE or DBMS_STATS, as I don't have access to them.

    /*
    This one takes 0.15 seconds to process an XML with about 2000 rp_sendRow elements.
    */

    DECLARE
      CURSOR NewStage IS
        SELECT *
        FROM   STAGE_TBL
        WHERE  status = 'N'
        ORDER BY PUT_TIME ASC;
      SUBTYPE rt_NewStage IS NewStage%ROWTYPE;

      ROW_COUNT          INTEGER := 0;  -- Return value from calling the procedure
      READ_COUNT         INTEGER := 0;  -- Number of rows read from the stage table
      INSERT_COUNT_TOTAL INTEGER := 0;  -- Number of inventory records inserted
      ERROR_COUNT        INTEGER := 0;  -- Number of inserts that failed
      PROCESS_STATUS     STATUS.MmsStatus;
      STATUS_DESCRIPTION STATUS.MmsStatusReason;
      ERRMSG             VARCHAR2(500);

      PROCEDURE STAGE_TBL_PROCESS (IDDATA IN RAW, PROCESS_STATUS OUT VARCHAR2, STATUS_DESCRIPTION OUT VARCHAR2, ROW_COUNT OUT NUMBER) AS
      /*
        Parses the XML from STAGE_TBL and populates PROCESSED_DATA_TBL.

        IN params
        ---------
        IDDATA - ID from STAGE_TBL

        OUT params
        ----------
        PROCESS_STATUS     - The status of parsing and populating PROCESSED_DATA_TBL
        STATUS_DESCRIPTION - The description of that status
        ROW_COUNT          - Number of rows inserted into PROCESSED_DATA_TBL
      */
      BEGIN
        INSERT ALL INTO PROCESSED_DATA_TBL
          (PD_ID, STORE, SALES_NBR, UNIT_COST, ST_FLAG, ST_DATE, ST,
           START_QTY, START_VALUE, START_ON_ORDER, HAND, ORDERED, COMMITED,
           SALES, RECEIVE, VALUED, ID_1, ID_2, ID_3, UNIT_PRICE,
           EFFECTIVE_DATE, STATUS, STATUS_DATE, STATUS_REASON)
        VALUES
          (IDDATA, store, SalesNo, UnitCost, StWac, StDt, St,
           StartQty, StartValue, StartOnOrder, Hand, Ordered, COMMITED,
           Sales, Rec, Valued, Id1, Id2, Id3, UnitPrice,
           to_date(EffectiveDate || ' ' || EffectiveTime, 'YYYY-MM-DD HH24:MI:SS'),
           'N', SYSDATE, 'XML PROCESS INSERT')
        WITH T AS
          (SELECT STG.XML_DOCUMENT FROM STAGE_TBL STG WHERE STG.ID = IDDATA)
        -- Parse and fetch the data from the XML
        SELECT E.*
        FROM   T,
               XMLTABLE('rp_send/rp_sendRow' PASSING T.XML_DOCUMENT COLUMNS
                   store         VARCHAR(20) PATH 'store'
                 , SalesNo       VARCHAR(20) PATH 'sales'
                 , UnitCost      NUMBER      PATH 'cost'
                 , StWac         VARCHAR(20) PATH 'flag'
                 , StDt          DATE        PATH 'st-dt'
                 , St            NUMBER      PATH 'st'
                 , StartQty      NUMBER      PATH 'qty'
                 , StartValue    NUMBER      PATH 'value'
                 , StartOnOrder  NUMBER      PATH 'start-on-order'
                 , Hand          NUMBER      PATH 'hand'
                 , Ordered       NUMBER      PATH 'order'
                 , Commited      NUMBER      PATH 'commit'
                 , Sales         NUMBER      PATH 'sales'
                 , Rec           NUMBER      PATH 'rec'
                 , Valued        NUMBER      PATH 'val'
                 , Id1           VARCHAR(30) PATH 'id-1'
                 , Id2           VARCHAR(30) PATH 'id-2'
                 , Id3           VARCHAR(30) PATH 'id-3'
                 , UnitPrice     NUMBER      PATH 'unit-pr'
                 , EffectiveDate VARCHAR(30) PATH 'eff-dt'
                 , EffectiveTime VARCHAR(30) PATH 'eff-tm'
               ) E;

        ROW_COUNT := SQL%ROWCOUNT;  -- Not the # of all the rows inserted.
        PROCESS_STATUS := STATUS.PROCESSED;

        IF ROW_COUNT < 1 THEN  -- The insert failed: row count = 0, no exception thrown
          PROCESS_STATUS := STATUS.ERROR;
          STATUS_DESCRIPTION := 'ERROR Did not insert into Pos Inventory. Reason Unknown';
        END IF;
      EXCEPTION
        WHEN OTHERS THEN
          ROW_COUNT := 0;
          PROCESS_STATUS := STATUS.ERROR;
          STATUS_DESCRIPTION := 'SqlCode:' || SQLCODE || ' SqlErrMsg:' || SQLERRM;
      END;

    BEGIN
      DBMS_OUTPUT.enable(NULL);

      FOR A_NewStage IN NewStage LOOP
        READ_COUNT := READ_COUNT + 1;
        STAGE_TBL_PROCESS(A_NewStage.ID, PROCESS_STATUS, STATUS_DESCRIPTION, ROW_COUNT);
        INSERT_COUNT_TOTAL := INSERT_COUNT_TOTAL + ROW_COUNT;

        IF (ROW_COUNT <= 0 OR PROCESS_STATUS = STATUS.ERROR) THEN
          ERROR_COUNT := ERROR_COUNT + 1;
          UPDATE STAGE_TBL
          SET    status = PROCESS_STATUS,
                 status_DATE = SYSDATE,
                 status_DESCRIPTION = STATUS_DESCRIPTION
          WHERE  ID = A_NewStage.ID;
        ELSE
          UPDATE STAGE_TBL
          SET    status = PROCESS_STATUS,
                 status_DATE = SYSDATE,
                 status_DESCRIPTION = STATUS_DESCRIPTION,
                 SHRED_DT = SYSDATE
          WHERE  ID = A_NewStage.ID;
        END IF;

        COMMIT;
      END LOOP;

      COMMIT;

      IF ERROR_COUNT > 0 THEN
        ERRMSG := '** ERROR: ' || ERROR_COUNT || ' Stage records did not insert into the Processed table correctly';
        RAISE_APPLICATION_ERROR(-20001, ErrMsg);
      END IF;
    EXCEPTION
      WHEN OTHERS THEN
        RAISE;
    END;

    /*
    This one takes 10 seconds to process an XML with about 2000 rp_sendRow elements.
    */

    DECLARE
      CURSOR NewStage IS
        SELECT *
        FROM   STAGE_TBL
        WHERE  status = 'N'
        ORDER BY PUT_TIME ASC;
      SUBTYPE rt_NewStage IS NewStage%ROWTYPE;

      ROW_COUNT          INTEGER := 0;  -- Return value from calling the procedure
      READ_COUNT         INTEGER := 0;  -- Number of rows read from the stage table
      INSERT_COUNT_TOTAL INTEGER := 0;  -- Number of inventory records inserted
      ERROR_COUNT        INTEGER := 0;  -- Number of inserts that failed
      PROCESS_STATUS     STATUS.MmsStatus;
      STATUS_DESCRIPTION STATUS.MmsStatusReason;
      ERRMSG             VARCHAR2(500);

      PROCEDURE STAGE_TBL_PROCESS (IDDATA IN RAW, xData IN STAGE_TBL.XML_DOCUMENT%TYPE, PROCESS_STATUS OUT VARCHAR2, STATUS_DESCRIPTION OUT VARCHAR2, ROW_COUNT OUT NUMBER) AS
      /*
        Parses the XML from STAGE_TBL and populates PROCESSED_DATA_TBL.

        IN params
        ---------
        IDDATA - ID from STAGE_TBL
        xData  - XMLType field from XML_DOCUMENT of STAGE_TBL

        OUT params
        ----------
        PROCESS_STATUS     - The status of parsing and populating PROCESSED_DATA_TBL
        STATUS_DESCRIPTION - The description of that status
        ROW_COUNT          - Number of rows inserted into PROCESSED_DATA_TBL
      */
      BEGIN
        INSERT ALL INTO PROCESSED_DATA_TBL
          (PD_ID, STORE, SALES_NBR, UNIT_COST, ST_FLAG, ST_DATE, ST,
           START_QTY, START_VALUE, START_ON_ORDER, HAND, ORDERED, COMMITED,
           SALES, RECEIVE, VALUED, ID_1, ID_2, ID_3, UNIT_PRICE,
           EFFECTIVE_DATE, STATUS, STATUS_DATE, STATUS_REASON)
        VALUES
          (IDDATA, store, SalesNo, UnitCost, StWac, StDt, St,
           StartQty, StartValue, StartOnOrder, Hand, Ordered, COMMITED,
           Sales, Rec, Valued, Id1, Id2, Id3, UnitPrice,
           to_date(EffectiveDate || ' ' || EffectiveTime, 'YYYY-MM-DD HH24:MI:SS'),
           'N', SYSDATE, 'XML PROCESS INSERT')
        -- Parse and fetch the data from the XML
        SELECT E.*
        FROM   XMLTABLE('rp_send/rp_sendRow' PASSING xData COLUMNS
                   store         VARCHAR(20) PATH 'store'
                 , SalesNo       VARCHAR(20) PATH 'sales'
                 , UnitCost      NUMBER      PATH 'cost'
                 , StWac         VARCHAR(20) PATH 'flag'
                 , StDt          DATE        PATH 'st-dt'
                 , St            NUMBER      PATH 'st'
                 , StartQty      NUMBER      PATH 'qty'
                 , StartValue    NUMBER      PATH 'value'
                 , StartOnOrder  NUMBER      PATH 'start-on-order'
                 , Hand          NUMBER      PATH 'hand'
                 , Ordered       NUMBER      PATH 'order'
                 , Commited      NUMBER      PATH 'commit'
                 , Sales         NUMBER      PATH 'sales'
                 , Rec           NUMBER      PATH 'rec'
                 , Valued        NUMBER      PATH 'val'
                 , Id1           VARCHAR(30) PATH 'id-1'
                 , Id2           VARCHAR(30) PATH 'id-2'
                 , Id3           VARCHAR(30) PATH 'id-3'
                 , UnitPrice     NUMBER      PATH 'unit-pr'
                 , EffectiveDate VARCHAR(30) PATH 'eff-dt'
                 , EffectiveTime VARCHAR(30) PATH 'eff-tm'
               ) E;

        ROW_COUNT := SQL%ROWCOUNT;  -- Not the # of all the rows inserted.
        PROCESS_STATUS := STATUS.PROCESSED;

        IF ROW_COUNT < 1 THEN  -- The insert failed: row count = 0, no exception thrown
          PROCESS_STATUS := STATUS.ERROR;
          STATUS_DESCRIPTION := 'ERROR Did not insert into Pos Inventory. Reason Unknown';
        END IF;
      EXCEPTION
        WHEN OTHERS THEN
          ROW_COUNT := 0;
          PROCESS_STATUS := STATUS.ERROR;
          STATUS_DESCRIPTION := 'SqlCode:' || SQLCODE || ' SqlErrMsg:' || SQLERRM;
      END;

    BEGIN
      DBMS_OUTPUT.enable(NULL);

      FOR A_NewStage IN NewStage LOOP
        READ_COUNT := READ_COUNT + 1;
        STAGE_TBL_PROCESS(A_NewStage.ID, A_NewStage.XML_DOCUMENT, PROCESS_STATUS, STATUS_DESCRIPTION, ROW_COUNT);
        INSERT_COUNT_TOTAL := INSERT_COUNT_TOTAL + ROW_COUNT;

        IF (ROW_COUNT <= 0 OR PROCESS_STATUS = STATUS.ERROR) THEN
          ERROR_COUNT := ERROR_COUNT + 1;
          UPDATE STAGE_TBL
          SET    status = PROCESS_STATUS,
                 status_DATE = SYSDATE,
                 status_DESCRIPTION = STATUS_DESCRIPTION
          WHERE  ID = A_NewStage.ID;
        ELSE
          UPDATE STAGE_TBL
          SET    status = PROCESS_STATUS,
                 status_DATE = SYSDATE,
                 status_DESCRIPTION = STATUS_DESCRIPTION,
                 SHRED_DT = SYSDATE
          WHERE  ID = A_NewStage.ID;
        END IF;

        COMMIT;
      END LOOP;

      COMMIT;

      IF ERROR_COUNT > 0 THEN
        ERRMSG := '** ERROR: ' || ERROR_COUNT || ' Stage records did not insert into the Processed table correctly';
        RAISE_APPLICATION_ERROR(-20001, ErrMsg);
      END IF;
    EXCEPTION
      WHEN OTHERS THEN
        RAISE;
    END;

    My XML, with just one rp_sendRow element (it can go up to 2000 rp_sendRow elements):

    <?xml version="1.0" encoding="UTF-8"?>
    <rp_send xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <rp_sendRow>
        <store>0123</store>
        <sales>022399190</sales>
        <cost>0.01</cost>
        <flag>true</flag>
        <st-dt>2013-04-19</st-dt>
        <st>146.51</st>
        <qty>13.0</qty>
        <value>0.0</value>
        <start-on-order>0.0</start-on-order>
        <hand>0.0</hand>
        <order>0.0</order>
        <commit>0.0</commit>
        <sales>0.0</sales>
        <rec>0.0</rec>
        <val>0.0</val>
        <id-1/>
        <id-2/>
        <id-3/>
        <unit-pr>13.0</unit-pr>
        <eff-dt>2015-06-16</eff-dt>
        <eff-tm>09:12:21</eff-tm>
      </rp_sendRow>
    </rp_send>


    In version 11.1, Oracle introduced a new storage model for the XMLType data type, called binary XML.

    Binary XML became the default in 11.2.0.2, deprecating the old CLOB-based storage.

    Binary XML is a post-parse optimized format for both storage and XQuery processing.

    When an XQuery expression is evaluated (via XMLTABLE, for example) over an XMLType column stored as binary XML, Oracle can use a streaming XPath evaluation that outperforms the same query run over a transient XMLType by several orders of magnitude.

    You can see that in the execution plan:

    SQL> SELECT E.*
      2  FROM stage_tbl t
      3     , XMLTABLE('rp_send/rp_sendRow' PASSING t.xml_document
      4         COLUMNS store VARCHAR(20) PATH 'store'
      5               , SalesNo VARCHAR(20) PATH 'sales'
      6               , UnitCost NUMBER PATH 'cost'
      7         ) E ;
    
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1134903869
    
    --------------------------------------------------------------------------------
    | Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT   |           |     1 |  2008 |    32   (0)| 00:00:01 |
    |   1 |  NESTED LOOPS      |           |     1 |  2008 |    32   (0)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL| STAGE_TBL |     1 |  2002 |     3   (0)| 00:00:01 |
    |   3 |   XPATH EVALUATION |           |       |       |            |          |
    --------------------------------------------------------------------------------
    

    When the query is executed over a transient XMLType (for example, a parameter or a PL/SQL variable), Oracle cannot use the binary model and falls back to a functional evaluation over an in-memory, DOM-like representation of the XML.

    You can spot that in the execution plan as a 'COLLECTION ITERATOR PICKLER FETCH' operation.

    That is what explains the difference (in your version) between processing an XMLType column (stored in binary XML format) and processing a variable or parameter.

    From 11.2.0.4 onward, things have changed a bit: Oracle introduced a new optimization level for transient XMLTypes. The execution plan shows an 'XMLTABLE EVALUATION' operation in that case.
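    As a quick sanity check, you can confirm which storage model your XMLType column actually uses by querying the data dictionary (a sketch; it assumes STAGE_TBL is in your own schema):

    ```sql
    -- Report the storage model of the XMLType column(s) of STAGE_TBL.
    -- STORAGE_TYPE = 'BINARY' means the streaming XPath evaluation is available;
    -- 'CLOB' or 'OBJECT-RELATIONAL' indicate the older storage models.
    SELECT column_name, storage_type
    FROM   user_xml_tab_cols
    WHERE  table_name = 'STAGE_TBL';
    ```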

  • Load huge data in oracle table

    Hello

    I am using Oracle 11g Express Edition. I have a .csv file of about 500 MB that must be loaded into an Oracle table.

    Please suggest the best method to load the data into the table. The data is historical employee ticket data, i.e. huge.

    Experts, please suggest how to bulk-load data of this size into an Oracle table.

    Thank you
    Sudhir

    The best way is SQL*Loader.
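    A minimal sketch of a SQL*Loader setup; the file, table, and column names below are hypothetical and must be adapted to the actual CSV layout:

    ```sql
    -- load_tickets.ctl : SQL*Loader control file (hypothetical names/layout).
    -- Run from the shell as:
    --   sqlldr userid=your_user/your_pass control=load_tickets.ctl log=load_tickets.log
    LOAD DATA
    INFILE 'employee_tickets.csv'
    APPEND
    INTO TABLE employee_tickets
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (
      ticket_id,
      emp_id,
      ticket_date  DATE 'YYYY-MM-DD',
      description
    )
    ```

    For a 500 MB file, adding DIRECT=TRUE on the sqlldr command line (a direct-path load) is usually much faster than a conventional load. An external table over the CSV plus a single INSERT ... SELECT is an alternative that needs no separate tool.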

  • Question about 10g to 11g Migration

    Hi all

    I'm working on a 10g to 11g upgrade. I have a huge RPD divided into 10 projects, so I want to upgrade the first 5 projects and their related catalogs first, and then the remaining five projects.

    I want to know: when I convert the latter 5, is it possible to merge them with the ones I've already upgraded? If yes, how? In short, is it possible to do the migration on a per-project basis? And what might the roadblocks be?

    If so, please let me know how or guide me to a good documentation/blog.

    Thank you
    Ronny

    Hi Ronny,

    Split the existing RPD in two, with 5 projects each, and split the web catalog likewise with the respective dashboard folders. Upgrade the first half to 11g; later you can upgrade the second set of files and merge it with the first upgraded set.

    Kind regards
    DpKa

  • It takes 10 minutes to connect to OBIEE 11g

    Hi all

    I'm on a Windows 2008 64-bit machine. I just upgraded the web catalog and the RPD to 11g. When I log in to OBIEE Answers as weblogic, it takes 10 minutes for the initial screen to appear.

    In my view it shouldn't take that long; what could the reason be? I even tried logging in as one of the users defined in the RPD, but it still takes too much time.

    One thing: I'm on IE 9. Could that be a problem?

    Thank you
    Ronny

    Yes, Ronny. Whenever you start the BI server, it tries to load the initialization blocks into server memory. If you have a number of init blocks with incorrect connection pool credentials, all of those blocks will fail, and the failures are recorded in the server log file. This has a huge performance impact. Try correcting the connection pool parameters and see if performance improves.

    Kind regards
    DpKa

  • Local prefixed vs. non-prefixed index: a huge difference

    Hello world

    I have a little problem with partitioning that I don't understand. Here is my scenario:

    I created a table range-partitioned on a date column, then created two local indexes: one prefixed and one non-prefixed. When I run a query filtering on the partition column and on another column that I've indexed, the execution plans are really different. For example:

      CREATE TABLE PART_HAREKET_TABLE (
        ISLEM_TAR DATE, -- MY PARTITION COLUMN
        ISLEM_REF VARCHAR2(10), -- INDEX COLUMN
        ... -- OTHER COLUMNS HERE
      );
      
    -- load data to the table from one of my prod table...
    
      CREATE INDEX I_PART_HAREKET_1 ON PART_HAREKET_TABLE(ISLEM_TAR, ISLEM_REF) LOCAL;
      
      CREATE INDEX I_PART_HAREKET_2 ON PART_HAREKET_TABLE(ISLEM_REF) LOCAL;
      
      EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'PART_HAREKET_TABLE', ESTIMATE_PERCENT => 100, CASCADE => TRUE);
    After this, I run these queries:
      EXPLAIN PLAN FOR
      select /*+ INDEX(PART_HAREKET_TABLE, I_PART_HAREKET_1 ) */ * 
      from   part_hareket_table 
      where islem_tar = to_Date('22/01/2012','dd/mm/yyyy') and islem_ref like 'KN%';
    
    execution plan:
    
    -------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                          | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    -------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                   |                    |  1243 |   195K|    19   (0)| 00:00:01 |       |       |
    |   1 |  PARTITION RANGE SINGLE            |                    |  1243 |   195K|    19   (0)| 00:00:01 |    62 |    62 |
    |   2 |   TABLE ACCESS BY LOCAL INDEX ROWID| PART_HAREKET_TABLE |  1243 |   195K|    19   (0)| 00:00:01 |    62 |    62 |
    |*  3 |    INDEX RANGE SCAN                | I_PART_HAREKET_1   |  1243 |       |     5   (0)| 00:00:01 |    62 |    62 |
    -------------------------------------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
       3 - access("ISLEM_TAR"=TO_DATE(' 2012-01-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "ISLEM_REF" LIKE 'KN%')
           filter("ISLEM_REF" LIKE 'KN%')
    That is a good cost, I think; also, in the predicate info I see both ISLEM_TAR and ISLEM_REF as access predicates.

    When I use this:
      EXPLAIN PLAN FOR
      select /*+ INDEX(PART_HAREKET_TABLE, I_PART_HAREKET_2 ) */ * 
      from   part_hareket_table 
      where islem_tar = to_Date('22/01/2012','dd/mm/yyyy') and islem_ref like 'KN%';
    
    -------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                          | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    -------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                   |                    |  1243 |   195K|  8209   (1)| 00:01:55 |       |       |
    |   1 |  PARTITION RANGE SINGLE            |                    |  1243 |   195K|  8209   (1)| 00:01:55 |    62 |    62 |
    |*  2 |   TABLE ACCESS BY LOCAL INDEX ROWID| PART_HAREKET_TABLE |  1243 |   195K|  8209   (1)| 00:01:55 |    62 |    62 |
    |*  3 |    INDEX RANGE SCAN                | I_PART_HAREKET_2   |   141K|       |   218   (1)| 00:00:04 |    62 |    62 |
    -------------------------------------------------------------------------------------------------------------------------
     
    Predicate Information (identified by operation id):
    ---------------------------------------------------
     
       2 - filter("ISLEM_TAR"=TO_DATE(' 2012-01-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       3 - access("ISLEM_REF" LIKE 'KN%')
           filter("ISLEM_REF" LIKE 'KN%')
    As you can see, there is a huge cost difference here, and ISLEM_TAR (the partition column) is used as a filter rather than an access predicate, even though both indexes are LOCAL.

    I expected the second (non-prefixed) index to be more effective: Oracle already knows which partition must be read, and this index is smaller (just one column), so I thought Oracle would find the appropriate index partition and read just that segment (for both the index and the table), and thus fewer rows.

    Even the elapsed times differ: the first (prefixed) returns the data in 0.031 ms, the second (non-prefixed) in 0.375 ms.

    But it doesn't work that way? What am I missing?

    You may say the LIKE operator on the ISLEM_REF column is the cause, but I also tried it with equality ("="): the first query then costs 4, the second 8. A factor of 2 again...

    The partition size is approximately 440 MB.

    A similar example exists here: Local index: prefixed or non-prefixed.

    Jonathan Lewis made an example and it works very well...

    My db:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    "CORE     11.2.0.2.0     Production"
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    "find the appropriate index partition"

    That is index partition 62 in both cases.

    In both cases, there is a one-to-one correspondence between the table partition and the index partition.

    However, in the second case, Oracle expects to have to walk through 141 thousand index entries, against only 1243 index entries in the first case, so it expects to read many more index blocks.

    I guess your partitions are per month, not per day. So an index on the columns ISLEM_TAR + ISLEM_REF is "finer" (i.e. more precise), because Oracle reads only the index entries for that one day.
    In the second case, Oracle expects the index to return rowids for many days (meaning more rowids returned from the index) and filters out the rows that do not match the target date when it reads the table. You'll notice that in the second case the filter on the date is applied against the table: more table rows are read, because more rowids are returned by the index.

    Hemant K Collette

    Published by: Hemant K Collette on 10 July 2012 15:17
    -- Added clarification that "finer" means "more precise".

  • Size of the table space increases too quickly (DEV_MDS of soa suite 11g)

    I use Oracle 11g SOASuite.

    We are facing a database (tablespace) problem: after each composite deployment, DEV_MDS grows by 40 MB. We deploy our processes very often during development, and when we undeploy a process the space should ideally be freed, but it is not.

    Please help me out of this problem; I would also appreciate an explanation of why this schema grows so quickly.

    Thank you

    Whenever you undeploy a composite from a partition in the EM console, the composite is also deleted from the MDS store. But since MDS must keep all versions of your composites, the store is bound to grow with each deployment.

    As you mentioned that your composites are huge due to embedded-Java activity, that is something your design should avoid, because there is no reusability for a Java component embedded in several composites or in several versions of the same composite.

    If you want to use Java in your BPEL processes, I recommend a reusable composite with a Spring context that you can reference from different composites or from different versions of the same composite.

    That also gives you an advantage in terms of modularising your application, because whenever you need to change your Java or your BPEL, they can be changed and deployed independently.

  • How do I convert pictures from iPhoto? I need general info and advice. My library is huge (95 GB). I am running OS X El Capitan 10.11.6 on an iMac.

    How do I convert pictures from iPhoto? I need general info and advice. I saw the help info, but I am very nervous about taking on this project, as my library is huge (95 GB) and organized in several events and albums. My library is backed up (Time Machine, SimpleSave HD). I am running OS X El Capitan 10.11.6 on an iMac.

    Have you seen this document?   Updated Photos for OS X - Apple iPhoto support

    https://support.Apple.com/en-GB/HT204655

    Is your iPhoto library on your system drive, in the Pictures folder? And do you have plenty of free space on your system drive? The migration needs additional temporary storage.

    If so, drag the iPhoto Library icon onto the open Photos app to create a new Photos library from it.

    Your albums will appear unchanged in Photos.  Events will appear as additional albums, because Photos has no events.  See this link: How Photos handles content and metadata from iPhoto and Aperture - Apple Support

  • Can a huge quantity of photos slow an iPhone?

    Hello

    A simple question. My wife and I both have an iPhone 6 (128 GB for me, 64 GB for her).

    My phone has become slower and slower over time, and the speed difference is visible when comparing the two; her phone is as fast as on day one.

    The difference: I have a "huge" amount of photos on my phone (60 GB, 13k photos/videos).

    Could this be the source of the slowness?  I still have more than 20 GB of free space.

    Or... what else could the reason be?

    Thank you

    JC

    Storing photos should not affect speed, but having several apps open, or apps that are more intensive than others, can.

    A better test would be to restart both phones from a cold start and compare the responsiveness.

  • Huge CE0682 on the back?

    Hello! I just got my new iPhone SE and noticed that on the back it has huge lettering: "CE0682". Before, I had an iPhone 5s, and all this writing was smaller. I was wondering whether it is normal that it is now so big, and why? Thank you

    Try this discussion > Number CE0682 on the back of an iPhone 6?

    There is nothing to worry about.

  • After downloading from TUEBL, why does Firefox save a huge 3 GB folder called "epub" with 50,000+ files in my profile?

    I found that the program Mozbackup was taking forever to back up Firefox, and the resulting compressed .pcv backup file was HUGE - gigabytes. Then I checked my Firefox profile and found it had stored this huge "epub" folder. I often download books from TUEBL but see absolutely no use in storing them in the Firefox profile. The problem is the same on my little Toshiba netbook, except there the epub folder was larger than 4 GB. On both computers I sent the epub folder to the trash, and Firefox and Mozbackup now work correctly. If things stay OK, I'll permanently delete these epub folders.

    I think it must be the EPUBReader extension. The manual says:

    When you open an ePub file with EPUBReader, the ePub file is downloaded and stored in your Firefox profile folder. By clicking the Save button, you have the possibility to store the ePub file in a location of your choice.

    http://www.epubread.com/en/manual.php#navigation

    Maybe it has a hidden setting to move its default folder to, say, Documents? Or maybe that feature could be added? There is a contact link at the bottom of that page to submit questions that are not answered on the site.

  • Huge paragraph spacing in emails received in Thunderbird

    Hello

    Emails suddenly began appearing with huge paragraph spacing (the equivalent of several lines in height). It shows both in the screen display and in the quoted text in replies. It happened to all emails, from all senders and of all dates, which had displayed with normal paragraph spacing before something happened. Can someone help with the setting to fix this?

    (A dictation program, Dragon NaturallySpeaking, is the suspect. Keystrokes dictated inadvertently create absolute chaos in Thunderbird. Any help stopping this from happening would be appreciated too!)

    I entirely agree with you about the troublesome Dragon; I only hope that attitude will not spread to other software. Enabling or disabling the dictation program did not affect anything - I didn't expect it to.

    Fortunately, I had backed up Thunderbird with Mozbackup. I restored as per the attachment, and it worked; evidently some settings in TB had been changed.

    For the record, I asked which setting had been changed and how to change it. The same question applies to the indentation of quoted text in emails: I've got Quote and Compose Manager, which removes the lines and >s, but that doesn't seem to address this.

  • iTunes Store layout huge (can't scroll or click)

    When I open the iTunes Store, the default layout is huge.  Album art images are very large, and the right-hand navigation is not even visible.  I don't see any scroll bars, and when I use my mouse wheel to scroll, I jump straight from the top of the page to the bottom; I see nothing in between.  I zoomed out so that I could see small icons and small fonts, which also shows the right-hand navigation, but then my cursor cannot click items: the mouse location and the mouse actions do not line up.

    I tried the suggested fix of going to Edit > Preferences > Advanced > Reset Cache.  That did not work for me.  I use iTunes on a Windows 10 PC, upgraded to the latest version of iTunes.

    Hey Designasaurs,

    If you're having problems with how your iTunes Store is displayed, try force-quitting the app and relaunching it. A quick reboot of your PC may also be useful, but it isn't necessary if quitting and relaunching solves the problem. Follow the steps below to force-quit iTunes:

    1. Right-click on the taskbar; a menu should appear

    2. Click on Task Manager

    3. Look for the iTunes process and highlight it

    4. Then click End Task

    Once the application closes completely, restart it to see if the problem is resolved. Thank you for using Apple Support Communities for assistance.

    Have a great day!

  • What happened to saving a bookmark into an existing folder? Now I have to save and then move the bookmark - a huge time waster

    I'm sorry if I'm simply not seeing something obvious, but saving and organizing bookmarks appears to have taken a huge step backward. It seems I can only save a bookmark into the Unsorted Bookmarks area and then need to open the bookmarks Library to move the bookmark to a folder. In the past, you had options to save a new bookmark into an existing folder or to create a new folder and save it there.
    Again, I'm sorry if there's a method to do this that's right in front of my face.

    • Clicking the star in the navigation toolbar bookmarks the page in the "Unsorted Bookmarks" folder, and the star turns blue to show this
    • Bookmarks > "Bookmark This Page" (Ctrl+D) bookmarks the page in the Bookmarks Menu folder (you need to confirm this)
    • "Bookmark This Page" is also accessible via the context menu of the page, via the Bookmarks menu in the menu bar (Alt+B), or via the Firefox menu button drop-down
    • If the URL in the navigation bar is already bookmarked and the star is highlighted blue, click that highlighted star or use "Bookmark This Page/Edit This Bookmark" (Ctrl+D) to edit the bookmark's properties, such as the name and the location, to move the bookmark to another folder, or to delete it
  • I get emails from Sparklebox in the United Kingdom. Recently they suddenly say they are from Hugs and Cookies, a recipe site in the USA. How can I change this?

    Sparklebox is an educational-resources site in the United Kingdom. I get emails from them regularly.
    A couple of weeks ago, the emails suddenly started indicating that they come from Hugs and Cookies XOXO, a recipe site in the United States that I also subscribe to.
    Can you please change it back so they show as coming from Sparklebox, not Hugs and Cookies XOXO?

    Sorry if you think the complaints about not being able to see your question are dumb, but it is very difficult, and after complaining for over a year my frustration is very high.

    Regarding removing the Hugs and Cookies entry from your address book: it is nothing of the sort. Thunderbird prefers the display name from your address book when filling in the sender list.

    Of the last 100 people who came here to complain that a person's mail appeared to be from someone else, the address book entry for that person was the cause in probably 99 of them.
