SQL performance

Can someone help me with this issue?

I want to reduce the cost of the query below

DELETE FROM Table1
WHERE REITTITUNNUS NOT IN
      (SELECT REITTITUNNUS FROM Table2);

Help, please.

First, issue: ALTER SESSION ENABLE PARALLEL DML;

Second, add the PARALLEL hint to your delete statement:

DELETE /*+ PARALLEL(Table1) */ FROM Table1

WHERE REITTITUNNUS NOT IN

(SELECT REITTITUNNUS FROM Table2);
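If the delete is still too expensive, it can also be worth checking how the NOT IN itself is executed. A minimal sketch of an equivalent anti-join form (same tables and column as the question; this assumes REITTITUNNUS is NOT NULL on both sides, since NOT IN and NOT EXISTS behave differently when NULLs are present):

```sql
-- Anti-join form: lets the optimizer hash-join Table2 once
-- instead of probing it for every candidate row of Table1.
DELETE FROM Table1 t1
 WHERE NOT EXISTS (SELECT 1
                     FROM Table2 t2
                    WHERE t2.REITTITUNNUS = t1.REITTITUNNUS);
```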

For more details, see:

Oracle Database Search Results: parallel DML

Understanding parallel execution - part 1

Oracle related stuff: Parallel DML - Conventional (non-direct-path) Inserts As Select

Tags: Database

Similar Questions

  • Want to know about optimizing SQL performance

    Hi gurus

    I want to learn about SQL performance tuning; can you please guide me on how to achieve this? Thanks in advance

    Shuumail wrote:

    Thanks for the reply. Can you recommend any specific book from the link you mentioned?

    The subject itself is not my strong suit, but here are some observations:

    I recently bought Kevin Meade's book. It is self-published and, like a lot of self-published work, not as polished as most, but the content seems to be very good.  It lays out a methodical approach backed by solid knowledge of the subject.

    Over the years, I've become less enamored of anything by Burleson.  I was a little surprised to see that one of the books was co-written by him and Tim Hall.  I rely on Mr. Hall's website for a bunch of 'how to set up' material.  On the other hand, if a Google search turns up Burleson's site, I usually just pass.

    Anything published by Oracle Press will probably be as solid as anything else, although not totally without error (no book is).  Rich Niemiec is unquestionably one of the top experts on the inner workings of Oracle, so I expect his book to be very authoritative.

    I've never been disappointed by a book published by O'Reilly.  They are easy reads and so would probably make a good "start here" book.

  • Need a tool to investigate SQL performance

    Can anyone share a tool for studying SQL performance?

    Hello

    I don't know if this is the right place, but you can opt for Quest Toad, which is one of the most common SQL tuning tools.  However, if you are interested in SQL performance specific to your DB, then you can consider the SQL Tuning Pack that comes with Oracle DB Enterprise Edition (be careful if you use Standard Edition).

    In addition, Oracle has a very nice set of scripts you can download... but you must have an Oracle Support account:

    SQL Tuning Health-Check Script (SQLHC) (Doc ID 1366133.1)

  • SQL performance monitor

    Hello..

    I'm new to the SQL performance monitor and I need help...

    When I try to create a SQL tuning set, it asks for a schema in which to create the tuning set.

    1. What should I use for this?
    2. Will a particular schema contain all the SQL statements required by the database?
    3. I have multiple schemas in the database... what do I do to get all the SQL statements for the performance of the database?


    Thank you

    djgeo.

    Hello
    You can select a database user (not an application user) for the creation of SQL tuning sets.

    Salman
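    For what it's worth, a tuning set can also be created and loaded from SQL*Plus without Enterprise Manager; a sketch using the documented DBMS_SQLTUNE API (the set name MY_STS and the schema filter MYAPP are placeholders, not from the thread):

    ```sql
    BEGIN
      -- Create an (initially empty) SQL tuning set
      DBMS_SQLTUNE.CREATE_SQLSET(
        sqlset_name => 'MY_STS',
        description => 'Statements captured from the cursor cache');
    END;
    /

    DECLARE
      cur DBMS_SQLTUNE.SQLSET_CURSOR;
    BEGIN
      -- Load statements parsed by one application schema
      OPEN cur FOR
        SELECT VALUE(p)
          FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
                       basic_filter => 'parsing_schema_name = ''MYAPP''')) p;
      DBMS_SQLTUNE.LOAD_SQLSET(
        sqlset_name     => 'MY_STS',
        populate_cursor => cur);
    END;
    /
    ```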

  • SQL performance problems upgrading 9i to 10g

    I've heard several times about SQL performance problems when a DB is upgraded. What is the fix for this? Do the problem SQLs appear in the ADDM report, to be run through the SQL Tuning/Access Advisor for its suggestions?

    Pl. see MOS Doc 466181.1 (10g Upgrade Companion) - this topic is covered in that document

    HTH
    Srini

  • 6210XS SQL Performance Benchmarking

    Our company has recently acquired some new arrays for a new ERP system. I am the senior analyst programmer on the project and I'm at a beginner-intermediate level on SAN storage, virtualization, and SQL performance tuning. I need to get up to speed on what to expect from this new equipment and on best practices for testing and managing it. Our current ERP is on HP-UX and Informix, a stack that is alien technology relative to where we are headed.

    We have a network services division, which has been responsible for managing the non-ERP side of the house with ESX and the EqualLogic 6500. This team is more knowledgeable in the general management of this equipment, but has less time to devote to this new ERP project, so I'm pitching in to help everyone gain confidence and to educate myself along the way. Phew. Now to the meat.

    Setup: dedicated 10 GbE iSCSI network with jumbo frames enabled. No MPIO configured. Dedicated storage pools for the 6210XS, 6210 (10K SAS) and 6510 (7200 RPM). All on 10 GbE.

    I'm using the MS SQLIO tool to test the IOPS of the 6210XS. I used one of the default example test runs from the "Using SQLIO" doc.

    In brief: a 6-minute test, sequential I/O, 2 outstanding requests, 256 KB I/O size, and a 15 GB test file. The results were:

    H:\SQLIO>sqlio -kR -s360 -fsequential -o2 -b256 -LS -Fparam.txt
    SQLIO v1.5.SG
    using system counter for latency timings, 2343750 counts per second
    parameter file used: param.txt
    file h:\testfile.dat with 16 threads (0-15) using mask 0x0 (0)
    16 threads reading for 360 secs from file h:\testfile.dat
    using 256KB sequential IOs
    enabling multiple I/Os per thread with 2 outstanding
    using specified size: 15000 MB for file: h:\testfile.dat
    initialization done
    AGGREGATED DATA:
    throughput metrics:
    IOs/sec:   133.93
    MBs/sec:    33.48
    latency metrics:
    Min_Latency(ms): 61
    Avg_Latency(ms): 238
    Max_Latency(ms): 1269

    I ran a second test using different settings and got very different results:

    H:\SQLIO>sqlio -kW -s10 -frandom -o8 -b8 -LS -Fparam.txt
    SQLIO v1.5.SG
    using system counter for latency timings, 2343750 counts per second
    parameter file used: param.txt
    file h:\testfile.dat with 8 threads (0-7) using mask 0x0 (0)
    8 threads writing for 10 secs to file h:\testfile.dat
    using 8KB random IOs
    enabling multiple I/Os per thread with 8 outstanding
    using specified size: 102400 MB for file: h:\testfile.dat
    initialization done
    AGGREGATED DATA:
    throughput metrics:
    IOs/sec: 24122.61
    MBs/sec:   188.45
    latency metrics:
    Min_Latency(ms): 0
    Avg_Latency(ms): 2
    Max_Latency(ms): 25

    Novice question - the first result is obviously not good, but I need to figure out whether my test is configured incorrectly or why the array struggled to perform under those test conditions. Thank you for taking the time to read and respond.

    Usually performance problems are caused by not having the SAN (server, switches, array) set up per best practices, and in some cases by obsolete firmware, drivers and/or equipment.

    With ESX generally 99% performance problems are solved with:

    Delayed ACK disabled

    Large Receive Offload disabled

    Ensure you are using VMware Round Robin (with IOs per path changed to 3), or use the EQL MEM (most recent version is 1.2) multipathing

    If you use multiple VMDKs (or RDMs) in the virtual machine, each should have its own virtual SCSI adapter

    Upgrade to the latest ESX build, and apply switch and server updates

    Take a look at the links listed here first.  See also the Array Firmware Release Notes.

    Best practices for ESX

    en.Community.Dell.com/.../20434601.aspx

    EqualLogic Configuration Guide

    en.Community.Dell.com/.../2639.EqualLogic-Configuration-Guide.aspx

    Rapid EqualLogic Configuration Portal (a great place to start)

    en.Community.Dell.com/.../3615.Rapid-EqualLogic-configuration-Portal-by-SIS.aspx

    Best practices white papers, look for SQL and ESX

    en.Community.Dell.com/.../2632.Storage-Infrastructure-and-solutions-Team-publications.aspx

    Compatibility matrix

    en.Community.Dell.com/.../20438558

    -Joe

  • SQL performance problem associated with rownum.

    Dear Experts,

    I have a SQL query:

    SELECT TEMP.col1, TEMP.col2, TEMP.col3, TEMP.col4, TEMP.col5, ROWNUM rn

    FROM (SELECT col1, col2, col3, col4, col5 FROM table1 ORDER BY col4 DESC) TEMP WHERE ROWNUM BETWEEN ? AND ?

    When I put the rownum range values 1 and 100, it works very well. But when I put 101 and 200, no records are returned.

    So I modified it as follows

    SELECT TEMP.col1, TEMP.col2, TEMP.col3, TEMP.col4, TEMP.col5, NWR

    FROM (SELECT col1, col2, col3, col4, col5, ROWNUM NWR FROM table1 ORDER BY col4 DESC) TEMP WHERE NWR BETWEEN ? AND ?

    It works fine and gives the desired results. But the issue is that the modified SQL has become very slow: it returns results in 20 minutes, whereas the earlier SQL returned results in a few seconds.

    Table1 has 40 million records.

    Is there another way to get accurate results with good performance?

    Your help will be much appreciated.

    Kind regards

    DD

    Hi, try this... If you want the data in a specific order (e.g. ORDER BY col4 DESC), then you can use the analytic ROW_NUMBER() function as below. Try the queries below and let me know in case of any problems.

    SELECT TEMP.col1,
           TEMP.col2,
           TEMP.col3,
           TEMP.col4,
           TEMP.col5,
           NWR
    FROM  (SELECT col1,
                  col2,
                  col3,
                  col4,
                  col5,
                  ROW_NUMBER() OVER (ORDER BY col4 DESC) NWR
           FROM table1) TEMP
    WHERE NWR BETWEEN 101 AND 200;

    (OR)

    SELECT TEMP.col1,
           TEMP.col2,
           TEMP.col3,
           TEMP.col4,
           TEMP.col5,
           NWR
    FROM  (SELECT col1,
                  col2,
                  col3,
                  col4,
                  col5,
                  ROW_NUMBER() OVER (ORDER BY col4 DESC) NWR
           FROM table1) TEMP
    WHERE NWR <=
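    Another point worth noting: the original rewrite is slow because ROWNUM is assigned to the inner rows before the ORDER BY, so Oracle must number and sort all 40 million rows. The classic ROWNUM pagination pattern keeps the top-N (COUNT STOPKEY) optimization; a sketch using the column names from the question:

    ```sql
    SELECT col1, col2, col3, col4, col5
      FROM (SELECT t.*, ROWNUM rn
              FROM (SELECT col1, col2, col3, col4, col5
                      FROM table1
                     ORDER BY col4 DESC) t
             WHERE ROWNUM <= 200)   -- upper bound first: enables COUNT STOPKEY
     WHERE rn >= 101;               -- lower bound applied afterwards
    ```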

  • Help with PL/SQL Performance Tuning

    Hi all

    I have a PL/SQL procedure, and it works very well - no errors and no bugs. However it still hangs at the end. I use the concatenation operator (||), and I know that it's expensive. How can I improve the performance of the procedure?

    Here is the code

    create or replace 
    PROCEDURE POST_ADDRESS_CLEANSE AS
    
    
    CURSOR C1 IS
    SELECT Z.ROW_ID,
            Z.NAME
    FROM  STGDATA.ACCOUNT_SOURCE Z;
    
    
    CURSOR  C2 IS 
    SELECT  DISTINCT CLEANSED_NAME || CLEANSED_STREET_ADDRESS || 
            CLEANSED_STREET_ADDRESS_2 || CLEANSED_CITY || CLEANSED_STATE || 
            CLEANSED_POSTAL_CODE AS FULLRECORD
    FROM    STGDATA.ACCOUNT_SOURCE_CLEANSED;
    
    
    V_ROWID Number := 1;
    V_FLAG VARCHAR2(30);
    TEMP_ROW_ID VARCHAR2(10) := NULL;
    
    
    BEGIN
    
    
      -- This loop will update CLEANSED_NAME column in ACCOUNT_SOURCE_CLEANSED table.
      FOR X IN C1 LOOP
        
        TEMP_ROW_ID := TO_CHAR(X.ROW_ID);
        
      UPDATE STGDATA.ACCOUNT_SOURCE_CLEANSED A
      SET  A.CLEANSED_NAME = X.NAME
      WHERE A.ROW_ID = TEMP_ROW_ID;
        
        COMMIT;
    
      END LOOP;
      
      -- This loop will update columns EM_PRIMARY_FLAG, EM_GROUP_ID in ACCOUNT_SOURCE_CLEANSED table
      FOR Y IN C2 LOOP
    
    
        UPDATE  STGDATA.ACCOUNT_SOURCE_CLEANSED
        SET     EM_GROUP_ID = V_ROWID
        WHERE   CLEANSED_NAME || CLEANSED_STREET_ADDRESS || CLEANSED_STREET_ADDRESS_2 || 
                CLEANSED_CITY || CLEANSED_STATE || CLEANSED_POSTAL_CODE = Y.FULLRECORD;
    
    
        UPDATE  STGDATA.ACCOUNT_SOURCE_CLEANSED
        SET     EM_PRIMARY_FLAG = 'Y'
        WHERE   CLEANSED_NAME || CLEANSED_STREET_ADDRESS || CLEANSED_STREET_ADDRESS_2 || 
                CLEANSED_CITY || CLEANSED_STATE || CLEANSED_POSTAL_CODE = Y.FULLRECORD 
        AND     ROWNUM = 1;
        
        V_ROWID := V_ROWID + 1;
    
    
        COMMIT;
        
      END LOOP;
      
      UPDATE  STGDATA.ACCOUNT_SOURCE_CLEANSED
      SET     EM_PRIMARY_FLAG = 'N'
      WHERE   EM_PRIMARY_FLAG IS NULL;
      
      COMMIT;
    
    
      --dbms_output.put_line('V_ROW:'||V_ROWID);
      --dbms_output.put_line('CLEANSED_NAME:'||Y.FULLRECORD);  
      
    END POST_ADDRESS_CLEANSE;
    

    Thanks in advance.

    Post edited by: Rooney - added code using the syntax highlight

    Thanks for everyone's input.

    I was able to solve the problem. Keeping the old code with the ||, I created an index on the following concatenated attributes:

    CLEANSED_NAME || CLEANSED_STREET_ADDRESS || CLEANSED_STREET_ADDRESS_2 || CLEANSED_CITY || CLEANSED_STATE || CLEANSED_POSTAL_CODE

    I never knew that you could create an index on a set of concatenated attributes. Doing this, I was able to update all of my rows and improve performance: all records ran in 80 seconds.

    Thanks again for the help.
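    As an aside, the first cursor loop in the posted procedure can also be collapsed into a single set-based statement, which avoids the row-by-row updates and per-row commits entirely; a sketch against the same tables (this is an alternative, not the poster's chosen solution):

    ```sql
    -- One statement replaces the C1 loop: update every CLEANSED_NAME
    -- from ACCOUNT_SOURCE in a single pass, then commit once.
    MERGE INTO stgdata.account_source_cleansed a
    USING stgdata.account_source z
       ON (a.row_id = TO_CHAR(z.row_id))
     WHEN MATCHED THEN
       UPDATE SET a.cleansed_name = z.name;

    COMMIT;
    ```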

  • Examination of dynamic cursor and SQL performance

    Hello everyone

    I've been researching internet forums and Oracle docs for what is best in my case.


    I've been rebuilding indexes. I have two methods; both work very well, but I'm looking for which is preferable from a performance point of view.

    1 - using a cursor, as in the link below.

    http://www.think-forward.com/SQL/rebuildunusable.htm


    2 - using dynamic SQL to generate a script file, then running it.

    spool rebuildall.sql
    select 'alter index ' || owner || '.' || index_name || ' rebuild online;'
    from dba_indexes where status = 'UNUSABLE';
    spool off;

    @rebuildall.sql


    Thanks in advance. Your help is appreciated.

    In both cases you are using dynamic SQL statements; I think there is no difference in terms of performance, so use the method you feel most comfortable with.

    In either case, you can track the timestamps (set time on & set timing on) and log the output.

    Best regards
    Alfonso Vicente
    www.logos.com.uy
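    For completeness, the spool step can be skipped entirely by issuing the DDL from PL/SQL; a sketch equivalent to the script in option 2:

    ```sql
    BEGIN
      FOR r IN (SELECT owner, index_name
                  FROM dba_indexes
                 WHERE status = 'UNUSABLE') LOOP
        -- DDL must go through dynamic SQL inside PL/SQL
        EXECUTE IMMEDIATE 'alter index ' || r.owner || '.' || r.index_name ||
                          ' rebuild online';
      END LOOP;
    END;
    /
    ```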

  • SQL - pl/sql performance question

    Hi all
    I've been reviewing teaching examples while preparing for certification and stumbled on this one:
    "... If you design your application so that all programs that issue an INSERT on a specific table use the same INSERT statement", that is
    ...
    INSERT INTO order_items
    (order_id, product_id, line_item_id,
    quantity, unit_price)
    VALUES (...)

    your application will run faster if you wrap your DML statement in a PL/SQL procedure instead of using the stand-alone SQL statement, because of less parsing demand and reduced System Global Area (SGA) memory.
    I don't understand why this is so. Am I wrong that any SQL statement will be cached in the SGA whatever context it is executed from (SQL or PL/SQL)? And also, why does this result in minimal parsing?

    Thank you.

    Timin wrote:
    + "If you design your application so that all programs that issue an insert on a specific table use the same INSERT statement, your application will run faster because of less parsing demand and reduced System Global Area (SGA) memory." +
    + "Your programs also make fewer data manipulation language (DML) errors." +

    I think what is meant is the following.

    You have an application that makes several insert calls into the database. Take the emp table.

    Insert into emp (empno, ename, deptno) values (emp_seq.nextval, 'Test', 1);
    
    Insert into emp (empno, deptno, ename) values (emp_seq.nextval, 1, 'new emp');
    
    Insert /*+ append */ into emp (empno, ename, deptno) values (emp_seq.nextval, 'Frank', 2);
    

    These inserts could come from different applications, which is why the spelling may differ.
    None of the inserts use bind variables, but even if they did, they would still use different cursors, because the insert commands are spelled differently and have different parameter lists.

    It is better (more efficient) to call an API instead, which performs the insert.

    execute emp_pkg.Insert_single_emp( 'Test', 1);
    execute emp_pkg.Insert_single_emp( p_ename => 'new emp', p_deptno => 1);
    execute emp_pkg.Insert_single_emp( p_deptno => 2, p_ename => 'Frank');
    

    The API is PL/SQL, wrapping a single SQL insert statement.

    procedure Insert_single_emp(p_ename in emp.ename%type, p_deptno in emp.deptno%type)
    ...
    begin
    ...
       Insert into emp (empno, ename, deptno)
       values (emp_seq.nextval, p_ename, p_deptno);
    
    end;
    

    All three insert statements will use bind parameters and avoid re-parsing of the SQL insert (parse time).

    I assume the cursor is shared, so this means less SGA usage as well.

    BTW: it is a typical Steven Feuerstein requirement/suggestion.

    Published by: Sven w. January 7, 2011 18:25
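    One way to see this effect for yourself (a sketch; the LIKE pattern assumes the emp insert above, since PL/SQL normalizes its static SQL): after running the three wrapped calls, the shared pool should hold a single cursor for the insert, parsed once and executed three times:

    ```sql
    -- All three procedure calls should share one cursor here
    SELECT sql_text, parse_calls, executions
      FROM v$sql
     WHERE sql_text LIKE 'INSERT INTO EMP%';
    ```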

  • SQL Performance question

    Hello

    The following query performs badly when the predicate

    AND (v_actionFlag IS NULL or ACTION_CODE = v_actionFlag)

    is present. In all executions of the query, v_actionFlag will be NULL. In addition, because of the plan chosen when the predicate is included, the returned results are incorrect. We want to process the rows with the lowest priority. With the predicate included, the query performs the join, gets 20 rows, sorts them, and returns them, rather than getting the 20 lowest-priority rows through the QUEUE_TAB0 index and returning those.

    The questions I have are-

    -Why does the predicate affect the query in this way?
    -What is the difference between HASH JOIN ANTI and HASH JOIN RIGHT ANTI?


    We were able to remove this predicate, as the functionality it supports has not yet been implemented.



    Background

    Version of DB - 10.2.0.4
    optimizer_features_enable - 10.2.0.4
    optimizer_mode - ALL_ROWS
    Table
    
    - table has approximately 475,000 rows and the statistics are up to date
    
    
    sql> desc queue_tab
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     ENTITY_KEY                                NOT NULL NUMBER(12)
     ENTITY_TYPE                               NOT NULL CHAR(1)
     ACTION_CODE                               NOT NULL CHAR(1)
     REPORT_NO                                 NOT NULL NUMBER(12)
     PRIORITY                                  NOT NULL NUMBER(4)
    
    
    
    Indexes
    
    Primary Key (QUEUE_TAB_PK)
    
     ENTITY_KEY                                 
     ENTITY_TYPE                                
     ACTION_CODE                                
     REPORT_NO 
    
    
    Non Unique Index (QUEUE_TAB0)
    
     PRIORITY  
     ENTITY_KEY   
     ENTITY_TYPE  
     ACTION_CODE 
    
    
    
    Cursor
    
    
            SELECT /*+ INDEX_ASC (main QUEUE_TAB0) */
                   REPORT_NO
                 , ENTITY_TYPE
                 , ENTITY_KEY
                 , ACTION_CODE
                 , PRIORITY
              FROM QUEUE_TABV01 main
             WHERE PRIORITY > 1
               AND (v_actionFlag IS NULL OR ACTION_CODE = v_actionFlag )
               AND NOT EXISTS
                   ( SELECT /*+ INDEX_ASC (other QUEUE_TAB_pk) */ 1
                       FROM QUEUE_TABV01 other
                      WHERE main.ENTITY_TYPE = other.ENTITY_TYPE
                        AND main.ENTITY_KEY = other.ENTITY_KEY
                         AND main.ACTION_CODE IN ( constant1, constant2 )
                        AND other.ACTION_CODE IN ( constant3, constant4 ) )
               AND NOT EXISTS
                   ( SELECT 1 FROM QUEUE_TABV01 multi
                      WHERE main.ENTITY_TYPE = multi.ENTITY_TYPE
                        AND main.ENTITY_KEY = multi.ENTITY_KEY
                        AND multi.PRIORITY = 1 )
               AND ROWNUM < rowCount + 1
             ORDER BY PRIORITY, ENTITY_KEY, ENTITY_TYPE,
                      ACTION_CODE;
    
    
                                     
    Plan when predicate "AND (v_actionFlag IS NULL OR ACTION_CODE = v_actionFlag )" is present
    
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch       21      5.53       5.40          2     780463          0          20
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total       23      5.53       5.40          2     780463          0          20
    
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 60     (recursive depth: 1)
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
         20  SORT ORDER BY (cr=780463 pr=2 pw=0 time=5400939 us)
         20   COUNT STOPKEY (cr=780463 pr=2 pw=0 time=5400872 us)
         20    HASH JOIN ANTI (cr=780463 pr=2 pw=0 time=5400823 us)
     459033     TABLE ACCESS BY INDEX ROWID QUEUE_TAB (cr=780460 pr=2 pw=0 time=4640394 us)
     459033      INDEX RANGE SCAN QUEUE_TAB0 (cr=608323 pr=1 pw=0 time=3263977 us)(object id 68038)
      10529       FILTER  (cr=599795 pr=1 pw=0 time=2573230 us)
      10529        INDEX RANGE SCAN QUEUE_TAB_PK (cr=599795 pr=1 pw=0 time=2187209 us)(object id 68037)
          0     INDEX RANGE SCAN QUEUE_TAB0 (cr=3 pr=0 pw=0 time=34 us)(object id 68038)
    
    
    
    
    Plan when predicate "AND (v_actionFlag IS NULL OR ACTION_CODE = v_actionFlag )" is removed
    
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.02       0.00          0          0          0           0
    Fetch       21      0.05       0.05          0       6035          0          20
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total       23      0.07       0.06          0       6035          0          20
    
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 60     (recursive depth: 1)
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
         20  SORT ORDER BY (cr=6035 pr=0 pw=0 time=54043 us)
         20   COUNT STOPKEY (cr=6035 pr=0 pw=0 time=962 us)
         20    HASH JOIN RIGHT ANTI (cr=6035 pr=0 pw=0 time=920 us)
          0     INDEX RANGE SCAN QUEUE_TAB0 (cr=3 pr=0 pw=0 time=53 us)(object id 68038)
         20     TABLE ACCESS BY INDEX ROWID QUEUE_TAB (cr=6032 pr=0 pw=0 time=701 us)
         20      INDEX RANGE SCAN QUEUE_TAB0 (cr=6001 pr=0 pw=0 time=533 us)(object id 68038)
         40       FILTER  (cr=199 pr=0 pw=0 time=2048 us)
         40        INDEX RANGE SCAN QUEUE_TAB_PK (cr=199 pr=0 pw=0 time=1975 us)(object id 68037)

    user599445 wrote:
    Hello Justin and Camille,

    Thank you for taking the time to look at it. I changed the query to use ROWNUM correctly. I ran and traced the query with the IS NULL predicate and without; each trace is below. As you both suggested, the predicate appears to have no impact on the plan. All feedback is appreciated.

    Mark,

    the obvious problem with the new plan is that no record is filtered by the first NOT EXISTS clause (using the anti-join operation), and then for each row an index seek is performed that filters the records down to only about 14,000. It is this index lookup that takes most of the time, since it performs about 2 logical I/Os per lookup, nearly 1 million in total.

    The remaining 456,000 rows are then sorted (top-N) and the top 20 are returned.

    A possible problem could be that the optimizer does not switch to first_rows_N optimization mode due to the bind variable used in the ROWNUM filter.

    You can try executing the statement using a literal (ROWNUM < 21) instead of the bind variable to see if it changes the plan.

    I think in this case it could be much more efficient to walk the QUEUE_TAB0 index in the requested order and evaluate the two NOT EXISTS clauses as recursive filter operations, provided your ROWNUM predicate is generally rather low.

    Be aware however that if you do not use a "binary" NLS_SORT parameter, the index cannot be used for a NOSORT ORDER BY STOPKEY operation on CHAR values, in which case the optimizer will not use the index for sorting - so please check your NLS settings (NLS_SESSION_PARAMETERS.PARAMETER = 'NLS_SORT'). Note that the NLS parameters are client-specific and can theoretically be different for each session/client.

    You can test this by using a simple top-N query: SELECT * FROM (SELECT ACTION_CODE, ENTITY_TYPE, ENTITY_KEY FROM QUEUE_TAB ORDER BY PRIORITY) WHERE ROWNUM <

    If it does not use the QUEUE_TAB0 index to stop the sort operation, you might have a problem with the NLS parameters.
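    The NLS settings mentioned above can be checked with a quick query against the session parameter view:

    ```sql
    SELECT parameter, value
      FROM nls_session_parameters
     WHERE parameter IN ('NLS_SORT', 'NLS_COMP');
    -- NLS_SORT = BINARY is needed for the index-based NOSORT STOPKEY plan
    ```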

    In order to prevent the UNNEST transformation, you can also try adding a NO_UNNEST hint to the two subqueries ("SELECT /*+ NO_UNNEST */ ..." in the respective subquery), and you can also try switching to FIRST_ROWS(n) mode by using, for example, the FIRST_ROWS(20) hint in the body of the query (which should match your ROWNUM predicate).

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://Oracle-Randolf.blogspot.com/

    SQLTools ++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676 /.
    http://sourceforge.NET/projects/SQLT-pp/

  • Using APEX_ITEM in SQL - performance issues?

    Hello

    I was wondering if using APEX_ITEM.* in the SQL source for a report would cause performance issues? I expect the report to return a little more than 3000 records. When I developed it, it worked fine, but back then we had only 150 cases or so; now that we have migrated the application to our test system, the page takes about 2 minutes to load. Here is my SQL for the report:

    select distinct
           initcap(mpa.pa_name) || ' (' || sd.DESIGNATION_CODE || ')' site,
           frc.REPORT_DESCRIPTION report_category,
           mf.FEATURE_DESC feature,
           decode(cmf.SELECTED_FOR_QA, 'Y', 'X', 'N', ' ') QA,
           apex_item.select_list_from_query(
             21,
             cpf.ASSIGN_TO,
             'select ss.firstname || '' '' || ss.SURNAME d, ss.STAFF_NUMBER r
                from snh_staff ss,
                     snh_management_units smu,
                     m_pa_snh_area psa
               where ss.MU_UNIT_ID = smu.UNIT_ID
                 and smu.UNIT_ID = psa.UNIT_ID
                 and ss.CURRENTLY_EMPLOYED = ''Y''
                 and psa.SCM_LEAD = ''Y''
                 and psa.MAIN_AREA = ''P''
                 and psa.PA_CODE = ' || mpa.pa_code,
             NULL,
             'YES',
             NULL,
             ' ') assign_to,
           apex_item.select_list_from_query(
             22,
             decode(to_char(cpf.planned_fieldwork, 'DD/MM/'),
                    '30/06/', 'Q1 ' || to_char(planned_fieldwork, 'YYYY'),
                    '30/09/', 'Q2 ' || to_char(planned_fieldwork, 'YYYY'),
                    '31/12/', 'Q3 ' || to_char(planned_fieldwork, 'YYYY'),
                    '31/03/', 'Q4 ' || to_char(planned_fieldwork - 365, 'YYYY'),
                    to_char(cpf.planned_fieldwork, 'YYYY')),
             'select d, r from CM_CYCLE_Q_YEARS') planned_fieldwork,
           apex_item.select_list_from_query(
             23,
             decode(to_char(cpf.planned_cmf, 'DD/MM/'),
                    '30/06/', 'Q1 ' || to_char(planned_cmf, 'YYYY'),
                    '30/09/', 'Q2 ' || to_char(planned_cmf, 'YYYY'),
                    '31/12/', 'Q3 ' || to_char(planned_cmf, 'YYYY'),
                    '31/03/', 'Q4 ' || to_char(planned_cmf - 365, 'YYYY'),
                    to_char(cpf.planned_cmf, 'YYYY')),
             'select d, r from CM_CYCLE_Q_YEARS') planned_cmf,
           apex_item.select_list_from_query(
             24,
             cpf.monitoring_method_id,
             'select METHOD, MONITORING_METHOD_ID from cm_monitoring_methods where active_flag = ''Y''') monitoring_method,
           apex_item.text(
             25,
             cpf.pre_cycle_comments,
             15,
             255,
             'title="' || cpf.pre_cycle_comments || '"',
             'annualPlanningComments' || to_char(cpf.PLAN_MON_FEATURE_ID)) comments,
           apex_item.text(
             26,
             to_char(cpf.CONTRACT_LET, 'MON-DD-YYYY'),
             11,
             11) contract_let,
           apex_item.text(
             27,
             to_char(cpf.CONTRACT_REPORT_PLANNED, 'MON-DD-YYYY'),
             11,
             11) contract_report,
           apex_item.text(
             28,
             cpf.ADVISOR_DATA_ENTRY,
             11,
             11) advisor_entry,
           cms.complete_percentage || ' ' || cms.description status,
           apex_item.text(
             29,
             to_char(cpf.RESULT_SENT_TO_OO, 'MON-DD-YYYY'),
             11,
             11) result_to_oo,
           cpf.PLAN_MON_FEATURE_ID,
           cmf.MONITORED_FEATURE_ID,
           mpa.PA_CODE,
           mpf.SITE_FEATURE_ID
      from fm_report_category frc,
           m_feature mf,
           m_pa_features mpf,
           m_protected_area mpa,
           snh_designations sd,
           cm_monitored_features cmf,
           cm_plan_mon_features cpf,
           cm_monitoring_status cms,
           cm_cycles cc,
           m_pa_snh_area msa,
           snh_management_units smu,
           snh_sub_areas ssa
     where frc.REPORT_CATEGORY_ID = mf.REPORT_CATEGORY_ID
       and mf.FEATURE_CODE = mpf.FEATURE_CODE
       and mpa.PA_CODE = mpf.PA_CODE
       and mpa.DESIGNATION_ID = sd.DESIGNATION_ID
       and mpf.SITE_FEATURE_ID = cmf.SITE_FEATURE_ID
       and cmf.MONITORED_FEATURE_ID = cpf.MONITORED_FEATURE_ID
       and cms.MONITORING_STATUS_ID = cmf.MONITORING_STATUS_ID
       and cc.CYCLE# = cmf.CYCLE#
       and msa.PA_CODE = mpa.PA_CODE
       and msa.UNIT_ID = smu.UNIT_ID
       and msa.SUB_AREA_ID = ssa.SUB_AREA_ID
       and cc.CURRENT_CYCLE = 'Y'
       and msa.MAIN_AREA = 'P'
       and msa.SCM_LEAD = 'Y'
       and mpf.INTEREST_CODE in (1, 2, 3, 9)
       and ((nvl(:P6_REPORTING_CATEGORY, 'ALL') = 'ALL'
             and to_char(frc.FCA_FEATURE_CATEGORY_ID) =
                 case nvl(:P6_BROAD_CATEGORY, 'ALL') when 'ALL' then to_char(frc.FCA_FEATURE_CATEGORY_ID) else :P6_BROAD_CATEGORY end)
            or (nvl(:P6_REPORTING_CATEGORY, 'ALL') != 'ALL'
             and to_char(mf.REPORT_CATEGORY_ID) =
                 case nvl(:P6_REPORTING_CATEGORY, 'ALL') when 'ALL' then to_char(mf.REPORT_CATEGORY_ID) else :P6_REPORTING_CATEGORY end))
       and ((nvl(:P6_SNH_SUB_AREA, 'ALL') = 'ALL'
             and to_char(msa.UNIT_ID) =
                 case nvl(:P6_SNH_AREA, 'ALL') when 'ALL' then to_char(msa.UNIT_ID) else :P6_SNH_AREA end)
            or (nvl(:P6_SNH_SUB_AREA, 'ALL') != 'ALL'
             and to_char(msa.SUB_AREA_ID) =
                 case nvl(:P6_SNH_SUB_AREA, 'ALL') when 'ALL' then to_char(msa.SUB_AREA_ID) else :P6_SNH_SUB_AREA end))
       and ((nvl(:P6_SITE, 'ALL') != 'ALL'
             and mpa.PA_CODE = :P6_SITE)
            or nvl(:P6_SITE, 'ALL') = 'ALL')

    As you can see I have 9 calls to the APEX_ITEM API, and when I take them out the report performs as I expect.

    Has anyone else ran into this problem?

    We're currently on APEX 3.0.1.00.08, using an Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production database.

    Thanks in advance,
    Paul.

    Try removing all but one of the apex_item.select_list_from_query calls, and then rewrite that one to use subquery factoring. Then compare timings for the same query with and without subquery factoring. In addition, is 500 rows a realistic number for someone to edit at once?
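    The subquery-factoring rewrite suggested above might look roughly like this (a sketch only; the identifiers are taken from the posted query, and the exact LOV columns would need checking against the real schema):

    ```sql
    -- Factor the staff LOV join out once, instead of re-running it
    -- inside select_list_from_query for every report row.
    WITH staff_lov AS (
      SELECT ss.firstname || ' ' || ss.surname AS d,
             ss.staff_number                   AS r,
             psa.pa_code
        FROM snh_staff ss,
             snh_management_units smu,
             m_pa_snh_area psa
       WHERE ss.mu_unit_id = smu.unit_id
         AND smu.unit_id   = psa.unit_id
         AND ss.currently_employed = 'Y'
         AND psa.scm_lead  = 'Y'
         AND psa.main_area = 'P'
    )
    SELECT d, r
      FROM staff_lov
     WHERE pa_code = :pa_code;
    ```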

  • Query tuning for SQL performance

    Hello

    I have a table A with more than 120 million rows; table B is about 2 million rows and table C less than 1 million.

    I created table A partitioned with parallel degree 4, table B with parallel degree 2, and table C with NOPARALLEL.

    My query joins the above tables and inserts into table D using the hint /*+ APPEND NOLOGGING */.
    I ran EXPLAIN PLAN on the above query; the cost shows as 20767.

    Later I re-created tables A, B and C with NOPARALLEL, applied the /*+ APPEND NOLOGGING */ hint on table D,
    and also applied the hint /*+ PARALLEL(A, 4) PARALLEL(B, 2) */ on the select of the query that inserts into table D.


    My question: which is advisable - setting the PARALLEL degree at the table level or at the query level?


    (a) creating the table with PARALLEL (degree 4)

    (b) applying the hint /*+ PARALLEL(A, 4) */ at the query level

    Kind regards
    Prakash

    957901 wrote:
    Hello

    I have table A with more than 120 million rows; table B is around 2 million rows and table C is under 1 million rows.

    I created table A partitioned with parallel degree 4, table B with parallel degree 2, and table C with NOPARALLEL.

    My query joins the above tables and inserts into table D using the hint /*+ APPEND NOLOGGING */.
    I ran EXPLAIN PLAN on the above query; the cost shows 20767.

    The cost is meaningless out of context.

    Later I recreated tables A, B and C with NOPARALLEL, kept the /*+ APPEND NOLOGGING */ hint on the insert into table D,
    and added the hint /*+ PARALLEL(A, 4) PARALLEL(B, 2) */ to the SELECT used by the insert into table D.

    My question: which is advisable, setting the PARALLEL degree at the table level or at the query level?

    (a) create the table with PARALLEL (DEGREE 4)

    (b) apply the hint /*+ PARALLEL(A, 4) */ at the query level

    Whichever works best for what you are trying to achieve.

    Which is another way of saying that you have not provided enough information for anyone here to make an informed decision or offer you a suggestion.

    {message:id=9360002}

    For performance/query tuning issues, read the two threads linked in this FAQ: {message:id=9360003}
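    For reference, the two options from the question can be sketched as follows (a minimal sketch; the column names are invented, and NOLOGGING is omitted because it is a table attribute rather than a hint):

    ```sql
    -- (a) Table-level degree: every statement touching A can now be
    --     considered for parallel degree 4 by the optimizer.
    ALTER TABLE a PARALLEL (DEGREE 4);

    -- (b) Statement-level hints: only this statement is parallelised.
    --     Parallel DML must be enabled per session, otherwise only the
    --     SELECT part runs in parallel and the INSERT stays serial.
    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND */ INTO d
    SELECT /*+ PARALLEL(a, 4) PARALLEL(b, 2) */ a.id, b.val
      FROM a JOIN b ON b.id = a.id;

    COMMIT;   -- direct-path loaded rows are not readable until commit
    ```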

  • Poor SQL performance, advice needed

    I'm having a problem with the following query:

    SELECT SHIPMENT_NUM,
    SHIPPED_DATE,
    FROM_ORGANIZATION_ID,
    FROM_ORGANIZATION_NAME,
    WAYBILL_AIRBILL_NUM,
    EXPECTED_RECEIPT_DATE,
    RECEIPT_NUM,
    EMPLOYEE_ID,
    BILL_OF_LADING,
    FREIGHT_CARRIER_CODE,
    PACKING_SLIP,
    SHIP_TO_LOCATION_ID,
    SHIP_TO_LOCATION,
    NUM_OF_CONTAINERS,
    COMMENTS,
    SHIPMENT_HEADER_ID,
    LAST_UPDATED_BY,
    LAST_UPDATE_DATE,
    LAST_UPDATE_LOGIN,
    CREATED_BY,
    CREATION_DATE,
    PROGRAM_APPLICATION_ID,
    PROGRAM_ID,
    PROGRAM_UPDATE_DATE,
    REQUEST_ID,
    USSGL_TRANSACTION_CODE,
    GOVERNMENT_CONTEXT,
    RECEIPT_SOURCE_CODE,
    ASN_TYPE,
    VENDOR_SITE,
    VENDOR_NAME,
    ATTRIBUTE_CATEGORY,
    ATTRIBUTE1,
    ATTRIBUTE2,
    ATTRIBUTE3,
    ATTRIBUTE4,
    ATTRIBUTE5,
    ATTRIBUTE6,
    ATTRIBUTE7,
    ATTRIBUTE8,
    ATTRIBUTE9,
    ATTRIBUTE10,
    ATTRIBUTE11,
    ATTRIBUTE12,
    ATTRIBUTE13,
    ATTRIBUTE14,
    ATTRIBUTE15,
    ROW_ID
    FROM RCV_MSH_V
    WHERE rcv_msh_v.ship_to_org_id = :1
    AND rcv_msh_v.shipment_header_id IN (SELECT rsl.shipment_header_id
    FROM rcv_shipment_lines rsl
    WHERE item_id = :2)
    ORDER BY shipment_num, expected_receipt_date, shipped_date, shipment_header_id
    --------------------------------

    I have looked at this from time to time and added some hints to force the CBO to use index scans instead of full table scans. I also think there may be a problem with the UNION statement. The view is as follows:

    CREATE OR REPLACE FORCE VIEW rcv_msh_v (row_id,
    shipment_header_id,
    last_updated_by,
    last_update_date,
    last_update_login,
    created_by,
    creation_date,
    program_application_id,
    program_id,
    program_update_date,
    request_id,
    ussgl_transaction_code,
    government_context,
    comments,
    bill_of_lading,
    expected_receipt_date,
    freight_carrier_code,
    from_organization_name,
    num_of_containers,
    from_organization_id,
    vendor_name,
    vendor_site,
    packing_slip,
    employee_id,
    receipt_num,
    receipt_source_code,
    shipment_num,
    shipped_date,
    ship_to_location,
    ship_to_location_id,
    waybill_airbill_num,
    asn_type,
    attribute_category,
    attribute1,
    attribute2,
    attribute3,
    attribute4,
    attribute5,
    attribute6,
    attribute7,
    attribute8,
    attribute9,
    attribute10,
    attribute11,
    attribute12,
    attribute13,
    attribute14,
    attribute15,
    vendor_id,
    ship_to_org_id,
    vendor_site_id
    )
    AS
    SELECT rsh.ROWID row_id, rsh.shipment_header_id, rsh.last_updated_by,
    rsh.last_update_date, rsh.last_update_login, rsh.created_by,
    rsh.creation_date, rsh.program_application_id, rsh.program_id,
    rsh.program_update_date, rsh.request_id, rsh.ussgl_transaction_code,
    rsh.government_context, rsh.comments, rsh.bill_of_lading,
    rsh.expected_receipt_date, rsh.freight_carrier_code,
    org.organization_name from_organization_name, rsh.num_of_containers,
    rsh.organization_id from_organization_id, NULL, NULL,
    rsh.packing_slip, rsh.employee_id, rsh.receipt_num,
    rsh.receipt_source_code, rsh.shipment_num, rsh.shipped_date,
    hr.location_code ship_to_location, rsh.ship_to_location_id,
    rsh.waybill_airbill_num, rsh.asn_type, rsh.attribute_category,
    rsh.attribute1, rsh.attribute2, rsh.attribute3, rsh.attribute4,
    rsh.attribute5, rsh.attribute6, rsh.attribute7, rsh.attribute8,
    rsh.attribute9, rsh.attribute10, rsh.attribute11, rsh.attribute12,
    rsh.attribute13, rsh.attribute14, rsh.attribute15, rsh.vendor_id,
    rsh.ship_to_org_id, TO_NUMBER (NULL)
    FROM rcv_shipment_headers rsh,
    hr_locations_all_tl hr,
    org_organization_definitions org
    WHERE receipt_source_code IN ('INVENTORY', 'INTERNAL ORDER')
    AND hr.location_id (+) = rsh.ship_to_location_id
    AND hr.LANGUAGE (+) = USERENV ('LANG')
    AND org.organization_id (+) = rsh.organization_id
    UNION ALL
    SELECT /*+ index (pov PO_VENDORS_U1) index (rsh RCV_SHIPMENT_HEADERS_N3) */
    rsh.ROWID row_id, rsh.shipment_header_id, rsh.last_updated_by,
    rsh.last_update_date, rsh.last_update_login, rsh.created_by,
    rsh.creation_date, rsh.program_application_id, rsh.program_id,
    rsh.program_update_date, rsh.request_id, rsh.ussgl_transaction_code,
    rsh.government_context, rsh.comments, rsh.bill_of_lading,
    rsh.expected_receipt_date, rsh.freight_carrier_code, NULL,
    rsh.num_of_containers, TO_NUMBER (NULL),
    pov.vendor_name vendor_name, povs.vendor_site_code vendor_site,
    rsh.packing_slip, rsh.employee_id, rsh.receipt_num,
    rsh.receipt_source_code, rsh.shipment_num, rsh.shipped_date,
    hr.location_code ship_to_location, rsh.ship_to_location_id,
    rsh.waybill_airbill_num, rsh.asn_type, rsh.attribute_category,
    rsh.attribute1, rsh.attribute2, rsh.attribute3, rsh.attribute4,
    rsh.attribute5, rsh.attribute6, rsh.attribute7, rsh.attribute8,
    rsh.attribute9, rsh.attribute10, rsh.attribute11, rsh.attribute12,
    rsh.attribute13, rsh.attribute14, rsh.attribute15, pov.vendor_id,
    rsh.ship_to_org_id, rsh.vendor_site_id
    FROM rcv_shipment_headers rsh,
    hr_locations_all_tl hr,
    po_vendors pov,
    po_vendor_sites povs
    WHERE (receipt_source_code = 'VENDOR' AND rsh.asn_type IN ('ASN', 'ASBN')
    )
    AND hr.location_id (+) = rsh.ship_to_location_id
    AND hr.LANGUAGE (+) = USERENV ('LANG')
    AND pov.vendor_id = rsh.vendor_id
    AND povs.vendor_site_id (+) = rsh.vendor_site_id
    -- AND (EXISTS (
    AND (1 IN (
    SELECT 1
    FROM rcv_shipment_lines rsl
    WHERE rsl.shipment_header_id = rsh.shipment_header_id
    AND rsl.shipment_line_status_code IN
    ('EXPECTED', 'PARTIALLY RECEIVED', 'FULLY RECEIVED')
    AND NOT EXISTS (
    SELECT 1
    FROM rcv_transactions_interface rti
    WHERE rti.shipment_line_id = rsl.shipment_line_id))
    );

    --------------------------------------------------------------

    Any help/advice would be greatly appreciated.

    Greg

    Edited by: user3581064 on August 4, 2010 09:47

    Please read these:

    When your query takes too long
    When your query takes too long...

    How to post a SQL tuning request
    HOW TO: Post a SQL statement tuning request - template posting
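    Beyond hints, one variation worth timing (a sketch based on the thread's query, not something posted in it) is rewriting the outer IN subquery as a correlated EXISTS, which gives the optimizer the option of probing rcv_shipment_lines per header instead of building the full list of shipment_header_ids first:

    ```sql
    SELECT v.shipment_num, v.shipment_header_id   -- plus the other columns
      FROM rcv_msh_v v
     WHERE v.ship_to_org_id = :1
       AND EXISTS (SELECT 1
                     FROM rcv_shipment_lines rsl
                    WHERE rsl.shipment_header_id = v.shipment_header_id
                      AND rsl.item_id = :2)
     ORDER BY v.shipment_num, v.expected_receipt_date,
              v.shipped_date, v.shipment_header_id;
    ```

    Whether this helps depends on the indexes available on rcv_shipment_lines, so compare the actual plans and timings of both forms.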

  • 1Z0-024 vs 1Z0-054 Performance Tuning

    Hello

    Obviously they are tests for very different versions (8i vs 11g), but if you take that into consideration, I think they are very similar... But in Oracle 8i you had to pass 5 exams (yes, FIVE!) if you wanted to get your OCP. Today you get the 11g OCP with 3 exams... and if you pass 1Z0-054 you get a further certification as well...

    Is this true?

    So are the two exams very different? They seem very similar (as similar as Oracle 8i and Oracle 11g can be).

    Has anyone who took 1Z0-024 (almost 13 years ago) also taken 1Z0-054?

    Regards

    Is this true?

    I guess what you are asking to be true is that the 11g OCP requires three exams (TRUE) and that passing 1Z0-054 grants you the OCE certification in addition to the 11g OCP (also TRUE).  Technically you could also earn the 11g tuning certification without an OCP, but there is also a hands-on training requirement that must be met.

    So are the two exams very different? They seem very similar (as similar as Oracle 8i and Oracle 11g can be).

    Well, I don't have a handy list of topics for 1Z0-024, so I'll work from the topics for 1Z0-054 and extrapolate. First, I will list some of the 054 topics that are features which did not exist in 8i:

    AWR, ADDM, automatic maintenance tasks, ASH reports, Database Resource Manager, Oracle Scheduler, automatic SQL tuning, SQL Plan Management, SQL Performance Analyzer, Database Replay, Automatic Shared Memory Management, Automatic Memory Management, Automatic Segment Space Management, segment shrink, Automatic Storage Management, Statspack.

    Please note that the list above may not be complete.  I might have missed some.  However, I think I can answer your question with confidence by saying: "Yes - the two exams are very different."
