SQL - PL/SQL performance question

Hi all
I was reviewing teaching examples while preparing for certification and stumbled on this one:
"... If you design your application so that all programs that insert into a specific table use the same INSERT statement" that is
...
INSERT INTO order_items
(order_id, product_id, line_item_id,
quantity, unit_price)
VALUES (...)

your application will run faster if you wrap your DML statement in a PL/SQL procedure instead of using the stand-alone SQL statement, because of reduced parsing and reduced System Global Area (SGA) memory use.
It does not explain why this is so. Am I wrong that any SQL statement will be cached in the SGA regardless of the context it is executed from (SQL or PL/SQL)? And also, why does this result in less parsing?

Thank you.

Timin wrote:
+ "If you design your application so that all programs that insert into a specific table use the same INSERT statement, your application will run faster because of less parsing and reduced System Global Area (SGA) memory use." +
+ Your programs will also have fewer data manipulation language (DML) errors. +

I think the point is the following.

You have an application that makes several insert calls against the database. Take the emp table as an example.

Insert into emp (empno, ename, deptno) values (emp_seq.nextval, 'Test', 1);

Insert into emp (empno, deptno, ename) values (emp_seq.nextval, 1, 'new emp');

Insert /*+ append */ into emp (empno, ename, deptno) values (emp_seq.nextval, 'Frank', 2);

These inserts could come from different applications, which is why they are written differently.
None of these inserts uses bind variables, but even if they did, they would still use different cursors, because the insert statements are spelled differently and have different parameter lists.

It is better (more efficient) to call an API instead, which performs the insert.

execute emp_pkg.Insert_single_emp( 'Test', 1);
execute emp_pkg.Insert_single_emp( p_ename => 'new emp', p_deptno => 1);
execute emp_pkg.Insert_single_emp( p_deptno => 2, p_ename => 'Frank');

The API is PL/SQL, wrapping a single SQL insert statement.

procedure Insert_single_emp(p_ename in emp.ename%type, p_deptno in emp.deptno%type)
is
...
begin
...
   Insert into emp (empno, ename, deptno)
   values (emp_seq.nextval, p_ename, p_deptno);

end;

All three insert statements will use the bind variables and avoid a new parse of the SQL insert (parse time).

I assume the cursor is shared, which also means less SGA use.
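To verify this, you can look in the shared pool: the three literal inserts typically show up as three separate parent cursors, while the packaged API leaves a single shared one. A quick check against V$SQL (a sketch; run as a suitably privileged user):

   select sql_id, parse_calls, executions, sql_text
   from   v$sql
   where  upper(sql_text) like 'INSERT INTO EMP%';

Seeing one row for the packaged insert (with rising executions) instead of one row per literal variant is exactly the saving in parse time and SGA memory.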

BTW: this is a typical requirement/suggestion of Steven Feuerstein.

Published by: Sven w. January 7, 2011 18:25

Tags: Database

Similar Questions

  • SQL Performance question

    Hello

    The following query performs badly when the predicate

    AND (v_actionFlag IS NULL or ACTION_CODE = v_actionFlag)

    is present. In all executions of the query v_actionFlag will be NULL. In addition, with the plan chosen when the predicate is included, the returned results are incorrect. We want to process the rows with the lowest priority. With the predicate included, the query performs the join, gets 20 rows, sorts them and returns them, rather than getting the 20 lowest-priority rows through the QUEUE_TAB0 index and returning those.

    The questions I have are-

    - Why does the predicate affect the query in this way?
    - What is the difference between HASH JOIN ANTI and HASH JOIN RIGHT ANTI?


    We were able to remove this predicate since the functionality it supports has not yet been implemented.



    Background

    Version of DB - 10.2.0.4
    optimizer_features_enable - 10.2.0.4
    optimizer_mode - ALL_ROWS
    Table
    
    - table has approximately 475,000 rows and the statistics are up to date
    
    
    sql> desc queue_tab
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     ENTITY_KEY                                NOT NULL NUMBER(12)
     ENTITY_TYPE                               NOT NULL CHAR(1)
     ACTION_CODE                               NOT NULL CHAR(1)
     REPORT_NO                                 NOT NULL NUMBER(12)
     PRIORITY                                  NOT NULL NUMBER(4)
    
    
    
    Indexes
    
    Primary Key (QUEUE_TAB_PK)
    
     ENTITY_KEY                                 
     ENTITY_TYPE                                
     ACTION_CODE                                
     REPORT_NO 
    
    
    Non Unique Index (QUEUE_TAB0)
    
     PRIORITY  
     ENTITY_KEY   
     ENTITY_TYPE  
     ACTION_CODE 
    
    
    
    Cursor
    
    
            SELECT /*+ INDEX_ASC (main QUEUE_TAB0) */
                   REPORT_NO
                 , ENTITY_TYPE
                 , ENTITY_KEY
                 , ACTION_CODE
                 , PRIORITY
              FROM QUEUE_TABV01 main
             WHERE PRIORITY > 1
               AND (v_actionFlag IS NULL OR ACTION_CODE = v_actionFlag )
               AND NOT EXISTS
                   ( SELECT /*+ INDEX_ASC (other QUEUE_TAB_pk) */ 1
                       FROM QUEUE_TABV01 other
                      WHERE main.ENTITY_TYPE = other.ENTITY_TYPE
                        AND main.ENTITY_KEY = other.ENTITY_KEY
                         AND main.ACTION_CODE IN ( constant1, constant2 )
                        AND other.ACTION_CODE IN ( constant3, constant4 ) )
               AND NOT EXISTS
                   ( SELECT 1 FROM QUEUE_TABV01 multi
                      WHERE main.ENTITY_TYPE = multi.ENTITY_TYPE
                        AND main.ENTITY_KEY = multi.ENTITY_KEY
                        AND multi.PRIORITY = 1 )
               AND ROWNUM < rowCount + 1
             ORDER BY PRIORITY, ENTITY_KEY, ENTITY_TYPE,
                      ACTION_CODE;
    
    
                                     
    Plan when predicate "AND (v_actionFlag IS NULL OR ACTION_CODE = v_actionFlag )" is present
    
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch       21      5.53       5.40          2     780463          0          20
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total       23      5.53       5.40          2     780463          0          20
    
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 60     (recursive depth: 1)
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
         20  SORT ORDER BY (cr=780463 pr=2 pw=0 time=5400939 us)
         20   COUNT STOPKEY (cr=780463 pr=2 pw=0 time=5400872 us)
         20    HASH JOIN ANTI (cr=780463 pr=2 pw=0 time=5400823 us)
     459033     TABLE ACCESS BY INDEX ROWID QUEUE_TAB (cr=780460 pr=2 pw=0 time=4640394 us)
     459033      INDEX RANGE SCAN QUEUE_TAB0 (cr=608323 pr=1 pw=0 time=3263977 us)(object id 68038)
      10529       FILTER  (cr=599795 pr=1 pw=0 time=2573230 us)
      10529        INDEX RANGE SCAN QUEUE_TAB_PK (cr=599795 pr=1 pw=0 time=2187209 us)(object id 68037)
          0     INDEX RANGE SCAN QUEUE_TAB0 (cr=3 pr=0 pw=0 time=34 us)(object id 68038)
    
    
    
    
    Plan when predicate "AND (v_actionFlag IS NULL OR ACTION_CODE = v_actionFlag )" is removed
    
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.02       0.00          0          0          0           0
    Fetch       21      0.05       0.05          0       6035          0          20
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total       23      0.07       0.06          0       6035          0          20
    
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 60     (recursive depth: 1)
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
         20  SORT ORDER BY (cr=6035 pr=0 pw=0 time=54043 us)
         20   COUNT STOPKEY (cr=6035 pr=0 pw=0 time=962 us)
         20    HASH JOIN RIGHT ANTI (cr=6035 pr=0 pw=0 time=920 us)
          0     INDEX RANGE SCAN QUEUE_TAB0 (cr=3 pr=0 pw=0 time=53 us)(object id 68038)
         20     TABLE ACCESS BY INDEX ROWID QUEUE_TAB (cr=6032 pr=0 pw=0 time=701 us)
         20      INDEX RANGE SCAN QUEUE_TAB0 (cr=6001 pr=0 pw=0 time=533 us)(object id 68038)
         40       FILTER  (cr=199 pr=0 pw=0 time=2048 us)
         40        INDEX RANGE SCAN QUEUE_TAB_PK (cr=199 pr=0 pw=0 time=1975 us)(object id 68037)

    user599445 wrote:
    Hello Justin and Camille,

    Thank you for taking the time to look at it. I changed the query to use ROWNUM correctly. I ran and traced the query with and without the IS NULL predicate, with each trace below. As you both suggested, the predicate does appear to have an impact on the plan. All feedback is appreciated.

    Mark,

    the obvious problem with the new plan is that no rows are filtered by the first NOT EXISTS clause (using the anti-join operation), so for each row an index lookup is performed that filters the rows down to only about 14,000. It is that index lookup that takes most of the time, since it performs about 2 consistent-get logical I/Os per lookup, in total nearly 1 million.

    The remaining 456,000 rows are then sorted (top-N) and the top 20 are returned.

    A possible problem could be that the optimizer does not switch to first_rows_N optimization mode due to the bind variable used in the ROWNUM filter.

    You can try to execute the statement using a literal (ROWNUM < 21) instead of the bind variable to see if it changes the plan.

    I think in this case it could be much more efficient to walk the QUEUE_TAB0 index in the requested order and evaluate the two NOT EXISTS clauses as recursive filter activities, provided your ROWNUM predicate is generally rather low.

    Be aware however that if you do not use a "binary" NLS_SORT setting, the index cannot be used for a NOSORT ORDER BY STOPKEY operation on CHAR values, so please check your NLS settings (NLS_SESSION_PARAMETERS where PARAMETER = 'NLS_SORT'); in that case the optimizer will not use the index for sorting. Note that the NLS parameters are client-specific and can in theory be different for each session/client.
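    The check Randolf describes can be run directly against the standard view (a sketch):

        select value
        from   nls_session_parameters
        where  parameter = 'NLS_SORT';

    A value of BINARY means the index can support the ORDER BY ... STOPKEY without a separate sort.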

    You can test this by using a simple top-N query: SELECT * FROM (SELECT ACTION_CODE, ENTITY_TYPE, ENTITY_KEY FROM QUEUE_TAB ORDER BY PRIORITY) WHERE ROWNUM < 21.

    If it does not use the QUEUE_TAB0 index to stop the sort operation, you might have a problem with the NLS parameters.

    In order to prevent the unnesting transformation you can also try adding a NO_UNNEST hint to the two subqueries ("SELECT /*+ NO_UNNEST */ ..." in the respective subquery), and you can also try to switch to FIRST_ROWS(n) mode using for example the FIRST_ROWS(20) hint in the main query block (but that should really be done by the ROWNUM predicate).
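    Applied to the cursor above, the hint placement would look roughly like this (a sketch only; the ... elisions stand for the original select list and predicates):

        SELECT /*+ FIRST_ROWS(20) */ REPORT_NO, ENTITY_TYPE, ENTITY_KEY, ACTION_CODE, PRIORITY
          FROM QUEUE_TABV01 main
         WHERE PRIORITY > 1
           AND NOT EXISTS
               ( SELECT /*+ NO_UNNEST */ 1
                   FROM QUEUE_TABV01 other
                  WHERE ... )
           AND NOT EXISTS
               ( SELECT /*+ NO_UNNEST */ 1
                   FROM QUEUE_TABV01 multi
                  WHERE ... )
           AND ROWNUM < 21
         ORDER BY PRIORITY, ENTITY_KEY, ENTITY_TYPE, ACTION_CODE;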

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://Oracle-Randolf.blogspot.com/

    SQLTools ++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.NET/projects/SQLT-pp/

  • You should know about optimizing SQL performance

    Hi gurus

    I want to know/learn some SQL performance tuning. Can you please guide me on how to achieve this? Thanks in advance.

    Shuumail wrote:

    Thanks for the reply. Can you recommend any specific book from the link you mentioned?

    The subject itself is not my strong suite, but here are some observations:

    I recently bought Kevin Meade's book. It is self-published and, not being professionally edited, is not as polished as many, but the content seems to be very good.  It lays out a methodical approach with a solid knowledge of the subject.

    Over the years, I've become less enamored of anything by Burleson.  I was a little surprised to see that one of the books was written by him and Tim Hall.  I rely on Mr. Hall's website for a bunch of 'how to set up' stuff.  On the other hand, if I see a google search show Burleson's site, I usually just pass.

    Whatever is published by Oracle Press will probably be as solid as anything else, although not totally without error (no book is).  Rich Niemiec is unquestionably one of the top experts on the inner workings of Oracle, so I expect his book to be very authoritative.

    I've never been disappointed in a book published by O'Reilly.  They are easy reads and so would probably make a good "start here" book.

  • Need a tool to investigate SQL performance

    Can anyone share a tool to study SQL performance?

    Hello

    I don't know if this is a good place to ask, but you can opt for Quest Toad, which is one of the most common SQL tuning tools.  However, if you are interested in SQL performance specifically for your DB, then you can consider the SQL Tuning Pack that comes with Oracle DB Enterprise Edition (careful if you use Standard Edition).

    In addition, Oracle has a very nice set of scripts you can download... but you must have an Oracle Support account:

    SQL Tuning-health check Script (SQLHC) (Doc ID 1366133.1)

  • SQL * Net questions - which forum?

    Does anyone know which forum handles SQL*Net questions?

    THX

    For general issues, this is probably as good a place to ask as any.

    If you have SQL*Net questions that are specific to a particular technology (i.e. how various TNS options interact with different RAC configurations), you probably want to ask in the forums dedicated to those products.

    Justin

  • SQL performance monitor

    Hello..

    I'm new to the SQL performance monitor and I need help...

    When I try to create a SQL tuning set it asks for a schema in which to create the tuning set.

    1. What should I use for this?
    2. Will a particular schema contain all the SQL statements issued against the database?
    3. I have multiple schemas in the database... what do I do to get all the SQL statements for the whole database?


    Thank you

    djgeo.

    Hello
    You can select a database user (not an application user) for the creation of SQL tuning sets.

    Salman

  • SQL performance problems upgrading from 9i to 10g

    I've heard several times about SQL performance problems after a db upgrade. What is the fix - do the problem SQLs appear in the ADDM report, and do I run the SQL Tuning/Access Advisor on them and apply its suggestions?

    PL see MOS Doc 466181.1 (10g Upgrade Companion) - this topic is covered in this document

    HTH
    Srini

  • 6210XS SQL Performance Benchmarking

    Our company has recently acquired some new arrays for a new ERP system. I am the senior analyst programmer on the project and I'm at a beginner-intermediate level on SAN storage, virtualization and SQL performance tuning. I need to get up to speed on what to expect from this new equipment and on best practices for testing and managing it. Our current ERP is on HP-UX and Informix, so this stack is alien technology relative to where we are.

    We have a network services division, which is responsible for managing the non-ERP side of the house with ESX and an EqualLogic 6500. This team is more knowledgeable in the general management of this equipment, but has less time to devote to this new ERP project, so I am stepping in to help everyone gain more confidence and to educate myself about it. Phew. Now to the meat.

    Setup: dedicated 10 Gb iSCSI network with jumbo frames enabled. No MPIO configured yet. Dedicated storage pools for the 6210XS, 6210 (10K SAS) and 6510 (7200 RPM). Everything on 10 Gb.

    I am using a tool called MS SQLIO to test the I/Os per second of the 6210XS. I used one of the default example test cases from the "Using SQLIO" doc.

    In brief: a 6-minute test, sequential I/O, 2 outstanding requests, 256 KB I/O size and a 15 GB test file. The results were:

    H:\SQLIO>sqlio -kR -s360 -fsequential -o2 -b256 -LS -Fparam.txt
    SQLIO v1.5.SG
    using system counter for latency timings, 2343750 counts per second
    parameter file used: param.txt
    file h:\testfile.dat with 16 threads (0-15) using mask 0x0 (0)
    16 threads reading for 360 secs from file h:\testfile.dat
    using 256KB sequential IOs
    enabling multiple I/Os per thread with 2 outstanding
    size of file h:\testfile.dat needs to be: 15000 MB
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 133.93
    MBs/sec: 33.48
    latency metrics:
    Min_Latency(ms): 61
    Avg_Latency(ms): 238
    Max_Latency(ms): 1269

    I ran another test using different settings and had very different results:

    H:\SQLIO>sqlio -kW -s10 -frandom -o8 -b8 -LS -Fparam.txt
    SQLIO v1.5.SG
    using system counter for latency timings, 2343750 counts per second
    parameter file used: param.txt
    file h:\testfile.dat with 8 threads (0-7) using mask 0x0 (0)
    8 threads writing for 10 secs to file h:\testfile.dat
    using random 8KB IOs
    enabling multiple I/Os per thread with 8 outstanding
    size of file h:\testfile.dat needs to be: 102400 MB
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 24122.61
    MBs/sec: 188.45
    latency metrics:
    Min_Latency(ms): 0
    Avg_Latency(ms): 2
    Max_Latency(ms): 25

    Novice question - the first run is obviously not a good result, but I need to figure out whether my test is configured incorrectly or why the array struggled to perform under those test conditions. Thank you for taking the time to read and respond.

    Usually performance problems are caused by not having the SAN (server, switches, array) set up per best practices, and in some cases by obsolete FW, drivers and/or equipment.

    With ESX, generally 99% of performance problems are solved with:

    Delayed ACK disabled

    Large Receive Offload disabled

    Ensuring VMware Round Robin is used (with I/Os per path changed to 3), or using the EQL MEM (most recent version is 1.2) multipathing module

    If you use multiple VMDKs (or RDMs) in the virtual machine, each should have its own virtual SCSI adapter

    Upgrading to the latest ESX build, switch and server updates

    Take a look at the links listed here first.  See also the Array Firmware Release Notes.

    Best practices for ESX

    en.Community.Dell.com/.../20434601.aspx

    Configuration Guide for EqualLogic

    en.Community.Dell.com/.../2639.EqualLogic-Configuration-Guide.aspx

    Quick Configuration Portal (a great place to start)

    en.Community.Dell.com/.../3615.Rapid-EqualLogic-configuration-Portal-by-SIS.aspx

    Best practices white papers, look for SQL and ESX

    en.Community.Dell.com/.../2632.Storage-Infrastructure-and-solutions-Team-publications.aspx

    Compatibility matrix

    en.Community.Dell.com/.../20438558

    -Joe

  • Microsoft SQL server question

    The company I work for has a production database server with SQL Server Standard Edition (64-bit). However, when I try to configure Oracle as a linked server, the 'Microsoft OLE DB Provider for Oracle' does not appear in the provider drop-down menu. The server is running Service Pack 2 and the .NET versions on the server are 3.5 SP1 plus 2.0 SP2 and 3.0 SP2. Please advise as to what needs to be done.

    Hi Claudine Gupta,

    Welcome to the Microsoft Answers Forum community!

    The question you posted would be better suited to the TechNet community. Please visit the link below to find a community that will provide the best support.

    http://social.technet.Microsoft.com/forums/en-us/w7itpronetworking/threads

    Thank you and best regards,

    Calogero - Microsoft technical support.
    Visit our Microsoft answers feedback Forum
    http://social.answers.Microsoft.com/forums/en-us/answersfeedback/threads/ and tell us what you think

  • SQL performance problem associated with rownum.

    Dear Experts,

    I have a SQL statement:

    SELECT TEMP.col1, TEMP.col2, TEMP.col3, TEMP.col4, TEMP.col5, ROWNUM rno FROM

    (SELECT col1, col2, col3, col4, col5 FROM table1 ORDER BY col4 DESC) TEMP WHERE ROWNUM BETWEEN ? AND ?

    When I put the rownum range values 1 and 100, it works very well. But when I put 101 and 200, no records are returned.

    So I modified it as follows:

    SELECT TEMP.col1, TEMP.col2, TEMP.col3, TEMP.col4, TEMP.col5, NWR FROM

    (SELECT col1, col2, col3, col4, col5, ROWNUM NWR FROM table1 ORDER BY col4 DESC) TEMP WHERE NWR BETWEEN ? AND ?

    It works fine and gives the desired results. But the issue is that the modified SQL has become very slow: it returns results in 20 minutes, whereas the earlier SQL gave results in a few seconds.

    There are 40 million records in table1.

    Is there another way to get accurate results with good performance?

    Your help will be much appreciated.

    Kind regards

    DD

    Hi, try this... If you want the data in a specific order (such as ORDER BY col4 DESC), then you can use the analytic ROW_NUMBER() function as below. Try the queries below and let me know in case of any problems.

    SELECT TEMP.col1,
           TEMP.col2,
           TEMP.col3,
           TEMP.col4,
           TEMP.col5,
           TEMP.NWR
    FROM   (SELECT col1,
                   col2,
                   col3,
                   col4,
                   col5,
                   ROW_NUMBER() OVER (ORDER BY col4 DESC) NWR
            FROM   table1) TEMP
    WHERE  TEMP.NWR BETWEEN 101 AND 200;

    (OR)

    SELECT TEMP.col1,
           TEMP.col2,
           TEMP.col3,
           TEMP.col4,
           TEMP.col5,
           TEMP.NWR
    FROM   (SELECT col1,
                   col2,
                   col3,
                   col4,
                   col5,
                   ROW_NUMBER() OVER (ORDER BY col4 DESC) NWR
            FROM   table1) TEMP
    WHERE  TEMP.NWR <= 200;
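    For completeness (this variant is not from the thread, just a common Oracle pattern using the same hypothetical table1 and columns): stacking ROWNUM in an inline view lets the optimizer use COUNT STOPKEY, which is usually the fastest way to page through an ordered set:

        SELECT col1, col2, col3, col4, col5
        FROM   (SELECT t.*, ROWNUM rn
                FROM   (SELECT col1, col2, col3, col4, col5
                        FROM   table1
                        ORDER BY col4 DESC) t
                WHERE  ROWNUM <= 200)
        WHERE  rn >= 101;

    The inner ROWNUM <= 200 stops work after 200 ordered rows; the outer filter then discards the first 100.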

  • Help with PL/SQL Performance Tuning

    Hi all

    I have a PL/SQL procedure and it works very well. No errors and no bugs. However it is still slow. At the end I use the concatenation operator (||), and I know that it's expensive. How can I improve the performance of the procedure?

    Here is the code

    create or replace 
    PROCEDURE POST_ADDRESS_CLEANSE AS
    
    
    CURSOR C1 IS
    SELECT Z.ROW_ID,
            Z.NAME
    FROM  STGDATA.ACCOUNT_SOURCE Z;
    
    
    CURSOR  C2 IS 
    SELECT  DISTINCT CLEANSED_NAME || CLEANSED_STREET_ADDRESS || 
            CLEANSED_STREET_ADDRESS_2 || CLEANSED_CITY || CLEANSED_STATE || 
            CLEANSED_POSTAL_CODE AS FULLRECORD
    FROM    STGDATA.ACCOUNT_SOURCE_CLEANSED;
    
    
    V_ROWID Number := 1;
    V_FLAG VARCHAR2(30);
    TEMP_ROW_ID VARCHAR2(10) := NULL;
    
    
    BEGIN
    
    
      -- This loop will update CLEANSED_NAME column in ACCOUNT_SOURCE_CLEANSED table.
      FOR X IN C1 LOOP
        
        TEMP_ROW_ID := TO_CHAR(X.ROW_ID);
        
      UPDATE STGDATA.ACCOUNT_SOURCE_CLEANSED A
      SET  A.CLEANSED_NAME = X.NAME
      WHERE A.ROW_ID = TEMP_ROW_ID;
        
        COMMIT;
    
      END LOOP;
      
      -- This loop will update columns EM_PRIMARY_FLAG, EM_GROUP_ID in ACCOUNT_SOURCE_CLEANSED table
      FOR Y IN C2 LOOP
    
    
        UPDATE  STGDATA.ACCOUNT_SOURCE_CLEANSED
        SET     EM_GROUP_ID = V_ROWID
        WHERE   CLEANSED_NAME || CLEANSED_STREET_ADDRESS || CLEANSED_STREET_ADDRESS_2 || 
                CLEANSED_CITY || CLEANSED_STATE || CLEANSED_POSTAL_CODE = Y.FULLRECORD;
    
    
        UPDATE  STGDATA.ACCOUNT_SOURCE_CLEANSED
        SET     EM_PRIMARY_FLAG = 'Y'
        WHERE   CLEANSED_NAME || CLEANSED_STREET_ADDRESS || CLEANSED_STREET_ADDRESS_2 || 
                CLEANSED_CITY || CLEANSED_STATE || CLEANSED_POSTAL_CODE = Y.FULLRECORD 
        AND     ROWNUM = 1;
        
        V_ROWID := V_ROWID + 1;
    
    
        COMMIT;
        
      END LOOP;
      
      UPDATE  STGDATA.ACCOUNT_SOURCE_CLEANSED
      SET     EM_PRIMARY_FLAG = 'N'
      WHERE   EM_PRIMARY_FLAG IS NULL;
      
      COMMIT;
    
    
      --dbms_output.put_line('V_ROW:'||V_ROWID);
      --dbms_output.put_line('CLEANSED_NAME:'||Y.FULLRECORD);  
      
    END POST_ADDRESS_CLEANSE;
    

    Thanks in advance.
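    As a side note (a sketch, not part of the thread's eventual solution): the first loop updates one row at a time and could be collapsed into a single correlated UPDATE, committing once at the end:

        UPDATE STGDATA.ACCOUNT_SOURCE_CLEANSED A
        SET    A.CLEANSED_NAME = (SELECT Z.NAME
                                  FROM   STGDATA.ACCOUNT_SOURCE Z
                                  WHERE  TO_CHAR(Z.ROW_ID) = A.ROW_ID)
        WHERE  EXISTS (SELECT 1
                       FROM   STGDATA.ACCOUNT_SOURCE Z
                       WHERE  TO_CHAR(Z.ROW_ID) = A.ROW_ID);
        COMMIT;

    This avoids one UPDATE and one COMMIT per fetched row.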

    Post edited by: Rooney - added code using the syntax highlight

    Thanks for everyone's input.

    I was able to solve the problem. Keeping the old code with the ||, I was able to create an index on the following concatenated expression:

    CLEANSED_NAME || CLEANSED_STREET_ADDRESS || CLEANSED_STREET_ADDRESS_2 || CLEANSED_CITY || CLEANSED_STATE || CLEANSED_POSTAL_CODE

    I never knew that you could create an index on a concatenation of attributes. Doing this, I was able to update all of my records and improve performance. All records ran in 80 seconds.
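    The index described would be a function-based index on the concatenation; a sketch (the index name is an assumption):

        CREATE INDEX STGDATA.ACCT_SRC_CLEANSED_CONCAT_IDX
        ON STGDATA.ACCOUNT_SOURCE_CLEANSED
           (CLEANSED_NAME || CLEANSED_STREET_ADDRESS || CLEANSED_STREET_ADDRESS_2 ||
            CLEANSED_CITY || CLEANSED_STATE || CLEANSED_POSTAL_CODE);

    With this in place, the WHERE ... || ... = Y.FULLRECORD predicates in the procedure can use the index instead of full table scans.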

    Thanks again for the help.

  • Examination of dynamic cursor and SQL performance

    Hello world

    I've been searching internet forums and Oracle docs for what is best for my case.


    I have been rebuilding indexes. I have two methods; both work very well, but I'm looking for which is preferable from a performance point of view.

    1 - using a cursor, as in the link below.

    http://www.think-forward.com/SQL/rebuildunusable.htm


    2 - using dynamic SQL that generates a script file, then running it.

    spool rebuildall.sql
    select 'alter index ' || owner || '.' || index_name || ' rebuild online;'
    from dba_indexes where status = 'UNUSABLE';
    spool off;

    @rebuildall.sql
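    Since the link for method 1 may not be reachable, here is a minimal sketch of the cursor-based variant (assuming the same dba_indexes filter):

        begin
          for r in (select owner, index_name
                    from   dba_indexes
                    where  status = 'UNUSABLE')
          loop
            execute immediate 'alter index ' || r.owner || '.' ||
                              r.index_name || ' rebuild online';
          end loop;
        end;
        /

    It does the same work in one step, with no intermediate script file.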


    Thanks in advance. Your help is appreciated.

    In both cases you are using dynamic SQL statements; I think there is no difference in terms of performance, so use the method you feel most comfortable with.

    In any case, you can track the timings (set time on & set timing on) and log the output.

    Best regards
    Alfonso Vicente
    www.logos.com.uy

  • Virtualize SQL Cluster - questions

    We want to consolidate and improve our SQL infrastructure.

    We have two dedicated ESX (VI 3.5, update 3) boxes (IBM 3850 M2 - 4 x 4-core @ 2.13 GHz - 128 GB of RAM).

    We are planning to use Microsoft Clustering with one node on each ESX server.

    We also use several tiers: production, pre-production, development and test (in dedicated virtual machines), on SQL 2005 or SQL 2008.

    This means we will have 8 VMs on each server: 4 SQL 2005 virtual machines (one per tier) and 4 SQL 2008 (also one per tier). We want each instance to be in an MS cluster with the other ESX server.

    We also want to install an additional VM for Reporting Services.

    We use Windows 2003 and SQL Server Enterprise, x64 editions.

    Here are a few questions:

    • We dedicate 16 GB of RAM to each active node. Of course, the passive node must be configured like the active one, which means the passive virtual machine must also have 16 GB of RAM. But do we need to dedicate that memory, since this virtual machine will be in passive mode and not used unless the cluster fails over?

      • If the answer is yes, this means that we will actually use 32 GB of RAM for each tier. With 4 tiers and 2 versions of SQL, we dedicate 32 x 8 = 256 GB of RAM.

      • Unless I'm misunderstanding how memory works in ESX, I'm out of memory for the Reporting Services virtual machine (and for all future needs).

    • Knowing that the Windows swap file is 1.5 times the amount of RAM, this means that a VM with 16 GB needs a pagefile of 24 GB.

      • Do we need a 24 GB swap file, knowing the virtual machine already has its own swap file?

    • Knowing that raw volumes are recommended for SQL, is there any issue with several raw volumes in a single ESX box? (We want to use 4 raw volumes per tier (MSDTC, Quorum, logs and databases). That means 16 raw volumes.)

    Your help is appreciated.

    Sébastien

    • * If the answer is yes, this means that we will actually use 32 GB of RAM for each tier. With 4 tiers and 2 versions of SQL, we dedicate 32 x 8 = 256 GB of RAM.

    • Unless I'm misunderstanding how memory works in ESX, I'm out of memory for the Reporting Services virtual machine (and for all future needs).

    Are you planning to reserve memory?  I don't think this is necessary.  If you use 2 ESXi servers for a farm of SQL servers, you can simply allocate memory and you should be fine using shares.  If you are short of memory, I would start by looking at how you share memory - are these machines in the same resource pool?

    Knowing that the Windows swap file is 1.5 times the amount of RAM, this means that a VM with 16 GB needs a pagefile of 24 GB.

    In fact, the pagefile is not a formula, it is dynamic.  I hope you are running Windows 2003 for these SQL VMs; if so, just use automatic paging file size, because you can't manage memory better than Windows does, and you would only be guessing how much swap is needed.  In addition, swap does not mean additional memory; some things such as the kernel and DLLs stay resident regardless of the amount of RAM available.  SQL rarely uses swapped memory, so available RAM is what matters; by defining a finite number of 1.5 x RAM you could be over-provisioning, or starving SQL of RAM.  Use the AWE settings in SQL, let it manage memory, and set the minimum RAM you need.

    • Do we need a 24 GB swap file, knowing the virtual machine already has its own swap file?

    In fact, if you reserve memory for a virtual machine, the VM swap file is not needed.  The VM swap file covers memory allocated to the virtual machine beyond what is reserved, so you will need it IF the VM memory on the HOST is overcommitted and the guest needs more RAM.

  • SQL Tuning questions

    Hello

    I feel the hash join below is causing a performance problem. Please take a look and tell me whether my interpretation is fair or not:
    SELECT COUNT(*),d.file_id,d.PRINT_LOCATION,m.corporate_id,m.file_uploaded_on,r.PROCESS_DATE as AuthorizeDate,r.AUTHORIZATION_STATUS,r.AUTHORIZATION_LEVEL as AuthorizationLevel 
    FROM CHQPRINT.T_DATA_MASTER_FIELD_DETAILS d,CHQPRINT.T_DATA_CORP_AUTHORIZATION r,CHQPRINT.t_data_file_details m,CHQPRINT.T_DATA_RECORD_DETAILS N 
    WHERE d.file_id=r.FILE_ID 
    and d.RECORD_REFERENCE_NO=r.RECORD_REFERENCE_NO 
    and r.file_id=m.file_id AND N.FILE_ID=R.FILE_ID 
    AND N.RECORD_REFERENCE_NO=R.RECORD_REFERENCE_NO 
    and d.file_id=m.file_id 
    and N.CORPORATE_AUTHORIZATION_DONE='Y' 
    AND TO_DATE(m.FILE_UPLOADED_ON) between ('01-OCT-2010') and ('28-OCT-2010') AND(N.PRINTING_STATUS<>'C'  OR N.PRINTING_STATUS IS NULL)
    GROUP BY d.file_id,d.PRINT_LOCATION,m.corporate_id,m.file_uploaded_on,r.PROCESS_DATE,r.AUTHORIZATION_STATUS,r.AUTHORIZATION_LEVEL
    
    
    Plan hash value: 904523798
    
    --------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                      | Name                      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT               |                           |     1 |    80 |       |  5464   (1)| 00:01:06 |
    |   1 |  HASH GROUP BY                 |                           |     1 |    80 |       |  5464   (1)| 00:01:06 |
    |   2 |   NESTED LOOPS                 |                           |     1 |    80 |       |  5463   (1)| 00:01:06 |
    |*  3 |    HASH JOIN                   |                           |    36 |  2340 |  6376K|  5401   (1)| 00:01:05 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| T_DATA_CORP_AUTHORIZATION |   408 |  9792 |       |    75   (0)| 00:00:01 |
    |   5 |      NESTED LOOPS              |                           |   116K|  5003K|       |  2535   (1)| 00:00:31 |
    |*  6 |       TABLE ACCESS FULL        | T_DATA_FILE_DETAILS       |   286 |  5720 |       |   202   (4)| 00:00:03 |
    |*  7 |       INDEX RANGE SCAN         | PK_DATA_CORPAUTHORIZATION |   408 |       |       |     6   (0)| 00:00:01 |
    |   8 |     INDEX FAST FULL SCAN       | IDX_FILE_REF_PLOC         |   911K|    18M|       |  1120   (1)| 00:00:14 |
    |*  9 |    TABLE ACCESS BY INDEX ROWID | T_DATA_RECORD_DETAILS     |     1 |    15 |       |     2   (0)| 00:00:01 |
    |* 10 |     INDEX UNIQUE SCAN          | PK_DATA_RECORD_DETAILS    |     1 |       |       |     1   (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       3 - access("D"."FILE_ID"="R"."FILE_ID" AND "D"."RECORD_REFERENCE_NO"="R"."RECORD_REFERENCE_NO" AND
                  "D"."FILE_ID"="M"."FILE_ID")
       6 - filter(TO_DATE(INTERNAL_FUNCTION("M"."FILE_UPLOADED_ON"))>=TO_DATE('2010-10-01 00:00:00',
                  'yyyy-mm-dd hh24:mi:ss') AND TO_DATE(INTERNAL_FUNCTION("M"."FILE_UPLOADED_ON"))<=TO_DATE('2010-10-28
                  00:00:00', 'yyyy-mm-dd hh24:mi:ss'))
       7 - access("R"."FILE_ID"="M"."FILE_ID")
       9 - filter(("N"."PRINTING_STATUS" IS NULL OR "N"."PRINTING_STATUS"<>'C') AND
                  "N"."CORPORATE_AUTHORIZATION_DONE"='Y')
      10 - access("N"."FILE_ID"="R"."FILE_ID" AND "N"."RECORD_REFERENCE_NO"="R"."RECORD_REFERENCE_NO")
    
    Elapsed: 00:00:08.49
    
    Statistics
    ----------------------------------------------------------
             22  recursive calls
              0  db block gets
        1149987  consistent gets
            383  physical reads
           1560  redo size
          14528  bytes sent via SQL*Net to client
            679  bytes received via SQL*Net from client
             19  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
            268  rows processed
    Database version: 10.2.0.3

    Kind regards

    Santi says:
    Hi Charles,

    I made the change in the query according to your suggestion. Here's the amended plan:
    (snip)

    Even though the cost reported in the new plan is about 2,000 higher than in the previous plan, I see a big gain in elapsed time:

    Elapsed: 00:00:00.64
    
    Statistics
    ----------------------------------------------------------
    22  recursive calls
    0  db block gets
    27392  consistent gets
    0  physical reads
    0  redo size
    2441  bytes sent via SQL*Net to client
    503  bytes received via SQL*Net from client
    3  SQL*Net roundtrips to/from client
    1  sorts (memory)
    0  sorts (disk)
    29  rows processed
    

    Kind regards

    What you have done is give the Oracle optimizer much better estimates of the actual number of rows that will be returned by the operations in the execution plan, by eliminating the TO_DATE function wrapped around m.FILE_UPLOADED_ON; the calculated cost is expected to increase as the cardinality estimates increase. Note that the T_DATA_FILE_DETAILS table is now estimated to return 1,900 rows rather than 286, with 3,993 rows actually being returned. The cardinality estimate is still 2.1 times lower than the actual value, but that is better than being 13.96 times lower than actual with a Cartesian join disguised as a nested loops join.

    The T_DATA_CORP_AUTHORIZATION table is probably the biggest contributor to the execution time in the new execution plan (this may or may not be a performance issue). During the first run it required 10,156 physical block reads (all of the physical block reads for that run; check that the DB_FILE_MULTIBLOCK_READ_COUNT parameter is left unset so that multi-block reads can potentially help read performance in the future). During the second run no physical reads were needed, so the execution time decreased. 13,309 of the 27,392 consistent gets are a direct consequence of the access to the T_DATA_CORP_AUTHORIZATION table; we might see the number of consistent gets drop by using an index to access this table, but performance could actually be hurt if physical block reads are then required.
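    For readers following along, the rewrite discussed above boils down to comparing the DATE column directly against date values instead of wrapping it in TO_DATE. A minimal sketch, assuming FILE_UPLOADED_ON is a DATE column (the exact predicate Santi used is not shown in the thread):

    ```sql
    -- Instead of applying TO_DATE() to the column, which hides the
    -- column's statistics from the optimizer and defeats any index on it:
    --   AND TO_DATE(m.FILE_UPLOADED_ON) BETWEEN '01-OCT-2010' AND '28-OCT-2010'
    -- compare the DATE column directly to explicit date values:
    AND m.FILE_UPLOADED_ON >= TO_DATE('01-OCT-2010', 'DD-MON-YYYY')
    AND m.FILE_UPLOADED_ON <  TO_DATE('29-OCT-2010', 'DD-MON-YYYY')
    ```

    Using a half-open range (>= and <) also avoids the ambiguity of BETWEEN when the column carries a time component.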

    There is a small chance that you could see slightly better performance by using index hints and fewer full table scans, but I suspect that the performance will not improve much more. You could, of course, test this by temporarily adding index hints to the SQL statement (where the GATHER_PLAN_STATISTICS hint is currently positioned) to see how the performance changes. Once you are satisfied with the performance of the SQL statement, make sure you remove the GATHER_PLAN_STATISTICS hint from it.
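    For anyone unfamiliar with the hint mentioned above, the usual pattern (a sketch, not Santi's exact statement) is to run the hinted query and then pull the actual row counts alongside the optimizer's estimates in the same session:

    ```sql
    -- Add the hint to the statement under test and run it:
    SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*), d.file_id
    FROM ...;

    -- Immediately afterwards, in the same session, display the plan
    -- with estimated vs. actual rows (E-Rows vs. A-Rows):
    SELECT *
    FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    ```

    Large gaps between E-Rows and A-Rows in that output point at exactly the kind of cardinality misestimate the TO_DATE rewrite fixed here.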

    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.WordPress.com/
    IT Manager/Oracle DBA
    K & M-making Machine, Inc.

  • Simple MS SQL 2000 question

    Small question people,

    What is the MS SQL Server 2000 syntax for inserting multiple records with a single INSERT statement?

    I get ODBC barfs with:

    INSERT orders (UserID, FacilitiesOrServicesID, transmitted)
    VALUES ((10486, 2195, 0), (10486, 1156, 0))

    Thank you.

    INSERT INTO
    Orders (UserID, FacilitiesOrServicesID, transmitted)
    SELECT 10486, 2195, 0
    UNION ALL
    SELECT 10486, 1156, 0
    UNION ALL
    SELECT 10487, 1776, 99
    -- etc., etc.
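    As a side note: the UNION ALL trick is only needed on SQL Server 2000. If upgrading is an option, SQL Server 2008 and later accept a row-constructor list directly (shown here as a sketch reusing the same column values):

    ```sql
    INSERT INTO Orders (UserID, FacilitiesOrServicesID, transmitted)
    VALUES (10486, 2195, 0),
           (10486, 1156, 0),
           (10487, 1776, 99);
    ```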
