Optimization of queries

Do these produce the same results, and which is more efficient:

SELECT * FROM table1

WHERE DATE_CREATED BETWEEN ADD_MONTHS(SYSDATE, -2) AND SYSDATE;

OR

SELECT * FROM table1

WHERE DATE_CREATED >= SYSDATE - 2;

Hello

02cc2813-d093-4903-930B-0745cb04a886 wrote:

Do these produce the same results, and which is more efficient:

SELECT * FROM table1

WHERE DATE_CREATED BETWEEN ADD_MONTHS(SYSDATE, -2) AND SYSDATE;

OR

SELECT * FROM table1

WHERE DATE_CREATED >= SYSDATE - 2;

They do not produce the same results.

If you run this code on May 12, 2014, one includes rows where date_created is on or after March 12, 2014, but not later than May 12, 2014.

The other includes rows where date_created is on or after May 10, 2014, with no upper limit.

Both are good and effective ways to do what they do.
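A quick way to see the difference is to evaluate both expressions directly; a sketch, with the comments assuming the hypothetical May 12, 2014 run date:

SELECT ADD_MONTHS(SYSDATE, -2) AS two_months_ago,  -- 12-MAR-2014
       SYSDATE - 2             AS two_days_ago     -- 10-MAY-2014
  FROM dual;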

Tags: Database

Similar Questions

  • Query optimization with an OR clause

    I'm using Oracle 12.1.0.2.

    I have the query below, which has 2 predicates combined with an OR condition, each containing a text search on two tables (eir and eir_note).    It takes 20 seconds.   The rows come back in under a second if I run the query with a single predicate, without the OR clause.    So I worked around it by using a UNION ALL instead. I've included both SQL statements with their explain plans below.

    Just for my knowledge, is there a hint I could use to make the SQL with OR run faster?

    Slow SQL with OR:

    SQL TEXT:

    SELECT /*+ gather_plan_statistics */
           COUNT (*)
      FROM (SELECT eir.actn_tx
              FROM EXAM_INCDT_RPT eir
             WHERE (   contains (eir.prblm_tx, :1) > 0
                    OR eir.exam_incdt_rpt_id IN (SELECT /*+ qb_name(qb_sub_note) */
                                                        exam_incdt_rpt_id
                                                   FROM eir_note
                                                  WHERE contains (note_tx, :2) > 0)))
    

    SQL_ID  8rwvqyphavc0c, child number 1
    -------------------------------------
    SELECT  /*+ gather_plan_statistics  */   COUNT (*)   FROM   (SELECT  
    eir.actn_tx              FROM   EXAM_INCDT_RPT eir            WHERE     
                       (   contains (eir.prblm_tx, :1) > 0 OR               
               eir.exam_incdt_rpt_id IN (SELECT /*+ qb_name( qb_sub_note)   
     */                                                          
    exam_incdt_rpt_id                                                       
     FROM   eir_note                                                       
    WHERE   contains (note_tx, :2) > 0)                                     
                      ))
    
    Plan hash value: 2956686415
    
    -----------------------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                             | Name           | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |
    -----------------------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                      |                |      1 |        |       |  1343K(100)|          |      1 |00:00:20.24 |    1467K|   6770 |
    |   1 |  SORT AGGREGATE                       |                |      1 |      1 |   105 |            |          |      1 |00:00:20.24 |    1467K|   6770 |
    |*  2 |   FILTER                              |                |      1 |        |       |            |          |   5419 |00:00:21.28 |    1467K|   6770 |
    |   3 |    TABLE ACCESS FULL                  | EXAM_INCDT_RPT |      1 |    447K|    44M|  1343K  (1)| 00:01:45 |    447K|00:00:00.69 |    5819 |   5816 |
    |*  4 |    TABLE ACCESS BY INDEX ROWID BATCHED| EIR_NOTE       |    441K|      1 |   273 |     6   (0)| 00:00:01 |      2 |00:00:02.48 |     567K|    226 |
    |*  5 |     INDEX RANGE SCAN                  | EIR_NOTE_IX1   |    441K|      1 |       |     1   (0)| 00:00:01 |  10753 |00:00:01.65 |     535K|     20 |
    -----------------------------------------------------------------------------------------------------------------------------------------------------------
    
    Query Block Name / Object Alias (identified by operation id):
    -------------------------------------------------------------
    
       1 - SEL$F5BB74E1
       3 - SEL$F5BB74E1 / EIR@SEL$2
       4 - QB_SUB_NOTE  / EIR_NOTE@QB_SUB_NOTE
       5 - QB_SUB_NOTE  / EIR_NOTE@QB_SUB_NOTE
    
    Outline Data
    -------------
    
      /*+
          BEGIN_OUTLINE_DATA
          IGNORE_OPTIM_EMBEDDED_HINTS
          OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
          DB_VERSION('12.1.0.2')
          OPT_PARAM('_optim_peek_user_binds' 'false')
          ALL_ROWS
          OUTLINE_LEAF(@"QB_SUB_NOTE")
          OUTLINE_LEAF(@"SEL$F5BB74E1")
          MERGE(@"SEL$2")
          OUTLINE(@"QB_SUB_NOTE")
          OUTLINE(@"SEL$1")
          OUTLINE(@"SEL$2")
          FULL(@"SEL$F5BB74E1" "EIR"@"SEL$2")
          PQ_FILTER(@"SEL$F5BB74E1" SERIAL)
          INDEX_RS_ASC(@"QB_SUB_NOTE" "EIR_NOTE"@"QB_SUB_NOTE" ("EIR_NOTE"."EXAM_INCDT_RPT_ID"))
          BATCH_TABLE_ACCESS_BY_ROWID(@"QB_SUB_NOTE" "EIR_NOTE"@"QB_SUB_NOTE")
          END_OUTLINE_DATA
      */
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - filter(("CTXSYS"."CONTAINS"("EIR"."PRBLM_TX",:1)>0 OR  IS NOT NULL))
       4 - filter("CTXSYS"."CONTAINS"("NOTE_TX",:2)>0)
       5 - access("EXAM_INCDT_RPT_ID"=:B1)
    
    Column Projection Information (identified by operation id):
    -----------------------------------------------------------
    
       1 - (#keys=0) COUNT(*)[22]
       3 - "EIR".ROWID[ROWID,10], "EIR"."EXAM_INCDT_RPT_ID"[NUMBER,22], "EIR"."PRBLM_TX"[LOB,4000]
       5 - "EIR_NOTE".ROWID[ROWID,10]
    

    Explain plan of the SQL with UNION ALL

    SQL TEXT:

    SELECT /*+ gather_plan_statistics */
          COUNT (*)
      FROM   (SELECT   *
                FROM   EXAM_INCDT_RPT eir
               WHERE   contains (eir.prblm_tx, :1) > 0
              UNION ALL
              SELECT   *
                FROM   EXAM_INCDT_RPT
               WHERE   exam_incdt_rpt_id IN (SELECT   exam_incdt_rpt_id
                                               FROM   eir_note
                                              WHERE   contains (note_tx, :2) > 0))
    
    
    

    The use_concat hint may be appropriate.

    https://docs.Oracle.com/database/121/SQLRF/sql_elements006.htm#BABIAFIB

    The USE_CONCAT hint instructs the optimizer to transform combined OR-conditions in the WHERE clause of a query into a compound query using the UNION ALL set operator. Without this hint, this transformation occurs only if the cost of the query using the concatenations is cheaper than the cost without them. The USE_CONCAT hint overrides the cost consideration. For example:

    SELECT /*+ USE_CONCAT */ * FROM employees e WHERE manager_id = 108 OR department_id = 110;
    
  • MVIEW log truncation

    Hi all

    I have a question about mview logs.

    While analyzing my AWR reports, I found that most of the top queries are against my mview logs. How can I optimize these queries?

    I intend to truncate the MLOG$ tables too; after truncating the MLOG$ tables, should I do a full refresh, or will my regular fast refresh still be correct?

    We use an Oracle 11gR2 server.

    Let me know if any more information is required for the same.

    Thank you

    AJ

    You can follow this doc for your reference:

    236233.1
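    For what it's worth, if you do truncate an MLOG$ table, a complete refresh of the dependent mview is needed before fast refreshes are reliable again. A minimal sketch (the table and mview names are illustrative):

    TRUNCATE TABLE mlog$_my_base_table;

    BEGIN
      DBMS_MVIEW.REFRESH('MY_MVIEW', method => 'C');  -- 'C' = complete refresh
    END;
    /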

  • ASO - MDX member formula

    Dear,

    For a requirement I have come across, I applied the formulas below to dynamic ASO dimension members.

    VC_YTD - SUM(PeriodsToDate([Period].Generations(2), [Period].CurrentMember), [View].[VariableCost])

    FC_YTD - SUM(PeriodsToDate([Period].Generations(2), [Period].CurrentMember), [View].[FixedCost])

    FixedCost - SUM({DESCENDANTS([Custom1].CURRENTMEMBER, 10, LEAVES)}, [FC_FIS])

    VariableCost - SUM({DESCENDANTS([Custom1].CURRENTMEMBER, 10, LEAVES)}, [VC_FIS])

    FC_FIS - SUM({DESCENDANTS([Custom1].CURRENTMEMBER, 10, LEAVES)}, [VC_FIS])


    VC_FIS -

    CASE WHEN IsLevel([Account].CurrentMember, 0) THEN

    IIF(IsAncestor([A_4000000], [Account].CurrentMember), (([MTD] * ([BegBalance], [NoLocation], [NoCostCenter], [NoProduct], [UserInput], [Budget], [Approved], [Local], [NoEntity], [FY14], [MTD])) / 100), 0)

    ELSE

    SUM({DESCENDANTS([Account].CURRENTMEMBER, 10, LEAVES)}, [FC_FIS])

    END

    I know this isn't the right way, but due to the requirement I have to do it this way.

    Now when I retrieve the member in Excel, Excel hangs and gives no result.

    In the logs, I get the errors below:

    Member formula for [FixedCost] is complex. If possible, add a non-empty directive to optimize for sparse data.

    Member formula for [VariableCost] is complex. If possible, add a non-empty directive to optimize for sparse data.

    Member formula for [VC_FIS] is complex. If possible, add a non-empty directive to optimize for sparse data.

    Can you suggest something for this?

    Thank you.

    try using NONEMPTYSUBSET()

    "This can help to optimize the queries based on a wide range for which all non-empty combinations is known to be small. NonEmptySubset reduced the size of all the presence of a metric; for example, you can ask the non-empty subset of descendants for specific units.

    NonEmptySubset is used to reduce the size of a set before analytical later retrieval. »

    The thing is, I once got this message, used NONEMPTYSUBSET, and the warning did not go away, so I ended up just ignoring it. Some formulas are simply complex.

  • Jonathan Lewis presentation on smarter statistics in 11g

    After viewing Jonathan Lewis's excellent presentation (http://www.speak-tech.com/lewis-20130610), the essential point I took away is to set "approximate_ndv" to true and to avoid using histograms.

    So, I checked my global preferences and confirmed that they were set to the default values, specifically:

    SNAME SPARE4

    ------------------- ----------------------------

    APPROXIMATE_NDV FALSE

    METHOD_OPT FOR ALL COLUMNS SIZE AUTO


    So, per Jonathan's presentation, I'll set APPROXIMATE_NDV to TRUE.

    But my question concerns METHOD_OPT: should I (and would it be safe to) set the global prefs for this to "FOR ALL COLUMNS SIZE 1"?


    Basically, that would eliminate all histograms (eventually, as statistics are refreshed), including those that Oracle uses on its own internal tables.
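    For reference, a minimal sketch of setting both global preferences with DBMS_STATS (assuming the defaults everywhere else):

    BEGIN
      DBMS_STATS.SET_GLOBAL_PREFS('APPROXIMATE_NDV', 'TRUE');
      DBMS_STATS.SET_GLOBAL_PREFS('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');
    END;
    /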


    977635 wrote:

    As a follow-up on the issue: in our case, our application almost exclusively uses literals instead of bind variables.

    But because of bind peeking (against the histograms generated by the default method_opt setting), we found that we seem to get better performance with cursor_sharing set to EXACT (instead of FORCE).

    But, if I understand correctly, if we removed the histograms and set cursor_sharing to FORCE, then the optimizer wouldn't do bind peeking (since we have no histograms), and our performance would improve because it would then use the estimated number of distinct values instead.  Is this right, or am I confused?

    It's a short question that needs a long answer. A starting point, however, would be one of my 'Philosophy' notes: http://jonathanlewis.wordpress.com/2009/05/06/philosophy-1/

    Your first paragraph says that histograms worked better with literals than with the bind variables constructed by cursor_sharing = force - which is typical; if you have data that is sufficiently skewed that you need histograms, then you need to use literals so that the optimizer can see the skew when it optimizes.

    If you delete the histograms (so that Oracle gets an 'average' view of your data) and set cursor_sharing to force, then Oracle will still peek, but it will not be able to tell whether the peeked value is a special case where it would SOMETIMES be better to produce an extreme plan.

    Oracle 9i's response was to allow cursor_sharing to be set to 'similar' - but this meant that Oracle could rewrite a query to use binds yet re-optimize some queries very frequently, because it peeked at the new bind variable every time - resulting in excessive optimization and a large number of child cursors; the presence of histograms on columns in the predicates was one trigger for re-optimization.

    11g introduced adaptive cursor sharing and suggested using cursor_sharing = force if you have excessive use of literals - you can end up with several child cursors for a statement involving histograms, but ideally it should be only a very small number per statement.  The feature is still a bit fragile.

    A useful hint is /*+ cursor_sharing_exact */: whatever you do with cursor sharing, you can put this into any statements where you don't want Oracle to perform the literal-to-bind conversion. It may be that a judicious use of this feature is enough to give you the best compromise between performance and stability.

    Bottom line - histograms are difficult; you probably need a few, and you may need to engineer them carefully, but your front-end code needs to know about them if you want the best performance.

    Regards

    Jonathan Lewis

  • Question on RBO and CBO

    Hello
    Please clear up some doubts. These are interview questions I was asked...

    (1) How does Oracle decide whether the RBO or the CBO is used for query optimization? Is there any setting that must be defined in the init.ora file, or some other parameter that decides whether Oracle chooses the RBO or the CBO?

    (2) If tab1, tab2, tab3, tab4, tab5 are used in a SQL statement and statistics are collected only on tab1 and tab2, then what is the scenario in terms of RBO/CBO?
    I am using tab1.col1, tab2.col1, tab3.col1, tab4.col1, tab5.col1 in my join condition.

    Maybe my question is not clear, but this is roughly how it was asked of me.


    (3) An application works well, but for a few days its queries have been taking too long to execute. What might be the cause?

    Note: nothing has changed in the SQL code, nothing new runs on the database back end, no rows have been removed from the tables, no one is holding locks on the tables... so what could be the reason, and
    how can we find the reason?

    I tried searching the net for answers as well but did not find a good one. Then I remembered the Oracle expert community forum... Help, please...


    Rgds,
    PC

    If you use Oracle 9.2 and the optimizer_mode is CHOOSE, the CBO will be used if statistics are available for any object referenced in the query. The CBO is highly likely to come up with a poor plan if some objects have statistics and others do not.

    If you use Oracle 9.2 and the optimizer_mode is RULE, the RBO will be used whether or not all objects have statistics, unless you use features that require the CBO.

    If you use Oracle 9.2 and the optimizer_mode is ALL_ROWS or FIRST_ROWS_n (where n is 1, 10, 100 or 1000), the CBO will be used whether or not all objects have statistics.
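    For illustration, the mode can be set instance-wide in init.ora/spfile or per session; a sketch:

    ALTER SESSION SET optimizer_mode = RULE;      -- force the RBO (long deprecated)
    ALTER SESSION SET optimizer_mode = ALL_ROWS;  -- force the CBO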

    Justin

  • Single column holding FKs to various tables [Design issue]

    What would be a good design for a table with a column that can hold an FK to a few different tables?
    The FK constraint doesn't have to be enforced.

    My thought is to simply make it an int, with another column in the table that defines which table the value comes from.

    Is there a better way to do it?

    Thank you

    So the FK column in this table must exist in one parent table or another? Hmmm. As FK relationships must be enforceable by the database to be useful, maybe...

    Create two columns, one for the FK to parent A and one for the FK to parent B. You can then declare the FK relationships in the database so that it can enforce them and profit from them in query optimization. Then add a check constraint to the table to assert that exactly one of the two columns has a value (while the other is null). Outer join to both parent tables, with a DECODE in the query returning the appropriate parent (lookup) value in the result set; a DDL sketch follows the query below.


    SELECT c.col_a,
    DECODE( p1.col_b, NULL, p2.col_c, p1.col_b )
    FROM child c,
    parent_a p1,
    parent_b p2
    WHERE c.id = p1.c_id (+)
    AND c.id = p2.cid (+);
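    A minimal DDL sketch of that design (all names are illustrative, and parent_a/parent_b are assumed to already exist with primary keys):

    CREATE TABLE child (
      id    NUMBER PRIMARY KEY,
      col_a VARCHAR2(30),
      p1_id NUMBER REFERENCES parent_a,  -- FK to parent A
      p2_id NUMBER REFERENCES parent_b,  -- FK to parent B
      -- exactly one of the two FK columns must hold a value
      CONSTRAINT child_one_parent_chk CHECK (
        (p1_id IS NOT NULL AND p2_id IS NULL) OR
        (p1_id IS NULL AND p2_id IS NOT NULL))
    );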

  • Table join order in the FROM clause

    Hi all

    What is the efficient way to join the tables in the FROM clause? I have two tables, the first with 20 lakh records and the second containing 10 lakh records.
     
    QUERY 1:  SELECT T4.ID,T4.ISO_NAME  FROM T,T4 
    WHERE T4.ISO_NAME LIKE '%US%' AND T.ID=T4.ID;
    
    QUERY 2:  SELECT T4.ID,T4.ISO_NAME  FROM T4,T 
    WHERE T4.ID=T.ID AND  T4.ISO_NAME LIKE '%US%';
    
    T(ID IS PRIMARY KEY) 
    (20 lakh records)
    
    T4 (ID IS PRIMARY KEY ) 
    (10 lakh records)
    ---------------------
    ID     ISO_NAME
    100  US,UK,IN,BR
    101  UK,US,BR,IN
    102  BR,UK,US,IN
    
    
    Note: No index on ISO_NAME .
    Which is the more efficient query, 1 or 2? Please make a suggestion if you have an idea for rewriting the query.



    Kind regards
    Rajasekhar

    Edited by: SuNRiZz68 on January 29, 2009 04:22

    In practical terms, Alex is right. Sometimes it does matter which table is selected first, but the CBO generally does a very good job of deciding which to select first (assuming your statistics are up to date), so overriding it is something you should try to avoid as much as possible.

    If you do need to specify a driving table, ordering the tables in the FROM clause is not reliable and a hint should be used instead - but think before using hints, and only do so when necessary.

    Which table to select first depends on the join method in the execution plan. Nested loop joins perform better by selecting from the smaller table first and looping over the larger table. Hash joins build the smaller set into an in-memory hash table first, and then go through the larger table performing lookups in memory. It makes no difference which table is read first with sort-merge joins.

    Back to your original question. Using the cost-based optimizer, both queries will probably run the same, because newer versions of Oracle (9i, 10g) often transform queries for efficiency before execution anyway. Depending on what you request, it should probably run a nested loop or a hash join. With a small data set, creating an index and using nested-loop lookups will probably be faster, avoiding full table scans. The leading '%' in the LIKE clause would defeat an index on the ISO_NAME column in any case, although a leading column might be used in a composite index. All this is approximation based on the information provided; tuning suggestions should always be tested for unexpected developments.
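    If you ever genuinely needed to dictate the driving table, it would be done with hints rather than with FROM-clause order; a sketch (and only when truly necessary):

    SELECT /*+ leading(t4) use_hash(t) */
           t4.id, t4.iso_name
      FROM t4, t
     WHERE t4.id = t.id
       AND t4.iso_name LIKE '%US%';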

  • Optimization of simple queries

    Hello

    I use Oracle 10gR2.

    I have this simple query that seems to take too long to execute:
    DECLARE
         nb_mesures INTEGER;
         min_day DATE;
         max_day DATE;
    BEGIN
         SELECT
              COUNT(meas_id),
              MIN(meas_day),
              MAX(meas_day)
         INTO
              nb_mesures,
              min_day,
              max_day
         FROM
              geodetic_measurements gm 
              INNER JOIN
              operation_measurements om 
              ON gm.meas_id = om.ogm_meas_id 
         WHERE ogm_op_id = 0;
         htp.p(nb_mesures||' measurements from '||min_day||' to '||max_day);
    END;
    Tables (about 11,000 records for the OPERATIONS table, and 800,000 for the other 2):
    OPERATION_MEASUREMENTS is the link table between the other 2 (it holds the 2 keys).
    SQL> DESCRIBE OPERATIONS
    
    Nom                  NULL     Type
    -------------------- -------- ------------
    OP_ID                NOT NULL NUMBER(7)
    OP_PARENT_OP_ID               NUMBER(7)
    OP_RESPONSIBLE       NOT NULL VARCHAR2(10)
    OP_DESCRIPT                   VARCHAR2(80)
    OP_VEDA_NAME         NOT NULL VARCHAR2(10)
    OP_BEGIN             NOT NULL DATE
    OP_END                        DATE
    OP_INSERT_DATE                DATE
    OP_LAST_UPDATE                DATE
    OP_INSERT_BY                  VARCHAR2(50)
    OP_UPDATE_BY                  VARCHAR2(50)
    
    SQL> DESCRIBE OPERATION_MEASUREMENTS
    
    Nom                  NULL     Type
    -------------------- -------- ------------
    OGM_MEAS_ID          NOT NULL NUMBER(7)
    OGM_OP_ID            NOT NULL NUMBER(6)
    OGM_INSERT_DATE               DATE
    OGM_LAST_UPDATE               DATE
    OGM_INSERT_BY                 VARCHAR2(50)
    OGM_UPDATE_BY                 VARCHAR2(50)
    
    SQL> DESCRIBE GEODETIC_MEASUREMENTS
    
    Nom                  NULL     Type
    -------------------- -------- ------------
    MEAS_ID              NOT NULL NUMBER(7)
    MEAS_TYPE            NOT NULL VARCHAR2(2)
    MEAS_TEAM            NOT NULL VARCHAR2(10)
    MEAS_DAY             NOT NULL DATE
    MEAS_OBJ_ID          NOT NULL NUMBER(6)
    MEAS_STATUS                   VARCHAR2(1)
    MEAS_COMMENT                  VARCHAR2(150)
    MEAS_DIRECTION                VARCHAR2(1)
    MEAS_DIST_MODE                VARCHAR2(2)
    MEAS_SPAT_ID         NOT NULL NUMBER(7)
    MEAS_INST_ID                  NUMBER(7)
    MEAS_DECALAGE                 NUMBER(8,5)
    MEAS_INST_HEIGHT              NUMBER(8,5)
    MEAS_READING         NOT NULL NUMBER(11,5)
    MEAS_CORRECT_READING          NUMBER(11,5)
    MEAS_HUMID_TEMP               NUMBER(4,1)
    MEAS_DRY_TEMP                 NUMBER(4,1)
    MEAS_PRESSURE                 NUMBER(4)
    MEAS_HUMIDITY                 NUMBER(2)
    MEAS_CONSTANT                 NUMBER(8,5)
    MEAS_ROLE                     VARCHAR2(1)
    MEAS_INSERT_DATE              DATE
    MEAS_LAST_UPDATE              DATE
    MEAS_INSERT_BY                VARCHAR2(50)
    MEAS_UPDATE_BY                VARCHAR2(50)
    MEAS_TILT_MODE                VARCHAR2(4000) 
    Explain plan (I'm not familiar with the EXPLAIN PLAN command):
    --------------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------------
    
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |                        |     1 |    19 |   256  (10)| 00:00:02 |
    |   1 |  SORT AGGREGATE               |                        |     1 |    19 |            |          |
    |   2 |   NESTED LOOPS                |                        |    75 |  1425 |   256  (10)| 00:00:02 |
    |*  3 |    TABLE ACCESS FULL          | OPERATION_MEASUREMENTS |    75 |   600 |    90  (27)| 00:00:01 |
    |   4 |    TABLE ACCESS BY INDEX ROWID| GEODETIC_MEASUREMENTS  |     1 |    11 |     3   (0)| 00:00:01 |
    |*  5 |     INDEX UNIQUE SCAN         | MEAS_PK_2              |     1 |       |     2  (50)| 00:00:01 |
    --------------------------------------------------------------------------------------------------------
    How to optimize this query?

    Thank you.

    Yann.

    Looks like you are missing an FK index on the table in the middle, for the FK that goes to OPERATIONS.

    Currently this:

    WHERE ogm_op_id = 0;
    

    is evaluated by a full table scan followed by a filter operation. Assuming OGM_OP_ID is reasonably selective, an index on OGM_OP_ID might do the trick here.
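    A minimal sketch of that index (the index name is illustrative):

    CREATE INDEX ogm_op_id_ix ON operation_measurements (ogm_op_id);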

  • query optimization

    Hello guys,

    I wrote a query, but the execution time is too long. Could you help me optimize it, please?

    Kind regards

    Try changing your return type from the root object to the virtual machine object, VMWVirtualMachine,

    and the reference path from the root to VMWModel, virtualCenters, virtualMachineCollection, virtualMachines.

    A property on or below the return object would then be your path for the WHERE clause.

    There is no point in returning anything below VMWVirtualMachine, unless you want to create 1 row per object in a 1-to-many relationship with the root object, such as processors or logical disks.

    In that case, a WHERE-clause comparison whose path has to walk back up the object tree from the return object triggers additional useless work; it can work, but not very efficiently in large environments.

    Note that aggregations will cause some performance issues with large data sets, and how they are created can alter performance.  Always test performance first without filters or aggregations, then add the filters, then add the aggregations.

    Aggregations are generally limited in capability in queries, in any case.  In general, I remove the aggregation, do the filtering here, then feed the output to a WCF service for whatever aggregation(s) are needed.

  • The ultimate way to optimize SQL statements?

    Hi all

    I hope this finds you in a good mood.

    I have a few SQL statements I need to optimize.

    In your experience, what is the best way to find all the 'true' bottlenecks in the SQL itself?

    This is my workflow:

    Step 1:

    Explain plan: find all the full table scans right up front.

    Step 2:

    Run the SQL through Grid Control, where you can just 'run' the SQL.

    Step 3:

    Create a SQL Tuning Set with the SQL code that has just been executed.

    Step 4:

    Run SQL Access Advisor on the SQL code that has just been run.

    Step 5:

    Run the SQL through SQL Tuning Advisor.

    I know there are several ways to do it, but what is your workflow?

    Thanks for any help...

    Thank you

    Lady Allora.

    I was approached by one of the devs, and he asked me to make it run faster than 5 minutes.

    OK - but it is an ARTIFICIAL target and is NOT necessarily realistic or feasible.

    This makes it a RESEARCH project. You look, you analyze, you try to determine whether this goal is realistic or even possible.

    There is only one table that has 3 million rows in it, and they do a "union all" join on tables that have a few hundred thousand rows in them.

    This means that EACH of the queries in this "union all" can be considered as a separate entity. You can take each of the queries ONE AT A TIME and determine:

    1. how many rows it can return

    2. what execution plan is used

    3. what indexes and statistics are available. Are there stats? Are they up to date? How were they collected?

    4. whether anything in the execution plans seems out of the ordinary - perhaps an index is used when it shouldn't be, perhaps an index is NOT used when it could be, perhaps a hint is needed

    All of this is LOOKING. As others have said, if you know how many rows are in the table, how many need to be returned, and what indexes are available, you can estimate the MINIMUM time it would take to get them. No matter what you do, you will not get any faster than that.

    And for a UNION ALL operation over four queries, you can add those 4 estimates to get the MINIMUM time it might take. That becomes your LOWEST POSSIBLE response time.

    If it is greater than the target, your goal is NOT possible.

    OK, so many people say over and over again that the presence of a FULL TABLE ACCESS in the explain plan is not bad.

    NO! That is NOT what they say. You are misinterpreting what they tell you.

    Full scans are NOT NECESSARILY bad. Even 14 of them are NOT NECESSARILY bad. One or more of them COULD BE bad, or even all of them could be bad.

    They are telling you not to assume that an FTS in the plan is bad. Don't ignore those scans - do the research/checking on them - but do NOT assume that you must try to eliminate one or all of them.
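    One way to examine each branch one at a time is to run it alone and pull the actual row-source statistics; a sketch (the table and predicate are illustrative stand-ins for one branch of the UNION ALL):

    SELECT /*+ gather_plan_statistics */ COUNT (*)
      FROM big_table           -- illustrative: one branch's driving table
     WHERE some_col = :1;      -- illustrative predicate

    SELECT *
      FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR (NULL, NULL, 'ALLSTATS LAST'));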

  • Installation and optimization of Windows 8 with RAID

    Original title: Windows 8 installation on a Dell Inspiron 5523.

    Hello

    Recently, I bought a Dell Inspiron with a 32 GB SSD and a 500 GB hard drive.

    I am trying to do a fresh install of Windows 8.

    Could someone please give me step-by-step instructions for installation and optimization with RAID?

    Regards

    Hi Venkatavaradhan,

    1. What is the current operating system installed on the PC?

    2. What exactly happens when you try to install Windows 8?


    Refer to this article on how to install Windows 8:
    How to perform a clean installation of Windows
    http://Windows.Microsoft.com/en-CA/Windows-8/clean-install

    For installing Windows and optimizing with RAID, please post your question in the TechNet forum.

    Please visit the link below to find a community that will provide the support you want.

    http://social.technet.Microsoft.com/forums/en-us/w8itproinstall/threads

    Hope this information helps. If you have other Windows-related queries, we will be happy to help you.

  • Oracle 11g - force rule-based optimizer for a SQL ID

    We have a vendor application, and the cost-based optimizer works very well for tens of thousands of queries, but there is one query that does not work well.  It joins two tables and uses a SELECT MAX.   When the inner table contains thousands of rows, it no longer uses the index and runs for a very long time, impacting performance.  It works fine using the rule-based optimizer.

    While the vendor works on solving this problem, is it possible in the meantime to force a single specific SQL ID to use the rule-based optimizer?  This has to be done on the database side, as we cannot change the query itself.

    Get the plan you want in a non-prod or prod environment.

    Use a sql profile to apply this plan to your prod environment.

    https://OraStory.WordPress.com/2015/08/11/SQL-plan-management-choices/
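    A sketch of one of the choices from that article - loading an accepted SQL plan baseline from the cursor cache (the sql_id and plan_hash_value are placeholders for your real values):

    DECLARE
      n PLS_INTEGER;
    BEGIN
      n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE (
             sql_id          => 'abcd1234efgh5',  -- placeholder sql_id
             plan_hash_value => 1234567890);      -- placeholder: the good plan
    END;
    /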

  • How to avoid ORA-01722 in collection queries?

    I'm in the middle of an 11g to 12c migration and reproducibly face ORA-01722 (invalid number) errors with selects on existing collections.

    Never paid any attention to it, but good to know that this can happen.

    So this one potentially gets ORA-01722 invalid number:

    select to_number(coll.c007) c007
      from my,apex_collections coll
     where 
          my.id                 =  coll.n002  -- my_id
      and coll.collection_name = 'TUNE_COLL'
      and to_number(coll.c007) = 1  -- first level 
    
    

    While this isn't:

    select to_number(c007) FROM (
    select c007
    from my,apex_collections coll
    where 
          my.id                 =  coll.n002  -- my_id
      and coll.collection_name = 'TUNE_COLL'
    --  and to_number(coll.c007) = 1  -- first level
    ) where  to_number(c007) = 1  -- first level
    

    What is nasty is that the responsive theme page with 25 report regions goes berserk when the error messages come in, and so on.

    I read the following articles, and it really seems that it comes and goes...

    Robert Schaefer's blog: Spooky "ORA-01722: invalid number" errors when using APEX collections

    http://deneskubicek.blogspot.fi/2013/03/Apex-collections-and-joins.html

    But I did not find any quick way to fix all the queries on collections. Somehow, this seems to have happened because I exceeded the n001-n005 number columns and started using the vc c001... columns for numbers.

    So what is the best way of doing things?

    1.) Rework the collections so that numbers always go into the n001... number columns instead of c001... (a time-consuming task)

    2.) Wrap extra selects around the collection selects... is that enough to convert the vc/character values to numbers?

    3.) Or any other cool tip - are there, for example, views on collections?

    Paavo /rgrds

    Paavo wrote:

    The error comes randomly, and maybe it has something to do with already existing 'temporary' collections.

    They may have something odd in c007, under a different collection_name, which cannot be to_numbered.

    This has come up before, and it is indeed the cause. The optimizer applies the predicates in a different order than was used earlier. That is, instead of first limiting the collection rows by collection name to the TUNE_COLL collection and then filtering by the converted C007 value, it applies the C007 clause first, which hits collection rows where the C007 value cannot be converted to a number. This tends to occur during or after an APEX or database upgrade, as in this case.

    So the fix is beefy - go through all selects on collections and fix the WHERE conditions so that they apply only to the specific collection_name?

    Surely that is necessary on all collection access anyway? And as explained above, it will not necessarily solve the problem, because the optimizer can change the execution plan and apply the predicates in any order.

    Why have you exceeded the n001-n005 number columns? What values are held in these? Preferably, I would suggest using those columns to store numeric keys that are used in joins, putting numeric values that are only referenced in query projections in the character columns c001-c050 (and of course doing the same for date values). Then you can create a view over the collection that explicitly converts the returned column values to numbers or dates. Using views on top of collections is good practice in any case, because it allows explicit column names to be used in queries rather than a lot of impenetrable n00x and c0xx, which saves developers from sitting there thinking "which column is the postal code again?"
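    A minimal sketch of such a view (the view and output column names are illustrative):

    CREATE OR REPLACE VIEW tune_coll_vw AS
    SELECT n002             AS my_id,       -- numeric join key kept in an n00x column
           TO_NUMBER(c007)  AS tree_level,  -- explicit conversion in one place
           c001             AS postal_code
      FROM apex_collections
     WHERE collection_name = 'TUNE_COLL';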

  • RH2015 - TOC loading and search optimization

    Hello

    Environment: RH 2015 12.0.2.384, generating merged WebHelp - nearly 40 RH projects in total and 7000 topics.

    Issue: TOC loading and search are very slow, and I need to find a way to optimize them. However, I do not understand how they work, so I hope someone on the forum can give me some advice.

    Current situation:

    I tested this a few weeks ago - results below.

    First load of the TOC:

    • Local computer:
      • IE - 11 to 25 seconds
      • Chrome - 2 seconds
    • Client-facing site:
      • IE - 20 seconds
      • Chrome - 10 seconds

    First search:

    • Local computer:
      • IE and Chrome - 2 seconds.
    • Client-facing site:
      • IE and Chrome - 25 seconds.

    In the search settings, I enabled highlighting of search results, showing the context, hiding the rank column, and showing the total number of search results. (No substring search.) Enabling the AND option by default does not seem to change the speed. Because we include a PDF version of each online help module, I also tried excluding the PDF files from the search, but that did not make a difference either.

    (I also tried generating the same content in HTML5 format; the TOC did load faster, but searches take 2x longer...)

    Unfortunately our client-facing site is password-protected, so I can't give you the link.

    My questions:

    1. As far as I can tell, a bunch of JS files in the wh* folders are responsible for the TOC and search - am I on the right track? Does anyone know exactly how these files are generated? I opened the Chrome developer console while searching for a term, and it looks like each JS file takes only a few milliseconds to load, but there are so many files that it adds up to nearly half a minute.
    2. If I reduced the number of RH modules (for example, 20 instead of 40), would it make a difference, or would I end up with basically the same number of JS files altogether, just put in different folders?
    3. If I reduced the number of topics, would that help?
    4. Is there something I can do to speed up the loading of content in IE? There is a big difference compared to Chrome, and most of our clients use IE...

    Thank you very much in advance!

    There are several factors at play here that affect performance: the number of topics, merged help, and the performance of the server.

    In the WebHelp output settings, you can select an option to optimize for speed. Try setting this option to "Local area network", even if you publish to the web. This option controls whether you get more but smaller wh* files, or fewer but bigger wh* files. Network speeds have increased enough that you can use the local area network option. And there's an added bonus: each file that needs to be loaded requires a download from the server, and a lot of downloads of many small files is much slower than fewer calls for larger files. This is because for each file, the server must read the file and send it to the client. If the server can do this with larger files, you benefit from the better internet connections of the past 10 years. Having a fast internet connection does not resolve the issue of having to fetch many small files.

    Second, merged help adds a huge overhead to download times. Basically, RoboHelp loads the main project, and then it performs all the loading steps for each merged project as well. This slows down the process enormously. This is due to the scripts in the merged help, but also because of the issue I described before. Reducing the number of merged projects, by moving to fewer and larger projects, will help as well.

    Third, you can reduce the content to speed up searches. For search, the number of topics is irrelevant; what matters is the amount of content. When you have fewer topics with more content, the search database will be the same size as with more topics holding less content. Of course, fewer topics also means fewer table of contents entries, which will speed up the loading of the TOC. Search is affected more by the number of merged projects. Don't forget: many merged projects means a lot of overhead.

    For the client-facing site, your computer must download the files from the remote server. For local help, the browser can access all the content immediately on disk. If your server does not cache, you get a lot of overhead on the server side, slowing things down. And if the server has a slow upload connection (the server's max upload is your max download), that will also slow down the process. Especially if several people are trying to access the content at the same time, the server must handle a large number of requests. But for any moderately modern server, the requests should not be a problem.
