Performance degrades after gathering statistics

Oracle 11gR2 on OEL 5

We have several very large tables (40 million rows and more), and we recently gathered stats on them, which degraded our performance. The optimizer began doing full table scans rather than using the indexes. The same queries run very well in other environments; the only difference is the stats collection. Logically, performance should be better after gathering statistics, but instead it is worse.

I ran a 10053 trace on the query and I can see that the cardinality and cost estimates are much higher in the badly-performing environment. As a test, I restored the old stats in that environment and everything went back to normal: the query runs quickly again. Note that the restored stats had been gathered more than a year ago. Shouldn't we gather statistics regularly on very large tables?
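For reference, the restore can be done roughly like this (a hedged sketch: the owner, table name, and timestamp are placeholders, and it assumes the old stats are still within the statistics history retention window, or were exported to a stats table earlier):

    BEGIN
      -- restore the statistics that were in effect at the given point in time
      DBMS_STATS.RESTORE_TABLE_STATS(
        ownname         => 'APP_OWNER',   -- hypothetical schema
        tabname         => 'BIG_TABLE',   -- hypothetical table
        as_of_timestamp => TO_TIMESTAMP('2011-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'));
    END;
    /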

Thank you.

Hello

The default stats-gathering behavior is to determine the number of histogram buckets (i.e. whether a histogram is needed at all and, if so, how accurate it must be) automatically, depending on the distribution of the column's data and its usage in different types of predicates. This means that in many cases histogram collection is almost a random process: one run you get a histogram, the next run you don't, even if almost nothing has changed. This is (unfortunately) the normal behavior.

I couldn't quite get to the bottom of your question: the optimizer estimates all seem to be correct in the second case, so it is not clear to me why that plan is so bad (there are also some other oddities, such as 40G rows supposedly returned by one of the nested loops, or a missing cardinality estimate for another nested loop). But in any case, histograms and bind variables do not mix, so you may be able to solve your problem simply by specifying method_opt => 'for all columns size 1' to disable histograms for this table.
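A minimal sketch of that fix (the owner and table names are placeholders, not taken from the original post):

    BEGIN
      -- SIZE 1 = one bucket per column, i.e. no histograms
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => 'APP_OWNER',                -- hypothetical schema
        tabname    => 'BIG_TABLE',                -- hypothetical table
        method_opt => 'FOR ALL COLUMNS SIZE 1',
        cascade    => TRUE);
    END;
    /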

Best regards
Nikolai

Tags: Database

Similar Questions

  • Estimated cardinality changed after gathering statistics

    Hello

    Grateful if anyone can help explain why there is one extra row in the estimated cardinality after gathering stats.
    Oracle 10.2.0.4.0 on Windows

    SQL> select * from t
      2  where id between 6000 and 7000;

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1601196873

    --------------------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |      |  1001 | 13013 |     6   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| T    |  1001 | 13013 |     6   (0)| 00:00:01 |
    --------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       1 - filter("ID">=6000 AND "ID"<=7000)

    Note
    -----
       - dynamic sampling used for this statement

    SQL> exec dbms_stats.gather_table_stats(user, 't');

    PL/SQL procedure successfully completed.

    SQL> select * from t
      2  where id between 6000 and 7000;

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1601196873

    --------------------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |      |  1002 |  3006 |     6   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| T    |  1002 |  3006 |     6   (0)| 00:00:01 |
    --------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       1 - filter("ID">=6000 AND "ID"<=7000)

    SQL>

    Hi Hemant

    I don't think it's right that dynamic sampling wouldn't have kicked in for the first run.

    If you carefully check the OP's first post, you will notice the text "dynamic sampling used for this statement" in the output of the first run. Therefore, dynamic sampling was used.

    Best regards
    Chris Antognini

    Troubleshooting Oracle Performance, Apress 2008
    http://top.Antognini.ch

  • Performance degradation after changing filesystemio_options from none to setall

    Hi all

    We are dealing with performance degradation after changing filesystemio_options from none to setall on the two servers listed below.

    Red Hat Enterprise Linux AS release 4 (Nahant Update 7), kernel 2.6.9-55.ELhugemem (32-bit)
    Red Hat Enterprise Linux Server release 5.2 (Tikanga), kernel 2.6.18-92.1.10.el5 (64-bit)

    We are seeing a lot of disk activity. We expected filesystemio_options = setall to improve performance, but it is degrading it. We are getting complaints about slowness.

    Please let me know if we need to set anything else along with this, such as optimizer parameters (e.g. optimizer_index_cost_adj, optimizer_index_caching).


    Help, please.

    Hi Suraj,


    You changed filesystemio_options from none to setall, so the most likely reason for the performance degradation is the introduction of direct I/O. Direct I/O skips the filesystem buffer cache, so Oracle reads directly from disk into the database buffer cache. On a system where direct I/O is not implemented, which is what you had until you recently changed this setting, it is likely that you had an undersized database buffer cache, but that was OK, because many (most?) of your database reads were effectively serviced by the O/S filesystem buffer cache. By introducing direct I/O you wiped out the ability of the O/S to service any physical I/O from the filesystem buffer cache. This means that every miss on the database buffer cache turns into a real, physical, spin-the-disk, move-the-head I/O. And that has performance implications.

    OK, end of speculation. Now, assuming that what I described above is actually happening, what should you do? Why is direct I/O slower than buffered I/O? Shouldn't its performance be higher?

    Well, when you have a system that uses buffered I/O and you switch to direct I/O, you will almost always have to increase the size of the database buffer cache. The problem is that you have taken away a chunk of memory from the O/S, which it was using to buffer your I/O and avoid physical I/O. So now you have to make up for it by increasing the size of the database buffer cache. You can do this without buying more memory for the box, because the O/S no longer needs to use as much memory for filesystem buffers.
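    If you do go down this path, the change might look roughly like the sketch below (the cache size is a placeholder you would choose from your own memory budget; filesystemio_options is a static parameter, so an instance restart is required):

    -- static parameter: takes effect after an instance restart
    ALTER SYSTEM SET filesystemio_options = setall SCOPE = SPFILE;
    -- give Oracle the memory the O/S no longer needs for filesystem buffers
    ALTER SYSTEM SET db_cache_size = 4G SCOPE = SPFILE;   -- example value only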

    So, is it better to switch? Well, on the whole, it is a good idea to use direct I/O and give Oracle a larger database buffer cache, for the simple reason that (especially on a server dedicated to being an Oracle database server) Oracle has much more sophisticated caching algorithms and a better understanding of the different types of data and how to cache them, and so should be able to make more effective use of the memory than the (relatively) brain-dead algorithms of the kernel and the filesystem cache.

    But, again, everything comes down to this:
    What problem are you trying to solve? Do you have any I/O-related issues? Do you have a compelling reason to implement direct I/O? Rule #1 is "if it ain't broke, don't fix it." Have you violated rule #1? :-)

    Finally, since you are on Linux, you can use 'free' to see how much memory is in the box, how much is free, and how much is dedicated to the filesystem buffer cache. This answer is already quite long, so I won't go into details; but if you are not familiar with the command, the results can be misleading. Read the man page and make sure you understand the output before drawing any conclusions from it.

    Hope that helps,

    -Mark

  • Performance degraded after replacing the hard drive

    Original title: WEI drops to 1

    Hi all

    The hard drive failed, and the computer guy down the street replaced it with another one.  After the installation of the new HD, my computer runs very slowly.  I checked the WEI graphics/gaming score: it dropped to 1 from 5.6 on the old HD partition.   Back at the store, the guy doesn't seem to know anything.
    It's an HP laptop, Windows 7, Intel Core i7, 8 GB of RAM.
    Help, please!
    Thank you.

    N2K

    Did you re-install the video driver?

    Most often, when a device does not work, it is because the driver is damaged. Re-installing a fresh copy should help.

    http://windows.microsoft.com/en-us/windows7/update-a-driver-for-hardware-that-isnt-working-properly?SignedIn=1

  • Specifying the user name for Windows server performance monitor statistics collection?

    When using OATS load tests to collect Windows server statistics (perfmon stats), the user ID that the load test server runs under must be a member of the Performance Monitor Users group on all of the Windows servers I'm monitoring. I have a standard user ID that I use for this type of monitoring; it is automatically configured on all servers in our company. Where can I specify this user ID and password in OATS load tests?

    The only reference I can find in the documentation says to change the user ID and password in the Oracle Load Testing Agent Service. I don't think that's what I want. And in any case, there is no service named 'Oracle Load Testing Agent Service' installed on my server.

    Thanks for the help!

    Hello Bob
    You should have a service called Oracle Application Testing Suite Agent Service on each server running a load test agent. If you don't, there is a problem somewhere and you won't be able to run a load test.
    This is the service you need to run as the specific user (go to Control Panel -> Services -> change the service to log on under your username).

    Let us know the result.
    Cheers
    JB

  • Clarification on gathering statistics

    Hello

    My query performs very slowly; it involves partitioned tables and uses parallelism and indexes.

    The LAST_ANALYZED column shows 11-July (2 weeks ago).

    Should statistics be gathered every day?

    How long will the gathering take?

    S

    Also check the STALE_STATS column in the statistics dictionary views (ALL_TAB_STATISTICS, ALL_IND_STATISTICS).

    See: Managing Optimizer Statistics

    This indicates when there have been enough DML operations on the object to warrant a statistics refresh. The nightly auto stats job should prioritize updating statistics on those objects.
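    For example, a quick check against one of those views (a hedged sketch; the schema name is a placeholder):

    SELECT table_name, stale_stats, last_analyzed
    FROM   all_tab_statistics
    WHERE  owner = 'MY_SCHEMA'        -- hypothetical schema
    AND    stale_stats = 'YES';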

  • Performance degradation when increasing the number of queues in the same queue space

    Hello

    We run Tuxedo 8.1, 32-bit, with patch level 258 in our Windows Server 2003 based production environment.

    We had to add a few more queues (about 10) to our existing queue space. However, we have noticed a degradation in message rates. It takes about 10% longer to enqueue and process messages than before these new queues were introduced (even though most of these new queues are not in use).

    How can we improve this performance in Tuxedo?

    Kind regards
    Asim

    Hey Asim,

    How big is this degradation? I can imagine a very small slowdown simply from adding additional queues, but I would expect it to be fairly negligible. Perhaps the TMQUEUE server is simply busier because there are more queues, and thus more enqueue/dequeue operations. If that is the case, you can run multiple TMQUEUE servers to increase the throughput of the queue space.

    Kind regards
    Todd Little
    Chief Architect, Oracle Tuxedo

  • Performance degrades after browsing for a while

    After a certain period of internet browsing, all of a sudden my browsing becomes slow and I'm not able to right-click; if I right-click, a blank tab opens with no data. I use Firefox 3.6.24.

    Please see the help article on what to do when Firefox is slow: How to make it faster.

    If this answer solved your problem, please mark the question as "solved". That way you will help others in a similar situation.

  • Performing aggregations on collections

    Hello

    I have a nested table of records inside a PL/SQL procedure with values of the form:

    X 5 4

    X 5 3

    I have a table with a value of the form:

    C1 C2 C3

    X  5  6

    I need to update the table with the average of the values for X, so that the new data in the table would be:

    C1 C2 C3

    X  5  3.5

    I can't create a new table to store the temporary result, nor am I allowed to create an object or collection type outside of PL/SQL. So I was using collections as an alternative.

    Please let me know how this is possible, or whether I'm doomed because of my lack of privileges.

    Thank you

    There is no easy way. You need to loop through it. You can't use a PL/SQL collection type in SQL.

    SQL> select * from aa;

    COL1       COL2       COL3
    ---------- ---------- ----------
    x                   5          4

    SQL> declare
      2    type tbl is table of aa%rowtype index by pls_integer;
      3
      4    l_arr tbl;
      5    l_col1_val varchar2(10) := 'x';
      6    l_col2_sum integer := 0;
      7    l_col3_sum integer := 0;
      8    l_cnt integer := 0;
      9  begin
     10    l_arr(1).col1 := 'x';
     11    l_arr(1).col2 := 5;
     12    l_arr(1).col3 := 3;
     13
     14    l_arr(2).col1 := 'x';
     15    l_arr(2).col2 := 5;
     16    l_arr(2).col3 := 4;
     17
     18    for i in 1 .. l_arr.count
     19    loop
     20      if l_arr(i).col1 = l_col1_val then
     21        l_cnt := l_cnt + 1;
     22        l_col2_sum := l_col2_sum + l_arr(i).col2;
     23        l_col3_sum := l_col3_sum + l_arr(i).col3;
     24      end if;
     25    end loop;
     26
     27    update aa
     28       set col2 = l_col2_sum / l_cnt
     29         , col3 = l_col3_sum / l_cnt
     30     where col1 = l_col1_val;
     31  end;
     32  /

    PL/SQL procedure successfully completed.

    SQL> select * from aa;

    COL1       COL2       COL3
    ---------- ---------- ----------
    x                   5        3.5

    SQL>

  • Regarding analyzing and gathering statistics

    Hello

    Why is gathering statistics necessary in the DB?

    Thank you

    Hello Balmo-Oracle

    You can start by reading this document, for example: Managing Optimizer Statistics

    or this: http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-stats-concepts-110711-1354477.pdf

    The best thing is to look it up yourself on Google.

    Best regards, David

  • Gathering statistics with the cascade option is slow

    Hi all.

    The database is 11.2.0.3 on a Linux machine.

    I issued the following command, but the session was rather slow.

    The table is about 50 GB in size and has 3 indexes.

    I specified 'degree => 8' for parallel processing.

    When gathering statistics on the table, parallel slaves were invoked, and gathering the table statistics finished fairly quickly.

    However, when it moved on to gathering statistics on the indexes, only one active session was invoked, so the 'degree => 8' option was ignored.

    My question is:

    Do I need to use dbms_stats.gather_index_stats instead of the 'cascade' option in order to gather statistics on indexes in parallel?

    exec dbms_stats.gather_table_stats(ownname=>'SDPSTGOUT', tabname=>'OUT_SDP_CONTACT_HIS', estimate_percent=>10, degree=>8, method_opt=>'FOR ALL COLUMNS SIZE 1', granularity=>'ALL', cascade=>TRUE);
    Thanks in advance.

    Best regards.

    Hello

    This could happen if the indexes were created as NOPARALLEL. Try redefining them with DOP = 8, as sketched below, and see if that helps (run a quick test to verify this before doing any expensive DDL).
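    A hedged sketch of that check (the index name is a placeholder):

    -- check the current DOP of the table's indexes
    SELECT index_name, degree
    FROM   user_indexes
    WHERE  table_name = 'OUT_SDP_CONTACT_HIS';

    -- raise the DOP attribute (changing the attribute alone needs no rebuild)
    ALTER INDEX my_index PARALLEL 8;    -- hypothetical index name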

    Best regards
    Nikolai

  • Execute "dbms_stats.gather_table_stats in the start = &gt; run until the peak hours = &gt; pause = &gt; restart from where he was arrested after the rush hour" mode.

    Guys,

    Please let us know if it is possible to run dbms_stats.gather_table_stats in "start => execute until peak hours => pause during peak hours => restart from where it stopped after peak hours" mode.

    I have a partitioned table with a mammoth number of records that requires full table statistics collection once every 3 months. Stats gathering is so expensive that it takes almost 2 days to complete.  Our goal is to ensure that no SQL running during peak hours (15:00 to 22:00) is affected by the statistics collection. That is why we would like the stats collection to run in the above mode.

    Ideas and suggestions would be greatly appreciated.

    -Bugs

    Like others, I wonder why full stats are required.

    Do you have nightly stats gathering set up?

    Check dba_tab_modifications for the changes made to the table. The nightly stats job decides from there whether stats are needed, so if a partition has less than a 10% change, why re-gather its stats?  There are a few other things going on behind the scenes, but that's close enough.  Even if you set up an additional stats job and kick it off, it will simply pick the next object that hasn't been done yet, risking several jobs trying to do the same thing.

    There are probably 10 different ways to do this depending on how good you are with PL/SQL, but you could generate your gather commands with something like the query below, then open multiple SQL*Plus windows and divide the output into sections, so that you are gathering statistics on several pieces at the same time.    Failing that, you could write an autonomous PL/SQL procedure that analyzes whatever partition is passed in, and call it any number of times, passing in the partitions in a loop; but you get the idea: gather statistics on more than one partition at the same time.

    Change the values of table_owner and table_name.

    exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

    select command from
    (
      select (nvl(a.updates,0) + nvl(a.inserts,0) + nvl(a.deletes,0)) * 100
             / nvl(b.num_rows,1) as change,
             'exec DBMS_STATS.GATHER_TABLE_STATS(' ||
             '''' || 'TABLE_OWNER' || '''' || ',' ||
             '''' || 'TABLE_NAME'  || '''' ||
             ', granularity => ' || '''' || 'PARTITION' || '''' ||
             ', partname => '    || '''' || a.partition_name || '''' ||
             ', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE' ||
             ', method_opt => '  || '''' || 'FOR ALL COLUMNS SIZE AUTO' || '''' ||
             ')' as command
      from dba_tab_modifications a, dba_tab_partitions b
      where a.table_name = 'YOUR_BIG_PARTITIONED_TABLE'
      and a.table_name = b.table_name
      and a.partition_name = b.partition_name
      and nvl(a.updates,0) + nvl(a.inserts,0) + nvl(a.deletes,0) > 0
    )
    where change >= 10;

  • Gathering statistics on partitioned and non-partitioned tables

    Hi all
    My DB is 11.1

    I find that gathering statistics on partitioned tables is really slow.
    TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE LAST_ANALYZED PARTITIONED COMPRESSION
    ------------------------------ ---------- ---------- ----------- ------------- ----------- -----------
    O_FCT_BP1                        112123170     843140    11212317 8/30/2011 3:5            NO                    DISABLED
    LEON_123456                      112096060     521984    11209606 8/30/2011 4:2           NO                   ENABLED
    O_FCT                           115170000     486556      115170 8/29/2011 6:3            YES         
    
    SQL> SELECT COUNT(*)  FROM user_tab_subpartitions
      2  WHERE table_name = 'O_FCT'
      3  ;
    
      COUNT(*)
    ----------
           112
    I used the following script:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => user,
                                    tabname          => 'O_FCT',
                                    method_opt       => 'for all columns size auto',
                                    degree           => 4,
                                    estimate_percent =>10,
                                    granularity      => 'ALL',
                                    cascade          => false);
    END;
    /
    It takes 2 minutes each to gather statistics for the first two tables, but more than 10 minutes for the partitioned table.
    Statistics gathering time represents a large part of the total batch time.
    Most of the batch jobs do full loads, in which case all partitions and subpartitions are affected, so we cannot gather stats for just specified partitions.

    Does anyone have experience with this? Thank you very much.

    Best regards
    Leon

    Published by: user12064076 on August 30, 2011 01:45

    Hi Leon

    Why don't you gather statistics at the partition level? If your data partitions don't change after a day (date range partitioning, for example), you can simply gather at the partition level, as sketched below:

    GRANULARITY => 'PARTITION' for the partition level, and
    GRANULARITY => 'SUBPARTITION' for the subpartition level.

    You don't need to gather global stats every time.
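    A hedged sketch of a partition-level gather for the table above (the partition name is a placeholder):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname     => user,
                                    tabname     => 'O_FCT',
                                    partname    => 'P_20110830',   -- hypothetical partition name
                                    granularity => 'PARTITION',
                                    degree      => 4,
                                    cascade     => FALSE);
    END;
    /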

    Published by: user12035575 on August 30, 2011 01:50

  • Premiere Pro CC 2015.0.1 performance degradation

    I may have figured out the answer, but perhaps asking here anyway will help others...

    Background: running Windows 7 Pro on an Intel Core i7 at 4.0 GHz, 16 GB of RAM; PNY/Nvidia Quadro K2200 with 4 GB of RAM; three 2 TB Western Digital Black HDs set up for performance (as I read somewhere...).

    The project is really minimal. All Canon 5D Mk II (*.MOV) clips, 6 to 10 minutes each. Currently eight sequences, plus one to three separate clips, about 14 minutes at the longest. Audio comes from the clips only.

    All the clips in each new sequence get: Motion/Scale 105 and Motion/Rotation -1.0; Lumetri Color; Unsharp Mask; Volume control; Vocal Enhancer; and a low-pass filter. Some of the specific parameters (in particular, Lumetri Color and the low-pass filter) are adjusted per clip within a single sequence.

    As I continue to add footage and edit each sequence, I see a gradual but profound performance degradation, to the point that the last two sequences can't play at 'Fit' and '1/2 resolution' without stopping every second or two, sometimes for four or five seconds.

    All the sequences have the infamous "red bar".

    As soon as I 'Render Effects in Work Area', the red bar turns green and I seem to be good to go.

    So, questions:

    (1) If I know I'll be applying the same specific effects to all the clips in each new sequence, should I just do that, render the effects right then, before making all the detailed changes, and go get a cup of coffee until the new sequence has a green bar?

    I think I know the answer to that, but confirmation would be appreciated.

    (2) Of the effects I use on each clip, which ones hit performance so deeply: the effects on the video itself, or on the audio?

    Again, I think I know the answer to that, but once again confirmation would be appreciated.


    Then:


    (3) Does Premiere Pro keep track of *all* the other non-rendered sequences and clips, so that even though I'm working on only one sequence at a time, all the previous, unrendered sequences (whose presence is kept in memory, or in a file somewhere that is constantly being hit) drag performance down cumulatively, such that the more I work and add new sequences, the more performance degrades, until PP is simply unusable? Again, this is *not* a complex project, by any means. I do not understand why working on the eighth or ninth sequence causes such a deep performance hit when working on the first or second does not.

    If that is the case, a software revision might be in order, so that playback is only affected by the unrendered clip currently being worked on.

    1. No. Make your edits first, then add the effects when you're done editing and ready to export.

    2. The sharpening filter.

  • Gathering statistics online

    Hi all

    Is it possible to gather statistics for a schema while it is in use? When I try to analyze the tables of a schema, it shows that the statistics for the tables are locked. So, rather than analyzing the tables one by one, can I gather statistics for the schema's objects while the schema is still in use (i.e. with DML or SELECT statements being issued against those schema objects)?

    DB version: 10.2.0.4
    OS version: RHEL 5.8

    DB type: RAC


    Kind regards
    Imran Khan

    Imran khan says:
    Hi Mark,

    Why is there a question about the inability to update statistics? Someone has locked the statistics.

    How can we check whether statistics collection has been locked? Are you talking about at the DB level, or for a particular schema or object? As far as I know, in Oracle 10g statistics are gathered automatically (if I'm not mistaken).

    Kind regards
    Imran Khan

    Published by: imran khan on December 24, 2012 07:33

    Well, as SB said, just look at the relevant information in the documentation for your version.

    If you don't have local copies of the documentation for your version, you can find copies for recent versions here:

    http://www.Oracle.com/technetwork/indexes/documentation/index.html

    (or docs.oracle.com or tahiti.oracle.com)

    I recommend you look at the DBMS_STATS package and the [USER | ALL | DBA]_TAB_STATISTICS view(s).

    Those should have what you need.
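    For example, a hedged check for locked stats (the schema name is a placeholder):

    -- see which tables in the schema have locked statistics
    SELECT table_name, stattype_locked
    FROM   dba_tab_statistics
    WHERE  owner = 'MY_SCHEMA'          -- hypothetical schema
    AND    stattype_locked IS NOT NULL;

    -- then, if appropriate, unlock them at the schema level
    exec DBMS_STATS.UNLOCK_SCHEMA_STATS('MY_SCHEMA');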
